m-toolbox/test/LTPDA_training/topic5/TrainigSession_T5_Ex05.m @ 0:f0afece42f48: Import.
author: Daniele Nicolodi <nicolodi@science.unitn.it>
date:   Wed, 23 Nov 2011 19:22:13 +0100
% Training session Topic 5 exercise 05
%
% Fitting time series with xfit
%
% ao/xfit fits data with a full non-linear least-squares algorithm. Its
% main features are described here by fitting an up-chirped sine wave.
%
% 1) Load time-series data
% 2) Define the model to fit
% 3) Inspect the data to find an initial guess
% 4) Fit the data with ao/xfit starting from that guess
% 5) Check the results
% 6) Fit again with a different fitting algorithm (fminunc)
% 7) Check the results
% 8) Fit again using a Monte Carlo search
% 9) Check the results
% 10) Final fit with the full model (data bias included)
% 11) Check the results
% 12) Comments
%
% Trento, 01-03-2010
% Giuseppe Congedo
%
% $Id: TrainigSession_T5_Ex05.m,v 1.3 2011/05/13 15:13:12 ingo Exp $
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% 1) load tsdata

% load the test noise AO from file
data = ao(plist('filename', 'topic5/T5_Ex05_TestNoise.xml'));
data.setName;

% look at the data
iplot(data)
% iplot(abs(fft(data)))
% iplot(psd(data))
%% 2) set the model

% try a linearly chirped sine wave plus a constant bias
mdl = smodel('a + P1.*sin(2.*pi.*(P2 + P3.*t).*t + P4)');
mdl.setParams({'a', 'P1', 'P2', 'P3', 'P4'});
mdl.setXvar('t');
mdl.subs('a',5); % substitute the data bias with its known value, so only P1..P4 are fitted

%% 3) try to find an initial guess

% plot the tentative model against the data
mdlTry = mdl.setValues([4 1e-4 1e-5 0]);
mdlTry.setXvals(data.x);
iplot(data,mdlTry.eval)

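% A note on choosing the guess: the model is a linear chirp, whose
% instantaneous frequency is f(t) = P2 + 2.*P3.*t (the time derivative of
% the phase divided by 2*pi). Frequency estimates near the start and end
% of the record therefore constrain P2 and P3. A minimal sketch, where
% fStart and fEnd are hypothetical values read off the FFT/PSD plots
% above:
%
%   fStart = 1e-4;          % estimated frequency near t = 0  -> P2
%   fEnd   = 2e-2;          % estimated frequency near t = T
%   T      = data.x(end);   % record length
%   P2guess = fStart;
%   P3guess = (fEnd - fStart)./(2.*T);
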
%% 4) xfit from an initial guess

% fit plist
plfit = plist('Function', mdl, ...
  'P0', [4 1e-4 1e-5 0], ...   % initial guess for the parameters
  'LB', [2 1e-6 1e-6 0], ...   % lower bounds for the parameters
  'UB', [5 1e-2 1e-2 2*pi] ... % upper bounds for the parameters
  );

% do the fit
params = xfit(data, plfit)

%% 5) check results

bestMdl = eval(params);

% plot the results
iplot(data,bestMdl)
iplot(data-bestMdl)
iplot(psd(data-bestMdl))

% Comments.
% The fit is reasonable, but let's try a different fitting algorithm.
%% 6) xfit from an initial guess (change algorithm)

% fit plist
plfit = plist('Function', mdl, ...
  'P0', [4 1e-4 1e-5 0], ...    % initial guess for the parameters
  'LB', [2 1e-6 1e-6 0], ...    % lower bounds for the parameters
  'UB', [5 1e-2 1e-2 2*pi], ... % upper bounds for the parameters
  'algorithm', 'fminunc');      % fitting algorithm

% do the fit
params = xfit(data, plfit)

%% 7) check results

bestMdl = eval(params);

% plot the results
iplot(data,bestMdl)
iplot(data-bestMdl)
iplot(psd(data-bestMdl))

% Comments.
% The fit is reasonable: an improvement with respect to fminsearch.
% Let's now try a Monte Carlo search.

%% 8) xfit with a Monte Carlo search (bounds shrunk around the best fit found above)

% fit plist
plfit = plist('Function', mdl, ...
  'LB', [2.5 1e-5 1e-6 0], ...        % lower bounds for the parameters
  'UB', [3.5 1e-3 1e-4 2*pi], ...     % upper bounds for the parameters
  'algorithm', 'fminunc', ...         % fitting algorithm
  'MonteCarlo', 'y', 'Npoints', 1e4); % number of points sampled in the parameter space

% do the fit
params = xfit(data, plfit)

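% Conceptually, the Monte Carlo search draws starting points uniformly
% within the bounds, scores each one, and seeds the minimizer from the
% best. A plain-MATLAB sketch of the sampling step only (an illustration,
% not the actual LTPDA implementation):
%
%   LB = [2.5 1e-5 1e-6 0];
%   UB = [3.5 1e-3 1e-4 2*pi];
%   guesses = LB + rand(1e4, numel(LB)).*(UB - LB); % 1e4 uniform samples
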
%% 9) check results

bestMdl = eval(params);

% plot the results
iplot(data,bestMdl)
iplot(data-bestMdl)
iplot(psd(data-bestMdl))

% Comments.
% The fit is reasonable: no further improvement over the previous fit.
% Let's now fit the full model (data bias included), starting from the
% best-fit values found so far.

%% 10) xfit with the full model

% set the full model (including the data bias)
mdl = smodel('a + P1.*sin(2.*pi.*(P2 + P3.*t).*t + P4)');
mdl.setParams({'a', 'P1', 'P2', 'P3', 'P4'});
mdl.setXvar('t');

% fit plist
plfit = plist('Function', mdl, ...
  'P0', [5 3.01 7.27e-5 1.00e-5 0.325], ... % initial guess for the parameters
  'LB', [4 2.5 1e-5 1e-6 0], ...            % lower bounds for the parameters
  'UB', [6 3.5 1e-3 1e-4 2*pi], ...         % upper bounds for the parameters
  'algorithm', 'fminunc' ...                % fitting algorithm
  );

% do the fit
params = xfit(data, plfit)

%% 11) check results

bestMdl = eval(params);

% plot the results
iplot(data,bestMdl)
iplot(data-bestMdl)
iplot(psd(data-bestMdl))

% Comments.
% The fit is good, even for the full model (bias included).

%% 12) Comments

% Let's compare the fit results with the true parameters.
%
% The data were generated with the parameter set:
%   a  = 5
%   P1 = 3
%   P2 = 1e-4
%   P3 = 1e-5
%   P4 = 0.3
%
% The final fit gives:
%   chi2 = 1.03
%   dof  = 995
%   a  = 4.965     +/- 0.033
%   P1 = 3.018     +/- 0.047
%   P2 = 6.6e-5    +/- 3.1e-5
%   P3 = 1.0032e-5 +/- 3.1e-8
%   P4 = 0.334     +/- 0.043
% in good agreement with the expected values.
%
% Are the estimates biased? No: the normalized deviations
%   (p_fit - p_true)/dp =
%     1.1
%     0.39
%     1.1
%     1.1
%     0.80
% are all within about one standard deviation. Very good.
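
%% Reproducing the bias check
% The normalized deviations quoted above can be recomputed directly from
% the fitted values and uncertainties listed in the comments (plain
% MATLAB, no toolbox objects needed):

pTrue = [5     3     1e-4   1e-5      0.3  ];
pFit  = [4.965 3.018 6.6e-5 1.0032e-5 0.334];
dp    = [0.033 0.047 3.1e-5 3.1e-8    0.043];
abs(pFit - pTrue)./dp  % all deviations fall within about one sigma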