Lab 2.2 Using fitting as a tool, not like a fool

Introduction

In the last lab, we saw that we could use correlation to examine if there were significant linear relationships between 2 variables, and we examined both linear and non-linear fits of data.

Now you might be thinking: "Hot dog! I can fit my data to any function I want just by using the fit function. I will be able to explain everything in terms of an equation."

Unfortunately, the process of fitting has several real limits, for sensible reasons. In this lab, we will learn how to get the most out of fitting without falling into some of its more common traps.

How fitting is accomplished under the hood: local optimization

Suppose we were mapping the response field of a nerve fiber in someone's arm. We want to know the extent of the area on the skin that gives a response when it is tapped, perhaps with a small probe. We might want to fit the data to a normal (gaussian) distribution, because this function has a nice central peak that falls off with distance. Here is the data, in the file armdata.txt.

armdata = load('armdata.txt','-ascii');

figure;

plot(armdata(:,1),armdata(:,2),'k');

xlabel('Probe tap position (in mm)');

ylabel('Nerve impulses per second')

Now let's fit the data to a gaussian. We saw a gaussian function in Problem Set 1.2, Q1.3.1-2. (If you are only doing the labs and not doing the homework, take a minute to do Q1.3.1 of Problem Set 1.2 to get introduced to the gaussian function.) To do this, we are going to pull out the list of fitting options. There are a number of functions that we can use with variables of the fittype data type. To get a list, you can type

methods('fittype')

We will use the function fitoptions to read the fit options. This list of fitting options is a data type that we haven't seen before, called a structure:

gauss = fittype('a+b*exp(-((x-c).^2)/((2*d^2)))');

fo = fitoptions(gauss),

We will learn how to make our own structures down the road, but for now, all you need to know is that you can access the 'fields' of a structure using a period (dot). (Yes, strangely, the same symbol that you use to perform element-by-element arithmetic, but it does not have the same meaning when the variable is a structure.)
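For example, here is a quick illustration with a throwaway structure of our own (the variable and field names here are invented just for this example):

probe.position = 7.5; % create a structure called probe with a numeric field 'position'

probe.label = 'arm nerve 1'; % add a second field that holds text

probe.position % read a field back using the dot

The fit options variable fo works exactly the same way.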

We'll edit the 'StartPoint' field to give an initial guess of our values a, b, c, and d:

fo.StartPoint = [0; 0.5; 0; 0.5]; % guess a=0, b=0.5, c=0, d=0.5

gauss = setoptions(gauss,fo); % install the options in the fittype

[nerve,nervegof] = fit(armdata(:,1),armdata(:,2),gauss)

hold on

h = plot(armdata(:,1),nerve(armdata(:,1)),'k--');

Q1: What does the fit look like? The variable nervegof contains the "goodness of fit" parameters; it is a data structure with multiple entries, including the summed squared error between the data and the fit (the field sse). What is the summed squared error between the data and the fit in this case? Is the fit what you expected?
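(You can read that field with the same dot notation we just used for the fit options; for example:)

nervegof.sse % the summed squared error between the data and the fit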

Let's design our fit so we can watch it happen over time. What will happen is that the fitting algorithm will start off with our initial guess, and vary each parameter up and down slightly to see which direction reduces the error between the fit and the data. Then, it will adjust its guess slightly in that direction, and keep moving, until the error is as small as it can get. So that we can see the process happen, we'll pause with the pause command (which pauses for the number of seconds requested). Here is a function to perform the fitting while we watch (watchfithappen.m; please put it in a new package called +fit inside your +dasw package, since we will call it as dasw.fit.watchfithappen):

watchfithappen.m

function [finalfit, finalfitgof, errorovertime] = watchfithappen(x, y, fitt, N)
% WATCHFITHAPPEN - Watch a fit proceed
%   [FINALFIT, FINALFITGOF, ERROROVERTIME] = dasw.fit.watchfithappen(X, Y, FITT, N)
%
%   WATCHFITHAPPEN will display the progress of a fit as it
%   makes better and better guesses as to the fit parameters.
%
%   Inputs:
%     X, a column of X data
%     Y, the values of Y that were measured at those X values
%     FITT, a FITTYPE object describing the fit to be used
%     N, the number of iterations to watch
%   Outputs:
%     FINALFIT - A fit object with the best fit parameters
%     FINALFITGOF - The final goodness-of-fit parameters
%     ERROROVERTIME - The sum of squared errors over time

i = 1;
figure;
plot(x,y,'k');
hold on;
h = plot(x,y,'k'); % plot a second copy that we will erase on each update
xlabel('X');
ylabel('Y');
errorovertime = [];

while i<=N,
    fo = fitoptions(fitt);
    fo.MaxIter = i; % only let the fit run i iterations
    fitt = setoptions(fitt,fo); % set those options
    [finalfit,finalfitgof] = fit(x,y,fitt);
    errorovertime(i) = finalfitgof.sse;
    delete(h);
    h = plot(x,finalfit(x),'b--');
    pause(0.1);
    i = i + 1;
end;

Now let's try using it to fit our data:

[nerve,nervegof,errorovertime]=dasw.fit.watchfithappen(armdata(:,1),armdata(:,2),gauss,30);

Let's plot the error over time:

figure(100);

plot(errorovertime);

xlabel('Fit iteration number');

ylabel('Sum of squared error between fit and data');

Leave this window open for the moment.

Q2: Does the error reach an apparent minimum point during this parameter search procedure?

BUT, you protest, if it would just search for a value of c near 6, it would find a much better answer. Let's try it:

gauss2 = gauss;

fo = fitoptions(gauss2);

fo.StartPoint = [0; 0.5; 6; 0.5];

gauss2 = setoptions(gauss2,fo);

[nerve2,nervegof2,errortime2]=dasw.fit.watchfithappen(armdata(:,1),armdata(:,2),gauss2,30);

figure(100);

hold on;

plot(errortime2,'k-');

legend('Bad starting position','Good starting position');

Q3: Is the squared error better or worse with the better starting position?

You might ask, why doesn't the computer just try every possible combination of values for these variables to identify the "global" minimum error, rather than just searching locally? The answer is that you don't have the time for it to try every combination of all variables. Suppose the computer could evaluate 1000 fits per second. Suppose there are 4 variables in your fit, and suppose you evaluate all 2^64 possible values of each variable (for the default variable type, which is a double). Then the total time it would take is ((2^64)^4)/1000 seconds, or about 3.7x10^66 years, much, much, much longer than the age of the universe (about 13.7 x 10^9 years). So we can't quite do that.
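If you want to double-check that arithmetic, you can let Matlab do it (2^256 is a power of 2, so a double represents it exactly):

total_seconds = (2^64)^4 / 1000; % number of fits to evaluate, divided by fits per second

total_years = total_seconds / (60*60*24*365) % convert seconds to years; roughly 3.7e66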

Local error minima vs. the global minimum error

Rules of thumb for finding the global minimum error:

  • Use the simplest equation you can that describes your function; adding more parameters increases the likelihood that your fits will get stuck in a local error minimum

  • If possible, give the fitting function very good guesses as to the initial starting conditions that should be used

  • Repeat the fitting process from a variety of initial starting conditions; compare the results to see which starting position had the least error (see the short sketch just after this list)

  • If you know something about the limits of your variables (for example, if a variable in your equation cannot physically be less than 0, but mathematically it could be), make sure to include this information in your fit options.
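As an example of the third rule of thumb, here is a minimal sketch of how you might automate trying several starting positions and keeping whichever fit ends up with the smallest error. It reuses armdata and the gauss fittype from above; the particular starting points are arbitrary choices for illustration:

startpoints = [0 0.5 0 0.5; 0 0.5 6 0.5; 0 8 8 1]; % each row is a guess for [a b c d]

bestsse = Inf;

for k=1:size(startpoints,1),
    fo = fitoptions(gauss);
    fo.StartPoint = startpoints(k,:)'; % install the k-th starting guess
    g = setoptions(gauss,fo);
    [thisfit,thisgof] = fit(armdata(:,1),armdata(:,2),g);
    if thisgof.sse < bestsse, % keep the fit with the smallest summed squared error
        bestsse = thisgof.sse;
        bestfit = thisfit;
        beststart = startpoints(k,:);
    end;
end;

bestsse, beststart, % display the winning error and starting position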

Okay, let's try a few of these solutions for the problem at hand.

First, let's make a guess as to the starting values for our search. Let's base our guess on the data itself. Let's guess mean(y) for the constant offset parameter (a), and max(y) for the peak parameter (b). Let's find the location of the peak and make that our guess for the peak position parameter (c), and we'll arbitrarily guess 1 for the width parameter (d).

gauss3 = gauss;

fo = fitoptions(gauss3);

[maxvalue,location] = max(armdata(:,2)); % maxvalue will be guess for b

position_guess = armdata(location,1); % guess parameter c

width_guess = 1; % arbitrarily pick a width guess, parameter d

fo.StartPoint = [mean(armdata(:,2)); maxvalue; position_guess; width_guess];

Second, although it is mathematically possible for a gaussian to appear upside-down, that is not expected in this situation, so let's limit the peak to being positive. Further, let's require the peak position c to lie within the range of positions we actually sampled.

Let's look at how we do this. Examine the available fit options parameters:

fo,

should pop up something like this:

Fields of fit options structure (don't type this, this is what you should see):

fo =

        Normalize: 'off'
          Exclude: []
          Weights: []
           Method: 'NonlinearLeastSquares'
           Robust: 'Off'
       StartPoint: [1.8255 7.8799 7.8799 1]
            Lower: [1x0 double]
            Upper: [1x0 double]
        Algorithm: 'Trust-Region'
    DiffMinChange: 1.0000e-08
    DiffMaxChange: 0.1000
          Display: 'Notify'
      MaxFunEvals: 600
          MaxIter: 400
           TolFun: 1.0000e-06
             TolX: 1.0000e-06

We can set the upper and lower limits of the variables a, b, c, and d. Let's restrict a to be between -maxvalue and positive maxvalue (the constant offset doesn't need to go outside these bounds), restrict b to be positive, and restrict c to lie within the range of positions we sampled. We have less information about how to restrict d, so, for the moment, we won't.

fo.Lower = [-maxvalue; 0; min(armdata(:,1)); -Inf];

fo.Upper = [maxvalue; Inf; max(armdata(:,1)); Inf];

gauss3 = setoptions(gauss3,fo);

Now let's perform the fit:

[nerve3,nervegof3,errortime3]=dasw.fit.watchfithappen(armdata(:,1),armdata(:,2),gauss3,30);

figure(100);

plot(errortime3,'g-');

legend('Bad starting position','Good starting position','Great starting position');

Q4: Did our guesses work?

Garbage in, garbage out

As you've already seen, one of the most important rules pertaining to any analysis process is that if you put in data that doesn't conform to the assumptions of the process, you get meaningless data out the other side. Here are a couple of common situations in fitting:

Situation 1a: the function doesn't describe the data very well

Let's consider the function we used to fit the population data in Lab 2.1: f(x) = a*(x-b)^2 . Suppose that instead of trying to use this function to fit population data (which inherently has an exponential shape), we tried to fit sinusoidal data.

x = 1:0.1:10;

y = sin(x);

s = fitoptions('Method','NonlinearLeastSquares','Lower',[0 0],'Upper',[Inf max(x)],'StartPoint',[1 1])

f = fittype('a*(x-b)^2','options',s);

[c,gof]=fit(x',y',f) % need to make sure x and y are columns

figure;

hold on;

plot(x,y,'k');

xlabel('x'); ylabel('y');

plot(x,c(x),'b--');

Q5: Do you think that the fit function did the best job it could fitting the data to the quadratic function that was requested? What are the confidence intervals on the parameters a and b? Does the fit do a good job of accounting for the variance in the data? What is the rsquare value? Should you use this function for fitting this data?
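If you'd like to read the confidence intervals out directly (rather than from the display that fit produces), the Curve Fitting Toolbox function confint reports the 95% confidence bounds on the fitted parameters:

confint(c) % 95% confidence bounds on a and b for the fit object c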

Solution to this situation: pick a good function and verify it is a good choice on a lot of your example data.

Situation 1b: the function doesn't describe the data very well

There is another way in which the function might not describe the data well. Suppose we are trying to sample a relatively unresponsive nerve in someone's arm, and we obtain the data in morearmdata.txt below.

morearmdata = load('morearmdata.txt','-ascii');

gauss4 = gauss;

fo = fitoptions(gauss4);

[maxvalue,location] = max(morearmdata(:,2)); % maxvalue will be guess for b

position_guess = morearmdata(location,1); % guess parameter c

width_guess = 1; % arbitrarily pick a width guess, parameter d

fo.StartPoint = [mean(morearmdata(:,2)); maxvalue; position_guess; width_guess];

fo.Lower = [-maxvalue; 0; min(morearmdata(:,1)); -Inf];

fo.Upper = [maxvalue; Inf; max(morearmdata(:,1)); Inf];

gauss4 = setoptions(gauss4,fo);

[nerve4,ngof4,et4]=dasw.fit.watchfithappen(morearmdata(:,1),morearmdata(:,2),gauss4,30);

Q6: Did you still get a value? Did it fit anything? Would you report that the nerve preferred position c and had a width parameter of d? Does that seem like the right thing to do?

Solution to this situation: Consider some sort of statistical test to reject noisy cells from being fit in the first place; an easy criterion is to require that the maximum value be at least 5 standard deviations above the noise.

5*std(morearmdata(:,2))

maxvalue,

Is it 5 standard deviations above the noise?
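If you wanted to build this criterion into your own analysis, so that weak, noisy recordings are never fit in the first place, a simple if statement would do it. This is just a sketch; the factor of 5 is the rule of thumb from above rather than a magic number:

if maxvalue >= 5*std(morearmdata(:,2)), % peak at least 5 standard deviations above the noise?
    [nerve4,ngof4] = fit(morearmdata(:,1),morearmdata(:,2),gauss4);
else
    disp('Response too weak relative to the noise; skipping this fit.');
end;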

Situation 2: the function might describe the data in principle, but you didn't sample enough data points to allow a good fit

Suppose we had sampled the nerve that provided the data for armdata at low resolution, so that we only sampled every 2mm:

armdata_low = armdata(1:200:end,:)

What happens if we perform the fit? (You can cut and paste here; this repeats a lot of the same steps):

gauss5 = gauss;

fo = fitoptions(gauss5);

[maxvalue,location] = max(armdata_low(:,2)); % maxvalue is guess for b

position_guess = armdata_low(location,1); % guess parameter c

width_guess = 1; % arbitrarily pick a width guess, parameter d

fo.StartPoint = [mean(armdata_low(:,2)); maxvalue; position_guess; width_guess];

fo.Lower = [-maxvalue; 0; min(armdata_low(:,1)); -Inf];

fo.Upper = [maxvalue; Inf; max(armdata_low(:,1)); Inf];

gauss5 = setoptions(gauss5,fo);

[nerve5,ngof5,et5]=dasw.fit.watchfithappen(armdata_low(:,1),armdata_low(:,2),gauss5,30);

hold on

plot(armdata(:,1),armdata(:,2),'m');

Q7: Compare the confidence intervals for parameters c and d for nerve3 and nerve5. Did the lower resolution fit have the right answer?

Number of parameters and model selection

One big question that one often wants to answer is the following: I have 2 functions that fit the data well. Which should I use? Can I make a statistical argument that one is better than the other?

Let's look at a specific example. Let's quickly re-do the fit of the census data from last time:

load census

s = fitoptions('Method','NonlinearLeastSquares',...

'Lower',[0,0],...

'Upper',[Inf,max(cdate)],...

'Startpoint',[1 1]);

f = fittype('a*(x-b)^2','options',s);

[c,gof] = fit(cdate,pop,f)

figure;

plot(cdate,pop,'o'); xlabel('Year'); ylabel('Population, millions');

x_values = 1750:2000;

y_values = c(x_values); % this returns c evaluated with best a, b

hold on;

plot(x_values,y_values,'b--');

Now let's fit the data with a third order polynomial:

s2 = fitoptions('Method','NonlinearLeastSquares',...

'Lower',[0,0 0 0],...

'Upper',[Inf,max(cdate),Inf,max(cdate)],...

'Startpoint',[1 1 1 1]);

f2 = fittype('a*(x-b)^2+c*(x-d)^3','options',s2);

[c2,gof2] = fit(cdate,pop,f2)

x_values = 1750:2000;

y2_values = c2(x_values); % this returns c2 evaluated with the best a, b, c, d

hold on;

plot(x_values,y2_values,'m--');

Q8: How does the rsquare value of the third order polynomial fit compare to that of the second order polynomial?

In general, more parameters mean a better (lower-error) fit. If we want to compare 2 fits that have the same number of parameters, we can just compare the squared errors (again, assuming both fits are not garbage by the criteria above).

So, suppose we have 2 fits that DON'T have the same number of parameters. How do we say which one is "better", discounting the extra number of free parameters?

In general, this is a pretty difficult question. There are a variety of approaches (see MacKay, "Information-based objective functions for active data selection", Neural Computation, 4:589-603).

One of the simplest is the "Nested F-test", which compares 2 "nested fits". The 2 fits above are considered "nested", because the 2nd is just an elaboration of the first with more parameters; we could get exactly the first fit if we set c and d equal to 0 above.

The Nested F test works like this: we compute a value F, and then examine whether F is about what we would expect if the 2 fits were of the "same" quality, or whether the reduction in error of the second model is greater than we would expect simply from adding 2 additional parameters.

To do this, we need to examine the change in error but also the change in a quantity known as "degrees of freedom". Degrees of freedom, loosely speaking, is the number of data points we have minus the number of parameters; it's the number of dimensions that remain "free" to vary after the parameters are specified.
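Putting those two pieces together, the statistic we will compute is

F = ((SSE_lessP - SSE_moreP)/SSE_moreP) / ((df_less - df_more)/df_more)

where SSE_lessP and df_less refer to the model with fewer parameters and SSE_moreP and df_more to the model with more parameters; this is algebraically the same as the usual textbook form ((SSE_lessP - SSE_moreP)/(df_less - df_more)) / (SSE_moreP/df_more). In code: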

df_more = length(pop) - 4; % degrees of freedom: number of data points minus number of parameters (4)

df_less = length(pop) - 2; % same, for the fit with 2 parameters

SSE_moreP = gof2.sse;

SSE_lessP = gof.sse;

changeinerror = ((SSE_lessP - SSE_moreP)/SSE_moreP);

changeindegreesoffreedom = ((df_less-df_more)/df_more);

F = changeinerror / changeindegreesoffreedom;

We can obtain a P value for the nested F test by evaluating the cumulative distribution function for F, which is provided by the Matlab function fcdf (see help fcdf):

P_nestedF = 1-fcdf(F,df_less-df_more,df_more),

Q9: Is the 4 parameter fit significantly better than the 2 parameter fit?

There is a nice description of the nested F-test here.

Model selection is one of the more difficult tasks in fitting. It requires a lot of care, probably 3 times the care of fitting a single function! In the nested approach, if either of the fits "fails" due to the reasons we discussed above (getting stuck in local error minima, not enough points, the fit doesn't describe the data at all), then the output of this "Nested F test" is bogus. But it is useful when you need it and have spent the time establishing that your fits are of high quality.

Number of parameters and overfitting

In labs 1.5-1.6, we saw that we can never know the "true" distribution of our data, but we can learn some of its parameters by sampling. The process of sampling means that our data will deviate from the true distribution slightly, but we can infer certain properties of our distribution (like the mean, the amount of correlation between 2 variables, etc).

When we are fitting, we want to be careful to avoid loading up on parameters so that the fit runs through all of our data points, because this probably means we are fitting the particulars of the sample, rather than features of the "true" distribution. For example, consider the figure on the Wikipedia page about overfitting (Wikipedia::Overfitting), which shows a set of noisy, roughly linear data points fit with both a straight line and a high order polynomial.

The straight line approximately fits the data; some ambitious person has also gone in and fit the same data using a higher order polynomial (y = a + b*x + c*x^2 + d*x^3 + ...). Notice that the higher order polynomial fit, which has many parameters, goes through every single data point and has 0 apparent error! It must be better, right? Well, what this person has probably done is fit the sample, while actually doing a worse job of describing the "true" distribution that we don't know. If we were to perform the sampling a second time, the high order polynomial would probably describe the new sample much worse than the simple linear fit would. So when it comes to the number of parameters, remember: the fewer the better, generally speaking.

The tutorial is already pretty long, so we won't work any examples for overfitting. If you're interested, you can check out this video blog on the topic. It is short and has some pretty nice slides.

Function reference

Matlab functions and operators

  • fittype, fitoptions, setoptions, fit, methods, pause, delete, fcdf

User functions

  • dasw.fit.watchfithappen.m

Labs are copyright 2011-2021, Stephen D. Van Hooser, all rights reserved.