## Introduction

In the last lab, we examined the nuts and bolts of a typical statistical test. In this case, as in most cases, the procedure involved calculating the likelihood that 2 samples of data were drawn from the same underlying distribution. That is, we posited that the "null hypothesis" was true: that our drug did nothing. We did this empirically first, and then saw that the mathematician Kolmogorov had developed an equation that agreed very closely with our empirical experiments on the computer. Finally, we calculated the likelihood that 2 experimental data sets were actually 2 samples of the same underlying distribution; this was very unlikely, so we concluded that the null hypothesis was false (that is, the 2 samples really were samples of different underlying true distributions).

Many statistical tests can be expressed in the form of a "recipe" that 1) describes the situations where the test can be used, and 2) describes the behavior of the test statistic when the underlying data are the same (that is, when the null hypothesis is true). Let's look at a sample recipe for the Kolmogorov-Smirnov test:

Now let's follow this recipe on the data set we used last time:

```matlab
sample1 = generate_random_data(20,'normal',0,1);
sample2 = generate_random_data(20,'normal',2,1);
[X1,Y1] = cumhist(sample1, [-10 10], 0.1);
[X2,Y2] = cumhist(sample2, [-10 10], 0.1);
figure;
plot(X1,Y1,'r-');
hold on
plot(X2,Y2,'b-');
xlabel('values of 2 samples of size 20');
ylabel('Percent of data');
```

Now that we understand the idea behind the statistical test, we can calculate it with the built-in function:

```matlab
[h,pvalue] = kstest2(sample1,sample2)
```

Q1: What is pvalue? Can you reject the null hypothesis? That is, are these samples likely to have been derived from different underlying distributions?

In today's lab, we will look at how we can determine the mean of a true distribution with some confidence, and examine whether the means of 2 distributions are statistically different.
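To connect the recipe to the plot above, the Kolmogorov-Smirnov test statistic itself is just the largest vertical gap between the 2 cumulative histograms. A minimal sketch, under the assumption that cumhist returns its Y values in percent on a common X grid (as the calls above suggest):

```matlab
% The KS statistic is the maximum vertical distance between the
% 2 empirical cumulative distributions. Y1 and Y2 are assumed to be
% percentages evaluated on the same X grid ([-10 10] in steps of 0.1).
ks_statistic = max(abs(Y1 - Y2)) / 100   % divide by 100 to get a fraction in [0,1]
```

kstest2 computes this same quantity internally and then converts it to the p-value using Kolmogorov's equation.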
## What is the mean value of my distribution?

Often in science, one is very interested in knowing what happened in an experiment "on average"; that is, we are interested in knowing the "true" mean of the "true" distribution of interest, or in knowing how the means of 2 distributions compare to one another.
Of course, if we have a set of samples, then calculating the mean of those samples (called the sample mean) is quite easy (this is the "mean" or "average" that you might have learned in high school). It is simply:

Sm = (S1 + S2 + S3 + ... + SN)/N,

where N is the number of points in your sample.
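As a quick check of the formula, we can compute the sample mean "by hand" and compare it with MATLAB's built-in mean function:

```matlab
S = [ 3 1 4 1 5 ];       % a small example sample
N = length(S);           % number of points in the sample
Sm = sum(S) / N          % (S1 + S2 + ... + SN)/N, which is 2.8 here
Sm_builtin = mean(S)     % the built-in function gives the same answer
```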
We are typically interested in determining, with some confidence, the "true" mean of the distribution. Fortunately, we can learn a great deal about this "true" mean through sampling.

To build our intuition as to how sampling can tell us about the true mean, we'll do some virtual experiments on the computer. Suppose the following distribution of data indicates, for 2010, the exact birth date of the first litter of young squirrels born to all 20,000 mature female squirrels in the city of Waltham, MA:
Note that here we are assuming that this is the "true distribution" of squirrel births. Of course, if we were scientists interested in understanding the mean birthday of squirrels in this year, we would be unable to secure funding to monitor the nests of 20,000 mature female squirrels, both because it would be really expensive and because it is unnecessary, thanks to sampling.

Instead, suppose we set out to equip 30 randomly-chosen squirrel nests with video equipment in order to observe squirrel births and record the date of occurrence. We want to find the average birth day for this population. How much will our 30 squirrel nests tell us about the population mean? To evaluate this, we will model our experiment in 2 steps:
- We will randomly draw 30 points from the true distribution to comprise our sample.
- We will calculate the sample mean Sm from this sampled data.
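These 2 steps, repeated many times, are all that a function like simulate_random_sampling needs to do. A minimal sketch of what such a function might look like (the plotting details here are assumptions for illustration, not the exact m-file used in lab, which also plots a cumulative histogram via autohistogram):

```matlab
function sample_means = simulate_random_sampling(data, sample_size, num_experiments)
% SIMULATE_RANDOM_SAMPLING - model repeated sampling experiments
%   Draws SAMPLE_SIZE points at random (without replacement) from DATA,
%   NUM_EXPERIMENTS times, and plots the distribution of sample means.
sample_means = zeros(1, num_experiments);
for i = 1:num_experiments
    R = randperm(length(data));            % random ordering of all indices
    my_sample = data(R(1:sample_size));    % step 1: draw the random sample
    sample_means(i) = mean(my_sample);     % step 2: record its sample mean
end
figure;
hist(sample_means, 30);                    % histogram of the sample means
xlabel('Sample mean');
ylabel('Number of experiments');
```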
We'll repeat this modeled experiment 1000 times, to look at what the sample mean tends to look like for this kind of experiment. To do this, we want to grab 30 samples at random. One way to do this is to use the ready-made function randperm, which produces a randomly scrambled list of the numbers 1 to M; further, we can use the built-in function length to find out the number of points M in a variable. Let's try this with a small example. Suppose we want to grab 3 data points at random from a list of 6 data points:

```matlab
S = [ 0.5 7 10 4 3 5 ];            % a data set with 6 elements
number_of_elements = length(S)     % length(S) returns the number of elements of S
R = randperm(number_of_elements)   % returns a random permutation of 1..6
R(1:3)                             % the first 3 elements of the random permutation
newsample = S(R(1:3))              % the new sample of 3 points from the data set
```
Okay, now let's model our experiment. We'll put this in an m-file so we can repeat it with some other distributions easily. We'll also write a function autohistogram.m that makes an automatic guess as to a nice number of bins to use for plotting the histogram.

Now let's examine the distribution of sample means that we might obtain in the experiment that we designed:
```matlab
squirrel_births = load('squirrel_births.txt','-ascii');
simulate_random_sampling(squirrel_births, 30, 1000);
title('The sample mean of squirrel birth days for 1000 sampling experiments');
```
Q2: Is the distribution of sample means that we would obtain for experiments of this type broad or narrow? Is the distribution of sample means clustered around the true mean, or is it all over the place? Based on the cumulative histogram, what value would we have obtained for the sample mean at the 2.5th percentile of all experiments? What value at the 97.5th percentile? The low value (the X value at the 2.5th percentile) and the high value (the X value at the 97.5th percentile) together comprise the 95% confidence interval for the true mean; we can be 95% sure that the true mean lies within this interval.

## Dependence of the confidence interval of the mean on the number of samples

Now let's examine how our confidence in the mean depends on the number of samples in our experiment. Let's try values of 10 samples, 30 samples, and 60 samples:
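If simulate_random_sampling is written to return the vector of 1000 sample means, the 95% confidence interval can also be read off numerically rather than by eye. A sketch, using sorting so it does not require any toolbox functions (the variable sample_means is an assumed output of the simulation):

```matlab
sorted_means = sort(sample_means);          % sample means in ascending order
N = length(sorted_means);                   % here, 1000 experiments
ci_low  = sorted_means(round(0.025 * N));   % X value at the 2.5th percentile
ci_high = sorted_means(round(0.975 * N));   % X value at the 97.5th percentile
% [ci_low, ci_high] is the 95% confidence interval for the true mean
```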
```matlab
simulate_random_sampling(squirrel_births, 10, 1000);
title('Sample means of squirrel birth days for 1000 experiments, 10 samples');
simulate_random_sampling(squirrel_births, 30, 1000);
title('Sample means of squirrel birth days for 1000 experiments, 30 samples');
simulate_random_sampling(squirrel_births, 60, 1000);
title('Sample means of squirrel birth days for 1000 experiments, 60 samples');
```
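To see the trend in Q3 directly, we can loop over the sample sizes and compare the widths of the resulting confidence intervals. This sketch assumes simulate_random_sampling returns the vector of sample means:

```matlab
sample_sizes = [10 30 60];
for k = 1:length(sample_sizes)
    means_k = simulate_random_sampling(squirrel_births, sample_sizes(k), 1000);
    sorted_k = sort(means_k);
    width = sorted_k(round(0.975*1000)) - sorted_k(round(0.025*1000));
    disp(['N = ' num2str(sample_sizes(k)) ', 95% CI width = ' num2str(width)]);
end
% The interval width should shrink roughly in proportion to 1/sqrt(N)
```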
Q3: What are the 95% confidence intervals for 10 samples and 60 samples? Does having more samples increase our certainty regarding the mean?

## Dependence of the shape of the distribution of sample means on the type of true distribution

The distribution of sample means is tightly clustered around the true mean for the data above; however, is this true for all types of data? Consider the true distribution of human birthdays in Massachusetts in 2010 (or, what it might be; this data is made up), or the number of heads that occur in 1000 coin flips for 10,000 different experiments. These true distributions have a very different shape from the true distribution of squirrel births; the human birthdays are very close to a uniform distribution, with all values equally likely, while the coin-flip counts form a normal distribution, or "bell curve," which we'll introduce formally in short order.

Now let's imagine we performed sampling experiments to estimate the mean values of these distributions.
```matlab
human_births = load('human_births.txt','-ascii');
simulate_random_sampling(human_births, 30, 1000);
title('Sample means of birth days for 1000 sampling experiments, 30 samples');
heads_coinflips = load('heads_coinflips.txt','-ascii');
simulate_random_sampling(heads_coinflips, 30, 1000);
title('Sample means of number of heads for 1000 sampling experiments, 30 samples');
```
Q4: Are the sample means of the human births as tightly clustered as the sample means for the squirrel births? Do the sample means exhibit any clustering? If the distribution of sample means for the human births is wider, does it have the same basic shape as that of the squirrel births or the numbers of heads?

## The normal distribution and the central limit theorem

There is a special type of distribution that arises frequently in both mathematics and the natural sciences: the normal distribution, or bell curve distribution. The distribution of sample means in our 1000 sampling experiments above has a normal distribution. This is the result of a beautiful theorem in mathematics called the central limit theorem. For our purposes, the central limit theorem can be summarized as follows:
The equation for the normal distribution, where the mean of the distribution is µ, the standard deviation is σ, and ∆x is the resolution of the X axis, is as follows:

P(x) = ∆x · exp( −(x − µ)² / (2σ²) ) / √(2πσ²)
The central limit theorem is good news, because it allows us to estimate our confidence in the mean of a distribution from a single sampling experiment (instead of 1000 in our example above). By letting µ be our sample mean Sm, and letting sigma be the standard error of the mean (calculated as the standard deviation divided by sqrt(N), where N is the number of points in our sample), we can obtain confidence around our estimate of the mean. Let's try it.
First, let's remind ourselves of our results from the 1000 sampling experiments:
```matlab
simulate_random_sampling(squirrel_births, 30, 1000);
title('Sample means of squirrel birth days for 1000 experiments, 30 samples');
```
Now let's conduct a single sampling experiment:
```matlab
R = randperm(length(squirrel_births));
S_30 = squirrel_births(R(1:30));
Sm = mean(S_30);
S_standarddeviation = std(S_30);            % std calculates the standard deviation
Std_error = S_standarddeviation/sqrt(30);   % the standard error of the mean
```
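Because about 95% of a normal distribution lies within 1.96 standard deviations of its mean, the central limit theorem also lets us write down an approximate 95% confidence interval directly from this single experiment:

```matlab
ci_low  = Sm - 1.96 * Std_error;   % lower 95% confidence limit
ci_high = Sm + 1.96 * Std_error;   % upper 95% confidence limit
[ci_low ci_high]                   % should bracket the true mean in ~95% of experiments
```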
Now, based on the central limit theorem, we predict that, if we had done many many experiments (like 1000), the distribution of sample means should look something like this:
```matlab
figure;
X = 1:200;
dX = 1;
Nus = dX*exp(-(power(X-Sm,2)/(2*power(Std_error,2))))/sqrt(2*pi*power(Std_error,2));
cumulative = cumsum(Nus);
hold on
plot(X,Nus,'g-');
plot(X,cumulative,'g--');
```
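To answer Q5 numerically, we can find where this cumulative curve crosses 2.5% and 97.5% (note that `cumulative` sums to approximately 1, since Nus already includes the dX factor):

```matlab
ci_low  = X(find(cumulative >= 0.025, 1));   % first X where the cumulative reaches 2.5%
ci_high = X(find(cumulative >= 0.975, 1));   % first X where the cumulative reaches 97.5%
[ci_low ci_high]                             % predicted 95% confidence interval
```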
Q5: Based on the distribution of means predicted by the central limit theorem, what are the 95% confidence intervals of the mean? (Use the 2.5% and 97.5% limits of the cumulative distribution of the sample means that we just created.)

## The differences between 2 means sampled twice are distributed as the T distribution

Finally, we are interested in knowing when 2 distributions have different means. As usual, we rely on the fact that mathematicians can calculate what the differences should be when the mean of a distribution is sampled twice. We won't go over this formula in class, but it is in your textbook. Instead, we will merely cover how to implement the test and give the recipe. Just as with the Kolmogorov-Smirnov test, we can calculate whether this difference is greater than expected by comparing the actual difference to a predicted difference. This test, called the t-test, is built right into MATLAB's Statistics Toolbox (the ttest2 function; see help ttest2).

We will generate 2 samples of the same underlying distribution by again sampling 30 points from the squirrel_births true distribution, and perform a t-test to see whether the means are significantly different from what is expected.

```matlab
R2 = randperm(length(squirrel_births));
S2_30 = squirrel_births(R2(1:30));
[h,pvalue] = ttest2(S_30,S2_30);
pvalue
```

The pvalue indicates the likelihood of observing a difference in sample means at least this large if the 2 underlying means were actually the same. In this example, the pvalue is likely to be rather high (greater than 0.05, for example). If we were to perform this test on data for which the means were different, then the pvalue would be low. If we wanted to have a confidence level of 0.05, then we would say that the means were significantly different if the pvalue were less than 0.05.

Note: While our confidence in the mean that we calculated above using the central limit theorem is general for any type of data, calculating the significance of the difference of means using the T distribution assumes that the underlying true distributions are "normal" distributions. In practice, however, this procedure works well for data that are approximately normally distributed (that is, where most of the data points lie in the middle of the distribution, and there are no hard edges or thresholds).
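For contrast, we can repeat the test on 2 samples whose underlying means really do differ, using the same generate_random_data function from the beginning of the lab; here the p-value should come out small:

```matlab
sampleA = generate_random_data(30, 'normal', 0, 1);   % true mean 0
sampleB = generate_random_data(30, 'normal', 2, 1);   % true mean 2
[h, pvalue] = ttest2(sampleA, sampleB);
pvalue   % very likely to be much less than 0.05
```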
