# Lab 2.3 Creating robust index values

## Introduction

Often in biological experiments, one creates an "index" value to summarize an experiment. For example, consider the following experiment with fruit flies:

Imagine that 100 fruit flies are placed in a glass tube with an apple scent. The flies are then briefly shocked. A few minutes later, the flies are placed in a glass tube that has an apple scent at one end, and a pear scent at the other end. At the end of the experiment, a majority of flies are hanging out at the "pear scent" end, because they have associated the apple scent with the shock.

Suppose we wanted to repeat this experiment many times with flies that have different genes; we could use the task to understand which genes are involved in learning and memory. Let's load some example data from 3 strains (that is, flies with 3 different sets of genes) called flies1, flies2, and flies3.

```matlab
flies1 = load('flies1.txt','-ascii');
flies2 = load('flies2.txt','-ascii');
flies3 = load('flies3.txt','-ascii');
```

Let's look at the data

```matlab
flies1,
```

You can see there are 2 columns of data for many experiments; the first column shows the number of flies in the pear side of the tube, and the second column shows the number of flies in the apple side of the tube.

## Beware the simple ratio index

We'd like to summarize our experiments with a simple index. Let's try one index that we'll call the Pear/Apple ratio:

**PearAppleRatio = # of flies on Pear-side / # of flies on Apple-side**
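Before running this in MATLAB, it's worth seeing why a simple ratio is fragile. Here is a small Python sketch (the fly counts are invented for illustration): if even one experiment ends with zero flies on the apple side, the ratio for that experiment is infinite, and it drags the mean with it:

```python
import math

# Hypothetical (pear, apple) counts from three experiments; numbers are invented.
counts = [(60, 40), (75, 25), (100, 0)]  # last experiment: nobody on the apple side

# Pear/Apple ratio; a zero apple count sends the ratio to infinity
ratios = [pear / apple if apple > 0 else math.inf for pear, apple in counts]
mean_ratio = sum(ratios) / len(ratios)

print(ratios)      # [1.5, 3.0, inf]
print(mean_ratio)  # inf -- one experiment wrecks the summary
```

Keep this hazard in mind when you answer Q1 below.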

Let's try to plot bar graphs with the raw data side by side. I'm going to add a little twist to the bar graphs. Since it's possible for more than one sample to have the same value, we'll add a little random noise to spread the points out on the X axis (see the plot command below):

```matlab
pearappleratio1 = flies1(:,1)./flies1(:,2) % perform element-by-element division
pearappleratio2 = flies2(:,1)./flies2(:,2);
pearappleratio3 = flies3(:,1)./flies3(:,2);

mn1 = mean(pearappleratio1),
mn2 = mean(pearappleratio2),
mn3 = mean(pearappleratio3),

figure('name','Pear/Apple Ratio'); % new trick, we'll give this figure a name
bar([1 2 3],[mn1 mn2 mn3],'w');
hold on
x_scatter = randn(size(pearappleratio1))/10;
plot(1+x_scatter, pearappleratio1,'go');
plot(2+x_scatter, pearappleratio2,'ro');
plot(3+x_scatter, pearappleratio3,'bo');
xlabel('Strain number');
ylabel('Pear/apple ratio');
set(gca,'xtick',[1 2 3]);
```

Leave this window open for a little while. We'll go back to it a little later.

So what happened here? First, let's compare the first 2 bars.

**Q1**: Are the data for the first 2 strains easy to compare side by side? What happened to the third strain? What are the values of mn2 and mn3? Are all 20 points for each experiment present on the graph? From looking at the raw index values, do you have any idea why this happened?

Now we've seen a couple things that can go wrong with using ratios as an index for an experiment. Let's try a few more ideas.

## Normalization is usually a good idea

Another possible index that we could choose is the Pear/Apple difference:

**PearAppleDiff = # of flies on Pear-side - # of flies on Apple-side**

This index would vary between -100 and 100 for this experiment. But what if we go to our tube of shocked flies and discover that 4 flies have kicked the bucket during the shocking session (fruit flies don't live very long; they are always dropping off)? Then we only have 96 flies left. Do we decide that we can't do the experiment, because we no longer have 100 flies, and just throw out the 96 survivors? That's kind of a waste. Shouldn't we develop an index that doesn't depend on the exact number of flies in the tube? A better choice would be a *normalized* index that is independent of the number of flies. Here are a couple of options:

**PearIndex = #PearSide / (#PearSide + #AppleSide)**

**PearAppleIndex = (#PearSide - #AppleSide)/(#PearSide + #AppleSide)**
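These two candidate indices carry the same information: a little algebra shows that PearAppleIndex = 2×PearIndex - 1, and both are unchanged if the total number of flies shrinks while the proportions stay the same. A quick Python sketch (the fly counts here are invented for illustration):

```python
def pear_index(pear, apple):
    """Fraction of flies on the pear side."""
    return pear / (pear + apple)

def pear_apple_index(pear, apple):
    """Normalized difference between the two sides, from -1 to 1."""
    return (pear - apple) / (pear + apple)

# Invented example: 75 flies on the pear side, 25 on the apple side
p1 = pear_index(75, 25)        # 0.75
d1 = pear_apple_index(75, 25)  # 0.5

# The two indices are algebraically linked:
print(d1 == 2 * p1 - 1)        # True

# Losing flies (same proportions, fewer animals) leaves the index unchanged
print(pear_index(15, 5))       # 0.75, same as with 100 flies
```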

Let's plot the "PearIndex" values:

```matlab
pearindex1 = flies1(:,1)./(flies1(:,1)+flies1(:,2)) % element-by-element sum, division
pearindex2 = flies2(:,1)./(flies2(:,1)+flies2(:,2))
pearindex3 = flies3(:,1)./(flies3(:,1)+flies3(:,2))

mn1 = mean(pearindex1),
mn2 = mean(pearindex2),
mn3 = mean(pearindex3),

figure('name','Pear Index');
bar([1 2 3],[mn1 mn2 mn3],'w');
hold on
plot(1+x_scatter, pearindex1,'go');
plot(2+x_scatter, pearindex2,'ro');
plot(3+x_scatter, pearindex3,'bo');
xlabel('Strain number');
ylabel('Pear Index');
set(gca,'xtick',[1 2 3]);
```

Again, leave this window open for a little while. We'll go back to it a little later.

**Q2**: Compare and contrast the graphs of the pear/apple ratio and the pear index. Which one do you find easier to read and interpret?

## Index names are important

What is the best name for our index? Let's evaluate a few candidates.

We could stick with "Pear Index". However, what if we want to switch our scents in some experiments, to make sure that there isn't something different about associating apple scents with a shock as opposed to learning to associate an arbitrary scent with a shock? So let's hunt for a more generic name.

Maybe the next candidate would be "ShockScentAvoid Index". Someone might choose this name because it evokes the notion that the flies have "learned" to avoid the scent that was paired with the shock. However, this name is a bit misleading; it ascribes a particular meaning to the presence of a fly on the opposite side of the tube from the odor that was present when the flies were shocked.

**Q3**: If a given fly is in the "pear" side of an apple/pear tube after being shocked in the presence of an "apple" scent, does it imply that the fly "learned" to avoid apple? As you consider your answer, imagine a mutant strain of flies that lacks the odorant receptors necessary to detect apple and pear odors. How would you expect 100 of these flies to be distributed in the tube? Did any of those mutant flies "learn" anything?

Perhaps the best choice of name is something that merely describes what happened, such as "ShockScentOpposite Index". This describes the fraction of flies that are on the opposite side of the tube from the scent that was associated with the shock, without using a name that ascribes a specific meaning to the actions of individual flies. It also avoids using the names of the specific odors, so the index can be applied to experiments with different combinations of odors. If you wanted a shorter name, you could call it "ShockScent Index" and use the number of flies on the shock-scent side divided by the total number of flies (this is equal to 1 - ShockScentOpposite Index).

The best choice of name really depends on the question one is trying to answer. If one was really interested in understanding how pear and apple scents were detected specifically, then perhaps one would want to choose a name like "PearApple Index". If one is more interested in the process of learning and memory, then one might prefer a name like "ShockScent Index" or "ShockScentOpposite Index".

## Non-parametric test for index values: the Kruskal-Wallis test

Examine the Figure named "Pear Index" that you created earlier. We mentioned previously (Lab 1.6) that the t-test is great at detecting differences in means between 2 samples, and that the ANOVA technique (Lab 1.8) can detect differences in means among multiple groups. But we also said these tests should be used for variables that are normally distributed, or approximately normally distributed (where most of the data is clustered around the mean).

**Q4**: Looking at the "Pear index" graph, are most data points clustered around the means, or are the data points more skewed? Are several data points hitting a "ceiling" or "floor" defined by the equation of the index value?

In this situation, we cannot reliably use tests like the t-test or ANOVA. Fortunately, others have developed tests that rely on the **rank order** of the data points. *These rank order tests can be used with any data set and make no assumptions about the underlying distribution.* Let's illustrate this here:

Suppose we have 2 data sets that are uniformly distributed between 0 and 1:

```matlab
x1 = rand(10,1)
x2 = rand(10,1)
```

Now let's sort these data points together. We'll ask sort to return both the sorted list and the index values (the second output argument; see help sort).

```matlab
x_12 = [x1;x2]
[x_sort,index] = sort(x_12)
```

The values in the sorted list that come from the first data set have index values of 10 or less (since they are the first 10 entries in our combined list x_12), and the values in the sorted list that come from the second set have index values greater than 10. Let's look at which group each point comes from:

```matlab
inds_x1 = find(index<=10);
inds_x2 = find(index>10);
index(inds_x1) = 1;
index(inds_x2) = 2;
index,
```

We can calculate the average "rank position" for each group and compare it to the middle of the combined data set (which is position 10.5, since the ranks run from 1 to 20):

```matlab
mean(inds_x1)
mean(inds_x2)
```

**Q5**: Are the points from the 2 groups well mixed, or are members of 1 data set grouped towards the beginning or end?

Now let's look at a second data set:

```matlab
x3 = rand(10,1)
x4 = rand(10,1) + 0.5; % we'll artificially shift this data set over

x_34 = [x3;x4]
[x34_sort,index34] = sort(x_34)

inds_x3 = find(index34<=10);
inds_x4 = find(index34>10);
index34(inds_x3) = 1;
index34(inds_x4) = 2;
index34,

mean(inds_x3)
mean(inds_x4)
```

**Q6**: How do the mean rank orders of data set x3 and x4 compare to the mean rank orders of x1 and x2?

The Kruskal-Wallis Test is based on a mathematical calculation of the likelihood of seeing specific rank orders for various sample sizes. We won't go over the math here (but see the Wikipedia link if you're interested). The P-value for this test can be derived from the Matlab function **kruskalwallis** (see help **kruskalwallis**). The beautiful thing about the Kruskal-Wallis test is that we can look for differences in rank order among any number of groups, so it is possible to compare 2 or more groups at the same time.
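If you're curious, the core statistic itself is easy to compute by hand. Here is a rough Python sketch (assuming no tied values; the real test also handles ties and converts the H statistic to a P-value via a chi-square approximation):

```python
def kruskal_wallis_h(groups):
    """H statistic for the Kruskal-Wallis test (no tied values assumed).

    H measures how far each group's mean rank sits from the overall
    mean rank, (N+1)/2, weighted by group size."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}  # ranks 1..N
    n_total = len(pooled)
    h = 0.0
    for g in groups:
        mean_rank = sum(rank[x] for x in g) / len(g)
        h += len(g) * (mean_rank - (n_total + 1) / 2) ** 2
    return 12 / (n_total * (n_total + 1)) * h

# Two completely separated groups of 3: mean ranks are 2 and 5
h = kruskal_wallis_h([[0.1, 0.2, 0.3], [0.7, 0.8, 0.9]])
print(round(h, 4))  # 3.8571
```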

The **kruskalwallis** function has the same form as the **anova1** function. We need to define a group variable. You did this in Lab 1.8 by making a big variable G that you manually typed out:

```matlab
G = [1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 ];
```

G indicated that the first 10 points were in group 1, and the second 10 points were in group 2. We can then run:

```matlab
p12 = kruskalwallis([x1;x2],G)
p34 = kruskalwallis([x3;x4],G)
```

Typing out the group memberships is kind of boring. We can use the function **repmat** to repeat the same value over and over again. Check out help repmat.

```matlab
repmat([1 2 3],2,1)
repmat([1 2 3],1,2)

G = [repmat(1,1,length(x1)) repmat(2,1,length(x2))]
```

This is a good solution for preparing the "group membership" values for anova1 or kruskalwallis.

```matlab
Gf = [ repmat(1,1,length(pearindex1)) repmat(2,1,length(pearindex2)) ...
       repmat(3,1,length(pearindex3))]
p = kruskalwallis([pearindex1;pearindex2;pearindex3],Gf);
```

**Q7**: Are there significant differences (with alpha 0.05) in PearIndex among the 3 fly groups?

### If the non-parametric tests can be used with any distribution regardless of its properties, why don't we always use non-parametric tests instead of t-tests or ANOVAs?

The reason is that if we know our data have a central tendency and don't often hit ceilings or floors, then the t-test or ANOVA can discover differences between samples with fewer data points. So, if the data are more-or-less normally distributed, it's best to use the t-test or ANOVA. Otherwise, the Kruskal-Wallis test can still do the job, but we might need more data points, especially if the differences are small.

### Kruskal-Wallis and Mann-Whitney

The most common non-parametric test referred to in textbooks is the "Mann-Whitney U-test". This test is the same as the Kruskal-Wallis test when the number of data groups is 2. Kruskal and Wallis extended the Mann-Whitney U-test to an arbitrary number of groups (including 2). When you see the Mann-Whitney test used, just think Kruskal-Wallis; it is the same thing.
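You can verify the equivalence with a little arithmetic: for two groups with no ties, the squared z statistic from the Mann-Whitney normal approximation (without continuity correction) equals the Kruskal-Wallis H statistic. A Python sketch with two small made-up groups:

```python
import math

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
n1, n2 = len(x), len(y)
n = n1 + n2

# Mann-Whitney U: number of (y, x) pairs with y > x
u = sum(1 for yi in y for xi in x if yi > xi)
mu = n1 * n2 / 2
sigma = math.sqrt(n1 * n2 * (n + 1) / 12)
z = (u - mu) / sigma  # normal approximation, no continuity correction

# Kruskal-Wallis H from mean ranks (here the ranks are 1..3 for x, 4..6 for y,
# so the mean ranks are 2 and 5)
h = 12 / (n * (n + 1)) * (n1 * (2 - 3.5) ** 2 + n2 * (5 - 3.5) ** 2)

print(round(z * z, 6), round(h, 6))  # the two statistics agree
```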

## Case study: orientation selectivity in visual cortical neurons

Each experiment offers its own challenges for creating a robust index. One property that my lab studies is the orientation selectivity of neurons in the visual cortex. Most neurons in visual cortex respond especially well to images of bars at a particular preferred orientation, and respond less well as the stimulus orientation deviates from the preferred orientation. Different cells exhibit different orientation preferences so that all orientations are represented in the cortex.

Click here for a brief movie showing an orientation-selective neuron responding to visual stimulation and an associated tuning curve.

In real life, there are many features of the curves that we are interested in pulling out, but in this exercise we'll focus on just the following question: how much does a cell respond to stimulation at its preferred orientation as compared to the opposite orientation?

The most straightforward and classic way to attack this question is to identify the orientation with the largest response, call that orientation the preferred orientation, and then compute the normalized difference between the response at the preferred orientation and the response at the opposite orientation. This has been called the orientation selectivity index (OSI):

OSI = (R(θp) - R(θo))/R(θp)

where R(θp) is the response to stimulation at the preferred orientation and R(θo) is the response to stimulation at the opposite orientation. This index normally varies between 0 and 1 (unless the response to the opposite orientation drops below the "baseline" firing rate and becomes negative). For cells with high selectivity, this is just great; we get a high selectivity value:

But what about cells that exhibit poor selectivity? Let's look at a cell that is completely indifferent to orientation (a noisy cell):

The problem is that, because we are sampling, one orientation has to have the largest response; the ratio between this arbitrary angle and the opposite orientation can be quite noisy, as we'll see in a minute. Here is a function that computes the orientation selectivity index. I'd recommend saving it to a new folder called neuroscience under your tools folder. Then, make sure you refresh the path by calling

```matlab
addpath(genpath(['\\unethome.brandeis.edu\yourusername\MATLAB\tools']))
```

(or using whatever directory name is appropriate for your situation and computing platform).

**orientation_selectivity_index.m**

```matlab
function osi = orientation_selectivity_index( angles, responses )
% ORIENTATION_SELECTIVITY_INDEX
%   OSI = ORIENTATION_SELECTIVITY_INDEX(ANGLES, RESPONSES)
%   Takes ANGLES in degrees
%   Returns (R(pref)-R(pref+90))/R(pref)

[mx,ind] = max(responses);
ang = angles(ind);
ang1 = find(angles==ang+90);
ang2 = find(angles==ang-90);
if isempty(ang1),
    opposite = ang2;
else,
    opposite = ang1;
end;
osi = (mx-responses(opposite))/mx;
```

The orientation_selectivity_index.m function uses a statement that we haven't seen before: the if/elseif/else statement. The if statement allows the program to take one action IF a certain condition is true, another action IF a different condition is satisfied, or otherwise do something else. Let's spend a moment trying this out.

```matlab
a = 0;
if a<5,
    a = 3,
elseif a>5,
    a = 30,
else,
    a = 0,
end;
```

This statement will check to see if a is less than 5; if so (and it is), Matlab will execute any lines that are present until it sees an else, elseif, or end. Here, I've had Matlab print the output of a to the command line so you can see which one gets run.

We can also omit the elseif, or omit the else, or both, if we don't need to take an action in those cases.

```matlab
if a<5,
    a = 3;
end;
```

or

```matlab
if a<5,
    a = 3;
else,
    a = 0;
end;
```

Now back to orientation selectivity.

Another way to calculate orientation selectivity is to examine the response at the preferred orientation compared to the responses at all other angles. We can do this by expressing the response to each orientation as a vector in the complex plane (where the "x" axis is the real component, and the "y" axis is the imaginary component), and computing the sum of all of the responses. The responses to all angles for the cell above look like this (note that the circle represents the average vector):

We can create another orientation index (we'll call this index the orientation vector index, OVI) based on the vector representation of the responses. We'll define it to be the length of the summed vector response divided by the sum of all of the responses. If there is only a response at a single orientation, then this index will be 1; on the other hand, if the responses are relatively even across all orientations, then the index will be 0. For poorly selective cells, this orientation vector index is likely to be less noisy than the orientation selectivity index above, because rather than subtracting the response at a single angle from the maximum response, we incorporate the responses at many angles.

Here's a function that computes it (I'd recommend saving it to your neuroscience folder):

**orientation_vector_index.m**

```matlab
function ovi = orientation_vector_index( angles, responses )
% ORIENTATION_VECTOR_INDEX
%   OVI = ORIENTATION_VECTOR_INDEX(ANGLES, RESPONSES)
%   Takes ANGLES in degrees
%
%   OVI = |R| where
%   R = (RESPONSES * EXP(2i*ANGLES)') / SUM(ABS(RESPONSES))
%
%   See Ringach et al. J. Neurosci. 2002 22:5639-5651

angles = angles/360*2*pi; % convert degrees to radians
r = (responses * exp(2*sqrt(-1)*angles)') / sum(abs(responses));
ovi = abs(r);
```
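The extreme cases described above (index 1 for a response at a single orientation, index near 0 for perfectly even responses) are easy to check numerically. Here is a Python sketch of the same calculation, offered as an illustration; it uses angles 0 to 170 degrees so that each orientation appears exactly once:

```python
import cmath
import math

def orientation_vector_index(angles_deg, responses):
    """|sum(R * exp(2i*theta))| / sum(|R|), with angles in degrees."""
    r = sum(resp * cmath.exp(2j * math.radians(a))
            for a, resp in zip(angles_deg, responses))
    return abs(r) / sum(abs(resp) for resp in responses)

angles = list(range(0, 180, 10))  # 0, 10, ..., 170: each orientation once

# All response concentrated at a single orientation -> index is 1
single = [0.0] * len(angles)
single[9] = 20.0                  # respond only at 90 degrees
print(orientation_vector_index(angles, single))  # 1.0 (up to rounding)

# Equal responses at every orientation -> the vectors cancel, index ~ 0
flat = [10.0] * len(angles)
print(orientation_vector_index(angles, flat))    # ~0
```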

### Testing index robustness

Now that we have 2 candidate index values, we can test their robustness by simulating orientation responses and adding noise. We'll simulate orientation responses using a Gaussian equation:

```matlab
a = 20; % peak response, in spikes/sec
b = 90; % location of the peak
c = 20; % width
d = 0;  % offset
e = 2;  % amount of noise

angles = 0:10:180
ori_nonoise = a*exp(-((b-angles).^2)/(2*c^2))+d+0*randn(size(angles));
ori = a*exp(-((b-angles).^2)/(2*c^2))+d+e*randn(size(angles));

figure;
plot(angles,ori); xlabel('Angle'); ylabel('Response (spikes/sec)');
hold on;
plot(angles,ori_nonoise,'k-');

osi = orientation_selectivity_index(angles,ori)
ovi = orientation_vector_index(angles,ori)
```

Now let's simulate the performance of these index values for simulated data with varying "true" orientation selectivity. We'll compute 5 example curves for each level, using what is called a "nested loop", a loop within a loop (in this case, a while loop within a while loop):

```matlab
true_osi = [];
measured_osi = [];
measured_ovi = [];
a = 20; d = 0;
while d<=10,
    i = 1;
    while i<=5,
        ori = a*exp(-((b-angles).^2)/(2*c^2))+d+e*randn(size(angles));
        true_osi(end+1) = (a+d-d)/(a+d); % peak is a+d, opposite is d
        measured_osi(end+1) = orientation_selectivity_index(angles,ori);
        measured_ovi(end+1) = orientation_vector_index(angles,ori);
        i = i + 1;
    end;
    d = d + 1; % increase baseline
    a = a - 2; % decrease peak rate
end;
```

Let's look at how our index values did:

```matlab
figure;
plot(true_osi,measured_osi,'gs'); % green squares
hold on;
plot(true_osi,measured_ovi,'bo'); % blue circles
xlabel('True osi');
ylabel('Measured osi (green) ovi (blue)');
```

**Q8**: Which measure is more "robust"? For low "true" selectivity, is one measure consistently lower than the other? If a cell has low selectivity, does it ever receive a moderately high score for either index? For high "true" selectivity, is either index much better than the other? (Hint: It doesn't matter if the values of the index closely progress from 0 to 1; instead, if the "true" orientation selectivity is a particular value, how little do the 2 index values vary on actual example data?)

## Appendix: the bootstrap

Watch this space for an appendix describing the bootstrap (not needed for the course, but if you have to test the robustness of index values someday come back to see how this is done!)

## Function reference

### Matlab functions and operators

### User functions

**orientation_selectivity_index.m**, **orientation_vector_index.m**

**Labs are copyright 2011-2012, Stephen D. Van Hooser, all rights reserved.**