Lab 4.4 Functional imaging

Download me first!

Today's file is rather large, so please start downloading it now by clicking here: lab4.4.zip This file is too large to fit on a Brandeis UNET account, so you'll need to download it locally to your own computer. All of the data will take up about 6.75GB when the file is uncompressed, and you'll probably need about 10GB total free space to complete the lab.

Please also download the .m files at the end of the lab, and save them in a new folder in your tools folder called data_structures. Be sure to call addpath(genpath('YOURPATH/MATLAB/tools')) to add the new directory to your search path.

Introduction

In this lab we will study how to quantify the change in an image over time. There is a wide range of ways that an image can change over time. If we are photographing an object, such as a tree, a person, or a small process within a cell, it is possible that the object might move, and we might want to track and record its movement.

Here, we will look at a simpler form of imaging over time: we assume that our object stays relatively still, and we examine how much its reflectance or brightness changes over time.

Brain surface imaging of intrinsic signals: Change in reflectance over time

One of the exciting advances in imaging over the last 20 years has been the ability to observe evidence of neural activity in different brain regions of humans and animals. No doubt you have heard about the non-invasive technique called “functional magnetic resonance imaging”, which uses magnetic fields to indirectly measure blood flow in local areas of the brain.

In the first part of this lab, we will study a related (but invasive) imaging technique called intrinsic signal imaging. In intrinsic signal imaging, the exposed brain is illuminated with red light of about 630nm in wavelength. If you study the graph below, you can see that the absorbance (µa) of blood differs depending upon whether or not the blood is oxygenated (HbO2 is oxygenated hemoglobin, HbR is de-oxygenated hemoglobin). When a local area in the brain is active, the oxygen that is stored in the nearby blood is used to nourish the neurons and glial cells, and some of the blood switches from being oxygenated to being de-oxygenated. When this happens, the de-oxygenated blood will absorb more of the 630nm light, and therefore the active area will appear slightly darker than surrounding areas. (Immediately after this darkening, there is a period of increased blood flow to the region, but we won’t analyze that here.)

Intrinsic imaging has been applied in several brain areas, and has been used notably to identify the functional structure of the visual cortex in both adult and developing animals:

In the left panels, you can see the columns that respond to vertical bars (false-colored white) or horizontal bars (false-colored black); these columns are present before natural eye opening. In the next panel, you can see the columns that are responsible for motion selectivity; columns that respond to upward motion are false-colored black, and those that respond to downward motion are false-colored white. Motion selectivity is not present until after natural eye opening. This is shown quantitatively on the right.

Analyzing intrinsic signal images

To identify the pixels that have become slightly darker as a result of activity, we can examine the fractional change in reflectance at each pixel in our image:

∆R/R = (stimulation_image - reference_image)/reference_image.

Blood flow drifts slowly over time in a way that is independent of our stimulation, so the best reference images are those that are taken immediately before stimulation.
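
To make the calculation concrete, here is a tiny synthetic example (made-up numbers, not lab data): a uniform reference image, and a stimulation image in which a small patch has darkened by 0.1%:

reference_image = ones(100,100);          % uniform baseline reflectance
stimulation_image = reference_image;
stimulation_image(40:60,40:60) = 0.999;   % the "active" patch darkens by 0.1%
dRoR_demo = (stimulation_image - reference_image)./reference_image;
figure; imagesc(dRoR_demo); axis image; colorbar;  % the patch appears at -0.001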

In this exercise, we will compute the average ∆R/R for 20 repetitions of the presentation of horizontal bars to an animal. Then, we will look at the average image and see the locations that have become darker. Finally, we will compare the results to average images taken at other stimulus orientations (45°, vertical bars, 135°), to create a false-color image of the orientation preference at each pixel location.

Download the lab4.4.zip file by clicking here (lab4.4.zip; it is too large to be attached below). It contains a directory called 2010-12-16 that has images that were generated in response to horizontal bars. The images are stored in a “LabVIEW” format, and each represents the average over 15 video images that were acquired at 30 video frames per second; therefore, each saved "data frame" corresponds to 0.5 seconds of data. Image acquisition began 0.5 seconds before the onset of stimulation, so the first frame in each record is a reference, and the next 2 frames correspond to stimulus conditions.

Let’s perform this calculation on data collected for a single stimulus. We'll use the imagedisplay.m file that was part of Lab 4.2.

help readintrinsicstim

% take a moment to read the help file

framedata = readintrinsicstim(['data' filesep '2010-12-16'],0);

% look at one image:

imagedisplay(framedata(:,:,1));
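
% Compute ∆R/R: average the two stimulus frames (2 and 3), subtract the
% reference frame (1), and divide by the reference frame: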

dRoR=(0.5*(framedata(:,:,2)+framedata(:,:,3))-framedata(:,:,1))./framedata(:,:,1);

imagedisplay(dRoR)

The signal is very, very small; try looking at changes of ±0.002 or ±0.001 (corresponding to 2 or 1 parts in 1000, or 0.2% or 0.1%).
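
If you'd rather set the display range explicitly with built-in functions (instead of adjusting it through imagedisplay), you can fix the color limits yourself; this is just an alternative view, assuming dRoR from above:

figure;
imagesc(dRoR,[-0.002 0.002]);   % clip the color scale at +/- 0.002
colormap(gray(256)); axis image; colorbar;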

Q1: Can you see evidence of a response to the stimulus?

The following function will loop through the images and calculate the average ∆R/R image. It relies on the very useful functions eqlen.m, sizeeq.m, eqtot.m, and eqemp.m, which are attached at the end of this lab.
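(If you're curious about these helpers before downloading them: judging from how eqlen is used below, it appears to return true when two arrays have the same size and identical contents. A minimal sketch of that behavior, and only a guess at the real eqlen.m, would be:

function b = eqlen(x,y)
% EQLEN - (sketch) true if X and Y have equal size and identical values
b = isequal(x,y);   % isequal also checks that the sizes match

The actual attached implementation may differ.)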

analyzeintrinsicstims.m

function analyzeintrinsicstims(dirname, baselineframes, signalframes)
% ANALYZEINTRINSICSTIMS - Calculate deltaR/R for intrinsic imaging data
%
% ANALYZEINTRINSICSTIMS(DIRNAME,BASELINEFRAMES,SIGNALFRAMES)
%
% This function walks through the directory DIRNAME and
% calculates deltaR/R = (signal_avg-baseline_avg)./baseline_avg
% for each intrinsic imaging response that is found.
% signal_avg is computed as the average response in the data frames that
% are specified in the vector SIGNALFRAMES, and baseline_avg is
% calculated as the average response in the data frames that are
% specified in BASELINEFRAMES. Data frames are numbered from 1.
%
% It is assumed that the intrinsic imaging responses are named
% Trig_NNNNframe_YYYY.extension, where NNNN is the trigger number,
% YYYY is the data frame number, and extension is "tiff" or "lvdata".
%
% The function loops through all such files and saves each dR/R image as
% 'stimNNNNImage.mat'; the image is saved in the variable 'img'.
%
% If the stimNNNNImage.mat file already exists, the function looks to see
% if the 'baselineframes' and 'signalframes' used in the computation were
% the same; if so, nothing is done. Otherwise, the new result replaces the old.
%
% Example: analyzeintrinsicstims(DIRNAME,1,[2 3]) will use the first frame
% as the baseline frame and frames 2 and 3 as the signal frames.

d = dir([dirname filesep 'Trig_*frame_0000.tiff']);
if isempty(d),
    d = dir([dirname filesep 'Trig_*frame_0000.lvdata']);
end;

pause(1); % give a little time for any in-progress writing operations to finish

if ~isempty(d),
    d = sort({d.name});
    for i=1:length(d),
        disp(['Working on ' d{i} '...']);
        n = sscanf(d{i},'Trig_%dframe_0000.tiff');
        A = exist([dirname filesep 'stim' sprintf('%0.4d',n) 'Image.mat'])==2;
        if A,
            g = load([dirname filesep 'stim' sprintf('%0.4d',n) 'Image.mat'],...
                'baselineframes','signalframes');
            B = ~(eqlen(baselineframes,g.baselineframes)&eqlen(g.signalframes,signalframes));
        else, B = 1;
        end;
        if B,
            framedata = readintrinsicstim(dirname,n);
            baseline = nanmean(framedata(:,:,baselineframes),3);
            signal = nanmean(framedata(:,:,signalframes),3);
            img = (signal-baseline)./baseline;
            save([dirname filesep 'stim' sprintf('%0.4d',n) 'Image.mat'],...
                'img','baselineframes','signalframes');
        end;
    end;
end;

Now let’s run this function:

analyzeintrinsicstims(['data' filesep '2010-12-16'],1,[2 3]);

The stimuli (numbered 1 to 5) were presented in a random order, which is in the file called stimorder.mat. Let's take a look at the order in which the stimuli were presented:

so = load(['data' filesep '2010-12-16' filesep 'stimorder.mat'])

so.stimorder

The values of the stimuli are in the file stimvalues.mat (0 means horizontal bars, 90 means vertical bars, NaN means the stimulus was a blank, that is, the screen remained gray as a control):

sv = load(['data' filesep '2010-12-16' filesep 'stimvalues.mat']);

sv.stimvalues
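
For example, to list the trigger records on which stimulus 1 was presented (the Trig_ files are numbered from 0, while MATLAB indexes from 1, hence the -1):

find(so.stimorder==1) - 1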

Next, we'll average the individual responses together to create what are called single condition images. These single condition images are the average response to each stimulus condition. The following function reads in the stimulus order and averages the responses together to create a single condition image for each of the stimuli 1 to 5:

createsingleconditions.m

function createsingleconditions(dirname, multiplier, meanfilteropts, medianfilteropts)
% CREATESINGLECONDITIONS - Makes single condition images from intrinsic data
%
% CREATESINGLECONDITIONS(DIRNAME,MULTIPLIER,MEANFILTEROPTS,MEDIANFILTEROPTS)
%
% Creates single condition images from individual intrinsic signal responses.
%
% Assumptions: Assumes that in the directory DIRNAME there are the following files:
%   stimorder.mat     -- should have variable 'stimorder' that has the display order of stimuli
%   stimNNNNImage.mat -- should have variable 'img' that has the individual response
%                        to each stimulus (numbered from 0)
%
% Inputs:
%   DIRNAME          -- the directory name to examine
%   MULTIPLIER       -- multiplier for saving the TIFF image
%   MEANFILTEROPTS   -- the size of an optional mean filter
%   MEDIANFILTEROPTS -- the size of an optional median filter
%
% Files that are written:
%   singleconditionZZZZ.mat     -- the single condition file
%   singleconditionZZZZ.tiff    -- single condition in 16-bit TIFF format
%   singleconditionprogress.mat -- contains information on partial progress

so = load([dirname filesep 'stimorder.mat']);
so = so.stimorder;
stims = unique(so);
sc = {};

progfilename = [dirname filesep 'singleconditionprogress.mat'];
if exist(progfilename)==2,
    prog = load(progfilename,'-mat');
else,
    for i=1:length(stims), prog.existence{i} = []; end;
    prog.meanfilteropts = meanfilteropts;
    prog.medianfilteropts = medianfilteropts;
    prog.multiplier = multiplier;
end;

for i=1:length(stims),
    existence = [];
    inds = find(so==stims(i))-1;
    % first check existence data to see if we need to update
    for j=1:length(inds),
        existence(j) = exist([dirname filesep 'stim' sprintf('%0.4d',inds(j)) 'Image.mat']);
    end;
    if ~eqlen(existence,prog.existence{i})|...
            ~eqlen(medianfilteropts,prog.medianfilteropts)|...
            ~eqlen(meanfilteropts,prog.meanfilteropts),
        stimdata = [];
        for j=1:length(inds),
            if existence(j)==2,
                g = load([dirname filesep 'stim' sprintf('%0.4d',inds(j)) 'Image.mat']);
                %stimdata = cat(3,stimdata,g.img);
                if isempty(stimdata), stimdata = g.img./double(sum(existence>0));
                else, stimdata = stimdata + g.img./double(sum(existence>0));
                end;
            end;
        end;
        % compute mean image
        imgsc = nanmean(stimdata,3);
        % if any filter options are specified, then apply them
        if ~isempty(meanfilteropts),
            imgsc = imgsc-conv2(imgsc,ones(meanfilteropts)/sum(sum(ones(meanfilteropts))),'same');
        end;
        if ~isempty(medianfilteropts),
            imgsc = medfilt2(imgsc,medianfilteropts*[1 1]);
        end;
        save([dirname filesep 'singlecondition' sprintf('%0.4d',stims(i)) '.mat'],'imgsc');
        imwrite(uint16(round(2^15+multiplier*imgsc)),...
            [dirname filesep 'singlecondition' sprintf('%0.4d',stims(i)) '.tiff'],...
            'tif');
        prog.existence{i} = existence;
    end;
end;

existence = prog.existence;
save(progfilename,'existence','medianfilteropts','meanfilteropts','multiplier');

Now let's use it:

createsingleconditions(['data' filesep '2010-12-16'],1000,[],[])
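
If you'd like to confirm that the single condition files were written, you can list them:

dir(['data' filesep '2010-12-16' filesep 'singlecondition*'])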

Now let's take a look at one of the single conditions:

img1 = load(['data' filesep '2010-12-16' filesep 'singlecondition0001.mat']);

imagedisplay(img1.imgsc)

Q2: Again, try looking at changes of ±0.001. Can you see a repeating pattern of black spots? Those are the areas that respond to horizontal bars.

Intrinsic images tend to be noisy; we can take advantage of the fact that we expect nearby pixels to have similar values and "blur" or "smooth" the image with a median filter, which reduces local noise.
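
To build some intuition for what a median filter does, here is a quick demonstration on a synthetic image (not lab data); it uses medfilt2 from the Image Processing Toolbox, the same function createsingleconditions.m calls internally:

img = zeros(100,100); img(30:70,30:70) = 1;   % a bright square on a dark background
noisy = img;
noisy(rand(100,100) < 0.05) = 5;              % corrupt 5% of pixels with "salt" noise
figure;
subplot(1,2,1); imagesc(noisy); axis image; title('Noisy');
subplot(1,2,2); imagesc(medfilt2(noisy,[5 5])); axis image; title('5x5 median filtered');
colormap(gray);

Now let's use the median filter option of createsingleconditions.m and look at some of the resulting images: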

createsingleconditions(['data' filesep '2010-12-16'],1000,[],[5])

% the horizontal bar image

img1 = load(['data' filesep '2010-12-16' filesep 'singlecondition0001.mat']);

imagedisplay(img1.imgsc);

title('Horizontal bar response');

% the vertical bar image

img3 = load(['data' filesep '2010-12-16' filesep 'singlecondition0003.mat']);

imagedisplay(img3.imgsc)

title('Vertical bar response');

img5 = load(['data' filesep '2010-12-16' filesep 'singlecondition0005.mat']);

imagedisplay(img5.imgsc)

title('Blank image response');

Q3: How does the blank image response compare to the responses to vertical and horizontal bars?

False color processing

Now let's use false color to convey a bit more information with these images.

Simple difference images

One simple way to show the responses to horizontal bars and vertical bars on the same image is to compute the difference between the 2 images. Because the dark areas correspond to areas that are active, when we subtract the 2 images the dark regions will correspond to areas that were differentially driven by one stimulus (the one with the positive sign) and white regions will correspond to areas that were differentially driven by the other stimulus (the one with the negative sign).

diff_image = img3.imgsc - img1.imgsc;

imagedisplay(diff_image);

title(['Black = vertical bar response, white = horizontal bar response']);

Q4: Rescale to look at -0.001 to 0.001. Zoom in. Do you see an alternating pattern of areas that respond to vertical and horizontal bars?

Coloring orientation preference

To convey even more information, we can use a color code to indicate the orientation preference that was recorded at each location.

We can estimate the "orientation preference" of each pixel by using the vector method that we learned in Lab 2.3. We sum all of the responses around the complex unit circle:

R_vector = Σ_k R(θ_k)·exp(2i·θ_k)

to give a "prevailing" orientation:

θ_prevailing = angle(R_vector) / 2 (after wrapping angle(R_vector) onto the interval 0 to 2π).

Let's do this for a tuning curve similar to the one in Lab 2.3:

a = 20; % peak responses, in spikes/sec

b = 90; % location

c = 20; % width

d = 0; % offset

e = 2; % amount of noise (not used here; the curve below is noiseless)

angles = 0:10:180

ori_nonoise = a*exp(-((b-angles).^2)/(2*c^2))+d+0*randn(size(angles));

figure;

plot(angles,ori_nonoise,'b-o');

xlabel('Angle');

ylabel('Response (spikes/sec)');

Now let's plot this in vector space:

angles_radians = 2*pi*angles/360;

r = [];

for i=1:length(angles),

r(end+1) = ori_nonoise(i)*exp(2*sqrt(-1)*angles_radians(i));

end;

figure;

plot(r,'bo')

hold on

plot(sum(r),'kx');

ylabel('Imaginary part of response (Hz)');

xlabel('Real part of response (Hz)');

We can find the direction of the vector by using the angle function (which returns the phase angle of a complex number). The number that is returned is between -pi and pi radians (corresponding to -180 and 180 degrees):

vector_angle = angle(sum(r))

To convert this back to orientation, we can transform it to lie between 0 and 2*pi by taking the modulus, using the mod function (see help mod). If you've never used a modulus function before, allow me to introduce you. The modulus mod(x,y) is a form of remainder after the division of x by y. You can use it to wrap around any repeating interval. For example, suppose it's 11 o'clock and you want to know what time on the clock it will be 17 hours from now. You can type

11 + 17

which gives the result 28. But if you ask for mod(11+17, 24), you'll get the answer:

mod(11+17,24)

Let's try a few more examples:

Q5: What are the values below?

mod(24,24)

mod(0,24)

mod(1,24)

mod(25,24)

mod(26,24)

mod(-1,24)

mod(-2,24)

mod(-3,24)

mod(25,12)

Now let's return to our problem at hand. We want to wrap this value, which lies between -pi and pi, onto the interval 0 to 2*pi:

vector_angle_mod2pi = mod(vector_angle,2*pi);

And now, by converting to degrees and dividing by 2 (because we put orientation, which is inherently a quantity between 0 and 180 degrees, onto the unit circle, which goes from 0 to 360 degrees), we can get the orientation:

vector_angle_degrees = (vector_angle_mod2pi * 180 / pi )/2

Q6: Does the vector_angle_degrees match the location provided in parameter b above?

Now we can calculate the orientation preferences for the images. First we'll load in the 2 images we don't have yet:

img2 = load(['data' filesep '2010-12-16' filesep 'singlecondition0002.mat']);

img4 = load(['data' filesep '2010-12-16' filesep 'singlecondition0004.mat']);

angles = [0 45 90 135];

angles_radians = 2*pi*angles/360;

r_ = zeros(size(img1.imgsc));

for i=1:length(angles),

myimg = eval(['img' int2str(i) '.imgsc']);
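
% Note the minus sign in the line below: active areas have negative ∆R/R
% (they darken), so flipping the sign makes stronger responses produce
% larger contributions to the vector sum: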

r_ = r_ -myimg.*exp(2*sqrt(-1)*angles_radians(i));

end;

phases = angle(r_); % get a full image of angles

phases_angle_mod2pi = mod(phases,2*pi);

phases_degrees = (phases_angle_mod2pi * 180/pi)/2;

figure;

image(phases_degrees);

colormap(hsv(180));

colorbar

title('Orientation preferences');

Q7: What do you see?

References

Hillman EMC (2007) Optical brain imaging in vivo: techniques and applications from animal to man, J Biomedical Optics 12:051402.

Labs are copyright 2011-2012, Stephen D. Van Hooser, all rights reserved.