Lab 4.2 Digital image representations of the physical world


In the previous lab, we learned about some standard methods for representing image data digitally. There are index-based images, where each pixel refers to an entry in an associated color look-up table (aka color map), and RGB images, which specify the intensity of the pixel on the red, green, and blue monitor channels.

In this lab, we will explore the relationship between the physical world and the digital image data we work with on the computer.

Electromagnetic radiation and "light" in the physical world

"Light" is a form of electromagnetic radiation that is carried by subatomic particles called photons. Each photon oscillates at a particular wavelength as it travels through space. The term "light" derives from the properties of the human visual system: it denotes the portion of the electromagnetic spectrum that is visible to the human eye, ranging from deep violet (wavelength: 375nm) to deep red (wavelength: 750nm).

[Figure: the electromagnetic spectrum]

What we record depends on illumination, properties of the object, and properties of the camera/detector

In the physical world, objects can (in principle) emit electromagnetic radiation of any wavelength. Of course, they are not limited to the wavelengths that we can see. The intensity of light that we actually record with our eyes or an instrument depends on several factors, such as the illuminating light source, the properties of the object (absorbance/reflectance and/or fluorescence), and properties of our camera or detector (detection spectrum). In order to interpret a digital image of some object in the physical world that is obtained in an experiment, it is important that we have at least a conceptual understanding of these steps.
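The chain described above (illumination, times the object's reflectance, times the detector's sensitivity, summed over wavelength) can be sketched numerically. Here is a minimal sketch in Python/NumPy; the Gaussian curve shapes and variable names are made up for illustration and are not the lab's actual data (those live in the .mat files):

```python
import numpy as np

wl = np.arange(380.0, 751.0)  # wavelengths in nm, 1 nm steps (visible range)

illum = np.exp(-0.5 * ((wl - 520.0) / 120.0) ** 2)   # toy illuminant (sun-like bump)
reflect = ((wl > 440) & (wl < 490)).astype(float)    # toy "blue object" reflectance
sens = np.exp(-0.5 * ((wl - 470.0) / 40.0) ** 2)     # toy detector sensitivity

d_wl = 1.0  # wavelength step in nm
arriving = illum * reflect                 # light reflected toward the detector
response = np.sum(arriving * sens) * d_wl  # detector response (arbitrary units)
```

Each of the three factors can only reduce the recorded intensity at a wavelength; the detector never sees more light than the illuminant provides.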

Let's study an example of a human looking at a blue object in sunlight.

First, download the files that are included with this lab. Put imagedisplay.m and assign.m into your MATLAB folder on your drive. You can put all of the other files in your data folder.


At a simplified level, the sun emits light with the following intensity distribution:

load sun.mat % load the file included with this lab

figure(10); % let's use a specific figure number

plot(sun_wavelength, sun);

title('Sun irradiance');

Q1: What is the peak wavelength? Is this wavelength in the visible spectrum?

Now that you've explored the curve, let's change the axis bounds to highlight the visible region.

axis([380 750 0 1.1]);

A small spot that corresponds to a single pixel on a blue object would exhibit a high degree of reflectance at wavelengths near 470nm, so let's create such a reflectance curve:

myblueobject = zeros(size(sun_wavelength));

myblueindexvalues = find(sun_wavelength>440&sun_wavelength<490);

myblueobject(myblueindexvalues) = 1;

figure(10); % make sure we are on figure 10

plot(sun_wavelength, myblueobject);

axis([380 750 0 1]);

title('Reflectance of blue object');


In order to compute the amount of light at each wavelength that would be reflected and would arrive at a detector such as the eye, we must, for each wavelength, multiply the illuminating light intensity by the reflectance. For example, the intensity of the illumination at 450nm is

[dummy,index450] = min(abs(sun_wavelength-450)); % find the correct wavelength in sun_wavelength

objectintensityat450 = sun(index450) * myblueobject(index450)

In Matlab, we can compute the whole distribution by multiplying element-by-element:

intensityatdetector = sun .* myblueobject;

figure;
plot(sun_wavelength, intensityatdetector);

axis([380 750 0 1]);

title('Radiation arriving at detector');


What responses would this distribution of light generate in a detector, such as the cone photoreceptors of the human eye?

There are three types of cone photoreceptors, colloquially referred to as the "blue", "green", and "red" photoreceptors, although, as we're about to see, each is really activated by a spectrum of light rather than a single wavelength. Vision scientists usually refer to these cones as the S-, M-, and L-cones (short-, middle-, and long-wavelength sensitive). The activation curves of the photoreceptors have the following shapes:

load cones.mat

figure;
plot(sun_wavelength, scone, 'b-');

hold on;

plot(sun_wavelength, mcone, 'g-');
plot(sun_wavelength, lcone, 'r-');

axis([380 750 0 1]);

title(['Response profiles of detectors']);

Now we can calculate the response of each cone type in 2 steps. First, we want to calculate the overlap of the light that is arriving at each detector with that detector's spectral response profile. We do this by multiplying each wavelength of light that is arriving at the detector by the response profile (similar to calculating the reflected light off the object above):

scone_overlap = intensityatdetector .* scone;

mcone_overlap = intensityatdetector .* mcone;

lcone_overlap = intensityatdetector .* lcone;

figure;
plot(sun_wavelength, scone_overlap, 'b-');

hold on;

plot(sun_wavelength, mcone_overlap, 'g-');
plot(sun_wavelength, lcone_overlap, 'r-');

axis([380 750 0 1]);

title(['Spectral overlap of arriving light and detector response profiles']);

In the second step, we sum the overlap to get a value that is proportional to the response of the cone receptor at this pixel of the image. In the case of a cone receptor, the constant of proportionality would have units of mV per unit light, because the response of the actual receptors is expressed as a voltage. In computing the sum of the overlap, we also have to multiply by the wavelength step size (delta_wavelength, in this case 1nm) because, in preparing these examples, I sampled the spectrum in 1nm steps. (If you had the same detector and light source but sampled these curves at a different step size, such as 0.5nm, you'd want to be sure to get the same answer for the response when you sum.)
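The claim that multiplying by the step size makes the answer independent of the sampling is easy to check numerically. A small Python/NumPy sketch (with toy Gaussian curves of my own choosing, not the lab's data) compares 1nm and 0.5nm sampling:

```python
import numpy as np

def detector_response(d_wl):
    """Overlap integral approximated with wavelength step d_wl (in nm)."""
    wl = np.arange(380.0, 750.0 + d_wl / 2, d_wl)
    light = np.exp(-0.5 * ((wl - 470.0) / 30.0) ** 2)  # toy arriving light
    cone = np.exp(-0.5 * ((wl - 445.0) / 25.0) ** 2)   # toy cone sensitivity
    return np.sum(light * cone) * d_wl                 # sum times step size

r_1nm = detector_response(1.0)
r_halfnm = detector_response(0.5)
# Both step sizes approximate the same integral, so the two answers nearly agree.
```

Without the factor of d_wl, halving the step size would roughly double the raw sum, which is why the step size must appear in the calculation.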

scone_response = sum(scone_overlap) * 1; % multiply by delta_wavelength, 1nm

mcone_response = sum(mcone_overlap) * 1;

lcone_response = sum(lcone_overlap) * 1;



figure;
bar([1 2 3],[scone_response mcone_response lcone_response]);

axis([0 4 0 4]);

title(['Response of cone photoreceptors']);


Q2: Copy and paste this figure

To create images that we humans can see, devices such as monitors use pseudo-color

Computer monitors and other devices are not capable of producing an arbitrary light spectrum. Instead, they produce an image that is "good enough" for the human visual system's cone photoreceptors.
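The reason a monitor image can be "good enough" is metamerism: two physically different spectra can produce identical cone responses. Here is a hedged sketch in Python/NumPy, using toy Gaussian cone and phosphor curves (not real cone fundamentals or real phosphor spectra), that solves for the phosphor drive levels reproducing the cone responses to an object's spectrum:

```python
import numpy as np

wl = np.arange(380.0, 751.0)  # 1 nm wavelength grid

def gauss(center, width):
    """A toy Gaussian spectral curve; stands in for measured data."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical cone sensitivities (rows of S) and phosphor spectra (columns of P).
S = np.stack([gauss(445, 25), gauss(535, 35), gauss(565, 35)])          # 3 x N
P = np.stack([gauss(610, 30), gauss(545, 30), gauss(465, 25)], axis=1)  # N x 3 (R,G,B)

obj = gauss(470, 20)   # light arriving from a "blue object"
target = S @ obj       # the three cone responses to the object's spectrum

# Solve the 3x3 system for drive levels w so the monitor evokes the same responses.
w = np.linalg.solve(S @ P, target)
monitor_spectrum = P @ w   # physically different from obj, same cone responses
```

In practice the solved drive levels can fall outside the displayable range (out-of-gamut colors), in which case a real display must clip or approximate.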


Monitors and displays produce color images by using 3 filters (LCD and DLP displays) or by using 3 types of phosphors (CRT displays). The chromatic principles are the same for all of these devices. Let's look at a CRT:

load monitor_phosphors.mat

figure;
plot(sun_wavelength, red_phosphor, 'r-');

hold on;

plot(sun_wavelength, green_phosphor, 'g-');
plot(sun_wavelength, blue_phosphor, 'b-');

axis([380 750 0 1]);

title('Monitor phosphors');

If we wanted to produce a pixel that represented part of the blue object we observed above, we could do that by driving the blue phosphor:

b = 0.83; r = 0; g = 0;

mypixel_spectrum = r * red_phosphor + g * green_phosphor + b * blue_phosphor;

figure;
plot(sun_wavelength, mypixel_spectrum);

axis([380 750 0 1]);

title('Spectrum generated at this pixel');


Once again, we can compute the impact of this spectrum on our cone photoreceptors:

scone_overlap = mypixel_spectrum .* scone;

mcone_overlap = mypixel_spectrum .* mcone;

lcone_overlap = mypixel_spectrum .* lcone;

figure;
plot(sun_wavelength, scone_overlap, 'b-');

hold on;

plot(sun_wavelength, mcone_overlap, 'g-');
plot(sun_wavelength, lcone_overlap, 'r-');

axis([380 750 0 1]);

title(['Spectral overlap of arriving light and detector response profiles']);

And we can compute the response of the photoreceptors:

scone_response = sum(scone_overlap) * 1; % again, multiply by delta_wavelength, 1nm

mcone_response = sum(mcone_overlap) * 1;

lcone_response = sum(lcone_overlap) * 1;

figure;
bar([1 2 3],[scone_response mcone_response lcone_response]);

axis([0 4 0 4]);

title(['Response of cone photoreceptors']);


Q3: Compare the spectrum generated by the monitor for this pixel with the spectrum from the physical object. Are they the same?

Q4: Compare the cone photoreceptor responses to the physical object and the monitor pixel. Is there any difference? Would these colors appear the same to a human observer?

Q5: Other mammals have slightly different cone photoreceptors that prefer different wavelengths. Would your TV look right (chromatically) to your dog?

In science, we can use cameras and other devices to capture radiation that is outside the visible range, and pseudo-color the images so that we can observe those patterns.

Q6: As an example, take a minute to check out the infrared images at:

Physical devices have limits on the intensity values they can record

In the example above, we learned that the response of a detector can be determined by summing the overlap of the radiation arriving at the detector with the detector's spectral response function. In principle, if a brighter stimulus is presented to the detector, the detector's response should increase. However, for actual detectors, this process has limits. Most detectors that produce a digital output have a resolution and range:

  • 8 bit detectors can differentiate 256 levels of intensity in any given acquisition period

  • 10 bit detectors can differentiate 1024 levels

  • 12 bit detectors can differentiate 4096 levels

  • 16 bit detectors can differentiate 65536 levels

Typically, these values are read out by the computer in the form of an integer that ranges between 0 and N-1, where N is the number of levels the detector can distinguish.
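The readout described above amounts to a simple quantization rule. The following Python sketch (the function name and the full-scale convention are my own illustrative choices, not part of any camera's API) maps a physical intensity onto the 0 to N-1 integer codes of a 12-bit detector:

```python
import numpy as np

bits = 12
levels = 2 ** bits        # 4096 distinguishable levels for a 12-bit detector
full_scale = 1.0          # hypothetical light level that just saturates the detector

def digitize(intensity):
    """Map a physical intensity to an integer code 0..levels-1, clipping at the ends."""
    code = np.floor(intensity / full_scale * levels)
    return int(np.clip(code, 0, levels - 1))

digitize(0.5)   # a mid-range input -> 2048
digitize(2.0)   # brighter than full scale -> 4095 (the detector saturates)
```

Note that every input at or above full scale collapses to the same code, 4095, which is exactly the saturation problem discussed below.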

Let's examine an image of the brain surface from the 12-bit camera in my lab. Here, we have illuminated the surface with green light (about 532nm) to highlight the blood vessels, as blood vessels readily absorb green light (and appear darker than the surrounding tissue, which reflects more green light than blood vessels).

[Figure: Oxygenation-dependent absorbance of hemoglobin in brain tissue; scattering of light in brain tissue is also shown. From Hillman 2007.]

im = imread('brainimage.tiff');

figure;
colormap(gray(256));
imagesc(im, [0 4095]); % map the 12-bit range onto the 256 gray levels

axis square off; % force the axes to be square and disappear

Let's see what type of image it is:

size(im),

So it is a 2-dimensional image; because we have acquired this image with a CCD camera, the value of each pixel has a physical meaning – it indicates how much light was falling on the detector (and overlapped the detector's response spectrum) at that pixel location.

Intensity histograms

We can examine the distribution of intensity values that are contained in this image by performing a histogram.

bins = -0.5:1:4095.5;

N = histc(im(:),bins); % the (:) passes the NxM matrix as a single N*M x 1 column vector

figure;
bar(bins,N);

title('Intensity distribution');

Q7: Look at the histogram; what is the range of data present in the image? Are there any points at the maximum detectable intensity?

Detecting saturation

When a pixel has the highest value that the device can detect, such as 4095 in a 12-bit image, we say that the pixel is saturated. This means we know that the pixel has an intensity value of "4095 or greater", but we are uncertain of the precise value. For example, if the light impinging on the detector would nominally give an intensity value of 8000, our 12-bit system would dutifully report a value of 4095 (the highest value it can provide as output).

Saturation is a serious issue to consider when collecting and analyzing imaging data. If we want to observe a change in fluorescence or reflectance with some manipulation, then we need to make sure our signal has room to go both up and down.
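One practical check is simply to count pixels at, or near, the ceiling. A minimal Python/NumPy sketch on a synthetic 12-bit image (the 10% "near-saturation" margin here is an arbitrary choice for illustration; pick a margin that matches the signal change you expect):

```python
import numpy as np

# Toy 12-bit image: mostly mid-range values, plus a small saturated bright spot.
rng = np.random.default_rng(0)
im = rng.integers(500, 3000, size=(64, 64))
im[0:2, 0:2] = 4095                      # simulate 4 saturated pixels

saturated = np.count_nonzero(im == 4095)        # pixels at the ceiling
near_sat = np.count_nonzero(im >= 4095 * 0.9)   # pixels within 10% of the ceiling
frac = saturated / im.size                      # fraction of the image saturated
```

Running a count like this during acquisition lets you lower the illumination or exposure before any data are lost.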

We can examine saturation by altering the color map.


map = colormap; % get the current color map, gray(256)

map(256,:) = [ 1 0 0 ]; % set the peak colormap value to red

map(240:255,:) = repmat([1 1 0 ],255-240+1,1); % set values near peak to yellow

colormap(map); % set the color map

So image values above 4095 * 240/256 will be drawn with yellow (those values close to saturation), and 4095 will be drawn red (values that are already saturated). Note that the definition of what is "close" to saturation will vary from study to study (if you expect to see a 10% change in signal, then you need to leave at least 10% of room for the signal to go up).

Some detectors also allow one to "subtract" an offset, effectively changing the definition of 0. Confocal microscope photomultiplier tubes are an example of a detector with an offset. If an offset was used for an image, then one also wants to note points that are 0; we really don't know if these intensity values are 0 or less. We can do that here:


map = colormap;

map(1,:) = [ 0 0 1]; % make the lowest value blue

colormap(map); % set the map

Image math

You might say: it is sort of confusing to work with 256 levels of output when my image has 4096 levels. Is there a better way?

On some computing platforms (Mac OS X, Linux), Matlab is capable of using color tables with 4096 entries. But when Matlab runs on some platforms (I'm looking at you, Windows XP, Vista, and 7), it doesn't take advantage of color tables larger than 256 entries. I like to write code that works on any platform, so I stick with 256 color table entries when I work with intensity images in Matlab.

To convert images to the range 0 to 255, we can just divide the image by its maximum value and multiply by 255. But there's an important data type trick. Recall that most images are specified as integers; when you divide data of an integer type (specifically, uint8 for 8-bit images and uint16 for 10-, 12-, or 16-bit images) by another number, the result is again an integer, with no "remainder" allowed:

a = uint16([1024 512 100]) % now we have 'a' defined as an integer

a/max(a), % look at the result!

255 * a/max(a),

What we really want is to be able to have 'real' numbers as a result. In Matlab, we do this by converting numbers to type double (the usual default type for variables):

255 * double(a)/max(double(a)), % now we get what we want
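The same pitfall appears in other languages. A Python/NumPy sketch of the idea (with one caveat: NumPy's integer division floors, while MATLAB's integer arithmetic rounds to the nearest integer, but both discard the fractional part we actually want):

```python
import numpy as np

a = np.array([1024, 512, 100], dtype=np.uint16)

int_result = a // a.max()                        # integer division: fractions are lost
float_result = 255 * a.astype(float) / a.max()   # convert to float first: fractions kept
```

Here int_result is [1, 0, 0], while float_result is [255, 127.5, ~24.9], which is the scaling we were after.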

Let's apply this to our image:


im2 = double(im);

im2 = 255*im2/max(im2(:));

figure;
colormap(gray(256));
image(im2); % pixel values 0-255 are used (after rounding) as indices into the 256-entry gray map
axis square off;

Why doesn't Matlab just always use doubles for images by default? The reason is memory. A 16-bit integer (65536 levels) consumes, well, 16 bits, or 2 bytes, of memory. A 512 x 512 image represented in 16 bit integers consumes 512 kB of memory. Each 'double' requires 64 bits of information (or 8 bytes), so that same image would consume 2 MB of memory if it were represented in doubles. But we're performing data analysis here, so we can afford to spend some memory to work with these images mathematically.
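The memory arithmetic above is easy to verify:

```python
pixels = 512 * 512                   # number of pixels in the image

uint16_bytes = pixels * 2            # 16 bits = 2 bytes per pixel
double_bytes = pixels * 8            # 64 bits = 8 bytes per pixel

kib = uint16_bytes // 1024           # kilobytes for the 16-bit version
mib = double_bytes // (1024 * 1024)  # megabytes for the double version
```

This gives 512 kB for the 16-bit image and 2 MB for the double version, matching the figures in the text.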

Displays and the human visual system have limits on the resolution that can be perceived: adjusting brightness, contrast, rescaling

Although many scientists use high resolution image acquisition systems capable of 12 or more bits of resolution (4096 or more levels of gray), most computer displays are only capable of showing 256 levels (8 bits). Further, careful psychophysical experiments show that the human visual system itself can only distinguish about 1024 levels of gray (10 bits). So sometimes we want to adjust the brightness and contrast of the image so that the most important information in the image is displayed in our 8 available bits (256 levels) of monitor resolution.

It's likely that you already have experience with this in the form of adjusting brightness and contrast of images. Adjusting brightness shifts the intensity distribution up or down, and adjusting contrast stretches or compresses the intensity distribution about the middle level (intensity 127.5 in an image with values from 0 to 255).

We won't write code to adjust brightness and contrast here, but for your information, this is how the GIMP (GNU Image Manipulation Program) performs brightness and contrast adjustments. You can also look at the examples on the Wikipedia page Image Editing, where I obtained this code. In the GIMP, pixel values vary between 0 and 1, and the brightness and contrast variables vary between -1 and 1:

if (brightness < 0.0),
    newpixelvalue = pixelvalue * (1.0 + brightness);
else,
    newpixelvalue = pixelvalue + ((1 - pixelvalue) * brightness);
end;

newpixelvalue = (newpixelvalue - 0.5) * (tan ((contrast + 1) * pi/4) ) + 0.5;
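For readers who want to experiment outside of the GIMP, the same adjustment can be written as a small Python function (the function name and defaults are my own; as above, the pixel value is in [0,1] and brightness and contrast are in [-1,1]):

```python
import math

def adjust(pixel, brightness=0.0, contrast=0.0):
    """GIMP-style adjustment: pixel in [0,1]; brightness, contrast in [-1,1]."""
    if brightness < 0.0:
        value = pixel * (1.0 + brightness)
    else:
        value = pixel + (1.0 - pixel) * brightness
    # Contrast pivots the values around the middle gray level, 0.5:
    return (value - 0.5) * math.tan((contrast + 1.0) * math.pi / 4.0) + 0.5
```

With zero brightness and contrast the pixel is unchanged; raising contrast pushes values away from middle gray (the GIMP then clips the result back into [0,1]).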

Down the road, if you experiment with image manipulation in Photoshop or GraphicConverter, try out the "Levels" function or "Auto Levels" function. This allows you to have finer control over the intensity distribution.


One of the simplest and most powerful manipulations is simply to scale the pixels from one range to another. I have written a function rescale for this purpose, and it is one of the functions I use the most in my own work.

Q8: Create a new .m file called rescale.m (put it in a new package called +dasw/+math) and type the code for this function, including the help:

function newvals = rescale(vals, int1, int2, noclip)

% RESCALE - Rescale a quantity to a new interval


% NEWVALS = dasw.math.rescale(VALS, INT1, INT2)


% Takes values in an interval INT1 = [a b] and scales

% them so they are now in an interval [c d]. Any values

% less than a are set to c, and any values greater than b

% are set to d.

% NEWVALS = dasw.math.rescale(VALS, INT1, INT2, 'noclip')

% will do the same as above but will not clip values

% above b or below a.


newvals = (int2(1)+((vals-int1(1))./diff(int1))*diff(int2));

if nargin<4, % perform clipping unless the user asks us not to
    newvals(find(newvals>int2(2))) = int2(2);
    newvals(find(newvals<int2(1))) = int2(1);
end;
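For comparison, the same rescaling logic fits in a few lines of Python/NumPy (a sketch of my own; the keyword argument `clip` plays the role of the 'noclip' option, and int2 is assumed to be in ascending order):

```python
import numpy as np

def rescale(vals, int1, int2, clip=True):
    """Map values linearly from interval int1 = [a, b] onto int2 = [c, d].

    With clip=True (the default), results are clipped into [c, d]."""
    a, b = int1
    c, d = int2
    newvals = c + (np.asarray(vals, dtype=float) - a) / (b - a) * (d - c)
    if clip:
        newvals = np.clip(newvals, c, d)
    return newvals

rescale([0, 512, 1023, 2000], [0, 1023], [0, 255])  # the value 2000 is clipped to 255
```

Passing clip=False reproduces the 'noclip' behavior, letting out-of-interval values scale past the end points.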


Now let's apply rescaling to an image.

cell_image = double(imread('cell_image.tiff'));




% rescale to fit into the range

mn = min(cell_image(:));

mx = max(cell_image(:));

cell_image_whole_range = dasw.math.rescale(cell_image, [mn mx],[0 255]);


bins = -0.5+[0:256];

N2 = histc(cell_image_whole_range(:),bins);

figure;
bar(bins,N2);

title('Intensity distribution of cell\_image');


cell_image_focused = dasw.math.rescale(cell_image,[0 1000],[0 255]);


N3 = histc(cell_image_focused(:),bins);

figure;
bar(bins,N3);

title('Intensity distribution of focused image');


Hillman EMC (2007) Optical brain imaging in vivo: techniques and applications from animal to man, J Biomedical Optics 12:051402.

Labs are copyright 2011-2012, Stephen D. Van Hooser, all rights reserved.