CS333 Lab P1-2:
Pipelined Medical Image Processing
Goals of this lab:
- Learn about digital image processing.
- Experiment with parallel processing for better computational throughput.
- Learn how to control processing pipelines with user feedback.
In this assignment, you will construct a distributed pipeline for
digital filtering and enhancement of medical images. The images will
be nuclear medicine beating heart studies. The images were taken by
injecting a patient with a radioactive substance (Tc-99m) that attaches
to the red blood cells and then placing the patient under a gamma
camera that detects the gamma radiation emitted from the patient. By
collecting the data in time slices corresponding to the patient's
electrocardiogram, a sequence of 32 "frames," each corresponding to
one part of the heartbeat, was produced. Each frame consists of a
64x64 array of pixels with values in the range 0-255. When
shown in succession, the result is a movie of the beating heart that
can be "read" by radiologists to determine the extent of heart damage
after a heart attack, for example. These beating heart movies are
called RVGs, which stands for Radionuclide VentriculoGram.
One problem is that there is a lot of ambient radiation that causes
the images to be rather noisy, and thus difficult to read. A solution
to this problem is to reduce the noise by digitally filtering the images.
Images can be filtered spatially, meaning that each frame is
filtered independently, and they can be filtered temporally,
meaning that each pixel is filtered across a number of frames so that
transitions between frames are smoother. More details on filtering
are in the paper handed out in class.
Filtering images on demand requires substantial processing power. One
way to accomplish this is to harness the power of several workstations
by arranging them in a processing pipeline. In this lab, you
will write modules that perform digital filtering, both spatially and
temporally, and you will organize your modules into a pipeline so that
some of the computation can be done in parallel in order to improve
the overall response time. You will also experiment with other kinds
of image processing operations that the users can control dynamically
in order to adjust the appearance of the images.
Read over the entire assignment before starting.
- Try the sender and counter modules:
We have provided two Playground modules that you will use in your pipeline.
In addition, EUPHORIA
can be used to visualize and control the filtering process.
Try bringing up these modules to display an image. The image files
are in the directory /home/cec/class/cs333/rvg, so you should change
to that directory when you run the sender module. A README file in that directory
explains the images in the various data files.
- An executable module called fsender is available for
serving RVG images from files. The first presentation variable
is the frame that is being output by the sender. The second is
the file name. When this is changed, the sender reads the given
file and sends out the sequence of 32 image frames.
- EUPHORIA can be used to display and control the filtered
heart images. See Section 5.1 of the EUPHORIA Reference manual
for instructions on how to create a movie in EUPHORIA.
- An executable module called fcounter is available for
specifying the playback sequence and rate of play of frames.
It generates an integer value which
is incremented over time. Minimum and maximum value variables are
supplied, allowing you to view only a certain range of frames. When
the counter reaches the maximum, it is reset to the minimum. Another
variable allows you to set the frames per second (i.e. how fast values
are incremented). Connecting this module to EUPHORIA allows you to
control how fast the frames are viewed. You may also connect it to multiple
EUPHORIAs, effectively synchronizing their displays (approximately).
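The counter's wrap-around behavior can be sketched as follows. This is a plain-Python illustration, not fcounter itself; the function and variable names are ours, and we assume the minimum and maximum are inclusive frame indices:

```python
# Sketch of fcounter's counting behavior: the value increments each
# tick and wraps from the maximum back to the minimum. The min/max
# semantics (inclusive bounds) are an assumption.

def next_value(value, lo, hi):
    """Advance the counter one tick, wrapping at the maximum."""
    return lo if value >= hi else value + 1

# Example: view only frames 28..31, starting at 30.
v = 30
seq = []
for _ in range(4):
    seq.append(v)
    v = next_value(v, 28, 31)
# seq cycles through 30, 31, 28, 29
```

The frames-per-second variable would simply control how often next_value is called.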
- Spatial filter module:
Write a Playground module that takes in an image frame (see the
definition of the frame object), filters the frame by applying a
filter mask to the image, and then outputs the frame.
The module should be reactive, so that each frame is filtered as it arrives.
You will use an 11-by-11 filter whose coefficients are
defined by the provided two-dimensional array.
The filter is symmetric, with entry 0,0
corresponding to the pixel being filtered. Because of the symmetry,
the array only has values for one quadrant of the filter.
To determine the new value for each pixel, you can imagine centering
the filter mask over the image at that pixel, multiplying each
coefficient in the filter by the pixel value that is "under" it and
then summing. You should do all the intermediate computation in
a two-dimensional array of doubles, and then normalize the image
to a maximum pixel value of 255 to avoid overflow.
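The computation above can be sketched in Python (the actual module must be a Playground module; the quadrant values below are placeholders for the handout's coefficients, and pixels falling outside the frame are treated as zero, a boundary choice the assignment leaves open):

```python
# Sketch of the spatial filter. The symmetric 11x11 mask is given as a
# 6x6 quadrant (indices 0..5 from the center), so the coefficient for
# offset (dr, dc) is quadrant[abs(dr)][abs(dc)].

N = 64      # frames are 64x64 pixels
HALF = 5    # the 11x11 mask extends 5 pixels from its center

# Hypothetical flat quadrant; substitute the assignment's coefficients.
quadrant = [[1.0 for _ in range(HALF + 1)] for _ in range(HALF + 1)]

def spatial_filter(frame):
    """Convolve one 64x64 frame with the symmetric 11x11 mask,
    accumulating in doubles, then normalize to a max pixel of 255."""
    out = [[0.0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            acc = 0.0
            for dr in range(-HALF, HALF + 1):
                for dc in range(-HALF, HALF + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < N and 0 <= cc < N:
                        # Symmetry: coefficient depends only on |dr|, |dc|.
                        acc += quadrant[abs(dr)][abs(dc)] * frame[rr][cc]
            out[r][c] = acc
    peak = max(max(row) for row in out) or 1.0
    # Rescale so the brightest pixel is 255, avoiding overflow.
    return [[int(v * 255.0 / peak) for v in row] for row in out]
```

Normalizing against the frame's own peak is one reasonable reading of "normalize the image to a maximum pixel value of 255"; check the handout for the intended scheme.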
- Temporal filter module:
Write a module to filter the images temporally using a
symmetric filter of length 7,
as defined by the provided one-dimensional array.
Since the filter is symmetric,
only 4 distinct coefficients are given. For a given pixel, the
calculation is similar to the spatial filter, except that
your filter "mask" is length 7 in the temporal dimension. That
is, to compute each pixel, you'll need the values of the corresponding
pixels for the three frames behind and in front of it, wrapping
around for the first three and last three frames.
Again, to avoid overflow, normalize each frame to a maximum pixel
value of 255.
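The temporal pass, with its wrap-around over the 32-frame cycle, can be sketched the same way. The coefficient values are placeholders; the real ones come from the provided array, with coeff[0] assumed to be the center tap:

```python
# Sketch of the temporal filter. The symmetric length-7 filter is given
# by 4 distinct coefficients, so the weight at frame offset k is
# coeff[abs(k)] for k in -3..3.

FRAMES, N = 32, 64
coeff = [1.0, 1.0, 1.0, 1.0]   # hypothetical placeholder values

def temporal_filter(seq):
    """Filter each pixel across frames f-3..f+3, wrapping around the
    32-frame cycle, then normalize each frame to a max pixel of 255."""
    out = []
    for f in range(FRAMES):
        frame = [[0.0] * N for _ in range(N)]
        for r in range(N):
            for c in range(N):
                acc = 0.0
                for k in range(-3, 4):
                    # Modular indexing gives the wrap-around for the
                    # first three and last three frames.
                    acc += coeff[abs(k)] * seq[(f + k) % FRAMES][r][c]
                frame[r][c] = acc
        peak = max(max(row) for row in frame) or 1.0
        out.append([[int(v * 255.0 / peak) for v in row] for row in frame])
    return out
```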
- Concurrent processing:
The bottleneck in your system is probably the spatial filter, since it
performs many floating point computations per pixel. To improve this, you
should add a branch to the pipeline so that some of the processing
is done concurrently on different processors.
For example, you might have one module filter
the even-numbered frames and another module filter the odd-numbered frames
before passing them on to the temporal filter module. You won't need to
change any of the modules you've already written. Just write a dispatch
module that forwards frames to two different places depending on the
frame number. Be sure to run the spatial filter modules on different
machines. Do you notice a speed-up? In the extreme case, you could
filter each frame on a different machine. Think about how you might
partition the work for temporal filtering on different machines.
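A minimal sketch of the dispatch idea, with in-memory queues standing in for the two downstream spatial-filter modules (in the real pipeline these would be separate Playground modules on different machines):

```python
# Route frames to one of two downstream branches by frame-number
# parity, so even and odd frames are filtered concurrently.

from queue import Queue

even_stage, odd_stage = Queue(), Queue()

def dispatch(frame_number, frame):
    """Forward a frame to the even or odd pipeline branch."""
    target = even_stage if frame_number % 2 == 0 else odd_stage
    target.put((frame_number, frame))

# Feed 8 frames through the dispatcher; each branch receives 4.
for n in range(8):
    dispatch(n, "frame-%d" % n)
```

Note that frame numbers travel with the frames, so the temporal filter downstream can reassemble the sequence in order.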
- Eliminating background pixels:
Write a module that will eliminate background by dropping to 0 all
pixel values below a certain threshold value.
Make the threshold value interactively
controlled by the user (you might use EUPHORIA for this, if you want).
Experiment with putting this module at different stages of the pipeline
(before and after filtering).
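The thresholding stage itself is small. A sketch, assuming the threshold is a user-adjusted value in the pixel range 0..255:

```python
# Background elimination: zero every pixel strictly below the
# threshold and pass the rest through unchanged.

def eliminate_background(frame, threshold):
    """Return a copy of the frame with sub-threshold pixels set to 0."""
    return [[0 if p < threshold else p for p in row] for row in frame]
```

Because filtering redistributes pixel values, running this stage before versus after the filters will keep different sets of pixels, which is exactly the experiment suggested above.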
- Changing the gray scale (optional):
Our eyes actually respond to light in a non-linear way. For example, if you
are in a room lit by one candle and you light another, you'll notice a big
difference. However, if you are in a brightly lit room and you light
a candle, the room won't seem any brighter. Try recalibrating your image
using a logarithmic scale, so that differences in brighter pixels
are "magnified" at the expense of compressing the differences for
smaller pixel values. Let the user control the contrast
interactively. Try this at the end of the pipeline. What happens if you
put it earlier?
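One transfer curve with the behavior described above (differences among bright pixels expanded, differences among dark pixels compressed) can be sketched as follows; the exponential form and the contrast parameter k are our placeholder choices, not prescribed by the assignment:

```python
import math

def contrast_rescale(frame, k):
    """Remap pixel p in 0..255 so that bright-pixel differences are
    magnified and dark-pixel differences compressed. The endpoints
    0 and 255 map to themselves; larger k > 0 means stronger effect."""
    denom = math.exp(k) - 1.0
    return [[int(255.0 * (math.exp(k * p / 255.0) - 1.0) / denom)
             for p in row] for row in frame]
```

Letting the user vary k interactively (through EUPHORIA, say) gives the contrast control the assignment asks for.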
To receive credit for this lab, you should:
- Clean up and print out your code. (Don't turn it in, but
save it for your code/design review.)
- Turn in a
Project Evaluation Form
near the beginning of
class on the day you want to do your demonstration and code/design
review. You should be
prepared to demonstrate your working application, explain your
design and code, and answer questions.