Analyzing FlyCapture Images obtained from Flea Digital Cameras

Flea2 Camera Physical Layout

The photograph shows the physical setup for grabbing images from a Flea2 camera (by Point Grey Research) mounted above a Xaar inkjet printer. This prototype is used to obtain images of microarray spots printed onto glass sample slides, in “Format7” (partial image) mode, as the printhead moves across trays containing 25 microarray slides.

Using the FlyCapture SDK

The FlyCapture SDK provides methods for acting upon triggers from external pieces of hardware (in this case Xilinx FPGA boards) and retrieving the image buffer when these prompts are received. What is more, it is possible to capture grayscale images directly from the camera, eliminating the need for colour-to-grayscale conversions in software.
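
As a rough illustration, the sketch below (error handling mostly omitted; the camera index and the GPIO trigger source are assumptions for this rig, not values taken from the real setup) connects to the first camera found, enables external triggering and then blocks in RetrieveBuffer() until the trigger line fires:

#include "FlyCapture2.h"

using namespace FlyCapture2;

int main()
{
    // Connect to the first camera on the bus (index 0 assumed)
    BusManager busMgr;
    PGRGuid guid;
    busMgr.GetCameraFromIndex( 0, &guid );

    Camera cam;
    cam.Connect( &guid );

    // Enable external (hardware) triggering
    TriggerMode trig;
    cam.GetTriggerMode( &trig );
    trig.onOff  = true;   // turn external triggering on
    trig.mode   = 0;      // standard trigger mode
    trig.source = 0;      // GPIO pin wired to the FPGA trigger output (assumed)
    cam.SetTriggerMode( &trig );

    cam.StartCapture();

    // Blocks until the FPGA raises the trigger line and a frame arrives
    Image raw;
    Error err = cam.RetrieveBuffer( &raw );
    if ( err.GetType() == PGRERROR_OK )
    {
        // Convert to 8-bit grayscale if the camera is not already in a MONO format
        Image mono;
        raw.Convert( PIXEL_FORMAT_MONO8, &mono );
    }

    cam.StopCapture();
    cam.Disconnect();
    return 0;
}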

Obtaining acceptable camera settings interactively

Assuming you have installed the full FlyCapture2 SDK, from the Start button select All Programs -> Point Grey Research -> FlyCapture2 -> Examples -> Precompiled Examples -> FlyCap2MFC.exe. The interface will display a list of all connected cameras:

Click Refresh, or double-click the highlighted camera, to invoke the screen capture interface, which will show the images it receives for the current camera setup. In my setup these look like this:

On this particular setup, clearer images of slide spots could be obtained by making small adjustments to the Flea2 camera distance and focus:

Obtaining real-time images in Format7 mode

The previous screenshots show regions of sample spots on an already-printed slide. We are interested in obtaining images of groups of 12 spots at the moment those 12-spot groups are printed to the slides, as the robot moves the Xaar printhead from left to right at a constant velocity. In other words, we want to capture much narrower images on the fly, not after all the spots have been printed.

C++ software is used to activate the printing of spots at precise moments using FPGA commands, which are triggered according to the position intervals obtained from Renishaw linear encoders. The C++ software also controls the Mitsubishi PLC to start the stepper motor movements that move the printhead from left to right across the slide tray. At the same time, a separate thread is launched to start grabbing images of the printed spots the moment the camera receives the external trigger inputs from the FPGA, as sketched below.
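
As a hedged sketch of how that image-grabbing thread might look (the function and variable names here are my own inventions, and error handling is reduced to the bare minimum), each RetrieveBuffer() call simply blocks until the FPGA fires the camera's trigger line:

#include <process.h>      // _beginthreadex
#include "FlyCapture2.h"

using namespace FlyCapture2;

static volatile bool g_keepGrabbing = true;   // cleared by the main thread to stop

// Runs on its own thread: each RetrieveBuffer() call blocks until the FPGA
// raises the external trigger line and the camera delivers a frame.
unsigned __stdcall GrabLoop( void* arg )
{
    Camera* cam = static_cast<Camera*>( arg );
    Image raw, mono;

    while ( g_keepGrabbing )
    {
        Error err = cam->RetrieveBuffer( &raw );
        if ( err.GetType() != PGRERROR_OK )
        {
            continue;   // e.g. PGRERROR_TIMEOUT if no trigger arrived in time
        }
        raw.Convert( PIXEL_FORMAT_MONO8, &mono );
        // ... hand mono.GetData() / GetRows() / GetCols() on to the spot analysis
    }
    return 0;
}

// Launched from the main (printing) thread once the camera is capturing:
// _beginthreadex( NULL, 0, GrabLoop, &cam, 0, NULL );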

As an example of its potential usage, by capturing and analysing these spot images in real time we can detect potential hardware problems with the printhead, such as missing or mis-aligned spots caused by misfiring inkjet nozzles, and thus improve existing QC processes.

Further analysis of FlyCapture images

By adjusting the Format7 settings to get the image dimensions and pixel format you want, plus including any other offsets/magic numbers that are needed, it is possible to grab just the section of the image we need, such as letterbox areas containing the regions of 12 spots as they are printed.
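
A minimal sketch of such a Format7 configuration is shown below; the mode, offsets and dimensions are placeholder values for illustration only, and would need to respect the step sizes the camera reports via GetFormat7Info():

#include "FlyCapture2.h"

using namespace FlyCapture2;

// Restrict capture to a narrow letterbox strip containing the row of spots.
// All numbers below are placeholders, not the values used on the real rig.
void ConfigureLetterbox( Camera& cam )
{
    Format7ImageSettings fmt7;
    fmt7.mode        = MODE_0;
    fmt7.offsetX     = 0;
    fmt7.offsetY     = 200;                 // assumed vertical offset of the spot row
    fmt7.width       = 1024;                // assumed full sensor width
    fmt7.height      = 64;                  // assumed height of the letterbox strip
    fmt7.pixelFormat = PIXEL_FORMAT_MONO8;  // grab grayscale directly

    bool valid = false;
    Format7PacketInfo packetInfo;
    cam.ValidateFormat7Settings( &fmt7, &valid, &packetInfo );
    if ( valid )
    {
        cam.SetFormat7Configuration( &fmt7, packetInfo.recommendedBytesPerPacket );
    }
}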

Using open source tools such as OpenCV and cvBlobsLib enables the software to “see” images as they are retrieved. After converting the grayscale image into binary (black and white), it should then be possible to obtain a list of strongly connected components (spots) using APIs such as CBlobResult or cvFindContours, giving you information such as the number of spots and spot dimensions like bounding rectangles, centre coordinates and areas (a minimal cvFindContours sketch is given after the thresholding discussion below). See this posting for more specifics on using the FlyCapture2 SDK to grab images in Format7 mode. Grayscale image obtained according to the Format7 settings:

Notice some shadowing due to camera angle and less than ideal lighting conditions. OpenCV is used to threshold the grayscale image:

// Threshold to convert image into binary (B&W)
cvThreshold( img,                 // source image
             img_bw,              // destination image
             150,                 // threshold value
             255,                 // max. value
             CV_THRESH_BINARY );  // threshold type

So that it becomes:

Not very good: it picks up far too many artifacts and does not sufficiently exclude the gray values we do not want included as connected components. After experimenting with the threshold value, reducing it from 150 to 100 greatly improves the binary image obtained, so the bitmap comes out like this:


Much better. And it correctly detects the 12 blobs and their bounding rectangles, ellipse coordinates and pixel areas. There is also cvAdaptiveThreshold, though this is likewise subject to appropriate settings of the adaptive_method, threshold_type, block_size, etc. parameters. As an example usage of adaptive thresholding:

cvAdaptiveThreshold( img,               // source image
                     img_bw,            // destination image
                     255,               // max. value
                     CV_ADAPTIVE_THRESH_MEAN_C, // adaptive method
                     CV_THRESH_BINARY,          // threshold type
                     7,                         // block size
                     7 );                       // param1: constant subtracted from the mean

Gives the following black and white image:

Better than the original poor result, but not as clean as ordinary thresholding, and the shadowing around the genuine spots also gets picked up and interpreted as additional connected components, meaning more than the intended 12 spots are detected.

No doubt this could improve with further experimentation, but ordinary thresholding seems much simpler at the moment, and is the method I would stick with for the time being, or until the lighting setup changes.
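
For reference, here is the kind of blob-listing step mentioned earlier, sketched with the OpenCV 1.x C API (cvFindContours; cvBlobsLib's CBlobResult yields similar information). It assumes img_bw is the thresholded binary image from above, and the minimum-area filter value is an arbitrary placeholder to be tuned for your images:

#include <opencv/cv.h>
#include <math.h>
#include <stdio.h>

// Lists connected components (spots) in the thresholded binary image
void ListSpots( IplImage* img_bw )
{
    CvMemStorage* storage  = cvCreateMemStorage( 0 );
    CvSeq*        contours = NULL;

    // cvFindContours modifies its input, so work on a copy
    IplImage* tmp = cvCloneImage( img_bw );
    cvFindContours( tmp, storage, &contours, sizeof(CvContour),
                    CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0) );

    int spotCount = 0;
    for ( CvSeq* c = contours; c != NULL; c = c->h_next )
    {
        double area = fabs( cvContourArea( c ) );
        if ( area < 20.0 )          // assumed minimum spot area; tune as needed
            continue;

        CvRect box = cvBoundingRect( c, 0 );
        printf( "spot %d: x=%d y=%d w=%d h=%d area=%.0f\n",
                ++spotCount, box.x, box.y, box.width, box.height, area );
    }

    cvReleaseImage( &tmp );
    cvReleaseMemStorage( &storage );
}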

Potential problems when using FlyCapture2 inappropriately

If you get an error message similar to the following…:

Then make sure you do not already have an instance of FlyCap2MFC running in the background, such as when running the same version in Visual Studio in addition to the raw executable.

If the software you are running to grab images seems to hang or takes much longer than usual, then it may well be caused by having attempted to run more than one instance of FlyCapture2-related software, be it your own creation or one of the tools provided by Point Grey Research. This problem causes the RetrieveBuffer() function to return a PGRERROR_TIMEOUT error.
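
A minimal check for that condition, assuming a connected FlyCapture2 Camera object named cam, might look like this:

Image rawImage;
Error err = cam.RetrieveBuffer( &rawImage );
if ( err.GetType() == PGRERROR_TIMEOUT )
{
    // Most likely another FlyCapture2 process is holding the camera, or no
    // trigger arrived within the grab timeout - report it and stop rather
    // than retrying forever.
}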

This type of problem usually causes my XP machine to hang in such a way that a forcible power down and restart is necessary. Obviously the solution is to ensure that only one instance of FlyCapture2 software is run at a time.


Other posts related to image detection

Tracking Coloured Objects in Video using OpenCV
Displaying AVI Video using OpenCV
Integrating the FlyCapture SDK for use with OpenCV
OpenCV Detection of Dark Objects Against Light Backgrounds
Getting Started with OpenCV in Visual Studio
Object Detection Using the OpenCV / cvBlobsLib Libraries
