In our previous blog post we discussed resonant and polygonal mirror scanning systems. These systems are designed for high-speed imaging and achieve much shorter pixel dwell times than conventional galvanometer mirrors. While short pixel dwell times increase imaging speed, they also reduce the amount of light you can detect at each pixel. Further, images acquired at low light levels often exhibit “salt and pepper” fluctuations in the pixel intensity readings. Figure 1 illustrates a representative example where these fluctuations are readily visible.
The intensity fluctuations seen in Figure 1 arise from random fluctuations in the light signal and the detector electronics, as well as other factors that create variations and/or distortions in the signal. We call the intensity fluctuations “noise.” In the next series of posts, we will delve into the significance of noise and discuss how you can minimize or even leverage these fluctuations for imaging.
Noise comes from many sources, and these different noise sources combine to create the noise observed when acquiring a signal such as an image. Further, the noise sources can differ widely from system to system. We will cover noise sources in more detail in future posts, yet for simplicity here we will refer to them collectively as “image noise.”
When zooming in on the previous image, as shown in Figure 2, you can just make out the pollen grain on the left edge of the frame, while the rest of the frame has a “salt-and-pepper” appearance of speckled pixels. Notice that the background intensities shown in Figure 2 are highly variable, with some pixels black, some white, and some various shades of gray. If you were to take this image again with the same acquisition parameters, the intensities would not be identical, because the noise in the multiphoton microscope used to acquire the image is random.
When recording a signal, noise will always be present, even when imaging a completely black field. In an ideal noise-free environment (which never exists in the real world), you would see a completely black image, with every single pixel reporting an intensity of exactly zero. In reality, noise contributes randomly to the signal recorded at each pixel, sometimes increasing it, sometimes decreasing it, and sometimes leaving it unchanged.
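To build intuition, here is a minimal simulation of this effect, assuming simple additive Gaussian detector noise. The noise level and image size are illustrative choices, not properties of any particular microscope:

```python
import numpy as np

# A minimal sketch: simulate a "black field" with additive Gaussian
# detector noise. Sigma and the image size are illustrative assumptions.
rng = np.random.default_rng(seed=0)
ideal = np.zeros((256, 256))                   # noise-free black field: all zeros
noisy = ideal + rng.normal(0, 5, ideal.shape)  # each pixel shifts randomly

print(noisy.min() < 0 < noisy.max())  # individual pixels land above AND below zero
print(abs(noisy.mean()) < 1)          # but the image mean stays near zero
```

Because the fluctuations are random and symmetric in this simple model, they largely cancel out on average, which is exactly the property that averaging exploits.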
When imaging with standard confocal and multiphoton microscopes, we often use photomultiplier tube (PMT) detectors that integrate (sum) the current triggered by incoming photons within a given time. For these systems, we can improve the quality of our resulting images by using a technique known as averaging. By averaging successive lines or frames, we can reduce the contribution of image noise.
Averaging improves signal quality by smoothing out the fluctuations associated with random noise. Figure 3 shows an image of pollen grains taken with 0, 2, 4, 8, 16 and 32 frame averages. As the number of averages increases, the salt-and-pepper noise in the background decreases and the pollen grains become more clearly defined. Averaging can be done after acquisition through image processing, or automatically by the acquisition software (in which case the resulting data set contains one final averaged image).
FIGURE 3 – CREDIT TO Charles Gora at CERVO, Martin Lévesque lab, Université Laval
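The effect of frame averaging can be sketched numerically. Assuming additive Gaussian noise (an illustrative model; real detector noise has several components), the residual noise falls roughly as one over the square root of the number of averaged frames:

```python
import numpy as np

# A minimal sketch of frame averaging under additive Gaussian noise.
# The image, intensity and noise level are illustrative assumptions.
rng = np.random.default_rng(seed=1)
true_image = np.full((128, 128), 100.0)  # idealized sample at intensity 100

def acquire(n_frames):
    """Simulate n_frames noisy acquisitions and return their average."""
    frames = true_image + rng.normal(0, 20, (n_frames, *true_image.shape))
    return frames.mean(axis=0)

for n in (1, 2, 4, 8, 16, 32):
    residual = acquire(n) - true_image
    print(f"{n:2d} frames: residual noise std = {residual.std():.2f}")
# The residual std shrinks roughly as 20 / sqrt(n)
```

This 1/√n behavior is why going from 1 to 4 averages is much more noticeable than going from 16 to 32: each doubling buys a smaller improvement.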
While averaging multiple frames may seem time consuming, a fast acquisition system means that averaging can be carried out relatively quickly. Additionally, thanks to the very short pixel dwell time, each point on the sample is only briefly illuminated, which helps reduce photobleaching during averaging by limiting the sample’s exposure to the laser.
The part of an image that contains the sample is known as the “signal,” while the part of the image that does not contain any sample is called the “background.” Notice that averaging smooths out the background noise fluctuations and offers a similar improvement in the sample itself (in our case, the pollen grains). By eye, the image looks better, yet it is important to consider ways of quantifying this improvement. A standard and accessible method is to compare the signal intensities to the background intensities. With such an approach, we can systematically evaluate how different acquisition settings, such as laser power, imaging speed and averaging, affect image quality. Optimizing image acquisition then becomes a systematic process rather than guesswork. We will cover how to carry out a signal-to-background measurement in our next post.