The meaning and relevance of bit depth is a common point of confusion for life scientists who use microscopes. Typically, when we mention bit depth in microscopy, we are referring to gradations in the intensity readings from a detector. Let’s use a photomultiplier tube (PMT) on a laser scanning confocal microscope as an example. As the laser beam is scanned point by point across the sample, the PMT converts the detected photons to an electrical signal. This signal may be viewed as an analog signal on an old oscilloscope, appearing as a voltage which changes as the confocal laser sweeps over the sample.
But now that the world has gone digital, the electronic signal is sampled instead, much as we sample music or old photographs when we digitize them. In microscopy, the signal is converted from a physical quantity – a current or voltage from the detector – into a series of numbers, generally non-negative integers. This is known as analog-to-digital (A/D) conversion.
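The idea of A/D conversion can be sketched in a few lines of Python. This is a minimal illustration, not how any particular microscope does it: the 0–1 V range, the voltages, and the `quantize` helper are all invented for the example.

```python
# A minimal sketch of analog-to-digital conversion: a continuous detector
# voltage is sampled and quantized to non-negative integers. The voltages
# and the 0-1 V full-scale range are made up for illustration.

def quantize(voltage, v_max=1.0, bit_depth=8):
    """Map a voltage in [0, v_max] to an integer in [0, 2**bit_depth - 1]."""
    levels = 2 ** bit_depth
    # Scale, truncate to an integer, and clamp to the valid reading range.
    reading = int(voltage / v_max * levels)
    return min(max(reading, 0), levels - 1)

# Simulated analog readings (volts) as the laser sweeps across the sample:
analog_signal = [0.02, 0.35, 0.80, 0.99, 0.15]
digital_signal = [quantize(v) for v in analog_signal]
print(digital_signal)  # five integers between 0 and 255
```

Each digital value stands in for the intensity the detector read at one point in the scan, which is exactly what ends up stored in a pixel.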
A PMT is a point detector rather than an array detector, so the readings are acquired sequentially. In systems equipped with cameras, the signal is read from each photosite (the physical sampling element in the camera array). These readings are then displayed as the digital image that you can view and manipulate. The image is composed of picture elements, known as pixels, and these can be inspected individually if you zoom in on the image far enough.
Each detected photon generates electrons in the detector, and the greater the number of electrons, the higher the reading. By convention, low readings in microscopy reflect low signal and high readings correspond to bright areas in the original image.
There is another consideration: once the minimum and maximum are set, it's important to consider the gradations between those two values as the image goes from dark to bright. This is known as the bit depth and, roughly, it corresponds to the number of "shades of grey" that can be differentiated in the signal. It can be set by the detector or by how you choose to display and/or work with your data.
You can imagine this as a graduated cylinder with divisions every 5 mL rather than every 1 mL. For the cylinder on the left, you can only report volumes in increments of 5 mL, while on the right you can use increments of 1 mL. Which one would you rather use for your experiments?
Although the two cylinders have different gradations, they still hold the same volume. This extends to detectors too: two detectors can have the same maximum reading but different bit depths. In the same way, each PMT or photosite has a characteristic holding capacity, but the gradations can change depending on how you take the readings or choose to save your image.
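The cylinder analogy can be put into numbers. In this sketch the 10,000-electron full-well capacity is an invented figure, chosen only to show that a fixed capacity divided into more levels gives finer gradations:

```python
# A sketch of the graduated-cylinder analogy: detectors with the same
# full-scale capacity but different bit depths report the same signal
# with different step sizes. The 10,000-electron capacity is illustrative.

FULL_WELL = 10_000  # maximum electrons a photosite can hold (invented)

def step_size(bit_depth):
    """Electrons represented by one grey level at a given bit depth."""
    return FULL_WELL / (2 ** bit_depth)

for bits in (8, 12, 16):
    print(f"{bits}-bit: {2 ** bits} levels, ~{step_size(bits):.2f} e- per level")
```

Just as with the cylinders, the "volume" (capacity) is identical in every case; only the fineness of the markings changes.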
The more grey levels, the better you can differentiate among the numbers of photons hitting the detector (until you hit a limit at which no improvement is possible – more on this in future posts). In microscopes, and in all digital signal readings, we tend to use characteristic but non-intuitive ranges (unless you are an engineer, physicist or computer scientist, perhaps!). These ranges are set by the A/D converter, which takes a continuous analog signal (the reading from your detector, say a PMT) and converts it into a series of numbers that can be encoded and displayed as an image. So if you zoom in on your image enough, you can see that it is made up of individual pixels, and each pixel displays a grey (or colour) value that represents the intensity read by the detector at a specific point and time.
These ranges go as 2^N, where N is the bit depth. When N = 1, the number of grey levels equals 2^1, or 2. A 1-bit image therefore encodes readings that equal 0 and 1, or alternatively 1 and 2: sometimes the readings start from 0 and sometimes from 1, depending on the microscope and software.
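The 2^N rule is easy to tabulate. This short sketch assumes the common convention that readings start at 0:

```python
# The 2**N rule: N bits give 2**N grey levels. Whether readings run from
# 0 to 2**N - 1 or from 1 to 2**N is a software convention; here we
# assume they start at 0.

def grey_levels(n_bits):
    """Number of distinct grey levels available at a given bit depth."""
    return 2 ** n_bits

for n in (1, 8, 12, 16):
    print(f"N={n}: {grey_levels(n)} levels (0 to {grey_levels(n) - 1})")
```

Running this prints the familiar ranges: 2 levels for 1-bit, 256 for 8-bit, 4096 for 12-bit and 65536 for 16-bit data.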
With a bit depth of 1, the readings can equal exactly 0 or 1. With only two options you can say yes or no – photons were detected or not – but you can't tell which regions are truly brighter than others. This is illustrated in the left image, which shows immunofluorescence imaging of tubulin displayed at a bit depth of 1.
In the right image, we have shown the same image at a bit depth of 8, so values range from 0 to 255. Note that the darkest and brightest points have not changed; rather, the intensities in between contain more gradations. You can make out more differences in intensity: the image looks more shaded and gives you more information about the relative fluorescence intensity emitted from the sample.
Images courtesy of Lee Barrell, LCI
For microscopy we generally start at N = 8 (8-bit) but can go up to N = 16 (16-bit) or more, depending on the technique. One disadvantage of acquiring and saving higher-bit-depth images is that the files take up more storage space. However, with a few exceptions, it is better to acquire images at the highest bit depth your system allows. Your eyes will not detect a difference: even under optimal conditions, the highest bit-depth discrimination of human vision is about 870 grey levels, equivalent to a bit depth between 9 and 10. Yet even if you cannot see the difference, reducing the bit depth means that you may lose accuracy and precision when you analyze your data. Another good practice is to keep the original data set in its proprietary format even if you plan to export the images as TIFFs or PNGs. Also avoid JPGs: JPG uses lossy compression, which degrades the quality of your images in ways beyond just reducing bit depth.
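The loss of precision from reducing bit depth can be seen directly. In this sketch, the four 12-bit intensity values are invented; requantizing them to 8 bits (by dropping the four least significant bits) collapses distinct readings onto the same grey level:

```python
# A sketch of why reducing bit depth loses precision: requantizing 12-bit
# readings to 8 bits merges nearby values onto the same grey level, and
# that information cannot be recovered. The sample values are invented.

readings_12bit = [1000, 1005, 1010, 1016]  # four distinct 12-bit intensities

# Convert 12-bit (0-4095) to 8-bit (0-255) by dropping the 4 low bits.
readings_8bit = [r >> 4 for r in readings_12bit]
print(readings_8bit)  # [62, 62, 63, 63] -- two pairs are now identical
```

Four values that were distinguishable at acquisition become two after the conversion, which is exactly the kind of difference an analysis pipeline can no longer measure.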
Bit depth is a fundamental concept in digital imaging. Join us in the next post as we demystify more of the concepts and jargon in digital imaging that can present barriers to life scientists who want to optimize their imaging workflows.