Image Saturation - No blue pixels vs "few" blue pixels

Core facilities have differing protocols when it comes to setting confocal parameters. According to Leica, you want a few green and a few blue pixels when looking at the LUT. Other protocols/SOPs state you want no green and no blue pixels, or a “salt and pepper green background”. Is this just a matter of preference? I understand the logic on both sides.

Duke University: “Adjust the gain so just a few pixels are the max colour, reduce the offset so the background is about 50% the 0 colour. This ensures you have the full range of brightness within your image.”

Avoiding the pitfalls: “The image acquisition parameters should be set so that no detection channel shows pixels reading zero or saturated levels.”

Tutorial: guidance for quantitative confocal microscopy: Shows a few red (saturated) pixels in their figure.

And in either case, it seems impossible to find settings that avoid saturation in at least some areas across samples/treatments.


The idea of having a few saturated pixels along with a few background pixels at zero does one thing - it maximizes image contrast. Image contrast has nothing to do with quantification. This is a distinction that is very important and also very misunderstood.

When you are adjusting parameters on your confocal microscope - you need to keep in mind that you are optimizing raw data for potential image analysis - so ideally you want no overexposure or underexposure anywhere in the final image (no green or blue pixels for Leica, no red/blue for Zeiss, etc.) for all of your conditions. This way you can ensure that all your measurable signal falls within the detectable range of the PMT.
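To make the "no clipped pixels anywhere" criterion concrete, here is a minimal sketch (not from the thread) that counts pixels sitting at the floor or ceiling of the detector range. The 12-bit `max_value` default and the synthetic frame are assumptions for illustration; adjust them to your system's bit depth.

```python
# Sketch: flag clipped pixels in a raw confocal frame. Assumes a 12-bit
# detector (max count 4095) stored in a 16-bit array - an assumption,
# not a statement about any particular instrument.
import numpy as np

def clipping_report(image, max_value=4095):
    """Percentage of pixels at the floor (0) and ceiling (max_value) of the ADC."""
    underexposed = int(np.count_nonzero(image == 0))
    overexposed = int(np.count_nonzero(image >= max_value))
    total = image.size
    return {
        "underexposed_pct": 100.0 * underexposed / total,
        "overexposed_pct": 100.0 * overexposed / total,
    }

# Example: a synthetic frame with a deliberately saturated patch
frame = np.full((512, 512), 800, dtype=np.uint16)
frame[:10, :10] = 4095  # simulated saturated region
print(clipping_report(frame))
```

Running a check like this on every condition (rather than eyeballing colored LUT pixels) gives you a number you can report and a threshold you can standardize on.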

Often this will not get you the best image contrast - but at this point you should not be worrying about this. Image contrast can be adjusted in post for optimal display.
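As a sketch of what "adjusting contrast in post" can look like without touching the raw data, here is a simple percentile-based display stretch. The percentile values are arbitrary choices for illustration; the key point is that the stretched copy is for viewing only while measurements use the untouched raw array.

```python
# Sketch: percentile-based contrast stretch for DISPLAY only.
# The raw array is never modified, so quantification stays intact.
import numpy as np

def display_stretch(raw, low_pct=1.0, high_pct=99.0):
    """Rescale raw intensities to 8-bit for on-screen display."""
    lo, hi = np.percentile(raw, [low_pct, high_pct])
    scaled = np.clip((raw.astype(np.float64) - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)

rng = np.random.default_rng(0)
raw = rng.integers(50, 1200, size=(256, 256), dtype=np.uint16)
shown = display_stretch(raw)  # high-contrast copy for viewing only
# measurements (means, ratios, etc.) are still taken from `raw`
```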

It is not impossible to adjust your system so that you have no overexposure in your images for every condition - you just have to have a good idea of what to expect. This is where controls come in. Positive control conditions give you a good idea of how bright your signal should be - and if you make your adjustments to your brightest condition - the rest of your conditions should fall somewhere below the maximum. Or if you do not have a positive control - you can underexpose your brightest condition by 10-15% to hedge.
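The 10-15% hedge above is easy to turn into a target number. A minimal sketch, assuming a 16-bit range by default (the specific values are illustrative, not a recommendation from any vendor):

```python
# Sketch: compute a target peak intensity with headroom below the ADC
# ceiling, for when no positive control is available. Defaults are
# assumptions (16-bit range, 15% headroom), not vendor guidance.
def target_peak(adc_max=65535, headroom_fraction=0.15):
    """Intensity to aim for on the brightest condition."""
    return int(adc_max * (1.0 - headroom_fraction))

print(target_peak())            # 16-bit range, 15% headroom -> 55704 counts
print(target_peak(4095, 0.10))  # 12-bit range, 10% headroom -> 3685 counts
```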

Sometimes this is a difficult point to get across, as you may have a hard time visualizing good image contrast in scenarios where you have a large signal disparity between your brightest condition and your dimmest condition. That does not mean that the disparity is not quantifiable - just that your monitor may not be able to display all the data contained within the image at one time. Most monitors cannot display the full dynamic range of a 16 bit image. Don’t let that trick you into overexposing, however.

The main point I make with my users - if you overexpose - you cannot get this data back. It is gone forever. If your adjustments collect all the data - it can be further processed to render a measurable data set AND a high contrast image from the same data.

Hope this helps!


Important question! :point_up: @jasonkirk’s well-stated answer is correct.

To add my own spin on it: when acquisition settings are determined by arbitrarily clipping off the lowest and highest intensity values using a visual assessment of whether there are “a few” colored pixels in the image, the image is not representative of the full range of intensities present in the sample and therefore can’t be used to accurately measure intensity values. Perhaps some of the instructions you found online were written back in less quantitative times. This type of acquisition limits you to qualitative assessment of the data and risks missing weak or bright structures (depending on the sample and how “a few” is interpreted). And you may find yourself later collecting the data all over again to satisfy a reviewer or editor.

I’ll also add, more generally, that “a few” pixels is not a useful way to express quantity in science. Numbers are needed to reduce bias and enhance reproducibility.


@jasonkirk @jennifer

Thank you both for your replies. I understand the limitations. I guess the community could benefit from consistent messaging and protocols, along with updated websites.

Just FYI, the word “few” blue pixels is on the Leica SP8 poster. It’s also especially difficult considering researchers want images that look like published images, which are usually highly saturated.



Jasonkirk and Jennifer are correct. I’ll add that the sample and your question have a lot to do with it, and there are often limitations on what kind of image you can generate. Sometimes some pixels saturating is unavoidable at the expense of being able to image what you’re interested in. It comes down to how much signal to noise you need to get the data and answers that you seek; in some cases, no pixels saturating at all is completely fine. I could go on and on. It seems like recommending a bit of saturation is trying to make the point to take advantage of the dynamic range of the detectors (if you can), but any saturation is throwing information away. For display purposes only, a bit of saturation in the image is OK, as long as the background noise is low. Anyway. I’ll stop.



Thanks for the reply! I like the comprehensive discussion.

@jasonkirk A “few” blue pixels vs “lots” of blue pixels “at zero” is sometimes indicative of a hardware issue. On some early models of Thorlabs systems, it’s necessary to adjust the offset (gain) in a pre-amplifier circuit by opening up the control box and adjusting a potentiometer during initial setup. “Lots” of blue pixels with the ADC amplifier gain set to 0 and the detector supply voltage set accordingly (in software) is “bad” and indicative of a problem with the pre-amplifier, the offset in the ADC board itself, the detector, or a wiring issue. Unfortunately, that is how it is often described.

@jennifer These are all relative and not very scientific terms, which probably confuse a lot of end-users who might not have much experience with, or need for, detailed knowledge of electronics (or instruments such as oscilloscopes) and just want to use the system for their research. It would be nice if there were more standardization in terminology between manufacturers, or better efforts to describe, in a general way, what happens between the detector and the LCD screen.


Hi there

Our facility recommends no saturation and no underexposure in or near the area of interest (e.g. a piece of dirt that saturates in a corner of the image is not a problem if it is not close to the area to be analyzed). This covers cases where the area of interest is dimmer than other areas in the image.
We recommend setting the detector offset to the lowest level that gives no underexposure (blue pixels).
I think it would be worth feeding this conversation back to Leica. They might adjust their wording.
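The ROI-based recommendation above can be sketched in code: check clipping only inside the region you intend to analyze, so that a saturated speck of dirt elsewhere in the frame does not force you to lower the gain. The mask geometry and bit depth here are assumptions for illustration.

```python
# Sketch: check for clipping only inside a region of interest (ROI).
# Assumes a 16-bit range; the mask and example values are illustrative.
import numpy as np

def roi_is_clipped(image, roi_mask, max_value=65535):
    """True if any pixel inside roi_mask sits at 0 or at the ADC ceiling."""
    roi = image[roi_mask]
    return bool(np.any(roi == 0) or np.any(roi >= max_value))

img = np.full((128, 128), 3000, dtype=np.uint16)
img[0, 0] = 65535                 # simulated saturated dirt in a corner
mask = np.zeros_like(img, dtype=bool)
mask[32:96, 32:96] = True         # area to be analyzed
print(roi_is_clipped(img, mask))  # saturation outside the ROI is ignored
```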


The offset to which I referred is hardware, as in the gain factor of the pre-amplifier (inside/nearest to the detector module) or the offset in the firmware of the DAQ. The pre-amp is for low-level signals directly from the detector, before they travel through lossy lengths of noisy wiring to the analog input of the DAQ (which also has an amp, but with fixed gain for signal conditioning). I was not referring to “sliders” in the end-user software. Sliders only adjust the way the digital data interpreted by the DAQ is displayed on screen (basically thresholding) and/or the gain of the detector itself (supply voltage to the detector).

It is not likely an issue unless the system has been recently installed/moved, is very old, or the detector has been replaced or upgraded to a different type. It is possible to “get rid” of the pixels of various colors by adjusting the pre-amp or the DAQ firmware, but it is not something most end-users often, if ever, adjust. However, if there are “lots” of blue pixels, it is sometimes indicative of a hardware issue, especially if “it wasn’t like that before” with a standard sample.

The “blue pixels” are negative values in decibels (-dB) relative to some threshold when you look at the output of an amp on a spectrum analyzer (instead of as an image). It’s been a long time since analog CRT oscilloscopes were used to display the image. Manufacturers should do a better job of helping their customers interpret what appears on the LCD screen under different conditions.