Averaging and accumulating in LAS X

Hello everyone

I am doing live-cell imaging, and in LAS X there are two options: averaging and accumulation. What is the difference between these two modes? As far as I know, both reduce noise but in different ways: in accumulation the sample is scanned several times and the pixel intensities are added together, while in averaging the sample is scanned several times and the data are averaged. But I cannot fully understand this definition. Moreover, I know we cannot use line averaging and line accumulation at the same time; we have to combine line averaging with frame accumulation. Why?

Thank you in advance

Hi Maidah,

You’ve already explained it yourself: one is adding several scans (accumulation) and the other is averaging (in other words, accumulating and then dividing by the number of scans), so you can make your own average from an accumulation. Why then do averaging, you might ask? That has to do with the value range: if you make an 8-bit image, your maximum value is 255, which means that if you have an intensity of 50 and you want to do 8 times accumulation, you’ll run out of values. Averaging, by contrast, still works fine while keeping the 8-bit range.
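As a toy illustration of that value-range point (this is just a sketch of the arithmetic, not how LAS X implements either mode internally):

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight simulated 8-bit scans of one pixel whose true intensity is 50,
# each corrupted by a little detector noise.
scans = 50 + rng.normal(0, 5, size=8)

accumulated = min(int(scans.sum()), 255)  # the sum clips at the 8-bit ceiling
averaged = int(round(scans.mean()))       # the mean stays well inside 8 bits

print(accumulated)  # 255 -> saturated, intensity information lost
print(averaged)     # ~50 -> noise reduced, range preserved
```

With a 12- or 16-bit acquisition the same 8× accumulation would fit comfortably, which is why bit depth matters when choosing between the two modes.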

The combination of line accumulation and frame averaging allows you to be even more flexible: let’s say you have values of around 20, then you can do 4 times accumulation per line, make 2 frames, and average those.

In practice, for general confocal work (PMT detectors), averaging 4 to 8 times gives the best images in reasonable scan times. For HyD detectors and weak signals, accumulation might give better images.

In general, both methods work because the signal stays the same while the noise differs between scans: averaging cancels out part of the noise, and in an accumulation the noise makes a proportionally smaller contribution.
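The "signal stays the same, noise differs" argument can be checked numerically. In this sketch (simulated data, arbitrary noise level), averaging N frames shrinks the noise standard deviation by roughly sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = 100.0

# 16 simulated frames of a flat sample: identical signal,
# independent noise in every frame.
frames = true_signal + rng.normal(0, 10, size=(16, 256))

single_std = frames[0].std()              # noise of one frame, ~10
averaged_std = frames.mean(axis=0).std()  # after 16x averaging, ~10/sqrt(16)

print(round(single_std, 1), round(averaged_std, 1))
```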


Thank you for your explanation. I use a HyD detector and the signal is usually between 8 and 15. I used line average 4 (because I read in some manual that it is faster and better for live imaging), considering scan times; however, when I increased the line average, the signal decreased negligibly but the image quality got better. I cannot understand why the image quality improves with more averaging. Is it because of decreasing noise?
Moreover, between line accumulation and frame accumulation, I selected frame accumulation (because, as I said earlier, I read in the manual that combining line average with line accumulation is not possible). When I increased the frame accumulation, the signal increased a lot. Which frame accumulation count is optimal, and how can I select it? Should I aim for a higher signal within a reasonable scan time?

Pixels frequently read positive due to random scatter or even detector noise. Any individual speckle is infrequent, though, so if you sample a pixel 8 times and it reads 0 on 6 of those samples, the averaged value would be low even if one or two reads had a speckle of 20 or 40 or something.

If you have a “live” mode where there is no averaging, you will see the noise in the background regions fluctuating, while the real signal stays constant. If your gain is relatively low and there is no auto-fluorescence, that signal should be infrequent and weak. After averaging, most of it goes away.

Think about the average of
60, 65, 59, 80, 72
1, 0, 5, 0, 20

If you only had the final reading, the noise in that pixel would appear to be quite high, and that will happen in a certain percentage of pixels. It is far less likely to get random scatter or detector noise in many measurements in a row.
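Running the two example pixels above through a plain average shows the effect: the real signal barely moves, while the lone speckle is diluted.

```python
signal_pixel = [60, 65, 59, 80, 72]  # real signal with some spread
noise_pixel = [1, 0, 5, 0, 20]       # background pixel with one rare speckle

signal_avg = sum(signal_pixel) / len(signal_pixel)
noise_avg = sum(noise_pixel) / len(noise_pixel)

print(signal_avg)  # 67.2 -> stays high
print(noise_avg)   # 5.2  -> the speckle of 20 is averaged down
```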


Don’t allow the sum to go over the maximum value of whatever your bit depth is for saving your images. Whatever that value is will depend on both how you save the image, and the brightest point in your brightest sample, which is why it is good to start with the brightest possible sample first, and maybe give yourself a little bit of room.

It’s the same as playing with the laser power or gain. Once the pixel is saturated, it is mostly useless for many types of quantification.
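One way to turn the "don't exceed the bit depth" advice into a rule of thumb is to work backwards from the brightest pixel in your brightest sample. This helper and its `headroom` fraction are my own illustration, not a LAS X feature:

```python
def max_accumulations(brightest_pixel, bit_depth, headroom=0.9):
    """Largest accumulation count that keeps the summed brightest pixel
    below a chosen fraction (headroom) of the detector's full scale."""
    full_scale = 2 ** bit_depth - 1
    return int(headroom * full_scale // brightest_pixel)

print(max_accumulations(50, 8))   # 8-bit, brightest pixel 50 -> only 4x is safe
print(max_accumulations(50, 12))  # 12-bit gives far more room -> 73x
```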


Thank you very much for your answer. As I am doing live-cell imaging (cancer cell culture), I want to know whether averaging affects imaging depth through the cell or just increases the signal. As far as I know, these acquisition parameters (averaging, speed, …) only change the signal and have nothing to do with imaging depth, because imaging depth is limited by the refractive index mismatch between cell and medium, which causes scattering and prevents the light from penetrating deeper into the cell. Is that explanation correct?

Yes and no. Imaging depth is often limited by noise, and better signal to noise ratios can allow you to make use of data from greater depths in the sample. That is not to say that this improves the theoretical limit to depth imposed by the system (objective, depth of focus), but anything to reduce noise could potentially allow you to collect useful data from a few microns deeper.

Most of the noise reduction methods you are discussing here have the tradeoff that they increase scan time and damage to the sample. So by increasing averaging or accumulation you may get more useful information from deeper, but the sample may also not survive the increased light exposure (bleaching).


Thank you for your quick answer. You mention noise as a factor limiting imaging depth. I used a hybrid detector (HyD) with gain 100. Is it better to switch to a PMT to reduce noise?
In general, which factors can reduce noise and improve the signal-to-noise ratio to get better imaging depth? I think the detector and some acquisition parameters like averaging, scan speed, laser intensity, and pinhole size are effective. However, laser intensity and pinhole size improve the signal and have nothing to do with noise.


I’m afraid I do not know the specifics of your system, it might be best to talk to your representative if you have hardware questions, or put up a new thread requesting advice from anyone familiar with the exact kind of system that you have.

The best way to get more imaging depth is clearing the sample, which unfortunately requires it to be dead. The second-best way, probably, is longer wavelengths, which can be achieved with a 2- or 3-photon laser… which is fairly expensive. Purely on the settings side, most of the changes you can make to gain depth (aside from longer wavelengths) will have negative effects on the survival of your sample.

There is no single right answer; it will always be project-specific, depending on the sample, the characteristics of the fluorophore, the sensitivity of the cells, etc. If you really want to dive in, you might want to look for imaging lectures or papers like: Fluorescence Live Cell Imaging


Yes, I am struggling with this topic; I have searched a lot but still have questions. If you know any other sources about imaging depth or reducing noise in live-cell imaging, please share them with me.

Thank you in advance


Hi @Maidah,
It’s important to mention that there are different sources/types of noise. I’d actually welcome more photon noise, because it comes with more photons, and the signal-to-noise ratio is what’s important. I recommend reading this chapter from Pete Bankhead as an accessible introduction. Hopefully you’ll reconsider your last statement after reading. Bonus fun source of noise: photoswitching!
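The counter-intuitive point about photon (shot) noise is worth a quick calculation: for N detected photons the noise grows like sqrt(N), so the absolute noise goes up with more light, but the SNR improves anyway. A minimal sketch of that arithmetic:

```python
import math

def snr_shot_noise(photons):
    """Shot-noise-limited SNR: noise = sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    return photons / math.sqrt(photons)

# More photons -> more absolute noise, yet a better signal-to-noise ratio.
for n in (10, 100, 1000):
    print(n, round(math.sqrt(n), 1), round(snr_shot_noise(n), 1))
```

This is why collecting more photons (within what the sample tolerates) is the single most reliable way to improve image quality.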

Since you’re doing live-cell imaging, it may not be possible to acquire high SNR images while keeping your sample happy (or without incurring motion blurring). Post-processing (deconvolution/denoising) can be very helpful.

I’m not sure which Leica confocal you’re using, but I’ll assume the SP8 for the following statement.
Compared to PMTs, HyDs have a higher quantum efficiency which lets you detect more light in the wavelengths you’re likely interested in.


HyDs are fantastic detectors to use with accumulation. The low dark noise of HyDs means you won’t really see salt-and-pepper noise.

If it’s a dim sample, I prefer to use photon counting mode. For brighter samples, I go back to standard mode for more accuracy. (See here for more info)

Can you clarify what you mean by “imaging depth”? Do you want a thicker optical section, or are you struggling with the reduced brightness/quality as you image away from the coverslip?
Assuming it’s the latter issue, ideally you try to match the refractive index of the sample with the objective. But water/silicone objectives are pricey. If there is a refractive index mismatch, make sure your z step size is correct! One practical tip is to start the z-stack away from the coverslip to try and compensate for any photobleaching that would occur.

It’s tough to make detailed recommendations without knowing the goals of your project. You can find general advice at Tutorial: guidance for quantitative confocal microscopy.


I just want to emphasize this as it was something I did not appreciate when I started working at a core facility years ago!


Thank you so much for introducing some sources. I will try to read them. My sample is a 3D cell culture (spheroid). I don’t know whether it is dim or bright, but the fluorescence intensity I get from the sample is around 6 to 12 in arbitrary units.

By imaging depth I mean the penetration depth, or, as you said, the reduction of fluorescence intensity as we image away from the coverslip (I work with a 96-well plate instead of a coverslip). How can I select the best z step? As far as I know, it is defined by the pinhole size. I selected a z step of 2.35 µm with a pinhole size of 1.5 Airy units. My spheroid is also around 400 µm.

I want to reach the best imaging depth (considering the ~100 µm limitation of confocal microscopy). I don’t know which parameters are most effective for getting better imaging depth. Is it about reducing noise, improving signal, and averaging, or is refractive index mismatch the main limiting parameter?


The easiest way is to use the paper linked above, which includes a macro you can run in Fiji.

Summary: when imaging into another medium (say from air into water, which have different refractive indices), the effective step size changes.
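To first order this is just a scaling by the ratio of refractive indices. The sketch below uses the simple paraxial approximation; the macro in the linked paper handles the geometry more carefully, so treat this as an order-of-magnitude illustration only:

```python
def actual_z_step(nominal_step_um, n_immersion, n_sample):
    """Paraxial approximation: the focal plane moves by roughly
    n_sample / n_immersion times the nominal stage/objective step."""
    return nominal_step_um * n_sample / n_immersion

# Dry (air, n=1.0) objective imaging into an aqueous sample (n~1.33):
print(round(actual_z_step(2.35, 1.0, 1.33), 2))  # ~3.13 um, not the nominal 2.35
```

So with a dry objective and an aqueous spheroid, a nominal 2.35 µm step actually samples the specimen more coarsely than the software reports.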


Assuming a reasonable pixel size, laser power, and scan speed, a grey value of 6-12 sounds dim to me.

But let’s take a step back and check some other things:

  • The spheroids are ~400 µm thick (either floating in some gel or media). Even if you had access to a water immersion objective to minimize RI-mismatch-induced spherical aberration, you’ll still see an x, y, z dependence of intensity. (Paper that looked at spheroids up to 200 µm)

  • I’m assuming this is an inverted microscope, and your z step size makes me suspect that you’re using a low mag dry objective.

  • Does your 96 well plate have a glass bottom, or is it plastic?

  • How are you labelling your protein / object of interest?

  • What’s the least that you could image that’d still answer your question? Can you sacrifice pixel size for better signal?

  • Are you tossing these spheroids after imaging?

If you can’t clear / expand your sample, then minimizing RI mismatch is your next best option with that scope. If you’ve got a microscopy core facility, I strongly urge you to chat with them. Maybe there’s a nearby scope that’d fit better?


And many cancer spheroids in particular are quite dense and will not provide good imaging over ~50 microns, even with a 2P laser, unless cleared. Other spheroids, like some neuronal ones I have played with, can be imaged at much greater depth.


Sorry, I did not get it. Do you mean that even with a water immersion objective lens there is still spherical aberration, due to the x, y, z dependence?

Yes, I am using a 10× dry objective lens and glass-bottom 96-well plates. But you did not mention how I can optimize the step size.

I use a higher pixel size to get better image quality, but I can decrease it and still have a clear image, although the quality decreases a little.

I did not use clearing methods, because they are based on fixed cells and I am doing live-cell imaging.

In your live sample, you’ll still see a reduction of image quality as you go further from the glass bottom. Here’s Fig. 3 from the paper I referenced, showing that this is a problem even with a water immersion objective.

As for how to set your z step size during acquisition and how to correct for the axial distortion after acquisition, please see Box 1 for a step-by-step procedure.


You helped me by introducing these papers; I should thank you again. So I now know that the main reason for low imaging depth (penetration depth) is scattering due to refractive index mismatch. But I have one question.
Can dye concentration and incubation time (the time we let the dye diffuse into the spheroid in the incubator) improve imaging depth? Can they increase absorption or not? E.g., when I increased the dye concentration, the fluorescence intensity increased, and comparing two slices at the same depth (for the two dye concentrations), we see more cells at the higher dye concentration.