That is one reason, but the main reason tends to simply be scattering by the material. Which is why clearing is frequently used for fixed samples, and tumors tend to be very hard to image at any depth (dense + ECM). Higher dye concentrations can improve imaging depth, but that tends to help more with signal-to-noise ratio than signal-to-background, and either one becomes a problem if you go deep enough. Dyes are not 100% specific, and as you add more, they tend to… become less specific/bind to more things.
Enough mitotracker in a cell will label the nucleus. Enough Hoechst/DAPI will bind to the cell membrane (also negatively charged). Don’t ask how I know.
Thank you for your quick answer. I think you are right. So you mean increasing dye concentration or incubation time improves the signal-to-noise ratio, not imaging depth. But I don’t know why, when we look at confocal slices (a z-stack), there are more cells at the same depth with a higher dye concentration. Is it just a higher signal-to-noise ratio from the sample, so we cannot say that it means we are able to reach a greater imaging depth? Is that correct?
I have one more question
Even image acquisition parameters like pinhole size, scan speed, and averaging can improve the signal-to-noise ratio, not imaging depth. Is that correct? Because I am still struggling to reach an imaging depth of around 100 µm in my spheroid, but I get less than that.
It can increase the effective imaging depth by increasing the signal to noise ratio. Similarly, you can increase the imaging depth by turning up the laser power. Either way, you are increasing the amount of signal, ideally. However, any background, or anything the dye sticks to that is not your target, is also going to have increased signal.
Increasing the dye too much can introduce new kinds of background as the dyes can begin to stick to new organelles.
In the end, the scattering/absorption purely from the material itself will always win out. You may also end up saturating the signal (max value) in the shallower parts of the sample, or bleaching the sample by turning up the laser power.
All of those things can increase your signal to noise ratio, and also increase your effective imaging depth. They also all have tradeoffs.
A larger pinhole means more signal, but less resolution. Whether opening it is a good idea often depends on whether your pixel/voxel size is near the size of your point spread function. Using 1.0 Airy units might be great for resolution, but if your voxel size is ten times that (due to camera or PMT collection settings), you are really only limiting your light collection.
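To make that pinhole-versus-voxel comparison concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the thread) using the standard textbook approximations for the 1 Airy unit diameter and a Nyquist-limited confocal pixel size. The wavelength and NA values are just example numbers:

```python
import math

def airy_unit_um(emission_nm, na):
    """Diameter of 1 Airy unit in micrometres: 1.22 * lambda / NA."""
    return 1.22 * (emission_nm / 1000.0) / na

def nyquist_pixel_um(emission_nm, na):
    """Rough Nyquist-limited lateral pixel size for a confocal: ~lambda / (4 * NA)."""
    return (emission_nm / 1000.0) / (4.0 * na)

# Example: GFP-like emission (~510 nm) with a 1.4 NA oil objective
au = airy_unit_um(510, 1.4)
px = nyquist_pixel_um(510, 1.4)
print(f"1 AU diameter ~ {au:.3f} um, Nyquist pixel ~ {px:.3f} um")
```

If your actual pixel size is several times larger than the Nyquist value, tightening the pinhole below ~1 AU mostly throws away light without buying you resolution you can record.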
It is tough to image 100 µm into most biological samples without a loss of SNR, and while commonly used, point-scanning confocals have a lot of different acquisition parameters that can be tricky to optimize. So the struggle is real!
Biological samples scatter and absorb both excitation and emission light, and spherical aberration increases with depth. So it is expected to see decreased SNR with depth, and if the SNR is low enough you can get to the point where any signal that is there is lost in the noise and therefore undetectable. If you increase the signal in your sample, you should be able to get a higher SNR at greater depth. There are multiple approaches to increasing signal/decreasing noise, and the best approach is very sample dependent.
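As a toy illustration of that depth-dependent signal loss (my own sketch, not from the post above): if you assume simple exponential attenuation of both excitation and emission with some effective coefficient, and shot-noise-limited detection, the SNR falls with the square root of the surviving signal. All the numbers here are hypothetical:

```python
import math

def detected_signal(s0, depth_um, mu_eff_per_um):
    # Round trip: excitation in, emission out, each attenuated
    # exponentially (a crude Beer-Lambert-style model of scattering/absorption).
    return s0 * math.exp(-2.0 * mu_eff_per_um * depth_um)

# Hypothetical numbers: 1000 detected photons at the surface, mu_eff = 0.01/um
for depth in (0, 50, 100):
    s = detected_signal(1000.0, depth, 0.01)
    snr = math.sqrt(s)  # shot-noise-limited SNR ~ sqrt(signal)
    print(f"{depth:>3} um: signal ~ {s:6.1f} photons, SNR ~ {snr:4.1f}")
```

Even with this idealized model (no spherical aberration, no background), signal and SNR fall off quickly with depth, which is why the signal eventually disappears into the noise.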
In general, live biological samples scatter more light because they usually have an inhomogeneous and mismatched refractive index (a mismatch between sample, coverslip and immersion media increases spherical aberration), and most of the fluorophores that are compatible with live cell imaging (eg, fluorescent proteins) photobleach faster and give off fewer photons per unit time (lower quantum yield and extinction coefficient) than fluorophores that work with fixed imaging (eg, the Alexa series). The use of a water immersion lens (with a carefully adjusted correction collar) rather than oil immersion can decrease spherical aberration and increase SNR at higher depths, if you have access to one.
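One way to see why the refractive-index mismatch matters more with depth (a first-order paraxial sketch of my own, not from the post): with an oil objective focused into an aqueous sample, the actual focal plane sits at roughly the nominal depth scaled by the ratio of sample to immersion refractive index, so the error grows linearly with depth. The indices below are typical textbook values:

```python
def actual_focal_depth_um(nominal_um, n_sample, n_immersion):
    # First-order (paraxial) approximation: the true focal depth scales
    # with the ratio of sample to immersion refractive index. High-NA
    # behavior is more complicated, so treat this as a rough estimate.
    return nominal_um * (n_sample / n_immersion)

# Oil immersion (n = 1.518) focused a nominal 100 um into an aqueous sample (n = 1.33)
oil = actual_focal_depth_um(100.0, 1.33, 1.518)
# Water immersion (n = 1.33) into the same sample: no mismatch
water = actual_focal_depth_um(100.0, 1.33, 1.33)
print(f"oil: ~{oil:.1f} um actual, water: {water:.1f} um actual")
```

With water immersion the mismatch (and the depth-dependent spherical aberration that comes with it) largely goes away, which is why it tends to win for deep imaging into aqueous samples.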
Fixed samples that are extracted (eg, with a detergent that disrupts the membrane to allow dyes in) and perfused with a high refractive index mounting media are generally easier to image, because the extraction helps to homogenize the refractive index of the sample and perfusion with a high refractive index media helps match the refractive index of the sample, glass coverslip and immersion oil. Check out Fluorescence Mounting Media - Nikon Imaging Center at Harvard Medical School. There are also many more photostable and brighter fluorophores that are compatible with fixed samples than with live cells.
It’s best to always start with optimizing the fluorophore, illumination/filters and optics for high signal. Choosing a fluorophore that is compatible with your microscope’s laser lines and filters, as bright as possible (high quantum yield and extinction coefficient) and photostable (low rate of photobleaching) can make a huge difference. Check out FPbase.org for help with selecting a fluorophore. The sample affects fluorophore performance, so try a few if you can. We find that there are a lot of outdated fluorophores in lab freezers, so when you’re struggling with low SNR it’s well worth taking the time to choose and test fluorophores with your sample. You should also use as high a numerical aperture objective as you can - generally, the higher the NA, the more signal the objective lens is able to collect (Numerical Aperture - YouTube).
The acquisition parameters you list can sometimes increase signal, but unfortunately always with a sacrifice that won’t work with all samples. The key is choosing the sacrifice that is most tolerable for your sample and experimental question. Increasing pinhole size will collect more signal, but at the cost of increased collection of out-of-focus fluorescence (OOF). If your sample has a lot of OOF, increasing the pinhole size may result in worse SNR, while if your sample has lower levels of OOF, a larger pinhole can be very helpful in increasing SNR. Slowing down the scan speed can increase SNR, but will also increase photobleaching and slow down acquisition. This can work with samples/fluorophores that bleach at a slow rate, but often fails with bleachy samples. Averaging increases SNR by decreasing noise. The idea here is that if the signal is ~constant in each pixel of the image but the noise is variable (which it is), averaging will result in an image with similar signal but decreased noise. However, averaging requires collecting multiple images at each focal plane, which increases photobleaching. Sometimes using a combination of these approaches is best. Since samples vary so much it’s hard to advise which parameters to start with. Trying the different parameters is the best approach and what we do with our core users.
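The noise-reducing effect of averaging is easy to demonstrate with a quick simulation (my own sketch, not from the post; I'm simulating a constant signal with Gaussian noise, so the numbers are purely illustrative):

```python
import random
import statistics

random.seed(0)
TRUE_SIGNAL = 100.0   # constant "signal" in every pixel
NOISE_SD = 20.0       # per-frame noise standard deviation

def noisy_frame(n_pixels=5000):
    # One simulated frame: signal plus independent Gaussian noise per pixel.
    return [TRUE_SIGNAL + random.gauss(0, NOISE_SD) for _ in range(n_pixels)]

def averaged_image(n_frames):
    # Pixel-wise mean over n_frames repeated acquisitions.
    frames = [noisy_frame() for _ in range(n_frames)]
    return [sum(px) / n_frames for px in zip(*frames)]

for n in (1, 4, 16):
    residual_noise = statistics.stdev(averaged_image(n))
    print(f"{n:>2} frames averaged: residual noise ~ {residual_noise:.1f}")
```

The residual noise drops roughly with the square root of the number of frames (4x the frames halves the noise), which is exactly the trade Jennifer describes: better SNR in exchange for proportionally more light exposure and photobleaching.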
Techniques like clearing, adaptive optics, multiphoton and deep learning (eg, CARE) can help - but each of these requires a lot more work and optimization, is technically difficult and has its own issues. So def don’t start with them, and only try them if you are certain an optimized confocal and sample don’t get you the image quality you need for your analysis.
I hope this helps!
Jennifer C. Waters, PhD
Director of the Nikon Imaging Center & Lecturer in Cell Biology, Harvard Medical School
Chan Zuckerberg Initiative Imaging Scientist https://twitter.com/jencwaters
Thank you for your complete answer. So you mean if I don’t want to use other techniques and want to use just confocal microscopy, we should change from an air objective lens to a water immersion lens to get better imaging depth. Is that correct? I ask because I want better imaging depth and our microscope is a Leica SP8, which has only air and oil immersion lenses. I tried to observe a spheroid with an oil immersion lens but I could not see anything. And as I told you, by optimizing other parameters like dye concentration, incubation time, or acquisition parameters we can get better SNR (I am aware that there is a trade-off in changing every parameter).
One more thing that may not have been mentioned is the working distance of some of those objectives. Some oil objectives may have difficulty focusing on the sample if it is not pushed up against the coverslip/bottom of the well.
Also, the distance between the coverslip and the organoid is made up of a different refractive index, that of the media. An oil objective then has to go objective->oil->coverslip->water/media->sample. And then back. Water objectives will be better for this, but the deeper into the well your sample is, the worse your image will be.
Depending on how structurally strong your samples are, it may help to position them as close to the coverslip as possible.
Forgive me, I didn’t read all the posts! Great that you have been optimizing the sample.
A water immersion lens might help, but will introduce other issues as well… you would need to optimize the objective lens correction collar for depth, since spherical aberration increases with depth. It’s not possible to optimize it for all depths (ie, if you optimize it for X µm into the sample, it might not look as good at Y µm), so again your needs may not be compatible. John Murray’s lab (J Microscopy 2004) also published a nice paper on aberrations that can occur with water immersion lenses due to coverslip tilt. You may be able to borrow one from Leica to try out; if it makes a big difference, maybe you can get your PI to buy you one. @Research_Associate makes another very good point - getting the sample as close to the coverslip as possible can make a huge difference. How is your sample set up?
Let me know if you need anymore references for any of these topics. You may also want to check out our list of some of our favorite imaging references here: References - Nikon Imaging Center at Harvard Medical School
(Thank you all for distracting me from boring admin work :))