Choosing a confocal for live imaging / how resonant scanning actually works?

Hi uCrew,

We’re considering a spinning disk (Crest V3), a line scanner (confocal.nl NL5), and a point scanner (Nikon AXR) to start doing some live 3D imaging in our group. I’m curious about feedback on these systems generally, so if anyone has advice or wouldn’t mind a chat, please DM me.

In the meantime, I’m also trying to learn more about why resonant scanning actually reduces phototoxicity. I’ve had a hard time finding papers that clearly benchmark it against other imaging modes, and I’ve only seen a few references that go into triplet states and some of the photochemistry. Is it clear why resonant scanning would actually reduce phototoxicity, and how do the best implementations compare to something like spinning disk in terms of toxicity? I’ve spoken with reps from several companies, but nobody seems to have a clear answer on how resonant scanning works, or quantitative benchmarks of how it performs.

Thanks!

D

Hi @djc,

Hot/somewhat controversial topic we’re wading into here, but in my opinion, a reduction in photobleaching resulting from resonant scanning has not been rigorously/quantitatively demonstrated.

Most arguments I’ve heard point to a couple sources:

The one I see referenced most often, and which addresses the question most directly, is this letter in MRT from many years ago (I’m not sure if it’s peer reviewed):

This is a purely theoretical paper arguing that the pulsed illumination in a resonant scanner may reduce photobleaching: by giving fluorophores that have entered the longer-lived triplet state (T1) time to relax back to the ground state (S0) before being re-excited, it limits the accumulation of fluorophores in the higher excited triplet state (T2 in the diagram below), which is said to lead to photobleaching.

Obviously this argument hinges entirely on those rate constants (k1–k7) and the actual duty cycle of excitation in a resonant scanner. That paper simply makes up rate constants that support the argument being made, with no references to back them up. Furthermore, there are no time units in the graphs there, and in simulations I’ve made to try to reproduce the curves they show, I have to pick pretty outlandish parameters to come up with anything approaching the benefit shown in figure 4. Nonetheless, that paper frequently gets used as the rationale for the “relaxation to the ground state” argument.
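If you want to poke at this yourself, here’s a minimal sketch of the kind of three-state rate-equation simulation I mean: ground state S0, excited singlet S1, and triplet T1, with the same time-averaged excitation delivered either continuously or at a reduced duty cycle. Every rate constant below is an assumed, illustrative value (not from the letter or any measurement), which is exactly the point: the conclusion swings wildly depending on what you plug in.

```python
# Minimal three-state (S0, S1, T1) rate-equation sketch of the argument above.
# ALL rate constants here are assumed, illustrative values -- which is exactly
# the weakness of the original letter: the outcome depends entirely on them.
def simulate(duty, period, total_time, peak_exc_rate,
             k_fl=1 / 3e-9,   # S1 -> S0 decay (~3 ns lifetime)
             k_isc=1e7,       # S1 -> T1 intersystem crossing (assumed)
             k_trel=2e5,      # T1 -> S0 relaxation (~5 us triplet lifetime, assumed)
             k_bleach=1e4,    # T1 -> photobleached (assumed)
             step=2e-10):
    """Forward-Euler integration; excitation is on for `duty` of every `period` (s)."""
    s0, s1, t1, dead, photons, t = 1.0, 0.0, 0.0, 0.0, 0.0, 0.0
    while t < total_time:
        k_exc = peak_exc_rate if (t % period) < duty * period else 0.0
        ds0 = -k_exc * s0 + k_fl * s1 + k_trel * t1
        ds1 = k_exc * s0 - (k_fl + k_isc) * s1
        dt1 = k_isc * s1 - (k_trel + k_bleach) * t1
        photons += k_fl * s1 * step              # photons emitted per molecule
        dead += k_bleach * t1 * step             # bleached fraction
        s0 += ds0 * step
        s1 += ds1 * step
        t1 += dt1 * step
        t += step
    return photons, dead

# Same time-averaged excitation rate, delivered continuously vs. at 10% duty cycle:
cw = simulate(duty=1.0, period=1e-6, total_time=1e-4, peak_exc_rate=1e6)
pulsed = simulate(duty=0.1, period=1e-6, total_time=1e-4, peak_exc_rate=1e7)
print("CW:     photons=%.1f  bleached=%.4f" % cw)
print("pulsed: photons=%.1f  bleached=%.4f" % pulsed)
```

Depending on the constants you choose, the pulsed case can come out better, the same, or even worse, which is why I don’t find the theoretical argument on its own very convincing.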

Another, much more rigorous/quantitative paper exploring the effect of pulsed illumination on photon yield is Donnert et al. 2007:

https://www.nature.com/articles/nmeth986

There, they adsorbed various FPs and dyes to glass, then irradiated them with a (stationary) pulsed beam and measured the rate of bleaching and the total photon yield of the spot as they varied the repetition rate of the laser. They found that inter-pulse intervals on the order of 0.5-2µs increased the total photon yield:

Unfortunately, when people reference that paper in support of “pulsed illumination is better”, they tend to miss the very important detail that the effect depends heavily on the irradiance, as shown in figure 2b. Specifically, the integral under the curve in figure 1 only starts to improve substantially with larger inter-pulse intervals once you get into the range of many MW/cm2.

[Screenshot of the relevant figure from Donnert et al. 2007]

Those are irradiation levels that are not uncommon in STED, but are very uncommon even in point scanning confocal (and unheard of for spinning disc and widefield techniques). Note: tens of MW/cm2 is at least an order of magnitude larger than a “saturating” light dose (i.e. approaching ground state depletion) for EGFP. God help you if you’re putting that much light on your live sample :wink:
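As a sanity check on the scale of those numbers, here’s my own rough calculation (the EGFP absorption cross-section is an assumed textbook-ballpark value, not something from this thread or the paper): the irradiance at which the excitation rate approaches 1/lifetime, i.e. ground-state depletion, comes out well under 1 MW/cm2, so “many MW/cm2” really is far past saturation. It also shows how laser repetition rate maps onto the 0.5-2µs inter-pulse intervals mentioned above.

```python
# Rough saturation-irradiance estimate for EGFP. sigma_cm2 is an assumed,
# textbook-ballpark absorption cross-section -- not a value from the thread.
h, c = 6.626e-34, 3.0e8       # Planck constant (J*s), speed of light (m/s)
wavelength = 488e-9           # m
sigma_cm2 = 2e-16             # cm^2, assumed EGFP absorption cross-section
tau = 3e-9                    # s, excited-state lifetime (~3 ns, as quoted later)

photon_energy = h * c / wavelength            # ~4.1e-19 J per 488 nm photon
i_sat = photon_energy / (sigma_cm2 * tau)     # W/cm^2 where excitation rate ~ 1/tau
print(f"I_sat ~ {i_sat / 1e6:.1f} MW/cm2")    # ~0.7 MW/cm2

# Inter-pulse interval for a few repetition rates (0.5-2 us is the range where
# Donnert et al. saw the photon-yield benefit):
for rep_rate_mhz in (80, 40, 2, 0.5):
    print(f"{rep_rate_mhz:>5} MHz  ->  {1 / rep_rate_mhz:.2f} us between pulses")
```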

There are many other papers that have tried to explore this relationship between temporal light dose and photobleaching (non-exhaustive list below)… but properly normalizing absolute total irradiance while varying the illumination protocol is a famously hard thing to do… and I think that all of these leave at least some room for debate (or at the very least, are hard to generalize beyond the specifics of the experimental setup… and are of questionable applicability to resonant scanning in a point scanning confocal).


To be clear, I’m not trying to argue that resonant scanners have no benefit over conventional point scanners… but in my experience, this argument has tended more towards qualitative anecdotal reports. (Though always happy to hear some counter opinions here!)


I’ll leave you with one additional back-of-the-envelope calculation. Consider that at the end of the day we’re limited by the fluorophore:

  • the lifetime of an EGFP molecule is ~3 ns and the quantum yield is ~60% (meaning it can give you no more than 0.2 photons/ns at saturating light intensities)
  • the dwell time of a 12 kHz resonant scanner is ~80 ns for a 1k x 1k field of view.

Even with a high-NA lens, a high-QE detector, and a perfectly aligned microscope, you’re looking at a best-case scenario of ~0.6 photons detected per EGFP per 80 ns dwell… (assuming you’re dumping upwards of 0.6 MW/cm2 of light onto the specimen).
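In case it’s useful, here’s that arithmetic spelled out. The only number that isn’t taken from the bullets above is the lumped detection efficiency, which is my own assumed factor (collection × optics × detector QE) in the few-percent range:

```python
# Back-of-envelope photon budget using the numbers in the bullets above.
tau_ns = 3.0              # EGFP excited-state lifetime (ns)
qy = 0.6                  # EGFP quantum yield
line_rate_hz = 12_000     # resonant scanner frequency
pixels_per_line = 1024    # 1k x 1k field of view

emission_rate = qy / tau_ns                        # 0.2 photons/ns at saturation
dwell_ns = 1e9 / line_rate_hz / pixels_per_line    # ~81 ns per pixel
emitted = emission_rate * dwell_ns                 # ~16 photons emitted per molecule per dwell

detection_eff = 0.04      # ASSUMED lumped collection x optics x QE factor
detected = emitted * detection_eff                 # <1 photon detected per dwell
print(f"dwell ~ {dwell_ns:.0f} ns | emitted ~ {emitted:.0f} | detected ~ {detected:.2f}")
```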

That’s not a lot of SNR, so very often you’ll find yourself averaging frames/lines to recover that signal.
There’s no free lunch.

7 Likes

So in a resonant scanner, the dose isn’t consistent across the field of view: the mirror slows down and reverses direction at the edges, so the edges bleach more than the center. Sometimes a Pockels cell, AOD, AOM, SLM, or other device is basically used as a high-speed shutter for “edge blanking.” It’s not quite right to think in terms of a single dwell time as you would with a galvo mirror.
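To put a rough number on that non-uniformity, here’s a sketch under the simplifying assumption of a purely sinusoidal mirror trajectory (the “usable fraction” values are arbitrary examples, not any vendor’s spec):

```python
import numpy as np

# Mirror position on a resonant scanner is ~sinusoidal, so scan velocity falls
# toward the turnarounds and the dwell time (hence dose) per unit field position rises.
def relative_dwell(field_fraction):
    """Dwell per unit field position relative to the center of the sweep.
    field_fraction: 0 at the center, +/-1 at the turnaround points."""
    x = np.clip(field_fraction, -0.999, 0.999)   # avoid the singularity at the turnaround
    return 1.0 / np.sqrt(1.0 - x ** 2)

# If edge blanking restricts the image to the central fraction of the sweep,
# the worst-case dose ratio (field edge vs. field center) is:
for used_fraction in (0.99, 0.8, 0.6):
    ratio = relative_dwell(used_fraction)
    print(f"using central {used_fraction:.0%} of sweep -> edge sees ~{ratio:.1f}x center dose")
```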

There’s also the case of polygon scanners…

There’s also consideration of the differences in terms of detection (PMT/APD vs camera) and all that goes along with that…

Awesome details, Talley!

My 10,000 ft view of resonant scanners is that they provide the option to sacrifice image quality to gain substantial imaging speed and reduce photobleaching (because a resonant scanner has a much smaller minimum dwell time), not that they have some super secret physics to get you image quality equal to galvos but faster and more gentle (though that would be awesome).

Tangential comment on modality: Think of the size of what you’re imaging, the size scale of what you want to measure and the time scale of local movements.

  • Very small regions within the full field of view (region sizes = 1-2x the resolution limit) will likely be collected faster with a scanning system than with full-field integration, thereby reducing motion blur (from Brownian motion or directed motion) and associated signal loss. Ex: tiny protein condensates and vesicles.
  • However, over the full field of view, there’s a time delay with scanning systems that may introduce large-scale spatial artifacts, since the top left is acquired at a different time than the bottom right. Ex: cell shape and continuous structures like microtubules. Obviously, how much this matters depends on the speed of the structure of interest and the time required for the image collection.

So, from my perspective, the main compromise to suss out is spatial integrity and signal at very small scales (1-2x the resolution limit) vs. spatial integrity at large scales (a rough sketch of the timescales involved is below).
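Here’s a toy comparison of those timescales. The camera exposure and the speed of the structure are purely assumed, illustrative numbers; the scanner numbers follow from the 12 kHz / 1k x 1k figures quoted earlier in the thread:

```python
# Within-pixel blur vs. across-frame skew (scanner) vs. full-exposure blur (camera).
dwell_s = 80e-9               # per-pixel dwell on a resonant point scanner (from above)
frame_scan_s = 1024 / 12_000  # ~85 ms to scan 1024 lines at 12 kHz (matches the ~80 ns dwell)
camera_exp_s = 20e-3          # ASSUMED camera exposure for comparable signal

speed_um_s = 1.0              # ASSUMED speed of the structure of interest (um/s)

print(f"motion during one pixel dwell:     {speed_um_s * dwell_s * 1e3:.6f} nm")
print(f"motion across one scanned frame:   {speed_um_s * frame_scan_s * 1e3:.0f} nm")
print(f"motion during one camera exposure: {speed_um_s * camera_exp_s * 1e3:.0f} nm")
# i.e. tiny objects barely move during a single dwell, but large structures can
# shift noticeably between the top and bottom of a scanned frame.
```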

Good luck!

1 Like

Here is a comparison of “Swept Field” vs spinning disk from a Bruker brochure. Swept Field is a type of “line scanning” which uses linear arrays of pinholes (not slits). It was created at a university in Wisconsin, marketed by Nikon for a while, and now by Bruker. I have looked through the marketing material for confocal.nl in detail, but as of yet I have not seen a detailed diagram of the scanning scheme for their line scanner. There are dozens of variations on “line” or “slit” scanning. However, the table provides an easier-to-digest comparison of “line” vs spinning disc. Confocal.nl has quite knowledgeable staff who are familiar with Nikon and other instruments; likewise with Crest. But it’s hard to get vendors to break down the benefits of their devices without glossing over their own weaknesses compared to others.

This screenshot is from a company called IVIM Technology, a Korean outfit which uses a polygon scanner. I include it simply to illustrate the non-uniformity of illumination associated with resonant scanners (a bit exaggerated). Polygon scanners also have issues. IVIM, like Nikon, sells an entire system, and it’s not fair to compare just one part of a system. Nikon systems typically include an AOD or AOM inside the laser unit which can be used to “shutter” the laser for edge blanking and effectively remove any bleaching at the edge. Likewise, it’s not fair to compare a scanner from one vendor to a complete system without taking into consideration all of the parts needed for a particular solution.

That table is rather heavy on the marketing, wouldn’t you say?

Absolutely. It’s just that Bruker has a table providing some comparison. I quite enjoy reading papers about the different designs… but for an end-user making an expensive decision… it’s not straightforward.

Diagrams of the Bruker scanhead