Camera choices for STORM

Hi everyone,
What would you suggest for a camera for a STORM system: sCMOS or EMCCD? I would imagine that a 6.5 μm sCMOS is better than a 16 μm EMCCD? But I was also told that if we are only looking for localizations (blinking events), then the actual size of each pixel does not matter? Or is QE more important than pixel size in STORM?

Thank you!


There are several things to consider. The first is probably your budget, as EMCCDs are (or at least were) at least 2x more expensive than most scientific CMOS cameras.

After that, QE is more important than pixel size, but it’s not high QE alone that you are looking for; you want to make sure that the read noise of the camera is significantly less than the signal. For an sCMOS camera that would work well for STORM, this means less than about 3 electrons (e-) of read noise. You may also be aware that the EMCCD gain stage adds extra noise that is approximately equivalent to a 2x reduction in QE, so an EMCCD with 100% QE would be equivalent to an sCMOS with 50% QE (assuming low read noise).

Finally, you want your camera pixel size to correspond to about 100 nm in the focal plane of the microscope. This can be more challenging with some CMOS cameras that have relatively small pixels, like 4.5 µm, as you end up having to use a relatively low magnification objective or adding external optics to demagnify.
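To make the QE and pixel-size arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The helper functions and all of the numbers in it (photon count, QE values, read noise, magnifications) are my own illustrative assumptions, not the specs of any particular camera; the EM gain stage is modeled as the usual sqrt(2) excess noise factor on the shot noise, which is what makes a high-QE EMCCD behave roughly like a camera with half the QE.

```python
# Rough, illustrative comparison of an sCMOS and an EMCCD for STORM.
# All numbers are assumptions for the sake of the example.
import math

def effective_pixel_nm(pixel_um, magnification):
    """Camera pixel size projected into the sample (focal) plane, in nm."""
    return pixel_um * 1000.0 / magnification

def snr_per_event(photons, qe, read_noise_e, excess_noise_factor=1.0, background_e=0.0):
    """Very simplified SNR for the photons from one blinking event.

    excess_noise_factor ~ sqrt(2) models the EM gain stage of an EMCCD,
    which acts roughly like a 2x loss in QE at high EM gain.
    For an EMCCD, read_noise_e should be the effective read noise
    after the gain (well below 1 e-).
    """
    signal = photons * qe
    shot_var = (excess_noise_factor ** 2) * (signal + background_e)
    return signal / math.sqrt(shot_var + read_noise_e ** 2)

# Assumed configurations: 6.5 um sCMOS behind a 60x objective,
# 16 um EMCCD behind a 100x objective plus 1.6x extra magnification.
print("sCMOS eff. pixel: %.0f nm" % effective_pixel_nm(6.5, 60))    # ~108 nm
print("EMCCD eff. pixel: %.0f nm" % effective_pixel_nm(16.0, 160))  # ~100 nm

photons = 1000  # assumed photons reaching the camera from one blink
print("sCMOS SNR: %.1f" % snr_per_event(photons, qe=0.82, read_noise_e=1.6))
print("EMCCD SNR: %.1f" % snr_per_event(photons, qe=0.95, read_noise_e=0.1,
                                        excess_noise_factor=math.sqrt(2)))
```

With numbers in this range the sCMOS comes out ahead (SNR of roughly 29 vs. 22) despite its lower nominal QE, which is exactly the point about the excess noise of the EM gain stage.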

Hi Hazen,
Thanks for your reply! This is really helpful. I hope you don’t mind me asking (I am very new to super-resolution), but why does the camera pixel size need to correspond to about 100 nm in the focal plane? To give you more details, we are comparing the Hamamatsu Fusion and the Andor U897 EMCCD. The quote we got for the EMCCD system is cheaper than the Fusion system. They are from two different companies, so there are probably other factors in the price. I know the Fusion camera has very low noise, so can I say that the Fusion might be the better choice if we can bring the price down a little?

Hi Chao-Wei,

Regarding pixel size, if the pixel size is too large (> 500 nm) then most or all of the light emitted by a single fluorescent dye molecule will be captured by a single pixel. In this case your resolution will be limited by your pixel size, and will be very low, technically 2x the pixel size due to the Nyquist sampling limit. If the pixel is too small (< 20 nm), the fluorescence from the dye molecule will be spread across many pixels, so the ratio of signal to read noise will be correspondingly lower. The value of 100 nm is a compromise between these two regimes. The exact value will depend on the specifics of the camera, and can be calculated. I believe there are papers on this subject, perhaps from the Ober lab (https://www.wardoberlab.com/)? However, the minimum here is relatively broad, so getting the pixel size exactly right isn’t super important as long as it is close. I’ve used setups with pixel sizes between 80 nm and 160 nm without noticing much difference in resolution.
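If it helps to see where that broad minimum comes from, below is a rough sketch using the Thompson/Larson/Webb (2002) approximation for localization precision. The PSF width, photon count and per-pixel noise value are assumptions chosen only for illustration, and bg_noise here lumps background and read noise per pixel together; it is only an approximation, but it shows how precision barely changes over a wide range of pixel sizes, while very small or very large pixels do start to cost you.

```python
# Sketch: localization precision vs. effective pixel size,
# using the Thompson/Larson/Webb (2002) approximation.
# All parameter values are illustrative assumptions.
import math

def localization_precision_nm(pixel_nm, psf_sigma_nm=150.0, photons=1000.0, bg_noise=3.0):
    """Approximate 1D localization precision (nm) for one blinking event.

    pixel_nm     : camera pixel size projected into the sample plane
    psf_sigma_nm : Gaussian width (sigma) of the PSF
    photons      : detected photons in the event
    bg_noise     : noise per pixel (background + read noise, std dev, in photons)
    """
    s2 = psf_sigma_nm ** 2
    a2 = pixel_nm ** 2
    var = (s2 + a2 / 12.0) / photons \
        + 8.0 * math.pi * s2 ** 2 * bg_noise ** 2 / (a2 * photons ** 2)
    return math.sqrt(var)

for a in (20, 50, 80, 100, 160, 250, 500):
    print("pixel %4d nm  ->  precision %5.1f nm" % (a, localization_precision_nm(a)))
```

With these particular numbers the computed precision stays within about a nanometer of its best value for pixel sizes anywhere between roughly 80 nm and 300 nm, which matches the observation that 80 nm and 160 nm setups look about the same in practice.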
