How to do a camera comparison? (What makes an image "better"?)

Hi! I’ve recently acquired a Hamamatsu Flash4.0 LT, and I’d like to compare its performance to that of another camera (a Photometrics Prime BSI) on the same microscope. I’ve captured images of the same sample using both cameras with the same exposure times, excitation powers, etc. Now I want to present this information to a PI and give them a recommendation for a purchase. How do I determine, numerically, which of the two images is better? I have some ideas about this, but I’m not sure exactly what I should be doing. For example, I can normalise the images (subtract the min, then divide by max minus min) and look at the standard deviation, but the std dev in what definitely looks like the worse image is lower than in the better image. I suspect this is because of the high dark current in the worse image (is that correct?) and the density of real signal in the better image.

I appreciate that explaining this kind of analysis in full is quite laborious, so if anyone has good resources/reading for this kind of analysis I would be extremely grateful for the pointers.

Flash4.0 LT:

Prime BSI:

hey @CMCI, I love this question :slight_smile:

as a quick caveat: cameras are very complicated instruments with a ton of parameters that one might care about, and it can be really hard to do accurate comparisons. But that shouldn’t stop you from trying! :slight_smile: Here are a few considerations.

Ultimately, probably the best single number you could measure is the signal-to-noise ratio (SNR). A very quick-and-dirty estimate (there’s a code sketch after this list):

  • you can ballpark the signal as the max or the mean of some region of interest that has decent signal (it must be exactly the same region in the two images), minus the background (the mean of a different area with minimal signal).
  • you can estimate the noise as the standard deviation of some other region with minimal signal. Note: this is not strictly accurate, since that region will lack the Poisson noise (shot noise) intrinsic to the signal; however, if you took the SD of the area that had signal, you’d mostly be looking at variance in the sample itself (and not variance in your intensity measurement). So this gives you an estimate of the noise due to read noise alone.
  • (if bleaching weren’t a concern, a technically better way to measure SNR would be to take the mean of a pixel or region of pixels, divided by the standard deviation of that same pixel or region over a number of observations. With bleaching, however, you’d mostly be measuring variance due to bleaching.)
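here’s a minimal sketch of that quick-and-dirty estimate, assuming your two acquisitions are already loaded as numpy arrays and you’ve chosen the ROI coordinates yourself (the slices and variable names below are just placeholders):

```python
import numpy as np

def quick_snr(img, signal_roi, bg_roi):
    """Rough SNR: (mean signal - mean background) / std of background."""
    signal = img[signal_roi].mean() - img[bg_roi].mean()
    noise = img[bg_roi].std()  # read-noise-dominated estimate (no shot noise)
    return signal / noise

# hypothetical ROIs -- use the SAME regions for both cameras' images
signal_roi = np.s_[100:150, 200:250]
bg_roi = np.s_[10:60, 10:60]

# img_lt and img_bsi would be your raw (unscaled) camera frames:
# print(quick_snr(img_lt, signal_roi, bg_roi))
# print(quick_snr(img_bsi, signal_roi, bg_roi))
```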

But there’s a lot to unpack in that simple ratio, and plenty of ways that your measurement might not be a fair/accurate comparison:

  • Signal: first off, it’s really important that you deliver exactly the same amount of light if you want a fair SNR comparison (otherwise, well, the signal isn’t the same). This is nearly impossible with biological samples due to bleaching, so just keep that in mind.
    • at the very least, switch up which camera you take the “first” image with.
    • if you have them, try using a sample that barely bleaches, such as beads, or fancy test slides.
    • try to keep the light path as similar as possible: definitely use the same filter sets, and try to use the same camera port and everything: take 1 image, then remove the camera from the port and put the other camera on in the same spot, and take the exact same image.
  • sampling & pixel size - don’t forget to consider pixel size and sampling. Bigger pixels will have better SNR (but at the expense of sampling). Fortunately, the Prime BSI and the Flash4.0 LT both have a pixel size of 6.5 µm, so you don’t need to take extra considerations for that.
  • read noise: this (along with QE differences) will likely be the primary difference you’re seeing between the two. However, it’s very important to check the readout mode you’re using on the two cameras (i.e. don’t use “fast readout” on one and “slow readout” on the other). At the slowest (most accurate) readout speeds, the LT is reported to have a read noise of 1.5 electrons (rms) / 0.9 electrons (median), while the BSI reports 1.1 electrons (rms) / 1 electron (median).
  • dark current - since you mentioned it: you’re likely not looking at major differences in dark current (which is the average number of electrons thermally generated per pixel during the exposure). The LT reports 0.6 e/pix/s and the BSI reports 0.5 e/pix/s. Not a huge difference, particularly if you’re not using many-seconds-long exposures.
  • noise and gain uniformity - this is a tough one to quantify easily, but it tells you how uniform the noise/gain characteristics are across the various amplifiers on the chip. Hamamatsu shows this as the “readout noise distribution” in their spec sheet; a related metric (for gain) is called pixel response non-uniformity (PRNU).
  • any additional denoising modes - notably, the BSI advertises a “PrimeEnhance” mode. You should not be using any active denoising if you want a fair comparison. I would double-check whether that was on, because the difference in your two images is larger than the specs would suggest.

From the specs alone (pasted below), you actually shouldn’t expect to see this big of a difference between those two cameras. The BSI does indeed have a QE of 95% vs the LT’s 82%, and that’s great, but it has a roughly similar expected read noise… and those images look like much more than a ~13% bump in SNR. So I’d double-check whether there is additional on-camera denoising or processing going on, since it doesn’t quite fit what you’d expect to see.
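if you want to put rough numbers on that expectation, here’s a toy per-pixel SNR model using the spec figures quoted above (it ignores dark current, PRNU, and any fixed-pattern noise, so treat it as a sanity check only):

```python
import numpy as np

def expected_snr(photons, qe, read_noise_e):
    """Toy per-pixel SNR: detected signal over shot noise + read noise."""
    signal = qe * photons                      # mean photoelectrons detected
    noise = np.sqrt(signal + read_noise_e**2)  # shot + read noise, in quadrature
    return signal / noise

for n in (5, 50, 500):
    lt = expected_snr(n, qe=0.82, read_noise_e=1.5)
    bsi = expected_snr(n, qe=0.95, read_noise_e=1.1)
    print(f"{n:4d} photons/pixel: LT SNR {lt:5.1f}, BSI SNR {bsi:5.1f}, "
          f"ratio {bsi / lt:4.2f}")
```

this predicts SNR ratios of only about 1.1-1.2x across that signal range, nowhere near the difference visible in your two images.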


BSI Spec sheet

LT Spec Sheet


as always, a quick response like this is an oversimplification. There are plenty of articles out there. One of the best resources is the Photon Transfer book by James Janesick, but that’s probably overkill here :). I wrote an easier introduction to some of this in a book a while back, and while it doesn’t go into depth on sCMOS cameras, many of the core concepts are still applicable.

(direct link to pdf)


All the things @talley says are good.
I would just take a step back, though, and start by refining the question as “better for what?”

Some examples of the “what” might be:

  • Imaging for illustrations vs. quantitation (or both)
  • Low light level imaging
  • Bright field imaging
  • Field of view
  • Temporal resolution (at what light level)
  • Linearity of response (again, over what light level range and frame rate).
  • Need for cooling, and how effective any built-in cooling is at stably reducing noise at the required light level
  • Ease of interface (software, file formats) to the user’s required work flow
  • Ease of computer control of camera parameters (and by what software)
  • Extent and ease of manual control over camera parameters
  • Does the end-user need to sync frame capture or frame rate with other hardware for their purpose and does the camera support this?
  • Need / availability of support from manufacturer (i.e. is user likely to require customisation?)

etc.


Thanks for the help @talley and @P_Tadrous!

@talley, your SNR calculation is pretty much how I explain it to people at the microscope when I’m telling them about SNR, so that feels validating for me!

I was also surprised by how much of a difference there was between the 2 cameras for this sample at this binning. I’m using 2x binning here for each camera, and I think that must be the reason; look at these images of the same cell at 1x binning:
Flash4:


Prime BSI:

It’s pretty much the same image, just a bit brighter for the Prime BSI, which I guess we would expect given the camera parameters. It’s super-weird that the Flash4 looks so bad at 2x binning (i.e. with the patterned noise I can see), because it looks perfectly fine at 4x binning!

@P_Tadrous Your point about application being the final metric is well taken. That’s actually why I was comparing the 2x binning mode (as well as 1x binning); I need to do live imaging of cells with extremely low signals, and I will need to bin pixels to detect the signals. I’m actually using 4x binning on the Flash4 at the minute, but unfortunately the Prime BSI doesn’t do 4x binning, so I can’t compare those directly.

ah! Actually… in general, binning on an sCMOS (while acquiring) is not going to gain you any signal-to-noise ratio over what you would have achieved simply by downsampling digitally (with summation or averaging) after acquisition. (This was not the case with CCD cameras, but it is with sCMOS.) When you bin on an sCMOS, it is generally done at the level of the FPGA, not in an analog fashion during the readout as it was with CCDs. So pretty much the only reason to bin on an sCMOS is to save the bandwidth of transferring the data from the camera to the computer. Unless you’re in that scenario (and definitely if you’re trying to compare two cameras), turn off the binning.
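to make that concrete, here’s roughly what that post-acquisition digital binning looks like (a minimal sketch; sum-binning, cropping if the image dimensions aren’t a clean multiple of the bin factor):

```python
import numpy as np

def bin_sum(img, factor=2):
    """Sum-bin a 2D image in software. Equivalent in SNR to sCMOS
    on-camera binning, since both just add already-read pixel values."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor      # crop to a clean multiple
    return (img[:h, :w]
            .reshape(h // factor, factor, w // factor, factor)
            .sum(axis=(1, 3)))

# e.g. compare a 1x-binned acquisition, binned in software,
# against the camera's own 2x-binned output:
# binned = bin_sum(raw_frame, factor=2)
```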

Comparing images based on how they look could be misleading unless you have ensured that the histogram settings are identical.

The actual data is an array of numbers coming from the camera that is rendered as an image on your screen. The histogram/display settings determine how the actual data are converted into the image. To be quantitative you should use the actual data as directly as possible.
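for example, rather than eyeballing rendered images, you can compare summary statistics of the raw arrays directly. A sketch assuming your acquisitions were saved as TIFFs (the filenames are placeholders, and tifffile is just one common reader):

```python
import tifffile

for path in ("flash4_lt.tif", "prime_bsi.tif"):   # hypothetical filenames
    data = tifffile.imread(path)
    print(f"{path}: dtype={data.dtype}, shape={data.shape}, "
          f"min={data.min()}, max={data.max()}, mean={data.mean():.1f}")
```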

Oh wow, yes! Because the noise is created in every pixel, it doesn’t get averaged out? So on a CMOS with 2x binning, each image pixel is 4*(signal + noise), whereas in a CCD you get (4*signal)+noise? I’ll have to go look at the un-binned images again.

yep, you’ve got it right! In a CCD, binning 2x2 gives you a 4x increase in signal while the read noise stays the same, so you get a 4x gain in SNR. In an sCMOS, binning 2x2 still gives you a 4x gain in signal, but since there are still 4 (noisy) read events, you have to add the noise together too. Uncorrelated noise adds in quadrature, so noise = sqrt(1^2 + 1^2 + 1^2 + 1^2) = 2, and binning on an sCMOS only gives you a 2x gain in SNR (and, again, it’s digital binning, so you could achieve the same thing in post-processing). Basically, just don’t bin on an sCMOS unless you’re actively trying to save disk space or you’re speed-limited by the bandwidth of data transfer between the camera and the computer.
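if you want to convince yourself of the quadrature bit, here’s a quick simulation of the read noise alone (the 1 e- rms per read event is an assumed number, not either camera’s spec):

```python
import numpy as np

rng = np.random.default_rng(0)
read_noise = 1.0          # e- rms per read event (assumed)
n = 1_000_000             # number of simulated 2x2 blocks

# sCMOS-style 2x2 binning: four independent read events, summed digitally
scmos = rng.normal(0, read_noise, (n, 4)).sum(axis=1)

# CCD-style 2x2 binning: charge combined on-chip, then one read event
ccd = rng.normal(0, read_noise, n)

print(f"sCMOS binned read noise: {scmos.std():.2f} e-  (quadrature: 2.00)")
print(f"CCD binned read noise:   {ccd.std():.2f} e-  (single read: 1.00)")
```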

I just tried to download the article you shared; it’s behind a paywall, so I’d be grateful if you could send it to me. Thanks!

here ya go (will add to the post above as well)


Agree with all this awesome advice from @talley and others. Signal-to-noise is a big one, and so is pixel size. As long as you have the sensitivity you need, I love small pixels: at low and intermediate magnification it’s great to have a small pixel size, and images look cleaner even with some light binning. For what it’s worth, they are both excellent cameras. I’m a bit more familiar with Hamamatsu and have to say, I have never, in 25 years, had a Hamamatsu camera malfunction; I’m still using a 15-year-old ORCA R2 and it is perfect.