Leica Thunder, and a comparison of a 4.2 sCMOS versus the Hamamatsu ORCA-Fusion

In my quest to build the cheapest high-end spinning disk microscope, I ended up demoing a Leica Thunder widefield system in my lab. A few people wrote to me asking for my impressions, so I will share them here. While I’m under no illusion that a widefield could replace a confocal, I was curious what Leica was doing in terms of image clearing/denoising, and whether this was just a Leica version of deconvolution or something else.

So here is the Thunder setup:
It’s on a fully automated new DMI8. No complaints with the stand at all. There’s a Lumencor LED at the back.

The Leica guys very kindly brought a Hamamatsu Fusion with a Hamamatsu Gemini. The Gemini is an image splitter: it lets you project two channels onto different parts of the camera chip. On the other camera port was a Leica-branded 4.2 sCMOS camera.

One of my favourite parts of this scope was the nifty Leica filter wheel (below). It was switching easily at 25 ms.

Last, here is the Thunder computer. It’s an HP Z4 with an Nvidia RTX 2080 graphics card and a sticker:

This was the first time I’ve used any modern version of a Leica microscope and its acquisition software. For whatever reason, in every institute I’ve worked at, the oldest microscope has always been a Leica, so I previously had a bad impression of their software. That wasn’t the case here. It’s easy to use and set up. It also has a few nice features, such as scanning an area at low mag to create a map of your sample that you can then “click” into at high mag. Convenient.

So what’s up with all this Thunder business? Leica has bundled three different image processing techniques into their software: Thunder on its own, deconvolution (I think they call it small volume clearing), and a combination of Thunder and deconvolution (again, I think they call it large volume clearing). The deconvolution does what it says and I won’t really talk about it here. The Thunder processing is definitely interesting, though.

Leica sells Thunder as a computational clearing method. Which it is. It works in 2D which is nice and it’s not deconvolution (at least it’s not fitting a PSF). It’s also very fast and can be done live during image acquisition.
Here’s a few examples of what it does on 2D images (this was a 100x):

And a zoomed in version:

In my opinion, Leica’s algorithm doesn’t seem to be making things up. The structures that it makes clearer are visible in the widefield image. However, my biggest complaint, and many people will share my view, is that there is no information about what Thunder is actually doing. It makes it a little hard to trust it. However, I will admit that it certainly does clear the image. I imagine this probably works really nicely for large opaque-ish samples.

This is definitely not a replacement for a confocal system but it’s a useful exercise to compare them. Try and guess which is the spinning disk and which is Thunder in the image below (both taken on a Fusion):

If you guessed that the bottom image is Thunder, you are correct. One giveaway is the “banding” around the edge of the nucleus. I’m not entirely sure what causes this, but maybe it’s the out-of-focus light at the edge of the nucleus; since these cells are pretty flat, that could be causing issues. I think Thunder does a pretty good job with the spots. This was the Thunder + decon processing method, which is why the Thunder images look so smooth.

Overall, I think Thunder is promising. I would like to know exactly what it is doing; otherwise it’s not really possible to justify using these images in a quantitative way. I believe you currently can’t buy the Thunder processing separately from the system as a whole, and that you have to use a PC from Leica as well. The system is a pretty inexpensive widefield with some nice image processing added on. It works well as a whole integrated system, and Leica have done a lot to make sure it is very easy to use. It could work well in imaging cores where confocals are booked up all the time and users just need a few nice pictures. I would also be curious to see a head-to-head between Thunder and, say, Nikon’s Denoise.ai.

In the next post I’ll show a comparison between a 4.2 sCMOS camera and the newish Hamamatsu ORCA-Fusion.


Thanks to the guys at Leica, I could also try out the newish Hamamatsu Fusion. The Fusion supposedly has a completely new sensor and is not made by Fairchild like the 4.2 sensor found in many sCMOS cameras. I couldn’t find any information about who actually produces the sensors that go into the Fusion for Hamamatsu. Here we are comparing it to a Leica-branded 4.2 sensor. Both have the same pixel size of 6.5 µm.

For the comparison, we removed the Gemini splitter from the Thunder setup above and just had a camera on each side. We then took images in the lowest-noise mode of each camera using the same settings. An example of the results is below.

I think the Fusion is a great improvement. For whatever reason, the contrast is greater and the Fusion picks up structures that the 4.2 misses. It may be hard to tell, but in the 4.2 image you can see the “columns” that are present on most CMOS sensors. These don’t exist on the Fusion. As a nice added bonus, the horizontal split line that is also usually present is absent on the Fusion. I would have loved to test the Fusion against a 95B as well. The last nice feature of the Fusion is that it has both USB3 and CoaXPress connections. This means I can start off with USB3 and later upgrade to CoaXPress when I have extra funds for the frame grabber.

Since the Fusion is only marginally more expensive than, say, a Hamamatsu Flash, this is definitely the camera that I will be going with.


Thanks for taking the time to put all of these posts together. I have really liked the wide area “click into” feature on the Zen software, so I am glad to hear it will also be on the Leica systems.

As with any image processing technique, I would be worried that the banding is part of their cleanup process; it sounds similar to the results of an unsharp mask filter.

Thanks for the great comparison, Andrew! For the ‘Thunder’, is it just on/off or can you control any parameters at all?

The results you show reminded me of what you get using the ‘Dehaze’ feature available in Adobe Lightroom, so I applied that to your figure. The left column is widefield, the upper right is Thunder, and the lower right is Lightroom. I’d say that’s close…

Here are the settings I used in Lightroom: 80% Dehaze, 80% Clarity, 80% Texture, 80% Shadows, 80% Blacks, 10% Contrast.


Thank you, Andrew, for such a detailed comparison! We are in the process of demoing the Andor Dragonfly 500. I will post some updates here as well!

The ORCA-Fusion looks great; I’ve always had great success with Hamamatsu sensors! Regarding Thunder, I’ve seen it on two Leica systems, a high-end system and a fluorescence stereoscope. I think it was promising, but it seemed ‘heavy handed’ when trying it on the stereoscope with different samples. You’re right, Thunder is sold as a separate ‘solution’; it’s a bit pricey and runs on a PC you must buy from them, which I get, but it is expensive. Recently, I’ve seen the Olympus deconvolution in 3D and it seems a bit better in my opinion. It seems to me that the nature of what’s being imaged plays a role (if that makes sense): some samples may look great with one method and not as good with another. I guess having access to many choices of image processing software is the goal!?


Hi meatball,
I’d love to see how the Dragonfly compares. That would be great!


@JamesOrth, can you elaborate on what you mean by heavy handed for the stereoscope? We’ve seen some Thunder data from an inverted system and it looked great. For reference, we use Microvolution for all our decon and it works really well.

Thunder separates the signal into ‘in focus’ and ‘out of focus’ light and uses wavelet transforms, which makes it quite different from more traditional deconvolution algorithms. I’m not sure if it uses wavelet transforms instead of, or in addition to, Fourier transforms to determine the result. Because of that, I’m really intrigued to know where the algorithm breaks down. We’re all likely familiar with identifying decon artifacts, and it would be great to know what happens to Thunder in non-ideal conditions.

Also, I really don’t like the term ‘Computational Clearing’; it’s very misleading. You know you’re going to get people asking you to computationally clear their mouse brains.


Perhaps it was the sample or the imaging parameters that contributed to the issue, but the images looked over-processed: dark edges around objects, some artificial structure, etc. I kept talking about tweaking the parameters, but it didn’t happen; the “recommended settings” were being used.

And I couldn’t agree with you more, “computational clearing” isn’t the best term, but I’m sure marketing likes it, as clearing is hot.



That’s very interesting about lightroom. The effect is very similar :face_with_monocle:
I got some more info on what Thunder is actually doing. Below is a screengrab of the settings page for Thunder: you can change what they call the feature scale, and you can change the strength.

As I understand it, strength is how much background you will subtract, and the feature scale is like the size of a rolling ball filter (though here it is Gaussian fitting rather than a rolling ball). Below are a few examples showing what the feature scale does when set very small, very large, or optimal (the size of objects in your images).
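For anyone who wants to play with the idea, here is a minimal numpy/scipy sketch of that kind of background subtraction. To be clear, this is my guess at the general scheme, not Leica’s actual algorithm; the function name, defaults, and use of a plain Gaussian blur as the background estimate are all my own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def subtract_background(image, feature_scale=10.0, strength=1.0):
    """Toy 'computational clearing': estimate the background with a
    Gaussian blur whose sigma plays the role of the feature scale,
    then subtract a 'strength' fraction of it. Clipping keeps the
    result non-negative. (Hypothetical sketch, not Leica's method.)"""
    img = np.asarray(image, dtype=float)
    background = gaussian_filter(img, sigma=feature_scale)
    return np.clip(img - strength * background, 0.0, None)
```

With a feature scale much smaller than your objects, the "background" estimate starts tracking the objects themselves, and subtracting it eats into real structure at their edges; that matches the intuition that too low a feature scale causes artifacts.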

@Research_Associate It seems like the banding artifact that I was getting sometimes is from choosing too low a setting for the feature scale.


Hi @Andrew_Seeber!

Thanks for sharing your impressions.

Your last post caught my eye, as those settings and results look really similar to those of an unsharp mask filter, like the one available in ImageJ (and Fiji).

Here’s a quick comparison. I converted the images from the post to 16-bit before applying the filter, but obviously that’s not the same as processing the native 16-bit images. That’s probably the reason for the saturation in some areas. You could try it on the original files, though.
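For reference, the unsharp mask I’m talking about is, as far as I understand the ImageJ filter, just "subtract a weighted Gaussian blur and renormalise". A small numpy/scipy version (the radius and weight defaults are only illustrative, and this is my reading of the formula rather than ImageJ’s exact code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def unsharp_mask(image, radius=5.0, weight=0.6):
    """ImageJ-style unsharp mask: result = (I - w * blur(I)) / (1 - w).
    The division renormalises so that flat regions keep their original
    intensity while edges get an over/undershoot that sharpens them."""
    img = np.asarray(image, dtype=float)
    blurred = gaussian_filter(img, sigma=radius)
    return (img - weight * blurred) / (1.0 - weight)
```

The overshoot on either side of an edge is exactly the dark/bright halo people are describing around the nuclei.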




Thanks for that comparison with the unsharp mask, @Research_Associate mentioned this as well.

I did a quick comparison with the raw data using the mask settings you used:

The effect is pretty similar, though the Thunder presumably has some extra processing in there, as the image looks a bit more even.

I only have images of cells unfortunately. It would be nice to compare how Thunder deals with a thicker sample versus an unsharp mask.


Oh, I had totally missed @Research_Associate’s comment on unsharp mask. Thanks for pointing that out.

The results for Thunder do look more even, although the 0.9 weight setting was a little on the heavy side (I bumped it up to get a similar look, but that was probably due to the 8-bit original). A value of 0.6 sounds more reasonable.

Thunder probably does something in the realm of unsharp mask, perhaps using a slightly different approach. Whatever it is, it has to be fast. The unsharp mask does a pretty good job, nonetheless.

It would indeed be nice to see Thunder applied to other samples.



For what it’s worth, and since I don’t think it’s been mentioned: there is more info on what Thunder is doing in the white paper. From the equations in there, it sounds very much like a fancy high-pass filter (i.e., it finds and subtracts low-frequency components). So it’s not terribly surprising that you’d see similar results with these unsharp mask and dehaze functions. I encourage you to grab that white paper if you want to know (a bit) more.
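If it really is "find and subtract low-frequency components", the simplest caricature is a hard Fourier-domain high-pass. A toy numpy sketch (my own illustration, not from the white paper; the cutoff is arbitrary, and a real implementation would use a smooth mask to avoid ringing):

```python
import numpy as np


def high_pass(image, cutoff=0.05):
    """Toy high-pass filter: zero out spatial frequencies below
    'cutoff' (in cycles/pixel) and transform back. What survives is
    the fine 'in focus' detail; the smooth haze is removed."""
    img = np.asarray(image, dtype=float)
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    keep = np.hypot(fx, fy) >= cutoff  # boolean mask of kept frequencies
    return np.fft.ifft2(f * keep).real
```

Note that a uniform image comes out as all zeros, since its only content is the DC component; that is the sense in which any such filter "clears" diffuse background.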


@talley Good point about that white paper. Leica shared it with me; it doesn’t look like we can add .pdf attachments to this forum, so I’ll share the relevant page.