Has anyone here tried Zeiss’s new-ish “Airyscan Joint Deconvolution” module? It’s supposed to improve spatial resolution on top of regular Airyscan processing. I can’t find any information about the theory behind it, so I’m hoping someone here has some experience with it. Is it reliable? Is it worth the money?
Hi @smcardle , Zeiss has some info about Airyscan Joint Deconvolution in their “Practical Guide of Deconvolution” starting on page 16.
I have tried Zeiss’s Airyscan joint deconvolution (jDCV) software, and completely echo your sentiment that there is a resource vacuum for navigating this new module. The document @WillGiang shares above is the most I could find as well, and in my opinion it is unfortunately very marketing-forward. That said, it does contain the bare minimum to get you started: the premise of the algorithm, the input parameters in the jDCV software, and some troubleshooting. I will attempt to fill in some gaps I have figured out from my own personal experience tinkering with Airyscan jDCV!
Airyscan jDCV combines Airyscan processing with an accelerated Richardson-Lucy iterative deconvolution algorithm. Zeiss advertises that Airyscan can get you down to 120 nm resolution, and combined with jDCV it can get you down to 90 nm (a theme of this post: the keyword is can, not will). Since Airyscan processing and jDCV are both computational tools, Airyscan jDCV allows Zeiss LSM980 users to achieve super-resolution without complex imaging setups or special sample prep.
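Zeiss doesn’t publish their exact implementation, but the core (unaccelerated) Richardson-Lucy update is textbook material. As a rough illustration of what each “iteration” is doing under the hood (this is generic RL, not Zeiss’s actual code):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=10, eps=1e-12):
    """Plain (unaccelerated) Richardson-Lucy deconvolution."""
    psf_mirror = psf[::-1, ::-1]                    # flipped PSF for the correction step
    estimate = np.full(image.shape, image.mean())   # flat, positive initial guess
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(reblurred, eps)  # measured data vs. re-blurred estimate
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# toy demo: blur a point source with a Gaussian PSF, then deconvolve it
point = np.zeros((21, 21)); point[10, 10] = 1.0
x = np.arange(-2, 3, dtype=float)
psf = np.exp(-(x[:, None]**2 + x[None, :]**2) / 2.0)
psf /= psf.sum()
blurred = fftconvolve(point, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=20)
```

Each iteration sharpens the estimate, which is exactly why the iteration count (below) matters so much: too few and you leave resolution on the table, too many and noise gets amplified into artifacts.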
Here’s how Airyscan jDCV works in practice:
Acquire an Airyscan SR z-stack (minimum of 5 slices) in Zen blue. Files acquired in Zen black will not be compatible with the jDCV module, sadly. You also must acquire in SR mode.
Select the Airyscan joint deconvolution processing option in the Processing tab.
Input “sample type” OR “number of iterations”. Zeiss made the “sample type” input to keep the software plug-and-play. The options are dense (5 iterations), standard (20 iterations), and sparse (45 iterations); alternatively, you can set a custom number of iterations. Zeiss recommends starting with standard. Zeiss says the two inputs are equivalent, but you do have to specify a sample type even if you enter a custom iteration number.
Input “quality threshold” (range: 0 to 0.1). The software stops iterating when the difference in fit quality between successive iterations falls below this value.
Decide whether or not you want different settings for each channel.
Click apply. Processing takes longer for larger files and higher iteration settings (45 iterations on a 100-slice z-stack takes about 7 min on a high-RAM computer).
The output file is your Airyscan-processed, jointly deconvolved image. Save the raw file too so you can optimize input parameters later.
Here’s my 2 cents on Airyscan jDCV so far:
- The 90 nm lateral resolution claim is a marketing tactic. I have nearly replicated it with fluorescent beads (estimated ~100 nm lateral resolution), but to do that I have to run at least 20 iterations of jDCV. When I apply those same settings to cell images, I get overprocessing artifacts galore! There are sample prep factors that limit the resolution I can achieve, such as the refractive index mismatch between my immersion medium and imaging medium, but those would not account for a ~50 nm difference. So yes, you can get down to 90 nm resolution, but your result might not be real.
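For anyone wanting to check the bead numbers themselves, here is the kind of quick estimate I mean: measure the FWHM of a bead’s intensity line profile by linearly interpolating the half-maximum crossings (the profile and the 40 nm pixel size below are just toy placeholders):

```python
import numpy as np

def fwhm_nm(profile, pixel_size_nm):
    """Estimate FWHM of a 1-D bead intensity profile, interpolating
    the half-maximum crossings to sub-pixel precision."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    left, right = above[0], above[-1]
    # rising edge: crossing sits between (left - 1) and left
    x_left = (left - 1) + (half - p[left - 1]) / (p[left] - p[left - 1])
    # falling edge: crossing sits between right and (right + 1)
    x_right = right + (p[right] - half) / (p[right] - p[right + 1])
    return (x_right - x_left) * pixel_size_nm

# toy check: Gaussian profile with sigma = 1 px at 40 nm/px
# (true FWHM = 2.355 * sigma * pixel size = ~94 nm)
xs = np.arange(21, dtype=float)
profile = np.exp(-((xs - 10.0) ** 2) / 2.0)
width = fwhm_nm(profile, 40.0)  # ~98 nm; linear interpolation slightly overestimates
```

Keep in mind the bead size itself broadens the measured profile, so sub-diffraction beads (~100 nm or smaller) give the fairest test of the resolution claim.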
- Do not follow Zeiss’s recommendation of starting with the standard sample type (20 iterations) - that is way too many iterations, at least for a biological sample! Instead, use the number-of-iterations input. Start with 5 iterations and work your way up to 10 or 11; I have not found a need to exceed 11 for my cell data. You will need to use your best judgement to identify deconvolution artifacts. A rule of thumb: if grainy objects appear in your jDCV image that weren’t there originally, those are probably artifacts. The only way I can think of to validate them as a real finding is to use a different super-resolution technique, such as STED or STORM, to confirm you see the same organization.
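To make that rule of thumb a bit more systematic, here’s a rough sketch of how you could flag “new” grainy spots: count bright local maxima in the deconvolved image that have no nearby counterpart in the pre-jDCV image. The `size` and `rel_thresh` values are arbitrary placeholders you’d have to tune per dataset:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def new_peak_count(before, after, size=3, rel_thresh=0.2):
    """Count bright local maxima in `after` that have no nearby peak in
    `before` - a crude proxy for grainy deconvolution artifacts."""
    def peaks(img):
        # local maxima above a fraction of the image maximum
        return (img == maximum_filter(img, size=size)) & (img > rel_thresh * img.max())
    # dilate the `before` peaks so nearby matches don't count as new
    before_nearby = maximum_filter(peaks(before).astype(float), size=size) > 0
    return int((peaks(after) & ~before_nearby).sum())
```

A count of zero doesn’t prove the result is real, and a nonzero count doesn’t prove it’s fake - but a sudden jump in this number as you raise the iteration count is a red flag worth a closer look.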
- When you find a good iteration number for your data, you can play with the quality threshold setting to enable bulk processing of files, without worrying about overprocessing the ones whose ideal iteration number differs from the rest. I personally would love it if Zeiss released more information about the quality threshold, because it is still unclear what this value ranging from 0 to 0.1 actually refers to. I have only experimentally determined that 0.1 “seems fine”.
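My working guess, based on the wording in Zeiss’s guide, is that the threshold acts as a relative-change stopping rule: keep iterating until successive estimates barely differ, then stop early. Something like the sketch below - the specific metric is an assumption on my part, not Zeiss’s documented behavior:

```python
import numpy as np

def iterate_until_threshold(update_step, estimate, max_iter=45, threshold=0.1):
    """Run `update_step` repeatedly, stopping early once the relative
    change between successive estimates drops below `threshold`.
    (A guess at what Zeiss's "quality threshold" might be doing.)"""
    for i in range(max_iter):
        new = update_step(estimate)
        change = np.abs(new - estimate).sum() / max(np.abs(estimate).sum(), 1e-12)
        estimate = new
        if change < threshold:
            break
    return estimate, i + 1

# toy update that converges geometrically toward 1.0
result, n_used = iterate_until_threshold(lambda x: 0.5 * x + 0.5, np.array([2.0]))
```

If this guess is right, a larger threshold stops sooner (gentler processing) and a smaller one runs closer to the full iteration count - consistent with 0.1 “seeming fine” as a conservative default.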
- Take the time to adjust iteration settings per channel. Your ideal number of iterations depends on the distribution of signal in the image: a channel whose signal already has well-defined edges can probably tolerate higher iteration numbers, while a channel with diffuse signal will be more susceptible to overprocessing. Setting optimization and validation take a lot of time, but they are important for making sure your results are real!
- Airyscan jDCV is not magic. Deconvolution is not magic; it will not magically make all your confocal data look like STED or STORM, and deconvolution in general is not meant to do that. The best I’ve achieved with Airyscan jDCV on my cell samples is a slight improvement in resolution (~5-10 nm at best) and a slight improvement in signal-to-noise ratio (SNR). In some cases I do get a result that resembles the signal organization in published STED images for filamentous proteins like F-actin, but it is very sample-dependent. Resolution and SNR are already incredible with Airyscan processing as is for confocal. Is jDCV worth it? It depends on the scientific question you are trying to answer. If you’re imaging fluorescent beads, you’ll have a great time with Airyscan jDCV.
To directly answer your questions now:
- Is it reliable: Yes, in the sense that it will slightly improve image quality - but only with ample optimization and validation effort. It will likely not deliver 90 nm resolution on biological samples.
- Is it worth the money: It very much depends on the scientific question you are trying to answer. Personally, I consider Airyscan jDCV to be in beta because there are so few resources from Zeiss on how to use it well. Why not Airyscan-process files and then run them through Huygens deconvolution software instead of Airyscan jDCV? Why not try a free, open-source deconvolution tool that uses the same algorithm? I do not know the answers to these questions, but I hope that Zeiss and fellow microscopists with access to Airyscan jDCV plus other deconvolution options can eventually fill in the blanks! So personally, I would say it’s worth the money in about 3-5 years.
In conclusion, deconvolution is not magic, and Zeiss combining Airyscan and deconvolution with fancy resolution claims + a steep price tag does not change that. I hope that Zeiss will eventually update Airyscan jDCV with (1) more resources on troubleshooting and usage, (2) a standard sample type of 5 iterations instead of 20, and (3) a lower price tag so it’s a little more accessible for the community to explore. As more scientists venture into deconvolving array-based point-scanner confocal data, I hope there are more efforts to demystify deconvolution’s limitations, and more resources to systematically identify and avoid artifacts from overprocessing. Regardless, I hope my tidbits are helpful in your potential consideration and exploration of Airyscan jDCV! Good luck!
Wow, thank you so much! This is really helpful!
It saddens me to see professional scientists reduced to poking and prodding a closed source commercial black box algorithm in order to find out how it works by studying stimulus-response characteristics the same way they would poke and prod a cell.
With cells, that’s all we have - because we genuinely don’t know how aspects of biology work. Those secrets are hidden by/in nature, and it’s our job to find the answers.
With algorithms, someone, somewhere DOES know in absolute reductionist detail how it works - they’re just not telling us (in the case of closed-source commercial software).
I hope this will become a relic of the past as more scientists adopt open-source solutions instead of splashing the grant cash on black boxes.