Hi, I wonder how many iterations should be used when doing deconvolution with iterative algorithms (for example, Richardson-Lucy deconvolution).

In some situations the deconvolution will converge to a fixed result and performance stops improving. However, noise amplification can occur: if the number of iterations is too small we do not get the optimal result, and if it is too large the noise will dominate. The choice of an appropriate number of iterations therefore has a crucial impact on the result.

How do you decide the optimal number of iterations for an iterative method?
Is it just a subjective judgment, or are there objective criteria, such as quantitative metrics?

There is always - and I mean always - an element of subjectivity in making this decision. Knowing that, rather than seeking a purely objective rule, base your considered, expert (and yes, subjective) judgement on one or more of the following:

You need a clear, and preferably quantifiable, way of knowing what the purpose of deconvolution is in your particular experiment.

Always use the minimum number of iterations that will allow you to achieve the goal of your experiment (noise will always increase with iterations).

If you are making comparative measurements between several similar datasets, then the number of iterations (and all other decon parameters) should normally remain the same across all datasets. There may be exceptions for datasets that are not all that similar - the details of the experiment dictate the approach.

If the goal of your experiment is essentially aesthetic then use the minimum number that gives the most pleasing eye-candy result. The deconvolution packages of the free and open source BiaQIm Image Processing Suite (BIPS) allow you to see the intermediate image (and PSF, in the case of blind methods) results as the iterations progress, and this can be a useful visual monitoring tool for such purposes.

If your decon has a measurement goal (i.e. not just eye-candy) then you may wish to terminate it when a certain metric on the current solution is satisfied - for example a target SNR, a measure that correlates with noise, or whatever makes your desired end-goal measurement optimal.

The above point touches on the fact that the type of decon algorithm, the type of regularisation, the type of image/signal being deconvolved and its original SNR, the type of noise in your input images, and any other convergence criteria available in your software (appropriately set) can all dictate when the iterations end - so you need not decide when to stop based only on a maximum number of iterations. In that case the maximum number of iterations is just a fail-safe, one of several criteria that define the limits of the deconvolution.

These are just the main principles. Details are always dictated by the specifics of a given experiment.
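To make the "max iterations as fail-safe" idea concrete, here is a minimal 1-D Richardson-Lucy sketch in NumPy. It is purely illustrative (not code from BIPS or any other package): the loop ends when a relative-change convergence criterion is met, and the iteration cap only kicks in if that criterion never triggers.

```python
import numpy as np

def richardson_lucy(data, psf, max_iter=500, tol=1e-5):
    """1-D un-accelerated Richardson-Lucy where max_iter is only a fail-safe.

    Iteration stops early once the relative change between successive
    estimates drops below `tol` - the convergence criterion, not the raw
    iteration count, normally ends the loop.
    """
    est = np.full_like(data, data.mean())
    for i in range(1, max_iter + 1):
        conv = np.convolve(est, psf, mode="same")
        ratio = data / np.maximum(conv, 1e-12)  # avoid division by zero
        new_est = est * np.convolve(ratio, psf[::-1], mode="same")
        rel_change = np.linalg.norm(new_est - est) / max(np.linalg.norm(est), 1e-12)
        est = new_est
        if rel_change < tol:
            break
    return est, i

# Illustrative synthetic test: a single point source blurred by a Gaussian PSF.
truth = np.zeros(64)
truth[20] = 1.0
x = np.arange(-4, 5)
psf = np.exp(-x**2 / 2.0)
psf /= psf.sum()
data = np.convolve(truth, psf, mode="same")

est, n_iter = richardson_lucy(data, psf)
print(f"stopped after {n_iter} iterations")
```

The tolerance value here is arbitrary; in practice it (and any other stopping metric) would be chosen to suit the experiment, per the principles above.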
To learn more about the free BIPS software see this post on ImageSC forum:

The advice from @P_Tadrous is good. I teach a Python deconvolution course once every year or two, and in it I’ve done some experiments on (un-accelerated) Richardson-Lucy deconvolution.

One experiment I’ve done is to simulate a bead image, then test how many RL iterations it takes to restore the true intensity (and whether or not regularization is needed). See the notebook here. In this case it took about 500 iterations to restore the true intensities, and total variation regularization was needed to suppress noise.
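A stripped-down 1-D version of this kind of experiment can be sketched as follows. This is not the notebook code - the "bead" is just a plateau of known intensity and all parameters are illustrative - but it shows how slowly the peak intensity creeps back toward the true value under un-accelerated RL:

```python
import numpy as np

def rl_step(est, data, psf):
    """One un-accelerated Richardson-Lucy update (1-D sketch of the 3-D case)."""
    conv = np.convolve(est, psf, mode="same")
    ratio = data / np.maximum(conv, 1e-12)
    return est * np.convolve(ratio, psf[::-1], mode="same")

# Hypothetical 1-D "bead": a narrow plateau of known true intensity 100.
truth = np.zeros(128)
truth[62:66] = 100.0

# Gaussian PSF wider than the bead, normalized to sum to 1.
x = np.arange(-9, 10)
psf = np.exp(-x**2 / 18.0)
psf /= psf.sum()
data = np.convolve(truth, psf, mode="same")  # noise-free for clarity

est = np.full_like(data, data.mean())
peaks = {}
for i in range(1, 501):
    est = rl_step(est, data, psf)
    if i in (20, 100, 500):
        peaks[i] = est.max()
        print(f"iter {i:3d}  peak intensity = {est.max():.1f}  (truth = 100)")
```

Tracking a known quantity like peak intensity against iteration number is exactly the sort of objective criterion the original question asks about.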

Below are line intensity profiles through 3 synthetic beads: purple is truth, blue the noisy and blurred image, orange 200 iterations of regularized RL, red 500 un-regularized iterations, and green 500 regularized iterations.

I obtained similar results on a synthetic nuclei image here (500 RL iterations to converge).

I ran a similar test on the EL bead image from here. In this case I was also interested in testing spherical aberration correction. You can find the notebook here.

This was a real image with fairly high SNR, so the image kept getting subjectively sharper up to 2000 iterations. However, I believe the volume measurements are accurate at 400 iterations and usable at 80. Below are axial cross sections (YZ) of the bead deconvolutions with different iteration numbers and sample refractive index values.

For example, if you apply an Otsu threshold to the original bead and measure the axial size, you get a measurement that is 2X too large, but it is correct after 500 iterations and close to correct after 80. See below for the Otsu threshold of the bead and deconvolution.
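The size-overestimation effect is easy to reproduce in 1-D. The sketch below is illustrative only (a minimal Otsu implementation applied to a synthetic profile, not the notebook code): thresholding the blurred bead measures it as larger than its true extent.

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Minimal Otsu threshold: maximize between-class variance over bin edges."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float)
    csum = np.cumsum(w)
    cmean = np.cumsum(w * centers)
    best_t, best_var = centers[0], -1.0
    for k in range(1, nbins):
        w0, w1 = csum[k - 1], csum[-1] - csum[k - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cmean[k - 1] / w0
        m1 = (cmean[-1] - cmean[k - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k - 1]
    return best_t

# Hypothetical 1-D axial profile of a bead: true extent is 4 samples.
truth = np.zeros(128)
truth[62:66] = 100.0
x = np.arange(-9, 10)
psf = np.exp(-x**2 / 18.0)
psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same")

true_size = int((truth > 0).sum())
measured = int((blurred > otsu_threshold(blurred)).sum())
print(f"true size = {true_size}, Otsu-measured size on blurred data = {measured}")
```

Deconvolving before thresholding shrinks this measured size back toward the true extent, which is the effect described above.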

Perhaps you can think of ways to do similar experiments for your images. It shouldn’t be too hard to come up with iteration guidelines especially if you have objects for which you know the true structure.
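For instance, one such experiment (again a purely synthetic 1-D sketch with illustrative parameters, not code from the posts above) is to blur and noise a known object, run un-accelerated RL, and track the error against ground truth as the iterations progress - with enough noise, the error eventually bottoms out and rises again, which is why the iteration count matters:

```python
import numpy as np

rng = np.random.default_rng(0)

def rl_step(est, data, psf):
    """One un-accelerated Richardson-Lucy update in 1-D."""
    conv = np.convolve(est, psf, mode="same")
    ratio = data / np.maximum(conv, 1e-12)
    return est * np.convolve(ratio, psf[::-1], mode="same")

# Known ground truth: a single bright point on a dark background.
truth = np.zeros(64)
truth[32] = 100.0

# Gaussian PSF normalized to sum to 1.
x = np.arange(-6, 7)
psf = np.exp(-x**2 / 8.0)
psf /= psf.sum()

# Blur, add a little Gaussian noise, and clip (RL expects non-negative data).
data = np.convolve(truth, psf, mode="same") + rng.normal(0.0, 0.5, truth.size)
data = np.clip(data, 0.0, None)

est = np.full_like(data, data.mean())
for i in range(1, 201):
    est = rl_step(est, data, psf)
    if i in (10, 50, 200):
        rmse = np.sqrt(np.mean((est - truth) ** 2))
        print(f"iter {i:4d}  RMSE vs truth = {rmse:.2f}")
```

At this low noise level the error keeps falling; raising the noise makes the minimum appear at a finite iteration count, giving an objective iteration guideline for that noise regime.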