Extracting Super-resolution Structures inside a Single Molecule or Overlapped Molecules from One Blurred Image

https://www.biorxiv.org/content/10.1101/798934v1

This study’s main idea (the “resolvable condition”) is strongly inspired by, and closely related to, the imaging conditions of classic super-resolution techniques. It has undergone a long period of self-questioning and self-defense in its logic. The conclusions are also supported by simulation experiments, but some of them contradict our previous knowledge. We are therefore not 100% sure about it and would like to hear criticism and communicate with peers. Any advice, questions, criticism, and other comments are welcome!

There is a brief slide deck that illustrates the paper intuitively; it is helpful for getting the main idea in a few minutes. Please browse it first, especially if you are busy. (Link: https://www.biorxiv.org/content/10.1101/798934v1.supplementary-material)

The experiments are very easy to replicate because the source code (including data) has also been submitted. (Link: https://www.biorxiv.org/content/10.1101/798934v1.supplementary-material)

Hi Edward,

I’ve read through your preprint and looked through your code a bit, and I have to be honest: I am still a bit confused. I can’t tell if I’ve fundamentally misunderstood what you are proposing, or if you have misunderstood some of the details of image formation in fluorescence microscopy. I have two main comments:

1) As I read it, one of the points you are making is that we can derive a unique solution to the inversion problem in a bandwidth-limited system (i.e. recover the “ground truth” exact position of signal emitters in an image) IF the following conditions are met:

  • we have absolute knowledge of the underlying point spread function
  • we have an image that is essentially noise free, or has extremely high SNR
  • we have sampled the image appropriately

… and I think mostly I agree with all that. As a thought experiment, I do think people often don’t appreciate that these practical limits are more fundamental to the super-resolution problem than diffraction alone. As you say, “even low frequency components carry full details of a sample’s spatial structure… and [that] seems inconsistent with traditional opinions.”

But in practice those limitations are really severe! Getting a perfect representation of the PSF is nearly impossible once the sample distorts the wavefront. More importantly, noise and pixelation are hard limits here. Even if we had a perfect camera with infinitesimal pixel size, shot noise and the dynamic range of the detector severely limit this “deconvolution”-type approach (or, in your case, the ability to solve that system of equations with a unique solution). For more details, I’m fond of this paper:
https://onlinelibrary.wiley.com/doi/abs/10.1046/j.1365-2818.1998.00290.x

2) However, what I’m really confused by is that (if I understood correctly) you seem to be arguing that we can get more information out of single molecule fluorescence images by what essentially amounts to deconvolving the image of the single molecule to resolve molecular structure. You say:

in a Single-Molecule-Localization microscope, the observed image of individual molecules is blurred, and the pixel values do not show the molecules’ detailed structure directly.

and you show this image, suggesting that the “true” image is on the left and the observed image is on the right

and later state …

From this point of view, this technique could be treated as the extension of existing techniques such as Single-Molecule Localization, and it further “split” a single point into 2 × 2, 3 × 3 or more points, i.e., the ROI … to, for example, further resolve the inner details of individual molecules, fluorescent probes or tiny light sources after localization them.

So I just want to make sure that we’re all on the same page here that panel A in your image above is definitely not the ground truth that we are trying to recover in a single molecule fluorescence image. That is: the entire protein molecule is not actually luminescent. Instead, the actual photons from a single fluorophore are only emitted by the chromophore (a set of conjugated bonds that is a small fraction of the total molecule). And the very best we could theoretically do is to perfectly localize the chromophore on the molecule, but we will never be able to further resolve the inner details of the “full” individual fluorescent molecule, because it does not actually emit a signal. This is related, I suppose, to the fundamental limitation in SMLM on labeling density. We can’t resolve things that we can’t label with a density above Nyquist sampling (and we certainly can’t label the inner details of the fluorophore itself with more luminescent point sources).

Let me know if I misunderstood your argument there, and if you weren’t actually trying to say that we could get “sub-molecule” information from a single fluorophore by a deconvolution-like process.

cheers,
Talley


Pretty much this. I would be very concerned about any claims of molecular structure of a single fluorophore.

@talley On a complete side note, I notice that links clicked on this forum open in the current tab (closing the forum), while links on the image.sc forum open in a new tab. I prefer the second functionality, but was wondering if it was intentional.

Hi Edward,

I’ve also read the preprint and I am also a bit confused. In particular, it wasn’t clear to me exactly why isolated lighting is important. You might consider providing an example or two of when this criterion is not met and how that causes the approach to fail. It sounds like it might be related to ideas from the compressed sensing field, though you seem to explicitly include a known sparseness in your examples, for example Figure 2, where you start from the knowledge that there are exactly two sources.

That said, to me this work seemed to confuse mathematically solvable with practically solvable. Sure, if you know the PSF exactly and there is no noise then you can recover the signal by a simple inversion, for example by dividing the Fourier transform (FT) of the measurement by the FT of the PSF. However, no one does this and expects it to work on real images because, as Talley said, PSF distortions and noise will make the result almost meaningless.
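Just to make that concrete, here is a toy NumPy sketch (my own example, not code from the preprint), using a Gaussian stand-in PSF whose OTF has no exact zeros. With a perfectly known PSF and zero noise the Fourier-division estimate is essentially exact; add even a tiny amount of noise and the same division blows up wherever the OTF is small:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = np.arange(N)

obj = np.zeros(N)
obj[[100, 104, 150]] = [1.0, 0.7, 1.2]        # toy "ground truth" point sources

# Gaussian stand-in PSF; its OTF is strictly positive, so the division is defined.
psf = np.exp(-0.5 * ((x - N // 2) / 2.0) ** 2)
psf /= psf.sum()
otf = np.fft.fft(np.roll(psf, -N // 2))

blurred = np.fft.ifft(np.fft.fft(obj) * otf).real

# Noise-free measurement: direct Fourier division recovers the object almost exactly.
est_clean = np.fft.ifft(np.fft.fft(blurred) / otf).real

# A *tiny* amount of additive noise: the same division amplifies it catastrophically
# at the frequencies where the OTF is small.
noisy = blurred + rng.normal(scale=1e-6, size=N)
est_noisy = np.fft.ifft(np.fft.fft(noisy) / otf).real

print("max error, noise-free:", np.abs(est_clean - obj).max())
print("max error, noisy:     ", np.abs(est_noisy - obj).max())
```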

best,
-Hazen


Hello Dr. Talley,

Thank you very much for your attention and thorough analysis! Please allow me to respond to your questions one by one.

1) Yes, this study assumes that there is no noise, or only low noise. To be frank, we do not believe this paper can solve all the problems perfectly; we treat it as a beginning rather than a perfect ending. Noise is a hard issue in all kinds of fields, and it is a relatively independent research topic (or even area). On the one hand:

In our algorithms, the only input data are the observed image and the PSF image; any other factors (e.g., shot noise, wavefront distortion, detector limitations) can be attributed to noise or distortion in these images. Therefore, our paper treats noise as an important future direction. Given the continuing advancement of denoising technology, we believe it can be solved or improved gradually.

On the other hand:

Based on the above considerations, the input data are assumed to be accurate in this study. Our major concern is then: if the distance between two points is smaller than the diffraction limit, and they are imaged simultaneously by a conventional light microscope, are they resolvable in the same image? According to classic theories (the diffraction limit and the Rayleigh criterion), the answer would be no. Of course, existing super-resolution techniques have already achieved resolutions beyond the diffraction limit, but to our understanding, adjacent points (or different frequency components) are imaged at different times. However, this study finds an exceptional condition (termed the “resolvable condition”) under which such points can be resolved directly. Under this condition, neither profile nor detail information is destroyed by diffraction, so it can be recovered reversibly from a diffraction-blurred image (i.e., an image without high-frequency components). This condition is closely related to the imaging conditions of existing super-resolution techniques. A method is then proposed, based on this condition, which can in principle achieve arbitrarily high resolution.

2) First of all, please allow me to address this question specifically. In our section “2.1. Background analysis” (page 3), we write: “In many other cases, observed signals are used to carry information. Example 2: in a Single-Molecule-Localization microscope, the observed image of individual molecules is blurred, and the pixel values do not show the molecules’ detailed structure directly.” These sentences are not associated with any image. Then, in section “2.2. Method for spatial domain” (page 8), we use the figure: “Fig. 3. The 2D situation of the spatial domain method. (a) Before convolution (the ideal image). (b) After convolution (the observed image). The two dashed-line rectangles indicate the ROI.” This is a general example of a ground-truth image and the corresponding blurred image. We do not claim that it is a fluorescence image; it is only used to illustrate our method and is not used in the experiments section. In fact, random images are used in our simulation experiments to ensure that our methods work for any possible structure.

Then, please note that our findings and methods apply to various light microscopes, including fluorescence microscopes. Imaging fluorescent molecules is one possible application on fluorescence microscopes. On the one hand, we admit that the extracted images do not include the full structure of protein molecules in this way. Therefore, we are glad to modify this sentence in our paper: “For example, further resolve the inner details of individual molecules, fluorescent probes or tiny light sources after localization them”. On the other hand, our methods can extract the inner structures of illuminated ROIs, and such ROIs could be various objects as long as they fulfill the “resolvable condition”. In the future, maybe we can try structures composed of adjacent molecules, or of multiple chromophores? Maybe we can also try to extract the inner structure of chromophores? Or, if a molecule is illuminated by light directly (not in a fluorescent manner), is it possible to extract its inner structure better? These are just guesses, but they may be worth exploring because they are in accordance with our method’s principle.

Best wishes,
Edward Y. Sheffield

Well… I have not encountered the problem you mentioned. Is it because we are using different web browsers? Maybe you can get some help from the forum’s administrators? :)

Hello Dr. Hazen,

Yes, “isolated lighting” is one of the key aspects of the proposed “resolvable condition”. I think the explanation on pages 3–5 (especially Fig. 1) could be helpful. In short, if “isolated lighting” is not fulfilled, there will be unknown structures outside the ROI, and their images will overlap the ROI’s image even if they are extremely far from the ROI. The reason is that the PSF extends infinitely broadly, so these structures’ images also extend infinitely broadly. With these extra unknowns, we cannot solve for the unknowns we need, i.e., the ROI pixels. Our methods are first illustrated with the two-point situation and then generalized to arbitrary ROIs. Yes, we also suspect that there may be some essential link between our technique and compressed sensing; we would be glad if a mathematician looked into this.
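To illustrate this point, here is a small toy sketch (made just for this discussion, and much simplified compared with the methods in our paper). It treats the blurred image as a linear system whose unknowns are the ROI pixel values: with isolated lighting the system can be solved, but a single unmodelled emitter outside the ROI makes the same solve return wrong values.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

def blur(obj, cutoff=6):
    # Toy "microscope": an ideal low-pass filter, so the effective PSF
    # (a sinc-like kernel) spreads across the whole field of view.
    F = np.fft.fft(obj)
    F[np.abs(np.fft.fftfreq(N) * N) > cutoff] = 0.0
    return np.fft.ifft(F).real

roi = np.array([30, 31, 32, 33])              # the pixels we want to solve for
x_true = rng.uniform(0.5, 1.5, roi.size)

# Design matrix: column j is the blurred image of a unit source at ROI pixel j.
A = np.column_stack([blur(np.eye(N)[:, p]) for p in roi])

# Case 1: isolated lighting -- nothing outside the ROI emits.
obj = np.zeros(N); obj[roi] = x_true
x_isolated, *_ = np.linalg.lstsq(A, blur(obj), rcond=None)

# Case 2: an unmodelled emitter far from the ROI is also switched on.
obj_bad = obj.copy(); obj_bad[5] = 1.0
x_contaminated, *_ = np.linalg.lstsq(A, blur(obj_bad), rcond=None)

print("true ROI values:   ", np.round(x_true, 3))
print("isolated lighting: ", np.round(x_isolated, 3))
print("extra emitter at 5:", np.round(x_contaminated, 3))
```

Because the blur here is an ideal low-pass filter, its PSF decays only slowly, which is why even the distant, unmodelled emitter biases the recovered ROI values.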

This study’s task is to extract details from one diffraction-blurred image directly. According to classic theories (Fourier optics, the diffraction limit and the Rayleigh criterion), this problem is neither mathematically nor practically solvable. For example, according to Fourier optics, a conventional light microscope filters out the high-frequency components of any image. As a result, the FT of the PSF is all zeros in the high-frequency part, so the image’s high-frequency components cannot be recovered by dividing the Fourier transform (FT) of the measurement by the FT of the PSF. But our study finds an exception. Of course, it is still a difficult task in practice.

Actually, the diffraction limit and the Rayleigh criterion apply to various light microscopes, and this paper builds on them. We think this study’s conclusion is partly inconsistent with them, but it does not deny them. The Rayleigh criterion is correct for human vision, while our conclusion concerns computer vision. With the help of a computer, it is reasonable that more detail can be extracted than people previously thought.

Best wishes,

Edward Y. Sheffield

…existing super-resolution techniques have already achieved resolutions beyond the diffraction limit, but to our understanding, adjacent points (or different frequency components) are imaged at different times. However, this study finds an exceptional condition (termed the “resolvable condition”) under which such points can be resolved directly.

It is not actually the case that adjacent points must always be separated in time, and this has been known for a long time. It is well understood that (in theory) there exists a unique solution with arbitrary spatial precision to the inverse problem, provided zero noise, infinite sampling, infinite dynamic range, and absolute knowledge of the PSF. See “The Problem of Deconvolution”, pp 32–35 of Peter Jansson’s textbook on deconvolution, specifically this statement on pg 35 (where o(x) is the underlying object, s(x) is the blurring function, and ip(x) is a noise-free image):


The opportunity for success in solving for o(x) exists only if the solution o(x) itself exists. An ideal noise-free observation of ip(x) would guarantee existence in a practical (albeit not mathematical) sense to the extent that Eq. (89) correctly models physical reality. Data ip(x) did, after all, arise from a physical process that distorted a real object o(x). Adding noise n(x) to idealized data ip(x) such that i(x) = ip(x) + n(x), however, creates a problem that either (1) has no solution at all, (2) is ill-posed, or (3) is at best ill-conditioned. [i.e. small errors in image or PSF information will lead to large errors in object estimates]

So noise is the whole point! Without noise and with a perfect PSF we already have the solution.

As an interesting, possibly philosophical side note here: due to the data processing inequality, I don’t think denoising algorithms will ever fully save us here. They can rearrange the content in an image so as to improve the performance of some flawed inference system (such as human vision or some imperfect image analysis algorithm). And that’s very powerful! But they cannot fundamentally recover information that was not already present in some way in the original image. In that regard, “denoising” is a bit of a misnomer… (kinda curious whether @fjug agrees with that statement)

Second, your “resolvable condition using isolated lighting” is also a concept that has been discussed before, where it is sometimes called the “partial data problem”, or described by saying that the unknown image often does not have “compact support”. This thesis by Siddharth Shah is a nice place to start for discussions about the partial data problem.

https://www.semanticscholar.org/paper/Deconvolution-algorithms-for-fluorescence-and-Shah/92565e9677b9353fdb68070a0c985586e4e873e8

see also discussions on the partial data problem here:


Previously I mentioned that “with the help of a computer, it is reasonable that more detail can be extracted than people previously thought.” But that is not always the case; it requires a precondition (the “resolvable condition”) in our study.

Yes, what you said is well-reasoned. BUT the problem in our study is an extremely similar yet essentially different one! In fact, their similarity also bothered us for a long time. Maybe your view is that the diffraction limit is just a problem of noise? But that may not be true.

I do agree that deconvolution is a very powerful technique. Actually, on slide 23 of “Deconvolution algorithms for fluorescence and electron microscopy”, we can see a very good result (without noise). But it is essentially different from the issue in our study. Please note that the slide shows a PSF of size 35×35. In fact, an even larger PSF would be fine, as long as the image’s high-frequency components are retained, which is the case in usual deconvolution settings.

But the issue in our study is essentially different. According to Fourier optics, a light microscope’s PSF extends infinitely broadly (its central area is called the “Airy disk”), and more importantly its Fourier transform (FT) is an ideal low-pass filter. From the physical point of view, an image’s high-frequency components cannot be collected by the lens. As a result, there are no high-frequency components in the convolved image, i.e., the image acquired by the microscope. In this case, usual deconvolution does not work even without any noise. We believe this is the first reason why the diffraction-limit issue has troubled the field for more than a century. We do agree that noise is a very important point. But for the diffraction-limit issue, the problem still seems “impossible” even without any noise. With the help of super-resolution techniques, we are now able to observe structures beyond the diffraction limit, but “recovering details directly” is still considered “impossible”. That is why we chose to ignore noise and focus on the principle in this study. Once the problem is solved in principle, we do agree that noise should be treated as the most important issue.
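A tiny numerical sketch of that last point (again, just an illustration, not the method from our paper): when the OTF is an ideal low-pass filter, the out-of-band Fourier coefficients of the observed image are zero to machine precision, so naive Fourier division has nothing to work with, even with no noise at all.

```python
import numpy as np

N = 256
obj = np.zeros(N)
obj[[100, 104, 150]] = [1.0, 0.7, 1.2]          # toy ground truth, no noise anywhere

# Ideal low-pass OTF: exactly 1 inside the passband, exactly 0 outside it.
f = np.abs(np.fft.fftfreq(N) * N)
otf = (f <= 20).astype(float)

blurred = np.fft.ifft(np.fft.fft(obj) * otf).real

# The high-frequency content of the observed image is effectively gone
# (only machine-precision round-off remains):
print("max |FT of blurred image| outside the passband:",
      np.abs(np.fft.fft(blurred)[otf == 0]).max())

# Naive Fourier division therefore divides (essentially) zero by zero outside
# the passband, even though there is no noise at all.
with np.errstate(divide="ignore", invalid="ignore"):
    est = np.fft.ifft(np.fft.fft(blurred) / otf)
print("is the naive estimate finite everywhere?", bool(np.isfinite(est).all()))
```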

Another question:
How can an image’s details be recovered after its high-frequency components have been removed by a microscope?

Our answer:
On the one hand, we identify a “resolvable condition”. Under this condition, the detail information is included in both the high-frequency and the low-frequency components; more generally, it is included in any part of the frequency spectrum.
On the other hand, we propose two methods. Each of them can recover the detail information from the low-frequency components alone; more generally, from any part of the Fourier spectrum.
In other words, under the “resolvable condition” the observed image (or its Fourier spectrum) contains redundancy, so the details can be recovered from part of the observed image, or from part of its Fourier spectrum (e.g., the low-frequency components).
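To show what we mean by this redundancy, here is a simplified numerical sketch (made for this discussion; it is not the exact algorithm in our paper, only the same idea in its simplest 1D form). If the ROI is isolated and its pixel positions are known, every passband Fourier coefficient of the observed image is one linear equation in the ROI pixel values, so the low-frequency components alone already determine them:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
cutoff = 6                                    # only |f| <= 6 survives the "microscope"

support = np.array([30, 31, 32, 33])          # known, isolated ROI pixel positions
x_true = rng.uniform(0.5, 1.5, support.size)

obj = np.zeros(N)
obj[support] = x_true                         # nothing emits outside the ROI

# Observed image: every frequency above the cutoff is lost completely.
freqs = np.fft.fftfreq(N) * N
otf = (np.abs(freqs) <= cutoff).astype(float)
observed = np.fft.ifft(np.fft.fft(obj) * otf).real

# Each passband coefficient gives one linear equation in the ROI unknowns:
#   Y(f) = sum_j x_j * exp(-2*pi*i*f*p_j/N)   (since OTF = 1 inside the band)
band = np.flatnonzero(otf)
A = np.exp(-2j * np.pi * np.outer(freqs[band], support) / N)
b = np.fft.fft(observed)[band]

# Solve the (overdetermined) real-valued least-squares problem.
x_hat, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                            np.concatenate([b.real, b.imag]), rcond=None)

print("true ROI values:    ", np.round(x_true, 6))
print("recovered from band:", np.round(x_hat, 6))
```

Of course, this only works because the example is noise-free and the ROI support and PSF are known exactly, which is precisely the “resolvable condition” under discussion; with noise, the same system quickly becomes ill-conditioned, as pointed out above.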

After recent communication with dozens of peers on Twitter and this forum, we have prepared a “Questions&Answers.pdf”.
It can be found on our paper’s page on Zenodo or bioRxiv (still being processed):


or
https://www.biorxiv.org/content/10.1101/798934v1 (may need some days to be available)

If anything inappropriate is found in the material (or this study), please do not hesitate to tell us.
We intend to post this message on Twitter as well; we will do that after getting the necessary Internet access.
Thanks to everyone for the comments and help! Special thanks to Harvard Medical School and this forum!

If any other peers have more (private or public) comments or questions in the future, please send them (or a copy) to Edward.Y.Sheffield@hotmail.com. We will respond as soon as possible after receiving them.

When necessary, we may also prepare more “Questions&Answers” documents and upload them to preprint servers.

Thanks for your valuable feedback!