Deep Learning-Based Point-Scanning Super-Resolution Imaging

Linjing Fang, Fred Monroe, Sammy Weiser Novak, Lyndsey Kirk, Cara R. Schiavon, Seungyoon B. Yu, Tong Zhang, Melissa Wu, Kyle Kastner, Yoshiyuki Kubota, Zhao Zhang, Gulcin Pekkurnaz, John Mendenhall, Kristen Harris, Jeremy Howard, Uri Manor

Preprint posted on September 07, 2019 https://www.biorxiv.org/content/10.1101/740548v5

PSSR: A smart alternative to overcome the eternal triangle of compromise of point-scanning imaging systems.

Selected by Mariana De Niz

Background

In microscopy, the ‘eternal triangle of compromise’ states that for a given photon budget, improving any corner of the triangle, i.e. resolution, system sensitivity, or imaging speed, will come at the cost of the other corners (Figure 1). Improvement in all corners can only be achieved by increasing either the number of detected photons or the signal-to-noise ratio (SNR, the ratio between the amount of signal (photons) and the amount of noise in an image). However, detecting more photons requires either increasing the laser power, which may come at the cost of sample integrity, or improving the detector’s collection efficiency, which comes at the cost of acquisition speed. Altogether, within a point-scanning system, it is impossible to optimize one parameter without compromising at least one of the others.

Figure 1. The ‘eternal triangle of compromise’ of point-scanning imaging systems.


Despite these limitations, point-scanning imaging systems, including scanning electron microscopes (SEM) and laser scanning confocal microscopes (LSM), are widely used tools for high-resolution imaging. In parallel, deep learning has become an established method for image analysis and segmentation and, more recently, for restoring microscopy images from noisy or low-resolution acquisitions to high-resolution outputs (1-11). While current acquisition methods for point-scanning systems allow for useful analyses, a method for obtaining higher-resolution ultrastructural information without the limitations of the ‘eternal triangle of compromise’ would be of great use.

Key findings and developments

General findings and developments

  • To overcome the limitations of the ‘eternal triangle of compromise’, Fang et al. introduce a deep learning-based super-sampling method for under-sampled images, which they term Point-Scanning Super-Resolution (PSSR) imaging (12) (Figure 2).
  • They show that deep learning-based restoration of under-sampled images facilitates faster, lower-dose imaging on both SEM and scanning confocal microscopes, since far fewer pixels need to be acquired.
  • The PSSR approach increases the spatiotemporal resolution of point-scanning imaging systems to previously unattainable levels, overcoming the limits that sample damage and imaging speed impose on acquisition at full pixel resolution.
Figure 2. Overview of the general PSSR workflow. Training pairs were semi-synthetically created by applying a degrading function to HR images (right) to generate LR counterparts (left). Semi-synthetic pairs were used as training data (middle). Real-world LR and HR image pairs were both manually acquired (right column). The output of PSSR when LR is used as input (LR-PSSR) is then compared to HR to evaluate the performance of the trained model. (From (12)).

Specific developments

Model generation

  • To train the model, many perfectly aligned high- and low-resolution image pairs are needed.
  • Instead of manually acquiring such pairs, oversampled images acquired on SEM or LSM Airyscan microscopes were ‘crappified’ and then used to train the model to restore under-sampled images.
  • For ‘crappification’, the high-resolution images underwent Gaussian blurring, random pixel shifts, random salt-and-pepper noise addition, and 16x down-sampling of the pixel resolution (a minimal sketch of this degradation pipeline follows this list).
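Although the authors’ exact ‘crappifier’ differs in its details, a minimal sketch of such a degradation function, with illustrative (assumed) parameter values rather than the paper’s, could look like this in Python:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def crappify(hr, downsample=4, blur_sigma=3, noise_frac=0.001, max_shift=2):
    """Degrade a high-resolution image into a semi-synthetic low-resolution
    counterpart: Gaussian blur, a small random pixel shift, random
    salt-and-pepper noise, then 16x fewer pixels (4x along each axis)."""
    img = gaussian_filter(hr.astype(np.float32), sigma=blur_sigma)
    # Random pixel shift to mimic slight misalignment between acquisitions.
    img = shift(img, np.random.uniform(-max_shift, max_shift, size=2))
    # Salt-and-pepper noise: push a random fraction of pixels to min/max.
    mask = np.random.rand(*img.shape)
    img[mask < noise_frac / 2] = img.min()
    img[mask > 1 - noise_frac / 2] = img.max()
    # 16x down-sampling of the pixel count = keep every 4th pixel per axis.
    return img[::downsample, ::downsample]
```

Training pairs are then simply (crappify(hr), hr), with no manual registration of separately acquired low- and high-resolution images required.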


Electron Microscopy

  • The EM PSSR-based restoration of under-sampled images was reproducible across various microscopes and sample preparation methods, even though the model was trained only on data from a single EM modality, transmission-mode SEM. For this study, ultrathin sections from the hippocampus of a Long Evans male rat were used.
  • Deep learning-based image restoration models are usually extremely sensitive to variations in the model and training images, sample preparation, and the equipment used for image acquisition. The authors tested various microscopes, samples, and metrics (including peak signal-to-noise ratio (PSNR), structural similarity (SSIM), Fourier ring correlation (FRC), NanoJ-SQUIRREL error mapping analysis, and visual inspection) and found that PSSR was effective for restoring low-resolution images.
  • One major concern with deep learning-based image processing is accuracy, and in particular the possibility of false positives (‘hallucinations’). The authors measured the PSNR and SSIM of low- and high-resolution images, super-sampled the low-resolution images by both bilinear interpolation (LR-Bilinear) and the PSSR model (LR-PSSR), and found that LR-PSSR significantly outperforms LR-Bilinear and yields more accurate segmentation (a sketch of this comparison follows this list).
  • Altogether, the ability to reliably 16x super-sample lower-resolution datasets presents an opportunity to increase the throughput of SEM imaging by at least one order of magnitude.
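For readers who want to run a similar check on their own restorations, a minimal sketch of this metric comparison using scikit-image (the function and variable names here are illustrative, not from the paper’s code):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.transform import resize

def compare_restorations(hr, lr, lr_pssr):
    """Score a bilinear upsample (LR-Bilinear) and a model output
    (LR-PSSR) against the ground-truth HR acquisition."""
    lr_bilinear = resize(lr, hr.shape, order=1)  # order=1 -> bilinear
    rng = hr.max() - hr.min()
    for name, img in (("LR-Bilinear", lr_bilinear), ("LR-PSSR", lr_pssr)):
        psnr = peak_signal_noise_ratio(hr, img, data_range=rng)
        ssim = structural_similarity(hr, img, data_range=rng)
        print(f"{name}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```

Higher PSNR and SSIM indicate a closer match to the ground truth, which is why LR-PSSR scoring above LR-Bilinear supports the claim that the model recovers real structure rather than merely plausible-looking detail.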


Laser scanning confocal microscopy and live imaging

  • For fluorescence, an LSM880 Airyscan microscope was used. Using PSSR, the authors achieved a 100x lower laser dose and 16x higher frame rates than the corresponding high-resolution acquisitions.
  • In addition to fixed-sample imaging, the authors sought to determine whether PSSR might provide a viable strategy for increasing the speed and lowering the photon dose of live scanning confocal microscopy. As proof of concept, they trained the model on live-cell timelapses of mitochondria in U2OS cells. While LR acquisitions were noisy and pixelated due to undersampling, they also showed less photobleaching. PSSR processing reduced the noise and increased the resolution of the LR acquisitions.
  • To improve the performance of PSSR on timelapse data, the authors modified the PSSR ResU-Net architecture to train on 5 timepoints at a time (MultiFrame-PSSR, or PSSR-MF; see the sketch of the multi-frame input layout below). The improved speed, resolution, and SNR of PSSR-MF enabled detection of mitochondrial fission events that were not detectable in the LR or LR-Bilinear images.
  • In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
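A minimal sketch of how such 5-timepoint inputs might be assembled from a timelapse (the helper below is illustrative, not the authors’ code):

```python
import numpy as np

def multiframe_windows(timelapse, n_frames=5):
    """Group a (T, H, W) low-res timelapse into overlapping windows of
    n_frames consecutive frames; the multi-frame model restores the
    middle timepoint of each window at high resolution."""
    half = n_frames // 2
    inputs, middle_idx = [], []
    for t in range(half, timelapse.shape[0] - half):
        inputs.append(timelapse[t - half : t + half + 1])  # (n_frames, H, W)
        middle_idx.append(t)  # index of the frame being restored
    return np.stack(inputs), middle_idx
```

Feeding neighbouring timepoints as extra input channels gives the network temporal context, helping it separate persistent structure from shot noise that fluctuates frame to frame.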

What I like about this paper

As with previous work from the Manor lab, I very much like the spirit of open science and the openness with which the lab communicates its work. I also like that this work addresses a limitation faced by many microscopy labs across the world. As a scientist, I place great value on method development and on the open dissemination of such methods; this advances science and puts the necessary tools in the hands of as many scientists as possible. I found PSSR to be ingenious and extremely useful.

Open questions

*Note: all author answers are in the section below.

  1. You mention in your discussion that although your model performed well for the various types of structures you analysed (e.g. mitochondria), you do not rule out the possibility that other structures may not be restored with sufficient accuracy. Could you discuss a bit further which are the structures you think could be more challenging for your model, and why this might be the case?
  2. You discuss also that fluorescence data has the potential to be much more variable than EM data, and the relevance of this for training the model for a specific sample type. Equally, you discuss elsewhere in your manuscript the importance of the system and sample preparation for the model. For some time I have been aware of ideas by different people to create open access repositories for microscopy-derived data, to which the scientific community of all fields of research can contribute. Do you think the direction you and other labs are exploring in deep learning for image analysis would make such a repository an even more appealing, and eventually even necessary, prospect?
  3. In terms of sample preparation, was there anything specific you faced, which makes it easier (or more challenging) to train the model?
  4. What are the main limitations of the model when applied to live microscopy?
  5. Could you explain a bit further why you used progressive resizing and discriminative learning rates only for EM, while you used best model preservation only for fluorescence?
  6. You mention in your discussion that in the near future it might be possible to generate generalized models for specific imaging systems, rather than sample types. Could you discuss further your thoughts on this aspect?

References

  1. Wang, Z., Chen, J. & Hoi, S. C. H. Deep Learning for Image Super-resolution: A Survey. arXiv:1902.06068 (2019).
  2. Jain, V. et al. Supervised Learning of Image Restoration with Convolutional Networks. (2007).
  3. Moen, E. et al. Deep learning for cellular image analysis. Nat Methods, doi:10.1038/s41592-019-0403-1 (2019).
  4. Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat Methods 15, 1090-1097, doi:10.1038/s41592-018-0216-7 (2018).
  5. Wang, H. et al. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat Methods 16, 103-110, doi:10.1038/s41592-018-0239-0 (2019).
  6. Ouyang, W., Aristov, A., Lelek, M., Hao, X. & Zimmer, C. Deep learning massively accelerates super-resolution localization microscopy. Nat Biotechnol 36, 460-468, doi:10.1038/nbt.4106 (2018).
  7. Nelson, A. J. & Hess, S. T. Molecular imaging with neural training of identification algorithm (neural network localization identification). Microsc Res Tech 81, 966-972, doi:10.1002/jemt.23059 (2018).
  8. Li, Y. et al. DLBI: deep learning guided Bayesian inference for structure reconstruction of super-resolution fluorescence microscopy. Bioinformatics 34, i284-i294, doi:10.1093/bioinformatics/bty241 (2018).
  9. Buchholz, T. O. et al. Content-aware image restoration for electron microscopy. Methods Cell Biol 152, 277-289, doi:10.1016/bs.mcb.2019.05.001 (2019).
  10. Guo, M. et al. Accelerating iterative deconvolution and multiview fusion by orders of magnitude. bioRxiv, 647370, doi:10.1101/647370 (2019).
  11. Batson, J. & Royer, L. Noise2self: Blind denoising by self-supervision. arXiv preprint arXiv:1901.11365 (2019).
  12. Fang L., et al. Deep Learning-Based Point-Scanning Super-Resolution imaging, bioRxiv, doi:10.1101/740548 (2019).


Posted on: 30th September 2019





Author's response

    Linjing Fang and Uri Manor shared

    Open questions

    1. You mention in your discussion that although your model performed well for the various types of structures you analysed (e.g. mitochondria), you do not rule out the possibility that other structures may not be restored with sufficient accuracy. Could you discuss a bit further which are the structures you think could be more challenging for your model, and why this might be the case?

    We don’t necessarily think there’s a particular type of structure that should be particularly challenging. The issue is more that as the types of structures you’re imaging diverge from the structures we trained on, the model will start to fail. This is a well-known issue in deep learning modelling research, and an issue we are interested in overcoming. One analogy Linjing proposed that I like very much is that you can think of the model as an “optical transfer function” (i.e. OTF, what we use for deconvolution), except the OTF also contains content-specific information depending on the sample types it was trained on. If we instead train a model to only learn the OTF, it would be more “general”, but probably also not as accurate. One thing we are working on now is training a model on just the OTF, but then exploring “one-shot” transfer learning approaches to see if we can get a reasonable model with minimal data. The specific goal for now is to be able to take one (or very few) high resolution image(s) of a cell, then do the ultrafast timelapse and then use that high res image to restore the entire timelapse.

    2. You discuss also that fluorescence data has the potential to be much more variable than EM data, and the relevance of this for training the model for a specific sample type. Equally, you discuss elsewhere in your manuscript the importance of the system and sample preparation for the model. For some time I have been aware of ideas by different people to create open access repositories for microscopy-derived data, to which the scientific community of all fields of research can contribute. Do you think the direction you and other labs are exploring in deep learning for image analysis would make such a repository an even more appealing, and eventually even necessary, prospect?

    YES YES YES!!! I have been advocating for everyone to get together and organize an image repository for this exact purpose. I even proposed it in a recent grant application that was rejected, although I don’t know if that was the reason it was rejected, haha. Ultimately, we need something like “ImageNet” for microscopy, and as microscopes get better and more diverse, we need to continually update our databases.

    3. In terms of sample preparation, was there anything specific you faced, which makes it easier (or more challenging) to train the model?

    Because of the crappification approach we used, the only challenge is to make sure your ground truth high resolution data is at least as high quality as you want your model output to be. We didn’t need to worry about perfectly aligning low- versus high-resolution images, which is a giant time- and effort-saver. Even generating the real-world testing data, which did require low- vs. high-resolution acquisitions, was not trivial. But to get high resolution, high quality data means you better know how to prepare your samples properly and how to use your microscope and push it to the limit. For the EM model, we had that problem solved for us by collaborating with Kristen Harris, whose work is unparalleled in terms of imaging and sample quality. For the fluorescence data we were lucky to have Airyscans available which can get much higher SNR and resolution than most standard confocal microscopes.

    4. What are the main limitations of the model when applied to live microscopy?

    The biggest limitation of our current model is that it is only reliable on data similar to what we trained on, and even then the pixel sizes must be pretty similar. So if you want to use our model on something other than mitochondria, or even mitochondria in a different cell type, you better make sure they’re similar dimensions in terms of absolute as well as pixel sizes. And no matter what, always validate, validate, validate before drawing any conclusions!

    5. Could you explain a bit further why you used progressive resizing and discriminative learning rates only for EM, while you used best model preservation only for fluorescence?

    The huge difference in dataset size (EM data was ~80GB, light data was ~7-10GB) meant the EM data benefitted more from using these tricks to accelerate training. There were far fewer epochs for the EM data as well, so no need to preserve the best model. But honestly, the biggest reason for how things turned out in the end is that when we started with the EM data we were using different models, approaches, etc. When we switched to fluorescence, because of the challenges I highlighted above, we ended up trying and changing a jillion different things, and this is what we ended up settling on. We never went back to the EM data to try a jillion different things because we didn’t need to – the results were already satisfactory for our purposes.
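For readers unfamiliar with these fastai techniques, a minimal sketch of all three tricks, assuming the fastai v1 API (here, `learn` stands for an existing fastai Learner and `make_databunch` is a hypothetical data-loading helper, not the authors’ actual code):

```python
from fastai.callbacks import SaveModelCallback  # fastai v1

# Progressive resizing (EM): train on small tiles first, then larger
# ones, so most epochs on the large dataset run on cheap inputs.
for size in (128, 256, 512):
    learn.data = make_databunch(tile_size=size)  # hypothetical helper
    # Discriminative learning rates (EM): earlier (pretrained) layers
    # take smaller steps than the newly initialized head.
    learn.fit_one_cycle(2, max_lr=slice(1e-5, 1e-3))

# Best model preservation (fluorescence): over a longer run, keep the
# checkpoint from the epoch with the lowest validation loss.
learn.fit_one_cycle(50, 1e-3,
                    callbacks=[SaveModelCallback(learn, monitor='valid_loss')])
```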

    6. You mention in your discussion that in the near future it might be possible to generate generalized models for specific imaging systems, rather than sample types. Could you discuss further your thoughts on this aspect?

    As I mentioned above in question #1, fluorescence data in particular can vary a lot depending on what you are labelling. It would be useful to first determine whether we can generate a DL-based model for deconvolving and denoising while also doing super-resolution independent of sample types. Then we can use that model along with minimal additional training data (i.e. “one-shot transfer learning”) to fine-tune the model to perform even better on whatever specific sample type or label you are using. Florian Jug and Loic Royer have already done some fantastic work making “generalized” denoising models; it would be great to extend this type of general capability to deconvolution and super-resolution. Even more thrilling is the idea of completely revamping how we build our microscopes to allow us to maximally leverage the enormous computational power of machine learning models. We invest a lot of money and effort in order to build devices that can reconstruct an interpretable image. But we also make compromises in sensitivity, resolution, spectral information, etc. that may no longer be necessary. So that is another area of active research in my lab that we are very excited about.
