Content-Aware Image Restoration: Pushing the Limits of Fluorescence Microscopy

Martin Weigert, Uwe Schmidt, Tobias Boothe, Andreas Müller, Alexandr Dibrov, Akanksha Jain, Benjamin Wilhelm, Deborah Schmidt, Coleman Broaddus, Siân Culley, Maurício Rocha-Martins, Fabián Segovia-Miranda, Caren Norden, Ricardo Henriques, Marino Zerial, Michele Solimena, Jochen Rink, Pavel Tomancak, Loic Royer, Florian Jug, Eugene W. Myers

Preprint posted on January 23, 2018

Article now published in Nature Methods.

Deep learning-based image restoration overcomes previously intractable barriers to high resolution imaging

Selected by Uri Manor




How this work fits the bigger picture

Machine learning has been used by microscopists for many years, most famously for image segmentation and analysis. This paper presents the first nearly comprehensive demonstration of how deep learning-based image restoration can overcome previously intractable barriers to imaging such as phototoxicity, slow imaging speed, and limited resolution. While image restoration methods such as denoising, deconvolution, and compressed sensing have long been employed with great success, they are computationally demanding and often suffer from debilitating artifacts. These artifacts usually arise from mismatches between reality and the assumptions intrinsic to the mathematical models used to process the images. Biology is complex, biological samples are often “messy”, and modeling the light path from excitation to emission to detection in even a single cell would require an equation far too complex to derive. The number of parameters is not only computationally daunting; the parameters often cannot even be measured in the same sample that is being imaged. Thus, “rules-based” computational image restoration approaches are often compromises – estimates – of what would be calculated in an ideal world.


Enter this paper: with CARE (content-aware image restoration), the number of parameters is no longer a real issue. Highly parallelized GPU-based hardware, paired with a properly constructed algorithm, can identify and adjust as many parameters as are thrown at it for most practical purposes. By training the network on sufficient pairs of degraded and “ground truth” images (either the same sample imaged at high vs. low resolution or signal-to-noise ratio, or an analogously generated synthetic dataset pair), the algorithm “learns” how to reconstruct the “ground truth” image from the degraded input. Weigert et al. demonstrate this capability beautifully, with a focus on three major areas of improvement for microscopy:

  • Denoising (so we can image with lower laser power and/or lower concentrations of fluorophores)
  • Axial undersampling (so we can image fewer z-slices, thereby increasing imaging speed and decreasing phototoxicity)
  • Superresolution (so we can get high-resolution images at much faster acquisition speeds) – below is an example of how deep learning compares with deconvolution at low SNR.
Top row: Acquired image of secretory granules (magenta) and microtubules (green) in insulin-secreting INS-1 cells. Middle row: Deep learning-based restoration of the image in the top row. Bottom row: Standard deconvolution-based restoration of the acquired image in the top row. Note the higher contrast, lower background, and greater signal-to-noise in the deep learning-based restoration in comparison to both the original acquired image as well as the deconvolved dataset.
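To make the training setup above concrete, here is a minimal NumPy sketch of how a synthetic degraded/ground-truth training pair might be generated: a ground-truth stack is rescaled to a low photon budget, corrupted with Poisson shot noise and Gaussian read noise, and axially undersampled. The `degrade` function and all parameter values are illustrative assumptions for this sketch, not part of the CARE codebase.

```python
import numpy as np

def degrade(gt, photons=20, axial_step=4, seed=None):
    """Simulate a low-quality acquisition from a ground-truth z-stack:
    rescale to a low photon budget, apply Poisson shot noise, add
    Gaussian read noise, and keep only every `axial_step`-th z-slice."""
    rng = np.random.default_rng(seed)
    scaled = gt / gt.max() * photons                 # low photon budget
    noisy = rng.poisson(scaled).astype(np.float32)   # shot noise
    noisy += rng.normal(0.0, 1.0, noisy.shape)       # read noise
    return noisy[::axial_step]                       # axial undersampling

# Illustrative "ground truth": a bright line structure in a z-stack.
gt = np.zeros((32, 64, 64), dtype=np.float32)
gt[:, 32, :] = 100.0

low = degrade(gt, photons=20, axial_step=4, seed=0)
print(gt.shape, low.shape)  # (32, 64, 64) (8, 64, 64)
```

In practice, many such `(low, gt)` pairs (real or simulated) would be cut into patches and fed to the network, which learns the mapping from degraded input back to ground truth.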


Open questions  

How does CARE compare with more advanced denoising or deconvolution software? ER-Decon from the Sedat/Agard groups does particularly well at both denoising and deconvolution, and is cited in the paper, but was not used as a comparison for CARE – it would be much stiffer competition!

How would CARE (or any other deep learning architecture) work if it was trained only on images of one type of structure vs. a very broad range of structures? How broad a range of structures would be necessary to enable the software to “converge” on the “common denominator” of fluorescence microscopy? In other words, when can we hope to develop a generalized deconvolution software, as opposed to software that knows how to refine linear and/or dot structures (e.g. microtubules and punctate cellular structures) that it was trained to detect? How do we best construct the software, and/or how do we best train said software?


Tags: artificial intelligence, deep learning, fluorescence, imaging, microscopy, superresolution

Posted on: 22nd February 2018
