
Deep learning-enhanced light-field imaging with continuous validation

Nils Wagner, Fynn Beuttenmueller, Nils Norlin, Jakob Gierten, Joachim Wittbrodt, Martin Weigert, Lars Hufnagel, Robert Prevedel, Anna Kreshuk

Posted on: 4 November 2020

Preprint posted on 31 July 2020

Article now published in Nature Methods at http://dx.doi.org/10.1038/s41592-021-01136-0

Improving the toolkit for 3D imaging with deep learning: HyLFM and HyLFM-Net

Selected by Mariana De Niz

Categories: bioengineering, physiology

Background

Capturing highly dynamic physiological processes that unfold on millisecond time scales across large areas of living organisms requires imaging methods with matching speed and resolution. An attractive candidate for high-speed 3D image acquisition is light-field microscopy (LFM), which has already opened new avenues in the fields of neurobiology and cardiovascular dynamics. While some technical hindrances of this tool have been overcome since its conception, the widespread use of LFM has been hampered by a computationally demanding, iterative image-reconstruction process that requires complex computational infrastructure and adequate data management. Multiple algorithms derived from deep learning and convolutional neural networks have recently been proposed to replace iterative deconvolution procedures, offering new methods for deblurring, denoising and super-resolution. While many of these methods perform excellently in various biologically relevant settings, many are not optimal for dynamic imaging with LFM given the complexity of dynamic processes in small animals. In their work, Wagner and Beuttenmueller et al. (1) present a novel framework consisting of a hybrid light-field light-sheet microscope (HyLFM) and deep-learning-based volume reconstruction, in which single light-sheet acquisitions continuously serve as training and validation data for the convolutional neural network (termed HyLFM-Net) that reconstructs the LFM volume.

Figure 1. Visual pipeline of deep learning-enhanced light-field imaging with continuous validation.

Key findings and developments

A simultaneous selective-plane illumination microscopy (SPIM) modality was added to a standard LFM microscope, allowing the generation of high-resolution ground-truth images of single planes for validation, training and refinement of the convolutional neural network. Training can be performed on static sample volumes, or dynamically from a single SPIM plane sweeping through the volume during 3D image acquisition. An automated image-processing pipeline ensures that LFM and SPIM volumes are co-registered in a common reference volume and coordinate system with high precision. This is important for network training and validation, and the system's ability to acquire 2D and 3D training data is key for reliable reconstructions, including of data never seen during training. Altogether, HyLFM-Net is trained on pairs of SPIM-LFM images, as sketched below.
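
Conceptually, this single-plane supervision can be pictured as slicing the predicted volume at the depth imaged by SPIM and computing the loss only there. Below is a minimal, hypothetical PyTorch sketch; `HyLFMNetSketch`, its one-layer "architecture" and the plane index `z` are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HyLFMNetSketch(nn.Module):
    """Illustrative stand-in for a light-field reconstruction CNN:
    maps a raw 2D light-field image to a 3D volume (depth x H x W)."""
    def __init__(self, depth=49):
        super().__init__()
        # One conv layer as a placeholder for the real architecture.
        self.head = nn.Conv2d(1, depth, kernel_size=3, padding=1)

    def forward(self, lf_image):          # lf_image: (B, 1, H, W)
        return self.head(lf_image)        # volume:   (B, depth, H, W)

net = HyLFMNetSketch()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(lf_image, spim_plane, z):
    """Supervise only the plane z for which a co-registered SPIM
    ground-truth image exists (dynamic single-plane training)."""
    volume = net(lf_image)                 # predicted 3D stack
    predicted_plane = volume[:, z, :, :]   # slice at the SPIM plane depth
    loss = loss_fn(predicted_plane, spim_plane)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random tensors standing in for real acquisitions:
lf = torch.randn(4, 1, 64, 64)   # batch of raw light-field images
gt = torch.randn(4, 64, 64)      # matching SPIM ground-truth planes
print(train_step(lf, gt, z=24))
```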

To evaluate the performance of the HyLFM system, the authors imaged sub-diffraction-sized fluorescent beads suspended in agarose and quantified the improvement in spatial resolution and image quality compared to standard iterative light-field deconvolution. They concluded that HyLFM-Net correctly inferred the 3D imaging volume from the raw light-field data, with better resolution than could be obtained by light-field deconvolution and without the artifacts commonly found in deconvolved light-field data. The authors point out the importance of training on diverse datasets to avoid biases in performance.
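
Image quality in such comparisons is typically quantified with metrics such as the peak signal-to-noise ratio (PSNR); a generic NumPy sketch (not the authors' evaluation code) might look like this:

```python
import numpy as np

def psnr(reconstruction, ground_truth):
    """Peak signal-to-noise ratio (dB) between a reconstructed plane
    and its co-registered ground truth; higher means closer agreement."""
    reconstruction = np.asarray(reconstruction, dtype=np.float64)
    ground_truth = np.asarray(ground_truth, dtype=np.float64)
    mse = np.mean((reconstruction - ground_truth) ** 2)
    if mse == 0.0:
        return np.inf                  # identical images
    peak = ground_truth.max()          # dynamic range of the reference
    return 10.0 * np.log10(peak ** 2 / mse)
```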

As proof of principle, the authors explored the capabilities of the HyLFM system by imaging the dynamics of a hatchling medaka fish heart in vivo, demonstrating that the system can correctly capture dynamic cellular movements in 3D. HyLFM-Net achieved high image-quality metrics relative to SPIM and allowed 3D volume inference at up to 18 Hz, with at least 1,000-fold faster reconstruction than light-field deconvolution. The authors note that the network trained on dynamically acquired SPIM single planes performed equally well or better than the network trained on fully static volumes.

The authors also tested the HyLFM system on transgenic larval zebrafish brains expressing calcium indicators, to monitor neural activity. The ground-truth data enabled the HyLFM system to faithfully learn and infer structural as well as intensity-based information. The authors conclude that HyLFM is thus an attractive method for visualizing neural activity.

Altogether, the new system allows light-field volumes to be reconstructed at sub-second rates, eliminating the main computational hindrance to light-field imaging. Moreover, the system enables appropriate training data to be acquired simultaneously and on the fly, while allowing continuous validation and fine-tuning. Over time, the network learns from the actual experimental data, rather than requiring pre-acquisition of training images on separate microscopes, solving the problem of transferability.

What I like about this preprint

I like that the authors address a hugely important technical gap in a fast-advancing area of microscopy. It has not been uncommon over the past few decades for major advances in microscopy tools to occur while the methods for image analysis lag behind, which at some point becomes a limiting factor in itself. Tools that close this gap are key: they ultimately allow the different microscopy techniques to be used to their full potential and to become widespread.


References

1. Wagner N, Beuttenmueller F, et al. Deep learning-enhanced light-field imaging with continuous validation. bioRxiv, 2020.


doi: https://doi.org/10.1242/prelights.25619


Author's response

Robert Prevedel and Anna Kreshuk shared

Open questions

1. This is a great advance, and you introduced proofs of concept for a beating heart and for neural activity. This opens a window to multiple applications in various research fields. What are some further limitations to be aware of? For instance, related to the movement of living organs such as the heart, the lungs, or the digestive tract?

AK+RP: For the heart experiments we utilize dual-color labelling so that we can record both the SPIM plane and the LFM volume at the very same time. Because of this we would not expect any movement-related artefacts; however, the dual-color labelling might not be possible for all experiments, depending on the biology. If only one color can be used – like in our neuroimaging experiments – we need to acquire the SPIM and LFM images sequentially, separated by a few milliseconds. Here, very fast dynamics or motion artefacts such as animal twitching could potentially be a problem.

2. What is the speed limit of the tool you have introduced?
RP: In our paper, the largest volume rate we demonstrated was ~100 Hz for the LFM modality. In principle, the volume rate is limited only by fluorescence signal strength or camera frame rate in LFM, which is one of its main advantages. Employing state-of-the-art camera hardware, the speed could technically go up to kHz for smaller FOVs.
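
Since a single camera frame encodes one LFM volume, the achievable volume rate tracks the camera frame rate, which for rolling-readout sensors scales with the number of rows read out. A back-of-the-envelope sketch with hypothetical camera numbers (not from the paper):

```python
# In LFM a single camera frame encodes a full 3D volume, so the volume
# rate is bounded by the camera frame rate (given enough fluorescence).
# All numbers below are hypothetical, for illustration only.

line_rate_hz = 200_000       # assumed sensor row-readout rate (rows/s)
rows_full_fov = 2048         # rows read out for the full field of view
rows_small_fov = 128         # rows read out for a cropped field of view

for rows in (rows_full_fov, rows_small_fov):
    frame_rate = line_rate_hz / rows
    print(f"{rows} rows -> up to {frame_rate:.0f} volumes/s")
# 2048 rows -> ~98 volumes/s (order of the ~100 Hz demonstrated);
# 128 rows  -> kHz regime for small fields of view.
```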

3. You discuss in your work how the HyLFM-Net can be used in contexts where high resolution of small particles is needed and in contexts where a larger organ is visualized, as well as the importance of training the net with a diverse dataset while being aware of the limits of generalization. Can you expand further on how this can be addressed/improved?

AK: In general, it's impossible to guarantee that a pre-trained CNN will generalize to unseen data; that's why we consider continuous validation important, as the raw LFM images are not interpretable by eye. Informally, networks act unpredictably on out-of-distribution samples (a property that is actually exploited in adversarial attacks), so one should either sample more widely (more diverse training data) or sample from the right distribution (more representative training data). In the HyLFM setup, training data can be acquired directly during the experiments, so it is representative. At the same time, training data from other experiment runs, or even from different experiments, can be mixed in for more diversity.
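
This last point, representative data from the current experiment plus diverse data from other runs, maps naturally onto dataset concatenation in PyTorch; the tensors below are random placeholders for real (LFM image, SPIM plane) pairs, not the authors' data pipeline.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder paired datasets of (raw LFM image, SPIM ground-truth plane).
current_run = TensorDataset(torch.randn(100, 1, 64, 64),   # representative
                            torch.randn(100, 64, 64))
earlier_runs = TensorDataset(torch.randn(400, 1, 64, 64),  # diverse
                             torch.randn(400, 64, 64))

# Concatenating both sources widens the training distribution while
# keeping it anchored to data from the ongoing experiment.
mixed = ConcatDataset([current_run, earlier_runs])
loader = DataLoader(mixed, batch_size=8, shuffle=True)

lf_batch, spim_batch = next(iter(loader))
print(lf_batch.shape, spim_batch.shape)
```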

4. Do you think HyLFM-Net can be the basis for similar improvements in other contexts of microscopy, for instance intravital microscopy-derived image processing?
RP: I would really hope so. I think the main insight from this work was that simultaneous high-resolution ground-truth data, even when only obtained for a single 2D plane at a time, can already help to improve a computational 3D method such as LFM. Likewise, I wouldn't be surprised if similar approaches could help with other computational (and deep-learning-based) 3D imaging methods, such as CNN-enhanced 3D imaging based on wide-field microscopy or other fluorescence or photoacoustic tomography schemes. Therefore, having the ability to acquire spatially or temporally sparse higher-resolution images could potentially also help in intravital microscopy and its image processing.

5. What future directions do you envisage for HyLFM, given the rapidly advancing field of LFM?
RP: I think HyLFM is really geared towards high-throughput imaging in biology. I am thinking of, for example, imaging screens and other biological experiments that rely on hundreds to thousands of imaging replicates. Reconstructing all those LFM data the standard way would take unfeasibly long or demand top-notch computing-cluster infrastructure. Here our method really makes a difference, as one can now reconstruct all this data in near-real time. This ability also allows you to keep all that data stored as raw light-field images, which are ~100-fold smaller in size. While some of these aspects were of course also demonstrated by other CNN-LFM papers, our hybrid modality allows continuous validation and thus gives the experimentalist the assurance that the reconstructions can really be trusted. So, I hope our method finally convinces biologists that LFM can be a practical, trustworthy high-speed imaging method. Once we've achieved this, the applications are endless. It would be great to see this applied to, e.g., voltage imaging in small, transparent model organisms such as zebrafish.

