ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy
Preprint posted on March 20, 2020 https://www.biorxiv.org/content/10.1101/2020.03.20.000133v1
Deep Learning methods are recognised as powerful analytical tools with great and growing potential for image analysis and microscopy. However, a current challenge to the widespread use of deep learning is the technological, resource, and knowledge barrier separating expert users of computational image-analysis platforms from novice users with limited knowledge of such tools. To bridge this gap, von Chamier et al present ZeroCostDL4Mic (1), a platform based on Google Colab that simplifies the deployment, access, and use of deep learning tools (Figure 1).
Key findings and developments
- ZeroCostDL4Mic is a collection of self-explanatory Jupyter Notebooks for Google Colab; the latter provides the free, cloud-based computational resources needed to run them.
- ZeroCostDL4Mic provides a single simple interface for users at all levels of expertise to install, test, train, and use the popular deep learning networks U-Net, StarDist, CARE, Noise2Void, and Label-free prediction.
- U-Net was designed by Ronneberger et al in 2015 (2), and is a deep learning architecture originally developed for the segmentation of EM images.
- StarDist was designed by Schmidt et al in 2018 (3) and is a deep learning method designed to segment cell nuclei in microscopy images.
- CARE is a deep learning method designed by Weigert et al in 2018 (4), capable of restoring corrupted bio-images (e.g. affected by noise, artefacts, or low resolution). The network allows image denoising and resolution improvement in 2D and 3D images, using supervised training.
- Noise2Void is a deep learning method designed by Krull et al in 2019 (5) to perform denoising on microscopy images, using an unsupervised training approach.
- Label-free prediction (fnet) is a deep learning method designed by Ounkomol et al in 2018 (6) as a tool for predicting fluorescent labels from unannotated brightfield and EM images.
- ZeroCostDL4Mic promotes the acquisition of knowledge and dexterity in the use of these networks. In their work, the authors provide training datasets for each of the networks used.
- While ZeroCostDL4Mic provides a friendly and easy-to-use interface for users with little coding experience, the underlying code remains accessible, allowing advanced users to explore and edit the programmatic structure of the notebooks.
- For access to and use of ZeroCostDL4Mic, no extra resources beyond a web browser and a Google Drive account are needed.
- ZeroCostDL4Mic provides access to Deep Learning to run tasks of image segmentation, denoising, restoration, and artificial labelling.
- Beyond its current uses, the authors discuss the tool's future potential to aid the rapid dissemination of novel technologies, allowing users of all levels of expertise to apply multiple deep-learning-based image analysis tools in a reproducible and testable manner.
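The unsupervised training used by Noise2Void, mentioned above, relies on a "blind-spot" trick: since no clean ground truth exists, randomly chosen pixels in the noisy input are replaced by a neighbour's value, and the loss is computed only at those pixels against the original noisy values. A minimal pure-Python sketch of that masking step (simplified and illustrative only; the actual implementation operates on image patches with more elaborate neighbour sampling):

```python
import random

def blindspot_batch(image, n_masked, rng=random.Random(0)):
    """Build a Noise2Void-style training pair from a single noisy image:
    selected pixels are replaced by a random neighbour's value (network
    input), while the original noisy values serve as training targets."""
    h, w = len(image), len(image[0])
    masked = [row[:] for row in image]  # copy: becomes the network input
    coords = []
    for _ in range(n_masked):
        y, x = rng.randrange(h), rng.randrange(w)
        # pick a random neighbour, excluding the pixel itself
        while True:
            dy, dx = rng.randint(-1, 1), rng.randint(-1, 1)
            ny, nx = y + dy, x + dx
            if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w:
                break
        masked[y][x] = image[ny][nx]
        coords.append((y, x))
    # the loss is evaluated only at the masked coordinates, against the
    # original noisy values -- no clean ground truth is ever needed
    targets = [image[y][x] for y, x in coords]
    return masked, coords, targets
```

Because the network never sees the true value at a masked pixel, it cannot simply learn the identity function and is forced to predict from the surrounding context, which averages out pixel-wise noise.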
Notes by the authors on limitations and further considerations
- The Google Colab platform offers free and straightforward access to a GPU or TPU, which significantly lowers the entry barrier for new users of Deep Learning methods. However, this access comes with some drawbacks, which the authors carefully explain. These include:
- Limited free Google Drive storage, with a maximum of 15 GB freely accessible to Google Colab notebooks. However, additional storage space can be purchased.
- A ~12.7 GB RAM limit. Exceeding this limit can cause the notebook to crash or show an error.
- A 12-hour session time-out, and a disconnection after 30-90 minutes of idle time, after which data loaded into the notebook is deleted. If training has not been completed and saved, all progress may be lost.
- Google Colab does not guarantee access to a GPU, as the number of users of the service may be larger than the number of available devices.
- Google Colab uses different GPUs, which currently include the Nvidia K80, P4, and P100. The user cannot choose which GPU will be assigned when using the notebook, which may affect the speed at which networks can be trained and used.
- While assessing these limitations, the authors offer a detailed discussion of how they can be mitigated.
- The authors include a supplementary discussion emphasizing the importance of re-training. They note that many labs take the approach of using pre-trained network models to process imaging data. However, pre-trained models, although very powerful, can also be very specific to the microscopes and sample types used in their training. This may lead to erroneous or artefactual results when they are applied to dataset types widely different from those on which they were trained. The authors emphasize the importance of training the models with one's own specific data, to produce high-fidelity and reliable results.
What I like about this preprint
The main point I like about this preprint is that it hugely promotes open science. Significant barriers exist that prevent even experienced microscopists from accessing the deep-learning-based tools that are revolutionizing the field of image analysis. This work endeavours to give everyone, regardless of level of expertise, access to the latest advances in image analysis. Furthermore, it encourages scientists with diverse expertise to continue contributing to ZeroCostDL4Mic. Moreover, beyond the knowledge barrier being addressed, the video tutorials and other training material are very user-friendly and freely accessible. It is my belief that the microscopy community (at all levels of image analysis expertise) will greatly benefit from this important resource.
Questions for the authors
*Note: all questions with answers are shown at the end of this highlight.
- In your discussion on the future perspectives of ZeroCostDL4Mic, you mention that you expect to grow the number of networks available. Will it be possible to compare the output of multiple networks so as to define the most suitable for specific analyses?
- You discuss in your work the need to re-train models, and to use one’s own specific data. Large imaging repositories are not yet a reality, but if there were, could you incorporate this to address your discussion point on pre-trained models, and to build altogether stronger models for multiple types of data?
- Is there a way ZeroCostDL4Mic can join efforts with resources such as BIAFLOWS, as the purpose of accessibility and training is shared?
- One of your purposes is for ZeroCostDL4Mic to grow in terms of the number of networks available. Following from the question above, have you considered the possibility that ZeroCostDL4Mic could guide users on the choice of network, based on input regarding the type of image (e.g. super-resolution, time-lapse, etc.) and the expected type of analysis?
- I may have asked this of various other authors, but wouldn't an image repository be of great use for resources such as yours and those of others? And for the scientific community in general?
References
- von Chamier et al, ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy, bioRxiv, 2020.
- Ronneberger et al, U-Net: convolutional networks for biomedical image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 234-241, Springer, 2015.
- Schmidt et al, Cell detection with star-convex polygons, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 265-273, Springer, 2018.
- Weigert et al, Content-aware image restoration: pushing the limits of fluorescence microscopy, Nature Methods, 15:1090-1097, 2018.
- Krull et al, Noise2Void - learning denoising from single noisy images, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2129-2137, 2019.
- Ounkomol et al, Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy, Nature Methods, 15(11):917-920, 2018.
I thank Ricardo Henriques for his input and engagement, and Mate Palfy for his helpful suggestions.
Posted on: 24th March 2020