
Interactive design of GPU-accelerated Image Data Flow Graphs and cross-platform deployment using multi-lingual code generation

Robert Haase, Akanksha Jain, Stéphane Rigaud, Daniela Vorkel, Pradeep Rajasekhar, Theresa Suckert, Talley J. Lambert, Juan Nunez-Iglesias, Daniel P. Poole, Pavel Tomancak, Eugene W. Myers

Posted on: 20 December 2020

Preprint posted on 20 November 2020

Image data flow graphs - democratising image analysis!

Selected by Mariana De Niz

Background

Modern research in the life sciences relies heavily on fluorescence microscopy and subsequent quantitative image analysis. Making image analysis algorithms available to a broader audience increases the need for accessible tools for building graphics processing unit (GPU)-accelerated image analysis workflows. The rise of GPUs in image processing enables batch processing of large amounts of image data at unprecedented speed. However, designing data analysis procedures that use GPUs usually requires programming expertise and knowledge of GPU-specific programming languages. Haase and colleagues (1) present an expert system based on the GPU-accelerated image processing library CLIJ and demonstrate the construction of complete image analysis pipelines by assembling workflows from operations provided by the CLIJ framework. The user interface, the CLIJ-assistant, allows interactive design of image data flow graphs (IDFGs) in ImageJ or Fiji, while guiding the user by keeping track of which operations formed an image and suggesting subsequent operations. Operations, their parameters and their connections in the IDFG are stored at any point in time, enabling the CLIJ-assistant to offer an undo function with virtually unlimited rewinding of parameter changes. The CLIJ-assistant can generate code from IDFGs in multiple programming languages for later use in multiple image analysis platforms. The CLIJ-assistant is open source and available online at https://clij.github.io/assistant/.

Figure 1. An IDFG allows assembly of image processing workflows and display of intermediate results in real-time (Ref 1).

 

Key findings and developments

Key developments

Design. The CLIJ-assistant was implemented within the ImageJ user interface to make it accessible to users. In the background, the CLIJ-assistant manages an IDFG. Image data are propagated through the graph in one direction only, without loops. The approach is similar to the user interfaces of Icy and KNIME, with the advantage that the intermediate results of the whole graph are always images, which can be updated instantly in ImageJ’s user interface while the user changes parameters. To optimize performance, image stacks are kept in the GPU’s memory, and pushing and pulling image data to and from GPU memory is minimized. To visualize the state of the graph, IDFG windows differ from standard ImageJ windows in their frame colour, which reflects the graph’s execution state: a red frame indicates that the shown image is invalid and will be recomputed once intermediate results higher in the graph hierarchy are available, a yellow frame indicates an ongoing computation, and a green frame indicates that a computation has finished. When a window is moved, all downstream windows move with it, giving the user an impression of the connections between graph nodes. This way of interacting with the imaging data lets users learn image processing and analysis, and the relationships between operations, more efficiently, because it keeps technical implementation details out of sight.
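As a rough, hypothetical sketch of this push-once/process-on-GPU/pull-once pattern, written here with the pyclesperanto_prototype Python package rather than with the assistant itself (the file name, operations and parameter values are illustrative):

# Illustrative sketch of the push/process/pull pattern (not an assistant export).
from skimage.io import imread
import pyclesperanto_prototype as cle

image = imread("colon_stack.tif")        # placeholder file: load a 3D stack on the CPU
gpu_input = cle.push(image)              # copy it to GPU memory once

# intermediate results stay on the GPU; nothing is copied back yet
blurred = cle.gaussian_blur(gpu_input, sigma_x=2, sigma_y=2, sigma_z=2)
binary = cle.threshold_otsu(blurred)
labels = cle.connected_components_labeling_box(binary)

result = cle.pull(labels)                # copy only the final result back to the CPU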

For every processed image, the CLIJ-assistant can backtrack, so that it is not necessary to store intermediate results of various parameter configurations in computer memory. The possibility of rewinding to former parameter settings brings virtually unlimited undo functionality to Fiji. Parameter changes can also be documented within and between projects.

Available operations and extensibility. Following installation of the CLIJ-assistant, about 249 image processing operations become available. The CLIJ-assistant uses ImageJ2 plugin mechanisms to automatically discover additional CLIJ-compatible plugins; custom third-party operations can therefore be introduced as graph nodes.

Expert system. Image analysis workflows can be thought of as assemblies of operations. In recent years, a search bar and auto-completion of commands in the script editor were introduced to guide users and make the steps of an image analysis workflow easier to find and apply. However, neither the search bar nor the auto-completion suggests what to do next. To overcome this limitation, the authors implemented an expert system in the CLIJ-assistant. Expert systems are a form of artificial intelligence developed, for instance, to help users in decision-making. Here, the expert system guides users in choosing the right operations step by step by making context-dependent suggestions based on previously executed operations. Nevertheless, all operations remain available under all circumstances from Fiji’s search bar, giving the user full access to all CLIJ-assistant-compatible operations.
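As a toy illustration of how such context-dependent suggestions could work (this is not the CLIJ-assistant’s actual implementation; the operation names and counts below are invented), candidate operations can be ranked by how often they followed the most recently executed operation in earlier workflows:

# Toy example: rank candidate next operations by how often they followed
# the last executed operation (names and counts are invented).
transition_counts = {
    "Gaussian Blur": {"Threshold Otsu": 41, "Top Hat": 12, "Maximum Z Projection": 5},
    "Threshold Otsu": {"Connected Components Labeling": 30, "Binary Fill Holes": 9},
}

def suggest_next(last_operation, k=3):
    followers = transition_counts.get(last_operation, {})
    return sorted(followers, key=followers.get, reverse=True)[:k]

print(suggest_next("Gaussian Blur"))  # ['Threshold Otsu', 'Top Hat', 'Maximum Z Projection']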

Semi-automated parameter optimization. The authors introduce a simple annotation tool based on ImageJ’s ROI manager together with an automatic parameter optimization tool. They recommend starting the optimization from a good manual initial guess.
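The highlight does not spell out the optimization routine; the following hypothetical Python sketch (using the pyclesperanto_prototype package, with all names and values illustrative) only shows the general idea of scoring candidate parameter settings against a manual annotation, here a binary ROI mask, and keeping the best-scoring one:

# Illustrative brute-force parameter optimization against a manual annotation:
# each candidate sigma is scored by overlap (Jaccard index) with a user-drawn ROI mask.
import numpy as np
import pyclesperanto_prototype as cle

def jaccard(binary_a, binary_b):
    a, b = binary_a.astype(bool), binary_b.astype(bool)
    return np.logical_and(a, b).sum() / max(np.logical_or(a, b).sum(), 1)

def optimize_sigma(image, roi_mask, candidate_sigmas=(1, 2, 3, 4, 5)):
    gpu_image = cle.push(image)
    best_sigma, best_score = None, -1.0
    for sigma in candidate_sigmas:
        blurred = cle.gaussian_blur(gpu_image, sigma_x=sigma, sigma_y=sigma)
        binary = cle.pull(cle.threshold_otsu(blurred))
        score = jaccard(binary, roi_mask)
        if score > best_score:
            best_sigma, best_score = sigma, score
    return best_sigma, best_score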

Code generation for automation, documentation and knowledge exchange. The authors implemented code generation capabilities in the CLIJ-assistant using an IDFG as the starting point. After an IDFG has been set up, configured, and optimized, the graph can be exported in various programming languages, and scripts in multiple languages can be compared side by side. The exported scripts allow the user to go beyond ImageJ and Fiji, because they target programming languages used on other platforms. Supported languages are ImageJ Macro, Icy JavaScript, MATLAB, Fiji Groovy, Fiji JavaScript, and Fiji Jython; support for Python and C++ is under development and available for prototype testing. To foster reproducibility of image analysis procedures through clear documentation, human-readable protocols of the IDFG can also be exported. These protocols can be used to communicate the applied image processing workflow to scientists using other platforms or to people without coding experience.

 

Key applications

The authors demonstrate the capabilities of the CLIJ-assistant in four use cases spanning gut neuroscience, developmental biology, and cancer research.

Gut neuroscience context: Interactions between the enteric nervous system and the resident immune cells are important for the normal functioning of the gut, and in disease the spatial organization of these cells can be disrupted. The authors combined light sheet fluorescence microscopy (LSFM) and optical clearing to acquire images of a mouse colon in which different structures were fluorescently labeled. The challenge of this dataset lies in separating the different layers of the gut within the image and in viewing the cell types within each layer. The authors developed and demonstrated a multi-step workflow to address how these cells are spatially organized in the gut in health and disease.

Developing Tribolium embryos: Gastrulation is a major developmental event in an organism’s life, during which tissues in the embryo undergo 2D cell shape changes and 3D volumetric changes as they acquire their final shapes. The authors developed and demonstrated a workflow to understand the contribution of cell behaviours to tissue morphogenesis in developing embryos. They also investigated Tribolium embryo development upon digital removal of the serosa, an outer extra-embryonic protective layer with different cell shapes and mechanical properties along the dorsal-ventral axis. For this, the authors developed an IDFG to identify and selectively remove the serosa digitally.

Cell classification on 2D histological mouse brain sections: The main question here was how anti-cancer treatments affect both the tumour itself and the surrounding healthy tissue. A reporter mouse line offers the possibility to study the effects of radiation on cells, including quantifying the damaged cell fraction and how it correlates with the applied radiation dose.

What I like about this preprint

I like the fact that the CLIJ-assistant offers huge versatility and applicability, and that the authors actively encourage users to take part in open calls for contributions. I think this is a fantastic development for the scientific community, including everyone using microscopy who has so far been limited by a lack of coding knowledge. The CLIJ-assistant is consistent with the philosophy of democratizing science and producing open science.

References

  1. Haase et al, Interactive design of GPU-accelerated image data flow graphs and cross-platform deployment using multi-lingual code generation, bioRxiv, 2020.

 

doi: https://doi.org/10.1242/prelights.26574


Author's response

Robert Haase shared

Open questions 

1. This is a fantastic tool! What is the range of microscopes whose output can be analysed using the CLIJ-assistant? You showed applicability to some techniques here as proof of principle. Do you expect variations in performance depending on the input? For instance, would it support analysis of super-resolution microscopy, intravital imaging, motility assays, or optoacoustic imaging?

RH: Thanks for the flowers! The idea for the toolbox came up while I dived into developmental biology and light sheet microscopy with my collaborators. Thus, if you work with four-dimensional imaging data of specimens developing over time, such as beetle or fly embryos, potentially with multiple channels, these tools will be most beneficial for you, independent of the microscopy technique. That also means that if your image data have fewer dimensions, for example multichannel three-dimensional image stacks of mouse colon or large two-dimensional slices of mouse brain, the tools will work as well. I am personally most intrigued by studying neighborhood relationships of cells forming tissues, which is why the CLIJ-assistant might be best suited for answering questions at the cellular level. I have been asked by some collaborators whether CLIJ helps with reconstruction of super-resolution microscopy data and considered for a moment extending CLIJ in this direction. However, this would mean reinventing the wheel: such data can be treated well using the NanoJ toolbox, which is, by the way, also GPU-accelerated using OpenCL [https://iopscience.iop.org/article/10.1088/1361-6463/ab0261].

2. You mention that although the assistant suggests the next steps in a given workflow, all functions are always available to the users. How does this impact workflow performance, for instance, if you add steps the expert system would not have predicted?

RH: Performance-wise there should be no drawback from not following the expert system’s suggestions. When processing data of a kind that no involved expert has worked with before, steps have to be taken which are not suggested. These scenarios are important for the fourth question you ask below: we can make the expert system better for such scenarios by expanding its knowledge base.

Furthermore, the expert system is only aware of a subset of CLIJ’s operations. Some operations are hidden in the user interface because they are easy to use from scripting but hard to make available in a graphical user interface. Those operations allow designing image processing workflows which are more flexible, and potentially also faster, than workflows generated by the assistant. Thus, the CLIJ-assistant and its expert system are a starting point for building workflows and for writing the first scripts. Learning to program CLIJ with ImageJ Macro or Python is still worth the effort: with custom scripting you can go further than with generated scripts.

3. Can you expand briefly on the limiting factors for the CLIJ-assistant under different experimental conditions and types of data? Do you have troubleshooting advice for a range of scenarios, implemented perhaps as a help wizard?

RH: After the publication of CLIJ [1], during discussions with collaborators I realized that the major limitation in building image processing workflows in general is a common knowledge gap about which steps could be applied to data, how they could be applied, and in which order they should be applied. The implemented expert system demonstrates how we can overcome this limitation, but it is not yet built on the expert knowledge of a broad audience from various fields. Image processing in neuroscience, for example, typically consists of different processing steps than in developmental biology. Imagine we had the chance to build up a knowledge base from neuroscience bio-image analysis experts and experts from other fields. When starting the CLIJ-assistant, you could then choose “Show me suggestions from the neuroscience community” and the suggestions would fit your data better. We are getting there, but it may take some time.

4. You encourage users to take part in open calls to contribute to expanding this tool. Can you expand further on this, namely what range of contributions is expected, how they would help the CLIJ-assistant grow, and how they would be implemented?

RH: You are referring to the CLIJ usage miner [https://clij.github.io/usage-miner/], a Fiji plugin that extracts a knowledge base from a folder of ImageJ macros which use CLIJ. After scientists have worked with CLIJ for some time and saved their processing workflows as macro files to a folder, the usage miner can scan those files and extract a knowledge base which can be shared with the community. This knowledge base is a human-readable text file with descriptive statistics on which operations are called after one another in the macro files scanned during the extraction process. If, for example, you often use the operation “Threshold Otsu” after “Gaussian Blur”, these two operations will be connected with a high number in the text file. If we had many of those text files from many scientists working with different kinds of data, the expert system could make better suggestions. Thus, after you have worked with CLIJ in a couple of projects, go to the usage miner website and follow the instructions there.
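To make the underlying statistic concrete, here is a rough Python sketch of counting which CLIJ operation follows which across a folder of ImageJ macro files (this is not the usage miner’s implementation or its file format; the folder name is a placeholder):

# Illustrative sketch: count consecutive pairs of CLIJ/CLIJ2 calls in ImageJ macros.
import re
from collections import Counter
from pathlib import Path

CALL = re.compile(r"Ext\.CLIJ2?_(\w+)\s*\(")  # matches Ext.CLIJ_... and Ext.CLIJ2_... calls

pair_counts = Counter()
for macro_file in Path("my_macros").glob("*.ijm"):  # placeholder folder of saved macros
    operations = CALL.findall(macro_file.read_text())
    pair_counts.update(zip(operations, operations[1:]))

for (first, second), count in pair_counts.most_common(10):
    print(f"{first} -> {second}: {count}")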

Furthermore, as the CLIJ-assistant comes with a Fiji plugin generator, you can share these plugins with collaborators. What we haven’t mentioned explicitly in the preprint: if you design CLIJ Fiji plugins which might be useful to others, it would be an honor if you provided them to become part of the next CLIJ release, making them available to a broader community. Our community guidelines website explains how we work together [https://clij.github.io/clij2-docs/community_guidelines].

Last but not least, feedback is so important. I would like to thank all the scientists who tested CLIJ and the CLIJ-assistant and provided feedback. Computer scientists like myself look at microscopy images differently than biologists do; our fascination comes from a different perspective. Thus, to make the most versatile toolbox for analyzing biological imaging data, we need to work together, communicate a lot, and exchange ideas, data, solutions and expert knowledge. Any kind of feedback is very welcome and I encourage users to get in touch. Let’s make the next CLIJ, which will be called “clEsperanto” [http://clesperanto.net/], a truly interdisciplinary, cross-platform, multi-lingual adventure in the space between pixels and cells!

 
