DeepImageTranslator: a free, user-friendly graphical interface for image translation using deep-learning and its applications in 3D CT image analysis

Run Zhou Ye, Christophe Noll, Gabriel Richard, Martin Lepage, Éric E. Turcotte, André C. Carpentier

Preprint posted on 17 May 2021

Article now published in SLAS Technology.

DeepImageTranslator: A new open-access graphical interface that enables researchers with no programming experience to design and evaluate deep-learning models for image translation applications.

Selected by Afonso Mendes

Categories: bioinformatics


Imaging is a fundamental tool in biomedical research, but its use requires interpreting the acquired images to detect and annotate relevant features (i.e., image classification). The development of imaging frameworks led to high-throughput methods that generate large image libraries, so manually processing such large-scale datasets becomes impractical, error-prone, and poorly reproducible. Deep-learning algorithms are computer processing systems with an architecture inspired by the neural networks composing animal brains. They consist of interconnected layers of nodes (or neurons), where each neuron receives input from neurons in the previous layer and sends a processed output to neurons in the next layer. Convolutional neural networks (CNNs), a class of deep-learning models particularly suited to images, excel at performing tasks involving pattern detection and decision-making. The advent of deep-learning algorithms had a disruptive effect on image translation, drastically improving the ability to analyse large image datasets and extract relevant information [1-3]. However, using these methods typically requires computer programming experience, which limits their application in biomedical research. A small number of open-access graphical interfaces enable researchers with no programming experience to apply deep-learning algorithms to image translation, but they usually deploy previously designed algorithms with a fixed architecture [4]. In this preprint, Ye et al. developed a simple and free graphical interface that allows inexperienced users to create, train, and evaluate their own deep-learning models for image translation.
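To make the convolution idea concrete, here is a minimal, self-contained sketch (in Python with NumPy; not taken from the preprint) of the sliding-window operation at the heart of a convolutional layer, applied with a simple edge-detecting filter:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over the image and compute the weighted sum at each
    position (valid padding, stride 1) -- the core operation of a
    convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: it responds strongly where intensity jumps
# from dark to bright, illustrating how convolutional filters pick up
# local patterns.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                  # left half dark, right half bright
kernel = np.array([[-1.0, 1.0]])
response = conv2d(image, kernel)    # peaks exactly at the dark/bright edge
```

A trained CNN learns many such kernels automatically, stacking them in layers so that later layers detect increasingly abstract patterns.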


Key Findings:

1) Development of DeepImageTranslator: a simple and open-access graphical interface that enables the creation, training, and evaluation of deep-learning pipelines for image translation.

The authors start by presenting the software they developed: a simple, open-access graphical interface encompassing several key features. First, it includes a main window to visualize the training, validation, and test datasets (Fig. 1a). Another window contains options to select the type of model optimiser, loss function, training metrics, batch size, and number of epochs/iterations (Fig. 1b). Moreover, it includes a window that enables the modulation of the CNN’s architectural features, such as the number and type of convolutional layers (Fig. 1c). It also includes a window to monitor the training process (Fig. 1d) and another with options to modulate the data augmentation scheme (Fig. 1e). The neural network employed follows the general structure of U-Net [5].


Figure 1 – Showcase of the graphical interface’s features. a, The main window enables the visualization of the training, validation, and test datasets. b, A training hyperparameter selection window includes options to select the model optimizer, loss function, training metrics, batch size, and number of epochs/iterations. c, The model builder window allows for the modulation of the neural network’s architectural features, such as the number and type of convolutional layers. d, The training process can be monitored in the monitoring window. e, A window dedicated to the data augmentation scheme allows the user to modulate several options. Adapted from Figure 1 of the preprint.
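The models built in the interface follow the U-Net encoder–decoder layout. Purely as an illustration (the input size, base channel count, and depth below are hypothetical defaults, not the authors' exact configuration), this sketch traces how feature-map size and channel depth change through such a network — the kind of architectural knobs the model-builder window exposes:

```python
def unet_shapes(input_size=256, base_channels=64, depth=4):
    """Trace (spatial size, channels) through a U-Net-style network.
    Hypothetical defaults for illustration only."""
    # Encoder: each level halves the spatial size and doubles channels.
    encoder = []
    size, ch = input_size, base_channels
    for _ in range(depth):
        encoder.append((size, ch))
        size, ch = size // 2, ch * 2
    bottleneck = (size, ch)
    # Decoder: mirrors the encoder; skip connections concatenate the
    # matching encoder features before each up-convolution block.
    decoder = [(s, c) for s, c in reversed(encoder)]
    return encoder, bottleneck, decoder

encoder, bottleneck, decoder = unet_shapes()
```

Changing `depth` or `base_channels` here corresponds to adding/removing convolutional levels or widening the network, which is what the interface lets users do without writing code.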


2) DeepImageTranslator can be applied to computed tomography (CT) datasets for segmentation tasks and produces models with a high degree of generalisability.

In the next sections, the authors demonstrate the application of their software using CT image libraries. The segmentation of different features in an image is a crucial task in image translation. The authors created a CNN capable of performing a segmentation task: differentiating subcutaneous adipose tissue, visceral adipose tissue, and lean tissues in CT images (Fig. 2a). The generalisability of the model is showcased in several ways. The neural network was capable of performing the segmentation task on CT images of the legs, thorax, and scapular regions, even though it was trained using images from the abdominal region (Fig. 2b). Moreover, the model performed well regardless of the subjects’ body weight, body composition, and gender (Fig. 2c). Importantly, the model achieved remarkably high predictive power using a training sample as small as 17 images, outperforming previously reported models based on significantly larger datasets.


Figure 2 – Assessment of out-of-sample model generalisability based on scans from a severely obese male subject and a very lean female subject. a, Model generalisability in the obese male subject. The left panel shows non-segmented input images. The subcutaneous adipose tissue, visceral adipose tissue, and lean tissues were manually segmented (middle panel) and compared to the segmentation output from the neural network, which performed well. b, Although trained using images of the abdominal region, the model accurately performed the segmentation task on input images from the legs (top), thorax (middle), and shoulder (bottom). c, The model was also able to perform the segmentation task on images from the lean female subject (left panel), highlighting its generalisability regardless of the subjects’ gender, body weight, and body composition. Adapted from Figure 6 of the preprint.
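A standard way to quantify how well a predicted mask matches a manual one is the Dice similarity coefficient. The implementation below is a generic sketch (not the authors' code) showing how such overlap scores are computed for binary segmentation masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = none."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # two empty masks agree trivially
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: the predicted mask is shifted one column off the truth,
# so only half of each mask overlaps.
truth = np.zeros((4, 4))
truth[:, :2] = 1
pred = np.zeros((4, 4))
pred[:, 1:3] = 1
overlap = dice(pred, truth)
```

In multi-class segmentation such as the tissue task here, one Dice score is typically computed per class (subcutaneous fat, visceral fat, lean tissue) and then averaged.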


3) DeepImageTranslator enables noise reduction for thoracic CT images.

The versatility of the neural networks produced using DeepImageTranslator is finally showcased by using the software for another crucial task in image translation – noise reduction. The authors used a dataset of thoracic CT images that were artificially degraded with added noise. The model was able to considerably reduce the noise introduced in the images and produce predictions that were almost indistinguishable from the noiseless originals (Fig. 3). Interestingly, the model was able to recover details of the pulmonary vasculature invisible in the noisy images.


Figure 3 – Assessment of model performance and generalisability for noise reduction. a, Noisy input images of the thoracic region (left panel) were artificially generated from the original images (middle panel) and used to test the noise-reduction performance of the neural network by making predictions (right panel). b, Close-ups of the images provided to and retrieved from the model, demonstrating the model’s ability to recover fine details of the pulmonary vasculature that were masked by noise in the input images. c, Although trained with images of the thoracic region, the model performed well in making predictions from images of the legs (top), abdomen (middle), and shoulder (bottom), highlighting its generalisability. Adapted from Figure 9 of the preprint.
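In the preprint the denoising itself is done by the trained network; purely to illustrate the evaluation setup (synthetic noise plus a fidelity metric), here is a NumPy sketch that uses a trivial 3×3 mean filter as a stand-in denoiser and measures quality with the peak signal-to-noise ratio (PSNR):

```python
import numpy as np

rng = np.random.default_rng(0)

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher = closer to the reference."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic "clean" image and an artificially noisy version of it,
# mimicking how the authors generated their noisy inputs.
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

# 3x3 mean filter over the valid region as a stand-in denoiser;
# the preprint uses a trained U-Net-style network instead.
denoised = np.zeros((62, 62))
for i in range(62):
    for j in range(62):
        denoised[i, j] = noisy[i:i + 3, j:j + 3].mean()

psnr_noisy = psnr(clean, noisy)
psnr_denoised = psnr(clean[1:63, 1:63], denoised)
# Averaging suppresses independent noise, so PSNR rises after filtering.
```

A learned denoiser is evaluated the same way, but can recover structure (such as fine vasculature) that simple averaging would blur away.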


Why I think this work is important:

As stated throughout this post, deep-learning models excel at performing crucial image translation tasks for biomedical research, such as segmentation and noise reduction. A major caveat of this approach is that it usually requires programming experience, which is not a common skill among biomedical researchers. The development of tools such as DeepImageTranslator facilitates access to this technology for inexperienced users and is important for enabling its widespread application. While other projects providing user-friendly access to deep-learning tools for image translation are available [4], DeepImageTranslator goes one step further and enables the user to modulate architectural elements of the neural network and evaluate its capability to perform specific tasks.


Questions for the authors:

  • Do you plan to support the software, for example, by providing updates in the future? If so, did you consider including a framework to share deep-learning pipelines between users?


  • Although you demonstrate that commonly used computers were able to perform the intended tasks in 5 to 17 hours, do you plan to use cloud-based processing to enable users with low-end computers to employ your software?



[1] Yasaka et al. (2018) Deep learning with convolutional neural network in radiology. Jpn J Radiol. 36(4):257-272. doi: 10.1007/s11604-018-0726-3.

[2] Chi et al. (2019) Computed tomography image quality enhancement via a uniform framework integrating noise estimation and super-resolution networks. Sensors. 19(15):3348. doi: 10.3390/s19153348.

[3] Xiang et al. (2018) Deep embedding convolutional neural network for synthesizing CT image from T1-weighted MR image. Medical Image Analysis. 47:31-44. doi: 10.1016/

[4] Chamier et al. (2021) Democratising deep learning for microscopy with ZeroCostDL4Mic. Nature Communications. 12:2276. doi: 10.1038/s41467-021-22518-0.

[5] Falk et al. (2019) U-Net: deep learning for cell counting, detection, and morphometry. Nature Methods. 16:67-70. doi: 10.1038/s41592-018-0261-2.

Tags: deep learning, image analysis, neural network, noise reduction, segmentation

Posted on: 3 June 2021

