Making sense of imaging data has traditionally been, and largely still is, a tedious process. At the foundation of much neuroscientific research lies neuron detection and segmentation. Whether you aim to map brain structures in 3D, understand the wiring between different parts of the brain, or characterize the damage that diseases like Alzheimer's cause to brain structures, it all starts with neuron detection and segmentation. Only with a good segmentation in hand does the extraction of relevant statistical data become possible.
In our approach, we acquired confocal Airyscan data of mouse brain with a ZEISS LSM 880. We used Thy1-GFP-M mice, perfused and postfixed overnight, and produced vibratome sections of 50 µm thickness. The images are representative of the data many neuroscientists collect to answer a variety of scientific questions.
We made use of the machine-learning module ZEISS ZEN Intellesis and spent some time annotating the parts of the images we wanted to analyze. More concretely, we took two of the confocal image stacks and invested about one day painting examples of neurons, background, and nuclei. In each stack we labeled our structures of interest in about 10 slices. Training the model then took the machine roughly 90 minutes.
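To make the idea concrete: trainable segmentation tools of this kind are, conceptually, pixel classifiers trained on sparse brush annotations. The sketch below is not ZEISS's implementation, only a minimal illustration of the principle using scikit-learn: per-pixel features (here, the raw intensity and two Gaussian smoothings) are fed to a random forest, which is fit only on the painted pixels and then predicts a class for every pixel.

```python
# Conceptual sketch of trainable pixel classification (illustrative only,
# not the actual Intellesis implementation).
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def extract_features(image):
    """Per-pixel features: raw intensity plus Gaussian smoothings at two scales."""
    feats = [image,
             ndimage.gaussian_filter(image, sigma=1),
             ndimage.gaussian_filter(image, sigma=3)]
    return np.stack([f.ravel() for f in feats], axis=1)

def train_pixel_classifier(image, labels):
    """labels: 0 = unlabeled, 1..K = sparsely painted classes."""
    X = extract_features(image)
    y = labels.ravel()
    painted = y > 0                     # train only on annotated pixels
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[painted], y[painted])
    return clf

def segment(clf, image):
    """Predict a class per pixel, plus a per-pixel certainty (max class probability)."""
    X = extract_features(image)
    seg = clf.predict(X).reshape(image.shape)
    prob = clf.predict_proba(X).max(axis=1).reshape(image.shape)
    return seg, prob

# Toy example: a bright square ("neuron") on a dark background.
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (64, 64))
img[20:40, 20:40] += 0.8
lab = np.zeros(img.shape, dtype=int)
lab[25:35, 25:35] = 1                   # a few "neuron" strokes
lab[0:5, 0:5] = 2                       # a few "background" strokes
clf = train_pixel_classifier(img, lab)
seg, prob = segment(clf, img)
```

Note that the classifier segments the whole image from only two small painted patches; this is exactly why annotating roughly 10 slices per stack can be enough.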
Once you have trained a model, the first thing you want to do is apply it to your images. You could run it from your local workstation with ZEISS ZEN Intellesis, and that would be perfectly fine. However, it would block your workstation for as long as the segmentation takes, time you could spend on further image processing or visualization tasks. And we all know workstations are in short supply.
We therefore propose an approach that is easy to execute, repeatable, and shareable: the cloud computing platform APEER offers a workflow editor that lets you set up a neuron segmentation workflow.
Our neuron segmentation workflow starts with an input module to upload images or to start the workflow directly from the microscope. The next module converts the file format (in our case, a CZI image) and sends the files to the machine-learning module for segmentation. The training files we generated offline are now used to segment our dataset. Afterwards, we obtain two resulting image series: the segmentation itself and the probability map, a measure of the certainty of our segmentation. Both image series are converted back into image stacks by additional modules.
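The module chain above can be sketched as a simple pipeline. The function names and the thresholding "model" below are purely illustrative stand-ins (they are not APEER's actual API); the point is the data flow: normalized input slices go through a segmentation step that yields two output series, which are then reassembled into stacks.

```python
# Hypothetical sketch of the workflow's module chain; names and logic are
# illustrative stand-ins, not APEER's real modules.
import numpy as np

def convert_input(stack):
    """Stand-in for the format-conversion module: normalize the raw stack."""
    return stack.astype(np.float32) / stack.max()

def run_segmentation(model_threshold, slices):
    """Stand-in for the ML module: returns (segmentation, probability) per slice.
    A trained model would replace this trivial threshold."""
    seg = [(s > model_threshold).astype(np.uint8) for s in slices]
    prob = [np.abs(s - model_threshold) for s in slices]  # crude certainty proxy
    return seg, prob

def assemble_stack(slices):
    """Stand-in for the stack-conversion modules: slices back into one stack."""
    return np.stack(slices)

# Toy 2-slice "acquisition" flowing through the pipeline.
stack = np.arange(2 * 4 * 4, dtype=np.float64).reshape(2, 4, 4)
norm = convert_input(stack)
seg_slices, prob_slices = run_segmentation(0.5, list(norm))
seg_stack = assemble_stack(seg_slices)
prob_stack = assemble_stack(prob_slices)
```

Structuring the processing as independent modules like this is what makes the workflow easy to extend: any step can be swapped or followed by further analysis modules without touching the rest of the chain.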
After running the first image through the workflow, we received a segmentation like the one below in our APEER account. With the online viewer, an initial visualization is right at your fingertips.
Finally, our example here is just the beginning; the workflow is easily extended with additional modules that use the segmentation for more in-depth analyses.
In a nutshell, this approach offers you a fast and reliable route to neuron detection and segmentation. You can make the whole workflow, or individual steps of it, accessible to your lab or to the international collaborators you write papers with. Additionally, once you have published a paper based on this research, you can make it reproducible with one click. No more "I tried to replicate your analysis, but I did not succeed." The opportunities are endless. Welcome to the future.