The Pains of Scientific Image Processing

11.10.2018
Image Processing

Like many of you, I want my work to make a positive contribution to the well-being of society. I would like to make a difference in the lives of patients suffering from diseases like cancer, Alzheimer’s, and diabetes – diseases that may one day be curable through a deeper understanding of core cellular processes. Unfortunately, many researchers waste valuable time wrestling with scientific image processing tasks instead.

Just like in the 1990s, routine image processing tasks are still done manually

For my latest project, I was lucky enough to meet and talk with quite a few researchers in the fields of neuroscience and cancer research. I was surprised to learn how much valuable research time is wasted on routine manual tasks. In fact, their microscopy workflows were as manual as mine from a couple of decades ago.

Back in the mid-1990s, I used my first microscope, a Cambridge (now ZEISS) Stereoscan scanning electron microscope (SEM), to figure out how to make Indian wedding saris less expensive. The research challenge was to replace some of the gold with copper and to maximize the extrudability of the gold fiber, all without losing any luster.

The process was mostly manual, all the way from sample preparation to results and report generation. The workflow started with capturing a backscattered electron (BSE) image on 4 x 5 Polaroid film.

Manual image processing is not as accurate as researchers need it to be

The aim was to estimate the area fraction of each phase in the BSE images. To do this, I placed a transparency slide with a printed grid over the Polaroid image and counted the number of times dark and bright regions intercepted the grid axes. I repeated the process with the grid rotated 45 degrees to account for any preferred orientation of the phases. I even applied smaller and smaller grids to improve accuracy, but I was never happy, as I knew there were some regions (pixels) that I didn’t include in the measurement.
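Today, the same measurement is a few lines of code. Below is a minimal sketch of the digital analogue of that grid counting, assuming a grayscale BSE image held in a NumPy array and an illustrative intensity threshold of 128; both are placeholders, not values from the original workflow. At a grid step of one pixel, every pixel is counted, which is exactly the accuracy the printed grids could never reach.

```python
import numpy as np

def area_fraction(image, threshold=128, grid_step=10):
    """Estimate the bright-phase area fraction by sampling a regular grid of
    points, the digital analogue of counting intercepts on a printed grid."""
    bright = image > threshold                  # binarize: bright phase vs. dark phase
    samples = bright[::grid_step, ::grid_step]  # keep every grid_step-th pixel
    return samples.mean()                       # fraction of grid points on the bright phase

# Synthetic stand-in for a BSE image: one quarter bright, three quarters dark.
img = np.zeros((400, 400), dtype=np.uint8)
img[:200, :200] = 255

for step in (50, 10, 1):  # finer and finer grids
    print(f"grid step {step}: area fraction = {area_fraction(img, grid_step=step):.3f}")
# At grid_step=1 every pixel is included, so the estimate equals the
# true area fraction (0.25 for this synthetic image).
```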

Interesting fact: the magnification value on SEM images, even those from modern SEMs, is still reported by referencing the scan size against 4 x 5 Polaroid film.

Image Processing Software is no silver bullet for your scientific needs

A miracle occurred the following year; the department got funding to purchase software!

We purchased Adobe Photoshop 3.0, a great piece of software, but not one built for microscopy and scientific image processing. Still, Photoshop came with enough tools to automate and speed up part of my workflow, and the partially digitized workflow improved both accuracy and repeatability. For each sample, I ended up with a pile of images, XRD spectra, pole figures, and the accompanying data and metadata. It was a pain to keep track of it all and to share results with my adviser.

Today, the situation doesn’t seem to be much different; we still adapt existing software to our needs. There are great commercial software packages for image processing, but I think people mistake them for the one silver bullet that will solve all their problems. Some offer solutions for specific niche areas but are not very useful for most applications. One of the researchers I talked with expressed frustration at how much of her time is wasted importing and exporting images between multiple pieces of software that do not talk to each other. Another researcher explained, “I am tired of attending machine learning talks on campus. I am a neuroscientist, not a data scientist. I need a button I can push to get reliable results so I can interpret them and design my next experiment.”

“I am a neuroscientist, not a data scientist”

Lack of easy-to-use code sharing causes a lot of reinvention

The latest developments in machine learning and AI, especially in Python, have made it relatively easy to work with large datasets. I think Python is wonderful, and for many people it is a good option because it gives anyone access to advanced algorithms. The problem is that researchers have to adapt these algorithms to their specific research needs, which is not easy with bulky code. Very few people can do this, and they leave when they find ‘permanent’ positions elsewhere. The next researcher then wastes time reinventing the same code because the previous code was poorly documented or written in a different programming language.
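To illustrate what that access to advanced algorithms looks like, here is a toy sketch, assuming scikit-image is installed and using its bundled sample image; it is an illustration of the style, not any particular lab’s pipeline.

```python
from skimage import data, filters, measure

# Toy example: automatic segmentation and object counting in a few lines.
image = data.coins()                       # bundled sample grayscale image
threshold = filters.threshold_otsu(image)  # data-driven threshold, no hand tuning
mask = image > threshold                   # binary foreground mask
labels = measure.label(mask)               # connected-component labelling
print(f"Otsu threshold: {threshold}, objects found: {labels.max()}")
```

A handful of lines like these can replace hours of manual counting; the hard part, as the researchers above describe, is adapting and maintaining them for a specific assay once the person who wrote them has moved on.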

The amount of time researchers waste is appalling: rewriting code, hunting for the right software, moving files between pieces of software, converting file formats, waiting for results on outdated computers, sharing results, and bringing all the data together to write reports.

Saving valuable research time is saving lives

As part of the arivis Cloud team, I am working with an incredibly bright group of people who are devoted to solving the tough challenges researchers face in their microscopy workflows. We believe we can make a difference by automating workflows and reducing the overhead of routine tasks. I believe that every hour a researcher saves is an hour put back into research, and potentially an hour added to someone’s life.

This is why I love my job; I get to help researchers do what they want to do – research.

Roman Zinner

Content Manager
