APEER Release Notes | April 2019

25.4.2019

Welcome to APEER’s April 2019 release. Over the last few weeks we concentrated mainly on content and visuals. This release’s key highlights are:

  • New Landing Page
  • APEER Challenges
  • Contact Us
  • Train U-Net on APEER

New Landing Page

We completely redesigned our landing page to give a quick and easy overview of APEER’s vision and the core features of our platform.

APEER’s vision is to help imaging specialists create one complete workflow on a single platform, instead of juggling numerous different software solutions. APEER therefore enables researchers to create their own modules and combine them into customized workflows. Imaging students with no prior coding knowledge can build workflows just as easily by reusing existing public modules. Modules and workflows can also be shared with team members or with the whole APEER community, which helps increase productivity and reproducibility in research.

APEER Challenges

The community aspect is also an important part of APEER. One of the key visions of our platform is to bring together researchers and developers from around the world so they can share their knowledge and skills and benefit from each other. Our new APEER Challenges bring us closer to this vision. If you work in image processing and would like to contribute to the community by solving problems, head over to our challenges page, where you can browse the challenges and get your hands on curated data. If you are facing an image processing problem yourself, you can seek help from the APEER community: simply submit your challenge to us and we will publish it on our website.

Contact Us

Our new Contact Us form now lets you select a specific area of support. Just tell us what’s on your mind and the right person will get back to you.

Train U-Net on APEER

With our new Supervised Segmentation Trainer module you can now train a deep neural network (U-Net) to segment the objects of interest in your images, such as cells, brain tissue, or particles. This fully supervised segmentation model is trained on raw images and their corresponding ground-truth masks. You can also tune the training hyper-parameters without making any changes to the source code.
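To give a feel for what the module trains under the hood, here is a minimal, illustrative U-Net-style encoder-decoder in Keras. This is not the APEER module’s actual code; all layer sizes, the input shape, and the compile settings are example values you would normally set through the module’s hyper-parameter options.

```python
# Minimal U-Net-style sketch for binary segmentation (illustrative only).
from tensorflow.keras import layers, models

def build_mini_unet(input_shape=(64, 64, 1)):
    inputs = layers.Input(shape=input_shape)
    # Encoder: two downsampling blocks
    c1 = layers.Conv2D(8, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(16, 3, activation="relu", padding="same")(p1)
    p2 = layers.MaxPooling2D(2)(c2)
    # Bottleneck
    b = layers.Conv2D(32, 3, activation="relu", padding="same")(p2)
    # Decoder with skip connections (the defining U-Net feature)
    u1 = layers.UpSampling2D(2)(b)
    u1 = layers.concatenate([u1, c2])
    c3 = layers.Conv2D(16, 3, activation="relu", padding="same")(u1)
    u2 = layers.UpSampling2D(2)(c3)
    u2 = layers.concatenate([u2, c1])
    c4 = layers.Conv2D(8, 3, activation="relu", padding="same")(u2)
    # One sigmoid channel: per-pixel probability of "object of interest"
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return models.Model(inputs, outputs)

model = build_mini_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Training such a model on the platform then amounts to feeding it pairs of raw images and ground-truth masks; on APEER this happens through the module interface rather than in code.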

If you have already trained a Keras model, you can use our Segmentation Mask Generator module to apply it to an unseen set of images. All you need to do is provide the module with the list of images and the pre-trained Keras model; the module takes care of the rest.
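Conceptually, applying a pre-trained model works like the sketch below: load the saved Keras model, run it on a batch of images, and threshold the per-pixel probabilities into binary masks. This is an assumed illustration of the workflow, not the module’s actual API; the file name and the `predict_masks` helper are placeholders.

```python
# Illustrative sketch: apply a pre-trained Keras model to unseen images.
import numpy as np
from tensorflow.keras.models import load_model

def predict_masks(model, images, threshold=0.5):
    """Run the model on a stack of images and binarize the output.

    images: float array of shape (n, height, width, channels),
            scaled to the range the model was trained on.
    """
    probabilities = model.predict(images)            # per-pixel probabilities
    return (probabilities > threshold).astype(np.uint8)  # binary masks

# Example usage (file name is a placeholder, not a real APEER path):
# model = load_model("pretrained_unet.h5")
# masks = predict_masks(model, image_stack)
```

On APEER the same steps run inside the module, so you only supply the image list and the model file.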

Thomas Irmer

Software Developer
