
Machine learning for image restoration

JAN 03, 2019
A neural network improves image quality when it knows what to look for.

Fluorescence microscopy usually involves a trade-off between producing a quality image and keeping the sample healthy. Illuminating the sample with higher laser power strengthens the fluorescent signal but risks damaging the biological specimen and photobleaching fluorescent dyes. Imaging at a slower frame rate with lower laser power often produces high-quality images but sacrifices information in samples that move.

[Figure: CARE restoration of a low-light fluorescence image]

When such compromises hinder the recording of high-quality images, researchers often try to improve the images after the fact. To that end, Loïc Royer at the Chan Zuckerberg Biohub in San Francisco and Martin Weigert, Florian Jug, and Eugene Myers at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, have developed content-aware image restoration (CARE), a convolutional neural network trained on features specific to the system being observed. Using pairs of images—high-resolution ground-truth images and low-resolution images of the same areas—CARE learns to improve the low-resolution ones. Once trained, it can quickly improve the quality of unseen low-resolution images.
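The training setup can be illustrated with a deliberately simplified sketch. The snippet below fits a single global affine mapping from a synthetic low-signal image to its high-signal counterpart by least squares; CARE instead trains a convolutional neural network on many such registered pairs, which lets it learn local, content-specific features rather than one global correction. All images and parameters here are synthetic stand-ins, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth" (high-signal) image and a dim, noisy version of it,
# standing in for the registered image pairs used to train CARE.
truth = rng.uniform(0.0, 1.0, size=(64, 64))
low = 0.3 * truth + 0.05 + rng.normal(0.0, 0.01, size=truth.shape)

# Fit a global affine restoration, truth ~ a * low + b, by least squares.
# A real CARE network replaces this single mapping with a convolutional
# network trained on many image pairs.
A = np.stack([low.ravel(), np.ones(low.size)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, truth.ravel(), rcond=None)

restored = a * low + b
err_before = np.mean((low - truth) ** 2)
err_after = np.mean((restored - truth) ** 2)
print(err_before, err_after)  # the fitted mapping reduces reconstruction error
```

Once the mapping (or, for CARE, the trained network) is in hand, it can be applied to new low-signal images that were never part of the training set.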

The first test for CARE was a flatworm whose muscles reflexively flinch under even moderate amounts of light. The researchers used dead worm samples to generate pairs of training images for the CARE network. Once it was trained, the network turned low-light images of worms with fluorescently dyed cell nuclei into high-resolution images, as illustrated in the figure.

CARE also improved feature identification in systems that were developing over time, such as beetle embryos and fruit-fly epithelia. The researchers maintained their ability to identify individual cell nuclei—and sometimes improved it—in images taken with 1/60 as much light as was needed without CARE. With less light used per image, researchers can record at a higher frame rate or for longer times.

Weigert, Royer, Jug, and colleagues investigated the reliability of CARE-restored images by training multiple networks and comparing their output images. Often, all were in close agreement, which indicated a reliable result. Disagreement between the networks allowed the researchers to identify possible inaccuracies in the restored images. (M. Weigert et al., Nat. Methods 15, 1090, 2018.)
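The idea behind that reliability check can be sketched in a few lines: restore the same field of view with several independently trained models, then treat the per-pixel spread of their outputs as a disagreement map. The images, noise levels, and the flagging threshold below are all invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Restorations of the same field of view from five independently trained
# networks (synthetic stand-ins). The models agree everywhere except in one
# region where each one "hallucinates" a different answer.
base = rng.uniform(size=(32, 32))
restorations = []
for _ in range(5):
    r = base + rng.normal(0.0, 0.005, size=base.shape)    # small overall noise
    r[10:15, 10:15] += rng.normal(0.0, 0.5, size=(5, 5))  # model-specific disagreement
    restorations.append(r)

stack = np.stack(restorations)
disagreement = stack.std(axis=0)  # per-pixel spread across the networks

# Flag pixels whose spread is far above the typical level; the factor of 10
# is an arbitrary heuristic, not a value from the paper.
flag = disagreement > 10 * np.median(disagreement)
print(flag.sum())  # flagged pixels concentrate in the disagreement region
```

Pixels where the ensemble agrees can be trusted more than pixels where it scatters, which is how disagreement points to possible inaccuracies in a restoration.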

