Neural network control of focal position during time-lapse microscopy of cells

Bibliographic Details
Main Authors: Ling Wei, Elijah Roberts
Format: article
Language: EN
Published: Nature Portfolio, 2018
Subjects: R, Q
Online Access: https://doaj.org/article/c2496b00bc6c46dfbd0577b8fbb0bc40
Description
Summary: Live-cell microscopy is quickly becoming an indispensable technique for studying the dynamics of cellular processes. Maintaining the specimen in focus during image acquisition is crucial for high-throughput applications, especially for long experiments or when a large sample is continuously scanned. Automated focus-control methods are often expensive, imperfect, or ill-adapted to a specific application, and they are a bottleneck to the widespread adoption of high-throughput, live-cell imaging. Here, we demonstrate a neural network approach for automatically maintaining focus during bright-field microscopy. Z-stacks of yeast cells growing in a microfluidic device were collected and used to train a convolutional neural network to classify images according to their z-position. We studied the effect of the network's hyperparameters, including downsampling, batch size, and z-bin resolution, on prediction accuracy. The network was able to predict the z-position of an image with ±1 μm accuracy, outperforming human annotators. Finally, we used the network to control microscope focus in real time during a 24-hour growth experiment. The method robustly maintained the correct focal position, compensating for 40 μm of focal drift, and was insensitive to changes in the field of view. Only ~100 annotated z-stacks were required to train the network, making the method practical for custom autofocus applications.
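
To make the approach concrete, below is a minimal sketch in PyTorch of the two pieces the abstract describes: a small convolutional network that classifies a downsampled bright-field image into discrete z-position bins, and a closed-loop step that converts the predicted bin into a relative stage move. The architecture, the class name ZPositionNet, the helpers grab_image and move_stage_z, and the choice of 41 bins at 1 μm spacing are illustrative assumptions; the paper's actual network and microscope interface are not reproduced here.

import torch
import torch.nn as nn

class ZPositionNet(nn.Module):
    # Small CNN that classifies a single-channel bright-field image into
    # one of n_bins discrete z-position classes (hypothetical architecture).
    def __init__(self, n_bins=41):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),  # fixed-size features regardless of input size
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_bins)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def correct_focus(model, grab_image, move_stage_z, n_bins=41, bin_um=1.0):
    # One closed-loop correction: predict the current z-bin from a live
    # image, then command the stage by the offset from the in-focus bin.
    # grab_image() and move_stage_z() stand in for a real microscope API.
    model.eval()
    with torch.no_grad():
        img = grab_image()                          # tensor of shape (1, 1, H, W)
        pred_bin = model(img).argmax(dim=1).item()
        center = n_bins // 2                        # bin index of the in-focus plane
        move_stage_z((center - pred_bin) * bin_um)  # relative move back toward focus, in μm

During a long acquisition, calling correct_focus between frames would counter gradual drift, mirroring the 24-hour closed-loop experiment described above; training the classifier on the ~100 annotated z-stacks is a separate, standard supervised-learning step.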