Learning to see colours: Biologically relevant virtual staining for adipocyte cell images.

Fluorescence microscopy, which visualizes cellular components with fluorescent stains, is an invaluable method in image cytometry. From these images various cellular features can be extracted. Together these features form phenotypes that can be used to determine effective drug therapies, such as those based on nanomedicines. Unfortunately, fluorescence microscopy is time-consuming, expensive, labour intensive, and toxic to the cells. Bright-field images lack these downsides but also lack the clear contrast of the cellular components and hence are difficult to use for downstream analysis. Generating the fluorescence images directly from bright-field images using virtual staining (also known as "label-free prediction" and "in-silico labeling") can get the best of both worlds, but can be very challenging for poorly visible cellular structures in the bright-field images. To tackle this problem, deep learning models were explored to learn the mapping between bright-field and fluorescence images for adipocyte cell images. The models were tailored for each imaging channel, paying particular attention to the various challenges in each case, and those with the highest fidelity in extracted cell-level features were selected. The solutions included utilizing privileged information for the nuclear channel, and using image gradient information and adversarial training for the lipids channel. The former resulted in better morphological and count features and the latter resulted in more faithfully captured defects in the lipids, which are key features required for downstream analysis of these channels.
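To illustrate the kind of per-channel objective the abstract describes for the lipids channel (pixel-wise reconstruction combined with image-gradient information and adversarial training), the following PyTorch sketch may help. It is a minimal, hypothetical example only: the network architectures, loss weights, and discriminator design are illustrative assumptions and not the authors' published implementation.

```python
# Hypothetical sketch of a lipid-channel virtual-staining objective
# (bright-field -> lipid fluorescence). Architectures, loss weights, and the
# discriminator are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def image_gradients(img: torch.Tensor):
    """Finite-difference spatial gradients (emphasise droplet borders and defects)."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return dx, dy


def gradient_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 distance between the image gradients of prediction and target."""
    pdx, pdy = image_gradients(pred)
    tdx, tdy = image_gradients(target)
    return F.l1_loss(pdx, tdx) + F.l1_loss(pdy, tdy)


class PatchDiscriminator(nn.Module):
    """Small conditional discriminator for the adversarial term (assumed design)."""

    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, brightfield: torch.Tensor, fluorescence: torch.Tensor):
        # Condition on the bright-field input, as in conditional image-to-image GANs.
        return self.net(torch.cat([brightfield, fluorescence], dim=1))


def generator_objective(generator, discriminator, brightfield, fluorescence,
                        w_rec=1.0, w_grad=1.0, w_adv=0.1):
    """Reconstruction + gradient + adversarial loss; the weights are assumptions."""
    pred = generator(brightfield)
    rec = F.l1_loss(pred, fluorescence)        # pixel-wise fidelity
    grad = gradient_loss(pred, fluorescence)   # edge/texture fidelity for lipid defects
    logits = discriminator(brightfield, pred)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return w_rec * rec + w_grad * grad + w_adv * adv, pred
```

In a full training loop this generator step would be alternated with a standard discriminator update on real versus predicted image pairs. The privileged-information approach mentioned for the nuclear channel is not sketched here.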

Bibliographic Details
Main authors: Håkan Wieslander, Ankit Gupta, Ebba Bergman, Erik Hallström, Philip John Harrison
Format: article
Language: EN
Published: Public Library of Science (PLoS), 2021
Subjects: Medicine (R), Science (Q)
Online access: https://doaj.org/article/f5a33d28812c4130a3416306c4697143
DOI: https://doi.org/10.1371/journal.pone.0258546
ISSN: 1932-6203
Published in: PLoS ONE, Vol 16, Iss 10, p e0258546 (2021)