Deep Vision for Breast Cancer Classification and Segmentation
(1) Background: The odds of a female breast cancer diagnosis have risen from 1 in 11 in 1975 to 1 in 8 today. Mammography false positive rates (FPR) are associated with overdiagnosis and overtreatment, while false negative rates (FNR) increase morbidity and mortality. (2) Methods: Deep vision supervised learning classifies 299 × 299 pixel de-noised mammography images as negative or non-negative, using models built on 55,890 pre-processed training images and applied to 15,364 unseen test images. A small image representation from the fitted training model is returned to evaluate the portion of the loss-function gradient, taken with respect to the input image, that maximizes the classification probability. This gradient is then re-mapped onto the original image, highlighting the areas most influential for the classification (possibly masses or boundary areas). (3) Results: Initial classification results were 97% accurate, 99% specific, and 83% sensitive. Gradient techniques for unsupervised region-of-interest mapping clearly identified the areas most associated with the classification results on positive mammograms and might be used to support clinician analysis. (4) Conclusions: Deep vision techniques hold promise for addressing overdiagnosis and overtreatment, underdiagnosis, and automated region-of-interest identification in mammography.
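The abstract does not name the network architecture; it gives only the 299 × 299 input size, which happens to match the default input of Keras backbones such as Xception and InceptionV3. The following is a minimal sketch of the kind of binary (negative vs. non-negative) classifier described, assuming transfer learning with an Xception backbone; the backbone choice, head layers, and hyperparameters are illustrative assumptions, not the authors' confirmed design.

```python
# Hypothetical sketch of a negative/non-negative mammogram classifier.
# Assumption: Xception backbone; the abstract states only the 299x299
# input size, which matches this architecture's default.
import tensorflow as tf
from tensorflow import keras

# Grayscale mammograms would be replicated across 3 channels to reuse
# ImageNet weights (an assumption; the paper's preprocessing may differ).
IMG_SHAPE = (299, 299, 3)

def build_classifier() -> keras.Model:
    base = keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=IMG_SHAPE, pooling="avg")
    x = keras.layers.Dropout(0.3)(base.output)   # regularization choice is ours
    prob = keras.layers.Dense(1, activation="sigmoid",
                              name="p_non_negative")(x)
    model = keras.Model(base.input, prob)
    model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy",
                           keras.metrics.Recall(name="sensitivity"),
                           keras.metrics.AUC(name="auc")])
    return model

model = build_classifier()
# model.fit(train_ds, validation_data=test_ds)  # 55,890 train / 15,364 test images
```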
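The gradient re-mapping described in (2) reads like a vanilla gradient saliency map in the sense of Simonyan et al. (2014): the gradient of the predicted class probability with respect to the input pixels is computed and overlaid on the mammogram. A sketch of that step under the same assumptions, with illustrative function and variable names not taken from the paper; the authors' exact "small image representation" detail cannot be reconstructed from the abstract alone, so only the generic gradient-to-pixel mapping is shown.

```python
# Hypothetical sketch of the gradient-based region-of-interest mapping.
import numpy as np
import tensorflow as tf

def saliency_map(model: tf.keras.Model, image: np.ndarray) -> np.ndarray:
    """Gradient of the class probability w.r.t. the input pixels.

    Returns a 299x299 heat map; large values mark pixels whose perturbation
    most changes the classification -- candidate regions of interest
    (e.g., masses or boundary areas) when overlaid on the mammogram.
    """
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)                        # track gradients w.r.t. the input
        prob = model(x, training=False)[0, 0]
    grads = tape.gradient(prob, x)[0]        # dP/dpixel, shape (299, 299, C)
    sal = tf.reduce_max(tf.abs(grads), axis=-1)   # collapse channels
    sal = sal - tf.reduce_min(sal)                # normalize to [0, 1]
    sal = sal / (tf.reduce_max(sal) + 1e-8)
    return sal.numpy()
```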
Main Authors: Lawrence Fulton, Alex McLeod, Diane Dolezel, Nathaniel Bastian, Christopher P. Fulton
Format: article
Language: EN
Published: MDPI AG, 2021-10-01
Journal: Cancers, Vol 13, Iss 21, p 5384 (2021)
DOI: 10.3390/cancers13215384
ISSN: 2072-6694
Subjects: deep vision; breast cancer; machine learning; region of interest detection; Neoplasms. Tumors. Oncology. Including cancer and carcinogens (RC254-282)
Online Access: https://doaj.org/article/93a512ab233d49478864a87120cc3ad8
Full Text: https://www.mdpi.com/2072-6694/13/21/5384
Record ID: oai:doaj.org-article:93a512ab233d49478864a87120cc3ad8