Direct pixel to pixel principal strain mapping from tagging MRI using end to end deep convolutional neural network (DeepStrain)
Abstract: Regional soft-tissue mechanical strain offers crucial insight into a tissue's mechanical function and provides vital indicators for related disorders. Tagging magnetic resonance imaging (tMRI) has been the standard method for assessing the mechanical characteristics of organs such as the heart, the liver, and the brain. However, constructing accurate, artifact-free pixelwise strain maps at the native resolution of the tagged images has for decades been a challenging, unsolved task. In this work, we developed an end-to-end deep-learning framework for pixel-to-pixel mapping of the two-dimensional Eulerian principal strains $\varepsilon_{p1}$ and $\varepsilon_{p2}$ directly from 1-1 spatial modulation of magnetization (SPAMM) tMRI at native image resolution using a convolutional neural network (CNN). Four deep-learning conditional generative adversarial network (cGAN) approaches were examined. Validations were performed using Monte Carlo computational model simulations and in-vivo datasets, and compared to the harmonic phase (HARP) method, a conventional and validated method for tMRI analysis, with six different filter settings. Principal strain maps of Monte Carlo tMRI simulations with various anatomical, functional, and imaging parameters demonstrate solid, artifact-free agreement with the corresponding ground-truth maps. Correlations with the ground-truth strain maps were R = 0.90 and 0.92 for the best-proposed cGAN approach, compared to R = 0.12 and 0.73 for the best HARP method, for $\varepsilon_{p1}$ and $\varepsilon_{p2}$, respectively. The proposed cGAN approach's error was substantially lower than that of the best HARP method at all strain ranges. In-vivo results are presented for both healthy subjects and patients with cardiac conditions (pulmonary hypertension). Strain maps, obtained directly from their corresponding tagged MR images, depict for the first time anatomical, functional, and temporal details at pixelwise native high resolution with unprecedented clarity. This work demonstrates the feasibility of using the deep-learning cGAN for direct myocardial and liver Eulerian strain mapping from tMRI at native image resolution with minimal artifacts.
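The abstract maps tagged images to the two-dimensional Eulerian principal strains, which are the per-pixel eigenvalues of the 2D strain tensor. The paper's cGAN is not reproduced here; as a minimal illustrative sketch, the sketch below computes principal strains from a known displacement field using the small-strain (infinitesimal) approximation, which is an assumption of this example rather than the paper's method:

```python
import numpy as np

def principal_strains(ux, uy, spacing=1.0):
    """Per-pixel 2D principal strains from a displacement field.

    Illustrative sketch only (NOT the paper's cGAN mapping): uses the
    small-strain approximation e = 1/2 (grad u + grad u^T), where
    ux, uy are 2D arrays of x- and y-displacement indexed [row=y, col=x].
    Returns (eps_p1, eps_p2) with eps_p1 >= eps_p2 at every pixel.
    """
    ux = np.asarray(ux, dtype=float)
    uy = np.asarray(uy, dtype=float)
    # np.gradient returns derivatives along axis 0 (y) then axis 1 (x)
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    exx = dux_dx
    eyy = duy_dy
    exy = 0.5 * (dux_dy + duy_dx)
    # Closed-form eigenvalues of the symmetric 2x2 tensor at each pixel
    mean = 0.5 * (exx + eyy)
    radius = np.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    return mean + radius, mean - radius
```

For a uniform stretch ux = 0.1·x, uy = -0.05·y, this returns maps that are constant at 0.1 and -0.05, matching the imposed principal strains.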
Saved in:
Main Authors: | Khaled Z. Abd-Elmoniem, Inas A. Yassine, Nader S. Metwalli, Ahmed Hamimi, Ronald Ouwerkerk, Jatin R. Matta, Mia Wessel, Michael A. Solomon, Jason M. Elinoff, Ahmed M. Ghanem, Ahmed M. Gharib |
Format: | article |
Language: | EN |
Published: | Nature Portfolio, 2021 |
Subjects: | Medicine (R); Science (Q) |
Online Access: | https://doaj.org/article/289b609cbbd642d3b7f30fe70e97a18b |
id |
oai:doaj.org-article:289b609cbbd642d3b7f30fe70e97a18b |
record_format |
dspace |
spelling |
oai:doaj.org-article:289b609cbbd642d3b7f30fe70e97a18b | 2021-11-28T12:20:17Z | DOI: 10.1038/s41598-021-02279-y | ISSN: 2045-2322 | Published: 2021-11-01 | https://doi.org/10.1038/s41598-021-02279-y | https://doaj.org/toc/2045-2322 | Scientific Reports, Vol 11, Iss 1, Pp 1-20 (2021) |
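The validation described above compares reconstructed strain maps to ground truth via Pearson correlation (the reported R = 0.90, 0.92, etc.). The paper's exact evaluation code is not available; a hypothetical sketch of such a pixelwise comparison, with an optional tissue mask (an assumption of this example), might look like:

```python
import numpy as np

def strain_map_correlation(pred, truth, mask=None):
    """Pearson R between a predicted strain map and its ground truth.

    Illustrative sketch, not the paper's evaluation code: `pred` and
    `truth` are 2D strain maps; `mask` (optional boolean array) restricts
    the comparison to tissue pixels.
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    if mask is None:
        mask = np.ones(pred.shape, dtype=bool)
    # Flatten the masked pixels and correlate the two samples
    return np.corrcoef(pred[mask].ravel(), truth[mask].ravel())[0, 1]
```

A map that is an exact affine transform of the ground truth yields R = 1.0, so this metric measures linear agreement in spatial pattern, not absolute strain values.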
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
Medicine (R); Science (Q) |
format |
article |
author |
Khaled Z. Abd-Elmoniem; Inas A. Yassine; Nader S. Metwalli; Ahmed Hamimi; Ronald Ouwerkerk; Jatin R. Matta; Mia Wessel; Michael A. Solomon; Jason M. Elinoff; Ahmed M. Ghanem; Ahmed M. Gharib |
author_sort |
Khaled Z. Abd-Elmoniem |
title |
Direct pixel to pixel principal strain mapping from tagging MRI using end to end deep convolutional neural network (DeepStrain) |
publisher |
Nature Portfolio |
publishDate |
2021 |
url |
https://doaj.org/article/289b609cbbd642d3b7f30fe70e97a18b |
_version_ |
1718408001874821120 |