MOCOnet: Robust Motion Correction of Cardiovascular Magnetic Resonance T1 Mapping Using Convolutional Neural Networks

Bibliographic Details
Main Authors: Ricardo A. Gonzales, Qiang Zhang, Bartłomiej W. Papież, Konrad Werys, Elena Lukaschuk, Iulia A. Popescu, Matthew K. Burrage, Mayooran Shanmuganathan, Vanessa M. Ferreira, Stefan K. Piechnik
Format: Article
Language: EN
Published: Frontiers Media S.A., 2021
Online Access: https://doaj.org/article/b23d9ebf5a7e48dd830b04b907644f6b
Description
Summary:

Background: Quantitative cardiovascular magnetic resonance (CMR) T1 mapping has shown promise for advanced tissue characterisation in routine clinical practice. However, T1 mapping is prone to motion artefacts, which affect its robustness and clinical interpretation. Current methods for motion correction on T1 mapping are model-driven, with no guarantee of generalisability, limiting their widespread use. In contrast, emerging data-driven deep learning approaches have shown good performance in general image registration tasks. We propose MOCOnet, a convolutional neural network solution, for generalisable motion artefact correction in T1 maps.

Methods: The network architecture employs a U-Net for producing distance vector fields and utilises warping layers to apply deformation to the feature maps in a coarse-to-fine manner. Using the UK Biobank imaging dataset scanned at 1.5T, MOCOnet was trained on 1,536 mid-ventricular T1 maps (acquired using the ShMOLLI method) with motion artefacts generated by a customised deformation procedure, and tested on a different set of 200 samples with a diverse range of motion. MOCOnet was compared to a well-validated baseline multi-modal image registration method. Motion reduction was visually assessed by three human experts, with motion scores ranging from 0% (strictly no motion) to 100% (very severe motion).

Results: MOCOnet achieved fast image registration (<1 second per T1 map) and successfully suppressed a wide range of motion artefacts. MOCOnet significantly reduced motion scores from 37.1±21.5 to 13.3±10.5 (p < 0.001), whereas the baseline method reduced them to 15.8±15.6 (p < 0.001). MOCOnet suppressed motion artefacts significantly better and more consistently than the baseline method (p = 0.007).

Conclusion: MOCOnet demonstrated significantly better motion correction performance compared to a traditional image registration approach. Salvaging motion-affected data robustly and in a time-efficient manner may enable better image quality and reliable images for immediate clinical interpretation.
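To make the Methods description concrete, the sketch below illustrates the general idea of a learning-based, coarse-to-fine registration network that predicts a displacement field and applies it through a differentiable warping layer. It is a much-simplified, hypothetical PyTorch example, not the authors' MOCOnet implementation: the class and function names (CoarseToFineRegNet, warp), the plain convolutional blocks used in place of the paper's U-Net, the two pyramid levels, the warping of images rather than intermediate feature maps, and all layer sizes are assumptions made for illustration only.

```python
# Minimal sketch of coarse-to-fine, CNN-based image registration with a
# warping layer (assumed PyTorch implementation; all details illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(image, flow):
    """Warp an image batch (N, C, H, W) with a dense displacement field (N, 2, H, W).

    The displacement is assumed to be expressed in the normalised [-1, 1]
    coordinate convention used by F.grid_sample.
    """
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=image.device),
        torch.linspace(-1.0, 1.0, w, device=image.device),
        indexing="ij",
    )
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    sample_grid = identity + flow.permute(0, 2, 3, 1)  # (N, H, W, 2), (x, y) order
    return F.grid_sample(image, sample_grid, align_corners=True)


class CoarseToFineRegNet(nn.Module):
    """Two-level toy registration network: a coarse displacement field is
    estimated at half resolution, used to pre-warp the moving image, and a
    residual field is then estimated at full resolution."""

    def __init__(self, ch=16):
        super().__init__()
        def block():
            return nn.Sequential(
                nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, 2, 3, padding=1),  # 2 output channels: (dx, dy)
            )
        self.coarse = block()
        self.fine = block()

    def forward(self, moving, fixed):
        # Coarse level: estimate flow at half resolution, then upsample it.
        # Because the flow is in normalised units, no rescaling is needed.
        small = F.interpolate(torch.cat([moving, fixed], 1), scale_factor=0.5,
                              mode="bilinear", align_corners=True)
        flow_c = self.coarse(small)
        flow_c = F.interpolate(flow_c, size=moving.shape[-2:],
                               mode="bilinear", align_corners=True)
        warped_c = warp(moving, flow_c)

        # Fine level: predict a residual flow on the pre-warped image.
        # Summing the two fields is a simple approximation of composition.
        flow = flow_c + self.fine(torch.cat([warped_c, fixed], 1))
        return warp(moving, flow), flow


if __name__ == "__main__":
    # Example: align one (hypothetical) frame of a T1-mapping series to a target frame.
    net = CoarseToFineRegNet()
    moving = torch.rand(1, 1, 192, 192)
    fixed = torch.rand(1, 1, 192, 192)
    warped, flow = net(moving, fixed)
    print(warped.shape, flow.shape)  # (1, 1, 192, 192) and (1, 2, 192, 192)
```

In a registration setting such as the one described in the abstract, a network of this kind would typically be trained with an image-similarity loss between the warped and target frames plus a smoothness penalty on the displacement field; the specific losses and training procedure used for MOCOnet are not detailed in this record.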