Cross-Model Transformer Method for Medical Image Synthesis

Acquiring complementary information about tissue morphology from multimodal medical images benefits clinical disease diagnosis, but multimodal scanning is not widely used due to its cost. Medical image synthesis has therefore become an active research area. Recently, generative adversarial network (GAN) models have been applied to many medical image synthesis tasks and show superior performance, since they can capture structural details clearly. However, GANs are still built mainly on convolutional neural networks (CNNs), which exhibit a strong locality bias and spatial invariance through weights shared across all positions, so long-range dependencies are lost in this processing. To address this issue, we introduce a double-scale deep learning method for cross-modal medical image synthesis. Specifically, the proposed method captures local features via a CNN-based local discriminator and exploits long-range dependencies to learn global features through a transformer-based global discriminator. To evaluate the effectiveness of the double-scale GAN, we conduct extensive experiments on the standard IXI benchmark dataset, and the results demonstrate the effectiveness of our method.
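The abstract contrasts two discriminator mechanisms: a CNN-based local discriminator, whose shared-weight convolution sees only one small window at a time, and a transformer-based global discriminator, whose self-attention relates every position to every other. A minimal NumPy sketch of that contrast, assuming toy shapes and untrained weights (the function names, patch-token scheme, and scoring by a simple mean are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def local_score(img, kernel):
    """CNN-style local discriminator: one shared kernel slid over the
    image, so each response depends only on a small k x k neighborhood."""
    H, W = img.shape
    k = kernel.shape[0]
    responses = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            responses.append(np.sum(img[i:i + k, j:j + k] * kernel))
    return float(np.mean(responses))

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_score(img, patch=4):
    """Transformer-style global discriminator: the image is split into
    patch tokens, and self-attention mixes information across ALL tokens,
    capturing long-range dependencies a local convolution cannot see."""
    H, W = img.shape
    tokens = np.stack([img[i:i + patch, j:j + patch].ravel()
                       for i in range(0, H, patch)
                       for j in range(0, W, patch)])      # (N, patch*patch)
    d = tokens.shape[1]
    attn = softmax(tokens @ tokens.T / np.sqrt(d))        # (N, N): each row attends to every token
    mixed = attn @ tokens                                 # globally mixed token features
    return float(mixed.mean())
```

Here each convolution response is a function of one window only, while every row of the attention matrix is a convex combination over all patch tokens; the paper's double-scale GAN combines both signals so that neither local texture nor global structure is neglected.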

Bibliographic Details
Main Authors: Zebin Hu, Hao Liu, Zhendong Li, Zekuan Yu
Format: article
Language: EN
Published: Hindawi-Wiley, 2021
Subjects: Electronic computers. Computer science (QA75.5-76.95)
Online Access: https://doaj.org/article/6acf5f7704c9454cba88120c0adc4688
id: oai:doaj.org-article:6acf5f7704c9454cba88120c0adc4688
record_format: dspace
ISSN: 1099-0526
DOI: 10.1155/2021/5624909 (http://dx.doi.org/10.1155/2021/5624909)
Published in: Complexity, Vol 2021 (2021)