Multimodal Medical Image Fusion Based on Gabor Representation Combination of Multi-CNN and Fuzzy Neural Network

Bibliographic Details
Main Authors: Lifang Wang, Jin Zhang, Yang Liu, Jia Mi, Jiong Zhang
Format: Article
Language: EN
Published: IEEE, 2021
Online Access: https://doaj.org/article/c56b7a66d70d45ce82060e474d890372
Summary: Current multimodal medical image fusion methods cannot fully characterize the complex textures and edge information of lesions in the fused image. To address this, a fusion method based on Gabor representations, a combination of multiple CNNs, and a fuzzy neural network is proposed. The method first filters the CT and MR image sets through a bank of Gabor filters at different scales and orientations to obtain pairs of Gabor representations of CT and MR. Each representation pair is used to train a corresponding CNN, producing a G-CNN; the multiple G-CNNs together form a G-CNN group (the G-CNNs). When fusing a CT and MR image pair, the two images are first decomposed into Gabor representation pairs; each pair is fed into its corresponding trained G-CNN for preliminary fusion, and the fuzzy neural network then fuses the multiple G-CNN outputs into the final fused image. Compared with nine recent state-of-the-art multimodal fusion methods, the average mutual information in the three groups of experiments increased by 13%, 10.3%, and 10%, respectively; the average spatial frequency increased by 10.3%, 20%, and 10.7%; the average standard deviation increased by 12.4%, 10.8%, and 14.4%; and the average edge-retention information increased by 33.5%, 22%, and 43%. The experimental results show that the proposed method is significantly better than the comparison methods in both objective evaluation and visual quality: it performs best on all four indicators and better integrates the rich texture features and clear edge information of the source images into the final fused image, improving the quality of multimodal medical image fusion and effectively assisting doctors in disease diagnosis.
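As a concrete illustration of the decomposition step, the sketch below builds a Gabor filter bank and produces the CT/MR representation pairs the abstract describes. This is a minimal sketch under stated assumptions: the number of scales and orientations (4 × 6), all kernel parameters, and the synthetic input images are illustrative choices, since the record does not specify them, and the G-CNN and fuzzy-neural-network stages are indicated only in comments.

```python
import cv2
import numpy as np

def gabor_bank(scales=(7, 11, 15, 19), n_orient=6):
    """Build a Gabor filter bank over several scales and orientations."""
    kernels = []
    for ksize in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient  # filter orientation
            kern = cv2.getGaborKernel(
                (ksize, ksize), sigma=0.4 * ksize, theta=theta,
                lambd=0.5 * ksize, gamma=0.5, psi=0)
            kernels.append(kern / np.abs(kern).sum())  # normalize energy
    return kernels

def gabor_representations(image, kernels):
    """Return one filtered copy of the image per kernel in the bank."""
    img = image.astype(np.float32)
    return [cv2.filter2D(img, cv2.CV_32F, k) for k in kernels]

# Stand-ins for a registered CT/MR slice pair (real use would load images).
ct = np.random.rand(256, 256).astype(np.float32)
mr = np.random.rand(256, 256).astype(np.float32)

kernels = gabor_bank()
# Each (CT, MR) representation pair would be fed to its own trained G-CNN
# for preliminary fusion; a fuzzy neural network would then combine the
# G-CNN outputs into the final fused image (both stages omitted here).
pairs = list(zip(gabor_representations(ct, kernels),
                 gabor_representations(mr, kernels)))
print(len(pairs))  # 4 scales x 6 orientations = 24 representation pairs
```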