Feature fusion for inverse synthetic aperture radar image classification via learning shared hidden space


Saved in:
Bibliographic Details
Main Authors: Wenhao Lin, Xunzhang Gao
Format: Article
Language: EN
Published: Wiley 2021
Subjects:
Online Access: https://doaj.org/article/982ad391af2947dc9064a06a82077124
Description
Summary: Multi-sensor fusion recognition is a meaningful task in inverse synthetic aperture radar (ISAR) image recognition. Compared with a single sensor, multi-sensor fusion provides richer target information, which supports more accurate and robust identification. However, previous deep learning-based fusion methods do not effectively handle the redundancy and complementarity of information between different sources. In this letter, we construct a shared hidden space to align features from different sources. Accordingly, we design an end-to-end deep fusion framework that fuses dual ISAR images at the feature level. To combine the multi-source information, deep generalised canonical correlation analysis (DGCCA) is used as a loss term to map features extracted from the dual inputs onto the shared hidden space. Moreover, we propose an efficient and lightweight spatial attention module, named the united attention module, which can be embedded between dual-stream convolutional neural networks (CNNs) to promote DGCCA optimisation through information interaction. Compared with other deep fusion frameworks, our model achieves competitive performance in ISAR image fusion for classification.
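The core idea behind the DGCCA loss described above is to project features from each view so that their representations are maximally correlated in a shared space. The paper's method uses deep CNN feature extractors and the multi-view DGCCA generalisation; the sketch below illustrates only the underlying principle with classical two-view linear CCA in NumPy, on synthetic data with a shared latent factor (all function names and data shapes here are illustrative, not from the paper):

```python
import numpy as np

def linear_cca(X, Y, k, reg=1e-4):
    """Find projections of two feature views onto a shared k-dim space
    that maximise correlation (classical linear CCA; DGCCA generalises
    this to multiple views with deep feature extractors)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])  # view-1 covariance
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])  # view-2 covariance
    Sxy = Xc.T @ Yc / n                             # cross-covariance
    # Whiten each view (inverse matrix square root), then take the SVD
    # of the whitened cross-covariance: singular values are the
    # canonical correlations.
    Ex, Vx = np.linalg.eigh(Sxx)
    Ey, Vy = np.linalg.eigh(Syy)
    iSx = Vx @ np.diag(Ex ** -0.5) @ Vx.T
    iSy = Vy @ np.diag(Ey ** -0.5) @ Vy.T
    U, s, Vt = np.linalg.svd(iSx @ Sxy @ iSy)
    A = iSx @ U[:, :k]    # projection matrix for view 1
    B = iSy @ Vt[:k].T    # projection matrix for view 2
    return A, B, s[:k]

# Two synthetic "sensor" views driven by the same 3-dim latent target state.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 3))                              # shared factor
X = z @ rng.normal(size=(3, 8)) + 0.1 * rng.normal(size=(200, 8))
Y = z @ rng.normal(size=(3, 6)) + 0.1 * rng.normal(size=(200, 6))
A, B, corrs = linear_cca(X, Y, k=3)
print(np.round(corrs, 2))  # top correlations are high: strong shared subspace
```

In the deep setting, the negative sum of these canonical correlations serves as the loss term, so the CNN branches learn to produce features that align in the shared hidden space before fusion.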