Improving the Performance of Infrared and Visible Image Fusion Based on Latent Low-Rank Representation Nested With Rolling Guided Image Filtering
Saved in:
Main Authors:
Format: article
Language: EN
Published: IEEE, 2021
Subjects:
Online Access: https://doaj.org/article/62fc5228386c462e8438e072e0ddb046
Summary: The fusion quality of infrared and visible images is very important for subsequent human understanding of image information and target processing. The fusion quality of existing infrared and visible image fusion methods still has room for improvement in terms of image contrast, sharpness, and richness of detailed information. To obtain better fusion performance, an infrared and visible image fusion algorithm based on latent low-rank representation (LatLRR) nested with rolling guided image filtering (RGIF) is proposed; it is a novel solution that integrates two-level decomposition and three-layer fusion. First, the infrared and visible images are decomposed using LatLRR to obtain low-rank sublayers, saliency sublayers, and sparse noise sublayers. Then, RGIF is used to perform further multiscale decomposition of the low-rank sublayers to extract multiple detail layers, which are fused using convolutional neural network (CNN)-based fusion rules to obtain the detail-enhanced layer. Next, an algorithm based on improved visual saliency mapping with weighted guided image filtering (IVSM-GIF) is used to fuse the low-rank sublayers, and an algorithm for adaptive weighting of regional energy features based on Laplacian pyramid decomposition is used to fuse the saliency sublayers. Finally, the fused low-rank sublayer, fused saliency sublayer, and detail-enhanced layer are used to reconstruct the final image. The experimental results show that the proposed method outperforms other state-of-the-art fusion methods in terms of visual quality and objective evaluation, achieving the highest average values on six objective evaluation metrics.
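As a rough illustration of the rolling guided image filtering step described in the abstract, the sketch below implements a rolling guidance filter (Gaussian pre-smoothing followed by iterative joint filtering, here using a guided filter as the joint filter) and uses it for a multiscale base/detail decomposition. All function names, parameters (`sigma_s`, `radius`, `eps`, `iterations`), and the choice of guided filtering as the joint filter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Edge-preserving guided filter built from box (uniform) means."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    # Local linear model: output ~= a * guide + b in each window
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def rolling_guided_filter(img, sigma_s=2.0, iterations=4):
    """Rolling guidance filtering: remove small structures, then
    iteratively recover large-scale edges using the previous result
    as the guidance image."""
    guide = gaussian_filter(img, sigma_s)       # small-structure removal
    for _ in range(iterations):                 # edge recovery (rolling)
        guide = guided_filter(guide, img, radius=int(3 * sigma_s))
    return guide

def rgif_decompose(img, sigmas=(1.0, 2.0, 4.0)):
    """Multiscale decomposition: each detail layer is the difference
    between adjacent smoothing levels; the final base layer plus all
    detail layers reconstructs the input exactly."""
    base, details = img, []
    for s in sigmas:
        smooth = rolling_guided_filter(base, sigma_s=s)
        details.append(base - smooth)
        base = smooth
    return base, details
```

In the paper's pipeline, a decomposition of this kind would be applied to the low-rank sublayers, with the extracted detail layers then fused by the CNN-based rules to form the detail-enhanced layer.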