Cross-Bands Information Transfer to Offset Ambiguities and Atmospheric Phenomena for Multispectral Data Visualization


Saved in:
Bibliographic Details
Main Authors: Iulia Coca Neagoe, Mihai Coca, Corina Vaduva, Mihai Datcu
Format: article
Language: English
Published: IEEE 2021
Subjects:
Online Access: https://doaj.org/article/010b3aa3f8bd419f8f0fd7f52a99321b
Description
Summary: Visualizing multispectral images through band selection entails an information loss that, in extreme cases, proves critical to an adequate understanding of the represented scene. The R–G–B representation obtained by mapping the visible bands to the R, G, and B channels is widely used because it closely resembles natural color and matches what the human eye perceives. However, despite this similarity in color code, ambiguities between classes such as water and vegetation, as well as atmospheric phenomena like fog, clouds, and smoke that other bands penetrate, remain visible and hinder visualization of the Earth's surface. This article presents a set of five methods that offset the effects of ambiguities, fog, light clouds, and smoke by transferring relevant information between bands in order to visually reconstitute the parts of the image affected by atmospheric phenomena. The concept shared by these methods is a stacked autoencoder that encompasses the information from all spectral bands into a latent representation used for visualization. Each proposed method is defined by a different combination of input and error function. Spectral features and polar-coordinate features are the possible inputs, while formulas based on mean squared error or angular spectral distance are the candidate error functions. Because the angular spectral distance and the polar-coordinate transformation yield illuminant-invariant features, they are used in three of the five methods. We evaluate the methods through graphical comparison of spectral signatures and visual comparison with the R–G–B representation, conducting experiments on multiple full Sentinel-2 images.
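The illuminant-invariance property that motivates the angular error functions can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the spectral angle between two per-pixel band vectors is unchanged when one pixel is uniformly brightened, whereas the mean squared error is not.

```python
import numpy as np

def spectral_angle(x, y, eps=1e-12):
    """Angular spectral distance (radians) between two band vectors.
    Scaling either vector by a positive constant leaves it unchanged."""
    cos_sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

def mse(x, y):
    """Mean squared error between two band vectors."""
    return float(np.mean((x - y) ** 2))

# Hypothetical 3-band pixel and a brighter pixel with the same spectral shape.
a = np.array([0.2, 0.4, 0.6])
b = 2.0 * a

angle_same_shape = spectral_angle(a, b)   # near zero: illumination-invariant
mse_same_shape = mse(a, b)                # positive: sensitive to brightness
```

Here `spectral_angle(a, b)` is essentially zero while `mse(a, b)` is not, which is why an angular error can match pixels of the same material under different illumination.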