Remote Sensing Image Scene Classification Based on Global Self-Attention Module
The complexity of scene images makes research on remote-sensing image scene classification challenging. With the wide application of deep learning in recent years, many remote-sensing scene classification methods based on convolutional neural networks (CNNs) have emerged. Current CNNs usually produce global information by passing the deep features extracted by the convolutional layers through a fully connected layer; however, the global information obtained in this way is not comprehensive. To address this problem, this paper proposes an improved remote-sensing image scene classification method based on a global self-attention module, in which the global information is derived from the deep features extracted by the CNN. To better express the semantic information of the remote-sensing image, a multi-head self-attention module is introduced to augment the global information, and a local perception unit is used to improve the self-attention module's ability to represent local objects. The effectiveness of the proposed method is validated through comparative experiments with various training ratios and at different scales on the public datasets UC Merced, AID, and NWPU-RESISC45. Compared with other remote-sensing image scene classification methods, the precision of the proposed model is significantly improved.
Main Authors: | Qingwen Li, Dongmei Yan, Wanrong Wu
---|---
Format: | article
Language: | EN
Published: | MDPI AG, 2021
Published in: | Remote Sensing, Vol 13, Iss 22, p 4542 (2021)
DOI: | 10.3390/rs13224542
ISSN: | 2072-4292
Subjects: | remote-sensing image; scene classification; convolutional neural network (CNN); global self-attention module; Science; Q
Online Access: | https://doaj.org/article/4d717f820e474328b343a656b62df52c ; https://www.mdpi.com/2072-4292/13/22/4542
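
The record contains no implementation details beyond the abstract above, but the architecture it outlines (deep CNN feature maps refined by a local perception unit and then augmented with multi-head self-attention for global context) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch, not the authors' code: the class names, the depthwise-convolution form of the local perception unit, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch design, not the paper's implementation):
# a "local perception unit" modeled as a residual depthwise 3x3 convolution,
# followed by multi-head self-attention over the flattened spatial positions
# of a CNN feature map to capture global information.
import torch
import torch.nn as nn


class LocalPerceptionUnit(nn.Module):
    """Residual depthwise 3x3 convolution (assumed form of the local perception unit)."""

    def __init__(self, channels: int):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size=3,
                                padding=1, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a CNN backbone
        return x + self.dwconv(x)


class GlobalSelfAttentionModule(nn.Module):
    """Multi-head self-attention applied to CNN feature maps as a token sequence."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.lpu = LocalPerceptionUnit(channels)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.lpu(x)                            # strengthen local object responses
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C) spatial tokens
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)  # global self-attention over all positions
        out = out + tokens                          # residual connection
        return out.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    # Example: a backbone feature map with 512 channels and 7x7 spatial resolution
    feats = torch.randn(2, 512, 7, 7)
    module = GlobalSelfAttentionModule(channels=512, num_heads=8)
    print(module(feats).shape)  # torch.Size([2, 512, 7, 7])
```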