Progressive Guided Fusion Network With Multi-Modal and Multi-Scale Attention for RGB-D Salient Object Detection
The depth map contains abundant spatial structure cues, which has led to its wide adoption in saliency detection tasks to improve detection accuracy. Nevertheless, the acquired depth map is often of uneven quality, owing to interference from depth sensors and external environments, which poses a challenge when trying to minimize the disturbance from low-quality depth maps during the fusion process. In this article, to mitigate such issues and highlight the salient objects, we propose a progressive guided fusion network (PGFNet) with multi-modal and multi-scale attention for RGB-D salient object detection. In particular, we first present a multi-modal and multi-scale attention fusion model (MMAFM) to fully mine and exploit the complementarity of features across different scales and modalities for optimal fusion. Then, to strengthen the semantic expressiveness of the shallow-layer features, we design a multi-modal feature refinement mechanism (MFRM), which exploits the high-level fused feature to guide the enhancement of the shallow-layer RGB and depth features before they are fused. Moreover, a residual prediction module (RPM) is applied to further suppress background elements. The entire network adopts a top-down strategy to progressively excavate and integrate valuable information. Experimental results on eight challenging benchmark datasets demonstrate, both qualitatively and quantitatively, the effectiveness of the proposed method compared with state-of-the-art methods.
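The abstract names three architectural pieces (MMAFM, MFRM, RPM) arranged in a top-down decoder. For orientation, here is a minimal PyTorch-style sketch of how such modules could compose. The class names follow the abstract, but every internal design choice (channel attention for MMAFM, upsampled spatial gating for MFRM, a single-channel residual head for RPM) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MMAFM(nn.Module):
    """Multi-modal attention fusion (assumed form): channel attention
    re-weights each modality before the two streams are summed."""
    def __init__(self, channels: int):
        super().__init__()
        def channel_att():
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.Sigmoid(),
            )
        self.att_rgb, self.att_depth = channel_att(), channel_att()

    def forward(self, f_rgb, f_depth):
        return f_rgb * self.att_rgb(f_rgb) + f_depth * self.att_depth(f_depth)


class MFRM(nn.Module):
    """Feature refinement (assumed form): the deeper fused feature is
    upsampled and used as a spatial gate on a shallow feature."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, shallow, high_fused):
        guide = F.interpolate(high_fused, size=shallow.shape[2:],
                              mode="bilinear", align_corners=False)
        return shallow + shallow * self.gate(guide)  # residual gating


class RPM(nn.Module):
    """Residual prediction (assumed form): predict a residual saliency
    map and add it to the coarse map to suppress background."""
    def __init__(self, channels: int):
        super().__init__()
        self.residual = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat, coarse):
        return coarse + self.residual(feat)


if __name__ == "__main__":
    c = 64
    rgb = torch.randn(1, c, 32, 32)    # shallow RGB feature
    depth = torch.randn(1, c, 32, 32)  # shallow depth feature
    high = torch.randn(1, c, 16, 16)   # deeper fused feature (the guide)

    mfrm_rgb, mfrm_depth = MFRM(c), MFRM(c)
    fused = MMAFM(c)(mfrm_rgb(rgb, high), mfrm_depth(depth, high))
    coarse = torch.zeros(1, 1, 32, 32)  # placeholder coarse saliency map
    out = RPM(c)(fused, coarse)
    print(out.shape)  # torch.Size([1, 1, 32, 32])
```

In the paper's top-down scheme this pattern would presumably repeat at each decoder scale, with each level's fused output becoming the guide for the next shallower level.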
Saved in:
Main Authors: Jiajia Wu, Guangliang Han, Haining Wang, Hang Yang, Qingqing Li, Dongxu Liu, Fangjian Ye, Peixun Liu
Format: article
Language: EN
Published: IEEE, 2021
Subjects: RGB-D; salient object detection; multi-modal and multi-scale attention; progressive guided fusion; Electrical engineering. Electronics. Nuclear engineering (TK1-9971)
Online Access: https://doaj.org/article/a36210973e84426688a7859caa3fc9f1
id: oai:doaj.org-article:a36210973e84426688a7859caa3fc9f1
record_format: dspace
record timestamp: 2021-11-18T00:06:39Z
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3126338
Date of publication: 2021-01-01
Full text: https://ieeexplore.ieee.org/document/9606676/
Journal TOC: https://doaj.org/toc/2169-3536
Source: IEEE Access, Vol 9, Pp 150608-150622 (2021)
institution: DOAJ
collection: DOAJ
language: EN
topic: RGB-D; salient object detection; multi-modal and multi-scale attention; progressive guided fusion; Electrical engineering. Electronics. Nuclear engineering (TK1-9971)
format: article
author: Jiajia Wu; Guangliang Han; Haining Wang; Hang Yang; Qingqing Li; Dongxu Liu; Fangjian Ye; Peixun Liu
author_sort: Jiajia Wu
title: Progressive Guided Fusion Network With Multi-Modal and Multi-Scale Attention for RGB-D Salient Object Detection
publisher: IEEE
publishDate: 2021
url: https://doaj.org/article/a36210973e84426688a7859caa3fc9f1
_version_: 1718425242931560448