Progressive Guided Fusion Network With Multi-Modal and Multi-Scale Attention for RGB-D Salient Object Detection
The depth map contains abundant spatial structure cues, which has led to its extensive introduction into saliency detection tasks to improve detection accuracy. Nevertheless, the acquired depth map often has uneven quality, owing to interference from depth sensors and external environments, pos...
Saved in:
| Main Authors: | Jiajia Wu, Guangliang Han, Haining Wang, Hang Yang, Qingqing Li, Dongxu Liu, Fangjian Ye, Peixun Liu |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | IEEE, 2021 |
| Subjects: | |
| Online Access: | https://doaj.org/article/a36210973e84426688a7859caa3fc9f1 |
Similar Items
- Edge-Aware Multi-Level Interactive Network for Salient Object Detection of Strip Steel Surface Defects
  by: Xiaofei Zhou, et al.
  Published: (2021)
- Multi-Modal Deep Learning for Weeds Detection in Wheat Field Based on RGB-D Images
  by: Ke Xu, et al.
  Published: (2021)
- A Novel 2D-3D CNN with Spectral-Spatial Multi-Scale Feature Fusion for Hyperspectral Image Classification
  by: Dongxu Liu, et al.
  Published: (2021)
- GSS-RiskAsser: A Multi-Modal Deep-Learning Framework for Urban Gas Supply System Risk Assessment on Business Users
  by: Xuefei Li, et al.
  Published: (2021)
- Metaknowledge Extraction Based on Multi-Modal Documents
  by: Shu-Kan Liu, et al.
  Published: (2021)