Edge-Aware Multi-Level Interactive Network for Salient Object Detection of Strip Steel Surface Defects
Main Authors: | , , , ,
---|---
Format: | article
Language: | EN
Published: | IEEE, 2021
Subjects: |
Online Access: | https://doaj.org/article/9b824aa28ef74f439a04dcebef8b3ee5
Summary: | The performance of salient object detection for strip steel surface defects has been largely improved by deep-learning-based models. However, due to the complexity of strip surface defects, existing models perform poorly in challenging scenes such as noise disturbance and low contrast between defect regions and the background. Moreover, the detection results of existing models often suffer from coarse boundary details. We therefore propose a novel saliency model, an Edge-aware Multi-level Interactive Network, to detect defects on the strip steel surface. Concretely, our model adopts a U-shaped architecture whose two crucial components are interactive feature integration and edge-guided saliency fusion. First, in addition to the skip connection that links each encoder stage to the decoder stage at the same level, we deploy another connection that transfers features from adjacent encoder levels to that decoder stage. In this way, we obtain an effective fusion of multi-level deep features, yielding a better depiction of the defects. Second, to produce well-defined boundaries in the prediction results, we add an edge-extraction branch after each decoder block, where progressive feature aggregation endows the edge features with precise details and complete object cues. Alongside the edge-extraction branches, we also deploy a saliency-prediction branch at each decoder stage. Then, guided by the fine edge information, we fuse the outputs of all saliency-prediction branches into the final saliency map, where the edge cue steers the saliency result toward the boundary details. In this way, the model produces a high-quality saliency map that accurately locates and segments the defects. Extensive experiments on a public dataset demonstrate the effectiveness and robustness of our model, which consistently outperforms state-of-the-art models.
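The abstract describes the architecture only at a high level. Below is a minimal, illustrative sketch of the two ideas it names: a decoder stage that fuses the previous decoder feature with the same-level and an adjacent-level encoder feature (interactive feature integration) and emits per-stage edge and saliency predictions, followed by a simple edge-guided fusion of the stage-wise saliency maps. This is not the authors' implementation; the module names (`InteractiveDecoderStage`, `edge_guided_fusion`), channel handling, resampling choices, and the fusion rule are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's code): interactive feature
# integration per decoder stage plus edge-guided fusion of stage-wise
# saliency maps. Channel sizes and alignment strategy are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InteractiveDecoderStage(nn.Module):
    """One decoder stage: fuses the previous decoder feature with the
    same-level and adjacent-level encoder features, then emits edge and
    saliency logits for this stage."""

    def __init__(self, ch):
        super().__init__()
        self.reduce_same = nn.Conv2d(ch, ch, 1)  # same-level skip connection
        self.reduce_adj = nn.Conv2d(ch, ch, 1)   # adjacent-level feature (assumed already projected to `ch`)
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )
        self.edge_head = nn.Conv2d(ch, 1, 3, padding=1)  # edge-extraction branch
        self.sal_head = nn.Conv2d(ch, 1, 3, padding=1)   # saliency-prediction branch

    def forward(self, dec_prev, enc_same, enc_adj):
        # Align spatial sizes to the same-level encoder feature.
        size = enc_same.shape[-2:]
        dec_prev = F.interpolate(dec_prev, size=size, mode="bilinear", align_corners=False)
        enc_adj = F.interpolate(enc_adj, size=size, mode="bilinear", align_corners=False)
        x = self.fuse(torch.cat(
            [dec_prev, self.reduce_same(enc_same), self.reduce_adj(enc_adj)], dim=1))
        return x, self.edge_head(x), self.sal_head(x)


def edge_guided_fusion(sal_logits, edge_logits):
    """Average the multi-stage saliency logits and re-weight them by the edge
    cue so the fused map attends to boundary details (a simple stand-in for
    the paper's edge-guided fusion)."""
    size = edge_logits.shape[-2:]
    sal_logits = [F.interpolate(s, size=size, mode="bilinear", align_corners=False)
                  for s in sal_logits]
    fused = torch.stack(sal_logits, dim=0).mean(dim=0)
    return torch.sigmoid(fused) * (1.0 + torch.sigmoid(edge_logits))
```

In a full model, one such stage would presumably be stacked at every level of the U-shaped decoder, with supervision applied to each edge and saliency output before the final fusion.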