Automatic Unsupervised Fabric Defect Detection Based on Self-Feature Comparison

Bibliographic Details
Main Authors: Zhengrui Peng, Xinyi Gong, Bengang Wei, Xiangyi Xu, Shixiong Meng
Format: article
Language: EN
Published: MDPI AG 2021
Subjects:
Online access: https://doaj.org/article/31b7c9e6f72c4afaab16d01da69f689b
Description
Summary: Due to the huge demand for textile production in China, fabric defect detection is particularly attractive. At present, an increasing number of supervised deep-learning methods are being applied to surface defect detection. However, the annotation of datasets in industrial settings often depends on professional inspectors, and methods based on supervised learning require extensive annotation, which is time-consuming and costly. In this paper, an approach based on self-feature comparison (SFC) was employed to accurately locate and segment anomalies in fabric texture images with unsupervised learning. The SFC architecture contained a self-feature reconstruction module and a self-feature distillation module, from which accurate fiber anomaly location and segmentation were generated. Compared with traditional methods that operate in image space, comparison in feature space better locates anomalies on fiber texture surfaces. Evaluations were performed on three publicly available databases. The results indicated that our method performed well compared with other methods and had excellent defect detection ability on the collected textile images. In addition, the visual results showed that our outputs can be used as pixel-level candidate labels.
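The full paper is not reproduced in this record, so the following is only a minimal sketch of the feature-space comparison idea summarized in the abstract: a small network is trained to reconstruct the feature maps of a frozen pretrained backbone on defect-free fabric, and the per-pixel reconstruction error in feature space is upsampled into an anomaly map. The backbone choice (ResNet-18), the `FeatureReconstructor` module, and all hyperparameters are illustrative assumptions, not the authors' SFC implementation, which additionally uses a self-feature distillation module.

```python
# Hedged sketch of feature-space anomaly localization for fabric images.
# Names, architecture, and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class FeatureReconstructor(nn.Module):
    """Small conv network that learns to reconstruct frozen backbone features."""
    def __init__(self, channels=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, feats):
        return self.net(feats)


def extract_features(backbone, x):
    """Intermediate feature map from a frozen ImageNet-pretrained ResNet-18."""
    with torch.no_grad():
        x = backbone.conv1(x)
        x = backbone.bn1(x)
        x = backbone.relu(x)
        x = backbone.maxpool(x)
        x = backbone.layer1(x)
        x = backbone.layer2(x)  # 128-channel feature map
    return x


def anomaly_map(backbone, reconstructor, image):
    """Per-pixel anomaly score = feature-space reconstruction error, upsampled."""
    feats = extract_features(backbone, image)
    err = ((feats - reconstructor(feats)) ** 2).mean(dim=1, keepdim=True)
    return F.interpolate(err, size=image.shape[-2:], mode="bilinear",
                         align_corners=False)


if __name__ == "__main__":
    backbone = resnet18(weights="IMAGENET1K_V1").eval()
    reconstructor = FeatureReconstructor(channels=128)
    opt = torch.optim.Adam(reconstructor.parameters(), lr=1e-3)

    # Training uses defect-free fabric only: minimize feature reconstruction error.
    normal_batch = torch.randn(4, 3, 256, 256)  # placeholder for normal samples
    feats = extract_features(backbone, normal_batch)
    loss = F.mse_loss(reconstructor(feats), feats)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # At test time, high values in the map indicate candidate defect regions.
    heatmap = anomaly_map(backbone, reconstructor, torch.randn(1, 3, 256, 256))
    print(heatmap.shape)  # torch.Size([1, 1, 256, 256])
```

As the abstract argues, comparing in feature space rather than image space tends to suppress nuisance variation in fabric texture while emphasizing structural anomalies, and the resulting heatmap can be thresholded to obtain pixel-level candidate labels.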