An Adaptive Threshold for the Canny Algorithm With Deep Reinforcement Learning



Saved in:
Bibliographic Details
Main Authors: Keong-Hun Choi, Jong-Eun Ha
Format: Article
Language: English
Published: IEEE 2021
Subjects:
Online Access: https://doaj.org/article/9523fb1722094894875e95f4a04e927e
Description
Summary: The Canny algorithm is widely used for edge detection. It requires the adjustment of parameters to obtain a high-quality edge image. Several methods can select them automatically, but they cannot cover the diverse variations in images. The Canny algorithm requires setting three parameters: one is related to the smoothing window size, and the other two are the low and high thresholds. In this paper, we assume that the smoothing window size is fixed to a predefined size. This paper proposes a method that provides adaptive thresholds for the Canny algorithm and operates well on images acquired under various conditions. We select optimal values of the two thresholds adaptively using an algorithm based on the Deep Q-Network (DQN). We introduce a state model, a policy model, and a reward model to formulate the given problem in deep reinforcement learning. The proposed method has the advantage that it can adapt to a new environment using only images without labels, unlike existing supervised approaches. We show the feasibility of the proposed algorithm through diverse experimental results.
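To make concrete what the two thresholds control, below is a minimal NumPy sketch of the double-threshold (hysteresis) step of the Canny pipeline, which the summary's low and high thresholds parameterize. The function name, the iterative linking strategy, and the example values are illustrative assumptions, not taken from the paper; the paper's contribution is choosing `low` and `high` adaptively with a DQN, which is not reproduced here.

```python
import numpy as np

def hysteresis_threshold(mag, low, high):
    """Canny-style double thresholding on a gradient-magnitude map.

    Pixels with magnitude >= high are strong edges. Pixels in
    [low, high) are weak edges, kept only if they connect (via
    8-neighborhood) to a strong edge; the linking pass is iterated
    until no weak pixel is promoted. Pixels below low are suppressed.
    """
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:
        changed = False
        # OR of the 8 neighbors of every pixel (border treated as False).
        padded = np.pad(edges, 1)
        neigh = np.zeros_like(edges)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                neigh |= padded[1 + dy : 1 + dy + edges.shape[0],
                                1 + dx : 1 + dx + edges.shape[1]]
        promote = weak & neigh & ~edges
        if promote.any():
            edges |= promote
            changed = True
    return edges

# Illustrative magnitudes: one strong pixel above a chain of weak ones.
mag = np.array([[0, 120, 0],
                [0,  60, 0],
                [0,  60, 0]])
edges = hysteresis_threshold(mag, low=50, high=100)
# The weak pixels survive because they chain back to the strong pixel.
```

Raising `high` prunes weak responses that have no strong seed, while raising `low` suppresses faint chains outright; this is exactly the trade-off an adaptive threshold selector must navigate per image.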