Actor–Critic Reinforcement Learning and Application in Developing Computer-Vision-Based Interface Tracking
This paper synchronizes control theory with computer vision by formalizing object tracking as a sequential decision-making process. A reinforcement learning (RL) agent successfully tracks an interface between two liquids, which is often a critical variable in many chemical, petrochemical, metallurgical, and oil industries. The method uses fewer than 100 images to create an environment, from which the agent generates its own data without the need for expert knowledge. Unlike supervised learning (SL) methods that rely on a huge number of parameters, this approach requires far fewer parameters, which naturally reduces its maintenance cost. Besides being frugal, the agent is robust to environmental uncertainties such as occlusion, intensity changes, and excessive noise. In a closed-loop control context, an interface location-based deviation is chosen as the optimization goal during training. The methodology showcases RL for real-time object-tracking applications in the oil sands industry. Along with a presentation of the interface tracking problem, this paper provides a detailed review of one of the most effective RL methodologies: the actor–critic policy.
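To make the idea in the abstract concrete, the following is a minimal one-step advantage actor–critic sketch in PyTorch. It is not the authors' implementation: the toy `InterfaceEnv`, the network sizes, and the reward scaling are illustrative assumptions; the only element taken from the abstract is the reward that penalizes the deviation of the estimated interface location from the true one.

```python
# Minimal one-step advantage actor-critic sketch (illustrative; not the paper's code).
# Assumption: a toy 1-D environment where the state is a noisy intensity profile plus
# the current interface estimate, and the reward is the negative location deviation.
import numpy as np
import torch
import torch.nn as nn

class InterfaceEnv:
    """Hypothetical environment: track a liquid-liquid interface in a 1-D profile."""
    def __init__(self, height=64):
        self.height = height

    def reset(self):
        self.true_pos = np.random.randint(10, self.height - 10)
        self.est_pos = self.height // 2
        return self._state()

    def _state(self):
        profile = (np.arange(self.height) > self.true_pos).astype(np.float32)
        profile += 0.1 * np.random.randn(self.height).astype(np.float32)  # sensor noise
        return np.concatenate([profile, [self.est_pos / self.height]]).astype(np.float32)

    def step(self, action):  # action: 0 = move estimate down, 1 = stay, 2 = move up
        self.est_pos = int(np.clip(self.est_pos + (action - 1), 0, self.height - 1))
        reward = -abs(self.est_pos - self.true_pos) / self.height  # deviation-based reward
        return self._state(), reward

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_actions)  # actor head: policy logits
        self.v = nn.Linear(hidden, 1)           # critic head: state value

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.v(h)

env = InterfaceEnv()
net = ActorCritic(obs_dim=env.height + 1, n_actions=3)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
gamma = 0.99

for _ in range(200):                 # episodes
    obs = torch.tensor(env.reset())
    for _step in range(50):          # steps per episode
        logits, value = net(obs)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        next_obs_np, reward = env.step(action.item())
        next_obs = torch.tensor(next_obs_np)
        with torch.no_grad():
            _, next_value = net(next_obs)
        # One-step TD target and advantage
        target = reward + gamma * next_value
        advantage = target - value
        actor_loss = -dist.log_prob(action) * advantage.detach()
        critic_loss = advantage.pow(2)
        opt.zero_grad()
        (actor_loss + critic_loss).backward()
        opt.step()
        obs = next_obs
```

In the actual application the state would presumably be camera frames rather than a synthetic profile; the sketch keeps only the reward structure (penalizing interface-location deviation) that the abstract describes.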
Saved in:
Main authors: Oguzhan Dogru, Kirubakaran Velswamy, Biao Huang
Format: article
Language: EN
Published: Elsevier, 2021
Published in: Engineering, Vol 7, Iss 9, Pp 1248-1261 (2021)
DOI: 10.1016/j.eng.2021.04.027
ISSN: 2095-8099
Subjects: Interface tracking; Object tracking; Occlusion; Reinforcement learning; Uniform manifold approximation and projection; Engineering (General). Civil engineering (General); TA1-2040
Online access: https://doaj.org/article/a624e2d1afb747039710f977d50ab005
Full text: http://www.sciencedirect.com/science/article/pii/S209580992100326X