Self-Correction for Eye-In-Hand Robotic Grasping Using Action Learning
Robotic grasping of cluttered scenes and heterogeneous targets is still not well served by the deep learning methods developed over the last decade. The main problem is that the learned intelligence stagnates: it reaches high accuracy in ordinary environments, but cluttered grasping env...
Saved in:
Main Authors: | Muslikhin, Jenq-Ruey Horng, Szu-Yueh Yang, Ming-Shyan Wang |
---|---|
Format: | article |
Language: | EN |
Published: | IEEE, 2021 |
Subjects: | Action learning; deep learning; eye-in-hand manipulator; k-nearest neighbor; robotic manipulator; robotic grasping |
Online Access: | https://doaj.org/article/749f7dd7ae2046d8a728f4e0c6b8a779 |
id | oai:doaj.org-article:749f7dd7ae2046d8a728f4e0c6b8a779 |
---|---|
record_format | dspace |
spelling | oai:doaj.org-article:749f7dd7ae2046d8a728f4e0c6b8a779 (2021-12-02T00:00:39Z); Self-Correction for Eye-In-Hand Robotic Grasping Using Action Learning; ISSN 2169-3536; DOI 10.1109/ACCESS.2021.3129474; https://doaj.org/article/749f7dd7ae2046d8a728f4e0c6b8a779; published 2021-01-01; https://ieeexplore.ieee.org/document/9622215/; https://doaj.org/toc/2169-3536; Muslikhin; Jenq-Ruey Horng; Szu-Yueh Yang; Ming-Shyan Wang; IEEE; IEEE Access, Vol 9, Pp 156422-156436 (2021) |
institution | DOAJ |
collection | DOAJ |
language | EN |
topic | Action learning; deep learning; eye-in-hand manipulator; k-nearest neighbor; robotic manipulator; robotic grasping; Electrical engineering. Electronics. Nuclear engineering (TK1-9971) |
description | Robotic grasping of cluttered scenes and heterogeneous targets is still not well served by the deep learning methods developed over the last decade. The main problem is that the learned intelligence stagnates: it reaches high accuracy in ordinary environments, but cluttered grasping environments are highly irregular. In this paper, an action learning scheme for robotic grasping with eye-in-hand coordination is developed to grasp a wide range of objects in clutter using a 6 degree-of-freedom (DOF) robotic manipulator equipped with a three-finger gripper. The action learning system combines k-Nearest Neighbor (kNN) classification, a Disparity Map (DM) for depth estimation, and You Only Look Once (YOLO) for object detection. After the learning cycle is formulated, an assessment instrument rates the robot's environment and performance with qualitative weightings. Experiments were conducted on measuring the depth of the target, localization of target variations, target detection, and the gripping process itself. Each action learning cycle is organized into plan, act, observe, and reflect stages; if a cycle does not meet the minimum pass standard, a new cycle is started and repeated until the robot succeeds in picking and placing. The study demonstrates that this action learning-based object manipulation system, with stereo-like vision and eye-in-hand calibration, learns from its previous errors while keeping the remaining errors within an acceptable range. Action learning may therefore be applicable to other object manipulation systems without requiring the environment to be defined in advance. |
format | article |
author | Muslikhin; Jenq-Ruey Horng; Szu-Yueh Yang; Ming-Shyan Wang |
author_sort | Muslikhin |
title | Self-Correction for Eye-In-Hand Robotic Grasping Using Action Learning |
publisher | IEEE |
publishDate | 2021 |
url | https://doaj.org/article/749f7dd7ae2046d8a728f4e0c6b8a779 |
_version_ | 1718403992832180224 |
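
The description above organizes each action learning cycle into plan, act, observe, and reflect stages, renewed until the pick-and-place meets a minimum pass standard. The sketch below illustrates only that control flow; the function bodies, the 0-1 scoring scale, the pass threshold, and the cycle limit are illustrative assumptions, not the authors' implementation (which uses YOLO detection, a disparity map for depth, kNN, and an eye-in-hand 6-DOF arm with a three-finger gripper).

```python
import random
from dataclasses import dataclass
from typing import List

MIN_PASS_SCORE = 0.75   # assumed "minimum pass standard" on a 0-1 scale
MAX_CYCLES = 10         # assumed safety limit; the paper renews the cycle until success


@dataclass
class CycleRecord:
    """Outcome of one action learning cycle, kept for reflection."""
    cycle: int
    score: float


def plan(history: List[CycleRecord]) -> dict:
    # Plan: in the paper, YOLO detects the target and a disparity map gives its
    # depth; here a placeholder grasp pose is nudged using earlier reflections.
    correction = 0.01 * len(history)
    return {"x": 0.30 + correction, "y": 0.10, "z": 0.05}


def act(pose: dict) -> None:
    # Act: command the 6-DOF arm and three-finger gripper (placeholder only).
    print(f"executing grasp at {pose}")


def observe() -> float:
    # Observe: the assessment instrument scores environment and performance
    # with qualitative weightings; simulated here with a random score.
    return random.uniform(0.0, 1.0)


def reflect(history: List[CycleRecord], cycle: int, score: float) -> None:
    # Reflect: record the failed cycle so the next plan can compensate.
    history.append(CycleRecord(cycle=cycle, score=score))


def action_learning_pick_and_place() -> bool:
    """Repeat plan-act-observe-reflect until a cycle passes or the limit is hit."""
    history: List[CycleRecord] = []
    for cycle in range(MAX_CYCLES):
        pose = plan(history)
        act(pose)
        score = observe()
        if score >= MIN_PASS_SCORE:      # minimum pass standard met
            return True
        reflect(history, cycle, score)   # otherwise renew the cycle
    return False


if __name__ == "__main__":
    print("succeeded" if action_learning_pick_and_place() else "gave up")
```

In this sketch the self-correction is reduced to a toy offset derived from past failures; the paper's actual correction comes from re-planning with fresh detections and depth estimates under the qualitative assessment instrument.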