Decentralized Multi-Agent Control of a Manipulator in Continuous Task Learning
Many real-world tasks require multiple agents to work together. In robotics, "multiple agents" usually refers to multiple manipulators collaborating on a given task, each controlled by a single agent. However, given the increasing development of modular and re-configurable robots, it is also important to investigate multi-agent controllers that learn to manage a manipulator's degrees of freedom (DoF) in separate clusters for the execution of a given application (e.g., to cope with faults or, partially, with new kinematic configurations). Within this context, this paper focuses on decentralizing the learning and (re)execution of the robot control action for a generic multi-DoF manipulator. The proposed framework employs a multi-agent paradigm and investigates how such a framework impacts the control action learning process. Multiple variations of the multi-agent framework are proposed and tested, comparing the achieved performance with a centralized (i.e., single-agent) control action learning framework previously proposed by some of the authors. As a case study, a manipulation task (grasping and lifting an object unknown to the robot controller) is considered for validation, employing a Franka EMIKA Panda robot. The MuJoCo environment is employed to implement and test the proposed multi-agent framework. The achieved results show that the proposed decentralized approach accelerates the early learning process with respect to the single-agent framework while also reducing the computational effort: when the controller is decentralized, the variables involved in the action space can be efficiently separated into several groups handled by several agents. This splits the original complex problem into multiple simpler ones, improving the task learning process.
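The core idea of the abstract, partitioning a manipulator's action space into DoF clusters, each driven by its own agent, can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the class name, the cluster sizes, and the toy policies are assumptions chosen for clarity.

```python
# Hedged sketch of decentralized multi-agent control: each agent commands
# one cluster of the manipulator's DoF; the full joint command is the
# composition of the per-agent sub-actions. All names here are illustrative.
import numpy as np

class DecentralizedController:
    """Compose a full joint command from per-agent sub-actions."""

    def __init__(self, dof_clusters):
        # dof_clusters: list of joint-index groups, e.g. [[0,1,2],[3,4],[5,6]]
        self.dof_clusters = dof_clusters

    def act(self, observation, agents):
        # Each agent sees the shared observation and fills in only the
        # action-space variables of its own cluster.
        action = np.zeros(sum(len(c) for c in self.dof_clusters))
        for agent, cluster in zip(agents, self.dof_clusters):
            action[cluster] = agent(observation)
        return action

# Toy stand-in policies: constant sub-actions per cluster (a trained RL
# policy would map the observation to torques/positions instead).
agents = [
    lambda obs: np.full(3, 0.1),  # agent for joints 0-2
    lambda obs: np.full(2, 0.2),  # agent for joints 3-4
    lambda obs: np.full(2, 0.3),  # agent for joints 5-6
]
ctrl = DecentralizedController([[0, 1, 2], [3, 4], [5, 6]])
command = ctrl.act(np.zeros(10), agents)  # 7-DoF command from 3 agents
print(command)
```

Splitting the action space this way is what the paper credits with the observed speed-up: each agent searches a much smaller action space than a single centralized learner would.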
Saved in:
Main authors: | Asad Ali Shahid; Jorge Said Vidal Sesin; Damjan Pecioski; Francesco Braghin; Dario Piga; Loris Roveda |
---|---|
Format: | article |
Language: | EN |
Published: | MDPI AG, 2021 |
Subjects: | reinforcement learning; decentralized control; multi-agent; continuous control; robotic grasping; policy optimization |
Online access: | https://doaj.org/article/f269c8ad80704f40ba131479cbc80f80 |
id |
oai:doaj.org-article:f269c8ad80704f40ba131479cbc80f80 |
---|---|
record_format |
dspace |
doi |
10.3390/app112110227 |
issn |
2076-3417 |
publish_date |
2021-11-01 |
fulltext_url |
https://www.mdpi.com/2076-3417/11/21/10227 |
journal_toc |
https://doaj.org/toc/2076-3417 |
source |
Applied Sciences, Vol 11, Iss 10227, p 10227 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
reinforcement learning; decentralized control; multi-agent; continuous control; robotic grasping; policy optimization; Technology (T); Engineering (General). Civil engineering (General) (TA1-2040); Biology (General) (QH301-705.5); Physics (QC1-999); Chemistry (QD1-999) |
description |
Many real-world tasks require multiple agents to work together. In robotics, "multiple agents" usually refers to multiple manipulators collaborating on a given task, each controlled by a single agent. However, given the increasing development of modular and re-configurable robots, it is also important to investigate multi-agent controllers that learn to manage a manipulator's degrees of freedom (DoF) in separate clusters for the execution of a given application (e.g., to cope with faults or, partially, with new kinematic configurations). Within this context, this paper focuses on decentralizing the learning and (re)execution of the robot control action for a generic multi-DoF manipulator. The proposed framework employs a multi-agent paradigm and investigates how such a framework impacts the control action learning process. Multiple variations of the multi-agent framework are proposed and tested, comparing the achieved performance with a centralized (i.e., single-agent) control action learning framework previously proposed by some of the authors. As a case study, a manipulation task (grasping and lifting an object unknown to the robot controller) is considered for validation, employing a Franka EMIKA Panda robot. The MuJoCo environment is employed to implement and test the proposed multi-agent framework. The achieved results show that the proposed decentralized approach accelerates the early learning process with respect to the single-agent framework while also reducing the computational effort: when the controller is decentralized, the variables involved in the action space can be efficiently separated into several groups handled by several agents. This splits the original complex problem into multiple simpler ones, improving the task learning process. |
format |
article |
author |
Asad Ali Shahid; Jorge Said Vidal Sesin; Damjan Pecioski; Francesco Braghin; Dario Piga; Loris Roveda |
author_facet |
Asad Ali Shahid; Jorge Said Vidal Sesin; Damjan Pecioski; Francesco Braghin; Dario Piga; Loris Roveda |
author_sort |
Asad Ali Shahid |
title |
Decentralized Multi-Agent Control of a Manipulator in Continuous Task Learning |
title_short |
Decentralized Multi-Agent Control of a Manipulator in Continuous Task Learning |
title_full |
Decentralized Multi-Agent Control of a Manipulator in Continuous Task Learning |
title_fullStr |
Decentralized Multi-Agent Control of a Manipulator in Continuous Task Learning |
title_full_unstemmed |
Decentralized Multi-Agent Control of a Manipulator in Continuous Task Learning |
title_sort |
decentralized multi-agent control of a manipulator in continuous task learning |
publisher |
MDPI AG |
publishDate |
2021 |
url |
https://doaj.org/article/f269c8ad80704f40ba131479cbc80f80 |