Repot: Transferable Reinforcement Learning for Quality-Centric Networked Monitoring in Various Environments

Collecting and monitoring data with low latency from numerous sensing devices is one of the key foundations of networked cyber-physical applications such as industrial process control, intelligent traffic control, and networked robots. Since delay in data updates can degrade the quality of networked monitoring, it is desirable to continuously maintain the optimal setting on sensing devices in terms of transmission rates and bandwidth allocation, taking into account application requirements as well as time-varying conditions of the underlying network environments. In this paper, we adapt deep reinforcement learning (RL) to achieve a bandwidth allocation policy for networked monitoring. We present a transferable RL model, Repot, in which a policy trained in an easy-to-learn network environment can be readily adjusted to various target network environments. Specifically, we employ flow embedding and action shaping schemes in Repot that enable the systematic adaptation of a bandwidth allocation policy to the conditions of a target environment. Through experiments with the NS-3 network simulator, we show that Repot achieves stable and high monitoring performance across different network conditions, e.g., outperforming other heuristic and learning-based solutions by 14.5~20.8% in quality-of-experience (QoE) for a target network environment. We also demonstrate sample-efficient adaptation in Repot, which requires only 6.25% of the samples needed to train the model from scratch. Finally, we present a case study with the SUMO mobility simulator and verify the benefits of Repot in practical scenarios, with performance gains over the alternatives of 6.5% at urban scale and 12.6% at suburb scale.
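The abstract names two transfer mechanisms, flow embedding and action shaping, without detailing their form. The sketch below is one plausible reading in PyTorch, not the authors' implementation: per-flow statistics are encoded into a latent space shared across environments so the policy is agnostic to the number of flows, and the policy's raw allocation shares are then shaped onto the capacity and minimum-rate constraints of a target network. The class and function names (FlowEmbedding, BandwidthPolicy, shape_action), the choice of per-flow features, and all dimensions are illustrative assumptions.

```python
# Illustrative sketch of flow embedding + action shaping for bandwidth
# allocation. Not the paper's code; architecture and features are assumed.
import torch
import torch.nn as nn


class FlowEmbedding(nn.Module):
    """Encode per-flow statistics (e.g., rate, delay, loss, queue length)
    into a latent space shared across environments (assumed design)."""

    def __init__(self, flow_feat_dim: int = 4, embed_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(flow_feat_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, flow_stats: torch.Tensor) -> torch.Tensor:
        # flow_stats: (num_flows, flow_feat_dim) -> (num_flows, embed_dim)
        return self.encoder(flow_stats)


class BandwidthPolicy(nn.Module):
    """Actor producing one allocation share per flow; mean-pooling the
    embeddings keeps the policy independent of the number of flows."""

    def __init__(self, flow_feat_dim: int = 4, embed_dim: int = 32,
                 hidden: int = 64):
        super().__init__()
        self.embed = FlowEmbedding(flow_feat_dim, embed_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, flow_stats: torch.Tensor) -> torch.Tensor:
        per_flow = self.embed(flow_stats)                        # (F, E)
        ctx = per_flow.mean(dim=0, keepdim=True).expand_as(per_flow)
        logits = self.head(torch.cat([per_flow, ctx], dim=-1)).squeeze(-1)
        return torch.softmax(logits, dim=0)                      # shares sum to 1


def shape_action(shares: torch.Tensor, link_capacity_mbps: float,
                 min_rate_mbps: float = 0.1) -> torch.Tensor:
    """Map raw policy output onto a target environment's feasible rates:
    scale shares to the target capacity, enforce a per-flow floor, and
    renormalize to the capacity (the floor may be slightly relaxed by the
    renormalization). A stand-in for the paper's action shaping."""
    rates = (shares * link_capacity_mbps).clamp(min=min_rate_mbps)
    return rates * (link_capacity_mbps / rates.sum())


# Example: 8 flows, 4 statistics each, on a hypothetical 10 Mbps target link.
policy = BandwidthPolicy()
rates = shape_action(policy(torch.rand(8, 4)), link_capacity_mbps=10.0)
```

Under this reading, transferring to a new environment would reuse the trained weights and fine-tune on a small sample budget in the target network, which is consistent with the paper's report that adaptation needs only 6.25% of the samples required for training from scratch.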

Bibliographic Details
Main Authors: Youngseok Lee, Woo Kyung Kim, Sung Hyun Choi, Ikjun Yeom, Honguk Woo
Format: Article
Language: English
Published: IEEE, 2021
Published in: IEEE Access, Vol. 9, pp. 147280-147294 (2021)
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3125008
Subjects: Networked monitoring systems; bandwidth allocation; transferable reinforcement learning; domain adaptation; policy transfer; flow embedding; Electrical engineering. Electronics. Nuclear engineering (LCC: TK1-9971)
Online Access: https://doaj.org/article/5896d7e1f2a54ae19ade84d6d7ca88b2
https://ieeexplore.ieee.org/document/9599665/

Record ID: oai:doaj.org-article:5896d7e1f2a54ae19ade84d6d7ca88b2
Record Format: DSpace
Institution / Collection: DOAJ
Record Updated: 2021-11-18