Repot: Transferable Reinforcement Learning for Quality-Centric Networked Monitoring in Various Environments

Bibliographic Details
Main Authors: Youngseok Lee, Woo Kyung Kim, Sung Hyun Choi, Ikjun Yeom, Honguk Woo
Format: Article
Language: EN
Published: IEEE 2021
Subjects:
Online Access: https://doaj.org/article/5896d7e1f2a54ae19ade84d6d7ca88b2
Description
Summary: Collecting and monitoring data at low latency from numerous sensing devices is one of the key foundations of networked cyber-physical applications such as industrial process control, intelligent traffic control, and networked robots. As delayed data updates can degrade the quality of networked monitoring, it is desirable to continuously maintain the optimal setting on sensing devices in terms of transmission rates and bandwidth allocation, taking into account application requirements as well as time-varying conditions of the underlying network environment. In this paper, we adapt deep reinforcement learning (RL) to achieve a bandwidth allocation policy for networked monitoring. We present a transferable RL model, Repot, in which a policy trained in an easy-to-learn network environment can be readily adjusted to various target network environments. Specifically, we employ flow embedding and action shaping schemes in Repot that enable the systematic adaptation of a bandwidth allocation policy to the conditions of a target environment. Through experiments with the NS-3 network simulator, we show that Repot achieves stable and high monitoring performance across different network conditions, e.g., outperforming other heuristic and learning-based solutions by 14.5~20.8% in quality of experience (QoE) for a target network environment. We also demonstrate sample-efficient adaptation in Repot, which uses only 6.25% of the samples required for model training from scratch. Finally, we present a case study with the SUMO mobility simulator and verify the benefits of Repot in practical scenarios, showing performance gains over the alternatives, e.g., 6.5% in an urban-scale scenario and 12.6% in a suburb-scale scenario.
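
To make the abstract's two ingredients concrete, the minimal sketch below shows one plausible way a flow-embedding policy with action shaping could be organized. It is not the architecture from the paper: the per-flow features, layer sizes, and the softmax-based shaping rule are assumptions made purely for illustration.

```python
# Illustrative sketch only; all feature names, dimensions, and the shaping
# rule are assumptions, not the architecture published in the Repot paper.
import torch
import torch.nn as nn


class FlowEmbeddingPolicy(nn.Module):
    """Toy bandwidth-allocation policy: per-flow features are mapped into a
    shared embedding space (so the policy does not depend on flow identity),
    and raw scores are 'shaped' into rates that respect the link capacity."""

    def __init__(self, feat_dim: int = 4, emb_dim: int = 32):
        super().__init__()
        # Flow embedding: one small MLP applied to every flow's feature vector.
        self.embed = nn.Sequential(
            nn.Linear(feat_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
        )
        # Per-flow score head; scores are turned into a bandwidth split below.
        self.score = nn.Linear(emb_dim, 1)

    def forward(self, flow_feats: torch.Tensor, capacity: float) -> torch.Tensor:
        # flow_feats: (num_flows, feat_dim), e.g. hypothetical features such as
        # queueing delay, loss rate, current rate, and staleness of last update.
        scores = self.score(self.embed(flow_feats)).squeeze(-1)  # (num_flows,)
        # Action shaping: a softmax forces a valid split of the available
        # bandwidth, so a pretrained policy remains feasible even when the
        # target environment's capacity differs from the source environment's.
        return torch.softmax(scores, dim=0) * capacity           # per-flow rates


if __name__ == "__main__":
    policy = FlowEmbeddingPolicy()
    feats = torch.rand(5, 4)               # 5 monitored flows, 4 features each
    rates = policy(feats, capacity=100.0)  # bandwidth budget of the target net
    print(rates, rates.sum())              # rates always sum to the budget
```

In this reading, the embedding makes the policy reusable across environments with different flow populations, while the shaping step keeps every action within the target network's feasible region, which is what allows a policy trained elsewhere to be fine-tuned with relatively few samples.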