Coexistence Scheme for Uncoordinated LTE and WiFi Networks Using Experience Replay Based Q-Learning


Bibliographic Details
Main Authors: Merkebu Girmay, Vasilis Maglogiannis, Dries Naudts, Adnan Shahid, Ingrid Moerman
Format: Article
Language: EN
Published: MDPI AG 2021
Online Access: https://doaj.org/article/9b3e9b5c0c604808bbe6664f4b94e30d
Summary: Broadband applications that use the licensed spectrum of cellular networks are growing fast. For this reason, Long-Term Evolution-Unlicensed (LTE-U) technology is expected to offload part of its traffic to the unlicensed spectrum. However, LTE-U transmissions have to coexist with existing WiFi networks. Most existing coexistence schemes consider coordinated LTE-U and WiFi networks, where a central coordinator communicates the traffic demand of the co-located networks. However, such a method of WiFi traffic estimation increases the complexity, traffic overhead, and reaction time of the coexistence schemes. In this article, we propose Experience Replay (ER) and Reward-selective Experience Replay (RER) based Q-learning techniques as a solution for the coexistence of uncoordinated LTE-U and WiFi networks. In the proposed schemes, the LTE-U network deploys a WiFi saturation sensing model to estimate the traffic demand of co-located WiFi networks. We also compare the performance of the proposed schemes against rule-based and Q-learning based coexistence schemes implemented in uncoordinated LTE-U and WiFi networks. The simulation results show that the RER Q-learning scheme converges faster than the ER Q-learning scheme. The RER Q-learning scheme also achieves 19.1% and 5.2% improvements in aggregated throughput and 16.4% and 10.9% improvements in fairness compared to the rule-based and Q-learning coexistence schemes, respectively.
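
For readers unfamiliar with the techniques named in the abstract, the Python sketch below illustrates tabular Q-learning with an experience-replay buffer, plus a reward-selective variant that replays only sufficiently rewarding transitions. It is a minimal illustration under stated assumptions: the class name, the generic state/action/reward abstractions, and the threshold-based selection rule are placeholders for exposition and do not reproduce the paper's actual LTE-U/WiFi environment, reward design, or coexistence scheme.

import random
from collections import defaultdict, deque

class ReplayQLearner:
    """Tabular Q-learning with (reward-selective) experience replay. Illustrative only."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1,
                 buffer_size=1000, batch_size=32,
                 reward_selective=False, reward_threshold=0.0):
        self.q = defaultdict(float)            # Q-table keyed by (state, action)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.buffer = deque(maxlen=buffer_size)
        self.batch_size = batch_size
        self.reward_selective = reward_selective   # RER: keep only rewarding transitions
        self.reward_threshold = reward_threshold   # hypothetical selection threshold

    def select_action(self, state):
        # Epsilon-greedy action selection over the Q-table.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def store(self, state, action, reward, next_state):
        # The RER variant stores only transitions whose reward exceeds a threshold,
        # biasing replayed batches toward useful experience.
        if self.reward_selective and reward < self.reward_threshold:
            return
        self.buffer.append((state, action, reward, next_state))

    def replay(self):
        # Sample past transitions and apply the standard Q-learning update to each.
        if len(self.buffer) < self.batch_size:
            return
        for s, a, r, s_next in random.sample(list(self.buffer), self.batch_size):
            best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
            td_target = r + self.gamma * best_next
            self.q[(s, a)] += self.alpha * (td_target - self.q[(s, a)])

In such a setup, the agent would observe a state (e.g., an estimate of WiFi traffic demand), pick an unlicensed-channel access action, store the resulting transition, and periodically call replay() to update the Q-table from buffered experience; the reward-selective flag switches between the ER and RER behaviors sketched here.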