Lightweight Neural Network-Based Viewport Prediction for Live VR Streaming in Wireless Video Sensor Network

Live virtual reality (VR) streaming (a.k.a. 360-degree video streaming) has become increasingly popular because of the rapid growth of head-mounted displays and 5G network deployment. However, the huge bandwidth and energy required to deliver live VR frames in a wireless video sensor network (WVSN) become bottlenecks, hindering wider deployment of the application. To address the bandwidth and energy challenges, VR video viewport prediction has been proposed as a feasible solution. However, existing works focus mainly on bandwidth usage and prediction accuracy and ignore the resource consumption of the server. In this study, we propose a lightweight neural network-based viewport prediction method for live VR streaming in WVSN that overcomes these problems. In particular, we (1) use a compressed channel lightweight network (C-GhostNet) to reduce the number of parameters of the whole model and (2) use an improved gated recurrent unit module (GRU-ECA) together with C-GhostNet to process the video data and head-movement data separately to improve prediction accuracy. To evaluate the performance of our method, we conducted extensive experiments using an open VR user dataset. The experimental results demonstrate that our method achieves significant server resource savings, real-time performance, and high prediction accuracy, while maintaining low bandwidth usage and low energy consumption in WVSN, thus meeting the requirements of live VR streaming.
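
The abstract names two building blocks: a Ghost-style convolutional branch (C-GhostNet) for video frames and a GRU with efficient channel attention (GRU-ECA) for head-movement traces. Below is a minimal PyTorch sketch of such a two-branch predictor; the layer widths, the fusion by concatenation, and the (yaw, pitch) output head are illustrative assumptions, not the paper's exact C-GhostNet/GRU-ECA configuration.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module: a small primary conv plus cheap depthwise convs that
    generate the remaining ('ghost') feature maps at low parameter cost."""
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        primary_ch = out_ch // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, out_ch - primary_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(out_ch - primary_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class ECA(nn.Module):
    """Efficient channel attention: per-channel weights from a 1-D conv over
    pooled features; adapted here to weight GRU hidden channels."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)

    def forward(self, x):                    # x: (B, T, C) GRU outputs
        w = x.mean(dim=1, keepdim=True)      # global average over time steps
        w = torch.sigmoid(self.conv(w))      # (B, 1, C) channel weights
        return x * w

class ViewportPredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.frame_branch = nn.Sequential(   # lightweight video branch
            GhostModule(3, 16), nn.MaxPool2d(4),
            GhostModule(16, 32), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gru = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.eca = ECA()
        self.head = nn.Linear(32 + hidden, 2)  # predicted (yaw, pitch)

    def forward(self, frame, motion):
        f = self.frame_branch(frame)         # (B, 32) frame features
        h, _ = self.gru(motion)              # (B, T, hidden) motion features
        h = self.eca(h)[:, -1]               # attended final time step
        return self.head(torch.cat([f, h], dim=1))

# Example: one 128x128 frame plus a 30-step head-orientation trace.
model = ViewportPredictor()
pred = model(torch.randn(2, 3, 128, 128), torch.randn(2, 30, 3))
print(pred.shape)  # torch.Size([2, 2])
```

The Ghost module keeps the video branch lightweight by producing roughly half of each layer's output channels with cheap depthwise convolutions instead of full convolutions, which is the kind of parameter reduction the abstract attributes to C-GhostNet.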


Bibliographic Details
Main Authors: Xiaolei Chen, Baoning Cao, Ishfaq Ahmad
Format: Article
Language: English
Published: Hindawi Limited, 2021
Journal: Mobile Information Systems, Vol 2021 (2021)
ISSN: 1875-905X
DOI: 10.1155/2021/8501990
Subjects: Telecommunication (LCC: TK5101-6720)
Online Access: https://doaj.org/article/4b2c216a1f1e44b9b7d211625b8ba262
http://dx.doi.org/10.1155/2021/8501990