TY - JOUR
T1 - SDSR: optimizing metaverse video streaming via saliency-driven dynamic super-resolution
T2 - IEEE Journal on Selected Areas in Communications
AU - Chai, Baili
AU - Chen, Jinyu
AU - Luo, Zhenxiao
AU - Wang, Zelong
AU - Hu, Miao
AU - Zhou, Yipeng
AU - Wu, Di
PY - 2024/4
Y1 - 2024/4
N2 - Metaverse (especially 360-degree) video streaming allows virtual events
in the metaverse to be broadcast to a broad audience. To reduce the huge
bandwidth consumption, quite a few super-resolution (SR)-enhanced
360-degree video streaming systems have been proposed. However, there is
very limited work investigating how the granularity of an SR model
affects system performance, and how to choose a proper SR model for
different video contents under diverse environmental conditions. In this
paper, we first conduct a dedicated measurement study to unveil the
impact of different granularities of SR models. We find that the scene
of a video largely determines the effectiveness of SR models at
different granularities. Based on our observations, we propose a novel
360-degree video streaming framework with saliency-driven dynamic
super-resolution, called SDSR. To maximize user QoE, we formally
formulate an optimization problem and adopt model predictive control
(MPC) theory for bitrate adaptation and SR model selection. To improve
the effectiveness of the SR model, we leverage saliency information,
which well reflects users’ viewing interests, for model training. In
addition, we reuse an SR model for similar chunks based on the temporal
redundancy of a video. Finally, we conduct extensive experiments on real
traces, and the results show that SDSR outperforms state-of-the-art
algorithms with an improvement of up to 32.78% in average QoE.
AB - Metaverse (especially 360-degree) video streaming allows virtual events
in the metaverse to be broadcast to a broad audience. To reduce the huge
bandwidth consumption, quite a few super-resolution (SR)-enhanced
360-degree video streaming systems have been proposed. However, there is
very limited work investigating how the granularity of an SR model
affects system performance, and how to choose a proper SR model for
different video contents under diverse environmental conditions. In this
paper, we first conduct a dedicated measurement study to unveil the
impact of different granularities of SR models. We find that the scene
of a video largely determines the effectiveness of SR models at
different granularities. Based on our observations, we propose a novel
360-degree video streaming framework with saliency-driven dynamic
super-resolution, called SDSR. To maximize user QoE, we formally
formulate an optimization problem and adopt model predictive control
(MPC) theory for bitrate adaptation and SR model selection. To improve
the effectiveness of the SR model, we leverage saliency information,
which well reflects users’ viewing interests, for model training. In
addition, we reuse an SR model for similar chunks based on the temporal
redundancy of a video. Finally, we conduct extensive experiments on real
traces, and the results show that SDSR outperforms state-of-the-art
algorithms with an improvement of up to 32.78% in average QoE.
UR - http://www.scopus.com/inward/record.url?scp=85182385609&partnerID=8YFLogxK
U2 - 10.1109/JSAC.2023.3345418
DO - 10.1109/JSAC.2023.3345418
M3 - Article
AN - SCOPUS:85182385609
SN - 0733-8716
VL - 42
SP - 978
EP - 989
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
IS - 4
ER -