VR viewport pose model for quantifying and exploiting frame correlations

Ying Chen, Hojung Kwon, Hazer Inaltekin, Maria Gorlatova

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

2 Citations (Scopus)


The importance of the dynamics of the viewport pose, i.e., the location and orientation of users' points of view, for virtual reality (VR) experiences calls for the development of VR viewport pose models. In this paper, informed by our experimental measurements of viewport trajectories across three different types of VR interfaces, we first develop a statistical model of viewport poses in VR environments. Based on the developed model, we examine the correlations between pixels in VR frames that correspond to different viewport poses, and obtain an analytical expression for the visibility similarity (ViS) of the pixels across different VR frames. We then propose a lightweight ViS-based algorithm, ALG-ViS, that adaptively splits VR frames into background and foreground, reusing the background across different frames. Our implementation of ALG-ViS in two Oculus Quest 2 rendering systems demonstrates that ALG-ViS runs in real time, supports the full VR frame rate, and outperforms baselines on measures of frame quality and bandwidth consumption.
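To illustrate the idea of reusing the background between frames whose viewports largely overlap, the sketch below computes a toy visibility-similarity score between two viewport orientations: the fraction of ray directions inside one pose's horizontal field of view that also fall inside the other's. This is a simplified stand-in for intuition only; the paper's analytical ViS expression, the `fov` and `threshold` values, and the function names here are all illustrative assumptions, not the authors' implementation.

```python
def vis_similarity(yaw_a, yaw_b, fov=90.0, n_samples=180):
    """Toy visibility-similarity score between two viewport yaw angles
    (degrees): the fraction of ray directions sampled across pose A's
    horizontal field of view that also lie within pose B's field of view.
    Illustrative only, not the paper's analytical ViS expression."""
    half = fov / 2.0
    inside = 0
    for i in range(n_samples):
        # Sample directions uniformly across pose A's field of view.
        d = yaw_a - half + fov * i / (n_samples - 1)
        # Wrap the angular difference to pose B into [-180, 180).
        diff = (d - yaw_b + 180.0) % 360.0 - 180.0
        if abs(diff) <= half:
            inside += 1
    return inside / n_samples

def reuse_background(yaw_a, yaw_b, threshold=0.8):
    """Reuse the previously rendered background only when the two
    viewports are similar enough (hypothetical threshold)."""
    return vis_similarity(yaw_a, yaw_b) >= threshold
```

Under this toy model, identical poses give a similarity of 1.0 and opposite-facing poses give 0.0, so the background would be reused for small head rotations and re-rendered after large ones.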

Original language: English
Title of host publication: IEEE INFOCOM 2022 - IEEE Conference on Computer Communications
Place of Publication: Piscataway, NJ
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 10
ISBN (Electronic): 9781665458221
ISBN (Print): 9781665458238
Publication status: Published - 2022
Event: 41st IEEE Conference on Computer Communications, INFOCOM 2022 - Virtual, London, United Kingdom
Duration: 2 May 2022 - 5 May 2022

Publication series

ISSN (Print): 0743-166X
ISSN (Electronic): 2641-9874


Conference: 41st IEEE Conference on Computer Communications, INFOCOM 2022
Country/Territory: United Kingdom
City: Virtual, London


