TY - GEN
T1 - GazeFed
T2 - 32nd IEEE/ACM International Symposium on Quality of Service, IWQoS 2024
AU - Wu, Jiang
AU - Liu, Xuezheng
AU - Hu, Miao
AU - Lin, Hongxu
AU - Chen, Min
AU - Zhou, Yipeng
AU - Wu, Di
PY - 2024
Y1 - 2024
N2 - Gaze prediction is essential for enhancing user experiences in virtual reality (VR) applications. However, existing methods seldom consider the privacy-sensitive nature of gaze data, which may reveal both psychological and physiological characteristics of VR users. Moreover, the commonly adopted one-size-fits-all prediction model cannot adequately capture the behavioral patterns of different VR users. In this paper, we propose a privacy-aware personalized gaze prediction framework called GazeFed, which can train a personalized gaze prediction model for each user in a collaborative manner. In GazeFed, only intermediate computations are exchanged between users and the server; the raw gaze data samples are preserved locally to protect user privacy. The global model is shared among all users and can be further trained with local gaze data to generate a personalized prediction model for each individual user. We also propose a deep neural network tailored for VR gaze prediction called GazeNet, which can effectively extract features from VR content, gaze data, and other user behaviors, thereby improving the accuracy of gaze prediction. Moreover, the technique of differential privacy (DP) is integrated to provide stronger privacy protection, and we theoretically prove that GazeFed converges while satisfying the differential privacy requirement. Finally, we conduct extensive experiments to evaluate the effectiveness of GazeFed on real datasets and in various VR scenarios. The experimental results demonstrate that GazeFed outperforms state-of-the-art approaches.
AB - Gaze prediction is essential for enhancing user experiences in virtual reality (VR) applications. However, existing methods seldom consider the privacy-sensitive nature of gaze data, which may reveal both psychological and physiological characteristics of VR users. Moreover, the commonly adopted one-size-fits-all prediction model cannot adequately capture the behavioral patterns of different VR users. In this paper, we propose a privacy-aware personalized gaze prediction framework called GazeFed, which can train a personalized gaze prediction model for each user in a collaborative manner. In GazeFed, only intermediate computations are exchanged between users and the server; the raw gaze data samples are preserved locally to protect user privacy. The global model is shared among all users and can be further trained with local gaze data to generate a personalized prediction model for each individual user. We also propose a deep neural network tailored for VR gaze prediction called GazeNet, which can effectively extract features from VR content, gaze data, and other user behaviors, thereby improving the accuracy of gaze prediction. Moreover, the technique of differential privacy (DP) is integrated to provide stronger privacy protection, and we theoretically prove that GazeFed converges while satisfying the differential privacy requirement. Finally, we conduct extensive experiments to evaluate the effectiveness of GazeFed on real datasets and in various VR scenarios. The experimental results demonstrate that GazeFed outperforms state-of-the-art approaches.
UR - http://www.scopus.com/inward/record.url?scp=85206365040&partnerID=8YFLogxK
U2 - 10.1109/IWQoS61813.2024.10682864
DO - 10.1109/IWQoS61813.2024.10682864
M3 - Conference proceeding contribution
AN - SCOPUS:85206365040
SN - 9798350350135
BT - 2024 IEEE/ACM 32nd International Symposium on Quality of Service (IWQoS)
PB - Institute of Electrical and Electronics Engineers (IEEE)
CY - Piscataway, NJ
Y2 - 19 June 2024 through 21 June 2024
ER -