In 360-degree video streaming, users watch a video scene within a Field of View (FoV). This observation offers an opportunity to save bandwidth by predicting and then prefetching only the video tiles within the FoV. However, existing FoV prediction methods seldom consider the diversity of user behaviors and the impact of different video genres. As a result, previous one-size-fits-all models cannot make accurate predictions for users with different behavior patterns. In this paper, we propose a user-aware viewport prediction algorithm called Sparkle, a practical white-box approach to FoV prediction. Instead of training a single learning model to predict the behaviors of all users, our algorithm is tailored to each individual user. In particular, unlike other learning models, our prediction model is fully explainable and all of its parameters have physical meanings. We first conduct a measurement study to analyze real user behaviors and observe that view orientation fluctuates sharply and that user posture has a significant impact on viewport movement. Moreover, cross-user similarity varies across video genres. Guided by these insights, we design a user-aware viewport prediction algorithm that mimics a user's viewport movement on the tile map and determines how a user will change the viewport angle based on their own trajectory and the behaviors of similar users in the past time window. Extensive evaluations on real datasets demonstrate that our algorithm outperforms state-of-the-art benchmark methods (e.g., LSTM-based methods) by over 5%, and that its prediction accuracy is much more stable across various types of 360-degree videos than that of previous methods.
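To make the idea concrete, the following is a minimal sketch of a user-aware prediction step that blends a user's own recent trajectory with the motion of similar users. The function name, the linear extrapolation, and the blending weight `alpha` are all illustrative assumptions for exposition; they are not the actual Sparkle algorithm described in the paper.

```python
def predict_viewport(own_history, peer_histories, alpha=0.7):
    """Predict the user's next yaw angle (degrees), illustrative only.

    own_history    -- the user's recent yaw angles, oldest first.
    peer_histories -- yaw trajectories of similar users over the same window.
    alpha          -- hypothetical weight on the user's own motion vs. peers.
    """
    # Extrapolate the user's own motion: last angle plus its recent velocity.
    own_pred = own_history[-1] + (own_history[-1] - own_history[-2])
    # Average the most recent angle change made by similar users.
    deltas = [p[-1] - p[-2] for p in peer_histories]
    peer_pred = own_history[-1] + sum(deltas) / len(deltas)
    # Blend the two estimates and wrap the result into [0, 360).
    return (alpha * own_pred + (1 - alpha) * peer_pred) % 360.0
```

For example, a user turning steadily rightward while similar users turn slightly faster yields a prediction between the two extrapolations; because every quantity has a physical meaning (angles, velocities, a similarity weight), such a model stays explainable in the white-box spirit the abstract describes.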