[1] I. Agtzidis, M. Startsev, and M. Dorr. 360-degree video gaze behaviour: A ground-truth data set and a classification algorithm for eye movements. In Proc. of ACM International Conference on Multimedia (MM), pages 1007–1015, Nice, France, October 2019.
[2] A. Ajit, N. Banerjee, and S. Banerjee. Combining pairwise feature matches from device trajectories for biometric authentication in virtual reality environments. In Proc. of IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), pages 9–97, San Diego, USA, December 2019.
[3] alvr-org. Stream VR games from your PC to your headset via Wi-Fi, 2023. https://github.com/alvr-org/ALVR.
[4] M. Andrés, N. Bordenabe, K. Chatzikokolakis, and C. Palamidessi. Geo-indistinguishability: Differential privacy for location-based systems. In Proc. of ACM SIGSAC Conference on Computer & Communications Security (CCS), pages 901–914, Berlin, Germany, November 2013.
[5] C. Anthes, R. Garcia, M. Wiedemann, and D. Kranzlmüller. State of the art of virtual reality technology. In Proc. of IEEE Aerospace Conference, pages 1–19, Big Sky, USA, March 2016.
[6] AppliedVR. Take control of your chronic lower back pain, 2023. https://www.relievrx.com/.
[7] K. Arulkumaran, M. Deisenroth, M. Brundage, and A. Bharath. Deep reinforcement learning: A brief survey. IEEE Signal Processing Magazine, 34(6):26–38, 2017.
[8] J. Barron, B. Mildenhall, M. Tancik, P. Hedman, R. Martin-Brualla, and P. Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In Proc. of IEEE/CVF International Conference on Computer Vision (ICCV), pages 5855–5864, Montreal, Canada, October 2021.
[9] J. Barron, B. Mildenhall, D. Verbin, P. Srinivasan, and P. Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proc. of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5470–5479, New Orleans, USA, June 2022.
[10] B. Bastani, E. Turner, C. Vieri, H. Jiang, B. Funt, and N.
Balram. Foveated pipeline for AR/VR head-mounted displays. Information Display, 33(6):14–35, 2017.
[11] J. Beck, M. Rainoldi, and R. Egger. Virtual reality in tourism: A state-of-the-art review. Tourism Review, 74(3):586–612, 2019.
[12] V. Bindschaedler, R. Shokri, and C. Gunter. Plausible deniability for privacy-preserving data synthesis. arXiv:1708.07975, 2017.
[13] D. Bonatto, S. Fachada, S. Rogge, A. Munteanu, and G. Lafruit. Real-time depth video-based rendering for 6-DoF HMD navigation and light field displays. IEEE Access, 9:146868–146887, 2021.
[14] A. Borji, M.-M. Cheng, Q. Hou, H. Jiang, and J. Li. Salient object detection: A survey. Computational Visual Media, 5:117–150, 2019.
[15] E. Bozkir, O. Günlü, W. Fuhl, R. Schaefer, and E. Kasneci. Differential privacy for eye tracking with temporal correlations. PLoS ONE, 16(8):1–22, 2021.
[16] P. Caserman, A. Garcia, and S. Göbel. A survey of full-body motion reconstruction in immersive virtual reality applications. IEEE Transactions on Visualization and Computer Graphics, 26(10):3089–3108, 2019.
[17] B. David, K. Butler, and E. Jain. For your eyes only: Privacy-preserving eye-tracking datasets. In Proc. of ACM Symposium on Eye Tracking Research & Applications (ETRA), pages 1–6, Seattle, USA, June 2022.
[18] B. David, D. Hosfelt, K. Butler, and E. Jain. A privacy-preserving approach to streaming eye-tracking data. IEEE Transactions on Visualization and Computer Graphics, 27(5):2555–2565, 2021.
[19] E. David, J. Gutiérrez, A. Coutrot, M. Perreira Da Silva, and P. Le Callet. A dataset of head and eye movements for 360° videos. In Proc. of ACM Multimedia Systems Conference (MMSys), pages 432–437, Amsterdam, Netherlands, June 2018.
[20] J. Dong, K. Ota, and M. Dong. Why VR games sickness? An empirical study of capturing and analyzing VR games head movement dataset. IEEE MultiMedia, 29(2):74–82, 2022.
[21] C. Dwork. Differential privacy. In Proc.
of International Colloquium on Automata, Languages, and Programming (ICALP), pages 1–12, Venice, Italy, July 2006.
[22] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. Journal of Privacy and Confidentiality, 7(3):17–51, 2016.
[23] C. Dwork and A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014.
[24] S. Eberz, G. Lovisotto, K. Rasmussen, V. Lenders, and I. Martinovic. 28 blinks later: Tackling practical challenges of eye movement biometrics. In Proc. of ACM SIGSAC Conference on Computer and Communications Security (CCS), pages 1187–1199, London, UK, November 2019.
[25] Eclipse. Eclipse, 2023. https://tinyurl.com/bddxda4z.
[26] K. Emery, M. Zannoli, J. Warren, L. Xiao, and S. Talathi. OpenNEEDS: A dataset of gaze, head, hand, and scene signals during exploration in open-ended VR environments. In Proc. of ACM Symposium on Eye Tracking Research and Applications (ETRA), pages 1–7, Virtual, May 2021.
[27] S. Eraslan, Y. Yesilada, and S. Harper. Scanpath trend analysis on web pages: Clustering eye tracking scanpaths. ACM Transactions on the Web, 10(4):1–35, 2016.
[28] U. Erlingsson, V. Pihur, and A. Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proc. of ACM SIGSAC Conference on Computer and Communications Security (CCS), pages 1054–1067, Scottsdale, USA, November 2014.
[29] S. Fachada, D. Bonatto, A. Schenkel, B. Kroon, and B. Sonneveldt. RVS software, 2023. https://gitlab.com/mpeg-i-visual/rvs/-/tree/master.
[30] S. Fachada, D. Bonatto, A. Schenkel, and G. Lafruit. Free navigation in natural scenery with DIBR: RVS and VSRS in MPEG-I standardization. In Proc. of IEEE International Conference on 3D Immersion (IC3D), pages 1–6, Brussels, Belgium, December 2018.
[31] C. Fan, J. Lee, W. Lo, C. Huang, K. Chen, and C. Hsu. Fixation prediction for 360 video streaming in head-mounted virtual reality.
In Proc. of ACM Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV), pages 67–72, Taipei, Taiwan, June 2017.
[32] C. Fan, S. Yen, C. Huang, and C. Hsu. Optimizing fixation prediction using recurrent neural networks for 360 video streaming in head-mounted virtual reality. IEEE Transactions on Multimedia, 22(3):744–759, 2019.
[33] J. Fang, K. Lee, T. Kämäräinen, M. Siekkinen, and C. Hsu. Will dynamic foveation boost cloud VR gaming experience? In Proc. of ACM Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV), pages 29–35, Vancouver, Canada, June 2023.
[34] C. Fehn. Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV. In Proc. of SPIE on Stereoscopic Displays and Virtual Reality Systems (SD&A), pages 93–104, San Jose, USA, May 2004.
[35] Fozzy. Fozzy game servers, 2023. https://tinyurl.com/bdh7abyc.
[36] S. Fremerey, A. Singla, K. Meseberg, and A. Raake. AVtrack360: An open dataset and software recording people's head rotations watching 360° videos on an HMD. In Proc. of ACM Multimedia Systems Conference (MMSys), pages 403–408, Amsterdam, Netherlands, June 2018.
[37] W. Fuhl, E. Bozkir, and E. Kasneci. Reinforcement learning for the privacy preservation and manipulation of eye tracking data. In Proc. of International Conference on Artificial Neural Networks (ICANN), pages 595–607, Bratislava, Slovakia, September 2021.
[38] Beat Games. Beat Saber, 2023. https://beatsaber.com/.
[39] Geopipe. Real New York City Vol. 2, 2023. https://tinyurl.com/28y93nbn.
[40] Giant Form Entertainment. Aircar, 2023. https://tinyurl.com/5xhcp4xr.
[41] A. Giaretta. Security and privacy in virtual reality – a literature survey. arXiv:2205.00208, 2022.
[42] N. Gilbert. 74 virtual reality statistics you must know in 2021/2022: Adoption, usage & market share, 2022. https://financesonline.com/virtual-reality-statistics/.
[43] R. Gross, E. Airoldi, B. Malin, and L. Sweeney.
Integrating utility into face de-identification. In Proc. of International Workshop on Privacy Enhancing Technologies (PETS), pages 227–242, Cavtat, Croatia, May 2005.
[44] Q. Guimard, F. Robert, C. Bauce, A. Ducreux, L. Sassatelli, H. Wu, M. Winckler, and A. Gros. PEM360: A dataset of 360° videos with continuous physiological measurements, subjective emotional ratings and motion traces. In Proc. of ACM Multimedia Systems Conference (MMSys), pages 252–258, Athlone, Ireland, June 2022.
[45] R. Hessels, C. Kemner, C. van den Boomen, and I. Hooge. The area-of-interest problem in eye-tracking research: A noise-robust solution for face and sparse stimuli. Behavior Research Methods, 48:1694–1712, 2016.
[46] C. Holland and O. Komogortsev. Biometric identification via eye movement scanpaths in reading. In Proc. of IEEE International Joint Conference on Biometrics (IJCB), pages 1–8, Washington DC, USA, October 2011.
[47] A. Hore and D. Ziou. Image quality metrics: PSNR vs. SSIM. In Proc. of IEEE International Conference on Pattern Recognition (ICPR), pages 2366–2369, Istanbul, Turkey, August 2010.
[48] M. Hsieh and J. Lee. Preliminary study of VR and AR applications in medical and healthcare education. J Nurs Health Stud, 3(1):1–5, 2018.
[49] HTC. VIVEPORT, 2023. https://www.viveport.com/?hl=zh-TW.
[50] M. Hu, X. Luo, J. Chen, Y. Lee, Y. Zhou, and D. Wu. Virtual reality: A survey of enabling technologies and its applications in IoT. Journal of Network and Computer Applications, 178:102970, 2021.
[51] Z. Hu, A. Bulling, S. Li, and G. Wang. EHTask: Recognizing user tasks from eye and head movements in immersive virtual reality. IEEE Transactions on Visualization and Computer Graphics, 29(4):1992–2004, 2021.
[52] Z. Hu, C. Zhang, S. Li, G. Wang, and D. Manocha. SGaze: A data-driven eye-head coordination model for real-time gaze prediction. IEEE Transactions on Visualization and Computer Graphics, 25(5):2002–2010, 2019.
[53] B. Huitema and J. McKean.
Autocorrelation estimation and inference with small samples. Psychological Bulletin, 110(2):291–304, 1991.
[54] G. Illahi, M. Siekkinen, T. Kämäräinen, and A. Ylä-Jääski. Real-time gaze prediction in virtual reality. In Proc. of ACM International Workshop on Immersive Mixed and Virtual Environment Systems (MMVE), pages 12–18, Athlone, Ireland, June 2022.
[55] P. Julayanont and Z. Nasreddine. Montreal Cognitive Assessment (MoCA): Concept and clinical review. Cognitive Screening Instruments: A Practical Approach, pages 139–195, 2017.
[56] J. Jung and P. Boissonade. VVS: Versatile view synthesizer for 6-DoF immersive video. Working paper or preprint, 2020.
[57] D. Kamińska, T. Sapiński, S. Wiak, T. Tikk, R. Haamer, E. Avots, A. Helmi, C. Ozcinar, and G. Anbarjafari. Virtual reality and its applications in education: Survey. Information, 10(10):318, 2019.
[58] Y. Kavak, E. Erdem, and A. Erdem. A comparative study for feature integration strategies in dynamic saliency estimation. Signal Processing: Image Communication, 51:13–25, 2017.
[59] G. Kellaris, S. Papadopoulos, X. Xiao, and D. Papadias. Differentially private event sequences over infinite streams. Proc. of VLDB Endowment, 7:1155–1166, 2014.
[60] R. Kennedy, N. Lane, K. Berbaum, and M. Lilienthal. Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness. The International Journal of Aviation Psychology, 3(3):203–220, 1993.
[61] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
[62] C. Kolmar. 25 amazing virtual reality statistics [2022]: The future of VR + AR, 2022. https://www.zippia.com/advice/virtual-reality-statistics/.
[63] G. Kougioumtzidis, V. Poulkov, Z. Zaharis, and P. Lazaridis. A survey on multimedia services QoE assessment and machine learning-based prediction. IEEE Access, 10:19507–19538, 2022.
[64] B. Kroon and G. Lafruit. Reference view synthesizer (RVS) 2.0 manual, 2018.
[65] LaikaBossGames. Art gallery museum, 2023.
https://tinyurl.com/4bmjtbty.
[66] E. Langbehn, F. Steinicke, M. Lappe, G. Welch, and G. Bruder. In the blink of an eye: Leveraging blink-induced suppression for imperceptible position and orientation redirection in virtual reality. ACM Transactions on Graphics, 37(4):1–11, 2018.
[67] Scikit-learn. Scikit-learn, 2023. https://scikit-learn.org/stable/.
[68] R. Leigh and D. Zee. A survey of eye movements: Characteristics and teleology. In The Neurology of Eye Movements, pages 3–19. Oxford University Press, New York, USA, 2006.
[69] J. Li, A. Roy, K. Fawaz, and Y. Kim. Kalεido: Real-time privacy control for eye-tracking systems. In Proc. of USENIX Security Symposium, pages 1793–1810, Virtual, August 2021.
[70] J. Liebers, M. Abdelaziz, L. Mecke, A. Saad, J. Auda, U. Gruenefeld, F. Alt, and S. Schneegass. Understanding user identification in virtual reality through behavioral biometrics and the effect of body normalization. In Proc. of ACM CHI Conference on Human Factors in Computing Systems, pages 1–11, Yokohama, Japan, May 2021.
[71] A. Liu, L. Xia, A. Duchowski, R. Bailey, K. Holmqvist, and E. Jain. Differential privacy for eye-tracking data. In Proc. of ACM Symposium on Eye Tracking Research & Applications (ETRA), pages 1–10, Denver, USA, June 2019.
[72] Z. Liu, Z. Zhu, J. Gao, and C. Xu. Forecast methods for time series data: A survey. IEEE Access, 9:91896–91912, 2021.
[73] W.-C. Lo, C.-L. Fan, J. Lee, C.-Y. Huang, K.-T. Chen, and C.-H. Hsu. 360° video viewing dataset in head-mounted virtual reality. In Proc. of ACM Multimedia Systems Conference (MMSys), pages 211–216, Taipei, Taiwan, June 2017.
[74] P. Lungaro, R. Sjöberg, A. Valero, A. Mittal, and K. Tollmar. Gaze-aware streaming solutions for the next generation of mobile VR experiences. IEEE Transactions on Visualization and Computer Graphics, 24(4):1535–1544, 2018.
[75] G. Mahalakshmi, S. Sridevi, and S. Rajaram. A survey on forecasting of time series data. In Proc.
of IEEE International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE), pages 1–8, Kovilpatti, India, January 2016.
[76] W. Mark. Post-Rendering 3D Image Warping: Visibility, Reconstruction, and Performance for Depth-Image Warping. PhD thesis, The University of North Carolina, 1999.
[77] F. Mathis, H. Fawaz, and M. Khamis. Knowledge-driven biometric authentication in virtual reality. In Proc. of Extended Abstracts of the ACM CHI Conference on Human Factors in Computing Systems, pages 1–10, Honolulu, USA, April 2020.
[78] Math.NET. Math.NET Numerics, 2023. https://numerics.mathdotnet.com/.
[79] MathNet.Numerics. MathNet.Numerics.Distributions, 2023. https://tinyurl.com/8aczzcyy.
[80] P. McCarthy. NuGetForUnity, 2023. https://tinyurl.com/3u446efc.
[81] G. McConkie. Evaluating and reporting data quality in eye movement research. Behavior Research Methods & Instrumentation, 13(2):97–106, 1981.
[82] E. McKenzie. Some simple models for discrete variate time series. JAWRA Journal of the American Water Resources Association, 21:645–650, 1985.
[83] X. Meng, R. Du, and A. Varshney. Eye-dominance-guided foveated rendering. IEEE Transactions on Visualization and Computer Graphics, 26(5):1972–1980, 2020.
[84] X. Meng, R. Du, M. Zwicker, and A. Varshney. Kernel foveated rendering. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1(1):1–20, 2018.
[85] B. Mildenhall, P. Srinivasan, M. Tancik, J. Barron, R. Ramamoorthi, and R. Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, December 2021.
[86] B. Mildenhall, P. P. Srinivasan, R. Ortiz-Cayon, N. K. Kalantari, R. Ramamoorthi, R. Ng, and A. Kar. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics, 38(4), 2019.
[87] A. Millen and P. Hancock. Eye see through you! Eye tracking unmasks concealed face recognition despite countermeasures.
Cognitive Research: Principles and Implications, 4(23):1–14, 2019.
[88] M. Miller, F. Herrera, H. Jun, J. Landay, and J. Bailenson. Personal identifiability of user tracking data during observation of 360-degree VR video. Scientific Reports, 10(1):1–10, 2020.
[89] R. Miller, N. Banerjee, and S. Banerjee. Within-system and cross-system behavior-based biometric authentication in virtual reality. In Proc. of IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pages 311–316, Atlanta, USA, March 2020.
[90] J. Moline. Virtual reality for health care: A survey. Virtual Reality in Neuro-Psycho-Physiology, 44:3–34, 1997.
[91] S. Munn, L. Stefano, and J. Pelz. Fixation-identification in dynamic scenes: Comparing an automated algorithm to manual coding. In Proc. of ACM Symposium on Applied Perception in Graphics and Visualization (APGV), pages 33–42, Los Angeles, USA, August 2008.
[92] V. Nair, G. Garrido, and D. Song. Exploring the unprecedented privacy risks of the metaverse. arXiv:2207.13176, 2022.
[93] V. Nair, G. Garrido, and D. Song. Going incognito in the metaverse. arXiv:2208.05604, 2022.
[94] Netflix. VMAF - Video Multi-Method Assessment Fusion, 2021. https://github.com/Netflix/vmaf.
[95] E. Newton, L. Sweeney, and B. Malin. Preserving privacy by de-identifying face images. IEEE Transactions on Knowledge and Data Engineering, 17(2):232–243, 2005.
[96] E. Niebur. Saliency map. Scholarpedia, 2(8):2675, 2007.
[97] NumPy. Legacy random generation, 2023. https://tinyurl.com/mr3fdaza.
[98] I. Olade, C. Fleming, and H. Liang. BioMove: Biometric user identification from human kinesiological movements for virtual reality systems. Sensors, 20(10):2944, 2020.
[99] OpenCV. OpenCV modules, 2023. https://docs.opencv.org/4.7.0/.
[100] A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, and A. Lefohn. Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics, 35(6):1–12, 2016.
[101] K.
Pfeuffer, M. Geiger, S. Prange, L. Mecke, D. Buschek, and F. Alt. Behavioural biometrics in VR: Identifying people from body motion and relations in virtual reality. In Proc. of ACM CHI Conference on Human Factors in Computing Systems, pages 1–12, Glasgow, UK, May 2019.
[102] Polyarc. Moss, 2023. https://tinyurl.com/59buur7s.
[103] V. Rastogi and S. Nath. Differentially private aggregation of distributed time-series with transformation and encryption. In Proc. of ACM SIGMOD International Conference on Management of Data (SIGMOD), pages 735–746, Indianapolis, USA, June 2010.
[104] Intel RealSense. Intel RealSense SDK 2.0 (v2.53.1), 2023. https://github.com/IntelRealSense/librealsense/.
[105] Intel RealSense. pyrealsense2 2.53.1.4623 project description, 2023. https://pypi.org/project/pyrealsense2.
[106] S. Ribaric, A. Ariyaeeinia, and N. Pavesic. De-identification for privacy protection in multimedia content: A survey. Signal Processing: Image Communication, 47(2):131–151, 2016.
[107] S. Saini, S. Malhi, B. Saro, M. Khan, and A. Kaur. Virtual reality: A survey of enabling technologies and its applications in IoT. In Proc. of International Conference on Electronics and Renewable Systems (ICEARS), pages 597–603, Tuticorin, India, March 2022.
[108] B. S. Salahieh, J. Jung, and A. Dziembowski. Test model 10 for MPEG immersive video, 2021.
[109] D. Salvucci and J. Goldberg. Identifying fixations and saccades in eye-tracking protocols. In Proc. of ACM Symposium on Eye Tracking Research & Applications (ETRA), pages 71–78, Palm Beach Gardens, USA, November 2000.
[110] SciPy. scipy.spatial.transform.Rotation, 2023. https://tinyurl.com/2p8zatbr.
[111] Shapes. Nature Starter Kit 2, 2023. https://tinyurl.com/yuruhcux.
[112] M. Slater and M. Sanchez-Vives. Enhancing our lives with immersive virtual reality. Frontiers in Robotics and AI, 3:74, 2016.
[113] C. Slocum, Y. Zhang, N. Abu-Ghazaleh, and J. Chen. Going through the motions: AR/VR keylogging from user head motions. In Proc.
of USENIX Security Symposium, pages 159–174, Anaheim, USA, August 2023.
[114] O. Stankiewicz, K. Wegner, M. Tanimoto, and M. Domański. Enhanced view synthesis reference software (VSRS) for free-viewpoint television, 2013.
[115] SteamVR. SteamVR, 2023. https://tinyurl.com/yptjjtjs.
[116] J. Steil, I. Hagestedt, X. Huang, and A. Bulling. Privacy-aware eye tracking using differential privacy. In Proc. of ACM Symposium on Eye Tracking Research & Applications (ETRA), pages 1–9, Denver, USA, June 2019.
[117] O. Studio. Free Medieval Room, 2023. https://tinyurl.com/mwmfft2v.
[118] X. Su, X. Yan, and C. Tsai. Linear regression. Wiley Interdisciplinary Reviews: Computational Statistics, 4:275–294, 2012.
[119] Q. Sun, A. Patney, L. Wei, O. Shapira, J. Lu, P. Asente, S. Zhu, M. McGuire, D. Luebke, and A. Kaufman. Towards virtual reality infinite walking: Dynamic saccadic redirection. ACM Transactions on Graphics, 37(4):1–13, 2018.
[120] Y. Sun, S. Tang, C. Wang, and C. Hsu. On objective and subjective quality of 6DoF synthesized live immersive videos. In Proc. of ACM Workshop on Quality of Experience in Visual Multimedia Applications (QoEVMA), pages 49–56, Lisboa, Portugal, October 2022.
[121] A. Sunshine. Arizona Sunshine, 2023. https://www.arizona-sunshine.com/.
[122] L. Tabbaa, R. Searle, S. Bafti, M. Hossain, J. Intarasisrisawat, M. Glancy, and C. Ang. VREED: Virtual reality emotion recognition dataset using eye tracking & physiological measures. ACM Interact. Mob. Wearable Ubiquitous Technol., 5(4):1–20, 2022.
[123] Apple Differential Privacy Team. Learning with privacy at scale, 2017. https://tinyurl.com/3r7tbw8r.
[124] Superhot Team. Superhot VR, 2023. https://tinyurl.com/2bhhzmj3.
[125] E. Thambiraja, G. Ramesh, and D. Umarani. A survey on various most common encryption techniques. International Journal of Advanced Research in Computer Science and Software Engineering, 2(7):226–233, 2012.
[126] Tobii. Tobii XR API, 2023. https://developer.tobii.com/xr/develop/unity/.
[127] Tobii.
Troubleshoot, 2023. https://developer.tobii.com/pc-gaming/unitysdk/troubleshoot/.
[128] S. Tomar. Converting video formats with FFmpeg, 2006. https://tinyurl.com/3tr3nuvt.
[129] Unity. Unity, 2023. https://unity.com/.
[130] Unity. Unity Asset Store, 2023. https://assetstore.unity.com/.
[131] Unity. Unity scripting API, 2023. https://docs.unity3d.com/ScriptReference/.
[132] Unity. XR Interaction Toolkit, 2023. https://tinyurl.com/m37py5ty.
[133] I. Wagner and D. Eckhoff. Technical privacy metrics: A systematic survey. ACM Computing Surveys, 51(3):1–38, 2018.
[134] Y. Wang, Z. Su, N. Zhang, R. Xing, D. Liu, T. Luan, and X. Shen. A survey on metaverse: Fundamentals, security, and privacy. IEEE Communications Surveys & Tutorials, 25(1):319–352, 2022.
[135] W. Wei. Time Series Analysis, volume 2. Oxford University Press, 2013.
[136] X. Wei and C. Yang. FoV privacy-aware VR streaming. In Proc. of IEEE Wireless Communications and Networking Conference (WCNC), pages 1515–1520, Austin, USA, April 2022.
[137] Y. Wei, X. Wei, S. Zheng, C. Hsu, and C. Yang. A 6DoF VR dataset of 3D virtual world for privacy-preserving approach and utility-privacy tradeoff. In Proc. of ACM Multimedia Systems Conference (MMSys), pages 444–450, Vancouver, Canada, June 2023.
[138] C. Wu, Z. Tan, Z. Wang, and S. Yang. A dataset for exploring user behaviors in VR spherical video streaming. In Proc. of ACM Multimedia Systems Conference (MMSys), pages 193–198, Taipei, Taiwan, June 2017.
[139] Y. Xu, Y. Dong, J. Wu, Z. Sun, Z. Shi, J. Yu, and S. Gao. Gaze prediction in dynamic 360° immersive videos. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5333–5342, Salt Lake City, USA, June 2018.
[140] T. Xue, A. El Ali, T. Zhang, G. Ding, and P. Cesar. CEAP-360VR: A continuous physiological and behavioral emotion annotation dataset for 360 VR videos. IEEE Transactions on Multimedia, 25:243–255, 2021.
[141] A. Yaqoob, T. Bi, and G.-M. Muntean.
A survey on adaptive 360 video streaming: Solutions, challenges and opportunities. IEEE Communications Surveys & Tutorials, 22(4):2801–2838, 2020.
[142] G. Yule. VII. On a method of investigating periodicities in disturbed series, with special reference to Wolfer's sunspot numbers. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 226(636-646):267–298, 1927.
[143] J. Zhai, S. Zhang, J. Chen, and Q. He. Autoencoder and its various variants. In Proc. of IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 415–419, Miyazaki, Japan, October 2018.
[144] X. Zhang, M. Khalili, and M. Liu. Differentially private real-time release of sequential data. ACM Transactions on Privacy and Security, 26(1):1–29, 2022.
[145] J. Zheng, K. Chan, and I. Gibson. Virtual reality. IEEE Potentials, 17(2):20–23, 1998.
[146] Y. Zhou, T. Feng, S. Shuai, X. Li, L. Sun, and H. Duh. EDVAM: A 3D eye-tracking dataset for visual attention modeling in a virtual museum. Frontiers of Information Technology & Electronic Engineering, 23(1):101–112, 2022.