
Detailed Record

Author (Chinese): 羅文志
Author (English): Lo, Wen-Chih
Title (Chinese): 利用邊緣運算最佳化360度全景影片之頭戴式虛擬實境串流
Title (English): Edge-Assisted 360-degree Video Streaming for Head-Mounted Virtual Reality
Advisor (Chinese): 徐正炘
Advisor (English): Hsu, Cheng-Hsin
Committee Members (Chinese): 李哲榮、黃俊穎
Committee Members (English): Lee, Che-Rung; Huang, Chun-Ying
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 105062620
Year of Publication (ROC calendar): 107 (2018)
Graduation Academic Year: 106
Language: English
Number of Pages: 69
Keywords (Chinese): 360度全景影片、邊緣網路、影音串流、虛擬實境、頭戴式裝置
Keywords (English): 360-degree Video, Edge Networks, Video Streaming, Virtual Reality, Head-Mounted Display
Statistics:
  • Recommendations: 0
  • Views: 401
  • Rating: *****
  • Downloads: 14
  • Bookmarks: 0
In recent years, 360-degree videos have become very popular, and many vendors have raced to release their own head-mounted displays. Compared with a traditional flat screen, watching 360-degree videos through a head-mounted display gives users a better viewing experience. However, realizing a high-quality immersive experience requires overcoming several challenges, such as network latency and bandwidth consumption. In this thesis, we propose an edge-assisted 360-degree video streaming system for head-mounted virtual reality. We further design an optimization algorithm that adopts different streaming strategies for different users, effectively reducing network latency and improving the viewing experience. To validate the performance of the system and the algorithm, we also collected and released a dataset of 360-degree video viewing behavior; using this dataset, we quantify the strengths and weaknesses of edge-assisted 360-degree video streaming to head-mounted displays. Compared with existing streaming systems, our system (i) reduces bandwidth consumption, (ii) provides a better viewing experience, and (iii) lowers the computation load on the client side. Both the streaming system and the viewing dataset have been open-sourced online, and we hope they will benefit researchers in industry and academia who are interested in this field.
Over the past years, 360◦ video streaming has become increasingly popular. Watching these videos with Head-Mounted Displays (HMDs), also known as VR headsets, gives a more immersive experience than using traditional planar monitors. However, several open challenges keep state-of-the-art technology away from a truly immersive viewing experience, including high bandwidth consumption, long turnaround latency, and heterogeneous HMD devices. In this thesis, we propose an edge-assisted 360◦ video streaming system, which leverages edge networks to perform viewport rendering. We formulate an optimization problem that determines which HMD clients should be served without overloading the edge devices, design an algorithm to solve it, and implement a real testbed as a proof of concept. The resulting edge-assisted 360◦ video streaming system is evaluated through extensive experiments with an open-sourced 360◦ viewing dataset. With the assistance of edge devices, we reduce the bandwidth usage and the computation workload on HMD devices when serving viewers, and lower network latency is also guaranteed. The results show that, compared to current 360◦ video streaming platforms such as YouTube, our edge-assisted rendering platform can: (i) save up to 62% in bandwidth consumption, (ii) achieve higher viewing quality at a given bitrate, and (iii) reduce the computation workload on lightweight HMDs. Our proposed system and the viewing dataset are open-sourced and can be leveraged by researchers and engineers to further improve 360◦ video streaming.
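The abstract only summarizes the optimization; the actual model and algorithm appear in Chapter 4 of the thesis and are not reproduced in this record. As a rough, hedged illustration of the kind of decision the edge server makes, the following Python sketch greedily picks which HMD clients receive edge-side viewport rendering without exceeding an assumed rendering-capacity budget. All client names, utility values, costs, and the capacity figure are hypothetical, and this greedy heuristic only stands in for, and is not, the thesis's optimization algorithm.

    # Minimal sketch (not the thesis's algorithm): decide which HMD clients the
    # edge server renders viewports for, under an assumed capacity budget.
    def select_clients_for_edge_rendering(clients, capacity):
        """Greedy heuristic: serve clients with the best utility-to-cost ratio
        until the assumed edge rendering capacity is used up; remaining clients
        fall back to rendering the viewport themselves."""
        ranked = sorted(clients, key=lambda c: c["utility"] / c["cost"], reverse=True)
        served, used = [], 0.0
        for c in ranked:
            if used + c["cost"] <= capacity:
                served.append(c["id"])
                used += c["cost"]
        return served

    # Hypothetical example: three HMD clients with estimated viewing-quality
    # gains (utility) and per-client edge rendering costs.
    clients = [
        {"id": "hmd-1", "utility": 8.0, "cost": 3.0},
        {"id": "hmd-2", "utility": 5.0, "cost": 4.0},
        {"id": "hmd-3", "utility": 6.0, "cost": 2.0},
    ]
    print(select_clients_for_edge_rendering(clients, capacity=5.0))  # ['hmd-3', 'hmd-1']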
Acknowledgments
致謝 (Acknowledgments in Chinese)
Abstract
中文摘要 (Abstract in Chinese)

1. Introduction .......... 1
1.1 Contributions .......... 4
1.2 Thesis Organization .......... 5
2. Background .......... 7
2.1 360◦ Videos Preprocessing .......... 7
2.2 360◦ Videos Streaming .......... 8
2.3 Edge Computing .......... 9
2.4 360◦ Videos Quality Assessments .......... 10
3. System Architecture .......... 12
3.1 Cloud Server .......... 12
3.2 Edge Server .......... 14
3.2.1 Tile Rewriting (TR) .......... 15
3.2.2 Viewport Rendering (VPR) .......... 15
3.3 HMD Client .......... 16
4. Optimal Edge-Assisted Rendering to Head-Mounted Displays .......... 17
4.1 Notations and Models .......... 18
4.2 Formulation .......... 19
4.3 Proposed Algorithm .......... 20
5. 360◦ Videos Viewing Dataset .......... 24
5.1 Content Traces .......... 24
5.2 Sensor Traces .......... 27
5.3 Dataset Format .......... 28
5.3.1 Content Dataset .......... 28
5.3.2 Sensor Dataset .......... 28
6. Evaluations .......... 32
6.1 Implementations .......... 32
6.2 Setups .......... 33
6.3 Results .......... 34
7. Related Work .......... 43
7.1 360◦ Video Acquisition .......... 43
7.2 360◦ Video Encoding .......... 46
7.3 360◦ Video Transmission .......... 49
7.4 360◦ Video Quality Assessment .......... 52
8. Conclusion and Future Work .......... 56
Bibliography .......... 58