[1] Lytro Light Field camera. http://lightfield-forum.com/lytro/lytro-lightfield-camera/, 2013. Accessed August 2019.
[2] RayTrix R11 3D Light Field Camera. http://lightfield-forum.com/raytrix/raytrix-r11-3d-lightfield-camera/, 2014. Accessed August 2019.
[3] Lytro Illum - Professional Light Field Camera. http://lightfield-forum.com/lytro/lytro-illum-professional-light-field-camera/, 2015. Accessed August 2019.
[4] Facebook Spaces. https://www.facebook.com/spaces, 2017. Accessed April 2018.
[5] Lytro Support. https://support.lytro.com/hc/en-us, 2017.
[6] Meet the Lytro Immerge 2.0, a 95-lens VR camera that could soon shoot in 10K. https://www.digitaltrends.com/photography/meet-the-lytro-immerge-2/, 2017. Accessed August 2019.
[7] csmatio .NET Library for Matlab MAT-files. https://sourceforge.net/projects/csmatio/files/, 2018.
[8] Facebook Oculus Rift. https://www.oculus.com, 2018. Accessed May 2018.
[9] Filming the Future with RED and Facebook 360. https://facebook360.fb.com/2018/09/26/film-the-future-with-red-and-facebook-360/, 2018. Accessed August 2019.
[10] FOVE: Eye-Tracking Virtual Reality Headset. https://www.getfove.com/, 2018.
[11] Google AR and VR - Experimenting with Light Fields. https://www.blog.google/products/google-ar-vr/experimenting-light-fields/, 2018. Accessed August 2019.
[12] Google Cardboard. https://vr.google.com/cardboard/, 2018. Accessed May 2018.
[13] HTC Vive. https://www.htcvive.com, 2018. Accessed May 2018.
[14] HTC Vive Focus. https://www.vive.com/cn/product/vive-focus-en/, 2018. Accessed May 2018.
[15] Luna 360 VR. http://luna.camera/, 2018. Accessed May 2018.
[16] OpenCV (Open Source Computer Vision Library). https://opencv.org/, 2018.
[17] OpenCVSharp for Unity - Unity Asset Store. https://assetstore.unity.com/packages/tools/integration/opencv-for-unity-100374, 2018.
[18] RayTrix: 3D Light Field Camera Technology. https://raytrix.de/, 2018.
[19] Ricoh Theta S. https://theta360.com, 2018. Accessed May 2018.
[20] Samsung Gear 360. http://www.samsung.com/global/galaxy/gear-360/, 2018. Accessed May 2018.
[21] Sony PlayStation VR. https://www.playstation.com/en-au/explore/playstation-vr/, 2018. Accessed May 2018.
[22] Unity. https://unity.com/, 2018.
[23] A. A. Ageev and M. I. Sviridenko. Approximation algorithms for maximum coverage and max cut with given sizes of parts. In Integer Programming and Combinatorial Optimization, pages 17–30. Springer Berlin Heidelberg, 1999.
[24] S. Avidan and A. Shashua. Novel view synthesis in tensor space. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1034–1040, June 1997.
[25] J. Berent and P. L. Dragotti. Segmentation of epipolar-plane image volumes with occlusion and disocclusion competition. In 2006 IEEE Workshop on Multimedia Signal Processing, pages 182–185, Oct. 2006.
[26] C. Birklbauer and O. Bimber. Panorama light-field imaging. Eurographics, 33(2):43–52, May 2014.
[27] R. C. Bolles, H. H. Baker, and D. H. Marimont. Epipolar-plane image analysis: An approach to determining structure from motion. International Journal of Computer Vision, 1(1):7–55, Mar. 1987.
[28] V. Boominathan, K. Mitra, and A. Veeraraghavan. Improving resolution and depth-of-field of light field cameras using a hybrid imaging system. In 2014 IEEE International Conference on Computational Photography (ICCP), pages 1–10, May 2014.
[29] K. Carnegie and T. Rhee. Reducing visual discomfort with HMDs using dynamic depth of field. IEEE Computer Graphics and Applications, 35(5):34–41, September 2015.
[30] J. Chakareski. Adaptive multiview video streaming: challenges and opportunities. IEEE Communications Magazine, 51(5):94–100, May 2013.
[31] S. E. Chen and L. Williams. View interpolation for image synthesis. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '93), pages 279–288. ACM Press, 1993.
[32] B. Clipp, J. Kim, J. Frahm, M. Pollefeys, and R. Hartley. Robust 6DoF motion estimation for non-overlapping, multi-camera systems. In 2008 IEEE Workshop on Applications of Computer Vision, pages 1–8. IEEE, Jan. 2008.
[33] A. Collet, M. Chuang, P. Sweeney, D. Gillett, D. Evseev, D. Calabrese, H. Hoppe, A. Kirk, and S. Sullivan. High-quality streamable free-viewpoint video. ACM Trans. Graph., 34(4):69:1–69:13, July 2015.
[34] D. Dansereau. Matlab Light Field Toolbox v0.4. https://www.mathworks.com/matlabcentral/fileexchange/49683-light-field-toolbox-v0-4, 2015.
[35] D. G. Dansereau, O. Pizarro, and S. B. Williams. Linear volumetric focus for light field cameras. ACM Trans. Graph., 34(2):15:1–15:20, Mar. 2015.
[36] R. Doré, G. Briand, and T. Tapie. Technicolor 3DoFPlus Test Materials. International Organization for Standardization Meeting Document ISO/IEC JTC1/SC29/WG11 MPEG/M42349, 2018. Meeting held at San Diego, USA.
[37] A. Dziembowski, J. Samelak, and M. Domański. View selection for virtual view synthesis in free navigation systems. In 2018 International Conference on Signals and Electronic Systems (ICSES), pages 83–87, Sept. 2018.
[38] S. Fachada, D. Bonatto, A. Schenkel, and G. Lafruit. Depth image based view synthesis with multiple reference views for virtual reality. In 2018 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), pages 1–4, June 2018.
[39] T. Georgiev and A. Lumsdaine. Superresolution with plenoptic camera 2.0. 2009.
[40] X. Guo, Z. Yu, S. B. Kang, H. Lin, and J. Yu. Enhancing light fields through ray-space stitching. IEEE Transactions on Visualization and Computer Graphics, 22(7):1852–1861, July 2016.
[41] S. Khuller, A. Moss, and J. S. Naor. The budgeted maximum coverage problem. Information Processing Letters, 70(1):39–45, 1999.
[42] C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross. Scene reconstruction from high spatio-angular resolution light fields. ACM Trans. Graph., 32:73:1–73:12, 2013.
[43] B. Krolla, M. Diebold, B. Goldluecke, and D. Stricker. Spherical light fields. In Proceedings of the British Machine Vision Conference (BMVC), Nottingham, 2014. BMVA Press.
[44] B. Kroon. Reference View Synthesizer (RVS) manual. International Organization for Standardization Meeting Document ISO/IEC JTC1/SC29/WG11 MPEG/N18068, 2018. Meeting held at Macau SAR, China.
[45] Y. Lai and C. Hsu. Refocusing supports of panorama light-field images in head-mounted virtual reality. In Proceedings of the 3rd International Workshop on Multimedia Alternate Realities, AltMM '18, pages 15–20. ACM, 2018.
[46] M. Levoy and P. Hanrahan. Light field rendering. In Proc. of ACM International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96), pages 31–42, New Orleans, USA, August 1996.
[47] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light field microscopy. ACM Trans. Graph., 25(3):924–934, July 2006.
[48] K. Müller, A. Smolic, K. Dix, P. Merkle, P. Kauff, and T. Wiegand. View synthesis for advanced 3D video systems. EURASIP Journal on Image and Video Processing, 2008(1), Feb. 2009.
[49] J. Moss, J. Scisco, and E. Muth. Simulator sickness during head mounted display (HMD) of real world video captured scenes. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 52(19):1631–1634, September 2008.
[50] R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan. Light field photography with a hand-held plenoptic camera. Stanford Tech Report CTSR 2005-02, 2005.
[51] S. Ohl. Tele-immersion concepts. IEEE Transactions on Visualization and Computer Graphics, 24(10):2827–2842, Oct. 2018.
[52] N. Padmanaban, R. Konrad, E. A. Cooper, and G. Wetzstein. Optimizing VR for all users through adaptive focus displays. In Proc. of ACM International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '17) Talks, pages 77:1–77:2, Los Angeles, USA, July 2017.
[53] B. Ray, J. Jung, and M. Larabi. On the possibility to achieve 6-DoF for 360 video using divergent multi-view content. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 211–215, Sept. 2018.
[54] Zion Market Research. Virtual Reality (VR) market by hardware and software for (consumer, commercial, enterprise, medical, aerospace and defense, automotive, energy and others): Global industry perspective, comprehensive analysis and forecast, 2016–2022. https://www.zionmarketresearch.com/report/virtual-reality-market, 2017. Accessed August 2019.
[55] M. Shirer and S. Murray. IDC Sees the Dawn of the DX Economy and the Rise of the Digital-Native Enterprise. https://www.businesswire.com/news/home/20161101005193/en/IDC-Sees-Dawn-DX-Economy-Rise-Digital-Native, 2016. Accessed April 2018.
[56] A. Smolic, K. Mueller, P. Merkle, C. Fehn, P. Kauff, P. Eisert, and T. Wiegand. 3D video and free viewpoint video - technologies, applications and MPEG standards. In 2006 IEEE International Conference on Multimedia and Expo, pages 2161–2164, July 2006.
[57] M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi. Depth from combining defocus and correspondence using light-field cameras. In 2013 IEEE International Conference on Computer Vision, pages 673–680, Dec. 2013.
[58] D. Tian, P. Lai, P. Lopez, and C. Gomila. View synthesis techniques for 3D video. In Applications of Digital Image Processing XXXII, volume 7443. International Society for Optics and Photonics, Sept. 2009.
[59] J. Unger, A. Wenger, T. Hawkins, A. Gardner, and P. Debevec. Capturing and rendering with incident light fields. In Proceedings of the 14th Eurographics Workshop on Rendering, pages 141–149, 2003.
[60] V. Vazirani. Approximation Algorithms. Springer, Berlin, New York, 2001.
[61] K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar. PiCam: An ultra-thin high performance monolithic camera array. ACM Trans. Graph., 32(6):166:1–166:13, Nov. 2013.
[62] T. Wang, J. Zhu, N. K. Kalantari, A. A. Efros, and R. Ramamoorthi. Light field video capture using a learning-based hybrid imaging system. ACM Trans. Graph., 36(4):133:1–133:13, July 2017.
[63] X. Wang, L. Chen, S. Zhao, and S. Lei. From OMAF for 3DoF VR to MPEG-I Media Format for 3DoF+, Windowed 6DoF and 6DoF VR. International Organization for Standardization Meeting Document ISO/IEC JTC1/SC29/WG11 MPEG/M41197, 2017. Meeting held at Torino, Italy.
[64] S. Wanner and B. Goldluecke. Variational light field analysis for disparity estimation and super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(3):606–619, March 2014.
[65] B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy. High performance imaging using large camera arrays. In ACM SIGGRAPH 2005 Papers, SIGGRAPH '05, pages 765–776. ACM, 2005.
[66] G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu. Light field image processing: An overview. IEEE Journal of Selected Topics in Signal Processing, 11(7):926–954, Oct. 2017.
[67] J. C. Yang, M. Everett, C. Buehler, and L. McMillan. A real-time distributed light field camera. In Proceedings of the 13th Eurographics Workshop on Rendering, EGRW '02, pages 77–86, 2002.
[68] T. Yang, Y. Zhang, J. Yu, J. Li, W. Ma, X. Tong, R. Yu, and L. Ran. All-in-focus synthetic aperture imaging. In Computer Vision – ECCV 2014, pages 1–15, 2014.
[69] Z. Yu, X. Guo, H. Ling, A. Lumsdaine, and J. Yu. Line assisted light field triangulation and stereo matching. In 2013 IEEE International Conference on Computer Vision, pages 2792–2799, Dec. 2013.
[70] K. Yücer, A. Sorkine-Hornung, O. Wang, and O. Sorkine-Hornung. Efficient 3D object segmentation from densely sampled light fields with applications to 3D reconstruction. ACM Trans. Graph., 35(3):22:1–22:15, Mar. 2016.
[71] M. Zink, R. Sitaraman, and K. Nahrstedt. Scalable 360 video stream delivery: Challenges, solutions, and opportunities. Proceedings of the IEEE, pages 1–12, 2019.