[1] Eric Guérin. Deep terrains – code and data | Page perso – Eric Guérin, 2021. URL http://web.archive.org/web/20210228103649/https://perso.liris.cnrs.fr/eguerin/new/blog/deep-terrains-code-and-data/.
[2] Morton Leonard Heilig. Sensorama simulator, U.S. Patent 3 050 870, Aug. 1962.
[3] Ivan E. Sutherland. A head-mounted three dimensional display. In Proceedings of the December 9–11, 1968, Fall Joint Computer Conference, Part I, pages 757–764, 1968.
[4] Aurora Berni and Yuri Borgianni. Applications of virtual reality in engineering and product design: Why, what, how, when and where. Electronics, 9(7):1064, 2020.
[5] Daria Vlah, Vanja Čok, and Uroš Urbas. VR as a 3D modelling tool in engineering design applications. Applied Sciences, 11(16):7570, 2021.
[6] Chien-Wen Chen, Jain-Wei Peng, Chia-Ming Kuo, Min-Chun Hu, and Yuan-Chi Tseng. Ontlus: 3D content collaborative creation via virtual reality. In International Conference on Multimedia Modeling, pages 386–389. Springer, 2018.
[7] SangSu Choi, Kiwook Jung, and Sang Do Noh. Virtual reality applications in manufacturing industries: Past research, present findings, and future directions. Concurrent Engineering, 23(1):40–63, 2015.
[8] Grand View Research, Inc. Virtual reality market size, share & trends analysis report by technology (semi & fully immersive, non-immersive), by device (HMD, GTD, PDW), by component (hardware, software), by application, by region, and segment forecasts, 2022–2030, 2022. URL http://web.archive.org/web/20220612102626/https://www.grandviewresearch.com/industry-analysis/virtual-reality-vr-market.
[9] International Data Corporation. AR/VR headset shipments grew dramatically in 2021, thanks largely to Meta's strong Quest 2 volumes, with growth forecast to continue, according to IDC, 2022. URL http://web.archive.org/web/20220613212048/https://www.idc.com/getdoc.jsp?containerId=prUS48969722.
[10] Wikipedia. Metaverse, 2022. URL https://web.archive.org/web/20220615135747/https://en.wikipedia.org/wiki/Metaverse.
[11] Sander Van Goethem, Regan Watts, Arno Dethoor, Rik Van Boxem, Kaz van Zegveld, Jouke Verlinden, and Stijn Verwulgen. The use of immersive technologies for concept design. In International Conference on Applied Human Factors and Ergonomics, pages 698–704. Springer, 2020.
[12] Rossitza Setchi and Carole Bouchard. In search of design inspiration: A semantic-based approach. Journal of Computing and Information Science in Engineering, 10(3), 2010.
[13] Tariq S. Mujber, Tamas Szecsi, and Mohammed S. J. Hashmi. Virtual reality applications in manufacturing process simulation. Journal of Materials Processing Technology, 155:1834–1838, 2004.
[14] Jilin Ye, R. Ian Campbell, Tom Page, and Kevin S. Badni. An investigation into the implementation of virtual reality technologies in support of conceptual design. Design Studies, 27(1):77–97, 2006.
[15] Rainer Stark, Johann Habakuk Israel, and Thomas Wöhler. Towards hybrid modelling environments—merging desktop-CAD and virtual reality-technologies. CIRP Annals, 59(1):179–182, 2010.
[16] Johann Habakuk Israel, Eva Wiese, Magdalena Mateescu, Christian Zöllner, and Rainer Stark. Investigating three-dimensional sketching for early conceptual design—results from expert discussions and user studies. Computers & Graphics, 33(4):462–473, 2009.
[17] Joshua Q. Coburn, Ian Freeman, and John L. Salmon. A review of the capabilities of current low-cost virtual reality technology and its potential to enhance the design process. Journal of Computing and Information Science in Engineering, 17(3), 2017.
[18] Steve Bryson. Virtual reality in scientific visualization. Computers & Graphics, 17(6):679–685, 1993.
[19] Adobe Inc. Top 3D sculpting tools for virtual reality authoring | Medium by Adobe, 2022. URL http://web.archive.org/web/20220417130949/https://www.adobe.com/products/medium.html.
[20] Google LLC. Tilt Brush by Google, 2021. URL http://web.archive.org/web/20220502202135/https://www.tiltbrush.com/.
[21] Google LLC. Blocks – create 3D models in VR – Google VR, 2021. URL http://web.archive.org/web/20220423181749/https://arvr.google.com/blocks/.
[22] Xingyuan Sun, Jiajun Wu, Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Tianfan Xue, Joshua B. Tenenbaum, and William T. Freeman. Pix3D: Dataset and methods for single-image 3D shape modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2974–2983, 2018.
[23] Éric Guérin, Julie Digne, Eric Galin, Adrien Peytavie, Christian Wolf, Bedrich Benes, and Benoît Martinez. Interactive example-based terrain authoring with conditional generative adversarial networks. ACM Transactions on Graphics (TOG), 36(6):1–13, 2017.
[24] H.-P. Balzerkiewitz and C. Stechert. The evolution of virtual reality towards the usage in early design phases. In Proceedings of the Design Society: DESIGN Conference, volume 1, pages 91–100. Cambridge University Press, 2020.
[25] Timothy Gwynn. A user interface for terrain modelling in virtual reality using a head mounted display. Master's thesis, Faculty of Science, 2021.
[26] Oculus VR. Mixed reality with Passthrough, 2022. URL http://web.archive.org/web/20220416225425/https://developer.oculus.com/blog/mixed-reality-with-passthrough/.
[27] Christopher B. Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In European Conference on Computer Vision, pages 628–644. Springer, 2016.
[28] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs. In Proceedings of the IEEE International Conference on Computer Vision, pages 2088–2096, 2017.
[29] Haozhe Xie, Hongxun Yao, Xiaoshuai Sun, Shangchen Zhou, and Shengping Zhang. Pix2Vox: Context-aware 3D reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2690–2698, 2019.
[30] Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. Advances in Neural Information Processing Systems, 29, 2016.
[31] Haozhe Xie, Hongxun Yao, Shengping Zhang, Shangchen Zhou, and Wenxiu Sun. Pix2Vox++: Multi-scale context-aware 3D object reconstruction from single and multiple images. International Journal of Computer Vision, 128(12):2919–2935, 2020.
[32] Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective transformer nets: Learning single-view 3D object reconstruction without 3D supervision. Advances in Neural Information Processing Systems, 29, 2016.
[33] Shuo Yang, Min Xu, Haozhe Xie, Stuart Perry, and Jiahao Xia. Single-view 3D object reconstruction from shape priors in memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3152–3161, 2021.
[34] Haoqiang Fan, Hao Su, and Leonidas J. Guibas. A point set generation network for 3D object reconstruction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 605–613, 2017.
[35] Matheus Gadelha, Rui Wang, and Subhransu Maji. Multiresolution tree networks for 3D point cloud processing. In Proceedings of the European Conference on Computer Vision (ECCV), pages 103–118, 2018.
[36] Andrey Kurenkov, Jingwei Ji, Animesh Garg, Viraj Mehta, JunYoung Gwak, Christopher Choy, and Silvio Savarese. DeformNet: Free-form deformation network for 3D shape reconstruction from a single image. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 858–866. IEEE, 2018.
[37] David Novotny, Diane Larlus, and Andrea Vedaldi. Learning 3D object categories by looking around them. In Proceedings of the IEEE International Conference on Computer Vision, pages 5218–5227, 2017.
[38] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3D reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4460–4470, 2019.
[39] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3D representations without 3D supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3504–3515, 2020.
[40] Qiangeng Xu, Weiyue Wang, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. DISN: Deep implicit surface network for high-quality single-view 3D reconstruction. Advances in Neural Information Processing Systems, 32, 2019.
[41] Yifan Xu, Tianqi Fan, Yi Yuan, and Gurprit Singh. Ladybird: Quasi-Monte Carlo sampling for deep implicit field based 3D reconstruction with symmetry. In European Conference on Computer Vision, pages 248–263. Springer, 2020.
[42] Eric Galin, Eric Guérin, Adrien Peytavie, Guillaume Cordonnier, Marie-Paule Cani, Bedrich Benes, and James Gain. A review of digital terrain modeling. In Computer Graphics Forum, volume 38, pages 553–577. Wiley Online Library, 2019.
[43] Chien-Wen Chen, Min-Chun Hu, Wei-Ta Chu, and Jun-Cheng Chen. A real-time sculpting and terrain generation system for interactive content creation. IEEE Access, 9:114914–114928, 2021.
[44] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[45] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
[46] panmari. panmari/stanford-shapenet-renderer: Scripts for batch rendering models using Blender, tested with models from Stanford's ShapeNet library, 2021. URL https://web.archive.org/web/20210507031131/https://github.com/panmari/stanford-shapenet-renderer.
[47] Jianxiong Xiao, Krista A. Ehinger, James Hays, Antonio Torralba, and Aude Oliva. SUN database: Exploring a large collection of scene categories. International Journal of Computer Vision, 119(1):3–22, 2016.
[48] Patrick Min. binvox 3D mesh voxelizer, keywords: voxelization, voxelisation, 3D model, 2021. URL https://web.archive.org/web/20220531004757/https://www.patrickmin.com/binvox/.
[49] 1-10, Inc. Getting started | ZIG SIM by 1→10 ZIGSIM, 2021. URL https://web.archive.org/web/20211024044716/https://1-10.github.io/zigsim/getting-started.html.
[50] Keijiro Takahashi. keijiro/KlakNDI: NewTek NDI™ plugin for Unity, 2022. URL https://web.archive.org/web/20220302175150/https://github.com/keijiro/KlakNDI.
[51] Facebook and Microsoft. ONNX | Home, 2022. URL https://web.archive.org/web/20220630090358/https://onnx.ai/.
[52] Microsoft. ONNX Runtime | Home, 2022. URL https://web.archive.org/web/20220624015513/https://onnxruntime.ai/.
[53] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125–1134, 2017.
[54] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
[55] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.
[56] Wikipedia. Signed distance function – Wikipedia, 2022. URL https://web.archive.org/web/20220725211733/https://en.wikipedia.org/wiki/Signed_distance_function.
[57] Unity Technologies. Unity – Manual: Trees, 2022. URL http://web.archive.org/web/20220709061733/https://docs.unity3d.com/Manual/terrain-Trees.html.
[58] Unity Technologies. Unity – Manual: Wind Zones, 2022. URL https://web.archive.org/web/20220629214030/https://docs.unity3d.com/Manual/class-WindZone.html.
[59] Unity Technologies. Standard Assets (for Unity 2018.4) | Asset packs | Unity Asset Store, 2022. URL https://assetstore.unity.com/packages/essentials/asset-packs/standard-assets-for-unity-2018-4-32351.
[60] ISO/TC 159/SC 4 Ergonomics of Human-System Interaction (Subcommittee). Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs): Guidance on Usability. International Organization for Standardization, 1998.
[61] Cambridge University Press. Efficiency | translate to Traditional Chinese: Cambridge Dictionary, 2022. URL https://web.archive.org/web/20220716064520/https://dictionary.cambridge.org/dictionary/english-chinese-traditional/efficiency?q=Efficiency.
[62] Ju Yeon Lee, Ju Young Kim, Seung Ju You, You Soo Kim, Hye Yeon Koo, Jeong Hyun Kim, Sohye Kim, Jung Ha Park, Jong Soo Han, Siye Kil, et al. Development and usability of a life-logging behavior monitoring application for obese patients. Journal of Obesity & Metabolic Syndrome, 28(3):194, 2019.
[63] International Organization for Standardization. Ergonomics of human-system interaction—Part 11: Usability: Definitions and concepts. International Organization for Standardization, Vernier, Geneva, Switzerland, ISO 9241-11:2018(en) edition, 2018. URL https://www.iso.org/standard/63500.html.
[64] Ann Fruhling and Sang Lee. Assessing the reliability, validity and adaptability of PSSUQ. AMCIS 2005 Proceedings, page 378, 2005.
[65] Sandra G. Hart and Lowell E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, volume 52, pages 139–183. Elsevier, 1988.
[66] Udo Schultheis, Jason Jerald, Fernando Toledo, Arun Yoganandan, and Paul Mlyniec. Comparison of a two-handed interface to a wand interface and a mouse interface for fundamental 3D tasks. In 2012 IEEE Symposium on 3D User Interfaces (3DUI), pages 117–124. IEEE, 2012.
[67] Autodesk. AutoCAD 2011 user's guide, 2016. URL https://web.archive.org/web/20161121143818/http://docs.autodesk.com/ACD/2011/ENU/pdfs/acad_aug.pdf.
[68] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3D mesh renderer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3907–3916, 2018.