References
[1] T. Brogardh. "Present and Future Robot Control Development – An Industrial Perspective". Annual Reviews in Control, 31(1), pp. 69-79. (2007).
[2] B. Braye. "Programming Manual". ABB Flexible Automation AS, Rep. 3HNT 105 086R1001. (1996).
[3] B. Solvang, G. Sziebig, and P. Korondi. "Vision-based Robot Programming". IEEE International Conference on Networking, Sensing and Control, pp. 949-954. (2008).
[4] O. A. Anfindsen, C. Skourup, T. Petterson, and J. Pretlove. "Method and a System for Programming an Industrial Robot". U.S. Patent 7 209 801 B2. (2007).
[5] G. Biggs, B. MacDonald. "A Survey of Robot Programming Systems". Proceedings of the Australasian Conference on Robotics and Automation, pp. 1-3. (2003).
[6] 黎百加. "Motion Planning Based on Robot Programming by Demonstration in Augmented Reality". Master's thesis, Department of Industrial Engineering and Engineering Management, National Tsing Hua University. (2017).
[7] P. P. Valentini. "Interactive Virtual Assembling in Augmented Reality". International Journal on Interactive Design and Manufacturing, 3(2), pp. 109-119. (2009).
[8] P. P. Valentini. "Interactive Cable Harnessing in Augmented Reality". International Journal on Interactive Design and Manufacturing, 5(1), pp. 45-53. (2011).
[9] J. W. S. Chong, S. K. Ong, A. Y. C. Nee, and K. Youcef-Toumi. "Robot Programming Using Augmented Reality: An Interactive Method for Planning Collision-free Paths". Robotics and Computer-Integrated Manufacturing, 25(3), pp. 689-701. (2009).
[10] M. F. Zaeh, W. Vogl. "Interactive Laser-projection for Programming Industrial Robots". Proceedings of the International Symposium on Mixed and Augmented Reality, pp. 125-128. (2006).
[11] G. A. Lee, G. J. Kim. "Immersive Authoring of Tangible Augmented Reality Content: A User Study". Journal of Visual Languages & Computing, vol. 20, pp. 61-79. (2009).
[12] S. K. Ong, J. W. S. Chong, and A. Y. C. Nee. "A Novel AR-based Robot Programming and Path Planning Methodology". Robotics and Computer-Integrated Manufacturing, vol. 26, pp. 240-249. (2010).
[13] H. C. Fang, S. K. Ong, and A. Y. C. Nee. "A Novel Augmented Reality-based Interface for Robot Path Planning". International Journal on Interactive Design and Manufacturing (IJIDM), vol. 8, pp. 33-42. (2014).
[14] Z. Y. Zhang. "A Flexible New Technique for Camera Calibration". IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 1330-1334. (2000).
[15] Kinect for Windows v2, Microsoft. Retrieved from: https://support.xbox.com/en-US/xbox-on-windows/accessories/kinect-for-windows-v2-setup#d38879587035411fbc6231c4982e0afa
[16] Time-of-flight camera, Wikipedia. Retrieved from: https://en.wikipedia.org/wiki/Time-of-flight_camera
[17] M. Pirovano, C. Y. Ren, and I. Frosio. "Robust Silhouette Extraction from Kinect Data". International Conference on Image Analysis and Processing (ICIAP), pp. 642-651. (2013).
[18] K. Xu, J. Zhou, and Z. Wang. "A Method of Hole-filling for the Depth Map Generated by Kinect with Moving Objects Detection". (2013).
[19] OpenGL.org, Khronos Group. Retrieved from: https://www.opengl.org/
[20] Open Source Augmented Reality SDK, Artoolkit.org. Retrieved from: https://www.artoolkit.org/
[21] W. Song, L. A. Vu, and S. W. Jung. "Hole Filling for Kinect v2 Depth Images". (2014).
[22] E. Lachat, H. Macher, and M. A. Mittet. "First Experiences with Kinect v2 Sensor for Close Range 3D Modeling". The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5/W4. (2015).
[23] Y. Y. Chuang, D. B. Goldman, B. Curless, D. H. Salesin, and R. Szeliski. "Shadow Matting and Compositing". ACM Transactions on Graphics (TOG), pp. 494-500. (2003).
[24] H. Asada, H. Izumi. "Automatic Program Generation from Teaching Data for the Hybrid Control of Robots". IEEE Transactions on Robotics and Automation, vol. 5, pp. 166-173. (1989).
[25] OpenCV – Miscellaneous Image Transformations. Retrieved from: http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html
[26] J. Yin, Y. Han, J. Li, and A. Cao. "Research on Real-Time Object Tracking by Improved CAMShift". International Symposium on Computer Network and Multimedia Technology, pp. 1-4. (2009).
[27] D. Comaniciu, P. Meer. "Mean Shift: A Robust Approach Toward Feature Space Analysis". IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24. (2002).
[28] D. Held, S. Thrun, and S. Savarese. "Learning to Track at 100 FPS with Deep Regression Networks". European Conference on Computer Vision (ECCV). (2016).
[29] Z. Y. Zhang. "A Flexible New Technique for Camera Calibration". IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 1330-1334. (2000).
[30] Richard's blog – Camera Calibration – Part 1: Camera Model. Retrieved from: http://wycwang.blogspot.tw/2012/09/camera-calibration-part-1-camera-model.html
[31] M. E. Wall. "Singular Value Decomposition and Principal Component Analysis". (2003).
[32] B. Nuernberger, E. Ofek, and H. Benko. "SnapToReality: Aligning Augmented Reality to the Real World". Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. (2016).
[33] J. Canny. "A Computational Approach to Edge Detection". IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8. (1986).
[34] N. Kiryati, Y. Eldar, and A. M. Bruckstein. "A Probabilistic Hough Transform". Pattern Recognition, vol. 24. (1991).
[35] 毛星云. "Introduction to OpenCV3 Programming" (OpenCV3編程入門). Retrieved from: https://github.com/QianMo/OpenCV3-Intro-Book-Src. (2015).
[36] S. V. Burtsev, Y. P. Kuzmin. "An Efficient Flood-Filling Algorithm". Computers & Graphics, vol. 17. (1993).
[37] A. R. Smith. "Color Gamut Transform Pairs". SIGGRAPH '78 Conference Proceedings, pp. 12-19. (1978).
[38] M. A. Fischler, R. C. Bolles. "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography". Communications of the ACM, vol. 24, pp. 381-395. (1981).
[39] E. G. Gilbert, D. W. Johnson, and S. S. Keerthi. "A Fast Procedure for Computing the Distance Between Complex Objects in Three-dimensional Space". IEEE Journal on Robotics and Automation, vol. 4. (1988).
[40] GJK – Distance & Closest Points. Retrieved from: http://www.dyn4j.org/2010/04/gjk-distance-closest-points/
[41] G. V. D. Bergen. "Collision Detection in Interactive 3D Environments". (2003).
[42] D. Meagher. "Octree Encoding: A New Technique for the Representation, Manipulation and Display of Arbitrary 3-D Objects by Computer". Rensselaer Polytechnic Institute. (1980).
[43] S. Raschdorf, M. Kolonko. "Loose Octree: A Data Structure for the Simulation of Polydisperse Particle Packings". (2009).
[44] C. Ericson. "Real-Time Collision Detection". (2004).
[45] T. Nikodym. "Ray Tracing Algorithm for Interactive Applications". (2010).
[46] LOCTAI Enterprise. Retrieved from: http://www.loctai.com.tw/
[47] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. "Introduction to Algorithms, Third Edition". (2009).
[48] J. Brooke. "SUS: A 'Quick and Dirty' Usability Scale". Usability Evaluation in Industry. (1996).
[49] Camera Calibration Toolbox for MATLAB. Retrieved from: http://www.vision.caltech.edu/bouguetj/calib_doc/download/index.html
[50] L. Yang, L. Y. Zhang. "Evaluating and Improving the Depth Accuracy of Kinect for Windows v2". IEEE Sensors Journal, vol. 15. (2015).
[51] M. Laukkanen. "Performance Evaluation of Time-of-Flight Depth Camera". Master's thesis, Aalto University. (2015).
[52] J. Wiley. "Encyclopedia of Statistical Sciences". QA276.14.E5. (1982).
[53] Bullet Collision Detection & Physics Library SDK, Bulletphysics.org. Retrieved from: https://www.bulletphysics.org/Bullet/BulletFull/
[54] Intel RealSense Depth Camera D435. Retrieved from: https://click.intel.com/intelr-realsensetm-depth-camera-d435.html
[55] Intel RealSense Depth Camera D435 Product Specifications. Retrieved from: https://ark.intel.com/products/128255/Intel-RealSense-Depth-Camera-D435
[56] C. Dickinson. "Learning Game Physics with Bullet Physics and OpenGL". (2013).
[57] Qt | Cross-platform Software Development for Embedded & Desktop. Retrieved from: https://www.qt.io/
[58] Microsoft HoloLens. Retrieved from: https://www.microsoft.com/en-us/hololens