Detailed Record

Author (Chinese): 劉宇望
Author (English): Liu, Yu-Wang
Title (Chinese): 擴增實境中示範學習之運動規劃人機界面
Title (English): A Human-Machine Interface of Motion Planning in Augmented Reality based Programming for Demonstration
Advisor (Chinese): 瞿志行
Advisor (English): Chu, Chih-Hsing
Committee Members (Chinese): 陸元平、郭嘉真
Committee Members (English): Luh, Yuan-Ping; Kuo, Chia-Chen
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Industrial Engineering and Engineering Management
Student ID: 105034470
Year of Publication (ROC era): 107 (2018)
Graduation Academic Year: 106
Language: Chinese
Number of Pages: 92
Keywords (Chinese): 擴增實境、示範學習、影像處理、路徑規劃、點膠機、誤差分析
Keywords (English): Augmented Reality, Programming by Demonstration, Image Processing, Motion Planning, 5-Axis Dispenser, Error Analysis
Augmented Reality (AR) is an interface technology that merges or composites virtual information into images of the real world. It offers richer human-machine interaction and is therefore well suited to supporting collaboration between humans and machines. Programming by Demonstration is a common motion planning method for automated equipment: an operator controls the machine through a task in the real scene using a teach pendant, and the process is exported as motion commands. Previous work has verified the feasibility of an AR human-machine interface for the motion planning of a glue dispenser, but the coordinate transformation, camera calibration, and object tracking steps were not optimized and still introduced considerable error. To address these problems, this study improves the depth data captured by a depth camera and evaluates different algorithms to raise the accuracy of object tracking; it automatically detects planes and straight lines in the point cloud, so that the user can directly select the contours of the real workpiece and quickly plan dispensing motions; in addition, it quantitatively analyzes the errors from the system's different sources to identify the significant influencing factors for future improvement. Finally, tests on a five-axis glue dispenser verify the effectiveness of the human-machine interaction functions for Programming by Demonstration, and a comparison experiment against the teach pendant demonstrates the practical value of the proposed concept.
Augmented Reality (AR) is an interface technology that integrates virtual information into real-world images. It provides better human-machine interaction and is suitable for assisting human-machine collaboration. Programming by Demonstration (PbD) is a common motion planning method for automation equipment: the operator controls the machine to perform the work in the real scene through a teach pendant, and the process is output as motion instructions. Previous studies have verified the feasibility of an AR human-machine interface for the motion planning of a three-axis dispenser. However, the coordinate transformation, camera calibration, and object tracking steps still suffer from large errors. To solve these problems, this study improves the quality of the data captured by a depth camera using different algorithms. Automatic construction of 3D lines and planes from the point cloud is implemented so that the user can directly select the contours of the real workpiece and quickly perform motion planning for a five-axis glue dispenser. In addition, we conduct a quantitative analysis of the various errors in the system and identify the significant factors for improving its performance. Finally, verification tests were carried out to demonstrate the effectiveness of the human-machine interface consisting of the above improved functions. Comparison tests were also performed to show the advantage of the AR interface over the traditional teach pendant for PbD.
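As a rough illustration of the plane extraction step mentioned above (listed in Section 2.2.2 of the table of contents as plane equation fitting with RANSAC), the following minimal sketch fits a dominant plane to a point cloud with RANSAC. It assumes the cloud is an N x 3 NumPy array; the function name, iteration count, and distance threshold are illustrative choices, not the thesis's actual implementation or parameters.

import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.005, rng=None):
    """Estimate a dominant plane (n, d) with n.p + d = 0 from an N x 3 point cloud."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (near-collinear) sample, skip it
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within the distance threshold of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Example with a synthetic, slightly noisy z = 0 plane plus random outliers.
pts = np.vstack([
    np.column_stack([np.random.rand(900, 2), 0.002 * np.random.randn(900)]),
    np.random.rand(100, 3),
])
(plane_normal, plane_d), mask = ransac_plane(pts)
print(plane_normal, plane_d, mask.sum())

In a setup like the one described, the inlier set would correspond to workpiece surface points, from which a plane equation and then the selectable contours could be derived.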
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Research Background
1.2 Literature Review
1.2.1 Literature on Programming by Demonstration
1.2.2 Applications of Augmented Reality in Industrial Manufacturing and Path Planning
1.3 Research Objectives
Chapter 2: System Construction Methodology
2.1 Augmented Reality Scene Construction
2.1.1 System Module Architecture
2.1.2 Depth Data Improvement
  - Kinect v2 3D scene point cloud generation
  - Depth data improvement with pixel filtering
  - Depth data improvement with a weighted moving average
2.1.3 Object Tracking
  - Color tracking algorithm
  - CamShift algorithm
2.1.4 Coordinate System Integration
  - Camera intrinsic parameter computation
  - System transformation matrix computation
2.2 3D Feature Extraction
2.2.1 3D Edge Line Extraction
2.2.2 3D Plane Extraction
  - Flood fill algorithm
  - Color space conversion
  - Plane equation fitting with RANSAC
2.3 Interactive Function Development
2.3.1 Interactive Functions of the Original System
  - Collision color warning
  - Semi-transparent display
  - Projection point computation
  - Machining range detection
  - Workpiece surface guidance
2.3.2 Interactive Functions Added in the New System
  - Mirror flipping
  - GJK collision detection algorithm
  - Collision detection among multiple objects in space
  - Eliminating virtual objects with ray tracing
  - Multi-threaded parallel computation
Chapter 3: System Implementation
3.1 Depth Data Improvement Results
3.1.1 Real-Time Depth Improvement Results
3.1.2 Statistics of Missing Points Before and After Improvement
3.2 Comparison of CamShift Tracking and Color Tracking
3.3 Verification Experiment
3.3.1 Experimental Design
  - System function introduction
  - Workpiece dispensing design
  - Participant background requirements
  - Detailed experimental design
  - Detailed description of dispensing process D1
  - Detailed description of dispensing process D2
3.3.2 Testing Procedure
3.3.3 Experimental Results
3.3.4 Experimental Conclusions
3.4 Implementation Workflow
Chapter 4: Error Verification
4.1 Analysis of Errors in the System
4.2 Measurement of the Offset Between the Kinect v2 Color Image and Point Cloud
4.3 Measurement of Camera Calibration Error
4.3.1 Inputs for Camera Calibration Error Measurement
4.3.2 Error Measurement of Camera Calibration Results
4.3.3 Measurement Procedure
4.4 Color Tracking Error Estimation
4.4.1 Experimental Preparation
4.4.2 Experimental Procedure
4.4.3 Experimental Results
4.4.4 Experimental Conclusions
4.5 Kinect v2 Depth Data Error Analysis
4.5.1 Experimental Background Analysis
4.5.2 Experimental Procedure
4.5.3 Experimental Results
Chapter 5: Conclusions and Future Work
5.1 Research Conclusions
  - AR interface for motion planning by demonstration
  - Dispensing experiment comparison
  - System error analysis
5.2 Future Work
References
Appendix
Appendix A: Content of Questionnaires 1 & 2 (Questionnaires 1 and 2 are identical; * marks required questions)
Appendix B: Data Input Commands for the Five-Axis Dispenser

References
[1] T. Brogardh.“Present and Future Robot Control Development – An Industrial Perspective”. Annual Reviews in Control, 31(1): 69-79. (2007).
[2] B. Braye. “Programming Manual”. ABB Flexible Automation AS, Rep. 3HNT 105 086R1001. (1996).
[3] B. Solvang, G. Sziebig, and P. Korondi. “Vision-based Robot Programming”. IEEE International Conference on Networking Sensing and Control, pp.949-954. (2008).
[4] O. A. Anfindsen, C. Skourup, T. Petterson, and J. Pretlove. “Method and a System for Programming an Industrial Robot”. U.S. Patent 7 209 801 B2. (2007).
[5] G. Biggs, B. MacDonald. “A survey of robot programming systems”. Proceedings of the Australasian conference on robotics and automation, pp. 1-3. (2003).
[6] 黎百加. "Motion Planning Based on Robot Programming by Demonstration in Augmented Reality" (in Chinese). Master's thesis, Department of Industrial Engineering and Engineering Management, National Tsing Hua University. (2017).
[7] P.P. Valentini. “Interactive virtual assembling in augmented reality”. Intern. J. Inter. Design Manuf. 3(2). pp. 109-119. (2009).
[8] P.P. Valentini. “Interactive cable harnessing in augmented reality”. Intern. J. Inter. Design Manuf. 5(1). pp. 45-53. (2011).
[9] J. W. S. Chong, S. K. Ong, A. Y. C. Nee, and K. Y. Youmi. “Robot programming using augmented reality: an interactive method for planning collision-free paths”. Robot.Comput. Integr. Manuf. 25(3). pp. 689-701. (2009).
[10] M. F. Zaeh, W. Vogl. “Interactive laser-projection for programming industrial robots”. Proceedings of the International Symposium on Mixed and Augmented Reality. pp. 125-128. (2006).
[11] G. A. Lee, G. J. Kim. “Immersive authoring of Tangible Augmented Reality content: A user study”. Journal of Visual Languages & Computing. vol. 20. pp. 61-79. (2009).
[12] S. K. Ong, J. W. S. Chong, and A. Y. C. Nee. “A novel AR-based robot programming and path planning methodology”. Robotics and Computer-Integrated Manufacturing. vol. 26. pp. 240-249. (2010).
[13] H. C. Fang, S. K. Ong, and A. Y. C. Nee. “A novel augmented reality-based interface for robot path planning”. International Journal on Interactive Design and Manufacturing (IJIDM). vol. 8. pp. 33-42. (2014).
[14] Z. Y. Zhang. “A flexible new technique for camera calibration”, IEEE Transactions on pattern analysis and machine intelligence. vol. 22. pp. 1330-1334. (2000).
[15] Kinect v2, Microsoft. Retrieved from: https://support.xbox.com/en-US/xbox-on-windows/accessories/kinect-for-windows-v2-setup#d38879587035411fbc6231c4982e0afa
[16] Time of flight, Wikipedia. Retrieved from: https://en.wikipedia.org/wiki/Time-of-flight_camera
[17] M. Pirovano, C. Y. Ren, and I. Frosio. “Robust Silhouette Extraction from Kinect Data”. International Conference on Image Analysis and Processing (ICIAP). pp.642-651. (2013).
[18] K. Xu, J. Zhou, and Z. Wang. “A method of hole-filling for the depth map generated by Kinect with moving objects detection”. (2013).
[19] OpenGL.org, Khronos Group. Retrieved from: https://www.opengl.org/
[20] Open Source Augmented Reality SDK, Artoolkit.org. Retrieved from: https://www.artoolkit.org/
[21] W. Song, L. A. Vu, and S. W. Jung. “Hole Filling for Kinect v2 Depth Images”. (2014).
[22] E. Lachat, H. Macher, and M. A. Mittet. “First Experiences with Kinect v2 Sensor for Close Range 3D Modeling”. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. vol. XL-5/W4. (2015).
[23] Y. Y. Chuang, D. B. Goldman, B. Curless, D. H. Salesin, and R. Szeliski. “Shadow matting and compositing”. ACM Transactions on Graphics (TOG). pp. 494-500. (2003).
[24] H. Asada, H. Izumi. “Automatic program generation from teaching data for the hybrid control of robots”. IEEE Transactions on Robotics and Automation. vol. 5, pp. 166-173. (1989).
[25] OpenCV - Miscellaneous Image Transformations. Retrieved from: http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html
[26] J. Yin, Y. Han, J. Li, and A. Cao. “Research on Real-Time Object Tracking by Improved CAMShift”. International Symposium on Computer Network and Multimedia Technology. pp.1-4. (2009).
[27] D. Comaniciu, P. Meer. “Mean Shift: A Robust Approach Toward Feature Space Analysis”. IEEE Transactions on Pattern Analysis and Machine Intelligence. vol.24. (2002).
[28] D. Held, S. Thrun, and S. Savarese. “Learning to Track at 100 FPS with Deep Regression Networks”. ECCV. (2016).
[29] Z.Y. Zhang. “A flexible new technique for camera calibration”, IEEE Transactions on pattern analysis and machine intelligence. vol. 22. pp. 1330-1334. (2000).
[30] Richard's blog - camera calibration - part 1 camera model. Retrieved from: http://wycwang.blogspot.tw/2012/09/camera-calibration-part-1-camera-model.html
[31] M. E. Wall. “Singular value decomposition and principal component analysis”. (2003).
[32] B. Nuernberger, E. Ofek, and H. Benko. “SnapToReality: Aligning Augmented Reality to the Real World”, Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. (2016).
[33] J. Canny. “A Computational Approach to Edge Detection”. IEEE Transactions on Pattern Analysis and Machine Intelligence. vol.8. (1986).
[34] N. Kiryati, Y. Eldar, and A. M. Bruckstein. “A Probabilistic Hough Transform”. Pattern Recognition. vol.24. (1991).
[35] 毛星云. “Introduction to OpenCV3 Programming” (in Chinese). https://github.com/QianMo/OpenCV3-Intro-Book-Src. (2015).
[36] S. V. Burtsev, Y. P. Kuzmin. “An Efficient Flood-Filling Algorithm”. Computers & Graphics. vol.17. (1993).
[37] A. R. Smith. “Color Gamut Transform Pairs”. SIGGRAPH 78 Conference Proceedings. pp. 12-19. (1978).
[38] M. A. Fischler, R. C. Bolles. “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography”. Comm. of the ACM. vol 24. pp. 381-395. (1981).
[39] E. G. Gilbert, D. W. Johnson, and S. S. Keerthi. “A fast procedure for computing the distance between complex objects in three-dimensional space”. IEEE Journal on Robotics and Automation. vol. 4. (1988).
[40] GJK - Distance & Closest Points. Retrieved from: http://www.dyn4j.org/2010/04/gjk-distance-closest-points/
[41] G. V. D. Bergen. “Collision Detection in Interactive 3D Environments”. (2003).
[42] D. Meagher. “Octree Encoding: A New Technique for the Representation, Manipulation and Display of Arbitrary 3-D Objects by Computer”. Rensselaer Polytechnic Institute. (1980).
[43] S. Raschdorf, M. Kolonko. “Loose Octree: a data structure for the simulation of polydisperse particle packings”. (2009).
[44] C. Ericson. “Real-Time Collision Detection”. (2004).
[45] T. Nikodym. “Ray Tracing Algorithm For Interactive Applications”. (2010).
[46] LOCTAI Enterprise. Retrieved from: http://www.loctai.com.tw/
[47] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. “Introduction to Algorithms, Third Edition”. (2009).
[48] J. Brooke, “SUS: a "quick and dirty" usability scale”. Usability Evaluation in Industry. (1996).
[49] MATLAB Calibration Toolbox. Retrieved from: http://www.vision.caltech.edu/bouguetj/calib_doc/download/index.html
[50] L. Yang, L. Y. Zhang. “Evaluating and Improving the Depth Accuracy of Kinect for Windows v2”. IEEE Sensors Journal. vol.15. (2015).
[51] M. Laukkanen. “Performance Evaluation of Time-of-Flight Depth Camera”. Thesis for the degree of Master of Science in Technology. Aalto University. (2015).
[52] J. Wiley. “Encyclopedia of Statistical Sciences”. QA276.14.E5. (1982).
[53] Bullet Collision Detection & Physics Library SDK, Bulletphysics.org. Retrieved from: https://www.bulletphysics.org/Bullet/BulletFull/
[54] Intel RealSense Depth Camera D435. Retrieved from: https://click.intel.com/intelr-realsensetm-depth-camera-d435.html
[55] Intel RealSense Depth Camera D435 Product Specifications. Retrieved from: https://ark.intel.com/products/128255/Intel-RealSense-Depth-Camera-D435
[56] C. Dickinson. “Learning Game Physics with Bullet Physics and OpenGL”. (2013).
[57] Qt | Cross-platform software development for embedded&desktop. Retrieved from: https://www.qt.io/
[58] Microsoft Hololens. Retrieved from: https://www.microsoft.com/en-us/hololens
 
 
 
 