Author (Chinese): 潘威丞
Author (English): Pan, Wei-Cheng
Title (Chinese): 具定位與解碼功能之影像系統與其於高精度影像量測之應用
Title (English): Image Processing System with Positioning and Decoding Function and Its Application in High Precision Image Measurement
Advisor (Chinese): 蔡宏營
Advisor (English): Tsai, Hung-Yin
Committee Members (Chinese): 宋震國、黃衍任
Committee Members (English): Sung, Cheng-Kuo; Hwang, Yean-Ren
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 104033586
Year of Publication (ROC calendar): 106 (2017)
Academic Year of Graduation: 105
Language: Chinese
Number of Pages: 99
Keywords (Chinese): 影像處理、定位、解碼、機器視覺、影像尺
Keywords (English): image processing, positioning, decoding, machine vision, image scale
Using image processing techniques and optical components, this thesis builds an image capture and analysis system with positioning and decoding functions. The system determines the position of a moving target in a low-cost, non-contact way, which supports many subsequent applications such as X-Y table positioning, measuring an object's length and width, dimensional inspection, and updating the dimensions and pose information of CAD files. The main result of this research is this image-based positioning and decoding system. Because no suitable X-Y table was available to verify the system's positioning accuracy, a high-precision image measurement method was additionally designed on top of the system, incorporating computer vision and machine learning techniques, and line-segment length measurement experiments were carried out. For a 4 mm grade-1 block gauge, 75 sets of measurements gave an average error of 11 µm and a maximum error of 37 µm.
When measuring a line segment, obtaining a sufficiently high measurement resolution makes the field of view of the main camera so narrow that the complete object cannot be seen in a single image, so at least two images are needed to complete a measurement. With the positioning and decoding system built in this research, no positioning feedback from an X-Y table is required: the positioning symbols and coding patterns in the images serve as the positioning reference when matching coordinates across different images. The image coordinates of the line segment's two endpoints are obtained, from which the segment's length, its angle to the reference axis, and related information are computed. Because the camera has been calibrated, the segment's real-world geometry can then be derived directly.
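A minimal sketch of this coordinate-matching step is shown below in Python/NumPy. It assumes, purely for illustration, that the decoded identity of each "L" symbol fixes its position on the image scale at a known pitch and that camera calibration provides a pixel-to-millimeter factor; the pitch, scale factor, coordinates, and the helper name endpoint_in_world are hypothetical, not values or code from the thesis.

```python
import numpy as np

# Assumptions (not from the thesis): the "L" symbols on the image scale are
# spaced at a known pitch along the x-axis, decoding yields each symbol's
# index on that scale, and camera calibration gives a pixel-to-millimeter
# factor. All numeric values here are made up.
L_PITCH_MM = 10.0      # hypothetical spacing between adjacent "L" symbols
MM_PER_PIXEL = 0.002   # hypothetical scale factor from camera calibration

def endpoint_in_world(endpoint_px, l_symbol_px, l_symbol_index):
    """Map an endpoint from image coordinates into a common world frame
    anchored at the "L" symbol visible in the same image."""
    offset_mm = (np.asarray(endpoint_px) - np.asarray(l_symbol_px)) * MM_PER_PIXEL
    origin_mm = np.array([l_symbol_index * L_PITCH_MM, 0.0])
    return origin_mm + offset_mm

# Left endpoint seen in image 1, right endpoint seen in image 2.
p_left = endpoint_in_world((1520.3, 804.1), (1210.7, 950.2), l_symbol_index=3)
p_right = endpoint_in_world((310.9, 799.6), (640.4, 948.8), l_symbol_index=4)

d = p_right - p_left
length_mm = np.linalg.norm(d)                    # measured segment length
angle_deg = np.degrees(np.arctan2(d[1], d[0]))   # angle to the x reference axis
print(f"length = {length_mm:.3f} mm, angle = {angle_deg:.2f} deg")
```

Because each endpoint is expressed relative to the "L" symbol seen in its own image, the two images never need to share a field of view.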
The image positioning and decoding system involves four steps: (a) camera and light-source setup and calibration; (b) the image scale: positioning symbols and coding patterns laser-engraved on a stainless-steel plate; (c) positioning symbol detection: histogram distributions are used to locate the "L" positioning symbols in the image; and (d) decoding: the coding pattern near each positioning symbol is decoded to identify that symbol.
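The abstract does not spell out the histogram criterion of step (c), so the following OpenCV/NumPy sketch only illustrates the general idea under that caveat: binarize the engraved marks and take the peaks of the row and column projection histograms as a candidate "L"-symbol position. The file name and thresholding choice are assumptions.

```python
import cv2
import numpy as np

def find_l_symbol(gray):
    """Propose a candidate "L"-symbol position from projection histograms."""
    # Dark engraved marks become white foreground after an inverted Otsu threshold.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    col_hist = binary.sum(axis=0)   # projection onto the x axis
    row_hist = binary.sum(axis=1)   # projection onto the y axis
    return int(np.argmax(col_hist)), int(np.argmax(row_hist))

gray = cv2.imread("scale_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if gray is not None:
    print("candidate L-symbol position (x, y):", find_l_symbol(gray))
```

Step (d) would then sample the coding pattern in a fixed neighborhood of this position to recover the symbol's identity.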
The high-precision image measurement technique builds on the positioning and decoding system and adds two steps: (a) object edge detection: the edge lines of the object under test are extracted with image processing and machine vision techniques; and (b) endpoint detection: the intersections of the edge lines are recorded as endpoints.
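Since the thesis outline applies the Hough line transform to edge-line and "L" detection (Sections 4.3.2 and 4.4.2), a minimal sketch of steps (a) and (b) along those lines is given below in Python/OpenCV. The Canny and Hough thresholds and the file name are illustrative assumptions, and the HOG feature plus SVM classification stage of Sections 3.2.1-3.2.2 is not reproduced here.

```python
import cv2
import numpy as np

def line_intersection(l1, l2):
    """Intersection of two lines given in Hough (rho, theta) form.
    Assumes the two lines are not parallel."""
    (r1, t1), (r2, t2) = l1, l2
    a = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(a, np.array([r1, r2]))  # (x, y) in pixels

img = cv2.imread("gauge_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if img is not None:
    edges = cv2.Canny(img, 50, 150)                          # edge detection
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=200)
    if lines is not None and len(lines) >= 2:
        # Take the two strongest edge lines and record their intersection.
        endpoint = line_intersection(lines[0][0], lines[1][0])
        print("candidate endpoint (px):", endpoint)
```

The resulting endpoint coordinates would then feed the coordinate-matching computation sketched earlier.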
Abstract I
Chinese Abstract III
Table of Contents VIII
List of Figures XII
List of Tables XVIII
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation 3
Chapter 2 Literature Review 4
2.1 Autofocus 9
2.2 Image Recognition and Positioning 11
2.2.1 Alignment Patterns (Fiducial Marks) 11
2.2.2 Object Recognition 13
2.3 Contour Detection 14
2.3.1 Edge Detection 14
2.3.2 Line Segment Detection 18
2.4 Feature Points 19
2.4.1 Corner Detection 19
2.4.2 Sub-pixel Corner Detection 21
2.4.3 FAST Corner Detection 22
2.4.4 Local Invariant Feature Descriptors 23
Chapter 3 Research Methods 25
3.1 Hardware 29
3.1.1 Image Acquisition Unit 29
3.1.2 Image Scale 31
3.1.3 Calibration Plate 33
3.1.4 Block Gauge 35
3.2 Object Edge Detection 37
3.2.1 HOG Features 37
3.2.2 SVM Training 38
3.2.3 Image Processing Workflow 39
3.3 Endpoint Detection 41
3.4 Positioning Symbol Detection 41
3.4.1 Image Processing Workflow 44
3.4.2 Detection and Recognition of the "L" Symbol 46
3.5 Decoding 51
Chapter 4 Experimental Results and Discussion 53
4.1 Block Gauge Edge-Length Measurement Procedure 53
4.1.1 Image Coordinate Extraction of the Block Gauge's Left Endpoint 54
4.1.2 Coordinate Extraction of the Image Scale "L" Symbol below the Block Gauge's Left End 56
4.1.3 Image Coordinate Extraction of the Block Gauge's Right Endpoint 60
4.1.4 Coordinate Extraction of the Image Scale "L" Symbol below the Block Gauge's Right End 62
4.1.5 Block Gauge Length Calculation 67
4.2 Measurement Data 71
4.3 Discussion of Measurement Errors 74
4.3.1 Movement of the X-Y Table 74
4.3.2 Hough Line Transform Applied to Edge-Line Detection 76
4.3.3 Deviation of the "L" Symbol's Center Position 79
4.4 Discussion of Repeatability 80
4.4.1 Positioning Results for Two 4-Second Test Videos 80
4.4.2 Best Hough Line Transform Applied to "L" Detection 84
4.5 Material Selection for the Image Scale 92
Chapter 5 Conclusions and Future Work 94
5.1 Contributions of This Study 95
5.2 Future Work 96
References 97