Detailed Record

Author (Chinese): 林柔
Author (English): Lin, Jou
Title (Chinese): 基於最大梯度特徵之局部二元模式與邊緣映射之人臉辨識
Title (English): Local Binary Pattern Edge-Mapped Descriptor Using MGM Interest Points for Face Recognition
Advisor (Chinese): 邱瀞德
Advisor (English): Chiu, Ching Te
Committee Members (Chinese): 陳煥宗
楊家輝
范倫達
Committee Members (English): Chen, Hwann Tzong
Yang, Jar Ferr
Van, Lan Da
Degree: Master
University: National Tsing Hua University
Department: Institute of Communications Engineering
Student ID: 103064534
Year of Publication (ROC calendar): 105 (2016)
Graduation Academic Year: 104
Language: English
Number of Pages: 53
Keywords (Chinese): 臉部辨識; 最大梯度值; 局部二元模式; 二元特徵; 局部特徵
Keywords (English): Face Recognition; Maxima of Gradient Magnitude (MGM); Local Binary Pattern (LBP); Binary Feature; Local Feature
Abstract (Chinese):
Face recognition has drawn wide attention in both academia and industry in recent years. Although many algorithms have been developed, real-world applications still face numerous challenges. Compared with holistic methods, local methods such as local binary pattern (LBP) [4], [6], local derivative pattern (LDP) [10], and scale invariant feature transform (SIFT) [14] describe image details more thoroughly and therefore achieve higher recognition rates; however, their higher computational complexity limits their practical value in applications such as mobile devices. In addition, SIFT-based algorithms have difficulty maintaining good recognition rates under illumination variation. We therefore propose an LBP Edge-mapped descriptor based on Maxima of Gradient Magnitude (MGM) [20] interest points. It is a robust, simple descriptor that requires little computation time. The LBP Edge-mapped descriptor records the intensity and edge information around MGM points to form a binary code, so it describes facial contours completely while keeping computational complexity low. Moreover, because the descriptor is binary, a simple method can be used to measure image similarity and find suitable matches. Under illumination variation, experimental results on FERET fc [22] show that our method achieves about a 16.5% higher recognition rate than SIFT while reducing execution time by a factor of about 9.06; on the Extended Yale Face Database B [32], our method clearly outperforms SIFT-based methods and reduces computation time by about 70.9% compared with SIFT. Under expression variation, results on FERET fb [22] show that our method maintains an acceptable recognition rate while requiring 7.50 times less computation time than SIFT. Finally, under unconstrained real-world conditions evaluated on the UFI Database [30], our method achieves a 0.82% higher recognition rate than local derivative pattern histogram sequences (LDPHS).
Abstract (English):
Face recognition has been a popular topic in academia and industry in recent years. Although numerous approaches have been developed, several challenges remain in real-world circumstances. Local methods such as local binary pattern (LBP) [4], [6], local derivative pattern (LDP) [10], and scale invariant feature transform (SIFT) [14] perform better than holistic methods; however, their high complexity limits applications such as mobile devices. In addition, SIFT-based schemes are sensitive to illumination variation. We therefore propose an LBP Edge-mapped descriptor that uses Maxima of Gradient Magnitude (MGM) [20] interest points. It is a robust, simple, and fast descriptor: a string of binary codes that records the illumination and edge information surrounding MGM points. It describes facial contours completely while keeping computational complexity low. Because the descriptor is binary, a simple matching method can be adopted for face recognition. Under variable lighting, experimental results show that our method achieves a 16.5% higher recognition rate than SIFT and runs 9.06 times faster on FERET fc [22]; it also outperforms SIFT-based approaches and saves about 70.9% of the execution time of SIFT on the Extended Yale Face Database B [32]. Under expression variation, our method maintains an acceptable recognition rate and requires 7.50 times less computation time than SIFT on FERET fb [22]. Furthermore, under uncontrolled conditions, our method achieves a 0.82% higher recognition rate than local derivative pattern histogram sequences (LDPHS) [10] on the Unconstrained Facial Images (UFI) Database [30].
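For readers unfamiliar with the building blocks mentioned above, the following Python sketch illustrates the classical 3x3 LBP operator of Ojala et al. [4] and a Hamming-distance comparison of two binary descriptors, one common way to match binary codes. It is only a minimal illustration under those assumptions: the function names are illustrative, and the thesis' LBP Edge-mapped descriptor additionally encodes edge information around MGM [20] interest points and uses its own matching method (Chapter 3), which is not reproduced here.

    # Minimal illustration (not the thesis implementation): the classical 3x3 LBP
    # operator of Ojala et al. [4] and Hamming-distance matching of binary codes.
    import numpy as np

    def lbp_code(image, r, c):
        """8-neighbour LBP: threshold each neighbour against the centre pixel."""
        center = image[r, c]
        # Clockwise neighbour offsets starting from the top-left pixel.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = 0
        for bit, (dr, dc) in enumerate(offsets):
            if image[r + dr, c + dc] >= center:
                code |= 1 << bit
        return code  # 8-bit pattern in [0, 255]

    def hamming_distance(desc_a, desc_b):
        """Number of differing bits between two binary descriptors (0/1 arrays)."""
        return int(np.count_nonzero(desc_a != desc_b))

    if __name__ == "__main__":
        img = np.array([[10, 20, 30],
                        [40, 25, 15],
                        [ 5, 50, 60]], dtype=np.uint8)
        print(lbp_code(img, 1, 1))        # LBP code of the centre pixel (here 180)
        a = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=np.uint8)
        b = np.array([0, 1, 0, 0, 1, 1, 0, 1], dtype=np.uint8)
        print(hamming_distance(a, b))     # two bits differ -> 2

Because such descriptors reduce to bit strings, a smaller Hamming distance indicates a better match, which is why binary descriptors admit the kind of simple matching scheme the abstract refers to.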
1 Introduction 1
1.1 Related Works 2
1.2 Motivation and Problem Description 3
1.3 Goal and Contribution 4
1.4 Thesis Organization 6
2 Maxima of Gradient Magnitude 7
2.1 Gradient and Edge Detection 9
2.2 Maxima of Gradient Magnitude (MGM) 11
3 Local Binary Pattern (LBP) Edge-mapped Descriptor for Face Recognition 14
3.1 Contrast Enhancement 15
3.2 LBP Edge-mapped Descriptor 20
3.3 Matching Method 23
4 Experimental Results 27
4.1 FERET Database 27
4.2 The Extended Yale Face Database B 35
4.3 Unconstrained Facial Images (UFI) Database 40
4.4 Helen Facial Feature Database 44
5 Conclusion and Future Work 47
Bibliography 49
[1] J. Lu, V. E. Liong, X. Zhou, and J. Zhou, “Learning Compact Binary Face Descriptor for Face Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 37, no. 10, pp. 2041-2056, Oct. 2015.
[2] M. Turk and A. Pentland, “Eigenfaces for Recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[3] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, Jul. 1997.
[4] T. Ojala, M. Pietikäinen, and D. Harwood, “A Comparative Study of Texture Measures with Classification Based on Feature Distributions,” Pattern Recognition, vol. 29, no. 1, pp. 51-59, 1996.
[5] T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, Jul. 2002.
[6] T. Ahonen, A. Hadid, and M. Pietikäinen, “Face Description with Local Binary Patterns: Application to Face Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037-2041, Dec. 2006.
[7] C. Liu and H. Wechsler, “Gabor Feature Based Classification Using the Enhanced Fisher Linear Discriminant Model for Face Recognition,” IEEE Trans. Image Process., vol. 11, no. 4, pp. 467-476, Apr. 2002.
[8] S. Zhao, Y. Gao, and B. Zhang, “SOBEL-LBP,” IEEE International Conference on Image Processing (ICIP), pp. 2144-2147, Oct. 2008.
[9] X. Tan and B. Triggs, “Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions,” IEEE Trans. Image Process., vol. 19, no. 6, pp. 1635-1650, Jun. 2010.
[10] B. Zhang, Y. Gao, S. Zhao, and J. Liu, “Local Derivative Pattern Versus Local Binary Pattern: Face Recognition with High-Order Local Pattern Descriptor,” IEEE Trans. Image Process., vol. 19, no. 2, pp. 533-544, Feb. 2010.
[11] S. Xie, S. Shan, X. Chen, and J. Chen, “Fusing Local Patterns of Gabor Magnitude and Phase for Face Recognition,” IEEE Trans. Image Process., vol. 19, no. 5, pp. 1349-1361, May 2010.
[12] N. S. Vu, H. M. Dee, and A. Caplier, “Face Recognition Using The POEM Descriptor,” Pattern Recognition, pp. 2478-2488, 2012.
[13] S. U. Hussain, T. Napoléon, and F. Jurie, “Face Recognition Using Local Quantized Patterns,” Proc. Brit. Mach. Vis. Conf., pp. 1-12, 2012.
[14] D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, Jan. 2004.
[15] J. Luo, Y. Ma, E. Takikawa, S. Lao, M. Kawade, and B. L. Lu, “Person-specific SIFT Features for Face Recognition,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 593-596, 2007.
[16] J. Krizaj, V. Struc, and N. Pavesić, “Adaptation of SIFT Features for Robust Face Recognition,” International Conference on Image Analysis and Recognition (ICIAR), pp. 394-404, 2010.
[17] P. Kamencay, M. Breznan, D. Jelsovka, and M. Zachariasova, “Improved Face Recognition Method based on Segmentation Algorithm Using SIFT-PCA,” Telecommunications and Signal Processing (TSP), pp. 758-762, 2012.
[18] Y. K. Shen and C. T. Chiu, “Local Binary Pattern Orientation Based Face Recognition,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1091-1095, Apr. 2015.
[19] S. Zhang, Q. Tian, K. Lu, Q. Huang, and W. Gao, “Edge-SIFT: Discriminative Binary Descriptor for Scalable Partial-Duplicate Mobile Search,” IEEE Trans. Image Process., vol. 22, no. 7, pp. 2899-2902, Jul. 2013.
[20] M. Faraji, J. Shanbehzadeh, K. Nasrollahi, and T. B. Moeslund, “Extremal Regions Detection Guided by Maxima of Gradient Magnitude,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 5401-5415, Dec. 2015.
[21] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss, “The FERET Database and Evaluation Procedure for Face-recognition Algorithms,” Image and Vision Computing, vol. 16, no. 5, pp. 295-306, 1998.
[22] P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, “The FERET Evaluation Methodology for Face-Recognition Algorithms,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 10, pp. 1090-1104, Oct. 2000.
[23] J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, Nov. 1986.
[24] T. B. Moeslund, Introduction to Video and Image Processing: Building Real Systems and Applications. London: Springer London, 2012.
[25] T. W. Ridler and S. Calvard, “Picture Thresholding Using an Iterative Selection Method,” IEEE Trans. Systems, Man, and Cybernetics, vol. SMC-8, no. 8, pp. 630-632, Aug. 1978.
[26] P. Viola and M. Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features,” Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511-518, 2001.
[27] R. C. Gonzalez and R. E. Woods, Digital Image Processing: An Adapted Version, 3rd ed. Taiwan: Pearson Education Taiwan, 2008.
[28] K. Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” Graphics Gems IV, Academic Press, pp. 474-485, 1994.
[29] L. D. Huang, W. Zhao, J. Wang, and Z. B. Sun, “Combination of Contrast Limited Adaptive Histogram Equalisation and Discrete Wavelet Transform for Image Enhancement,” IET Image Processing, vol. 9, pp. 908-915, Sep. 2015.
[30] L. Lenc and P. Král, “Unconstrained Facial Images: Database for Face Recognition under Real-world Conditions,” Mexican International Conference on Artificial Intelligence (MICAI), pp. 349-361, Oct. 2015.
[31] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, “From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643-660, Jun. 2001.
[32] K. C. Lee, J. Ho, and D. J. Kriegman, “Acquiring Linear Subspaces for Face Recognition under Variable Lighting,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684-698, May 2005.
[33] L. Lenc and P. Král, “Automatically Detected Feature Positions for LBP Based Face Recognition,” Artificial Intelligence Applications and Innovations, pp. 246-255, 2014.
[34] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang, “Interactive Facial Feature Localization,” 12th European Conference on Computer Vision, pp. 679-692, 2012.
 
 
 
 