
Detailed Record

Author (Chinese): 黃馨慧
Author (English): Huang, Hsin-Hui
Title (Chinese): 以隨機森林架構結合臉部動作單元辨識之臉部表情分類技術
Title (English): Using Random Forest Combined with Action Unit Recognition for Facial Expression Classification
Advisors (Chinese): 黃仲陵; 鐘太郎
Advisors (English): Huang, Chung-Lin; Jong, Tai-Lang
Committee Members: 柳金章; 莊仁輝; 黃仲陵; 鐘太郎
Degree: Master's
University: National Tsing Hua University (國立清華大學)
Department: Department of Electrical Engineering
Student ID: 100061701
Publication Year (ROC): 102 (2013)
Graduation Academic Year: 102
Language: English
Number of Pages: 39
Keywords (Chinese): 表情辨識; 臉部動作元件; 隨機森林
Keywords (English): facial expression recognition; Action Unit; Random Forest
Statistics:
  • Recommendations: 0
  • Views: 466
  • Rating: *****
  • Downloads: 32
  • Bookmarks: 0
In the field of image-processing research, facial expression analysis and recognition has long been a challenging topic. Expressions play an important role in interpersonal communication: they are used chiefly to convey emotional information, and are one of the most important means of communication apart from spoken language.
The difficulty of expression recognition is that people differ in both facial appearance and personality, so the way they perform an expression differs as well; these differences produce subtle facial variations that make analysis more complex. An expression is essentially a continuous change of the face, so earlier work divides it into four phases for analysis: (1) Neutral, (2) Onset, (3) Apex, and (4) Offset. The phases occur in the order Neutral → Onset → Apex → Offset → Neutral.
In this study we work on static images, recognizing the face in a single image and analyzing only the Apex phase among the four expression states. Using Gabor responses as training features, a first stage recognizes several fine-grained facial movements, called Action Units (AUs). These AUs then serve as the features for expression recognition, since different expressions are distinctly expressed by different AU combinations. A Random Forest is trained on these features, and the system finally recognizes six typical expressions: happiness, anger, sadness, surprise, disgust, and fear.
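The Gabor feature extraction mentioned above can be sketched as follows. This is a minimal illustration only: the kernel size, orientations, and parameter values (`sigma`, `lambd`, `gamma`, `psi`) are assumed defaults, not the settings actually used in the thesis.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel (parameter values are illustrative)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd + psi)
    return envelope * carrier

# A small bank of 4 orientations (the record does not state the actual bank size).
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]

def gabor_features(patch):
    """Response magnitude of each filter on a kernel-sized face patch."""
    return np.array([abs(float((patch * k).sum())) for k in bank])
```

In practice the filters would be convolved over each facial region of interest; here a single kernel-sized patch keeps the sketch self-contained.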
Facial expression recognition has long been one of the most challenging research topics in computer vision. Second only to verbal language, facial expression is among the most important channels of non-verbal communication. The technical difficulty lies in the fact that every individual has a unique way of expressing emotion on the face; even for the same person, the facial expression of the same emotion varies slightly between occurrences. A facial expression results from a continuous series of physical changes of the facial muscles, usually divided into four phases: Neutral, Onset, Apex, and Offset, returning to Neutral.
In this thesis, we apply image-processing techniques to recognize the Apex phase of facial expressions in static images. We use a Gabor filter bank to extract facial features and then identify several minute facial movements, known as Action Units (AUs), whose different combinations represent different expressions. We train a Random Forest on these AU combinations to classify six typical facial expressions: anger, disgust, fear, happiness, sadness, and surprise.
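The second stage described above — mapping AU combinations to expression labels with a Random Forest — can be sketched as a toy example. The AU subset, the AU-to-emotion pairings, and the forest size below are simplified assumptions for illustration, not the combinations or configuration actually learned in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy AU-presence vectors; columns stand for AU4, AU6, AU12, AU15, AU25.
X = np.array([
    [0, 1, 1, 0, 0],   # AU6 + AU12         -> happiness
    [0, 1, 1, 0, 1],   # AU6 + AU12 + AU25  -> happiness
    [1, 0, 0, 1, 0],   # AU4 + AU15         -> sadness
    [1, 0, 0, 1, 1],   # AU4 + AU15 + AU25  -> sadness
    [1, 0, 0, 0, 0],   # AU4 alone          -> anger (toy rule)
    [0, 0, 0, 0, 1],   # AU25 alone         -> surprise (toy rule)
])
y = ["happiness", "happiness", "sadness", "sadness", "anger", "surprise"]

# Each tree in the forest sees a bootstrap sample and a random feature
# subset; the forest's majority vote gives the predicted expression.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0, 1, 1, 0, 1]])[0])  # a happiness-like AU combination
```

In the thesis the AU vector comes from the first-stage Gabor-based AU recognizer rather than being hand-specified as here.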
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Purpose 1
1.3 System Overview 2
Chapter 2 Related Works 5
2.1 Expression Analysis 6
2.2 Facial Action Coding System 7
2.3 Face Detection 8
2.4 Feature Extraction 10
2.5 Classifier 11
Chapter 3 AUs Recognition 13
3.1 Image Preprocessing 13
3.2 Feature Extraction 16
3.3 Recognition 18
Chapter 4 Expression Classification 21
4.1 Random Forest 21
4.2 Random Forest Training 22
4.3 AU Features for Facial Expression Detector 24
4.4 Facial Expression Detector Training 24
4.5 Facial Expression Detection 26
Chapter 5 Experimental Results 30
5.1 Database 30
5.2 Experiments 31
Chapter 6 Conclusion & Future Prospect 34
References 35
[1] P. Ekman and W.V. Friesen, “Constants across cultures in the face and emotion,” Journal of Personality and Social Psychology, vol. 17, pp. 124-129, 1971.
[2] P. Ekman and W.V. Friesen, “The Facial Action Coding System: A Technique for The Measurement of Facial Movement,” San Francisco: Consulting Psychologists Press, 1978.
[3] R. W. Picard, “Affective Computing,” The MIT Press, 1997.
[4] T. Kanade, J. Cohn and Y. Tian, “Comprehensive database for facial expression analysis,” IEEE Proceedings of the Fourth International Conference on Automatic Face and Gesture Recognition, Grenoble, France, 2000.
[5] M. F. Valstar and M. Pantic, “Biologically vs. Logic Inspired Encoding of Facial Actions and Emotions in Video,” IEEE Inter. Conference on Multimedia and Expo, pp. 325—328, 2006.
[6] M. H. Yang, D.J. Kriegman and N. Ahuja, “Detecting faces in images: A survey,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34-58, Jan. 2002.
[7] T. F. Cootes, G. J. Edwards and C. J. Taylor, “Active appearance models,” IEEE Trans. on PAMI, vol. 23, no. 6, pp. 681-685, 2001.
[8] P. Wanga, F. Barrettb, E. Martin, M. Milonova, R. E. Gur, R. C. Gur, C. Kohler and R. Verma, “Automated video-based facial expression analysis of neuropsychiatric disorders,” Neuroscience Methods, vol. 168, pp. 224-238, Feb. 2008.
[9] M. F. Valstar, B. Jiang, M. Mehu, M. Pantic and K. Scherer, “The first facial expression recognition and analysis challenge,” Proc. IEEE Int'l Conf. Automatic Face and Gesture Recognition, 2011.
[10] M. Pantic and L. J.M. Rothkrantz, “Automatic analysis of facial expressions: The state of the art,” IEEE Trans. on PAMI, vol. 22, no. 12, pp. 1424-1445, 2000.
[11] N. Esau, E. Wetzel, L. Kleinjohann and B. Kleinjohann, “Real-time facial expression recognition using a fuzzy emotion model,” 2007 IEEE International Fuzzy Systems Conference, pp. 1-6, July 2007.
[12] N. Cristianini and J. Shawe-Taylor, “An Introduction to Support Vector Machines,” Cambridge University Press, 2000.
[13] Y. L. Tian, T. Kanade and J. F. Cohn, “Recognizing Action Units for Facial Expression Analysis,” IEEE Trans on PAMI, vol.23, no. 2, pp. 97–115, 2001.
[14] M. S. Bartlett, J. C. Hager, P. Ekman and T. J. Sejnowski, “Measuring Facial Expressions by Computer Image Analysis,” Psychophysiology, vol. 36, pp. 253-263, 1999.
[15] B. Fasel and J. Luettin, “Recognition of asymmetric facial action unit activities and intensities,” Proc. Int'l Conf. Pattern Recognition, pp. 1100–1103, 2000.
[16] J. Lien, T. Kanade, J. Cohn and C. Li, “Subtly different facial expression recognition and emotion expression intensity estimation,” Proc. IEEE CVPR, Santa Barbara, CA, pp. 853-859, 1998.
[17] M. J. Lyons, J. Budynek and S. Akamatsu, “Automatic Classification of Single Facial Images,” IEEE Trans. on PAMI, vol. 21, no. 12, pp. 1357–1362, 1999.
[18] G. Littlewort, M. Bartlett, I. Fasel, J. Susskind and J. Movellan, “Dynamics of facial expression extracted automatically from video,” Image and Vision Computing, vol. 24, no. 6, pp. 615–625, 2006.
[19] Z. Zhang, M. Lyons, M. Schuster and S. Akamatsu, “Comparison between Geometry-Based and Gabor Wavelets-Based Facial Expression Recognition Using Multi-Layer Perceptron,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 454-459, 1998.
[20] J. G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 2, pp. 1160-1169, 1985.
[21] A. C. Bovik, M. Clark and W. S. Geisler, “Multichannel texture analysis using localized spatial filters,” IEEE Trans. Pattern Anal. Mach. Intell, vol. 12, pp. 55–73, 1990.
[22] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar and I. Matthews, “The extended cohn-kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression,” Computer Vision and Pattern Recognition Workshop on Human-Communicative Behavior, 2010.
[23] W. Liu and Z. Wang, “Facial Expression Recognition Based on Fusion of Multiple Gabor Features,” Proc. 18th Int'l Conf. on Pattern Recognition, 2006.
[24] Y. Zhan, J. F. Ye, D. Niu and P. Cao, “Facial Expression Recognition Based on Gabor Wavelet Transformation and Elastic Templates Matching,” Proc. 3rd Int'l Conf. on Image and Graphics, 2004.
[25] M. Lades, J. Vorbrüggen, J. Buhmann, J. Lange, W. Konen, C. von der Malsburg and R. Würtz, “Distortion Invariant Object Recognition in the Dynamic Link Architecture,” IEEE Trans. on Computers, vol. 42, no. 3, pp. 300-311, 1993.
[26] L. Wiskott, J. Fellous, N. Kruger and C. von der Malsburg, “Face recognition by elastic bunch graph matching,” IEEE Trans. PAMI, vol. 19, no. 7, pp. 775-779, 1997.
[27] M. J. Lyons, S. Akamatsu, M. Kamachi and J. Gyoba, “Coding Facial Expressions with Gabor Wavelets,” Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 200-205, 1998.
[28] C. C. Chang and C. J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27, 2011.
[29] C. W. Hsu and C. J. Lin, “A Comparison of Methods for Multiclass Support Vector Machines,” IEEE Trans. On Neural Networks, vol. 13, pp. 415-425, 2002.
[30] C. Cortes and V. Vapnik, “Support-Vector Networks,” Machine Learning, vol. 20, 1995.
[31] http://en.wikipedia.org/wiki/Support_vector_machine.
[32] L. Breiman, “Random forests,” Mach. Learning, vol. 45, no. 1, pp. 5–32, 2001.
[33] T. K. Ho, “Random Decision Forests,” Proc. of the 3rd Int'l Conf. on Document Analysis and Recognition, Montreal, QC, pp. 278–282, Aug. 1995.
[34] M. Mariappan, M. Suk and B. Prabhakaran, “Facial Expression Recognition Using Dual Layer Hierarchical SVM Ensemble Classification,” 2012 IEEE International Symposium on Multimedia (ISM), Irvine, 2012.
[35] A. Saeed, A. Al-Hamadi, R. Niese and M. Elzobi, “Effective geometric features for human emotion recognition,” Signal Processing (ICSP), 2012 IEEE 11th International Conference, Beijing, 2012.
[36] E. Sonmez, B. Sankur and S. Albayrak, “Classification with emotional faces via a robust sparse classifier,” Image Processing Theory, Tools and Applications (IPTA), 2012 3rd International Conference, Istanbul, 2012.
[37] C. Shan, S. Gong and P. W. McOwan, “Robust facial expression recognition using local binary patterns,” Proc. IEEE Int'l Conf. on Image Processing (ICIP), 2005.