Author (Chinese): 劉書宏
Author (English): Liou, Shu-Hong
Thesis Title (Chinese): 基於人臉正面和側面特徵之唐氏症辨識
Thesis Title (English): Down Syndrome Recognition Based on Frontal and Lateral Facial Features
Advisors (Chinese): 陳永昌, 鐘太郎
Advisors (English): Chen, Yung-Chang; Jong, Tai-Lang
Oral Examination Committee (Chinese): 張隆紋, 謝凱生
Oral Examination Committee (English): Chang, Long-Wen; Hsieh, Kai-Hsien
Degree: Master's
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 100061530
Year of Publication (R.O.C. calendar): 102 (2013)
Academic Year of Graduation: 101
Language: Chinese, English
Number of Pages: 61
Keywords (Chinese): 唐氏症 (Down syndrome), 人臉 (face)
Chinese Abstract:
Down syndrome is one of the more common congenital genetic disorders among newborns. Because of a genetic abnormality involving chromosome 21, affected individuals may show developmental delay and, in some cases, deformities of the facial features and body. In current medical practice, the most accurate way to diagnose Down syndrome in a newborn is a chromosomal test; however, although chromosomal testing achieves high accuracy, the DNA analysis it requires is expensive and time-consuming. We therefore propose a recognition method for simple and fast screening of the disease, in which a user without any medical expertise can hand the assessment over to an automated system, effectively saving manpower.

As mentioned above, because of this genetic abnormality, patients with Down syndrome exhibit distinctive facial features compared with typical individuals, such as a flat nose, low-set ears, and an unusually wide distance between the eyes. The main idea of this thesis is therefore to extract these distinctive facial features in order to identify children with Down syndrome. Our goal is to build a Down syndrome recognition system that takes frontal and lateral face images as input, extracts the key features from them, and passes the features through a classifier to determine whether the subject has Down syndrome.

In this recognition system, we extract eight important and commonly observed Down syndrome facial features from the two provided face images: five from the frontal view and three from the lateral view. They measure the distance between the eyes, the slant and shape of the eyes, the slant of the nasal bridge, the upward angle of the nose tip, and the relative position of the ear within the profile. The extracted features are then fed into a previously trained classifier, which produces the recognition result.
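As a minimal illustration (not taken from the thesis; all names below are hypothetical), the eight measurements could be packed into a single feature vector before classification:

```python
import numpy as np

def build_feature_vector(frontal_feats, lateral_feats):
    """Concatenate the frontal- and lateral-view measurements into one
    8-dimensional vector for the classifier.

    frontal_feats: 5 values, e.g. inter-eye distance, eye slant, eye shape,
                   and related frontal measurements (exact split assumed).
    lateral_feats: 3 values, e.g. nasal-bridge slant, nose-tip upward angle,
                   relative ear position in the profile.
    """
    vec = np.concatenate([np.asarray(frontal_feats, dtype=float),
                          np.asarray(lateral_feats, dtype=float)])
    assert vec.shape == (8,), "expected 5 frontal + 3 lateral measurements"
    return vec
```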

Experimental results show that the system identifies patients with Down syndrome in the test data with a recognition rate of nearly 90%.
English Abstract:
Down syndrome is one of the more common congenital genetic disorders in newborns. Children born with Down syndrome may show delayed physical growth and a particular set of facial characteristics caused by the presence of an extra copy of chromosome 21. A chromosomal test can be performed to check for the extra chromosome and confirm the diagnosis, but such testing consumes a large amount of time and money. We therefore aim to develop a simple and fast method to recognize children with Down syndrome.

As mentioned above, patients with Down syndrome usually exhibit several distinctive facial features, so a recognition system based on these characteristics can help physicians examine the patient. Given frontal and lateral face photographs of the patient, the proposed system extracts the facial features that are important for Down syndrome and feeds them into a classifier trained in advance. The trained model then discriminates between subjects with Down syndrome and typical subjects, allowing the system to recognize affected children.
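A minimal sketch of the classification step, assuming the eight features have already been extracted from the two photographs: the thesis trains an AdaBoost classifier (Chapter 5), and scikit-learn's AdaBoostClassifier is used here only as a stand-in, with hypothetical function names and parameters.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_down_syndrome_classifier(feature_vectors, labels):
    """feature_vectors: (N, 8) array, one row per subject
    (5 frontal-view + 3 lateral-view measurements).
    labels: 1 for Down syndrome, 0 for the control group."""
    clf = AdaBoostClassifier(n_estimators=50)  # number of boosting rounds is assumed
    clf.fit(feature_vectors, labels)
    return clf

def recognize(clf, feature_vector):
    """Return 1 if the trained model flags the subject as a Down syndrome case."""
    return int(clf.predict(np.asarray(feature_vector).reshape(1, -1))[0])
```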

We select eight facial features that commonly appear in patients with Down syndrome, five from the frontal view and three from the lateral view. They capture the following characteristics: hypertelorism, almond-shaped eyes, up-slanting palpebral fissures, an up-turned nose, a flat nasal bridge, and low-set ears.

Experimental results show that the proposed recognition system correctly identifies Down syndrome cases with nearly 90% accuracy.
Table of Contents i
List of Figures iv
List of Tables viii
Chapter 1 Introduction 1
1.1 Overview of Down Syndrome 1
1.2 Motivation 3
1.3 Related Works 3
1.4 Facial Features 5
1.4.1 Features of Frontal Face 6
1.4.2 Features of Lateral Face 7
1.5 System Flowchart 9
1.5.1 Extraction of Frontal Face 9
1.5.2 Extraction of Lateral Face 9
1.6 Thesis Organization 10
Chapter 2 Face Detection 11
2.1 Color Based Skin Segmentation 11
2.1.1 Skin Region Segmentation in Y Color Space 12
2.1.2 Threshold Selection Method – Otsu’s Method 14
2.1.3 Skin Color Segmentation with Otsu’s Method 15
2.2 Morphological Processing – Opening 18
Chapter 3 Facial Features Extraction: Frontal View 19
3.1 Eye Detection 19
3.2 Eye Contour Extraction 21
3.2.1 Level Set Method with Distance Regularized Level Set Evolution 21
3.2.1.1 Distance Regularized Term 22
3.2.1.2 External Energy Term 23
3.2.1.3 Evolution Equation 24
3.2.2 Eye Contour Extraction with Level Set Method in DRLSE Model 25
3.3 Ellipse Fitting 29
3.3.1 Direct Least Square Fitting of Ellipses 29
3.3.2 Feature Extraction with Ellipse Fitting 30
3.4 Nose Tip Detection 32
Chapter 4 Facial Features Extraction: Lateral View 35
4.1 Eye Detection 36
4.2 Eye Corner Extraction 37
4.3 Nose Contour Extraction 41
4.4 Ear Detection and Contour Extraction 43
4.4.1 SURF 44
4.4.2 Application 46
Chapter 5 Classification System – Adaboost 49
5.1 Boosting 49
5.2 Adaboost 50
Chapter 6 Experimental Results 52
6.1 Sample Preparation 52
6.2 Classification System Training and Testing Result 53
6.3 Receiver Operating Characteristics 54
Chapter 7 Conclusions and Future Research 56
7.1 Summary and Conclusions 56
7.2 Suggestions for Future Work 57
References 59
(The full text of this thesis is not authorized for public release.)