
Detailed Record

Author (Chinese): 羅偉誠
Author (English): LO, Wei-Chen
Title (Chinese): 基於深度卷積神經網路方法下,針對秀麗隱桿線蟲顯微影像進行日齡估測及其視覺化解釋
Title (English): Caenorhabditis elegans Age Prediction and Visual Explanations Using Deep Convolutional Neural Networks
Advisor (Chinese): 鐘太郎
Advisor (English): Jong, Tai-Lang
Committee Members (Chinese): 謝奇文, 黃裕煒
Committee Members (English): XIE, QI-WEN; HUANG, YU-WEI
Degree: Master
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 107061597
Year of Publication (R.O.C.): 109 (2020)
Academic Year of Graduation: 108
Language: Chinese
Number of Pages: 111
Keywords (Chinese): machine learning, deep learning, deep convolutional neural network, Caenorhabditis elegans
Keywords (English): InceptionResNetV2, Xception, Grad CAM
This thesis mainly uses deep learning methods to predict the age in days of Caenorhabditis elegans and to provide visual explanations of the predictions made by deep convolutional models. The first half of the thesis therefore introduces the principles of machine learning and the architectures of different convolutional neural networks in deep learning, while the second half discusses the experimental results of this work.
Using microscopic images of C. elegans provided by Prof. Ao-Lin Hsu of National Yang-Ming University, this thesis trains several deep convolutional network architectures to carry out the age-estimation task, with the aim of letting deep learning's automatic feature extraction replace the traditional practice in which biologists judge age by eye from physiological characteristics. During training, an additional curved-or-not feature and a body-length feature of the C. elegans are also fed to the networks, in order to observe whether adding such extra factors makes the age-estimation task more accurate.
The experimental results show that, across the different training-validation-test datasets, adding the length feature to InceptionResNetV2 reduces the average mean absolute error of the age estimates on the test data from 0.87 days (without the length feature) to 0.83 days, raises the average accuracy within one day from 68.16% to 70.40%, and raises the average accuracy within two days from 89.50% to 91.75%. In addition, using an image-classification approach in which a convolutional neural network predicts one of the 14 existing age classes, the Xception model with the length feature improves the average classification accuracy on the different test sets from 56.17% (without the length feature) to 62.16%, the average accuracy within one day from 77.89% to 80.89%, and the accuracy within two days from 90.99% to 92.50%. Both approaches can help biologists estimate the age of C. elegans more quickly.
To give the convolutional neural network's judgments a basis, this thesis applies Grad-CAM, a visual-explanation technique, to draw the hot regions for C. elegans whose ages are classified correctly. It is found that for not-yet-fully-developed C. elegans the networks mainly attend to the uterus or vulva, whereas for fully developed C. elegans they mainly attend to the intestine.

Keywords: machine learning, deep learning, deep convolutional neural network, C. elegans, InceptionResNetV2, Xception, Grad CAM
This thesis mainly uses deep learning to predict the age of Caenorhabditis elegans (C. elegans) and to visually explain the classifications made by deep convolutional neural networks. The first part of the thesis introduces the principles of machine learning and different deep learning architectures. The second part discusses the experimental results of this thesis.
This thesis uses the microscopic C. elegans images provided by Prof. Ao-Lin Hsu of National Yang-Ming University to train four deep convolutional neural networks for the task of age prediction, in the hope that deep learning's ability to extract features automatically can replace the age estimates that biologists make from C. elegans's physiological characteristics. Additional global features, such as whether the worm is curved and the length of the C. elegans, are appended to the input during training to observe whether these extra features make the age prediction more accurate.
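As an illustration of this kind of input augmentation, the following Keras sketch combines a CNN backbone with a small auxiliary-feature branch. It is only a minimal example under assumed settings (the input size, layer widths, and the two auxiliary scalars are illustrative), not the thesis's actual implementation.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_age_regressor(img_shape=(299, 299, 3), n_aux=2):
    # CNN backbone pretrained on ImageNet; the thesis compares several backbones,
    # including InceptionResNetV2 and Xception.
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", input_shape=img_shape, pooling="avg")

    img_in = layers.Input(shape=img_shape, name="image")
    aux_in = layers.Input(shape=(n_aux,), name="aux_features")  # e.g. [is_curved, length]

    x = backbone(img_in)                       # image embedding
    x = layers.Concatenate()([x, aux_in])      # append the global features
    x = layers.Dense(256, activation="relu")(x)
    age = layers.Dense(1, name="age_days")(x)  # regression output: age in days

    model = Model(inputs=[img_in, aux_in], outputs=age)
    model.compile(optimizer="adam", loss="mae", metrics=["mae"])
    return model

Concatenating the scalars after global average pooling leaves the convolutional part untouched, so the same backbone can also drive the softmax-classification variant described below.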
The experimental results show that, over all the different training-validation-testing datasets, augmenting the InceptionResNetV2 model with the length feature reduces the average mean absolute error of age prediction on the test data from 0.87 days to 0.83 days. The average test accuracy within one day improves from 68.16% to 70.40%, and the average test accuracy within two days improves from 89.50% to 91.75%. In addition to the above regression prediction, classification by convolutional neural networks with softmax is used to classify C. elegans into the 14 existing age categories. When the Xception model is augmented with the length feature, the average classification accuracy on the different test sets improves from 56.17% to 62.16%, the average test accuracy within one day improves from 77.89% to 80.89%, and the average test accuracy within two days improves from 90.99% to 92.50%. Both approaches can help biologists predict the ages of C. elegans more quickly.
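For concreteness, the "within one day" and "within two days" accuracies quoted above can be computed from predicted and true ages as in the short sketch below; the numbers in it are made up for illustration and are unrelated to the thesis's datasets.

import numpy as np

def within_k_days_accuracy(y_true, y_pred, k):
    # Fraction of samples whose predicted age deviates from the true age by at most k days.
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true)) <= k))

y_true = np.array([1, 3, 5, 8, 12])              # hypothetical true ages (days)
y_pred = np.array([1.2, 3.9, 4.1, 8.8, 13.6])    # hypothetical model outputs

print(np.mean(np.abs(y_pred - y_true)))           # mean absolute error in days
print(within_k_days_accuracy(y_true, y_pred, 1))  # accuracy within one day
print(within_k_days_accuracy(y_true, y_pred, 2))  # accuracy within two days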
To explain the predictions made by the convolutional networks, this thesis uses Grad-CAM to visually explain the age classification by drawing heatmaps of C. elegans whose ages are predicted correctly. It is found that the networks focus mainly on the vulva and uterus when assessing underdeveloped C. elegans, while they focus mainly on the intestine when assessing fully developed C. elegans.
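A minimal Grad-CAM computation in the spirit of [5] is sketched below, assuming a single-input Keras classifier and an assumed name for the last convolutional layer; the thesis's exact preprocessing and layer choices are not reproduced here.

import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    # Map the input to both the chosen convolutional feature maps and the predictions.
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[None, ...])   # add a batch dimension
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))        # default: the predicted class
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_maps)               # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pool the gradients
    cam = tf.reduce_sum(conv_maps[0] * weights[0], axis=-1)  # channel-weighted sum
    cam = tf.nn.relu(cam)                                 # keep only positive influence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # normalize to [0, 1]

The resulting low-resolution map is then resized to the input image and overlaid as a heatmap, showing which body regions drive the age decision.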

Keywords: Machine learning, Deep learning, Deep Convolutional Neural Network, C. elegans, InceptionResNetV2, Xception, Grad CAM
Table of Contents
Chinese Abstract
Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Preface
1.2 Research Background
1.3 Research Motivation
1.4 Literature Review
1.5 Contributions
1.6 Thesis Organization
Chapter 2 Machine Learning
2.1 Preface
2.2 Supervised Learning [12-18]
2.2.1 Linear Regression
2.2.2 Nonlinear Regression
2.2.3 Classification
2.3 Classifiers
2.3.1 Logistic Regression [13]
2.3.2 Bayesian Networks [14]
2.3.3 Support Vector Machine (SVM) [15]
2.3.4 K-Nearest-Neighbors Classifier [16]
2.3.5 Decision Trees [17]
2.3.6 Random Forests [18]
Chapter 3 Deep Learning
3.1 Preface
3.2 Convolutional Neural Network (CNN) [31]
3.2.1 Convolutional Layer
3.2.2 Pooling Layer
3.2.3 Fully Connected Layer
3.3 Classic CNN Models
3.3.1 VGG16 (Visual Geometry Group-16) [6]
3.3.2 GoogLeNet [7]
3.3.3 ResNet [9]
3.3.4 Xception [11]
3.4 Overfitting
3.4.1 Data Augmentation [43]
3.4.2 Early Stopping [44]
3.4.3 Weight Decay [45]
3.4.4 Dropout [46]
Chapter 4 Grad-CAM
4.1 Preface
4.2 CAM [50]
4.3 Grad-CAM [5]
4.3.1 Introduction to Grad-CAM
4.3.2 Detailed Derivation of Grad-CAM without a GAP Layer
4.3.3 Detailed Derivation of Grad-CAM with a GAP Layer
4.3.4 Summary of the Method
Chapter 5 Analysis Methods and Experimental Results
5.1 Preface
5.2 Introduction to the C. elegans Image Data
5.3 Data Preprocessing
5.3.1 Data Splitting
5.3.2 Data Augmentation
5.4 Experimental Methods and Comparison of Results
5.4.1 Experimental Procedure
5.4.2 Regression Results
5.4.3 Logistic Regression Results
5.4.4 Comparison of Regression and Logistic Regression Results
5.5 Visual Explanation of the Models
Conclusion and Future Work
References

[1] Heidi A. Tissenbaum, “Using C.elegans for aging research,” Invertebrate Reproduction & Development, 59:sup1, 59-63, DOI: 10.1080/07924259.2014.940470, 2015
[2] Zhang, William B et al. “Extended Twilight among Isogenic C. elegans Causes a Disproportionate Scaling between Lifespan and Health.” Cell systems vol. 3,4 (2016): 333-345.e4.
[3] Hsu, Ao-Lin et al. “Identification by machine vision of the rate of motor activity decline as a lifespan predictor in C. elegans.” Neurobiology of aging vol. 30,9 (2009): 1498-503.
[4] J. Lin, W. Kuo, Y. Huang, T. Jong, A. Hsu and W. Hsu, "Using Convolutional Neural Networks to Measure the Physiological Age of Caenorhabditis elegans," in IEEE/ACM Transactions on Computational Biology and Bioinformatics.
[5] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh and D. Batra, "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, 2017, pp. 618-626.
[6] Karen Simonyan, Andrew Zisserman (2015)."Very Deep Convolutional Networks for Large-Scale Image Recognition"
[7] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich (2014): "Going deeper with convolutions"
[8] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in AAAI Conference on Artificial Intelligence, 2017, pp. 4278–4284
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (2015)."Deep Residual Learning for Image Recognition"
[10] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam (2017)."MobileNets: efficient convolutional neural networks for mobile vision applications"
[11] F. Chollet, "Xception: Deep Learning with Depthwise Separable Convolutions," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 1800-1807.
[12] Nasteski, Vladimir. (2017). An overview of the supervised machine learning methods. HORIZONS.B. 4. 51-62. 10.20544/HORIZONS.B.04.1.17.P05.
[13] T. Haifley, "Linear logistic regression: an introduction," IEEE International Integrated Reliability Workshop Final Report, 2002., Lake Tahoe, CA, USA, 2002, pp. 184-187.
[14] C. Yonghui, "Study of the Case of Learning Bayesian Network from Incomplete Data," 2009 International Conference on Information Management, Innovation Management and Industrial Engineering, Xi'an, 2009, pp. 66-69.
[15] Hsu, C.-W & Chang, C.-C & Lin, C.-J. (2003). A Practical Guide to Support Vector Classification. 101. 1396-1400.
[16] F. Pernkopf, “Bayesian network classifiers versus selective k-NN classifiers”, Pattern recognition, vol. 38, no. 1, pp. 1-10, 2005
[17] Quinlan, J. 1986. Induction of decision trees. Machine Learning
[18] L. Breiman. 2001. Random forests. Machine learning
[19] H. U. Dike, Y. Zhou, K. K. Deveerasetty and Q. Wu, "Unsupervised Learning Based On Artificial Neural Network: A Review,"2018 IEEE International Conference on Cyborg and Bionic Systems (CBS), Shenzhen, 2018, pp. 322-327.
[20] W. Qiang and Z. Zhongli, "Reinforcement learning model, algorithms and its application," 2011 International Conference on Mechatronic Science, Electric Engineering and Computer (MEC), Jilin, 2011, pp. 1143-1146.
[21] O'Shea, Keiron & Nash, Ryan. (2015). An Introduction to Convolutional Neural Networks. ArXiv e-prints.
[22] Alpha Go [Online]. Available: https://deepmind.com/research/alphago/
[23] KEVIN (2016) 機器學習(Machine Learning)介紹, [Online]. Available: http://hadoopspark.blogspot.com/2016/02/blog-post.html
[24] Machine Learning Classification, https://data-flair.training/blogs/machine-learning-classification-algorithms/
[25] Sigmoid function, https://zh.wikipedia.org/wiki/S%E5%87%BD%E6%95%B0
[26] Softmax Classifier and Cross-Entropy, [Online].Available :https://mc.ai/notes-on-deep-learning%E2%80%8A-%E2%80%8Asoftmax-classifier/
[27] Introduction to Support Vector Machines [Online]. Available: https://docs.opencv.org/2.4/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html
[28] K-近鄰算法解讀,[Online].Available:https://kknews.cc/zh-tw/news/gvx3jae.html
[29] What is Random Forest? https://medium.com/@ryotennis0503/random-forest-27e650072269
[30] A. Singh, N. Thakur and A. Sharma, "A review of supervised machine learning algorithms," 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, 2016, pp. 1310-1315.
[31] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," in Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[32] Sherstinsky, Alex. “Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network.” Physica D: Nonlinear Phenomena 404 (2020): 132306. Crossref. Web.
[33] Fischer, Asja & Igel, Christian. (2012). An Introduction to Restricted Boltzmann Machines. 14-36. 10.1007/978-3-642-33275-3_2.
[34] Shiruru, Kuldeep. (2016). AN INTRODUCTION TO ARTIFICIAL NEURAL NETWORK. International Journal of Advance Research and Innovative Ideas in Education. 1. 27-30.
[35] Yeh James (2017) 資料分析-機器學習-第5-1講-卷積神經網絡介紹 [Online]. Available: https://medium.com/@yehjames/
[36] GGWithRabitLIFE (2018) [機器學習 ML NOTE] Convolution Neural Network 卷積神經網路, [Online]. Available: https://medium.com/機機與兔兔的工程世界/機器學習-ml-note-convolution-neural-network-卷積神經網路-bfa8566744ep
[37] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," in Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[38] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2017. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 6 (May 2017), 84–90.
[39] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 2818-2826.
[40] Ioffe, Sergey & Szegedy, Christian. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
[41] Kaiser, Lukasz & Gomez, Aidan & Chollet, Francois. (2017). Depthwise Separable Convolutions for Neural Machine Translation.
[42] YIN GUOBING (2018) Separable Convolution,[Online].Available: https://blog.csdn.net/tintinetmilou/article/details/81607721
[43] Shorten, Connor & Khoshgoftaar, Taghi. (2019). A survey on Image Data Augmentation for Deep Learning. Journal of Big Data. 6. 10.1186/s40537-019-0197-0.
[44] Prechelt, Lutz. (2000). Early Stopping - But When?. 10.1007/3-540-49430-8_3.
[45] Krogh, A., and Hertz, J. A. (1992). “A Simple Weight Decay Can Improve Generalization,” in Advances in Neural Information Processing Systems 4, eds. J. E. Moody, S. J. Hanson, and R. P. Lippmann (San Francisco, CA: Morgan Kaufmann), 950--957. Available at: ftp://ftp.ci.tuwien.ac.at/pub/texmf/bibtex/nips-4.bib.
[46] Srivastava, Nitish & Hinton, Geoffrey & Krizhevsky, Alex & Sutskever, Ilya & Salakhutdinov, Ruslan. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research. 15. 1929-1958.
[47] 莉森揪(2018) [精進魔法]Regularization:減少Overfitting,提高模型泛化能力,[Online].Available:https://ithelp.ithome.com.tw/articles/10203371
[48] Vogl, Richard. (2018). Deep Learning Methods for Drum Transcription and Drum Pattern Generation. 10.13140/RG.2.2.34638.51529.
[49] Microstrong (2019) 深度學習中Dropout原理解析, [Online].Available: https://www.itread01.com/content/1547209261.html
[50] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva and A. Torralba, "Learning Deep Features for Discriminative Localization," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 2921-2929.
[51] Xu, L. & Ren, Jimmy & Liu, C. & Jia, J.. (2014). Deep convolutional neural network for image deconvolution. Advances in Neural Information Processing Systems. 2. 1790-1798.
[52] Springenberg, Jost & Dosovitskiy, Alexey & Brox, Thomas & Riedmiller, Martin. (2014). Striving for Simplicity: The All Convolutional Net.
[53] Kabael, T. U. (2010). Cognitive development of applying the chain rule through three worlds of mathematics. Australian Senior Mathematics Journal, 24(2), 14-28.
[54] Python, https://www.python.org/
[55] Keras, https://keras.io/
[56] Tensorflow, https://www.tensorflow.org/
[57] Theano, http://deeplearning.net/software/theano/
[58] Keras, Theano and TensorFlow on Windows and Linux, [Online].Available: https://gettocode.com/2016/12/02/keras-on-theano-and-tensorflow-on-windows-and-linux/
[59] Jiunn-Liang Lin, Yung-Sheng Chen, Yi-Hao Huang, Ao-Lin Hsu, Tai-Lang Jong, and Wen-Hsing Hsu, “Approach to the Caenorhabditis elegans segmentation from its microscopic image,” IEEE International Conference on Systems, Man, and Cybernetics, Oct. 2018
[60] Wikipedia contributors. "Caenorhabditis elegans." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 27 Jun. 2020. Web. 29 Jun. 2020.
[61] Uppaluri, Sravanti, and Clifford P Brangwynne. “A size threshold governs Caenorhabditis elegans developmental progression.” Proceedings. Biological sciences vol. 282,1813 (2015): 20151283.
[62] Zhou, Y., Wang, X., Song, M. et al. A secreted microRNA disrupts autophagy in distinct tissues of Caenorhabditis elegans upon ageing. Nat Commun 10, 4827 (2019).
[63] Pan CL, Peng CY, Chen CH, McIntire S. Genetic analysis of age-dependent defects of the Caenorhabditis elegans touch receptor neurons. Proc Natl Acad Sci U S A. 2011;108(22):9274-9279.