Detailed Record

Author (Chinese): 林培權
Author (English): Lin, Pei-Chuan
Title (Chinese): 利用類別增強進行胸部X光之小樣本學習
Title (English): Few-shot learning for chest X-ray using class augmentation
Advisor (Chinese): 郭柏志
Advisor (English): Kuo, Po-Chih
Committee Members (Chinese): 李濬屹、邱維辰
Committee Members (English): Lee, Chun-Yi; Chiu, Wei-Chen
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 108062658
Year of Publication (ROC calendar): 111 (2022)
Graduation Academic Year: 110
Language: English
Number of Pages: 48
Keywords (Chinese): 機器學習、電腦視覺、胸部X光、小樣本學習、元學習
Keywords (English): Machine learning, Computer vision, Chest X-ray, Few-shot learning, Meta-learning
Abstract (Chinese): Data scarcity is one of the most common problems in medical imaging, owing to patient privacy concerns and the high cost of labeling. In rare disease classification tasks, the data scarcity problem becomes even more severe, which makes commonly used deep learning methods difficult to apply. We therefore propose a meta-learning approach to solve the few-shot classification problem for chest X-rays. A meta-learning model is trained on many different classification tasks so that it can quickly adapt to a new classification task using only a small amount of data. To address the overfitting problem in meta-learning, we use a novel class augmentation based on generative adversarial networks to increase both the number of images and the number of classes. We evaluate the meta-learning performance on three different binary classification tasks and one three-class classification task. Two of them are obtained from the public chest X-ray dataset MIMIC-CXR, and one is a rare disease dataset obtained from a local hospital. Compared with meta-learning without class augmentation, our method improves accuracy on the binary classification tasks with 50 images each by 7.14%, 4.47%, and 4.43%, respectively, and improves accuracy on the three-class task with 50 images each by 2.5%. Beyond the few-shot learning tasks, we also apply our method to a cross-dataset classification task. In this experiment, we use MIMIC-CXR as the training set and another public chest X-ray dataset, CheXpert, as the testing set. The results show that, on some cross-dataset classification tasks with severe domain overfitting, meta-learning with 50 training samples per class achieves 2.5% higher accuracy than conventional methods. This study shows that the proposed method can strengthen disease classification from few-shot chest X-rays while maintaining good generalizability.
Abstract (English): Data scarcity is one of the most common problems in medical imaging due to data privacy issues and the high cost of annotation. In rare disease classification tasks, the data scarcity problem becomes more severe, which makes general deep learning approaches infeasible. In this study, we present a meta-learning approach to solve the few-shot chest X-ray (CXR) classification problem. A meta-learning model is trained on many different classification tasks and can adapt to new classification tasks efficiently using only a few training samples. To combat the overfitting problem in meta-learning, we used a novel GAN-based class augmentation to increase not only the number of images but also the number of classes. We used three different 2-way classification tasks and a 3-way classification task to evaluate the proposed method. Two of them were obtained from the public CXR dataset (MIMIC-CXR) and one is a rare disease dataset obtained from a local hospital. Compared to meta-learning without augmentation, the proposed class augmentation increases the accuracy of the three 2-way 50-shot tasks by 7.14%, 4.47%, and 4.43%, respectively. For the 3-way 50-shot classification task, the accuracy increased by 2.5%. We also applied our method to a cross-dataset classification task, where domain overfitting may occur. In this experiment, we used MIMIC-CXR as the training set and another public CXR dataset (CheXpert) as the testing set. The results showed that 50-shot meta-learning can increase performance by 2.5% compared to conventional methods. This study demonstrates that incorporating a class augmentation method into meta-learning generalizes well and can improve the accuracy of disease classification from CXR images.
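As a concrete illustration of the meta-learning component listed in the table of contents (Reptile, Section 3.2), the following Python/PyTorch sketch shows a generic Reptile outer loop for few-shot classification. It is a minimal sketch of the published Reptile algorithm, not the thesis's actual code: the backbone model, the hypothetical sample_task_loader() helper, and the learning rates and step counts are all assumptions, and class-augmented tasks would simply be additional tasks returned by that sampler.

# Minimal Reptile sketch (first-order meta-learning); names marked as
# hypothetical are placeholders, not the thesis's implementation.
import copy
import torch
import torch.nn.functional as F

def inner_adapt(model, task_loader, inner_lr=1e-3, inner_steps=5):
    """Clone the meta-model and take a few SGD steps on one task's support set."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    data_iter = iter(task_loader)
    for _ in range(inner_steps):
        images, labels = next(data_iter)          # one small support-set batch
        loss = F.cross_entropy(adapted(images), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted

def reptile_meta_step(meta_model, adapted, meta_lr=0.1):
    """Reptile update: theta <- theta + meta_lr * (phi - theta),
    i.e. move the meta-parameters toward the task-adapted weights."""
    with torch.no_grad():
        for theta, phi in zip(meta_model.parameters(), adapted.parameters()):
            theta.add_(meta_lr * (phi - theta))

# Outer loop (schematic): sample_task_loader() is a hypothetical helper that
# yields support-set batches for one randomly chosen classification task; with
# class augmentation, GAN-generated pseudo-classes just add more such tasks.
# for step in range(num_meta_steps):
#     task_loader = sample_task_loader()
#     adapted = inner_adapt(meta_model, task_loader)
#     reptile_meta_step(meta_model, adapted)

Unlike MAML, Reptile needs no second-order gradients; the meta-update is simply an interpolation toward the task-adapted weights, which keeps the training loop cheap enough to run over many sampled tasks.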
Contents
Abstract (Chinese) I
Abstract II
Contents III
List of Figures V
List of Tables VII
List of Algorithms VIII
1 Introduction 1
2 Related Works 5
2.1 Generative adversarial network . . . . . . . . . . . . . . . . . 5
2.2 Generative adversarial network in medical imaging . . . . . . . 6
2.3 Optimization-based meta-learning . . . . . . . . . . . . . . . . 8
2.4 Optimization-based meta-learning in medical fields . . . . . . . 9
2.5 Meta-learning in CXR classification . . . . . . . . . . . . . . 10
2.6 Few-shot skin disease identification using meta-learning . . . .11
3 Methodology 12
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
3.2 Meta-learning method: Reptile . . . . . . . . . . . . . . . . . 13
3.3 Class augmentation method . . . . . . . . . . . . . . . . . . . 16
3.3.1 Conditional Generative Adversarial Network . . . . . . . . . .16
3.3.2 Generate pseudo-class chest X-ray using conditional GAN . . . 20
4 Experiments & Results 21
4.1 Dataset description . . . . . . . . . . . . . . . . . . . . . . 21
4.1.1 MIMIC-CXR . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.1.2 CheXpert . . . . . . . . . . . . . . . . . . . . . . . . . . .21
4.1.3 Dataset from local hospital . . . . . . . . . . . . . . . . . 22
4.2 Dataset splitting . . . . . . . . . . . . . . . . . . . . . . . 22
4.3 Experiment details . . . . . . . . . . . . . . . . . . . . . . .24
4.4 Experimental results . . . . . . . . . . . . . . . . . . . . . .25
4.4.1 Generate pseudo class MNIST images . . . . . . . . . . . . . .25
4.4.2 Generate new types of X-ray images by Conditional GAN . . . . 25
4.4.3 Reptile using class augmentation . . . . . . . . . . . . . . .28
4.4.4 Results of few-shot learning using Reptile . . . . . . . . . .29
4.4.5 MAML with class augmentation . . . . . . . . . . . . . . . . .31
4.4.6 Cross-dataset analysis . . . . . . . . . . . . . . . . . . . .32
4.4.7 Comparison of meta-learning and transfer learning on NTM and TB classification . . . . 34
5 Discussion and future work 36
6 Conclusions 41
Bibliography 42