Author (Chinese): 張德萱
Author (English): Chang, De-Syun
Title (Chinese): 利用非線性資料擴增方法提升超音波影像惡性肋膜積液鑑定
Title (English): Enhancing Malignancy Identification in Ultrasound Images of Pleural Effusion through Non-linear Augmentation
Advisor (Chinese): 郭柏志
Advisor (English): Kuo, Po-Chih
Committee Members (Chinese): 賴尚宏、劉育綸
Committee Members (English): Lai, Shang-Hong; Liu, Yu-Lun
Degree: Master's
Institution: 國立清華大學 (National Tsing Hua University)
Department: 資訊工程學系 (Computer Science)
Student ID: 110062640
Year of Publication (ROC): 112 (2023)
Graduation Academic Year (ROC): 111
Language: English
Number of Pages: 58
Keywords (Chinese): 超音波影像、肋膜積液、資料擴增
Keywords (English): ultrasound images; pleural effusion; data augmentation
Usage statistics:
  • Recommendations: 0
  • Views: 114
  • Rating: *****
  • Downloads: 0
  • Bookmarks: 0
Abstract (Chinese):
The current mainstream invasive methods for diagnosing malignant pleural effusion can put patients at risk, and the time spent waiting for cytology results delays diagnosis. Non-invasive ultrasound imaging, being safe and immediately available, is therefore a possible diagnostic option. However, accurately interpreting ultrasound images is a challenging task that requires expertise and is prone to error. Our goal was therefore to train the first model that can accurately classify pleural effusions in ultrasound images as benign or malignant, in order to reduce the risks of invasive diagnosis and shorten diagnostic delay. Nevertheless, the scarcity of medical images, which are difficult to obtain, is often a major obstacle to training deep learning models.
In this study, we examined the effectiveness of different non-linear data augmentation methods for improving malignant pleural effusion classification in chest ultrasound images, and explored the potential of these methods to strengthen the model's predictive ability and thereby reduce reliance on invasive clinical diagnostic procedures. We trained and internally validated the models on the NTUH-HC dataset and externally validated them on the NTUH dataset. Our results show that several non-linear augmentation methods outperformed both no augmentation and linear augmentation, and that randomly selecting among and combining multiple effective non-linear methods can substantially improve the diagnostic model's performance. Among the image-pairing strategies for augmentation, randomly selecting the paired image from the same cluster achieved the best accuracy, AUC, and F1 score. We also observed that the number of image clusters affects the results, and that there may be an upper limit beyond which more clusters bring no further improvement.
In addition, we trained an object detection model to locate the pleural effusion region before the classification task; compared with using the original images without delineating the effusion, this step effectively improved classification performance. Our work contributes to the understanding of image augmentation methods and their impact on classification performance: the model achieved an accuracy of 0.75 (95% CI 0.70–0.79), an AUC of 0.75 (95% CI 0.67–0.77), and an F1 score of 0.64 (95% CI 0.56–0.71). Compared with a model trained only on the original data, without augmentation and without cropping the effusion region, accuracy improved by 9%, AUC by 5%, and F1 score by 8%. By effectively augmenting the available data, we can improve the performance of the classification model, demonstrating its potential as a computer-aided diagnostic tool in clinical practice.
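As a rough illustration of the ROI-cropping step described above, the following Python sketch crops the detected effusion region before it is passed to the classifier. It assumes a detector that returns a single bounding box in pixel coordinates; the function and variable names are illustrative, not the thesis's actual code.

```python
# Hypothetical sketch: crop a pleural-effusion ROI from an ultrasound frame
# before classification, assuming a detector that returns one bounding box
# as (x_min, y_min, x_max, y_max) in pixel coordinates.
import numpy as np

def crop_roi(image: np.ndarray, box: tuple, margin: float = 0.1) -> np.ndarray:
    """Crop the detected effusion region, padded by a relative margin."""
    h, w = image.shape[:2]
    x0, y0, x1, y1 = box
    dx = int((x1 - x0) * margin)
    dy = int((y1 - y0) * margin)
    x0, y0 = max(0, x0 - dx), max(0, y0 - dy)
    x1, y1 = min(w, x1 + dx), min(h, y1 + dy)
    return image[y0:y1, x0:x1]

# Example: a 512x512 grayscale frame and a detector box covering the effusion.
frame = np.zeros((512, 512), dtype=np.uint8)
roi = crop_roi(frame, (120, 200, 380, 460))
print(roi.shape)  # cropped patch fed to the classifier
```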
Abstract:
Current diagnostic methods for malignant pleural effusion (MPE) can delay diagnosis while cytopathological examination is pending and expose patients to the risks associated with invasive procedures. Chest ultrasound imaging offers a non-invasive, safe, and real-time alternative for diagnosing MPE. However, accurately interpreting ultrasound images is a challenging task that requires expertise and is prone to error. Therefore, our objective was to train the first model to classify the malignancy of pleural effusion in ultrasound images, with the aim of minimizing diagnostic delays and reducing the risks associated with invasive procedures. However, the scarcity of annotated medical images, due to privacy concerns and the high cost of annotation, poses significant challenges for training deep learning models.
In this study, we investigated the effectiveness of various non-linear augmentation methods in improving pleural effusion classification in ultrasound images. We explored the potential of these methods to enhance the models' predictive capabilities and reduce the reliance on invasive diagnostic procedures. We trained and internally validated the models on the NTUH-HC dataset and then externally validated them on the NTUH dataset. Our findings revealed that several non-linear methods outperformed both the baseline and the linear methods. Additionally, randomly selecting among the non-linear methods that exceed the baseline yields a strong ensemble method. Among the image-pairing strategies, selecting the paired image from the same cluster consistently exhibited superior accuracy, AUC, and F1 score. We also observed that increasing the number of partition clusters had a mixed effect on performance, suggesting a saturation point or diminishing returns. Furthermore, we trained an object detection model to detect the presence of pleural effusion before the classification task. This additional step of cropping the region of interest (ROI) significantly improved classification performance compared to using the original, uncropped images.
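The following is a minimal Python sketch of one possible non-linear mixing variant combined with same-cluster pairing, as described above. The mixing rule shown (a single horizontal cut between two images, in the spirit of mixed-example augmentation) and the names `images`, `labels`, and `cluster_ids` are assumptions; the thesis's exact augmentation methods and clustering procedure are not reproduced here.

```python
# Minimal sketch of non-linear image mixing with same-cluster pairing.
# The mixing rule (one horizontal cut) is only one "mixed-example" style
# variant; the cluster assignment is assumed to be precomputed.
import numpy as np

rng = np.random.default_rng(0)

def mix_same_cluster(images: np.ndarray, labels: np.ndarray,
                     cluster_ids: np.ndarray, idx: int):
    """Pair images[idx] with a random partner from the same cluster and
    mix them with a random horizontal cut; labels are mixed by area."""
    same = np.flatnonzero(cluster_ids == cluster_ids[idx])
    partner = rng.choice(same[same != idx]) if len(same) > 1 else idx
    h = images.shape[1]
    cut = rng.integers(1, h)              # row where the two images meet
    mixed = images[idx].copy()
    mixed[cut:] = images[partner][cut:]   # top from idx, bottom from partner
    lam = cut / h                         # label weight = area fraction
    mixed_label = lam * labels[idx] + (1 - lam) * labels[partner]
    return mixed, mixed_label

# Example: 8 grayscale 64x64 images, binary labels, 2 clusters.
imgs = rng.random((8, 64, 64)).astype(np.float32)
ys = rng.integers(0, 2, size=8).astype(np.float32)
clusters = np.array([0, 0, 0, 1, 1, 1, 0, 1])
aug_img, aug_y = mix_same_cluster(imgs, ys, clusters, idx=2)
print(aug_img.shape, aug_y)
```

Sampling the partner from the same cluster keeps the mixed pair visually similar, which is the pairing strategy reported above to perform best.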
Our work contributes to the understanding of image augmentation techniques and their impact on classification performance. We achieved an accuracy of 0.75 (95% CI 0.70–0.79), an AUC of 0.75 (95% CI 0.67–0.77), and an F1 score of 0.64 (95% CI 0.56–0.71). Compared to using the original data alone, without augmentation or ROI cropping, we observed an accuracy increase of 0.09, an AUC improvement of 0.05, and an F1 score improvement of 0.08. By effectively augmenting the available data, we can enhance the performance of classification models, offering a potential initial screening mechanism for identifying malignant pleural effusion.
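For reference, confidence intervals like those reported above are commonly obtained by bootstrap resampling of the test set; the sketch below shows one such computation for accuracy, AUC, and F1 score using scikit-learn metrics. The resampling details (number of iterations, 0.5 decision threshold) are assumptions, not the thesis's exact procedure.

```python
# Hedged sketch: percentile-bootstrap 95% confidence intervals for
# accuracy, AUC, and F1 on a held-out test set.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

def bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    stats = {"accuracy": [], "auc": [], "f1": []}
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        yt, ys = y_true[idx], y_score[idx]
        if yt.min() == yt.max():             # AUC needs both classes present
            continue
        yp = (ys >= 0.5).astype(int)         # assumed decision threshold
        stats["accuracy"].append(accuracy_score(yt, yp))
        stats["auc"].append(roc_auc_score(yt, ys))
        stats["f1"].append(f1_score(yt, yp))
    lo, hi = 100 * alpha / 2, 100 * (1 - alpha / 2)
    return {k: (np.percentile(v, lo), np.percentile(v, hi))
            for k, v in stats.items()}

# Example with dummy predictions:
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0])
y_prob = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.55, 0.35, 0.1])
print(bootstrap_ci(y_true, y_prob, n_boot=500))
```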
Table of Contents:
Abstract (Chinese) . . . . . . . . . . . . . . . . . . I
Abstract . . . . . . . . . . . . . . . . . . II
Acknowledgements (Chinese) . . . . . . . . . . . . . . . . . . IV
Contents . . . . . . . . . . . . . . . . . . V
List of Figures . . . . . . . . . . . . . . . . . . VIII
List of Tables . . . . . . . . . . . . . . . . . . XI
1 Introduction . . . . . . . . . . . . . . . . . . 1
2 Related Works . . . . . . . . . . . . . . . . . . 4
3 Methodology . . . . . . . . . . . . . . . . . . 9
4 Results . . . . . . . . . . . . . . . . . . 24
5 Discussion . . . . . . . . . . . . . . . . . . 34
6 Conclusion . . . . . . . . . . . . . . . . . . 41
Bibliography . . . . . . . . . . . . . . . . . . 43
7 Supplementary . . . . . . . . . . . . . . . . . . 49
(The full text will be open to external access after 2028/08/26.)