
Detailed Record

Author (Chinese): 廖映淇
Author (English): Liao, Ying-Ci
Thesis Title (Chinese): 使用深度學習透過結合不同損失函數應用於低劑量電腦斷層影像的去雜訊
Thesis Title (English): Deep learning for low dose CT denoising using different loss functions
Advisor (Chinese): 許靖涵
Advisor (English): Hsu, Ching-Han
Committee Members (Chinese): 彭旭霞、蕭穎聰
Committee Members (English): Peng, Hsu-Hsia; Hsiao, Ing-Tsung
Degree: Master
University: National Tsing Hua University
Department: Department of Biomedical Engineering and Environmental Sciences
Student ID: 109012524
Year of Publication (R.O.C. calendar): 111 (2022)
Graduation Academic Year: 110
Language: Chinese
Number of Pages: 91
Keywords (Chinese): 電腦斷層、影像去雜訊、卷積神經網路、混和損失函數
Keywords (English): Computed tomography, Noise reduction, Hybrid loss function, Feature extraction, Deep learning, Image quality
Computed tomography (CT) is a diagnostic tool widely used in modern medicine. Low-dose CT reduces the radiation dose received by the patient, but it introduces noise that degrades image quality and thereby lowers diagnostic value. To improve image quality, three kinds of methods are generally used: sinogram filtering, iterative reconstruction, and image post-processing, each of which has its own advantages, disadvantages, and limitations. A convolutional neural network, a network composed of multiple nonlinear convolutional layers, has proven effective in a wide range of tasks; applied to denoising, its advantage is that it can learn a nonlinear transfer function from pairs of low-dose and normal-dose images.
Early studies concentrated on optimizing the network architecture, but the loss function is also one of the factors that determine the quality of the generated images. Commonly used loss functions include pixel-level losses, perceptual losses, and others, each with different properties. The goal of this study is to design a combination of loss functions that retains the advantages of each component and, ultimately, to find the combination best suited to low-dose CT denoising.
In this study I used different kinds of neural network architectures, including a convolutional neural network with residual learning and a generative adversarial network, combined with the mean absolute error (MAE), a perceptual loss computed with VGG-Net, and a structural similarity loss.
The results show that MAE is of great help to model training, that the perceptual loss produces texture features similar to those of normal-dose CT images, and that the structural similarity loss plays an important role in the accuracy of pixel values.
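To make the paired-training idea above concrete, the following is a minimal sketch of a residual convolutional denoiser trained on low-dose/normal-dose patch pairs with an MAE objective. PyTorch, the layer count, channel width, patch size, and learning rate are illustrative assumptions, not the settings used in the thesis.

import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, low_dose):
        # Residual learning: the network estimates the noise, which is then
        # subtracted from the low-dose input to obtain the denoised image.
        return low_dose - self.body(low_dose)

# One illustrative training step on a batch of (low-dose, normal-dose) patch pairs.
model = ResidualDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
low, normal = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # placeholder patches
loss = nn.functional.l1_loss(model(low), normal)  # MAE between output and normal-dose target
optimizer.zero_grad()
loss.backward()
optimizer.step()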
Computed tomography (CT) is a popular medical imaging modality. Low-dose CT reduces the radiation dose, but it increases noise and artifacts, which significantly degrade image quality and can compromise diagnostic information. To improve image quality, three kinds of methods are generally used: sinogram filtering, iterative reconstruction, and image post-processing. However, each of these methods has its own advantages, disadvantages, and limitations. A convolutional neural network (CNN) is a neural network with one or more convolutional layers, used mainly for image processing, classification, segmentation, and other tasks. The advantage of applying a CNN to denoising is that it can learn a non-linear transfer function from low-dose and normal-dose image pairs.
Early research focused on optimizing the architecture, but the loss function is also one of the factors that affect the quality of the generated images. Commonly used loss functions include pixel-level loss functions, perceptual loss functions, and others, and different loss functions have different properties. The goal of this study is to design a combination of loss functions that preserves the advantages of each and to determine the combination most suitable for low-dose CT denoising.
In this study, we used two kinds of neural network architectures, a residual convolutional neural network and a generative adversarial network, combined with the mean absolute error (MAE), a perceptual loss function, and a structural similarity loss function.
In the end, our results show that MAE is very helpful for training the model, that the perceptual loss function yields textures similar to those of normal-dose CT images, and that the structural similarity loss function plays an important role in the accuracy of pixel values.
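The abstract describes combining MAE, a VGG-based perceptual loss, and a structural-similarity term into a single objective. The sketch below shows one way such a hybrid loss could be assembled in PyTorch; the weights w_mae, w_vgg and w_ssim, the choice of VGG-16 feature layer, and the simplified uniform-window SSIM are illustrative assumptions rather than the thesis's actual configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

class HybridLoss(nn.Module):
    def __init__(self, w_mae=1.0, w_vgg=0.1, w_ssim=0.5):
        super().__init__()
        # Frozen VGG-16 feature extractor for the perceptual term.
        self.vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.w_mae, self.w_vgg, self.w_ssim = w_mae, w_vgg, w_ssim

    def ssim(self, x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        # Simplified SSIM using a non-overlapping 8x8 uniform window and
        # assuming images scaled to [0, 1]; a Gaussian sliding window is
        # the more common choice.
        mu_x, mu_y = F.avg_pool2d(x, 8), F.avg_pool2d(y, 8)
        var_x = F.avg_pool2d(x * x, 8) - mu_x ** 2
        var_y = F.avg_pool2d(y * y, 8) - mu_y ** 2
        cov = F.avg_pool2d(x * y, 8) - mu_x * mu_y
        s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
        return s.mean()

    def forward(self, denoised, target):
        mae = F.l1_loss(denoised, target)
        # VGG expects 3-channel input, so the single CT channel is repeated.
        feat_d = self.vgg(denoised.repeat(1, 3, 1, 1))
        feat_t = self.vgg(target.repeat(1, 3, 1, 1))
        perceptual = F.mse_loss(feat_d, feat_t)
        dssim = (1.0 - self.ssim(denoised, target)) / 2.0  # DSSIM = (1 - SSIM) / 2
        return self.w_mae * mae + self.w_vgg * perceptual + self.w_ssim * dssim

In the GAN variants mentioned above, an adversarial term from the discriminator would typically be added on top of this weighted sum; the non-GAN models would use the weighted sum alone.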
1. Introduction
2. Convolutional Neural Networks
2.1 Convolution layer
2.2 Pooling layer
2.3 Neural network training procedure
2.3.1 Parameters
2.3.2 Gradient descent
2.3.3 Backpropagation
2.3.4 Vanishing and exploding gradients
2.4 Optimizers
2.4.1 SGD (Stochastic Gradient Descent)
2.4.2 Adagrad (Adaptive Gradient Algorithm) [26]
2.4.3 RMSprop (Root Mean Square Propagation)
2.4.4 Adam (Adaptive Moment Estimation) [27]
2.5 Activation functions
2.5.1 Sigmoid
2.5.2 Tanh
2.5.3 ReLU
2.5.4 Leaky ReLU
2.6 Over-fitting and under-fitting
2.7 Regularization
2.7.1 L1 and L2 regularization
2.7.2 Dropout
2.7.3 Batch Normalization
2.8 Residual learning
2.8.1 RED-CNN
3. Generative Adversarial Networks
3.1 The original generative adversarial network
3.2 WGAN
3.3 PatchGAN
4. Loss Functions
4.1 Pixel-level loss functions
4.1.1 MAE
4.1.2 MSE
4.1.3 Using pixel-level loss functions in deep learning training
4.2 Structural similarity loss function
4.2.1 Structural similarity index (SSIM)
4.2.2 Using the structural similarity loss function in deep learning training
4.3 Perceptual loss function
4.3.1 VGG-Net
4.3.2 Using the perceptual loss function in deep learning training
4.4 Adversarial loss function
5. Experimental Design
5.1 Network architecture
5.1.1 Denoising part
5.1.2 Feature extraction part
5.1.3 Discriminator part
5.2 Training hardware
5.3 Training dataset and data preprocessing
5.3.1 Mayo Clinic data
5.4 Quantitative analysis
5.4.1 PSNR
5.4.2 RMSE
5.4.3 PSNR-HVS
6. Experimental Results and Discussion
6.1 Experiment 1: Effect on low-dose CT denoising of network models trained with different loss functions
6.1.1 Using a single loss function and observing its effect on denoising
6.1.2 Comparing MAE and MSE alone
6.2 Experiment 2: Effect on low-dose CT denoising of network models trained with combinations of loss functions
6.2.1 Combinations of MAE with the other two loss functions
6.2.2 Whether adding a GAN improves performance and yields better results
6.3 Experiment 3: Effect on low-dose CT denoising of changing the relative weights of the loss functions
6.3.1 Changing the weight of the VGG loss
6.3.2 Changing the weight of MAE
6.3.3 Changing the weight of DSSIM
6.4 Applying the results of Experiment 3 to the GAN architecture and examining how it differs from the models without a GAN
6.4.1 Adjusting the loss-function weights in G_MAE_ALL
7. Conclusion
8. Future Work
9. References

1. Brenner, D.J. and E.J. Hall, Computed tomography—an increasing source of radiation exposure. New England journal of medicine, 2007. 357(22): p. 2277-2284.
2. Smith-Bindman, R., et al., International variation in radiation dose for computed tomography examinations: prospective cohort study. BMJ, 2019. 364.
3. McCollough, C.H., CT dose: how to measure, how to reduce. Health physics, 2008. 95(5): p. 508-517.
4. Chesler, D.A., S.J. Riederer, and N.J. Pelc, Noise due to photon counting statistics in computed X-ray tomography. Journal of computer assisted tomography, 1977. 1(1): p. 64-74.
5. Maier, A., et al., Three‐dimensional anisotropic adaptive filtering of projection data for noise reduction in cone beam CT. Medical Physics, 2011. 38(11): p. 5896-5909.
6. Manduca, A., et al., Projection space denoising with bilateral filtering and CT noise modeling for dose reduction in CT. Medical physics, 2009. 36(11): p. 4911-4919.
7. Schindera, S.T., et al., Iterative reconstruction algorithm for CT: can radiation dose be decreased while low-contrast detectability is preserved? Radiology, 2013. 269(2): p. 511-518.
8. Kelm, Z.S., et al. Optimizing non-local means for denoising low dose CT. in 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. 2009. IEEE.
9. Li, S., L. Fang, and H. Yin, An efficient dictionary learning algorithm and its application to 3-D medical image denoising. IEEE transactions on biomedical engineering, 2011. 59(2): p. 417-427.
10. Chen, L., et al. Denoising of low dose CT image with context-based BM3D. in 2016 IEEE Region 10 Conference (TENCON). 2016. IEEE.
11. Diwakar, M. and M. Kumar, A review on CT image noise and its denoising. Biomedical Signal Processing and Control, 2018. 42: p. 73-88.
12. Burger, H.C., C.J. Schuler, and S. Harmeling, Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds. arXiv preprint arXiv:1211.1544, 2012.
13. Hijazi, S., R. Kumar, and C. Rowen, Using convolutional neural networks for image recognition. Cadence Design Systems Inc.: San Jose, CA, USA, 2015. 9.
14. Aloysius, N. and M. Geetha. A review on deep convolutional neural networks. in 2017 international conference on communication and signal processing (ICCSP). 2017. IEEE.
15. He, K., et al. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
16. Kang, E., J. Min, and J.C. Ye, A deep convolutional neural network using directional wavelets for low‐dose X‐ray CT reconstruction. Medical physics, 2017. 44(10): p. e360-e375.
17. Yang, W., et al., Improving low-dose CT image using residual convolutional network. IEEE access, 2017. 5: p. 24698-24705.
18. Chen, H., et al., Low-dose CT with a residual encoder-decoder convolutional neural network. IEEE transactions on medical imaging, 2017. 36(12): p. 2524-2535.
19. Goodfellow, I., et al., Generative adversarial nets. Advances in neural information processing systems, 2014. 27.
20. Zhu, J.-Y., et al. Unpaired image-to-image translation using cycle-consistent adversarial networks. in Proceedings of the IEEE international conference on computer vision. 2017.
21. Bruckert, A., et al., Deep saliency models: The quest for the loss function. Neurocomputing, 2021. 453: p. 693-704.
22. Patel, Y., S. Appalaraju, and R. Manmatha, Deep perceptual compression. arXiv preprint arXiv:1907.08310, 2019.
23. LeCun, Y., et al., Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998. 86(11): p. 2278-2324.
24. Krizhevsky, A., I. Sutskever, and G.E. Hinton, ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems, 2012. 25.
25. Ioffe, S. and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. in International conference on machine learning. 2015. PMLR.
26. Duchi, J., E. Hazan, and Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 2011. 12(7).
27. Kingma, D.P. and J. Ba, Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
28. Nair, V. and G.E. Hinton. Rectified linear units improve restricted boltzmann machines. in ICML. 2010.
29. Arjovsky, M., S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. in International conference on machine learning. 2017. PMLR.
30. Gulrajani, I., et al., Improved training of Wasserstein GANs. Advances in neural information processing systems, 2017. 30.
31. Isola, P., et al. Image-to-image translation with conditional adversarial networks. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
32. Rogowitz, B.E., et al. Perceptual image similarity experiments. in Human Vision and Electronic Imaging III. 1998. SPIE.
33. Wang, Z., et al., Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 2004. 13(4): p. 600-612.
34. Simonyan, K. and A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
35. McCollough, C., TU‐FG‐207A‐04: overview of the low dose CT grand challenge. Medical physics, 2016. 43(6Part35): p. 3759-3760.
36. Rusinek, H., et al., Pulmonary nodule detection: low-dose versus conventional CT. Radiology, 1998. 209(1): p. 243-249.
37. Marshall, H.M., et al., Screening for lung cancer with low-dose computed tomography: a review of current status. Journal of thoracic disease, 2013. 5(Suppl 5): p. S524.
38. Lessmann, N., et al. Deep convolutional neural networks for automatic coronary calcium scoring in a screening study with low-dose chest CT. in Medical Imaging 2016: Computer-Aided Diagnosis. 2016. SPIE.
39. Huynh-Thu, Q. and M. Ghanbari, Scope of validity of PSNR in image/video quality assessment. Electronics letters, 2008. 44(13): p. 800-801.
40. Egiazarian, K., et al. New full-reference quality metrics based on HVS. in Proceedings of the Second International Workshop on Video Processing and Quality Metrics. 2006.
41. Wang, Z. and A.C. Bovik, A universal image quality index. IEEE signal processing letters, 2002. 9(3): p. 81-84.
42. Chen, K., et al., RIDnet: Radiologist-Inspired Deep Neural Network for Low-dose CT Denoising. arXiv preprint arXiv:2105.07146, 2021.
43. Han, M., H. Shim, and J. Baek, Low‐dose CT denoising via convolutional neural network with an observer loss function. Medical physics, 2021. 48(10): p. 5727-5742.
 
 
 
 