
Detailed Record

Author (Chinese): 陳佑銓
Author (English): Chen, You-Chiuan
Thesis title (Chinese): 使用人工神經網路及生成對抗網路產生核磁共振指紋字典
Thesis title (English): Generation of Magnetic Resonance Fingerprinting Dictionary by Neural Network and Generative Adversarial Network
Advisor (Chinese): 彭旭霞
Advisor (English): Peng, Hsu-Hsia
Committee members (Chinese): 劉益瑞、阮春榮、黃騰毅
Committee members (English): Liu, Yi-Jui; Juan, Chun-Jung; Huang, Teng-Yi
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Biomedical Engineering and Environmental Sciences
Student ID: 109010509
Year of publication (ROC calendar): 112 (2023)
Academic year of graduation: 111
Language: Chinese
Number of pages: 86
Keywords (Chinese): 深度學習、核磁共振影像、核磁共振指紋
Keywords (English): deep learning; MRI; MRF
Quantitative magnetic resonance imaging (MRI) is a method for producing quantitative MR images; quantitative scans help detect diseases such as cancer, edema, and sclerosis. However, because the scans take a long time and long scans are prone to artifacts, quantitative imaging is relatively cost-inefficient from a hospital's point of view.
Magnetic resonance fingerprinting (MRF) is a newer technique that shortens the scan time. By using a pseudo-noise pulse sequence, the signals produced by different tissues become more distinguishable, and the measured signal is then matched against a dictionary created by a Bloch equation simulator to find the most similar entry. Building the dictionary and performing the matching greatly lengthen image reconstruction, putting real-time imaging out of reach. Although many methods aim to reduce the algorithmic complexity, considerable time is still required, so shortening the reconstruction time has become a popular MRF research topic in recent years.
Deep learning has recently been applied widely and very successfully in many fields, such as image registration, super-resolution imaging, and medical image segmentation and classification. It can greatly reduce prediction time without sacrificing much accuracy, a property well suited to MRF image reconstruction.
In this study, we used four deep learning models, a basic deep neural network, a generative adversarial network, a Wasserstein-distance-based generative adversarial network, and a cycle-consistent generative adversarial network, to automatically generate MRF dictionaries, using more than 1000 sequence-and-dictionary pairs as training data. The pulse sequence parameters (T1, T2, repetition times, echo times, and flip angles) served as the model input, and the output was the signal magnitude produced by the Bloch equation simulator; the goal was to replace the Bloch equation simulator with a deep learning model for creating MRF dictionaries.
Through this work, the models successfully predicted the signals of MRF sequences they had never seen, compressed the features of a training dataset of more than 16 GB into a deep learning model of about 5 MB, and shortened dictionary generation from tens of minutes to a few seconds, greatly reducing both the time and the storage required to generate a dictionary.
Quantitative magnetic resonance imaging (MRI) assigns an absolute scale to each pixel, which aims to reduce the variability of image outputs. Quantitative MRI also provides additional pathological information for diagnosis, for example of cancer, edema, and sclerosis. The value of quantitative maps is well recognized, but their clinical use remains challenging because of the long scan time.
Magnetic resonance fingerprinting (MRF) is a relatively novel sequence for generating quantitative MRI with a shorter scan time. MRF uses a pseudo-noise pulse sequence so that each tissue produces a distinctive signal evolution that depends on its tissue properties. The MRF dictionary is a lookup table of signals simulated with a Bloch equation simulator, and image reconstruction compares the measured signal with each entry in the dictionary. Although MRF shortens the scan itself, the image reconstruction takes a long time, so reducing the dictionary generation and reconstruction time has become a popular research topic in recent years.
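For illustration, the dictionary matching described above is commonly implemented as a nearest-neighbor search by normalized inner product. The following minimal Python/NumPy sketch is not taken from the thesis; the array shapes, variable names, and the inner-product criterion are assumptions used only to show the idea.

import numpy as np

def match_fingerprint(measured, dictionary, t1_values, t2_values):
    """Match one measured MRF signal evolution against a precomputed dictionary.

    measured   : array of shape (n_timepoints,), the acquired signal
    dictionary : array of shape (n_entries, n_timepoints), Bloch-simulated signals
    t1_values, t2_values : arrays of shape (n_entries,), the (T1, T2) of each entry
    """
    # Normalize each dictionary entry and the measured signal so that matching
    # reduces to finding the maximum inner product (cosine similarity).
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    m_norm = measured / np.linalg.norm(measured)
    scores = np.abs(d_norm @ np.conj(m_norm))
    best = int(np.argmax(scores))
    return t1_values[best], t2_values[best]

In practice the dictionary holds one row per (T1, T2) combination, which is why both building it and searching it scale poorly as the parameter grid grows.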
Deep learning has recently been applied widely in many fields, such as image coregistration, super-resolution imaging, medical image classification, and medical image segmentation. It reduces prediction time without sacrificing much performance, a property well suited to MRF image reconstruction.
In this study, we used four kinds of deep learning models, an artificial neural network (ANN), a generative adversarial network (GAN), a Wasserstein generative adversarial network (WGAN), and a cycle-consistent generative adversarial network (cycle GAN), to generate the MRF dictionary. We generated more than one thousand MRF sequences and dictionaries as training data and used the MRF sequence parameters as model input to mimic the output of the Bloch equation simulator and create the dictionary. We found that the ANN was more suitable than the GAN, WGAN, and cycle GAN, because the dictionary-generation task does not require the model to invent unseen content during training. We also found that simpler sequences were easier to learn and therefore gave better performance.
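As a rough illustration of this setup, a small fully connected network can be trained to map sequence and tissue parameters to the simulated signal. The PyTorch sketch below is an assumption-laden example, not the thesis's actual architecture: the layer sizes, input layout (T1 and T2 followed by per-timepoint TR, TE, and flip angle), and training details are placeholders; only the overall idea of regressing Bloch-simulated signals with an MAE (L1) loss follows the text above.

import torch
import torch.nn as nn

class DictionaryANN(nn.Module):
    """Fully connected network mapping sequence and tissue parameters to an MRF signal."""
    def __init__(self, n_timepoints: int = 500, hidden: int = 256):
        super().__init__()
        # Input: T1, T2 plus per-timepoint TR, TE, and flip angle of the pulse sequence.
        self.net = nn.Sequential(
            nn.Linear(2 + 3 * n_timepoints, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_timepoints),   # predicted signal at each timepoint
        )

    def forward(self, x):
        return self.net(x)

model = DictionaryANN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                          # MAE loss

# One training step on dummy tensors standing in for (parameters, Bloch-simulated signal) pairs.
params = torch.randn(32, 2 + 3 * 500)
bloch_signal = torch.randn(32, 500)
loss = loss_fn(model(params), bloch_signal)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Once trained, such a network replaces the Bloch equation simulator at dictionary-generation time: a forward pass over all parameter combinations produces the dictionary in seconds rather than minutes.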
The models successfully predicted MRF signals they had never seen, compressed the training data from more than 16 GB to about 5 MB, and reduced the dictionary generation time from tens of minutes to several seconds. In conclusion, both the memory requirement and the generation time were reduced by our models.
Abstract (Chinese) ii
Abstract (English) iii
Preface and Acknowledgements iv
Contents v
Figure viii
Table xix
Chapter 1. Introduction 1
1.1 Quantitative Magnetic Resonance Imaging 1
1.1.1 Magnetic Resonance Image 1
1.1.2 Traditional Quantitative Imaging 1
1.1.3 Magnetic Resonance Fingerprinting 2
1.2 Introduction to Deep Learning & Machine Learning 6
1.2.1 Optimizer Algorithm 8
1.2.2 Loss Function and Loss 12
1.2.3 Activation Function 15
1.2.4 Datasets 19
1.3 Neural Network Model 20
1.3.1 Artificial Neural Network 20
1.3.2 Generative Adversarial Network 21
1.3.3 Wasserstein Generative Adversarial Network 23
1.3.4 Cycle GAN 25
1.4 Deep Learning Related Works in MRF 28
1.4.1 Dictionary Generation 29
1.4.2 Image Reconstruction 34
1.5 Motivation 34
1.6 Dissertation Orientation 35
Chapter 2. Methods 37
2.1 Dataset Generation 37
2.1.1 Sequence Generation 38
2.1.2 Dictionary Generation 39
2.2 Dataset Split & Environment 40
2.3 Model 40
2.3.1 Neural Network 41
2.3.2 Generative Adversarial Network 42
2.3.3 Wasserstein Generative Adversarial Network 43
2.3.4 Cycle Generative Adversarial Network 45
Chapter 3. Results 48
3.1 Five Sequences in ANN 49
3.2 Five Sequences in GAN 53
3.3 Five Sequences in WGAN 60
3.4 Five Sequences in Cycle GAN 67
Chapter 4. Discussion and Conclusion 70
4.1 MAE Loss of Five Sequences 71
4.1.1 Training Process 71
4.1.2 Difference Between Five Sequences 72
4.2 Difference of Architecture Between ANN, GAN and WGAN 74
4.2.1 GAN Lambda Decision 74
4.2.2 WGAN Lambda Decision 75
4.2.3 Difference Between Three Models 75
4.2.4 Cycle GAN 76
4.3 Limitation and Future Work 76
4.4 Conclusion 77
Chapter 5. Appendix 78
5.1 Definition of Wasserstein Distance 78
5.2 Abbreviation List 78
5.3 Response to Oral Defense 80
References 83