Detailed Record
Author (Chinese): 趙浚宏
Author (English): Chao, Chun-Hung
Title (Chinese): 卷積閘控圖神經網路應用於放射治療計畫輪廓勾畫
Title (English): Radiotherapy Target Contouring with Convolutional Gated Graph Neural Network
Advisor (Chinese): 孫民
Advisor (English): Sun, Min
Committee Members (Chinese): 何宗易, 何宗穎
Committee Members (English): Ho, Tsung-Yi; Ho, Tsung-Ying
Degree: Master's
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 106061611
Publication Year (ROC calendar): 109 (2020 CE)
Academic Year of Graduation: 108 (2019–2020)
Language: English
Number of Pages: 34
Keywords (Chinese): 放射治療計畫輪廓, 影像分割, 圖神經網路
Keywords (English): Radiotherapy Target Contour, Image Segmentation, Graph Neural Network
Abstract:
Given image slices from tomographic medical imaging, radiation oncologists spend a huge amount of effort identifying cancerous tissues and segmenting out treatment regions on every slice (a process referred to as target contouring). We leverage a Gated Graph Neural Network (GGNN) to efficiently consider meaningful information across slices: we propose a novel convolutional Gated Graph Propagator (GGP) with a learnable weighted adjacency matrix to propagate information through slices. Furthermore, since physicians often refine the contours on a few specific slices, our method seamlessly incorporates this slice-wise interaction to improve predictions on the remaining slices. To evaluate our method, we collected an Esophageal Cancer Radiotherapy Treatment Target Contouring dataset of 81 patients. Moreover, results on the organ contouring task of the public PROMISE12 challenge dataset show that our method has the potential to be applied to diverse medical tasks involving tomographic imaging. Equipped with the ability to make predictions efficiently and to incorporate human interactive inputs, our proposed method is well suited for clinical scenarios.
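
To make the propagation mechanism in the abstract concrete, the following is a minimal sketch of how a convolutional gated graph propagator with a learnable weighted adjacency matrix might look in PyTorch. It is reconstructed from the abstract alone, not from the thesis code: the class name ConvGatedGraphPropagator, the GRU-style gate layout, the row-softmax normalization of the adjacency matrix, and the number of propagation steps are all illustrative assumptions.

```python
# Sketch only: GRU-style gated updates where messages between slice nodes
# are computed with 2D convolutions and mixed by a learnable, softmax-
# normalized adjacency matrix. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGatedGraphPropagator(nn.Module):
    def __init__(self, num_slices, channels, steps=3):
        super().__init__()
        self.steps = steps
        # Learnable weighted adjacency over slice nodes (assumed dense,
        # row-normalized with a softmax at propagation time).
        self.adj = nn.Parameter(torch.randn(num_slices, num_slices))
        # Convolutional message function shared across edges.
        self.msg = nn.Conv2d(channels, channels, 3, padding=1)
        # Convolutional GRU-style gates: update z, reset r, candidate state.
        self.gate_z = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.gate_r = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, h):
        # h: (num_slices, channels, H, W) per-slice feature maps
        # produced by a 2D encoder (slices act as the batch dimension).
        A = F.softmax(self.adj, dim=1)  # row-normalized edge weights
        for _ in range(self.steps):
            m = self.msg(h)  # per-node messages
            # Aggregate across slices: agg[i] = sum_j A[i, j] * m[j]
            agg = torch.einsum("ij,jchw->ichw", A, m)
            z = torch.sigmoid(self.gate_z(torch.cat([h, agg], dim=1)))
            r = torch.sigmoid(self.gate_r(torch.cat([h, agg], dim=1)))
            h_tilde = torch.tanh(self.cand(torch.cat([r * h, agg], dim=1)))
            h = (1 - z) * h + z * h_tilde  # gated state update
        return h  # refined per-slice features, fed to a decoder
```

Under the same assumptions, the interactive setting could be accommodated by replacing the node features of physician-refined slices with features re-encoded from the corrected contours and running the propagator again, so that the learned adjacency spreads the corrections to the remaining slices.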
Table of Contents:
Abstract (Chinese)
Abstract
1 Introduction
1.1 Motivations and Problem Description
1.2 Main Contributions
1.3 Related Work
2 Approach
2.1 Notations
2.2 Encoder and Decoder
2.3 Gated Graph Propagator
2.4 Static Setting
2.5 Interactive Setting
3 Dataset
3.1 Esophageal Cancer Radiotherapy Treatment Target Contouring Dataset
3.2 Anatomical 3D Registration
4 Experiments
4.1 Implementation Details
4.2 Baseline Methods
4.3 Evaluation Metrics
4.4 Segmentation Comparison
4.5 Interactive Comparison
4.6 PROMISE12 Challenge
4.7 Human Evaluation
5 Conclusion
References
(Full text not authorized for public access)