
Detailed Record

Author (Chinese): 黃琦雯
Author (English): Huang, Chi-Wen
Title (Chinese): 基於電腦斷層影像之食道癌放射治療計劃劑量區域擷取系統
Title (English): Using Computed Tomography Images for Dose Contour Segmentation in Esophageal Cancer Radiotherapy Treatment Plan
Advisor (Chinese): 孫民
Advisor (English): Sun, Min
Committee (Chinese): 黃仲菁、何宗穎
Committee (English): Huang, Chung-Ching; Ho, Tsung-Ying
Degree: Master's
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 105061590
Year of Publication (ROC calendar): 108 (2019)
Graduation Academic Year: 107
Language: English
Number of Pages: 41
Keywords (Chinese): 食道癌、電腦斷層影像、劑量區域切割
Keywords (English): Esophageal Cancer; Computed Tomography; Dose Contour Segmentation
Esophageal cancer is the sixth deadliest cancer in the world. Because it is difficult to diagnose at an early stage, more than 80% of patients have already progressed to stage II or beyond by the time they are diagnosed, so the tumor cannot be removed by surgery alone, making radiotherapy one of the primary treatments for esophageal cancer.
The critical bottleneck in esophageal cancer radiotherapy planning is that radiation oncologists must spend considerable time delineating the treatment plan: an experienced radiation oncologist needs roughly 30-60 minutes, while a less experienced one needs about 1-2 hours. This is a time-consuming and labor-intensive task, and the time at which treatment begins affects how well the patient recovers. We therefore consider it essential to build a system that substantially reduces the time radiation oncologists spend delineating esophageal cancer radiotherapy plans while maintaining high accuracy.
This thesis accordingly proposes a dose contour segmentation system for esophageal cancer radiotherapy planning. Given a patient's PET and CT images, the system automatically segments three treatment regions: the Gross Tumor Volume (GTV), the Clinical Target Volume (CTV), and the Planning Target Volume (PTV). Radiation oncologists can then revise the system's predictions instead of delineating the radiotherapy plan from scratch.
Unlike previous studies of esophageal cancer, this work collects three imaging modalities, RT-CT, PET-CT, and PET, as system input, which brings the system's predictions closer to an oncologist's judgment. The segmentation results were also imported into a hospital workflow to assist radiation oncologists. Post-study interviews show that the system does simplify the planning workflow, shortens delineation time, and reduces the oncologists' fatigue and stress, allowing them to deliver high-quality treatment to patients more promptly.
Esophageal cancer is the sixth deadliest cancer worldwide, and fewer than 20% of patients are diagnosed at surgically resectable stage I disease, making chemoradiation its primary treatment. The critical bottleneck in the treatment flow is the delineation of a patient's radiotherapy plan, which takes radiation oncologists with more than 10 years of experience 30-60 minutes, and less experienced radiation oncologists 1-2 hours. We argue that it is critical to address this bottleneck, since the time at which treatment actually begins has been shown to affect patient outcomes. We treat radiotherapy planning as a contour segmentation task: given CT and PET images, we segment the Gross Tumor Volume (GTV), Clinical Target Volume (CTV), and Planning Target Volume (PTV). We propose a cascade model architecture that leverages a location prior on the tumor to achieve state-of-the-art volume segmentation accuracy on our newly collected esophageal cancer dataset. Unlike previous studies, we collect three kinds of medical images (RT-CT, PET-CT, and PET) from 81 patients. By referring to this variety of medical images, our system's predictions can come closer to a physician's judgment. According to the questionnaire and interview results, our system's assistance effectively reduces contouring time, as well as the fatigue and stress that contouring causes, so patients can obtain faster and higher-quality radiotherapy treatment plans.
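The abstract describes a cascade architecture in which a first stage produces a tumor location prior that a second stage uses to segment GTV, CTV, and PTV from CT and PET volumes, but this record contains no implementation details. The sketch below shows one minimal way such a two-stage cascade could look in PyTorch; the network depth, channel widths, and every name in it (TinyUNet3D, CascadeSegmenter) are illustrative assumptions, not the thesis's actual model.

```python
# Illustrative sketch only: a two-stage cascade where stage 1 predicts a coarse
# tumor probability map (the "location prior") and stage 2 conditions on it to
# segment GTV/CTV/PTV. All architecture choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """Two 3D convolutions with batch norm and ReLU (a common U-Net building block)."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """A deliberately small 3D encoder-decoder used for both cascade stages."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.dec1 = conv_block(32 + 16, 16)   # decoder sees upsampled features + skip
        self.head = nn.Conv3d(16, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool3d(e1, 2))
        up = F.interpolate(e2, size=e1.shape[2:], mode="trilinear", align_corners=False)
        d1 = self.dec1(torch.cat([up, e1], dim=1))
        return self.head(d1)

class CascadeSegmenter(nn.Module):
    """Stage 1: CT + PET -> coarse tumor map. Stage 2: CT + PET + prior -> GTV/CTV/PTV."""
    def __init__(self):
        super().__init__()
        self.stage1 = TinyUNet3D(in_ch=2, out_ch=1)
        self.stage2 = TinyUNet3D(in_ch=3, out_ch=3)

    def forward(self, ct, pet):
        x = torch.cat([ct, pet], dim=1)          # stack modalities as channels
        prior = torch.sigmoid(self.stage1(x))    # location prior in [0, 1]
        logits = self.stage2(torch.cat([x, prior], dim=1))
        return prior, logits                     # per-voxel logits for 3 target volumes

if __name__ == "__main__":
    model = CascadeSegmenter()
    ct = torch.randn(1, 1, 32, 64, 64)   # (batch, channel, depth, height, width)
    pet = torch.randn(1, 1, 32, 64, 64)
    prior, logits = model(ct, pet)
    print(prior.shape, logits.shape)     # [1, 1, 32, 64, 64] and [1, 3, 32, 64, 64]
```

The design point this sketch illustrates is the one the abstract names: the fine segmentation stage receives the coarse tumor probability map as an extra input channel, so its GTV/CTV/PTV predictions can condition on where the tumor is likely to be.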
Table of Contents:
Abstract (Chinese)
Abstract
Acknowledgements
1 Introduction
2 Related Work
2.1 Medical Image Segmentation
2.2 Esophageal Segmentation
2.3 RadioTherapy (RT) Planning
2.4 Location Prior for Segmentation
3 Dataset and Data Preprocessing
3.1 Esophageal Cancer Dataset
3.2 Data Preprocessing
3.2.1 Data Anonymization
3.2.2 Registration
4 System and Results
4.1 Notations
4.2 Model Architecture
4.3 Training Objectives
4.4 Experiment Setup and Baseline Methods
4.5 Results
5 User Studies
5.1 Defining Requirements and Ideation Methods
5.2 Experimental Design
5.2.1 Environment
5.2.2 System Settings
5.2.3 Participants
5.2.4 Procedures
5.2.5 Research Limitations
5.3 Quantitative Analysis
5.4 Post-study Interview
6 Conclusion and Future Work
6.1 Conclusion
6.2 Future Work
References
Appendix