Detailed Record

Author (Chinese): 彭兆翊
Author (English): Peng, Chao-Yi
Title (Chinese): 晶片製造的虛擬量測:基於卷積神經網路之光學蝕刻模擬和遮罩優化
Title (English): Virtual Metrology for IC Fabrication: A CNN-Based Lithography Simulation and Mask Optimization
Advisor (Chinese): 林嘉文
Advisor (English): Lin, Chia-Wen
Committee Members (Chinese): 林永隆
杜維洲
方劭云
Committee Members (English): Lin, Youn-Long
Du, Wei-Zhou
Fang, Shao-Yun
Degree: Master's
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 106061517
Publication Year (ROC): 108 (2019)
Graduation Academic Year: 107
Language: English
Number of Pages: 39
Keywords (Chinese): 虛擬量測、光學蝕刻模擬、遮罩優化、卷積神經網路、深度學習
Keywords (English): Virtual metrology, Lithography simulation, Mask optimization, Convolutional neural networks, Deep learning
In the semiconductor manufacturing industry, verifying whether a designed circuit contains defects requires waiting until the chip has actually gone through the fabrication process and then capturing and analyzing scanning electron microscope (SEM) images, a procedure that costs a great deal of time and money. It is therefore essential to develop a virtual metrology tool that, given a set of process parameters, predicts how the chip will turn out after the corresponding lithography and etching steps. Existing optical lithography simulators are extremely slow because of their high computational complexity; this thesis likewise targets lithography simulation, aiming to match the accuracy of existing simulators while speeding up the simulation.
In this thesis, we propose a deep-learning network that mimics the actual fabrication process. By learning the shape deformation between circuit layouts and their SEM images, the network produces a synthetic SEM image that supports virtual metrology and analysis. In addition, we incorporate process parameters to model how different process conditions affect the fabricated result.
In general, the fabricated result of a circuit layout cannot be identical to the intended layout because of exposure, chemical reactions, and physical effects. We therefore attach an additional image-generation network to the pre-trained lithography simulation network, so that it learns how to modify the original layout into a new mask whose fabricated result closely matches the original layout, thereby achieving unsupervised mask optimization.
Finally, we evaluate our method on a variety of benchmark layouts and compare it with previous methods in terms of accuracy, confirming the effectiveness and robustness of our approach.
In semiconductor manufacturing, one cannot tell whether an already designed IC layout has defects until scanning electron microscope (SEM) images of the metal layers are captured and analyzed after wafer fabrication, which makes IC layout verification very costly and time-consuming. It is therefore essential to develop virtual metrology tools that can predict the properties of a wafer from the fabrication configurations. Existing physics-based lithography simulation schemes are very time-consuming because of their high computational complexity. This thesis focuses on data-driven lithography simulation and aims to match the accuracy of existing lithography simulators while substantially reducing simulation time.
In this thesis, we first propose a convolutional neural network, called LithoNet, that mimics the manufacturing procedure to generate a synthetic image for virtual metrology. By learning the shape correspondence between a layout image and its corresponding SEM image, the proposed network can synthesize an SEM-styled image from a given input layout. In addition, we condition the generator on the wafer fabrication parameters so as to model the parametric product variations that can be inspected in the SEM images.
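As a rough illustration of this conditioning idea, the following PyTorch-style sketch broadcasts the fabrication parameters to extra image channels and concatenates them with the layout before an encoder-decoder generator. Every name here (LithoNetSketch, num_params, the layer widths) is a hypothetical assumption for illustration, not the architecture actually used in the thesis.

import torch
import torch.nn as nn

class LithoNetSketch(nn.Module):
    # Hypothetical encoder-decoder generator conditioned on wafer fabrication
    # parameters; an illustration only, not the thesis's actual model.
    def __init__(self, num_params=2, base_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1 + num_params, base_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),  # synthetic SEM-styled image in [0, 1]
        )

    def forward(self, layout, fab_params):
        # layout: (B, 1, H, W) binary layout; fab_params: (B, num_params)
        b, _, h, w = layout.shape
        cond = fab_params.view(b, -1, 1, 1).expand(-1, -1, h, w)
        x = torch.cat([layout, cond], dim=1)  # parameters enter as extra channels
        return self.decoder(self.encoder(x))

For instance, LithoNetSketch()(torch.rand(4, 1, 256, 256), torch.rand(4, 2)) would return a batch of four synthetic 256x256 SEM-styled images, each conditioned on its own pair of fabrication parameters.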
Moreover, existing lithography simulation algorithms used to suggest corrections to a lithographic photomask for a given layout, which check whether the shape of the fabricated IC circuitry exactly matches the layout design, are computationally very expensive. Thus, we propose OPCNet, which cooperates with a pre-trained LithoNet to mimic the optical proximity correction (OPC) procedure used to correct the layout design and generate a photomask.
We evaluate our method on various benchmark layout patterns and compare it with existing methods. The experimental results demonstrate the effectiveness and robustness of the proposed method.
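To make the unsupervised mask-optimization loop concrete, here is a minimal training-step sketch under the same assumptions (opc_train_step, the L1 reconstruction loss, and the optimizer are illustrative choices, not the thesis's actual losses): OPCNet proposes a corrected mask from the target layout, the frozen pre-trained LithoNet simulates the fabricated result of that mask, and the loss penalizes the gap between the simulated result and the original layout, so no ground-truth mask is ever required.

import torch.nn.functional as F

def opc_train_step(opcnet, lithonet, layout, fab_params, optimizer):
    # One hypothetical unsupervised training step: the target layout itself
    # supervises the mask through the frozen lithography simulator.
    lithonet.eval()
    for p in lithonet.parameters():          # keep the pre-trained LithoNet fixed
        p.requires_grad_(False)

    mask = opcnet(layout)                    # proposed corrected photomask
    simulated = lithonet(mask, fab_params)   # predicted post-fabrication shape

    loss = F.l1_loss(simulated, layout)      # fabricated result should match intent

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the gradient flows through the frozen LithoNet into OPCNet's parameters, the mask can be refined without any labeled masks, matching the unsupervised setting described in the abstract.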
Abstract (in Chinese) i
Abstract ii
Content iii
Chapter 1 Introduction 5
1.1 Research Background 5
1.2 Motivation 6
1.3 Contribution 9
Chapter 2 Related Work 11
2.1 Virtual Metrology 11
2.2 Lithography Simulation 12
2.3 Mask Optimization 13
Chapter 3 Proposed Method 14
3.1 Lithography Network 14
3.1.1 Overview of Architecture 14
3.1.2 Step I: Image Domain Transfer 15
3.1.3 Step II: Shape Deformation Estimation 16
3.1.4 Loss Functions for Training 17
3.2 Optical Proximity Correction Network 19
3.2.1 Architecture of OPCNet 19
3.2.2 Loss Functions for Training 20
Chapter 4 Experiments and Discussions 22
4.1 Lithography Network 22
4.1.1 Dataset 22
4.1.2 Metrics 22
4.1.3 Evaluation of LithoNet 23
4.2 Optical Proximity Correction Network 32
4.2.1 Dataset 32
4.2.2 Comparison among Different Loss Functions 32
4.2.3 Mask Prediction Results 33
Chapter 5 Conclusion 35
References 36