
Detailed Record

Author (Chinese): 董淇
Author (English): Tung, Chi
Title (Chinese): 以憶阻器建構之脈衝式神經網路內建自我校準方案
Title (English): A Built-In Self-Calibration Scheme for Memristor-Based Spiking Neural Networks
Advisor (Chinese): 吳誠文
Advisor (English): Wu, Cheng-Wen
Committee Members (Chinese): 黃錫瑜、劉靖家、呂學坤
Committee Members (English): Huang, Shi-Yu; Liou, Jing-Jia; Lu, Shyue-Kung
Degree: Master's
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 109061559
Publication Year (ROC): 112 (2023)
Graduation Academic Year (ROC): 111
Language: English
Number of Pages: 48
Keywords (Chinese): 人工智慧、校準、憶阻器、仿神經型態運算、製程偏移、電阻式記憶體、脈衝式神經網路
Keywords (English): AI, calibration, memristor, neuromorphic computing, process variation, RRAM, spiking neural network (SNN)
Abstract (Chinese): In recent years, memristor-based neuromorphic computing architectures have been widely studied and implemented on resistive random-access memory (RRAM). To improve the energy efficiency of artificial intelligence (AI) computing, such hardware architectures have been used to realize both deep neural network (DNN) and spiking neural network (SNN) models. In a memristor-based SNN chip, the synaptic weights are typically stored in a memristor array, while the neuron operations are implemented with analog circuits. Because both the memristor devices and the analog circuits are highly sensitive to shifts in process parameters, the inference accuracy of such a neural-network chip can still degrade due to process variation, even after it has passed the production test. In this work, we investigate the impact of process variation on SNN computation, including memristor resistance variation and transistor parameter variation. For a given range of process variation, we propose a circuit calibration scheme that effectively recovers the inference accuracy, and we develop a built-in self-calibration (BISC) circuit architecture. Experimental results show that, with the proposed calibration method, the image-classification inference accuracy of the SNN chip is improved by 76.8%, at an area overhead of only 1%.
Abstract (English): Memristor-based neuromorphic computing architectures, which are built on resistive random-access memory (RRAM) cell arrays, have been widely investigated recently. They are used to implement both deep neural network (DNN) and spiking neural network (SNN) models, aiming at better energy efficiency in AI computing. In memristor-based SNNs, the synaptic weights are normally implemented with a memristor cell array, and the neurons are mainly analog circuits. Since memristors and analog circuits are sensitive to variation in process parameters, the inference accuracy of an SNN can degrade due to process variation, even for SNNs that pass the production test. In this work, we investigate the impact of process variation on SNNs, such as memristor resistance variation and transistor device variation. We propose a calibration scheme that can effectively recover the inference accuracy, given process variation within a specified range. We also develop a built-in self-calibration (BISC) architecture based on an SNN chip that we have designed. Experimental results show that the inference accuracy of the SNN ASIC can be improved by up to 76.8%, with only 1% silicon area overhead.
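To make the calibration idea concrete, the following is a minimal illustrative Python sketch, not the thesis's actual BISC circuit or algorithm. It models memristor conductance variation in a crossbar-based integrate-and-fire layer and searches a single threshold value that best restores the chip's golden firing behavior on a small calibration set. The log-normal variation model, the simplified neuron model, and all function names (crossbar_currents, apply_variation, if_layer, calibrate_threshold) are assumptions made for illustration only.

```python
# Illustrative sketch only: a simplified crossbar plus integrate-and-fire layer with
# memristor conductance variation, and a naive single-knob threshold calibration.
# The variation model, neuron model, and calibration criterion are assumptions for
# illustration; they are NOT the BISC circuit or algorithm proposed in the thesis.
import numpy as np

rng = np.random.default_rng(0)

def crossbar_currents(spikes, conductance):
    """Column currents of a memristor crossbar for binary input spike vectors."""
    return spikes @ conductance  # I_j = sum_i s_i * G_ij (ideal crossbar, no IR drop)

def apply_variation(conductance, sigma=0.15):
    """Perturb each cell's conductance with multiplicative log-normal variation."""
    return conductance * rng.lognormal(mean=0.0, sigma=sigma, size=conductance.shape)

def if_layer(spikes, conductance, threshold):
    """Integrate-and-fire: an output neuron fires if its column current exceeds the threshold."""
    return (crossbar_currents(spikes, conductance) >= threshold).astype(int)

def calibrate_threshold(cal_spikes, ref_out, conductance, candidates):
    """Pick the global threshold that best reproduces the reference (golden) outputs on a
    small calibration set, a software stand-in for tuning an on-chip calibration knob."""
    scores = [np.mean(if_layer(cal_spikes, conductance, th) == ref_out) for th in candidates]
    return candidates[int(np.argmax(scores))]

# Toy setup: 64 inputs, 10 output neurons, binary spike inputs.
G_nominal = rng.uniform(0.2, 1.0, size=(64, 10))
cal_spikes = rng.integers(0, 2, size=(200, 64))
th_nominal = 20.0
ref_out = if_layer(cal_spikes, G_nominal, th_nominal)        # golden behavior

G_varied = apply_variation(G_nominal)                         # chip after process variation
before = np.mean(if_layer(cal_spikes, G_varied, th_nominal) == ref_out)
th_cal = calibrate_threshold(cal_spikes, ref_out, G_varied,
                             candidates=np.linspace(10.0, 30.0, 81))
after = np.mean(if_layer(cal_spikes, G_varied, th_cal) == ref_out)
print(f"output match before calibration: {before:.3f}, after: {after:.3f} (threshold={th_cal:.2f})")
```

In the actual design, the tuning knob would be a circuit-level calibration parameter adjusted by the on-chip BISC module using a calibration dataset, rather than a software threshold; the sketch only shows the search-and-restore idea behind such a scheme.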
List of Figures......................................................v
List of Tables....................................................viii
Chapter 1 Introduction...............................................1
1.1 Motivation....................................................1
1.2 Introduction to Memristor-Based Spiking Neural Network (SNN)..2
1.2.1 SNN Model...................................................2
1.2.2 Memristor Cell Array Model..................................2
1.2.3 Memristor-Based SNN Architecture............................4
1.3 Related Works and Proposed Method.............................7
1.4 Organization..................................................8
Chapter 2 Hardware Non-Ideal Effects of Memristor-Based SNNs.........9
2.1 Circuit-Level Simulation of SNN Circuit.......................9
2.2 Impact of Process Variation..................................12
2.2.1 Device Variation...........................................12
2.2.2 Memristor Resistance Variation.............................15
2.3 Inference Accuracy of SNN Hardware...........................18
Chapter 3 Proposed Built-In Self-Calibration (BISC) Scheme..........21
3.1 Calibration Methodology......................................21
3.2 Calibration Algorithm........................................23
3.3 BISC Architecture............................................24
3.3.1 BISC Module................................................26
3.3.2 Control of BISC Module.....................................27
3.3.3 Buck Regulator and Resistor Path...........................29
Chapter 4 Experimental Results......................................31
4.1 Calibration Dataset and Cost Analysis........................31
4.2 Calibration Results..........................................32
Chapter 5 Conclusion and Future Work................................44
5.1 Conclusion...................................................44
5.2 Future Work..................................................44
Bibliography........................................................46
 
 
 
 