Author (Chinese): 謝柏儀
Author (English): Hsieh, Bo-Yi
Thesis Title (Chinese): 基於共生系統的時域脈衝神經網路晶片線上補償方法
Thesis Title (English): A Symbiotic System-Based On-Line Compensation Scheme for a Time-Domain Spiking Neural Network Chip
Advisor (Chinese): 吳誠文
Advisor (English): Wu, Cheng-Wen
Committee Members (Chinese): 劉靖家、黃錫瑜、呂學坤
Committee Members (English): Liou, Jing-Jia; Huang, Shi-Yu; Lu, Shyue-Kung
Degree: Master's
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 109061584
Year of Publication (R.O.C.): 112 (2023)
Academic Year of Graduation: 111
Language: English
Number of Pages: 52
Keywords (Chinese): 人工智慧加速器、電路補償、錯誤容忍、時域神經網路、脈衝神經網路、共生系統
Keywords (English): AI accelerator, circuit compensation, error tolerance, time-domain neural network, spiking neural network (SNN), symbiotic system
One of the main issues for analog neural-network computing hardware is its stability in maintaining the required accuracy. Because it is hard to reduce the variation of key parameters during fabrication and field use, feasible variation-tolerant schemes need to be developed. Furthermore, as the density and complexity of VLSI circuits grow exponentially, detecting subtle defects in them with conventional testing methods becomes increasingly difficult. These defects may cause functional faults that lead to system failure, degrading reliability. In this thesis, based on a time-domain spiking neural network (SNN) architecture that we have designed, we propose a fault model that summarizes the functional faults of the SNN, including the short delay fault (SDF) and the long delay fault (LDF) caused by subtle defects in the compact synapse array, which can lead to inference accuracy loss on the SNN chip. To achieve a highly reliable time-domain SNN, we propose a symbiotic-system-based on-line compensation scheme, in which a secondary SNN is developed to detect failures of the primary SNN in real time. Simulation results show that the secondary SNN achieves a hit rate above 80% and an overkill rate below 10%. When failures are detected, our compensation method helps recover the degradation in the SNN's inference accuracy.
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IX
Chapter 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Related Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Proposed Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Chapter 2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1 Symbiotic System (SS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Spiking Neural Network (SNN) . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Time-Domain SNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.1 Delay Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.2 Functional Circuits for SNN IF Operation . . . . . . . . . . . . . . . . 12
2.3.3 Basic Module for a Layer of SNN . . . . . . . . . . . . . . . . . . . . 12
Chapter 3 Fault Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1 Fault-Free Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Faulty Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 Abstraction of Hardware Behavior . . . . . . . . . . . . . . . . . . . . . . . . 16
Chapter 4 Proposed On-Line Compensation Scheme . . . . . . . . . . . . . . . . . . . 18
4.1 Primary SNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.1.1 Variation-Tolerant Training . . . . . . . . . . . . . . . . . . . . . . . . 19
4.1.2 Fault Injection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.1.3 Dataset Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.2 Secondary SNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2.1 Primary and Secondary SNNs Interaction . . . . . . . . . . . . . . . . 27
4.2.2 Compensation Methodology . . . . . . . . . . . . . . . . . . . . . . . 29
Chapter 5 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . 30
5.1 Fault Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.2 Dataset Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.3 Secondary SNN Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.4 System Compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Chapter 6 Conclusion and Future Work . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Appendix A Variation Distribution for Other Corners . . . . . . . . . . . . . . . . . . 41
A.1 SF Corner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
A.2 FS Corner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
A.3 SS Corner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
A.4 FF Corner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
