
Detailed Record

Author (Chinese): 胡加昀
Author (English): Hu, Jia-Yun
Title (Chinese): 基於電阻式隨機存取記憶體之神經網路硬體的缺陷模擬、自我修復和錯誤更正方法
Title (English): RRAM-Based Neuromorphic Hardware Fault Simulation and Reliability Improvement by Self-Healing and Error Correction
Advisor (Chinese): 吳誠文
Advisor (English): Wu, Cheng-Wen
Committee members (Chinese): 李昆忠、黃錫瑜、李進福
Committee members (English): Lee, Kuen-Jong; Huang, Shi-Yu; Li, Jin-Fu
Degree: Master's
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 105061543
Publication year (ROC calendar): 107
Academic year of graduation: 106
Language: English
Number of pages: 41
Keywords (Chinese): 神經網絡、記憶體內建自我修復、錯誤偵測與更正、仿神經運算、電阻式隨機存取記憶體、記憶體測試
Keywords (English): neural network; built-in self-repair; error detection and correction; neuromorphic computing; RRAM; memory testing
Statistics:
  • Recommendations: 0
  • Views: 745
  • Rating: *****
  • Downloads: 8
  • Bookmarks: 0
Abstract (Chinese): In recent years, neural networks have been regarded as a key driver of artificial intelligence, but computation on conventional computer architectures is too slow, so much research has focused on new hardware architectures for neural networks. Resistive random access memory (RRAM) can be used to accelerate neural networks, because this architecture mimics the way the human brain works, performing both storage and computation in the same memory cell. However, process variation may introduce defects into the memory and degrade the reliability of the neural network. Experimental results show that when 10% of the cells in the memory array are faulty, the recognition accuracy of the MLP drops by 10%, while that of LeNet 300-100 and LeNet 5 drops by more than 65%. To solve this problem, we first enumerate all fault types in the memory and design a simulator to verify whether a test algorithm can detect all of them. In addition, we propose self-healing and error correction methods to avoid a large drop in the recognition accuracy of the neural network and to extend its lifetime. Experimental results show that, if the accuracy degradation is kept within 5%, the MLP with error correction can tolerate 40% faulty cells in the memory array, while LeNet 300-100 and LeNet 5 can tolerate up to 60%. Moreover, error correction extends the hardware lifetime by an additional 5%. Comparing the two methods, error correction is more effective, because even self-healing with spare memory still cannot match the performance of error correction alone.
Abstract (English): Neural networks (NNs) have been a key factor in the success of many AI applications. As the von Neumann architecture is inefficient for NN computation, researchers have been investigating new semiconductor devices and architectures for neuromorphic computing. The crossbar RRAM, an emerging non-volatile memory composed of memristor devices, can be used to accelerate or emulate NN computation. Memristors allow storage and computation on the same cell, mirroring the architecture of the human brain. However, memristor device defects introduced during manufacturing or field use may degrade NN performance, causing reliability issues in the neuromorphic hardware. In this work, we summarize the fault models and design a fault simulator for crossbar multi-level RRAM. We also consider two existing fault models for the 1T1R RRAM cell, i.e., the stuck-at fault and the transistor stuck-on fault. We simulate three network models on the MNIST data set, which contains grayscale handwritten digits from 0 to 9. Evaluating the effect of faults on the NN shows that with about 10% faulty cells in the memristor array, the accuracy of the MLP (multi-layer perceptron) model degrades by about 10%, while that of LeNet 300-100 and LeNet 5 degrades by more than 65%. We therefore propose a self-healing approach and an error correction approach to reduce the accuracy degradation and improve the reliability (lifetime) of the neuromorphic hardware. Our simulation results show that, if the accuracy degradation is limited to within 5%, the proposed self-healing approach tolerates up to 20% faulty cells for the MLP model and up to 40% for the LeNet 300-100 and LeNet 5 models, while the proposed error correction approach tolerates up to 40% faulty cells for the MLP model and up to 60% for the LeNet 300-100 and LeNet 5 models. The error correction method also extends the lifetime of the neuromorphic hardware by 5% for the MLP model and by 10% for the LeNet 300-100 and LeNet 5 models. Finally, comparing chips repaired with spare rows/columns, error correction is more powerful than self-healing: self-healing with redundancy still tolerates fewer faults than error correction without redundancy.
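The fault-injection experiment described in the abstract can be illustrated with a small sketch: stuck-at faults force randomly chosen multi-level RRAM cells to a fixed resistance level regardless of the stored weight. The following is a minimal illustration only, not the thesis's actual simulator; the 4-level (2-bit) cell encoding, the fault rate, and the function names are assumptions made for the example.

```python
import random

def inject_stuck_at_faults(weights, fault_rate, levels=4, seed=0):
    """Return a copy of the quantized weight list in which each cell is,
    with probability fault_rate, stuck at level 0 or at the top level
    (a simple stuck-at fault model for a multi-level RRAM cell)."""
    rng = random.Random(seed)
    faulty = []
    for w in weights:
        if rng.random() < fault_rate:
            # A stuck cell ignores the stored value; note the stuck level
            # may coincidentally equal the original weight.
            faulty.append(rng.choice([0, levels - 1]))
        else:
            faulty.append(w)
    return faulty

# 2-bit (4-level) cell contents for a toy weight vector.
weights = [2, 1, 3, 0, 2, 2, 1, 3]
faulty = inject_stuck_at_faults(weights, fault_rate=0.5)
corrupted = sum(1 for a, b in zip(weights, faulty) if a != b)
```

In the thesis's flow, the corrupted weights would then be loaded into the network model (MLP, LeNet 300-100, or LeNet 5) and the MNIST accuracy re-measured to quantify the degradation at each fault rate.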
Abstract (Chinese) i
Abstract (English) ii
List of Figures v
List of Tables vi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Related Work 2
1.3 Proposed Approach 3
1.4 Thesis Organization 4
Chapter 2 Background 5
2.1 Multi-level Crossbar RRAM Architecture 5
2.2 Fully Connected Layer 8
2.3 Convolutional Layer 9
Chapter 3 Multi-level RRAM Fault Simulator 11
3.1 Crossbar RRAM Fault Model 11
3.2 Multi-level RRAM Fault Descriptors 15
Chapter 4 Proposed Approach for Improving Neural Network Reliability 18
4.1 Self-Healing 18
4.2 Error Correction 23
Chapter 5 Experimental Results 25
5.1 Apply March Test Algorithm on Fault Simulator 25
5.2 Accuracy Comparison between Self-Healing and Error Correction 28
5.3 Repair Resource Comparison 30
5.4 Life Time Evaluation 34
Chapter 6 Conclusion and Future Work 37
6.1 Conclusion 37
6.2 Future Work 38
Bibliography 39
