
Detailed Record

Author (Chinese): 王祈恩
Author (English): Wang, Chi-En
Title (Chinese): 使用低位寬數值及低功耗記憶體進行神經網路運算
Title (English): Neural Network Computation Using Low Bitwidth Numbers and Low Power Memory
Advisor (Chinese): 呂仁碩
Advisor (English): Liu, Ren-Shuo
Committee (Chinese): 黃稚存; 劉靖家
Committee (English): Huang, Chih-Tsun; Liou, Jing-Jia
Degree: Master's
University: National Tsing Hua University (國立清華大學)
Department: Department of Electrical Engineering (電機工程學系)
Student ID: 106061548
Publication year (ROC calendar): 108
Graduation academic year: 108
Language: Chinese
Pages: 37
Keywords (Chinese): 神經網路; 低位寬數值; 低功耗記憶體; 硬錯誤; 錯誤修正指標
Keywords (English): Neural network; low bitwidth number; low power memory; hard error; error correction pointer
As neural network technology matures, it is foreseeable that most future chips will include neural network architectures to improve operational efficiency. To reduce the power consumption of neural network computation on a chip, lowering the number of bits used to represent values is a feasible approach. Beyond reducing the bit width of values, the system as a whole can also be optimized by lowering the power consumption of the memory, for example by reducing the SRAM operating voltage or replacing DRAM with PCM, as proposed in this thesis. These optimizations, however, share a common problem: an increase in hard errors in memory. The concept of ECP was previously proposed to correct hard errors in memory, but the conventional ECP mechanism does not exploit the characteristics of neural network computation, which makes it inefficient in neural network applications. To improve its efficiency, the conventional ECP mechanism must be re-examined, adjusted, and further redesigned.

To reduce the power consumption of neural network computation and the amount of data stored in memory, this thesis applies a procedure that reduces the bit widths of the weights and feature maps of a neural network, and additionally proposes several methods for optimizing the overall system, describing the extra costs they incur. For the cost of increased hard errors, we apply the conventional ECP mechanism to correct hard errors in memory and observe both its effectiveness and the limit on the number of errors it can correct. After examining the characteristics of neural network computation, we propose an improved neural-network ECP that, compared with conventional ECP, allows neural network applications to sustain higher memory hard-error rates while still maintaining high recognition accuracy.
As neural network technology matures, most future chips may include a neural network architecture to improve operational efficiency. To reduce the power consumption of a neural network chip, cutting down the number of bits used to represent a value is a viable approach. In addition to reducing the bit width of values, the overall system can be optimized by observing the interaction between the chip and the memory system, for example by reducing the SRAM operating voltage or replacing DRAM with PCM, as proposed in this thesis. However, these optimizations lead to an increase in hard errors in memory. The concept of ECP has been proposed to correct hard errors in memory, but conventional ECP does not exploit the characteristics of neural network computation, making it inefficient in neural network applications.

This work proposes a procedure that cuts down the bit widths of weights and feature maps to reduce power consumption and the amount of data stored in memory. In addition, we propose several ways to optimize the overall system and describe the overhead they incur. To mitigate this overhead, we apply the conventional ECP mechanism to correct hard errors in memory and expose its shortcomings and limitations. We then propose an improved ECP that exploits a distinctive feature of neural network computation and, compared with conventional ECP, can operate at higher memory hard-error rates while maintaining a high level of accuracy.
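The bit-width reduction described in the abstract can be illustrated with a small sketch. The thesis's exact low-bit-width format is not specified here, so the snippet below simply simulates a narrower mantissa by clearing the low mantissa bits of an IEEE 754 single-precision value (a round-toward-zero truncation); the function name and scheme are illustrative assumptions, not the thesis design.

```python
import struct

def truncate_float32(x, mantissa_bits):
    """Simulate a low-bit-width float by clearing the low
    (23 - mantissa_bits) mantissa bits of an IEEE 754 float32 value.
    Sign and exponent are kept, so the representable range is unchanged."""
    bits, = struct.unpack("<I", struct.pack("<f", x))
    drop = 23 - mantissa_bits              # float32 carries 23 mantissa bits
    bits &= (0xFFFFFFFF << drop) & 0xFFFFFFFF
    value, = struct.unpack("<f", struct.pack("<I", bits))
    return value

# With 1 mantissa bit kept, 1.5 (binary 1.1) survives but 1.25 (1.01) rounds down.
print(truncate_float32(1.5, 1))   # 1.5
print(truncate_float32(1.25, 1))  # 1.0
```

Applying such a truncation to every weight and feature-map value lets one measure, in software, how recognition accuracy degrades as the bit width shrinks, before committing to a hardware format.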
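The general ECP idea referenced above (Error Correction Pointer: per memory line, store pointers to stuck bit cells plus replacement bits that override them on read) can be sketched as follows. This is a behavioral model of the published ECP concept only; the class name, pointer count, and fields are illustrative assumptions and do not reflect the improved neural-network ECP proposed in the thesis.

```python
class ECPLine:
    """Behavioral model of ECP for one memory line: each pointer records
    the position of a stuck bit cell and the replacement bit that should
    override it on every read."""

    def __init__(self, n_pointers=6):
        self.capacity = n_pointers
        self.entries = {}            # bit position -> replacement bit

    def mark_stuck(self, position, correct_bit):
        """Allocate a pointer for a newly detected stuck cell.
        Returns False once all pointers are used up (line is unrepairable)."""
        if position not in self.entries and len(self.entries) >= self.capacity:
            return False
        self.entries[position] = correct_bit
        return True

    def read(self, raw_bits):
        """Overlay the replacement bits onto the raw (possibly faulty) line."""
        bits = list(raw_bits)
        for pos, bit in self.entries.items():
            bits[pos] = bit
        return bits

# A cell stuck at 0 in position 3 is patched to read back as 1.
line = ECPLine(n_pointers=2)
line.mark_stuck(3, 1)
print(line.read([0] * 8))  # [0, 0, 0, 1, 0, 0, 0, 0]
```

The fixed pointer budget is the limitation the abstract alludes to: once a line accumulates more stuck cells than pointers, conventional ECP can no longer repair it, which is where exploiting the error tolerance of neural network computation becomes attractive.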
Acknowledgements..........................................1
Abstract (Chinese)........................................2
Abstract..................................................3
1 Introduction............................................9
2 Background and Related Works...........................12
2.1 The IEEE 754 Format..................................12
2.1.1 Converting an IEEE 754 32-bit Representation to a Value...12
2.1.2 Denormalization....................................13
2.2 Why SRAM Develops Hard Errors at Reduced Operating Voltage...13
2.3 Why PCM Develops Hard Errors with Increased Use......14
2.4 ECP (Error Correction Pointer).......................14
2.5 Related Works........................................15
3 Design.................................................16
3.1 Representing Weights and Feature Maps with Lower-Bit-Width Floating-Point Numbers...16
3.1.1 Representing the Weights and Feature Maps of the ResNet-18 Architecture with Fewer Bits...16
3.1.2 Representable Value Range after Reducing Floating-Point Bit Width...17
3.1.3 Why and When the Bias Must Be Changed..............18
3.1.4 Procedure for Reducing Floating-Point Bit Width....18
3.2 Optimizing a Neural Network Chip System with Low-Power Memory...19
3.2.1 Lowering the Operating Voltage of On-Chip SRAM.....20
3.2.2 Replacing Off-Chip DRAM with PCM...................20
3.2.3 Costs of the Optimized Design......................21
3.3 Correcting Hard Errors in Feature Maps with Conventional ECP...21
3.3.1 Error Tolerance of Neural Networks during Computation...21
3.3.2 Conventional ECP Architecture for Correcting Hard Errors in Feature Maps...21
3.4 Correcting Hard Errors in Feature Maps with the Improved Neural-Network ECP...22
3.4.1 Sensitivity of Neural Network Computation to Different Types of Hard Errors...23
3.4.2 Shortcomings of Conventional ECP in Neural Network Applications...23
3.4.3 Architecture of the Improved Neural-Network ECP....24
4 Experiment.............................................26
4.1 Experimental Setup and Methodology...................26
4.2 Recognition Accuracy after Representing Weights and Feature Maps with Fewer Bits...26
4.2.1 Value Distributions of Weights and Feature Maps....26
4.2.2 Keeping the Feature-Map Format while Reducing Weight Bit Width...27
4.2.3 Keeping the Weight Format while Reducing Feature-Map Bit Width...28
4.2.4 Recognition Accuracy after Reducing Both Weight and Feature-Map Bit Widths...29
4.3 Tolerance of Neural Networks to Changes in Feature-Map Values...30
4.4 Effectiveness of Conventional ECP in Correcting Hard Errors in Feature Maps...31
4.5 Sensitivity Differences of Neural Network Computation to Different Hard Errors...32
4.6 Correcting Stuck Errors with the Improved Neural-Network ECP...33
5 Conclusion.............................................35
References...............................................36
1. K. Agarwal and S. Nassif, "Statistical analysis of SRAM cell stability," in Proceedings of the 43rd Annual Design Automation Conference, DAC '06, New York, NY, USA, pp. 57–62, ACM, 2006.
2. H.-S. P. Wong, S. Raoux, S. Kim, J. Liang, J. P. Reifenberg, B. Rajendran, M. Asheghi, and K. E. Goodson, "Phase change memory," Proceedings of the IEEE, vol. 98, no. 12, pp. 2201–2227, 2010.
3. S. Schechter, G. H. Loh, K. Strauss, and D. Burger, "Use ECP, not ECC, for hard failures in resistive memories," in Proceedings of the 37th Annual International Symposium on Computer Architecture, ISCA '10, New York, NY, USA, pp. 141–152, ACM, 2010.
4. V. Vanhoucke, A. Senior, and M. Z. Mao, "Improving the speed of neural networks on CPUs," in Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.
5. N. Wang, J. Choi, D. Brand, C.-Y. Chen, and K. Gopalakrishnan, "Training deep neural networks with 8-bit floating point numbers," in Advances in Neural Information Processing Systems 31 (S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, eds.), pp. 7675–7684, Curran Associates, Inc., 2018.
6. J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song, Y. Wang, and H. Yang, "Going deeper with embedded FPGA platform for convolutional neural network," in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA '16, New York, NY, USA, pp. 26–35, ACM, 2016.
7. C. Wilkerson, H. Gao, A. R. Alameldeen, Z. Chishti, M. Khellah, and S. Lu, "Trading off cache capacity for reliability to enable low voltage operation," in 2008 International Symposium on Computer Architecture, pp. 203–214, June 2008.
8. V. Lorente, A. Valero, J. Sahuquillo, S. Petit, R. Canal, P. López, and J. Duato, "Combining RAM technologies for hard-error recovery in L1 data caches working at very-low power modes," in 2013 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 83–88, March 2013.
9. N. H. Seong, D. H. Woo, V. Srinivasan, J. A. Rivers, and H.-H. S. Lee, "SAFER: Stuck-at-fault error recovery for memories," in Proceedings of the 2010 43rd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO '43, Washington, DC, USA, pp. 115–124, IEEE Computer Society, 2010.
10. M. K. Qureshi, "Pay-as-you-go: Low-overhead hard-error correction for phase change memories," in Proceedings of the 44th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-44, New York, NY, USA, pp. 318–328, ACM, 2011.
11. D. H. Yoon, N. Muralimanohar, J. Chang, P. Ranganathan, N. P. Jouppi, and M. Erez, "FREE-p: Protecting non-volatile memory against both hard and soft errors," in 2011 IEEE 17th International Symposium on High Performance Computer Architecture, pp. 466–477, Feb 2011.
12. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," CoRR, vol. abs/1512.03385, 2015.
13. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," in NIPS-W, 2017.