Detailed Record

Author (Chinese): 吳祥修
Author (English): Wu, Siang-Siu
Thesis Title (Chinese): 激進功耗優化方法之準確度容忍神經網路
Thesis Title (English): Accuracy Tolerance Neural Networks Under Aggressive Power Optimization
Advisor (Chinese): 張世杰
Advisor (English): Chang, Shih-Chieh
Committee Members (Chinese): 王俊堯, 吳凱強
Committee Members (English): Wang, Chun-Yao; Wu, Kai-Chiang
Degree: Master
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 106062528
Year of Publication (ROC calendar): 108 (2019)
Graduation Academic Year: 107
Language: English
Number of Pages: 30
Keywords (Chinese): 功耗, 神經網路, 容忍
Keywords (English): power, neural network, tolerance, optimization
Abstract (Chinese): With the success of deep learning, many neural network architectures have been proposed and are widely used in applications across different domains. How to deploy complex neural network architectures on resource-constrained devices has become a topic of considerable interest. On the hardware side, power optimization techniques are essential. However, some of these techniques, such as voltage scaling and multiple threshold voltages, may increase the chance of errors, which arise from slow signal propagation. Although neural networks are believed to have some tolerance to errors, too many errors can still impair their functionality. We therefore propose methods to handle the errors caused by slow signal propagation. Slower signal propagation lengthens the delay of circuit paths, and certain circuit input patterns become more prone to errors. For this reason, we identify those error-prone inputs or input combinations and prevent them from appearing during inference by adjusting the weights of the neural network and modifying its architecture. Because our methods avoid errors through software-side adjustments, the circuit architecture does not need to be redesigned. Experimental results show that our methods clearly improve neural network accuracy under error-prone conditions, with the largest accuracy gain reaching 27%. A series of experiments further shows that the methods apply to different neural network architectures with clear improvements in each.
Abstract (English): With the success of deep learning, many neural network models have been proposed and used in different application domains. In some applications, the edge devices that implement these complicated models are power-limited, and optimization techniques are often applied to reduce power consumption. However, some power optimization techniques, such as voltage scaling and multiple threshold voltages, may increase the probability of errors due to slow signal propagation. Although neural network models are considered to have some tolerance to errors, prediction accuracy can be significantly degraded when errors are numerous. We therefore propose methods to mitigate the errors caused by slow signal propagation, which lengthens path delays in a circuit and causes some input patterns to fail. We identify the patterns most affected by slow signal propagation and prevent them from failing by adjusting the given neural network and its parameters. Since our methods modify the neural network on the software side, the hardware does not need to be redesigned. Experimental results demonstrate the effectiveness of our methods on different network models, improving accuracy by up to 27%.
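To make the software-side idea concrete, the following is a minimal sketch in Python, not the thesis implementation: it assumes a hypothetical set of error-prone weight codes (e.g., multiplier operand patterns flagged by timing analysis, in the spirit of the error table in Chapter 4) and remaps quantized weights away from those codes so the risky patterns never occur at inference time. The function name, the 8-bit uniform quantizer, and the example error codes are all illustrative assumptions.

import numpy as np

def quantize_avoiding_error_codes(weights, error_codes, num_bits=8):
    # Uniform symmetric quantization to signed num_bits integer codes.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    codes = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    # Remap every code assumed to trigger a timing error in the multiplier
    # to the nearest code outside the error-prone set.
    safe = np.setdiff1d(np.arange(-qmax - 1, qmax + 1), np.asarray(error_codes))
    for bad in error_codes:
        codes[codes == bad] = safe[np.argmin(np.abs(safe - bad))]
    return codes, scale  # approximate weights can be reconstructed as codes * scale

# Illustrative use: pretend codes 127 and -128 are error-prone under voltage scaling.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_avoiding_error_codes(w, error_codes=[127, -128])
assert not np.isin(q, [127, -128]).any()

In practice one would fine-tune the network after such a remapping to recover any accuracy lost by moving the affected weights.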
Table of Contents
1 Introduction-------------------------------1
2 Related Works------------------------------5
2.1 Error Tolerance in Neural Networks------5
2.2 Error-correction Technique--------------6
3 Preliminaries------------------------------8
3.1 Delay Variation of Paths in Circuits----8
3.2 Quantization in Neural Networks---------9
3.3 Net2Net---------------------------------11
4 Proposed Methods---------------------------12
4.1 Error Table of the Multiplier-----------12
4.2 Modified Quantization Scheme------------15
4.3 Distributing Sensitive Weights----------17
5 Experimental Results-----------------------21
5.1 Experimental Setup----------------------21
5.2 LeNet5----------------------------------23
5.3 ResNet-20-------------------------------23
5.4 Summary---------------------------------24
6 Conclusions--------------------------------26
Reference------------------------------------27
[1] T. Chen, I. J. Goodfellow, and J. Shlens. Net2Net: Accelerating learning via knowledge transfer. CoRR, abs/1511.05641, 2016.
[2] L.-C. Chu and B. W. Wah. Fault tolerant neural networks with hybrid redundancy. In 1990 IJCNN International Joint Conference on Neural Networks, pages 639–649 vol. 2, June 1990.
[3] D. Deodhare, M. Vidyasagar, and S. Sathiya Keerthi. Synthesis of fault-tolerant feedforward neural networks using minimax optimization. IEEE Transactions on Neural Networks, 9(5):891–900, Sep. 1998.
[4] P. J. Edwards and A. F. Murray. Penalty terms for fault tolerance. In Proceedings of International Conference on Neural Networks (ICNN'97), volume 2, pages 943–947 vol. 2, June 1997.
[5] D. Ernst, S. Das, S. Lee, D. Blaauw, T. Austin, T. Mudge, N. Kim, and K. Flautner. Razor: Circuit-level correction of timing errors for low-power operation. IEEE Micro, 24(6):10–20, Dec. 2004.
[6] Y. Guo. A survey on methods and theories of quantized neural networks. ArXiv, abs/1808.04752, 2018.
[7] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2016.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, June 2016.
[9] A. Krizhevsky, V. Nair, and G. Hinton. CIFAR-10.
[10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, Nov. 1998.
[11] Y. LeCun and C. Cortes. MNIST handwritten digit database. 2010.
[12] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient ConvNets. CoRR, abs/1608.08710, 2016.
[13] T. McConaghy, K. Breen, J. Dyck, and A. Gupta. Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide, pages 187–188. 2013.
[14] C. Neti, M. H. Schneider, and E. D. Young. Maximally fault tolerant neural networks. IEEE Transactions on Neural Networks, 3(1):14–23, Jan. 1992.
[15] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In NIPS-W, 2017.
[16] S. Piche. Robustness of feedforward neural networks. In [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, volume 2, pages 346–351 vol. 2, June 1992.
[17] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: Hints for thin deep nets. CoRR, abs/1412.6550, 2015.
[18] C. H. Sequin and R. D. Clay. Fault tolerance in artificial neural networks. In 1990 IJCNN International Joint Conference on Neural Networks, pages 703–708 vol. 1, June 1990.
[19] A. Srivastava, D. Sylvester, and D. Blaauw. Statistical analysis and optimization for VLSI: Timing and power. In Series on Integrated Circuits and Systems, 2005.
[20] C. Torres-Huitzil and B. Girau. Fault and error tolerance in neural networks: A review. IEEE Access, 5:17322–17341, 2017.
[21] P. N. Whatmough, S. K. Lee, D. Brooks, and G. Wei. DNN Engine: A 28-nm timing-error tolerant sparse deep neural network processor for IoT applications. IEEE Journal of Solid-State Circuits, 53(9):2722–2731, Sep. 2018.
 
 
 
 