
Detailed Record

Author (Chinese): 黃秉立
Author (English): Huang, Ping-Li
Thesis Title (Chinese): 減少晶片上記憶體數據流和資料稀疏運算的突波捲積神經網路加速器
Thesis Title (English): A Spike-Based Convolution Neural Network (SCNN) Accelerator with Reduced On-Chip Memory Data Flow and Data-Sparse Operation
Advisor (Chinese): 鄭桂忠
Advisor (English): Tang, Kea-Tiong
Committee Members (Chinese): 呂仁碩, 盧峙丞
Committee Members (English): Liu, Ren-Shuo; Lu, Chih-Cheng
Degree: Master
Institution: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 108061619
Publication Year (ROC): 112 (2023)
Graduation Academic Year: 111
Language: Chinese
Number of Pages: 54
Keywords (Chinese): 突波神經網路, 捲積神經網路, 加速器, 數位電路, 稀疏性應用
Keywords (English): CNN, SNN, Accelerators, Digital Circuits, Sparsity
Abstract—The rise of artificial intelligence networks stems from their many applications, such as image recognition and speech recognition. Running these applications on edge devices, however, requires higher energy efficiency to accommodate the devices' limited resources. Spiking neural networks (SNNs) are considered promising candidates because their computational characteristics reduce multiplication operations: they require only addition and shift operations. Applying this approach to a CNN reduces computational power consumption, with adders implementing the accumulation and shifters replacing the nonlinear operations. This hybrid network is known as a spiking convolutional neural network (Spiking-CNN, SCNN).
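
The shift-and-add arithmetic can be made concrete with a short sketch. The following Python snippet is a minimal illustration only, assuming binary (0/1) spikes, integer weights, power-of-two scaling, and a subtract-on-fire neuron; the function names and the neuron variant are hypothetical, not taken from the thesis.

    # Minimal sketch: with 0/1 spikes, weight * spike degenerates to a
    # conditional add, and dividing by 2**shift is a right shift, so no
    # multiplier is needed. (Illustrative assumptions; see above.)
    def spiking_mac(spikes, weights, shift=0):
        membrane = 0
        for s, w in zip(spikes, weights):
            if s:                    # multiply-by-spike -> conditional add
                membrane += w
        return membrane >> shift     # power-of-two scaling via a shifter

    def integrate_and_fire(membrane, threshold):
        # Emit a spike and subtract the threshold when it is crossed
        # (one common integrate-and-fire variant among several).
        if membrane >= threshold:
            return 1, membrane - threshold
        return 0, membrane

    spikes  = [1, 0, 1, 1]
    weights = [2, 7, -3, 4]
    v = spiking_mac(spikes, weights, shift=1)  # (2 - 3 + 4) >> 1 = 1
    print(integrate_and_fire(v, threshold=1))  # -> (1, 0)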

However, achieving higher computational speed often requires a large memory to store the necessary weights and feature maps, which in turn costs chip area and power. This thesis presents an SCNN dataflow that reduces the required on-chip memory, a hybrid dataflow that reduces it further, and a zero-skipping scheme designed around the high sparsity of SCNNs, which lowers the power consumed per operation. Together these techniques shrink the total on-chip memory while improving energy efficiency: the accelerator reaches 104.76 TOPS/W on the CIFAR-10 dataset and, compared with other published spiking convolutional neural network accelerators for the same application, requires the least on-chip memory.
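
The zero-skipping idea can likewise be sketched in a few lines. The following Python snippet is an illustration only, assuming spike vectors are compressed to their non-zero indices before accumulation; the names and data layout are assumptions for exposition, not the thesis's hardware mechanism.

    # Illustrative zero-skipping: store only the indices of non-zero
    # spikes so the accumulate loop skips zeros entirely; the number of
    # adds scales with the spike count, not the vector length.
    def to_nonzero_indices(spikes):
        return [i for i, s in enumerate(spikes) if s]

    def sparse_accumulate(spike_indices, weights):
        membrane = 0
        for i in spike_indices:      # touch only the firing positions
            membrane += weights[i]
        return membrane

    spikes  = [0, 1, 0, 0, 1, 0, 0, 0]      # 75% of the entries are zero
    weights = [3, -1, 4, 1, 5, -9, 2, 6]
    idx = to_nonzero_indices(spikes)        # [1, 4] -> 2 adds instead of 8
    print(sparse_accumulate(idx, weights))  # -> 4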
Chapter 1 Introduction .................................................. 1
1.1 Research Background ................................................. 1
1.2 Research Motivation ................................................. 6
1.3 Thesis Organization ................................................. 8
Chapter 2 Literature Review ............................................. 9
2.1 Neural Network Accelerators ......................................... 9
2.1.1 Deep Convolutional Neural Network Accelerator Architectures ....... 9
2.1.2 Data Movement and Reuse ........................................... 11
2.2 Spiking Convolutional Neural Network Accelerators ................... 13
2.2.1 Spiking Neural Networks ........................................... 13
2.2.2 Spiking Convolutional Neural Networks ............................. 14
2.3 Motivation and Challenges ........................................... 15
Chapter 3 Spiking Convolutional Neural Network Dataflows ................ 17
3.1 Spiking Convolutional Neural Network Dataflows ...................... 17
3.1.1 Layer-First Dataflow .............................................. 17
3.1.2 Timestep-First Dataflow ........................................... 19
3.2 Hybrid Dataflow ..................................................... 21
Chapter 4 SCNN Accelerator Design ....................................... 27
4.1 Spiking Operation Behavior and the Hybrid Dataflow .................. 27
4.1.1 Spiking Neuron Model and Circuit .................................. 27
4.1.2 Hybrid Dataflow Architecture and Selection Mechanism .............. 29
4.2 Data-Sparse Operation ............................................... 29
4.2.1 Non-Zero Weight Filtering ......................................... 30
4.2.2 Processing-Element Skipping Mechanism ............................. 31
4.3 Processing Element and Memory Array Configuration ................... 33
4.4 Accelerator Architecture ............................................ 34
Chapter 5 Experimental Results and Discussion ........................... 40
5.1 Simulation Results .................................................. 40
5.1.1 Processing Element Results ........................................ 40
5.1.2 Dataflow Optimization Results ..................................... 44
5.2 Comparison with Other State-of-the-Art Works ........................ 45
Chapter 6 Conclusion and Future Work .................................... 48
6.1 Conclusion .......................................................... 48
6.2 Future Work ......................................................... 48
References .............................................................. 50