Detailed Record

Author (Chinese): 徐晨瀚
Author (English): Hsu, Chen-Han
Title (Chinese): 具備輸入稀疏性設計之可重建的突波卷積神經網路加速器
Title (English): A Reconfigurable Spike-Based Convolutional Neural Network (SCNN) Accelerator with an Input Sparsity Mechanism
Advisor (Chinese): 鄭桂忠
Advisor (English): Tang, Kea-Tiong
Committee (Chinese): 黃朝宗, 呂仁碩, 盧峙丞
Committee (English): Huang, Chao-Tsung; Liu, Ren-Shuo; Lu, Chih-Cheng
Degree: Master
Institution: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 108061575
Publication Year (ROC calendar): 111 (2022)
Graduation Academic Year: 111
Language: Chinese
Pages: 54
Keywords (Chinese): 突波神經網路; 卷積神經網路; 加速器; 稀疏性設計
Keywords (English): spiking neural network; convolutional neural network; accelerator; sparsity mechanism
Abstract (Chinese):
AI technology has developed rapidly in recent years and is applied to a wide variety of tasks. Convolutional neural networks in particular are broadly used in image-processing tasks such as recognition and classification, so demand for low-power chips on edge devices has grown accordingly. However, as the tasks being handled become more complex, the parameter counts and computational loads of the network models increase sharply, and chip power consumption rises with them. Spiking neural networks (SNNs) have therefore drawn increasing attention in recent years.
Spiking neural networks imitate the nervous systems of humans and other organisms and offer many low-power properties, such as event-driven operation, binary data, and high sparsity. Taking the SNN perspective, this work designs an accelerator that performs convolutional neural network computation with spikes. The accelerator adopts a spatiotemporal parallel dataflow that processes data along the spatial and temporal dimensions simultaneously, accelerating computation, raising area utilization, and reducing the number of memory accesses to lower overall energy consumption. In addition, spatiotemporal gating and skipping mechanisms are designed around the event-driven, binary, and highly sparse nature of SNN data, speeding up computation and saving the energy otherwise spent in idle cycles; the hardware resources can be reconfigured for different network sizes and depths, so the design adapts to different networks or network layers. In a 40 nm process at 300 MHz, the accelerator achieves an energy efficiency of 54.77 TOPS/W and an area efficiency of 2.57 GOPS/kmm². Compared with previously published spike-based convolutional neural network accelerators, this work delivers better energy efficiency and area utilization.
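The central arithmetic saving behind these numbers is that binary spikes reduce each multiply-accumulate to a plain weight accumulation, and inputs that do not spike can be skipped entirely. The sketch below illustrates that general idea in Python with a single-channel integrate-and-fire layer; the function name, the nested loops, and the hard-reset neuron are illustrative assumptions for exposition, not the thesis's actual dataflow or circuit.

import numpy as np

def scnn_layer_sketch(spikes, weights, v_th=1.0, leak=0.0):
    """Illustrative spike-based convolution layer.

    spikes : (T, H, W) binary input spike maps, one per time step
    weights: (K, K) convolution kernel (single channel for brevity)
    Returns (T, H-K+1, W-K+1) binary output spike maps.
    """
    T, H, W = spikes.shape
    K = weights.shape[0]
    Ho, Wo = H - K + 1, W - K + 1
    v = np.zeros((Ho, Wo))            # membrane potentials persist across time steps
    out = np.zeros((T, Ho, Wo), dtype=np.uint8)

    for t in range(T):                # temporal dimension of the computation
        # Event-driven / zero-skipping: visit only positions that spiked.
        ys, xs = np.nonzero(spikes[t])
        for y, x in zip(ys, xs):
            # A binary spike turns MACs into pure weight accumulations:
            # every output neuron whose receptive field covers (y, x)
            # accumulates the corresponding kernel weight.
            for dy in range(K):
                for dx in range(K):
                    oy, ox = y - dy, x - dx
                    if 0 <= oy < Ho and 0 <= ox < Wo:
                        v[oy, ox] += weights[dy, dx]
        fired = v >= v_th             # threshold, fire, and hard-reset
        out[t][fired] = 1
        v[fired] = 0.0
        v *= (1.0 - leak)             # optional leak term
    return out

Because the inner loop runs only for nonzero spike positions, the work per time step scales with input activity rather than input size, which is where event-driven and zero-skipping designs recover their energy.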
Abstract (English):
Artificial intelligence technology has flourished in recent years and is applied across a wide range of fields. Convolutional neural networks are widely used in image-processing tasks such as recognition and classification, and the demand for low-power chips in edge devices is increasing accordingly. However, as processing tasks become more complex, the parameter counts and computational loads of neural network models grow significantly, raising chip power consumption. Spiking Neural Networks (SNNs) have therefore received more and more attention in recent years.
SNNs, inspired by the human brain and characterized by simple neuron functions and low data density, have become an important research topic. They offer many low-power features, such as event-driven operation, data binarization, and high input sparsity. In this research, we propose a spike-based CNN accelerator with a spatiotemporal parallel dataflow that computes data in the spatial and temporal domains simultaneously and reduces the number of memory accesses to lower overall energy consumption. In addition, we design a sparsity-aware, event-driven circuit and propose an early-skip mechanism for pooling operations to reduce power consumption and computation time. Hardware resources can be reconfigured for different network sizes and layer counts, so the accelerator adapts to different networks or network layers. The accelerator achieves an energy efficiency of 54.77 TOPS/W and an area efficiency of 2.57 GOPS/kmm² in a 40 nm process at 300 MHz, and attains better energy efficiency and area utilization than other spike-based convolutional neural network accelerators.
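The abstract describes the pooling early-skip mechanism only at a high level. One plausible reading, sketched below under that assumption, is that because spike maps are binary, a max-pooling window can emit its result as soon as the first spike is read and skip the remaining comparisons; the function name and loop structure here are illustrative, not the documented circuit.

import numpy as np
from itertools import product

def early_skip_max_pool(spike_map, k=2):
    """Max-pool a binary spike map, exiting each window at the first spike.

    Because inputs are 0/1, a window's max is known to be 1 the moment any
    spike is seen; the remaining k*k - 1 reads can be skipped.
    """
    H, W = spike_map.shape
    out = np.zeros((H // k, W // k), dtype=np.uint8)
    reads = 0                                  # element reads actually performed
    for oy in range(H // k):
        for ox in range(W // k):
            for dy, dx in product(range(k), range(k)):
                reads += 1
                if spike_map[oy * k + dy, ox * k + dx]:
                    out[oy, ox] = 1
                    break                      # early skip: window result known
    return out, reads

# Example: on a dense spike map most windows stop after the first read.
m = (np.random.rand(8, 8) < 0.5).astype(np.uint8)
pooled, reads = early_skip_max_pool(m)
print(pooled.shape, reads, "reads vs", m.size, "worst case")

Under this reading, dense spike maps let most windows terminate after a single read, while all-zero windows still cost k² reads, so the saving grows with spike activity.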
Abstract (Chinese) ................................................ i
Abstract (English) ................................................ ii
Table of Contents ................................................. iii
List of Figures ................................................... v
List of Tables .................................................... viii
Chapter 1  Introduction ........................................... 1
Research Background ............................................... 1
1.1 Research Motivation and Objectives ............................ 6
1.2 Chapter Overview .............................................. 8
Chapter 2  Literature Review ...................................... 9
2.1 Convolutional Neural Network Accelerators ..................... 9
2.1.1 Deep CNN Accelerator Architectures .......................... 9
2.1.2 Data Reuse .................................................. 10
2.1.3 Data Movement ............................................... 13
2.2 Spiking Convolutional Neural Networks ......................... 14
2.3 Spiking Neural Network Accelerators ........................... 15
2.3.1 Architectures and Accelerators with Spatial-First Dataflows . 15
2.3.2 Architectures and Accelerators with Temporal-First Dataflows  18
2.4 Research Motivation ........................................... 22
Chapter 3  Spike-Based CNN Accelerator Design ..................... 23
3.1 Time-Step Definition .......................................... 23
3.2 Accelerator Architecture ...................................... 23
3.3 Spike-Based Convolution Behavior and Dataflow ................. 26
3.3.1 Spatiotemporal Parallel Dataflow ............................ 26
3.3.2 Neuron Model and Circuit .................................... 29
3.4 Sparsity Zero-Skipping Mechanism and Circuit Design ........... 30
3.4.1 Zero-Skipping and Event-Driven Mechanisms ................... 30
3.4.2 Circuit Design for Zero-Skipping and Event-Driven Operation . 32
3.5 Early-Skip Mechanism and Circuit for Pooling Layers ........... 34
3.6 Reconfigurable PE Array and Computation Circuit Architecture .. 38
Chapter 4  Experimental Results ................................... 42
4.1 Experimental Setup ............................................ 42
4.2 Functional Measurement of the Spike-Based Convolution Accelerator  44
4.3 Performance Analysis of the Proposed Methods on VGG Networks .. 45
4.3.1 Effect of the Spatiotemporal Parallel Dataflow on Memory Accesses  45
4.3.2 Zero-Skipping and Pooling Early-Skip Mechanisms ............. 46
4.4 Chip Specifications and Comparison with Other SCNN Accelerators  47
Chapter 5  Conclusion and Future Work ............................. 50
References ........................................................ 51
