[1] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," The Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115-133, 1943.
[2] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychological Review, vol. 65, no. 6, p. 386, 1958.
[3] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533-536, 1986.
[4] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017.
[5] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[6] K. He, X. Zhang, S. Ren, et al., "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
[7] R. Girshick, J. Donahue, T. Darrell, et al., "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580-587.
[8] J. Redmon, S. Divvala, R. Girshick, et al., "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779-788.
[9] S. Pouyanfar, S. Sadiq, Y. Yan, et al., "A survey on deep learning: Algorithms, techniques, and applications," ACM Computing Surveys, vol. 51, no. 5, pp. 1-36, 2018.
[10] S. Han et al., "Learning both weights and connections for efficient neural network," in Advances in Neural Information Processing Systems, 2015.
[11] N. P. Jouppi et al., "In-datacenter performance analysis of a tensor processing unit," in ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), 2017, pp. 1-12.
[12] Y.-H. Chen et al., "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks," IEEE Journal of Solid-State Circuits (ISSCC Special Issue), vol. 52, no. 1, pp. 127-138, 2017.
[13] V. Sze, T.-J. Yang, Y.-H. Chen, and J. Emer, "Efficient processing of deep neural networks: A tutorial and survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295-2329, Dec. 2017.
[14] F. Akopyan et al., "TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 34, no. 10, pp. 1537-1557, 2015.
[15] M. Davies et al., "Loihi: A neuromorphic manycore processor with on-chip learning," IEEE Micro, vol. 38, no. 1, pp. 82-99, 2018.
[16] G. K. Chen, R. Kumar, H. E. Sumbul, P. C. Knag, and R. K. Krishnamurthy, "A 4096-neuron 1M-synapse 3.8-pJ/SOP spiking neural network with on-chip STDP learning and sparse weights in 10-nm FinFET CMOS," in IEEE Symposium on VLSI Circuits, 2018.
[17] A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. S. Maida, "Deep learning in spiking neural networks," arXiv preprint, 2017.
[18] B. Rueckauer, Y. Hu, I.-A. Lungu, M. Pfeiffer, and S.-C. Liu, "Conversion of continuous-valued deep networks to efficient event-driven networks for image classification," Frontiers in Neuroscience, vol. 11, p. 682, 2017.
[19] H.-H. Lien and T.-S. Chang, "Sparse compressed spiking neural network accelerator for object detection," arXiv preprint, 2022.
[20] S. Cao, L. Ma, W. Xiao, C. Zhang, Y. Liu, L. Zhang, L. Nie, and Z. Yang, "SeerNet: Predicting convolutional neural network feature-map sparsity through low-bit quantization," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[21] P.-Y. Chuang, P.-Y. Tan, C.-W. Wu, and J.-M. Lu, "A 90-nm 103.14-TOPS/W binary-weight spiking neural network CMOS ASIC for real-time object classification," in 57th ACM/IEEE Design Automation Conference (DAC), 2020.
[22] H.-H. Lien, C.-W. Hsu, and T.-S. Chang, "VSA: Reconfigurable vectorwise spiking neural network accelerator," in IEEE International Symposium on Circuits and Systems (ISCAS), Daegu, Korea, 2021.
[23] R. Wang, C. S. Thakur, T. J. Hamilton, J. Tapson, and A. van Schaik, "A stochastic approach to STDP," in IEEE International Symposium on Circuits and Systems (ISCAS), 2016, pp. 2082-2085.
[24] Y. Zhong, X. Cui, Y. Kuang, K. Liu, Y. Wang, and R. Huang, "A spike-event-based neuromorphic processor with enhanced on-chip STDP learning in 28-nm CMOS," in IEEE International Symposium on Circuits and Systems (ISCAS), 2021, pp. 1-5.
[25] S. Kim, S. Kim, S. Um, S. Kim, and H.-J. Yoo, "Two-step spike encoding scheme and architecture for highly sparse spiking-neural-network," arXiv preprint, 2022.
[26] B. Wang, J. Zhou, W.-F. Wong, and L.-S. Peh, "Shenjing: A low power reconfigurable neuromorphic accelerator with partial-sum and spike networks-on-chip," in Design, Automation and Test in Europe Conference (DATE), 2020, pp. 240-245.
[27] B. Amornpaisannon, Z. Zhang, V. P. K. Miriyala, H. Qu, Y. Chua, T. E. Carlson, and H. Li, "Rectified linear postsynaptic potential function for backpropagation in deep spiking neural networks," IEEE Transactions on Neural Networks and Learning Systems (Early Access), 2021.
[28] S. Narayanan, K. Taht, R. Balasubramonian, E. Giacomin, and P.-E. Gaillardon, "SpinalFlow: An architecture and dataflow tailored for spiking neural networks," in ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), 2020, pp. 349-362.