[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, Oct. 1986, doi: 10.1038/323533a0.
[2] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bull. Math. Biophys., vol. 5, pp. 115–133, 1943.
[3] M. Isik, "A survey of spiking neural network accelerator on FPGA," arXiv:2307.03910 [cs.AR], doi: 10.48550/arXiv.2307.03910.
[4] Y.-H. Wang, T.-C. Gong, Y.-X. Ding, Y. Li, W. Wang, Z.-A. Chen, N. Du, E. Covi, M. Farronato, D. Ielmini, X.-M. Zhang, and Q. Luo, "Redox memristors with volatile threshold switching behavior for neuromorphic computing," Journal of Electronic Science and Technology, 2022, doi: 10.1016/j.jnlest.2022.100177.
[5] J. von Neumann, "First draft of a report on the EDVAC," IEEE Annals of the History of Computing, vol. 15, no. 4, pp. 27–75, 1993.
[6] F. Akopyan et al., "TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 34, no. 10, pp. 1537–1557, 2015.
[7] S. B. Furber, D. R. Lester, L. A. Plana, J. D. Garside, E. Painkras, S. Temple, and A. D. Brown, "Overview of the SpiNNaker system architecture," IEEE Transactions on Computers, vol. 62, no. 12, pp. 2454–2467, 2013.
[8] M. Davies et al., "Loihi: A neuromorphic manycore processor with on-chip learning," IEEE Micro, vol. 38, no. 1, pp. 82–99, 2018.
[9] Y. Liu et al., "An 82 nW 0.53 pJ/SOP clock-free spiking neural network with 40 μs latency for AIoT wake-up functions using ultimate-event-driven bionic architecture and computing-in-memory technique," IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, vol. 65, pp. 372–374, Feb. 2022.
[10] M. Horowitz, "1.1 Computing's energy problem (and what we can do about it)," 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), San Francisco, CA, USA, 2014, pp. 10–14, doi: 10.1109/ISSCC.2014.6757323.
[11] F. Akopyan, J. Sawada, A. Cassidy, R. Alvarez-Icaza, J. Arthur, P. Merolla, N. Imam, Y. Nakamura, P. Datta, G.-J. Nam et al., "TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 34, no. 10, pp. 1537–1557, 2015.
[12] C. Frenkel, J.-D. Legat, and D. Bol, "A 0.086-mm² 9.8-pJ/SOP 64k-synapse 256-neuron online-learning digital spiking neuromorphic processor in 28nm CMOS," arXiv:1804.07858, 2018. Available online at: https://arxiv.org/abs/1804.07858.
[13] V. Truong-Tuan et al., "FPGA implementation of parallel neurosynaptic cores for neuromorphic architectures," 2021 19th IEEE International New Circuits and Systems Conference, pp. 1–4, 2021.
[14] Z. Yang et al., "Back to homogeneous computing: A tightly-coupled neuromorphic processor with neuromorphic ISA," IEEE Transactions on Parallel and Distributed Systems, vol. 34, no. 11, pp. 2910–2927, Nov. 2023, doi: 10.1109/TPDS.2023.3307408.
[15] J.-J. Lee, W. Zhang, and P. Li, "Parallel time batching: Systolic-array acceleration of sparse spiking neural computation," 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Seoul, Korea, Republic of, 2022, pp. 317–330, doi: 10.1109/HPCA53966.2022.00031.
[16] B. U. Pedroni et al., "Forward table-based presynaptic event-triggered spike-timing-dependent plasticity," 2016 IEEE Biomedical Circuits and Systems Conference (BioCAS), Shanghai, China, 2016, pp. 580–583, doi: 10.1109/BioCAS.2016.7833861.
[17] G. K. Chen, R. Kumar, H. E. Sumbul, P. C. Knag, and R. K. Krishnamurthy, "A 4096-neuron 1M-synapse 3.8-pJ/SOP spiking neural network with on-chip STDP learning and sparse weights in 10-nm FinFET CMOS," 2018 IEEE Symposium on VLSI Circuits, Honolulu, HI, USA, 2018, pp. 255–256, doi: 10.1109/VLSIC.2018.8502423.
[18] C. Frenkel, J.-D. Legat, and D. Bol, "MorphIC: A 65-nm 738k-synapse/mm² quad-core binary-weight digital neuromorphic processor with stochastic spike-driven online learning," IEEE Transactions on Biomedical Circuits and Systems, vol. 13, no. 5, pp. 999–1010, 2019.
[19] S. Narayanan, K. Taht, R. Balasubramonian, E. Giacomin, and P.-E. Gaillardon, "SpinalFlow: An architecture and dataflow tailored for spiking neural networks," 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), Valencia, Spain, 2020, pp. 349–362.
[20] H. T. Kung and C. E. Leiserson, "Systolic arrays (for VLSI)," 1978.
[21] J.-J. Lee and P. Li, "Reconfigurable dataflow optimization for spatiotemporal spiking neural computation on systolic array accelerators," 2020 IEEE 38th International Conference on Computer Design (ICCD), 2020.
[22] G. K. Chen, R. Kumar, H. E. Sumbul, P. C. Knag, and R. K. Krishnamurthy, "A 4096-neuron 1M-synapse 3.8-pJ/SOP spiking neural network with on-chip STDP learning and sparse weights in 10-nm FinFET CMOS," IEEE Journal of Solid-State Circuits, vol. 54, no. 4, pp. 992–1002, Apr. 2019, doi: 10.1109/JSSC.2018.2884901.
[23] V. Sze, Y.-H. Chen, T.-J. Yang, and J. Emer, "Efficient processing of deep neural networks: A tutorial and survey," arXiv, 2017.
[24] J. Park, J. Lee, and D. Jeon, "A 65nm 236.5 nJ/classification neuromorphic processor with 7.5% energy overhead on-chip learning using direct spike-only feedback," Proc. IEEE Int. Solid-State Circuits Conference (ISSCC), pp. 140–142, 2019.
[25] J. Zhang et al., "22.6 ANP-I: A 28nm 1.5pJ/SOP asynchronous spiking neural network processor enabling sub-0.1 μJ/sample on-chip learning for edge-AI applications," 2023 IEEE International Solid-State Circuits Conference (ISSCC), Feb. 2023, doi: 10.1109/ISSCC42615.2023.10067650.
[26] J. Park, Y. Jeong, J. Kim, S. Lee, J. Y. Kwak, and J.-K. Park, "High-density digital neuromorphic processor with high-precision neural and synaptic dynamics and temporal acceleration," 2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS), 2024.
[27] C.-F. Yeh, depth-from-motion, https://github.com/twetto/depth-from-motion, 2020.
[28] X. Ju, B. Fang, R. Yan, X. Xu, and H. Tang, "An FPGA implementation of deep spiking neural networks for low-power and fast classification," Neural Computation, vol. 32, no. 1, pp. 182–204, 2020.
[29] H. Fang, A. Shrestha, D. Ma, and Q. Qiu, "Scalable NoC-based neuromorphic hardware learning and inference," 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, 2018, pp. 1–8.
[30] S. Li, Z. Zhang, R. Mao, J. Xiao, L. Chang, and J. Zhou, "A fast and energy-efficient SNN processor with adaptive clock/event-driven computation scheme and online learning," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 68, no. 4, pp. 1543–1552, 2021.
[31] Y. Liu, Y. Chen, W. Ye, and Y. Gui, "FPGA-NHAP: A general FPGA-based neuromorphic hardware acceleration platform with high speed and low power," IEEE Transactions on Circuits and Systems I: Regular Papers, 2022.