[1] A. Krizhevsky et al., “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[2] K. Simonyan et al., “Very Deep Convolutional Networks for Large-Scale Image Recognition,” ICLR, 2015.
[3] K. He et al., “Deep Residual Learning for Image Recognition,” arXiv preprint arXiv:1512.03385, 2015.
[4] Y. LeCun et al., “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.
[5] S. Zhou et al., “DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients,” arXiv preprint arXiv:1606.06160, 2016.
[6] Y. Bengio et al., “Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation,” arXiv preprint arXiv:1308.3432, 2013.
[7] B. Chen et al., “Efficient in-memory computing architecture based on crossbar arrays,” IEEE International Electron Devices Meeting, pp. 17.5.1–17.5.4, 2015.
[8] S. Li et al., “Pinatubo: A processing-in-memory architecture for bulk bitwise operations in emerging non-volatile memories,” ACM/EDAC/IEEE Design Automation Conference, pp. 1–6, 2016.
[9] Q. Dong et al., “A 0.3V VDDmin 4+2T SRAM for searching and in-memory computing using 55nm DDC technology,” IEEE Symposium on VLSI Circuits, pp. C160–C161, 2017.
[10] J. Zhang, Z. Wang, and N. Verma, “In-Memory Computation of a Machine-Learning Classifier in a Standard 6T SRAM Array,” IEEE Journal of Solid-State Circuits, vol. 52, no. 4, pp. 915–924, Apr. 2017.
[11] A. Biswas et al., “Conv-RAM: An energy-efficient SRAM with embedded convolution computation for low-power CNN-based machine learning applications,” IEEE International Solid-State Circuits Conference, pp. 488–490, 2018.
[12] S. K. Gonugondla et al., “A 42pJ/decision 3.12TOPS/W robust in-memory machine learning classifier with on-chip training,” IEEE International Solid-State Circuits Conference, pp. 490–492, 2018.
[13] M. Motomura et al., “BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W,” IEEE Journal of Solid-State Circuits, vol. 53, no. 4, pp. 983–994, Apr. 2018.
[14] W. Khwa et al., “A 65nm 4Kb algorithm-dependent computing-in-memory SRAM unit-macro with 2.3ns and 55.8TOPS/W fully parallel product-sum operation for binary DNN edge processors,” IEEE International Solid-State Circuits Conference, pp. 496–498, 2018.
[15] W.-H. Chen et al., “A 65nm 1Mb nonvolatile computing-in-memory ReRAM macro with sub-16ns multiply-and-accumulate for binary DNN AI edge processors,” IEEE International Solid-State Circuits Conference, pp. 494–496, 2018.
[16] P.-Y. Li et al., “A Neuromorphic Computing System for Bitwise Neural Networks Based on ReRAM Synaptic Array,” IEEE Biomedical Circuits and Systems Conference, 2018.
[17] R. Liu et al., “Parallelizing SRAM Arrays with Customized Bit-Cell for Binary Neural Networks,” ACM/ESDA/IEEE Design Automation Conference, pp. 1–6, 2018.
[18] X. Si et al., “A Twin-8T SRAM Computation-In-Memory Macro for Multiple-Bit CNN-Based Machine Learning,” IEEE International Solid-State Circuits Conference, pp. 396–398, 2019.