[1] C.-X. Xue, W.-H. Chen, J.-S. Liu, J.-F. Li, W.-Y. Lin, W.-E. Lin, J.-H. Wang, W.-C. Wei, T.-Y. Huang, T.-W. Chang, T.-C. Chang, H.-Y. Kao, Y.-C. Chiu, C.-Y. Lee, Y.-C. King, C.-J. Lin, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, and M.-F. Chang, “Embedded 1-Mb ReRAM-Based Computing-in-Memory Macro with Multibit Input and Weight for CNN-Based AI Edge Processors,” IEEE Jour. Solid-State Circuits, vol. 55, no. 1, pp. 203-215, Jan. 2020.
[2] B. Yan, Q. Yang, W.-H. Chen, K.-T. Chang, J.-W. Su, C.-H. Hsu, S.-H. Li, H.-Y. Lee, S.-S. Sheu, M.-S. Ho, Q. Wu, M.-F. Chang, Y. Chen, and H. Li, “RRAM-Based Spiking Nonvolatile Computing-in-Memory Processing Engine with Precision-Configurable in situ Nonlinear Activation,” in Proc. 2019 Symp. VLSI Technology, pp. T86-T87, June 2019.
[3] F. N. Buhler, P. Brown, J. Li, T. Chen, Z. Zhang, and M. P. Flynn, “A 3.43 TOPS/W 48.9 pJ/pixel 50.1 nJ/classification 512 Analog Neuron Sparse Coding Neural Network with On-Chip Learning and Classification in 40nm CMOS,” in Proc. 2017 Symp. VLSI Circuits, pp. C30-C31, June 2017.
[4] K.-W. Hou, H.-H. Cheng, C. Tung, C.-W. Wu, and J.-M. Lu, “Fault Modeling and Testing of Memristor-Based Spiking Neural Networks,” in Proc. IEEE Int. Test Conf. (ITC), Sept. 2022.
[5] C. Tung, K.-W. Hou, and C.-W. Wu, “A Built-In Self-Calibration Scheme for Memristor-Based Spiking Neural Networks,” in Proc. Int. Symp. on VLSI Design, Automation, and Test (VLSI-DAT), Hsinchu, Apr. 2023 (to appear).
[6] K.-W. Hou, “A Power-Efficient Memristor-Based Spiking Neural Network for Real-Time Object Classification,” Ph.D. Dissertation, Dept. Electrical Engineering, National Tsing Hua Univ., Hsinchu, Taiwan (in preparation).
[7] P.-Y. Chuang, P.-Y. Tan, C.-W. Wu, and J.-M. Lu, “A 90nm 103.14 TOPS/W Binary-Weight Spiking Neural Network CMOS ASIC for Real-Time Object Classification,” in Proc. IEEE/ACM Design Automation Conf. (DAC), July 2020.
[8] P.-Y. Tan, P.-Y. Chuang, Y.-T. Lin, C.-W. Wu, and J.-M. Lu, “A Power Efficient Binary-Weight Spiking Neural Network Architecture for Real-Time Object Classification,” arXiv:2003.06310, 2020.
[9] M. E. Fouda, S. Lee, J. Lee, G. H. Kim, F. Kurdahi, and A. M. Eltawil, “IR-QNN Framework: An IR Drop-Aware Offline Training of Quantized Crossbar Arrays,” IEEE Access, vol. 8, pp. 228392-228408, Dec. 2020.
[10] S. Lee, G. Jung, M. E. Fouda, J. Lee, A. Eltawil, and F. Kurdahi, “Learning to Predict IR Drop with Effective Training for ReRAM-Based Neural Network Hardware,” in Proc. 57th ACM/IEEE Design Automation Conf. (DAC), pp. 1-6, July 2020.
[11] Y.-H. Chiang, C.-E. Ni, Y. Sung, T.-H. Hou, T.-S. Chang, and S.-J. Jou, “Hardware-Robust In-RRAM-Computing for Object Detection,” IEEE Jour. Emerging and Selected Topics in Circuits and Systems, vol. 12, no. 2, pp. 547-556, June 2022.
[12] W. Shim, J.-S. Seo, and S. Yu, “Two-Step Write-Verify Scheme and Impact of the Read Noise in Multilevel RRAM-Based Inference Engine,” Semiconductor Science and Technology, vol. 35, no. 11, Oct. 2020.
[13] J.-H. Yoon, M. Chang, W.-S. Khwa, Y.-D. Chih, M.-F. Chang, and A. Raychowdhury, “A 40-nm, 64-Kb, 56.67 TOPS/W Voltage-Sensing Computing-In-Memory/Digital RRAM Macro Supporting Iterative Write with Verification and Online Read-Disturb Detection,” IEEE Jour. Solid-State Circuits, vol. 57, no. 1, pp. 68-79, Jan. 2022.
[14] W. He, W. Shim, S. Yin, X. Sun, D. Fan, S. Yu, and J.-S. Seo, “Characterization and Mitigation of Relaxation Effects on Multi-Level RRAM based In-Memory Computing,” in Proc. 2021 IEEE Int. Reliability Physics Symp. (IRPS), pp. 1-7, Mar. 2021.
[15] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, and A. Lerer, “Automatic Differentiation in PyTorch,” in Proc. NIPS 2017 Workshop, Oct. 2017.
[16] Draw_convnet: https://github.com/gwding/draw_convnet.
[17] LTC3623: https://www.analog.com/en/products/ltc3623.html.
[18] L.-T. Wang, C.-W. Wu, and X. Wen, Design for Testability: VLSI Test Principles and Architectures, Elsevier (Morgan Kaufmann), San Francisco, 2006.