[1] Y.-C. Chiu, Z. Zhang, J.-J. Chen, X. Si, R. Liu, Y.-N. Tu, J.-W. Su, W.-H. Huang, J.-H. Wang, W.-C. Wei, J.-M. Hung, S.-S. Sheu, S.-H. Li, C.-I. Wu, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, and M.-F. Chang, “A 4-kb 1-to-8-bit configurable 6T SRAM-based computation-in-memory unit-macro for CNN-based AI edge processors,” IEEE Journal of Solid-State Circuits, vol. 55, no. 10, pp. 2790–2801, Oct. 2020.
[2] C.-S. Lin, F.-C. Tsai, J.-W. Su, S.-H. Li, T.-S. Chang, S.-S. Sheu, W.-C. Lo, S.-C. Chang, C.-I. Wu, and T.-H. Hou, “A 48 TOPS and 20943 TOPS/W 512KB computation-in-SRAM macro for highly reconfigurable ternary CNN acceleration,” in 2021 IEEE Asian Solid-State Circuits Conference (A-SSCC), Nov. 2021, pp. 1–3.
[3] Q. Wang, Y. Park, and W. Lu, “Device variation effects on neural network inference accuracy in analog in-memory computing systems,” Advanced Intelligent Systems, p. 2100199, Jan. 2022.
[4] Y.-W. Kang, C.-F. Wu, Y.-H. Chang, T.-W. Kuo, and S.-Y. Ho, “On minimizing analog variation errors to resolve the scalability issue of ReRAM-based crossbar accelerators,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 39, no. 11, pp. 3856–3867, Nov. 2020.
[5] Z.-H. Lee, F.-C. Tsai, and S.-C. Chang, “Robust binary neural network against noisy analog computation,” in 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), March 2022, pp. 484–489.
[6] V. Joshi, M. L. Gallo, S. Haefeli, I. Boybat, S. R. Nandakumar, C. Piveteau, M. Dazzi, B. Rajendran, A. Sebastian, and E. Eleftheriou, “Accurate deep neural network inference using computational phase-change memory,” Nature Communications, vol. 11, no. 1, May 2020. [Online]. Available: https://doi.org/10.1038/s41467-020-16108-9
[7] M. Qin and D. Vucinic, “Training recurrent neural networks against noisy computations during inference,” 2018. [Online]. Available: https://arxiv.org/abs/1807.06555
[8] S. Yu, H. Jiang, S. Huang, X. Peng, and A. Lu, “Compute-in-memory chips for deep learning: Recent trends and prospects,” IEEE Circuits and Systems Magazine, vol. 21, no. 3, pp. 31–56, Third Quarter 2021.
[9] H. Kim, J.-H. Bae, S. Lim, S.-T. Lee, Y.-T. Seo, D. Kwon, B.-G. Park, and J.-H. Lee, “Efficient precise weight tuning protocol considering variation of the synaptic devices and target accuracy,” Neurocomputing, vol. 378, pp. 189–196, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925231219314900
[10] M. Klachko, M. R. Mahmoodi, and D. B. Strukov, “Improving noise tolerance of mixed-signal neural networks,” 2019. [Online]. Available: https://arxiv.org/abs/1904.01705
[11] C. Zhou, P. Kadambi, M. Mattina, and P. N. Whatmough, “Noisy machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation,” 2020. [Online]. Available: https://arxiv.org/abs/2001.04974
[12] P. Warden, “Speech commands: A dataset for limited-vocabulary speech recognition,” 2018. [Online]. Available: https://arxiv.org/abs/1804.03209
[13] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet classification using binary convolutional neural networks,” in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 525–542.