[1] Y.-C. Chiu, Z. Zhang, J.-J. Chen, X. Si, R. Liu, Y.-N. Tu, J.-W. Su, W.-H. Huang, J.-H. Wang, W.-C. Wei, J.-M. Hung, S.-S. Sheu, S.-H. Li, C.-I. Wu, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, and M.-F. Chang, "A 4-Kb 1-to-8-bit configurable 6T SRAM-based computation-in-memory unit-macro for CNN-based AI edge processors", IEEE Journal of Solid-State Circuits, vol. 55, no. 10, pp. 2790–2801, Oct. 2020.
[2] C.-S. Lin, F.-C. Tsai, J.-W. Su, S.-H. Li, T.-S. Chang, S.-S. Sheu, W.-C. Lo, S.-C. Chang, C.-I. Wu, and T.-H. Hou, "A 48 TOPS and 20943 TOPS/W 512Kb computation-in-SRAM macro for highly reconfigurable ternary CNN acceleration", in 2021 IEEE Asian Solid-State Circuits Conference (A-SSCC), Nov. 2021, pp. 1–3.
[3] C. Zhou, P. Kadambi, M. Mattina, and P. N. Whatmough, "Noisy machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation", 2020. [Online]. Available: https://arxiv.org/abs/2001.04974
[4] J. Lorenz, E. Bär, A. Burenkov, P. Evanschitzky, A. Asenov, L. Wang, X. Wang, A. R. Brown, C. Millar, and D. Reid, "Simultaneous simulation of systematic and stochastic process variations", in 2014 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD), Sep. 2014, pp. 289–292.
[5] Q. Wang, Y. Park, and W. Lu, "Device variation effects on neural network inference accuracy in analog in-memory computing systems", Advanced Intelligent Systems, p. 2100199, Jan. 2022.
[6] Z.-H. Lee, F.-C. Tsai, and S.-C. Chang, "Robust binary neural network against noisy analog computation", in 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), Mar. 2022, pp. 484–489.
[7] Y.-W. Kang, C.-F. Wu, Y.-H. Chang, T.-W. Kuo, and S.-Y. Ho, "On minimizing analog variation errors to resolve the scalability issue of ReRAM-based crossbar accelerators", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 39, no. 11, pp. 3856–3867, Nov. 2020.
[8] M. Qin and D. Vucinic, "Training recurrent neural networks against noisy computations during inference", 2018. [Online]. Available: https://arxiv.org/abs/1807.06555
[9] Y. Jiang, R. M. Zur, L. L. Pesce, and K. Drukker, "A study of the effect of noise injection on the training of artificial neural networks", in 2009 International Joint Conference on Neural Networks, Jun. 2009, pp. 1428–1432.
[10] V. Joshi, M. Le Gallo, S. Haefeli, I. Boybat, S. R. Nandakumar, C. Piveteau, M. Dazzi, B. Rajendran, A. Sebastian, and E. Eleftheriou, "Accurate deep neural network inference using computational phase-change memory", Nature Communications, vol. 11, no. 1, May 2020. [Online]. Available: https://doi.org/10.1038/s41467-020-16108-9
[11] R. Bruce, S. G. Sarwat, I. Boybat, C.-W. Cheng, W. Kim, S. R. Nandakumar, C. Mackin, T. Philip, Z. Liu, K. Brew, N. Gong, I. Ok, P. Adusumilli, K. Spoon, S. Ambrogio, B. Kersting, T. Bohnstingl, M. Le Gallo, A. Simon, N. Li, I. Saraf, J.-P. Han, L. Gignac, J. Papalia, T. Yamashita, N. Saulnier, G. W. Burr, H. Tsai, A. Sebastian, V. Narayanan, and M. BrightSky, "Mushroom-type phase change memory with projection liner: An array-level demonstration of conductance drift and noise mitigation", in 2021 IEEE International Reliability Physics Symposium (IRPS), Mar. 2021, pp. 1–6.
[12] S. R. Nandakumar, I. Boybat, V. Joshi, C. Piveteau, M. Le Gallo, B. Rajendran, A. Sebastian, and E. Eleftheriou, "Phase-change memory models for deep learning training and inference", in 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Nov. 2019, pp. 727–730.
[13] J. Mockus, V. Tiesis, and A. Zilinskas, "The application of Bayesian methods for seeking the extremum", in Towards Global Optimization, vol. 2, 1978, pp. 117–129.
[14] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks", in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 525–542.
[15] H. Kim, J.-H. Bae, S. Lim, S.-T. Lee, Y.-T. Seo, D. Kwon, B.-G. Park, and J.-H. Lee, "Efficient precise weight tuning protocol considering variation of the synaptic devices and target accuracy", Neurocomputing, vol. 378, pp. 189–196, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925231219314900
[16] M. Klachko, M. R. Mahmoodi, and D. B. Strukov, "Improving noise tolerance of mixed-signal neural networks", 2019. [Online]. Available: https://arxiv.org/abs/1904.01705
[17] C. Zhou, P. Kadambi, M. Mattina, and P. N. Whatmough, "Noisy machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation", 2020. [Online]. Available: https://arxiv.org/abs/2001.04974
[18] P. Warden, "Speech commands: A dataset for limited-vocabulary speech recognition", 2018. [Online]. Available: https://arxiv.org/abs/1804.03209
[19] M. Courbariaux, Y. Bengio, and J.-P. David, "BinaryConnect: Training deep neural networks with binary weights during propagations", 2015. [Online]. Available: https://arxiv.org/abs/1511.00363
[20] R. Tang and J. Lin, "Deep residual learning for small-footprint keyword spotting", 2017. [Online]. Available: https://arxiv.org/abs/1710.10361
[21] S. Nassif, "Modeling and forecasting of manufacturing variations", in Proceedings of the ASP-DAC 2001. Asia and South Pacific Design Automation Conference 2001 (Cat. No. 01EX455), Feb. 2001, pp. 145–149.
[22] S. Yu, H. Jiang, S. Huang, X. Peng, and A. Lu, "Compute-in-memory chips for deep learning: Recent trends and prospects", IEEE Circuits and Systems Magazine, vol. 21, no. 3, pp. 31–56, thirdquarter 2021.
[23] A. Krizhevsky, "Learning multiple layers of features from tiny images", Tech. Rep., University of Toronto, 2009.
[24] C.-I. Chung, "Post-Silicon Calibration of CIM Deep Learning Model", Master's thesis, 2022.