|
[1] J. K. Gill, “Automatic log analysis using deep learning and ai.,” 2018 (Oct 21,2018). https://www.xenonstack.com/blog/log-analytics-deep-machinelearning/? fbclid=IwAR2dOLlgJG6IH0MzlL-. [2] S. Kishi, “Simple regression model by tensorflow.,” 2017 (Sep 24,2017). https://marubonds. blogspot.com/2017/09/simple-regression-model-by-tensorflow.html. [3] S. Saha, “A comprehensive guide to convolutional neural networks —the eli5 way.,” 2018 (Dec 16,2018). https://towardsdatascience.com/a-comprehensive-guide-toconvolutional- neural-networks-the-eli5-way-3bd2b1164a53. [4] Y.-H. Chen, J. Emer, and V. Sze, “Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks,” IEEE Micro, vol. PP, pp. 1–1, 06 2017. [5] C.-X. Xue, T.-Y. Huang, J.-S. Liu, H.-Y. Kao, J.-H. Wang, T.-W. Liu, S.-Y. Wei, S.-P. Huang, W.-C. Wei, Y.-R. Chen, T.-H. Hsu, Y.-K. Chen, Y.-C. Lo, T.-H. Wen, C.-C. Lo, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, and M.-F. Chang, “15.4 a 22nm 2mb reram computein- memory macro with 121-28tops/w for multibit mac computing for tiny ai edge devices,” 10.1109/ISSCC19947.2020.9063078, pp. 244–246, 02 2020. [6] W.-H. Chen, K.-X. Li, W.-Y. Lin, K.-H. Hsu, P.-Y. Li, C.-H. Yang, C.-X. Xue, E.- Y. Yang, Y.-K. Chen, Y.-S. Chang, T.-H. Hsu, F. Chen, C.-J. Lin, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, and M.-F. Chang, “A 65nm 1mb nonvolatile computing-in-memory reram macro with sub-16ns multiply-and-accumulate for binary dnn ai edge processors,” 10.1109/ISSCC.2018.8310400, pp. 494–496, 02 2018. [7] C.-X. Xue, W.-H. Chen, J.-S. Liu, J.-F. Li, W.-Y. Lin, W.-E. Lin, J.-H. Wang, W.-C. Wei, T.-C. Chang, T.-Y. Huang, H.-Y. Kao, S.-Y. Wei, Y.-C. Chiu, C.-Y. Lee, C.-C. Lo, F. Chen, C.-J. Lin, R.-S. Liu, and M.-F. Chang, “24.1 a 1mb multibit reram computing-inmemory macro with 14.6ns parallel mac computing time for cnn based ai edge processors,” 10.1109/ISSCC.2019.8662395, pp. 388–390, 02 2019. [8] S. Han, H. Mao, and W. Dally, “Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding,” CoRR, vol. abs/1510.00149, 2016. [9] E. Touger, “What’s the difference between artificial intelligence (ai), machine learning, and deep learning?,” 2018 (Aug 3, 2018). https://www.prowesscorp.com/whats-thedifference- between-artificial-intelligence-ai-machine-learning-and-deep-learning. [10] B. Y. . H. G. LeCun, Y., “Deep learning.,” Nature, vol. 521, pp. 436–444, 2015. [11] e. a. Silver, D., “Mastering the game of go with deep neural networks and tree search.,” Nature, vol. 529, p. 484–489, 2016a. [12] M. A. Nielsen, “Neural networks and deep learning, determination press.,” 2015. [13] H. Leung and S. Haykin., “The complex backpropagation algorithm.,” IEEE Transactions on Signal Processing, vol. 39, no. 9, pp. 2101–2104, Sept. 1991. [14] S. B. M. R. Thomas Serre, Lior Wolf and I. Tomaso Poggio, Member, “Robust object recognition with cortex-like mechanisms,” IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 29(3), pp. 411–26, 2007. [15] Y. L. L. B. Y. Bengio and P. Haner., “Gradientbased learning applied to document recognition,” Processing of IEEE, vol. 86(11), pp. 2278–324, 1998. [16] G. E. D. T. N. S. G. E. Hinton, “Improving deep neural networks for lvcsr using rectified linear units and dropout,” Acoustics,Speech and Signal Processing(ICASSP), vol. 2013, pp. 8609–13, 2013. [17] F. Rosenblatt, “The perceptron: A probabilistic model for information storage and organization in the brain,” Psychological Review, pp. 65–386, 1958. [18] D. H. Hubel and T. N. Wiesel, “Receptive fields and functional architecture of monkey striate cortex.,” The Journal of physiology, vol. 195 1, pp. 215–43, 1968. [19] A. Krizhevsky, I. Sutskever, and G. Hinton, “Imagenet classification with deep convolutional neural networks,” Neural Information Processing Systems, vol. 25, 01 2012. [20] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” pp. 770–778, 06 2016. [21] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” 04 2017. [22] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” Proceedings of the 14th International Conference on Artificial Intelligence and Statisitics (AISTATS) 2011, vol. 15, pp. 315–323, 01 2011. [23] Y. Wu, H. Zhao, and L. Zhang, “Image denoising with rectified linear units,” 10.1007/978- 3-319-12643-218, vol. 8836, pp. 142 − −149, 112014. [24] Y. Lecun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural Computation, vol. 1, pp. 541–551, 12 1989. [25] S. Khan, “Ethem alpaydin. introduction to machine learning (adaptive computation and machine learning series),” Natural Language Engineering, vol. 14, pp. 133–137, 01 2008. [26] N. Jouppi, A. Borchers, R. Boyle, P.-l. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, C. Young, T. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. Ho, D. Hogberg, J. Hu, and N. Boden, “In-datacenter performance analysis of a tensor processing unit,” 10.1145/3079856.3080246, pp. 1–12, 06 2017. [27] J. Lee, J. Lee, D. Han, J. Lee, G. Park, and H.-J. Yoo, “7.7 lnpu: A 25.3tflops/w sparse deep-neural-network learning processor with fine-grained mixed precision of fp8-fp16,” 10.1109/ISSCC.2019.8662302, pp. 142–144, 02 2019. [28] V. Sze, Y.-H. Chen, T.-J. Yang, and J. Emer, “Efficient processing of deep neural networks: A tutorial and survey,” Proceedings of the IEEE, vol. 105, 03 2017. [29] A. Biswas and A. Chandrakasan, “Conv-ram: An energy-efficient sram with embedded convolution computation for low-power cnn-based machine learning applications,” 10.1109/ISSCC.2018.8310397, pp. 488–490, 02 2018. [30] S. Gonugondla, M. Kang, and N. Shanbhag, “A 42pj/decision 3.12tops/ w robust in-memory machine learning classifier with on-chip training,” 10.1109/ISSCC.2018.8310398, pp. 490–492, 02 2018. [31] X. Si, Y.-N. Tu, W.-H. Huanq, J.-W. Su, P.-J. Lu, J.-H. Wang, T.-W. Liu, S.-Y. Wu, R. Liu, Y.-C. Chou, Z. Zhang, S.-H. Sie, W.-C. Wei, Y.-C. Lo, T.-H. Wen, T.-H. Hsu, Y.-K. Chen, W. Shih, C.-C. Lo, and M.-F. Chang, “15.5 a 28nm 64kb 6t sram computing-in-memory macro with 8b mac operation for ai edge chips,” 10.1109/ISSCC19947.2020.9062995, pp. 246–248, 02 2020. [32] X. Si, J.-J. Chen, Y.-N. Tu, W.-H. Huang, J.-H. Wang, Y.-C. Chiu, W.-C. Wei, S.-Y. Wu, X. Sun, R. Liu, S. Yu, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, Q. Li, and M.-F. Chang, “24.5 a twin-8t sram computation-in-memory macro for multiple-bit cnn-based machine learning,” 10.1109/ISSCC.2019.8662392, pp. 396–398, 02 2019. [33] W.-S. Khwa, J.-J. Chen, J.-F. Li, X. Si, E.-Y. Yang, X. Sun, R. Liu, P.-Y. Chen, Q. Li, S. Yu, and M.-F. Chang, “A 65nm 4kb algorithm-dependent computing-in-memory sram unit-macro with 2.3ns and 55.8tops/w fully parallel product-sum operation for binary dnn edge processors,” 10.1109/ISSCC.2018.8310401, pp. 496–498, 02 2018. [34] W.-H. Chen, C. Dou, K.-X. Li, W.-Y. Lin, P.-Y. Li, J.-H. Huang, J.-H. Wang, W.-C. Wei, C.-X. Xue, Y.-C. Chiu, F. Chen, C.-J. Lin, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, J. Yang, M.-S. Ho, and M.-F. Chang, “Cmos-integrated memristive non-volatile computing-inmemory for ai edge processors,” Nature Electronics, vol. 2, 08 2019. [35] J. Hung, X. Li, J. Wu, and M. Chang, “Challenges and trends indeveloping nonvolatile memory-enabled computing chips for intelligent edge devices,” IEEE Transactions on Electron Devices, vol. 67, no. 4, pp. 1444–1453, 2020. [36] Y. Liao, H. Wu, W. Wan, W. Zhang, B. Gao, H.-S. P. Wong, and H. Qian, “Novel in-memory matrix-matrix multiplication with resistive cross-point arrays,” 10.1109/VLSIT. 2018.8510634, pp. 31–32, 06 2018. [37] M.-F. Chang, C.-C. Kuo, S.-S. Sheu, C.-J. Lin, Y.-C. King, F. Chen, T. Ku, M.-J. Tsai, J.-J. Wu, and Y.-D. Chih, “Area-efficient embedded resistive ram (reram) macros using logic-process vertical-parasitic-bjt (vpbjt) switches and read-disturb-free temperatureaware current-mode read scheme,” Solid-State Circuits, IEEE Journal of, vol. 49, pp. 908– 916, 04 2014. [38] M.-F. Chang, C.-W. Wu, C.-C. Kuo, S.-J. Shen, S.-M. Yang, K.-F. Lin, W.-C. Shen, Y.- C. King, C.-J. Lin, and Y.-D. Chih, “A low-voltage bulk-drain-driven read scheme for sub-0.5 v 4 mb 65 nm logic-process compatible embedded resistive ram (reram) macro,” Solid-State Circuits, IEEE Journal of, vol. 48, pp. 2250–2259, 09 2013. [39] J. Wu, Y. Chen, W. Khwa, S. Yu, T. Wang, J. Tseng, Y. Chih, and C. Diaz, “A 40nm low-power logic compatible phase change memory technology,” 10.1109/IEDM.2018.8614513, pp. 27.6.1–27.6.4, 12 2018. [40] M. W. W. Zhang, R. Mazzarello and E. Ma, “Designing crystallization in phase-change materials for universal memory and neuroinspired computing.,” Nature, vol. 4, pp. 107– 108, 01 2019. [41] A. Patil, H. Hua, S. Gonugondla, M. Kang, and N. Shanbhag, “An mram-based deep inmemory architecture for deep neural networks,” pp. 1–5, 05 2019. [42] G. Hu, M. Gottwald, Q. He, J.-H. Park, G. Lauer, J. Nowak, S. Brown, B. Doris, D. Edelstein, E. Evarts, P. Hashemi, B. Khan, Y. Kim, C. Kothandaraman, N. Marchack, E. O’Sullivan, M. Reuter, R. Robertazzi, J. Sun, and D. Worledge, “Key parameters affecting stt-mram switching efficiency and improved device performance of 400°c-compatible p-mtjs,” 10.1109/IEDM.2017.8268515, pp. 38.3.1–38.3.4, 12 2017. [43] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015. Software available from tensorflow.org. [44] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” arXiv preprint arXiv:1408.5093, 2014. [45] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in pytorch,” 2017. [46] T. E. Oliphant, A guide to NumPy, vol. 1. Trelgol Publishing USA, 2006. [47] Y. LeCun, C. Cortes, and C. Burges, “Mnist handwritten digit database,” ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, vol. 2, 2010. [48] A. Krizhevsky, V. Nair, and G. Hinton, “Cifar-10 (canadian institute for advanced research),” [49] I. Hubara, D. Soudry, and R. El-Yaniv, “Binarized neural networks,” 02 2016. [50] S. Zhou, Z. Ni, X. Zhou, H. Wen, Y. Wu, and Y. Zou, “Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients,” CoRR, vol. abs/1606.06160, 2016. [51] X. Jin and J. Han, K-Means Clustering, pp. 563–564. Boston, MA: Springer US, 2010. [52] D. Huffman, “A method for the construction of minimum-redundancy codes,” Resonance, vol. 11, pp. 91–99, 02 2006. [53] P. Yin, J. Lyu, S. Zhang, S. J. Osher, Y. Qi, and J. Xin, “Understanding straight-through estimator in training activation quantized neural nets,” CoRR, vol. abs/1903.05662, 2019. [54] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv preprint arXiv:1609.04747, 2016. |