[1] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, May 2015.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Advances in Neural Information Processing Systems 25 (NIPS 2012), pp. 1106-1114, 2012.
[3] H. Sharma, J. Park, D. Mahajan, E. Amaro, J. K. Kim, C. Shao, A. Mishra, and H. Esmaeilzadeh, "From high-level deep neural models to FPGAs," in Proc. 49th Ann. IEEE/ACM Int. Symp. on Microarchitecture (MICRO), pp. 1-12, Oct. 2016.
[4] N. P. Jouppi, C. Young, N. Patil, D. Patterson, et al., "In-datacenter performance analysis of a tensor processing unit," in Proc. Int. Symp. on Computer Architecture (ISCA), pp. 1-12, June 2017.
[5] C. Merkel, R. Hasan, N. Soures, D. Kudithipudi, T. Taha, S. Agarwal, and M. Marinella, "Neuromemristive systems: Boosting efficiency through brain-inspired computing," IEEE Computer, vol. 49, no. 10, pp. 56-64, Oct. 2016.
[6] C. D. Schuman, T. E. Potok, R. M. Patton, J. D. Birdwell, M. E. Dean, G. S. Rose, and J. S. Plank, "A survey of neuromorphic computing and neural networks in hardware," arXiv:1705.06963v1 [cs.NE], May 2017.
[7] B. Li, Y. Shan, M. Hu, Y. Wang, Y. Chen, and H. Yang, "Memristor-based approximated computation," in Proc. Int. Symp. on Low Power Electronics and Design (ISLPED), pp. 242-247, Sep. 2013.
[8] P. Chi, S. Li, C. Xu, T. Zhang, J. Zhao, Y. Liu, Y. Wang, and Y. Xie, "PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory," in Proc. 43rd Int. Symp. on Computer Architecture (ISCA), pp. 27-39, Jun. 2016.
[9] A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P. Strachan, M. Hu, R. S. Williams, and V. Srikumar, "ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars," in Proc. 43rd Int. Symp. on Computer Architecture (ISCA), pp. 14-26, Jun. 2016.
[10] P. Mazumder, S. Kang, and R. Waser, "Memristors: Devices, models and applications," Proc. of the IEEE, vol. 100, no. 6, pp. 1911-1919, Jun. 2012.
[11] D. Walczyk, T. Bertaud, M. Sowinska, M. Lukosius, et al., "Resistive switching behavior in TiN/HfO2/Ti/TiN devices," in Proc. Int. Semiconductor Conference Dresden-Grenoble (ISCDG), pp. 143-146, Sept. 2012.
[12] C. Y. Chen, H. C. Shih, C. W. Wu, C. H. Lin, P. F. Chiu, S. S. Sheu, and F. T. Chen, "RRAM defect modeling and failure analysis based on march test and a novel squeeze-search scheme," IEEE Transactions on Computers, vol. 64, no. 1, pp. 180-190, Jan. 2015.
[13] L. Xia, M. Liu, X. Ning, K. Chakrabarty, and Y. Wang, "Fault-tolerant training with on-line fault detection for RRAM-based neural computing systems," in Proc. 54th Design Automation Conference (DAC), p. 33, Jun. 2017.
[14] W. Huangfu, L. Xia, M. Cheng, X. Yin, T. Tang, B. Li, K. Chakrabarty, Y. Xie, Y. Wang, and H. Yang, "Computation-oriented fault-tolerance schemes for RRAM computing systems," in Proc. Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 794-799, Jan. 2017.
[15] L. Chen, J. Li, Y. Chen, Q. Deng, J. Shen, X. Liang, and L. Jiang, "Accelerator-friendly neural-network training: Learning variations and defects in RRAM crossbar," in Proc. Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 19-24, March 2017.
[16] M. Hu, H. Li, Q. Wu, and G. S. Rose, "Hardware realization of BSB recall function using memristor crossbar arrays," in Proc. Design Automation Conference (DAC), pp. 498-503, Jun. 2012.
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[18] L.-T. Wang, C.-W. Wu, and X. Wen, Design for Testability: VLSI Test Principles and Architectures, Elsevier (Morgan Kaufmann), San Francisco, 2006.
[19] C.-F. Wu, C.-T. Huang, and C.-W. Wu, "RAMSES: A fast memory fault simulator," in Proc. IEEE Int. Symp. on Defect and Fault Tolerance in VLSI Systems (DFT), pp. 165-173, Nov. 1999.
[20] Y. LeCun, C. Cortes, and C. J. C. Burges, "The MNIST database of handwritten digits," 1998.
[21] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," in Proc. Int. Conference on Learning Representations (ICLR), arXiv preprint arXiv:1510.00149, Oct. 2015.
[22] P. Pouyan, E. Amat, and A. Rubio, "Memristive crossbar memory lifetime evaluation and reconfiguration strategies," IEEE Trans. on Emerging Topics in Computing, p. 1, June 2016.
[23] P.-Y. Chung, C.-W. Wu, and H. H. Chen, "Covering hard-to-detect defects by thermal quorum sensing," in Proc. European Test Symp. (ETS), pp. 1-2, May 2018.
[24] B.-Y. Lin, H.-W. Hung, S.-M. Tseng, C. Chen, and C.-W. Wu, "Highly reliable and low-cost symbiotic IoT devices and systems," in Proc. Int. Test Conference (ITC), pp. 1-10, Oct. 2017.
[25] C.-W. Wu, B.-Y. Lin, H.-W. Hung, S.-M. Tseng, and C. Chen, "Symbiotic system models for efficient IoT system design and test," in Proc. Int. Test Conference in Asia (ITC-Asia), pp. 71-76, Nov. 2017.