[1] T. Chen, I. J. Goodfellow, and J. Shlens. Net2Net: Accelerating learning via knowledge transfer. CoRR, abs/1511.05641, 2016.

[2] L.-C. Chu and B. W. Wah. Fault tolerant neural networks with hybrid redundancy. In 1990 IJCNN International Joint Conference on Neural Networks, pages 639–649 vol. 2, June 1990.

[3] D. Deodhare, M. Vidyasagar, and S. Sathiya Keerthi. Synthesis of fault-tolerant feedforward neural networks using minimax optimization. IEEE Transactions on Neural Networks, 9(5):891–900, Sep. 1998.

[4] P. J. Edwards and A. F. Murray. Penalty terms for fault tolerance. In Proceedings of International Conference on Neural Networks (ICNN'97), volume 2, pages 943–947 vol. 2, June 1997.

[5] D. Ernst, S. Das, S. Lee, D. Blaauw, T. Austin, T. Mudge, N. Kim, and K. Flautner. Razor: Circuit-level correction of timing errors for low-power operation. IEEE Micro, 24:10–20, Dec. 2004.

[6] Y. Guo. A survey on methods and theories of quantized neural networks. ArXiv, abs/1808.04752, 2018.

[7] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2016.

[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, June 2016.

[9] A. Krizhevsky, V. Nair, and G. Hinton. CIFAR-10.

[10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, Nov. 1998.

[11] Y. LeCun and C. Cortes. MNIST handwritten digit database. 2010.

[12] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient ConvNets. CoRR, abs/1608.08710, 2016.

[13] T. McConaghy, K. Breen, J. Dyck, and A. Gupta. Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide, pages 187–188. 2013.

[14] C. Neti, M. H. Schneider, and E. D. Young. Maximally fault tolerant neural networks. IEEE Transactions on Neural Networks, 3(1):14–23, Jan. 1992.

[15] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In NIPS-W, 2017.

[16] S. Piche. Robustness of feedforward neural networks. In [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, volume 2, pages 346–351 vol. 2, June 1992.

[17] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: Hints for thin deep nets. CoRR, abs/1412.6550, 2015.

[18] C. H. Sequin and R. D. Clay. Fault tolerance in artificial neural networks. In 1990 IJCNN International Joint Conference on Neural Networks, pages 703–708 vol. 1, June 1990.

[19] A. Srivastava, D. Sylvester, and D. Blaauw. Statistical analysis and optimization for VLSI: Timing and power. In Series on Integrated Circuits and Systems, 2005.

[20] C. Torres-Huitzil and B. Girau. Fault and error tolerance in neural networks: A review. IEEE Access, 5:17322–17341, 2017.

[21] P. N. Whatmough, S. K. Lee, D. Brooks, and G. Wei. DNN engine: A 28-nm timing-error tolerant sparse deep neural network processor for IoT applications. IEEE Journal of Solid-State Circuits, 53(9):2722–2731, Sep. 2018.