[1] M. Courbariaux, Y. Bengio, and J.-P. David, "BinaryConnect: training deep neural networks with binary weights during propagations," in Proc. NIPS, pp. 3123–3131, 2015.
[2] M. Courbariaux and Y. Bengio, "BinaryNet: training deep neural networks with weights and activations constrained to +1 or -1," ArXiv:1602.02830, 2016.
[3] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, "Binarized neural networks: training deep neural networks with weights and activations constrained to +1 or -1," ArXiv:1602.02830, 2016.
[4] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," in Proc. ECCV, pp. 525–542, 2016.
[5] P. Gysel, M. Motamedi, and S. Ghiasi, "Hardware-oriented approximation of convolutional neural networks," ArXiv:1604.03168, 2016.
[6] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, "DoReFa-Net: training low bitwidth convolutional neural networks with low bitwidth gradients," ArXiv:1606.06160, 2016.
[7] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, "Quantized neural networks: training neural networks with low precision weights and activations," ArXiv:1609.07061, 2016.
[8] S. Han, J. Pool, J. Tran, and W. J. Dally, "Learning both weights and connections for efficient neural networks," in Proc. NIPS, pp. 1135–1143, 2015.
[9] T.-J. Yang, Y.-H. Chen, and V. Sze, "Designing energy-efficient convolutional neural networks using energy-aware pruning," in Proc. CVPR, 2017.
[10] S. Han, H. Mao, and W. J. Dally, "Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding," ArXiv:1510.00149, 2015.
[11] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, "Compressing neural networks with the hashing trick," in Proc. ICML, pp. 2285–2294, 2015.
[12] H. Kim, J. Sim, Y. Choi, and L.-S. Kim, "A kernel decomposition architecture for binary-weight convolutional neural networks," in Proc. DAC, p. 60, 2017.
[13] C.-C. Chi and J.-H. R. Jiang, "Logic synthesis of binarized neural networks for efficient circuit implementation," in Proc. ICCAD, pp. 84:1–84:7, 2018.
[14] Y. Umuroglu, N. J. Fraser, G. Gambardella, M. Blott, P. Leong, M. Jahre, and K. Vissers, "FINN: a framework for fast, scalable binarized neural network inference," in Proc. Int. Symp. on Field-Programmable Gate Arrays, pp. 65–74, 2017.
[15] A. Krizhevsky, "Learning multiple layers of features from tiny images," MS thesis, University of Toronto, https://www.cs.toronto.edu/~kriz/cifar.html, 2009.
[16] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading digits in natural images with unsupervised feature learning," in NIPS Workshop on Deep Learning and Unsupervised Feature Learning, p. 5, 2011.
[17] "Keras," https://keras.io/.
[18] "GUROBI," https://www.gurobi.com/.