[1] O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu. Convolutional neural networks for speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(10):1533–1545, 2014.
[2] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[3] A. Brock, S. De, and S. L. Smith. Characterizing signal propagation to close the performance gap in unnormalized ResNets. In International Conference on Learning Representations, 2020.
[4] L. Chang, X. Ma, Z. Wang, Y. Zhang, W. Zhao, and Y. Xie. CORN: In-buffer computing for binary neural network. In 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 384–389, 2019.
[5] Y.-C. Chiu, Z. Zhang, J.-J. Chen, X. Si, R. Liu, Y.-N. Tu, J.-W. Su, W.-H. Huang, J.-H. Wang, W.-C. Wei, J.-M. Hung, S.-S. Sheu, S.-H. Li, C.-I. Wu, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, and M.-F. Chang. A 4-kb 1-to-8-bit configurable 6T SRAM-based computation-in-memory unit-macro for CNN-based AI edge processors. IEEE Journal of Solid-State Circuits, 55(10):2790–2801, 2020.
[6] S. Choi, S. Seo, B. Shin, H. Byun, M. Kersner, B. Kim, D. Kim, and S. Ha. Temporal convolution for real-time keyword spotting on mobile devices. arXiv preprint arXiv:1904.03814, 2019.
[7] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or −1. arXiv preprint arXiv:1602.02830, 2016.
[8] L. Huang, J. Qin, Y. Zhou, F. Zhu, L. Liu, and L. Shao. Normalization techniques in training DNNs: Methodology, analysis and application. arXiv preprint arXiv:2009.12836, 2020.
[9] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. The Journal of Machine Learning Research, 18(1):6869–6898, 2017.
[10] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456. PMLR, 2015.
[11] V. Joshi, M. Le Gallo, S. Haefeli, I. Boybat, S. R. Nandakumar, C. Piveteau, M. Dazzi, B. Rajendran, A. Sebastian, and E. Eleftheriou. Accurate deep neural network inference using computational phase-change memory. Nature Communications, 11(1):1–13, 2020.
[12] M. Klachko, M. R. Mahmoodi, and D. Strukov. Improving noise tolerance of mixed-signal neural networks. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2019.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097–1105, 2012.
[14] M. M. Lopez and J. Kalita. Deep learning applied to NLP. arXiv preprint arXiv:1703.03091, 2017.
[15] S. Mittermaier, L. Kürzinger, B. Waschneck, and G. Rigoll. Small-footprint keyword spotting on raw audio data with sinc-convolutions. In ICASSP 2020 – 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7454–7458. IEEE, 2020.
[16] S. Qiao, H. Wang, C. Liu, W. Shen, and A. Yuille. Micro-batch training with batch-channel normalization and weight standardization. arXiv preprint arXiv:1903.10520, 2019.
[17] H. Qin, R. Gong, X. Liu, X. Bai, J. Song, and N. Sebe. Binary neural networks: A survey. Pattern Recognition, 105:107281, 2020.
[18] M. Qin and D. Vucinic. Training recurrent neural networks against noisy computations during inference. In 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pages 71–75. IEEE, 2018.
[19] M. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, 2007.
[20] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016.
[21] M. Ravanelli and Y. Bengio. Speaker recognition from raw waveform with SincNet. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 1021–1028. IEEE, 2018.
[22] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[23] R. Tang and J. Lin. Deep residual learning for small-footprint keyword spotting. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5484–5488. IEEE, 2018.
[24] S. Ward-Foxton. Mythic AI accelerator targets high-end edge with 35 TOPS, Nov 2020.
[25] P. Warden. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209, 2018.
[26] T. Young, D. Hazarika, S. Poria, and E. Cambria. Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, 13(3):55–75, 2018.
[27] J. Zhang, Z. Wang, and N. Verma. In-memory computation of a machine-learning classifier in a standard 6T SRAM array. IEEE Journal of Solid-State Circuits, 52(4):915–924, 2017.
[28] Y. Zhang, N. Suda, L. Lai, and V. Chandra. Hello edge: Keyword spotting on microcontrollers. arXiv preprint arXiv:1711.07128, 2017.
[29] C. Zhou, P. Kadambi, M. Mattina, and P. N. Whatmough. Noisy machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation. arXiv preprint arXiv:2001.04974, 2020.