[1] Michaël Gharbi, Gaurav Chaurasia, Sylvain Paris, and Frédo Durand, “Deep joint demosaicking and denoising,” ACM Trans. Graph., vol. 35, no. 6, Nov. 2016.
[2] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process., vol. 26, no. 7, pp. 3142–3155, 2017.
[3] Kai Zhang, Wangmeng Zuo, and Lei Zhang, “FFDNet: Toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process., vol. 27, no. 9, pp. 4608–4622, 2018.
[4] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang, “Learning a deep convolutional network for image super-resolution,” in Computer Vision – ECCV 2014, David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, Eds., Cham, 2014, pp. 184–199, Springer International Publishing.
[5] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee, “Accurate image super-resolution using very deep convolutional networks,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1646–1654.
[6] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” arXiv e-prints, p. arXiv:1609.04802, Sept. 2016.
[7] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee, “Enhanced deep residual networks for single image super-resolution,” arXiv e-prints, p. arXiv:1707.02921, July 2017.
[8] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi, “XNOR-Net: ImageNet classification using binary convolutional neural networks,” CoRR, vol. abs/1603.05279, 2016.
[9] Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou, “DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients,” CoRR, vol. abs/1606.06160, 2016.
[10] Asit K. Mishra, Eriko Nurvitadhi, Jeffrey J. Cook, and Debbie Marr, “WRPN: Wide reduced-precision networks,” in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 – May 3, 2018, Conference Track Proceedings. 2018, OpenReview.net.
[11] Jungwook Choi, Swagath Venkataramani, Vijayalakshmi Srinivasan, Kailash Gopalakrishnan, Zhuo Wang, and Pierce Chuang, “Accurate and efficient 2-bit quantized neural networks,” in Proceedings of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA, March 31 – April 2, 2019, Ameet Talwalkar, Virginia Smith, and Matei Zaharia, Eds. 2019, mlsys.org.
[12] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio, “Quantized neural networks: Training neural networks with low precision weights and activations,” J. Mach. Learn. Res., vol. 18, pp. 187:1–187:30, 2017.
[13] Patrick Judd, Jorge Albericio, and Andreas Moshovos, “Stripes: Bit-serial deep neural network computing,” IEEE Comput. Archit. Lett., vol. 16, no. 1, pp. 80–83, 2017.
[14] Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Vikas Chandra, and Hadi Esmaeilzadeh, “Bit Fusion: Bit-level dynamically composable architecture for accelerating deep neural networks,” in 45th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2018, Los Angeles, CA, USA, June 1–6, 2018, Murali Annavaram, Timothy Mark Pinkston, and Babak Falsafi, Eds. 2018, pp. 764–775, IEEE Computer Society.
[15] Li-Wei Wang and Chao-Tsung Huang, Reconfigurable Convolution Engine with Coarse-Grained Bit-Level Flexibility for Style Transfer, National Tsing Hua University, 2021.
[16] Philipp Gysel, Mohammad Motamedi, and Soheil Ghiasi, “Hardware-oriented approximation of convolutional neural networks,” CoRR, vol. abs/1604.03168, 2016.
[17] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David, “Training deep neural networks with low precision multiplications,” 2014.
[18] Philipp Gysel, “Ristretto: Hardware-oriented approximation of convolutional neural networks,” 2016.
[19] Chao-Tsung Huang, Yu-Chun Ding, Huan-Ching Wang, Chi-Wen Weng, Kai-Ping Lin, Li-Wei Wang, and Li-De Chen, “eCNN: A block-based and highly-parallel CNN accelerator for edge inference,” arXiv e-prints, p. arXiv:1910.05680, Oct. 2019.