[1] Ron Banner, Yury Nahshan, and Daniel Soudry. Post training 4-bit quantization of convolutional networks for rapid-deployment. Advances in Neural Information Processing Systems, 32, 2019.

[2] Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. ZeroQ: A novel zero shot quantization framework. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13169–13178, 2020.

[3] Yoni Choukroun, Eli Kravchik, Fan Yang, and Pavel Kisilev. Low-bit quantization of neural networks for efficient inference. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3009–3018. IEEE, 2019.

[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.

[5] Kirsty Duncan, Ekaterina Komendantskaya, Robert Stewart, and Michael Lones. Relative robustness of quantized neural networks against adversarial attacks. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2020.

[6] Kartik Gupta and Thalaiyasingam Ajanthan. Improved gradient-based adversarial attacks for quantized networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 6810–6818, 2022.

[7] Dan Hendrycks and Thomas G Dietterich. Benchmarking neural network robustness to common corruptions and surface variations. arXiv preprint arXiv:1807.01697, 2018.

[8] Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, and Tudor Dumitras. Qu-anti-zation: Exploiting quantization artifacts for achieving adversarial outcomes. Advances in Neural Information Processing Systems, 34:9303–9316, 2021.

[9] Faiq Khalid, Hassan Ali, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, and Muhammad Shafique. QuSecNets: Quantization-based defense mechanism for securing deep neural network against adversarial attacks. In 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS), pages 182–187. IEEE, 2019.

[10] Ji Lin, Chuang Gan, and Song Han. Defensive quantization: When efficiency meets robustness. arXiv preprint arXiv:1904.08444, 2019.

[11] Markus Nagel, Rana Ali Amjad, Mart Van Baalen, Christos Louizos, and Tijmen Blankevoort. Up or down? Adaptive rounding for post-training quantization. In International Conference on Machine Learning, pages 7197–7206. PMLR, 2020.

[12] Adnan Siraj Rakin, Jinfeng Yi, Boqing Gong, and Deliang Fan. Defend deep neural networks against adversarial examples via fixed and dynamic quantized activation functions. arXiv preprint arXiv:1807.06714, 2018.

[13] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

[14] Peiqi Wang, Yu Ji, Xinfeng Xie, Yongqiang Lyu, Dongsheng Wang, and Yuan Xie. QGAN: Quantize generative adversarial networks to extreme low-bits. 2019.

[15] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.