|
[1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016. [2] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learn-ing models resistant to adversarial attacks,” International Conference on Learning Representations, 2018. [3] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan, “Theoretically principled trade-off between robustness and accuracy,” in International Conference on Machine Learning, 2019. [4] S. Zagoruyko and N. Komodakis, “Wide residual networks,” in Proceedings of the British Machine Vision Conference (BMVC) (E. R. H. Richard C. Wilson and W. A. P. Smith, eds.), pp. 87.1–87.12, BMVA Press, September 2016. [5] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” International Conference on Learning Representations, 2014. [6] C. Laidlaw, S. Singla, and S. Feizi, “Perceptual adversarial robustness: Defense against unseen threat models,” in ICLR, 2021. [7] J. Zhang, X. Xu, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli, “Attacks which do not kill training make adversarial learning stronger,” in International Conference on Machine Learning, 2020. [8] D. Wu, S.-T. Xia, and Y. Wang, “Adversarial weight perturbation helps robust generalization,” in Advances in Neural Information Processing Systems, 2020. [9] E. Wong, L. Rice, and J. Z. Kolter, “Fast is better than free: Revisiting adversarial training,” in International Conference on Learning Representations, 2020. [10] F. Croce, M. Andriushchenko, V. Sehwag, E. Debenedetti, N. Flammarion, M. Chi-ang, P. Mittal, and M. Hein, “Robustbench: a standardized adversarial robustness benchmark,” in Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2021. [11] B. Biggio and F. Roli, “Wild patterns: Ten years after the rise of adversarial machine learning,” Pattern Recognit., vol. 84, pp. 317–331, 2018. [12] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” International Conference on Learning Representations, 2015. [13] N. Carlini and D. A. Wagner, “Towards evaluating the robustness of neural net-works,” in 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017, pp. 39–57, IEEE Computer Society, 2017. [14] P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.-J. Hsieh, “EAD: elastic-net attacks to deep neural networks via adversarial examples,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI’18/IAAI’18/EAAI’18, AAAI Press, 2018. [15] F. Croce and M. Hein, “Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks,” in International conference on machine learning, pp. 2206–2216, PMLR, 2020. [16] H. Hosseini and R. Poovendran, “Semantic adversarial examples,” in IEEE Con-ference on Computer Vision and Pattern Recognition Workshops, pp. 1614–1619, 2018. [17] A. Joshi, A. Mukherjee, S. Sarkar, and C. Hegde, “Semantic adversarial attacks: Para-metric transformations that fool deep classifiers,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4773–4783, 2019. [18] A. S. Shamsabadi, R. Sanchez-Matilla, and A. Cavallaro, “Colorfool: Semantic adversarial colorization,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1151–1160, 2020. [19] Y. Wang, S. Wu, W. Jiang, S. Hao, Y.-a. Tan, and Q. Zhang, “Demiguise attack: Crafting invisible semantic adversarial perturbations with perceptual similarity,” arXiv preprint arXiv:2107.01396, 2021. [20] S. Wang, S. Chen, T. Chen, S. Nepal, C. Rudolph, and M. Grobler, “Gener-ating semantic adversarial examples via feature manipulation,” arXiv preprint arXiv:2001.02297, 2020. [21] A. Bhattad, M. J. Chong, K. Liang, B. Li, and D. A. Forsyth, “Unrestricted ad-versarial examples via semantic manipulation,” in 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020, OpenReview.net, 2020. [22] D. Kang, Y. Sun, D. Hendrycks, T. Brown, and J. Steinhardt, “Testing robustness against unforeseen adversaries,” arXiv preprint arXiv:1908.08016, 2019. [23] H. Qiu, C. Xiao, L. Yang, X. Yan, H. Lee, and B. Li, “Semanticadv: Generating adver-sarial examples via attribute-conditioned image editing,” in European Conference on Computer Vision, pp. 19–37, Springer, 2020. [24] C. Xiao, J.-Y. Zhu, B. Li, W. He, M. Liu, and D. Song, “Spatially transformed adversarial examples,” arXiv preprint arXiv:1801.02612, 2018. [25] L. Engstrom, B. Tran, D. Tsipras, L. Schmidt, and A. Madry, “Exploring the land-scape of spatial robustness,” in International Conference on Machine Learning, pp. 1802–1811, PMLR, 2019. [26] E. Wong, F. Schmidt, and Z. Kolter, “Wasserstein adversarial examples via projected sinkhorn iterations,” in International Conference on Machine Learning, pp. 6808–6817, PMLR, 2019. [27] J. Mohapatra, T.-W. Weng, P.-Y. Chen, S. Liu, and L. Daniel, “Towards verifying robustness of neural networks against a family of semantic perturbations,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. [28] I. Dunn, L. Hanu, H. Pouget, D. Kroening, and T. Melham, “Evaluating robustness to context-sensitive feature perturbations of different granularities,” arXiv preprint arXiv:2001.11055, 2020. [29] D. Zhou, T. Liu, B. Han, N. Wang, C. Peng, and X. Gao, “Towards defending against adversarial examples via attack-invariant features,” ArXiv, vol. abs/2106.05036, 2021. [30] C. Laidlaw and S. Feizi, “Functional adversarial attacks,” arXiv preprint arXiv:1906.00001, 2019. [31] M. Jordan, N. Manoj, S. Goel, and A. G. Dimakis, “Quantifying perceptual distortion of adversarial examples,” arXiv preprint arXiv:1902.08265, 2019. [32] X. Mao, Y. Chen, S. Wang, H. Su, Y. He, and H. Xue, “Composite adversarial attacks,” Association for the Advancement of Artificial Intelligence (AAAI), 2021. [33] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” International Conference on Learning Representations, 2017. [34] D. Stutz, M. Hein, and B. Schiele, “Confidence-calibrated adversarial training: Generalizing to unseen attacks,” in International Conference on Machine Learning, pp. 9155–9166, PMLR, 2020. [35] Y. Sharma and P.-Y. Chen, “Attacking the Madry defense model with L1-based adversarial examples,” ICLR Workshop, 2018. [36] F. Tramèr and D. Boneh, “Adversarial training and robustness for multiple pertur-bations,” arXiv preprint arXiv:1904.13000, 2019. [37] J. Wang, T. Zhang, S. Liu, P.-Y. Chen, J. Xu, M. Fardad, and B. Li, “Towards a unified min-max framework for adversarial exploration and robustness,” arXiv preprint arXiv:1906.03563, 2019. [38] P. Maini, E. Wong, and Z. Kolter, “Adversarial robustness against the union of multiple perturbation models,” in Proceedings of the 37th International Conference on Machine Learning (H. D. III and A. Singh, eds.), vol. 119 of Proceedings of Machine Learning Research, pp. 6640–6650, PMLR, 13–18 Jul 2020. [39] G. Mena, D. Belanger, S. Linderman, and J. Snoek, “Learning latent permuta-tions with gumbel-sinkhorn networks,” in International Conference on Learning Representations, 2018. [40] R. Sinkhorn, “A relationship between arbitrary positive matrices and stochastic matrices,” Canadian Journal of Mathematics, vol. 18, p. 303–306, 1966. [41] H. Wang and A. Banerjee, “Bregman alternating direction method of multipliers,” in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NeurIPS’14, (Cambridge, MA, USA), p. 2816–2824, MIT Press, 2014. [42] R. Sinkhorn and P. Knopp, “Concerning nonnegative matrices and doubly stochas-tic matrices,” Pacific Journal of Mathematics, vol. 21, no. 2, pp. 343–348, 1967. [43] H. W. Kuhn, “The hungarian method for the assignment problem,” Naval research logistics quarterly, vol. 2, no. 1-2, pp. 83–97, 1955. [44] J. R. Munkres, “Algorithms for the assignment and transportation problems,” Jour-nal of The Society for Industrial and Applied Mathematics, vol. 10, pp. 196–210, 1957. [45] M. Cuturi, “Sinkhorn distances: Lightspeed computation of optimal transport,” Advances in neural information processing systems, vol. 26, pp. 2292–2300, 2013. [46] J. Altschuler, J. Weed, and P. Rigollet, “Near-linear time approximation algorithms for optimal transport via sinkhorn iteration,” in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 1964–1974, 2017. [47] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” ICLR Workshop, 2017. [48] E. Riba, D. Mishkin, D. Ponsa, E. Rublee, and G. Bradski, “Kornia: an open source differentiable computer vision library for pytorch,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3674–3683, 2020. [49] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Tech. Rep. 0, University of Toronto, Toronto, Ontario, 2009. [50] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015. [51] L. Engstrom, A. Ilyas, H. Salman, S. Santurkar, and D. Tsipras, “Robustness (python library),” 2019. [52] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems 32 (H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds.), pp. 8024–8035, Curran Associates, Inc., 2019. |