[1] N. Carlini and D. A. Wagner. Towards evaluating the robustness of neural networks. In IEEE S&P'17, 2017.
[2] H.-Y. Chen, P.-H. Wang, C.-H. Liu, S.-C. Chang, J.-Y. Pan, Y.-T. Chen, W. Wei, and D.-C. Juan. Complement objective training. In ICLR'19, 2019.
[3] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li. Boosting adversarial attacks with momentum. In CVPR'18, 2018.
[4] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In ICLR'15, 2015.
[5] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. In NIPS'14, 2014.
[6] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR'16, 2016.
[7] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS'12, 2012.
[9] A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial examples in the physical world. In ICLR'17 Workshop, 2017.
[10] A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial machine learning at scale. In ICLR'17, 2017.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[12] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV'14, 2014.
[13] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR'18, 2018.
[14] P. Nakkiran. Adversarial robustness may be at odds with simplicity. arXiv preprint arXiv:1901.00532, 2019.
[15] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy, 2016.
[16] N. Papernot and P. D. McDaniel. Extending defensive distillation. arXiv preprint arXiv:1705.05264, 2017.
[17] N. Papernot, P. D. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In IEEE Symposium on Security and Privacy, 2016.
[18] A. Raghunathan, J. Steinhardt, and P. Liang. Certified defenses against adversarial examples. In ICLR'18, 2018.
[19] P. J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 1987.
[20] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel. Ensemble adversarial training: Attacks and defenses. In ICLR'18, 2018.
[21] D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry. Robustness may be at odds with accuracy. In ICLR'19, 2019.
[22] V. N. Vapnik. An overview of statistical learning theory. IEEE Transactions on Neural Networks, 1999.
[23] T.-W. Weng, H. Zhang, P.-Y. Chen, J. Yi, D. Su, Y. Gao, C.-J. Hsieh, and L. Daniel. Evaluating the robustness of neural networks: An extreme value theory approach. In ICLR'18, 2018.