[1] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
[2] X. Rong, "word2vec parameter learning explained," arXiv preprint arXiv:1411.2738, 2014.
[3] X. Liang, "Semantic object parsing with graph LSTM," European Conference on Computer Vision, 2016.
[4] X. Liang, L. Lin, X. Shen, J. Feng, S. Yan, and E. P. Xing, "Interpretable structure-evolving LSTM," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1010–1019.
[5] L. Landrieu and M. Simonovsky, "Large-scale point cloud semantic segmentation with superpoint graphs," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4558–4567.
[6] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon, "Dynamic graph CNN for learning on point clouds," ACM Transactions on Graphics (TOG), vol. 38, no. 5, pp. 1–12, 2019.
[7] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams, "Convolutional networks on graphs for learning molecular fingerprints," in Advances in Neural Information Processing Systems, 2015, pp. 2224–2232.
[8] T. Hamaguchi, H. Oiwa, M. Shimbo, and Y. Matsumoto, "Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach," arXiv preprint arXiv:1706.05674, 2017.
[9] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[10] S. Müller and A. Welsh, "Outlier robust model selection in linear regression," Journal of the American Statistical Association, vol. 100, no. 472, pp. 1297–1310, 2005.
[11] E. M. Jordaan and G. F. Smits, "Robust outlier detection using SVM regression," in IEEE International Joint Conference on Neural Networks, vol. 3, 2004, pp. 2017–2022.
[12] K. Mitra, A. Veeraraghavan, and R. Chellappa, "Robust RVM regression using sparse outlier model," in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE, 2010, pp. 1887–1894.
[13] D. Zügner, A. Akbarnejad, and S. Günnemann, "Adversarial attacks on neural networks for graph data," in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 2847–2856.
[14] J. Chen, L. Chen, Y. Chen, M. Zhao, S. Yu, Q. Xuan, and X. Yang, "GA-based Q-attack on community detection," IEEE Transactions on Computational Social Systems, vol. 6, no. 3, pp. 491–503, 2019.
[15] H. Dai, H. Li, T. Tian, X. Huang, L. Wang, J. Zhu, and L. Song, "Adversarial attack on graph structured data," arXiv preprint arXiv:1806.02371, 2018.
[16] H. Wu, C. Wang, Y. Tyshetskiy, A. Docherty, K. Lu, and L. Zhu, "Adversarial examples on graph data: Deep insights into attack and defense," arXiv preprint arXiv:1903.01610, 2019.
[17] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[18] L. Bottou, "Large-scale machine learning with stochastic gradient descent," in Proceedings of COMPSTAT'2010. Springer, 2010, pp. 177–186.
[19] N. Carlini and D. Wagner, "Adversarial examples are not easily detected: Bypassing ten detection methods," in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, pp. 3–14.
[20] E. Wong, F. R. Schmidt, and J. Z. Kolter, "Wasserstein adversarial examples via projected Sinkhorn iterations," arXiv preprint arXiv:1902.07906, 2019.
[21] K. Xu, S. Liu, P. Zhao, P.-Y. Chen, H. Zhang, Q. Fan, D. Erdogmus, Y. Wang, and X. Lin, "Structured adversarial attack: Towards general implementation and better interpretability," arXiv preprint arXiv:1808.01664, 2018.
[22] R. Dang-Nhu, G. Singh, P. Bielik, and M. Vechev, "Adversarial attacks on probabilistic autoregressive forecasting models," arXiv preprint arXiv:2003.03778, 2020.
[23] S. Liu, S. Lu, X. Chen, Y. Feng, K. Xu, A. Al-Dujaili, M. Hong, and U.-M. O'Reilly, "Min-max optimization without gradients: Convergence and applications to adversarial ML," arXiv preprint arXiv:1909.13806, 2019.
[24] B. Ru, A. Cobb, A. Blaas, and Y. Gal, "BayesOpt adversarial attack," in International Conference on Learning Representations, 2019.
[25] A. Ilyas, L. Engstrom, and A. Madry, "Prior convictions: Black-box adversarial attacks with bandits and priors," arXiv preprint arXiv:1807.07978, 2018.
[26] J. Chen, Y. Wu, X. Xu, Y. Chen, H. Zheng, and Q. Xuan, "Fast gradient attack on network embedding," arXiv preprint arXiv:1809.02797, 2018.
[27] D. Zügner and S. Günnemann, "Adversarial attacks on graph neural networks via meta learning," arXiv preprint arXiv:1902.08412, 2019.
[28] J. Li, H. Zhang, Z. Han, Y. Rong, H. Cheng, and J. Huang, "Adversarial attack on community detection by hiding individuals," in Proceedings of The Web Conference 2020, 2020, pp. 917–927.
[29] A. Bojchevski and S. Günnemann, "Adversarial attacks on node embeddings via graph poisoning," arXiv preprint arXiv:1809.01093, 2018.
[30] A. J. Bose, A. Cianflone, and W. L. Hamilton, "Generalizable adversarial attacks with latent variable perturbation modelling," arXiv preprint arXiv:1905.10864, 2019.
[31] J. Chen, Y. Chen, L. Chen, M. Zhao, and Q. Xuan, "Multiscale evolutionary perturbation attack on community detection," arXiv preprint arXiv:1910.09741, 2019.
[32] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against machine learning," in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, 2017, pp. 506–519.
[33] X. Liu, M. Cheng, H. Zhang, and C.-J. Hsieh, "Towards robust neural networks via random self-ensemble," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 369–385.
[34] X. Liu, Y. Li, C. Wu, and C.-J. Hsieh, "Adv-BNN: Improved adversarial defense through robust Bayesian neural network," arXiv preprint arXiv:1810.01279, 2018.
[35] M. Naseer, S. Khan, M. Hayat, F. S. Khan, and F. Porikli, "A self-supervised approach for adversarial robustness," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 262–271.
[36] T. Borkar, F. Heide, and L. Karam, "Defending against universal attacks through selective feature regeneration," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 709–719.
[37] Y. Dong, Q.-A. Fu, X. Yang, T. Pang, H. Su, Z. Xiao, and J. Zhu, "Benchmarking adversarial robustness on image classification," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 321–331.
[38] A. N. Bhagoji, D. Cullina, and P. Mittal, "Dimensionality reduction as a defense against evasion attacks on machine learning classifiers," arXiv preprint arXiv:1704.02654, vol. 2, 2017.
[39] R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner, "Detecting adversarial samples from artifacts," arXiv preprint arXiv:1703.00410, 2017.
[40] J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff, "On detecting adversarial perturbations," arXiv preprint arXiv:1702.04267, 2017.
[41] X. Yin, S. Kolouri, and G. K. Rohde, "GAT: Generative adversarial training for adversarial example detection and robust classification," in International Conference on Learning Representations, 2019.
[42] X. Wang, X. Liu, and C.-J. Hsieh, "GraphDefense: Towards robust graph convolutional networks," arXiv preprint arXiv:1911.04429, 2019.
[43] Q. Dai, X. Shen, L. Zhang, Q. Li, and D. Wang, "Adversarial training methods for network embedding," in The World Wide Web Conference, 2019, pp. 329–339.
[44] D. Zügner and S. Günnemann, "Certifiable robustness and robust training for graph convolutional networks," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 246–256.
[45] F. Feng, X. He, J. Tang, and T.-S. Chua, "Graph adversarial training: Dynamically regularizing based on graph structure," IEEE Transactions on Knowledge and Data Engineering, 2019.
[46] X. Tang, Y. Li, Y. Sun, H. Yao, P. Mitra, and S. Wang, "Transferring robustness for graph neural network against poisoning attacks," in Proceedings of the 13th International Conference on Web Search and Data Mining, 2020, pp. 600–608.
[47] D. Zhu, Z. Zhang, P. Cui, and W. Zhu, "Robust graph convolutional networks against adversarial attacks," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 1399–1407.
[48] X. Xu, Y. Yu, B. Li, L. Song, C. Liu, and C. Gunter, "Characterizing malicious edges targeting on graph neural networks," 2018.
[49] Y. Zhang, S. Khan, and M. Coates, "Comparing and detecting adversarial attacks for graph deep learning," in Proc. Representation Learning on Graphs and Manifolds Workshop, Int. Conf. Learning Representations, New Orleans, LA, USA, 2019.
[50] M. Yoon, B. Hooi, K. Shin, and C. Faloutsos, "Fast and accurate anomaly detection in dynamic graphs with a two-pronged approach," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 647–657.
[51] L. Zheng, Z. Li, J. Li, Z. Li, and J. Gao, "AddGraph: Anomaly detection in dynamic graph using attention-based temporal GCN," in IJCAI, 2019, pp. 4419–4425.
[52] Q. Li, Z. Han, and X.-M. Wu, "Deeper insights into graph convolutional networks for semi-supervised learning," in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[53] H. Anton and C. Rorres, Elementary Linear Algebra: Applications Version. John Wiley & Sons, 2013.
[54] A. K. McCallum, K. Nigam, J. Rennie, and K. Seymore, "Automating the construction of internet portals with machine learning," Information Retrieval, vol. 3, no. 2, pp. 127–163, 2000.
[55] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad, "Collective classification in network data," AI Magazine, vol. 29, no. 3, pp. 93–93, 2008.
[56] L. A. Adamic and N. Glance, "The political blogosphere and the 2004 US election: Divided they blog," in Proceedings of the 3rd International Workshop on Link Discovery, 2005, pp. 36–43.