Detailed Record

Author: Lin, Xuan-Yi (林軒毅)
Title: 3D-Adv: Black-Box Physical Adversarial Attacks against Deep Learning Models through 3D Sensors
Title (Chinese): 針對深度學習模型與3D感測器的黑盒物理對抗例攻擊
Advisor: Ho, Tsung-Yi (何宗易)
Committee members: Li, Shu-Min (李淑敏); Chen, Hung-Ming (陳宏明)
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 108062544
Publication year: 110 ROC (2021)
Graduating academic year: 109
Language: English
Number of pages: 39
Keywords: Adversarial Attack; Deep Learning; Neural Network
The combination of deep learning techniques and commercial 3D sensors reveals a bright future: together they provide a low-cost and convenient way to collect and analyze depth information from the environment for applications ranging from industrial modeling to mobile face recognition. Despite the abundant research devoted to developing more accurate, flexible, and efficient machine learning schemes and 3D sensors, the security concerns surrounding these techniques remain largely unexplored.
In this thesis, we propose a novel adversarial attack against this combination, showing that deep learning models paired with popular 3D sensors may misclassify real objects in the physical environment. In contrast to existing attack algorithms against deep learning models for 3D data analysis, which consider only digital point-cloud data and a single deep learning model, our attack targets popular commercial 3D sensors combined with various deep learning schemes in the black-box setting. The experimental results demonstrate that our 3D-printed adversarial objects remain effective after being scanned by the 3D sensor.
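Because the attack is black-box, the adversary can only query the target pipeline for predictions and must search for adversarial shapes without gradients; the table of contents below indicates the search is organized as a genetic algorithm with fitness, mutation, and crossover operators. The following is a minimal Python sketch of such a gradient-free loop on a point cloud. It is an illustration under stated assumptions, not the thesis's actual implementation: query_model is a stand-in stub for the real scan-then-classify oracle, and the fitness weighting alpha and all population parameters are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)

def query_model(points):
    # Black-box oracle: returns class probabilities for one point cloud.
    # Stubbed with a random classifier so the sketch runs standalone;
    # in practice this would query the 3D-sensor + model pipeline.
    logits = rng.normal(size=40)              # e.g., 40 ModelNet40 classes
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fitness(perturbed, original, target, alpha=0.1):
    # Reward target-class confidence, penalize geometric distortion.
    prob = query_model(perturbed)[target]
    dist = np.linalg.norm(perturbed - original)
    return prob - alpha * dist

def mutate(points, rate=0.05, sigma=0.01):
    # Jitter a random subset of points with small Gaussian noise.
    out = points.copy()
    mask = rng.random(len(points)) < rate
    out[mask] += rng.normal(scale=sigma, size=(int(mask.sum()), 3))
    return out

def crossover(a, b):
    # Per-point mixing of two parent clouds.
    mask = rng.random(len(a)) < 0.5
    return np.where(mask[:, None], a, b)

def attack(original, target, pop_size=32, generations=100):
    pop = [mutate(original) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(p, original, target) for p in pop]
        order = np.argsort(scores)[::-1]
        elite = [pop[i] for i in order[: pop_size // 4]]  # keep the fittest
        while len(elite) < pop_size:                      # refill by breeding
            i, j = rng.integers(0, pop_size // 4, size=2)
            elite.append(mutate(crossover(elite[i], elite[j])))
        pop = elite
    return pop[0]  # best candidate from the last selection step

adversarial = attack(rng.normal(size=(1024, 3)), target=7)

A physical attack would additionally need to re-scan the printed object or constrain mutations to printable surface offsets; this purely digital loop shows only the skeleton of the black-box search.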
Table of Contents
摘要 (Chinese Abstract)
Acknowledgement
Abstract
1 Introduction
2 Related Work
  2.1 Commercial 3D Vision System
  2.2 Deep Learning on 3D Data
  2.3 Adversarial Attacks in Deep Learning
3 Proposed Methodology
  3.1 Threat Model
  3.2 Genetic Adversarial Attack in 3D Sensing
    3.2.1 Genetic Algorithm
    3.2.2 Fitness Function
    3.2.3 Mutation
    3.2.4 Crossover
    3.2.5 Attack Types
4 Experimental Results
  4.1 Experimental Setup
  4.2 Most-Likely Altering Attack on ModelNet40 Mesh
  4.3 Reshaping Attack from Standard Sphere
  4.4 Evaluation of 3D Printed Adversarial Objects
  4.5 Comparison with Existing Works
5 Conclusion
6 Future Work
  6.1 Introduction
  6.2 Proposed Methodology
    6.2.1 Threat Model
    6.2.2 Adversarial Object Generation Flow
    6.2.3 Types of Attack
  6.3 Experimental Results
    6.3.1 Data Collection
    6.3.2 Face Transforming Attack Evaluation
    6.3.3 Object Altering Attack Evaluation
    6.3.4 Evaluation of 3D Physical Adversarial Objects
  6.4 Conclusions
References