
Detailed Record

Author (Chinese): 張庭宇
Author (English): Chang, Ting-Yu
Title (Chinese): 針對機台加工訊號之卷積神經網路設計並應用於工具狀態監控
Title (English): Convolutional Neural Network Design for Manufacturing Signal and its Application in Tool Condition Monitoring
Advisor (Chinese): 張禎元
Advisor (English): Chang, Jen-Yuan
Committee Members (Chinese): 宋震國, 林峻永
Committee Members (English): Sung, Cheng-Kuo; Lin, Chun-Yeon
Degree: Master's
Institution: National Tsing Hua University (國立清華大學)
Department: Power Mechanical Engineering (動力機械工程學系)
Student ID: 108033592
Year of Publication (ROC calendar): 110 (2021)
Graduation Academic Year: 109
Language: Chinese
Pages: 76
Keywords (Chinese): 工具狀態監控, 深度神經網路, 卷積神經網路
Keywords (English): Tool Condition Monitoring, Deep Neural Network, Convolutional Neural Network
Abstract (Chinese, translated):
This thesis investigates the use of artificial-intelligence techniques to predict cutting-tool wear from the signals generated while a machine tool is cutting, with the aim of reducing the labor and time spent inspecting workpieces one by one after manufacturing. Computer-based, automatic monitoring of tool wear enables quality management of machined parts and effectively improves overall production efficiency. In this study, a deep one-dimensional convolutional neural network is proposed for prediction from machine-tool signals. The designed model uses dense residual connections to raise model capacity and thus prediction accuracy, pooling layers to greatly reduce the number of parameters, and group convolution to cut the parameter count further. On this basis, a self-attention layer, modeled on human reasoning, is added to increase accuracy further while keeping inference speed reasonable. Finally, the designed tool-wear prediction model is validated on open-source machining datasets, and ablation experiments are conducted to examine the contribution, strengths, and weaknesses of each introduced technique.
The proposed prediction model is designed not only for accuracy but also for the number of parameters that must be stored and the computation consumed at inference. Compared with previous literature, it improves prediction accuracy by ten percent, requires no signal preprocessing, and keeps inference speed within the application's requirements.
Abstract (English):
This research applies artificial-intelligence techniques to predict machining-tool condition from signals generated during manufacturing. Its purpose is to reduce the cost of inspecting products one by one. Automatic, computer-based monitoring of tool condition enables efficient product quality management and improves overall production efficiency. In this study, a deep one-dimensional convolutional neural network is proposed to predict tool wear. In the designed model, dense residual skip-connections increase model capacity and thereby the accuracy of the predicted values. Pooling layers greatly reduce the number of parameters used in the model, and group convolution reduces the parameter count further. On top of the designed deep 1D-CNN, a self-attention layer, modeled on human reasoning, is added to further increase accuracy while maintaining a reasonable inference speed. Finally, the accuracy of the model is verified on open-source machining datasets. The proposed prediction model not only optimizes accuracy but also reduces the parameters that must be stored and the computation consumed at the inference stage. In addition, compared with the previous literature, the proposed model improves prediction accuracy by 10 percent without requiring any signal-preprocessing algorithm.
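As a conceptual illustration only (the thesis's actual implementation is not part of this record), two of the building blocks the abstract names for the 1D-CNN, grouped one-dimensional convolution and max pooling, can be sketched in plain Python. All function names and the toy signal below are illustrative assumptions, not code from the thesis.

```python
# Conceptual sketch, not the thesis implementation: grouped 1-D convolution
# (fewer weights) and non-overlapping max pooling (shorter feature maps).

def conv1d(channel, kernel):
    """Valid cross-correlation of one signal channel with one kernel."""
    k = len(kernel)
    return [sum(channel[i + j] * kernel[j] for j in range(k))
            for i in range(len(channel) - k + 1)]

def conv1d_grouped(signal, kernels, groups):
    """signal: in_ch x T; kernels: out_ch x (in_ch // groups) x k.
    Each output channel convolves only the input channels of its own group,
    so the weight count drops by a factor of `groups` versus full convolution."""
    in_ch, out_ch = len(signal), len(kernels)
    per_in, per_out = in_ch // groups, out_ch // groups
    out = []
    for o, kern in enumerate(kernels):
        g = o // per_out                              # group of this output
        chans = signal[g * per_in:(g + 1) * per_in]   # only this group's inputs
        maps = [conv1d(c, k) for c, k in zip(chans, kern)]
        out.append([sum(col) for col in zip(*maps)])  # sum within the group
    return out

def max_pool1d(channel, size):
    """Non-overlapping max pooling: keeps peaks, shortens the signal."""
    return [max(channel[i:i + size])
            for i in range(0, len(channel) - size + 1, size)]

# Two-channel toy signal with two groups: each output sees one input channel.
sig = [[1, 2, 3, 4], [5, 6, 7, 8]]
feat = conv1d_grouped(sig, [[[1, 1]], [[1, -1]]], groups=2)  # [[3,5,7], [-1,-1,-1]]
pooled = [max_pool1d(c, 2) for c in feat]
```

With `groups = g`, each output channel stores kernels for only `in_ch / g` input channels, which is the parameter reduction the abstract attributes to group convolution; pooling then shrinks the temporal length, reducing the computation of every later layer.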
Table of Contents
Abstract (Chinese)...............................I
Abstract (English)..............................II
Acknowledgments................................III
Table of Contents...............................IV
List of Figures................................VII
List of Tables...................................X
Chapter 1  Introduction..........................1
  1.1  Preface...................................1
  1.2  Research Motivation.......................2
  1.3  Literature Review.........................3
    1.3.1  Signal Preprocessing..................4
    1.3.2  Prediction Model Construction.........7
  1.4  Research Objectives and Methods..........12
  1.5  Expected Results.........................13
Chapter 2  Theoretical Background...............14
  2.1  Preface..................................14
  2.2  Neural Network Model Theory..............14
    2.2.1  Fully Connected Layer................17
    2.2.2  Convolutional Layer..................18
    2.2.3  Group Convolution....................19
    2.2.4  Pooling Layer........................21
    2.2.5  Neuron Initialization................22
    2.2.6  Residual Connections.................24
  2.3  Neural Network Optimization Theory.......26
    2.3.1  Loss Functions.......................26
    2.3.2  Backpropagation Algorithm............29
    2.3.3  Optimizers...........................31
  2.4  Self-Attention Mechanism.................32
  2.5  Chapter Summary..........................35
Chapter 3  Construction and Validation of the Wear Prediction Model....36
  3.1  Preface..................................36
  3.2  Evaluation Metrics for Convolutional Neural Networks....36
  3.3  Open-Source Machining Datasets...........38
    3.3.1  NASA Dataset.........................38
    3.3.2  NUAA_Ideahouse Dataset...............41
  3.4  Convolutional Neural Network Construction Experiments....43
    3.4.1  Convolutional Layer Depth Design.....44
    3.4.2  Kernel Size Design...................50
    3.4.3  Pooling Layer Design.................52
    3.4.4  Residual Connection Design...........53
    3.4.5  Group Convolution Design.............55
  3.5  Self-Attention Mechanism Experiments.....57
  3.6  Ablation Study and Comparison with Other Methods....60
    3.6.1  Ablation Study.......................60
    3.6.2  Comparison with Other Methods........62
  3.7  Chapter Summary..........................65
Chapter 4  Conclusion...........................66
  4.1  Conclusions..............................66
  4.2  Contributions............................70
  4.3  Future Work..............................71
References......................................72







(The full text of this thesis is not authorized for public release.)