[1] D. E. Dimla and P. M. Lister, "On-line metal cutting tool condition monitoring," International Journal of Machine Tools and Manufacture, vol. 40, no. 5, pp. 739-768, 2000.
[2] F. Aghazadeh, A. Tahan, and M. Thomas, "Tool condition monitoring using spectral subtraction and convolutional neural networks in milling process," The International Journal of Advanced Manufacturing Technology, vol. 98, no. 9-12, pp. 3217-3227, 2018.
[3] "[Online picture] FFT and WT resolution," https://zh.wikipedia.org/wiki/%E5%B0%8F%E6%B3%A2%E5%88%86%E6%9E%90.
[4] T. Kalvoda and Y.-R. Hwang, "A cutter tool monitoring in machining process using Hilbert–Huang transform," International Journal of Machine Tools and Manufacture, vol. 50, no. 5, pp. 495-501, 2010.
[5] M. Brezocnik, M. Kovacic, and M. Ficko, "Prediction of surface roughness with genetic programming," Journal of Materials Processing Technology, vol. 157-158, pp. 28-36, 2004.
[6] H. Dong, D. Wu, and H. Su, "Use of least square support vector machine in surface roughness prediction model," in Proceedings of SPIE.
[7] "[Online picture] SVM," https://cg2010studio.com/2012/05/20/%E6%94%AF%E6%8C%81%E5%90%91%E9%87%8F%E6%A9%9F%E5%99%A8-support-vector-machine/.
[8] A. M. Zain, H. Haron, and S. Sharif, "Prediction of surface roughness in the end milling machining using Artificial Neural Network," Expert Systems with Applications, vol. 37, no. 2, pp. 1755-1768, 2010.
[9] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[10] D. Stoller, S. Ewert, and S. Dixon, "Wave-U-Net: A multi-scale neural network for end-to-end audio source separation," arXiv preprint arXiv:1806.03185, 2018.
[11] R. Zhao, J. Wang, R. Yan, and K. Mao, "Machine health monitoring with LSTM networks," in Proceedings of the 10th International Conference on Sensing Technology (ICST), 2016: IEEE.
[12] "[Online picture] Neurons," https://blog.birkhoff.me/introducing-artificial-neural-network-1/.
[13] "[Online picture] Neuron math structure," https://insights.sei.cmu.edu/sei_blog/2018/02/deep-learning-going-deeper-toward-meaningful-patterns-in-complex-data.html.
[14] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, "What is the best multi-stage architecture for object recognition?," in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV), 2009: IEEE.
[15] "[Online picture] Pooling," https://medium.com/ai-in-plain-english/pooling-layer-beginner-to-intermediate-fa0dbdce80eb.
[16] "[Online picture] NN structure," https://www.i2tutorials.com/hidden-layers-in-neural-networks/.
[17] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
[18] "[Online picture] Overfitting," https://livebook.manning.com/book/machine-learning-for-mortals-mere-and-otherwise/chapter-9/v-4/.
[19] "[Online picture] Convolution," https://www.shuzhiduo.com/A/Gkz19Z1rzR/.
[20] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in European Conference on Computer Vision (ECCV), 2014, pp. 818-833: Springer.
[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017.
[22] A. G. Howard et al., "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[23] G. Huang, S. Liu, L. van der Maaten, and K. Q. Weinberger, "CondenseNet: An efficient DenseNet using learned group convolutions," arXiv preprint arXiv:1711.09224, 2018.
[24] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 249-256.
[25] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[26] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700-4708.
[27] "[Online picture] Loss function," https://heartbeat.fritz.ai/5-regression-loss-functions-all-machine-learners-should-know-4fb140e9d4b0.
[28] M. C. Mozer, "A focused back-propagation algorithm for temporal pattern recognition," Complex Systems, vol. 3, no. 4, pp. 349-381, 1989.
[29] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. The MIT Press, 2016.
[30] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[31] "[Online picture] Momentum hill climbing," https://www.programmersought.com/article/19954697033/.
[32] "[Online picture] Attention," https://www.cnblogs.com/jins-note/p/13056604.html.
[33] V. Mnih, N. Heess, and A. Graves, "Recurrent models of visual attention," in Advances in Neural Information Processing Systems, 2014, pp. 2204-2212.
[34] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[35] A. Vaswani et al., "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
[36] A. Agogino and K. Goebel, "Milling Dataset," NASA Ames Prognostics Data Repository, 2007.
[37] Y. Li, C. Liu, D. Li, J. Hua, and P. Wan, "Tool wear dataset of NUAA_Ideahouse," IEEE Dataport, March 20, 2021.
[38] W. Luo, Y. Li, R. Urtasun, and R. Zemel, "Understanding the effective receptive field in deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 29, pp. 4898-4906, 2016.