[1] Worst ransomware CryptoWall 3 has caused US$325 million in losses. iThome. https://www.ithome.com.tw/news/99669
[2] U.S. bank Capital One hacked; data of over 100 million North American customers leaked. iThome. https://www.ithome.com.tw/news/132117
[3] Cyberattack paralyzes hospital IT systems. TechNews. https://technews.tw/2020/09/21/germany-hospital-hacked/
[4] Intrusion detection system. Wikipedia. https://zh.wikipedia.org/wiki/%E5%85%A5%E4%BE%B5%E6%A3%80%E6%B5%8B%E7%B3%BB%E7%BB%9F
[5] Host-based intrusion detection system. http://www.netqna.com/2014/04/host-based-intrusion-detection-system.html
[6] Network-based intrusion detection system. http://www.netqna.com/2014/04/network-based-intrusion-detection-system.html
[7] Tavallaee, M., Bagheri, E., Lu, W., & Ghorbani, A. A. (2009). A detailed analysis of the KDD CUP 99 data set. In 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications (pp. 1-6). IEEE.
[8] John, G. H., & Langley, P. (2013). Estimating continuous distributions in Bayesian classifiers. arXiv preprint arXiv:1302.4964.
[9] Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32.
[10] Aldous, D. (1991). The continuum random tree. I. The Annals of Probability, 1-28.
[11] Chang, C. C., & Lin, C. J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3), 1-27.
[12] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. Cambridge, MA: MIT Press.
[13] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
[14] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
[15] Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
[16] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
[17] Autoencoder tutorial. https://blog.yeshuanova.com/2018/01/autoencoder-tutorial/
[18] He, H., Bai, Y., Garcia, E. A., & Li, S. (2008). ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence) (pp. 1322-1328). IEEE.
[19] Chandola, V., Banerjee, A., & Kumar, V. (2009). Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3), 1-58.
[20] Random oversampling. https://www.mdeditor.tw/pl/2VGp/zh-tw
[21] Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357.
[22] Artificial neural network. Wikipedia. https://zh.wikipedia.org/wiki/%E4%BA%BA%E5%B7%A5%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C
[23] Neuron. http://www.hkpe.net/hkdsepe/human_body/neuron.htm
[24] Tanh function (introduction to deep-learning activation functions). https://cvfiasd.pixnet.net/blog/post/275774124-%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92%E6%BF%80%E5%8B%B5%E5%87%BD%E6%95%B8%E4%BB%8B%E7%B4%B9
[25] Sigmoid function. https://en.wikipedia.org/wiki/Sigmoid_function
[26] ReLU function. https://zh.wikipedia.org/zh-tw/%E7%BA%BF%E6%80%A7%E6%95%B4%E6%B5%81%E5%87%BD%E6%95%B0
[27] He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1026-1034).
[28] Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Searching for activation functions. arXiv preprint arXiv:1710.05941.
[29] Understanding LSTM networks. https://colah.github.io/posts/2015-08-Understanding-LSTMs/
[30] Feature extraction. Wikipedia. https://zh.wikipedia.org/wiki/%E7%89%B9%E5%BE%B5%E6%8F%90%E5%8F%96
[31] John, G. H., & Langley, P. (2013). Estimating continuous distributions in Bayesian classifiers. arXiv preprint arXiv:1302.4964.
[32] Aldous, D. (1991). The continuum random tree. I. The Annals of Probability, 1-28.
[33] Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32.
[34] Kohavi, R. (1996). Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD (Vol. 96, pp. 202-207).
[35] Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297.
[36] Kanakarajan, N. K., & Muniasamy, K. (2016). Improving the accuracy of intrusion detection using GAR-Forest with feature selection. In Proceedings of the 4th International Conference on Frontiers in Intelligent Computing: Theory and Applications (FICTA) 2015 (pp. 539-547). Springer, New Delhi.
[37] Han, J., & Kamber, M. (2006). Data mining: Concepts and techniques. Morgan Kaufmann.
[38] Parimala, R., & Nallaswamy, R. (2011). A study of spam e-mail classification using feature selection package. Global Journal of Computer Science and Technology.
[39] Lower, N., & Zhan, F. (2020). A study of ensemble methods for cyber security. In 2020 10th Annual Computing and Communication Workshop and Conference (CCWC) (pp. 1001-1009). IEEE.
[40] Kittler, J., Hatef, M., Duin, R. P., & Matas, J. (1998). On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(3), 226-239.
[41] Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123-140.
[42] Freund, Y., Schapire, R., & Abe, N. (1999). A short introduction to boosting. Journal of Japanese Society for Artificial Intelligence, 14(5), 771-780.
[43] Yin, C., Zhu, Y., Fei, J., & He, X. (2017). A deep learning approach for intrusion detection using recurrent neural networks. IEEE Access, 5, 21954-21961.
[44] Al-Qatf, M., Lasheng, Y., Al-Habib, M., & Al-Sabahi, K. (2018). Deep learning approach combining sparse autoencoder with SVM for network intrusion detection. IEEE Access, 6, 52843-52856.
[45] Ng, A. (2011). Sparse autoencoder. CS294A Lecture Notes, 72(2011), 1-19.
[46] Li, Y., Xu, Y., Liu, Z., Hou, H., Zheng, Y., Xin, Y., ... & Cui, L. (2020). Robust detection for network intrusion of industrial IoT based on multi-CNN fusion. Measurement, 154, 107450.
[47] Yang, Y., Zheng, K., Wu, B., Yang, Y., & Wang, X. (2020). Network intrusion detection based on supervised adversarial variational auto-encoder with regularization. IEEE Access, 8, 42169-42184.
[48] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. (2017). Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028.
[49] One-hot encoding. https://codertw.com/%E4%BA%BA%E5%B7%A5%E6%99%BA%E6%85%A7/95606/
[50] Optimizers (SGD, Momentum, AdaGrad, Adam). https://medium.com/%E9%9B%9E%E9%9B%9E%E8%88%87%E5%85%94%E5%85%94%E7%9A%84%E5%B7%A5%E7%A8%8B%E4%B8%96%E7%95%8C/%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92ml-note-sgd-momentum-adagrad-adam-optimizer-f20568c968db
[51] Nadam. https://www.twblogs.net/a/5b7e5b6a2b7177683856d907
[52] EarlyStopping. https://medium.com/ai%E5%8F%8D%E6%96%97%E5%9F%8E/learning-model-earlystopping%E4%BB%8B%E7%B4%B9-%E8%BD%89%E9%8C%84-f364f4f220fb