Detailed Record

Author: 李佳霖
Author (English): Chia-Lin Li
Title: 基於自適應合成抽樣與深度學習的多分類網路入侵檢測
Title (English): Multi-class Network Intrusion Detection Based on Adaptive Synthetic Sampling and Deep Learning
Advisor: 江振瑞
Advisor (English): Jehn-Ruey Jiang
Degree: Master's
Institution: National Central University
Department: Department of Computer Science and Information Engineering
Student ID: 108526019
Year of publication: 2021 (ROC year 110)
Graduation academic year: ROC year 109 (2020–2021)
Language: Chinese
Pages: 73
Keywords (Chinese): 自適應合成、深度學習、F1分數、入侵檢測系統、長短期記憶神經網路、精準度、召回率、變分自動編碼器
Keywords (English): adaptive synthetic sampling, deep learning, F1-score, intrusion detection system, long short-term memory neural network, precision, recall, variational autoencoder
With the continuous development and progress of network technology, the Internet is gradually changing our lives. Although its vigorous growth brings us great convenience, the network security threats we face are also increasing. It has therefore become increasingly important to develop intrusion detection systems (IDSs) that detect anomalies caused by network intrusions. Many studies have applied state-of-the-art techniques to build network intrusion detection systems, which can be divided into binary-class and multi-class systems: the former identify whether network traffic data is normal or anomalous, while the latter additionally distinguish the class to which an anomaly belongs.
This thesis proposes an intrusion detection method based on adaptive synthetic (ADASYN) sampling and deep learning to build a multi-class network intrusion detection system. The proposed method first applies ADASYN sampling to oversample the minority classes in the network traffic data. Next, a variational autoencoder (VAE) extracts the important features of the input data and compresses them into a low-dimensional vector, addressing the excessively high feature dimensionality of the dataset. Finally, a long short-term memory (LSTM) deep neural network classifies the input data. The public NSL-KDD dataset is used to evaluate the performance of the proposed method and to compare it with related methods. The comparison results show that, for both binary-class and multi-class intrusion detection, the proposed method achieves the best accuracy, precision, recall, and F1 score.
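The ADASYN oversampling step described above can be illustrated with a minimal sketch. This is not the thesis's implementation: the function name `adasyn_oversample` and the parameters `k` and `beta` are illustrative, and a brute-force NumPy nearest-neighbour search stands in for a real k-NN library.

```python
import numpy as np

def adasyn_oversample(X_min, X_maj, k=5, beta=1.0, rng=None):
    """Minimal ADASYN sketch: generate synthetic minority samples.

    For each minority sample, count majority neighbours among its k nearest
    neighbours in the combined data; samples in harder regions (more majority
    neighbours) receive proportionally more synthetic copies, each created by
    linear interpolation toward a random minority neighbour.
    """
    rng = np.random.default_rng(rng)
    X_all = np.vstack([X_min, X_maj])
    n_min, n_maj = len(X_min), len(X_maj)
    G = int((n_maj - n_min) * beta)                # total synthetics to generate

    # k nearest neighbours of each minority sample in the full data
    d_all = np.linalg.norm(X_min[:, None] - X_all[None], axis=2)
    nn_all = np.argsort(d_all, axis=1)[:, 1:k + 1]  # skip self (distance 0)
    r = (nn_all >= n_min).mean(axis=1)              # fraction of majority neighbours
    r = r / r.sum() if r.sum() > 0 else np.full(n_min, 1.0 / n_min)
    g = np.round(r * G).astype(int)                 # synthetics per minority sample

    # k nearest minority-only neighbours, used for interpolation
    d_min = np.linalg.norm(X_min[:, None] - X_min[None], axis=2)
    nn_min = np.argsort(d_min, axis=1)[:, 1:k + 1]

    synthetic = []
    for i, gi in enumerate(g):
        for _ in range(gi):
            j = nn_min[i][rng.integers(k)]          # random minority neighbour
            lam = rng.random()                      # interpolation coefficient
            synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Because each synthetic point is a convex combination of two minority samples, the augmented data stays inside the minority region, which is the property ADASYN relies on.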
With the development and progress of Internet technology, the Internet is gradually changing our lives. Although the vigorous development of the Internet has brought us great convenience, network security threats are also increasing. Therefore, it is desirable to develop intrusion detection systems (IDSs) for detecting anomalies caused by network intrusions. Many studies use state-of-the-art technology to develop network IDSs. These systems can be divided into binary-class and multi-class systems: the former identify whether network traffic data is normal or anomalous; the latter not only identify whether network traffic data is normal or anomalous, but also distinguish the class of the anomalous data.
This thesis proposes a network intrusion detection method based on adaptive synthetic (ADASYN) sampling and deep learning for developing multi-class network IDSs. The proposed method first uses the ADASYN sampling mechanism to oversample minority-class samples in network traffic data. Next, it uses a variational autoencoder (VAE) to extract important features from the data and output a set of low-dimensional vectors. Finally, a long short-term memory (LSTM) deep neural network is applied to classify the data. The well-known NSL-KDD dataset is used to evaluate the performance of the proposed method, and the results are compared with those of related methods. The comparisons show that the proposed method achieves the best accuracy, precision, recall, and F1 score in both binary-class and multi-class intrusion detection.
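The evaluation metrics named above (accuracy, precision, recall, F1 score) extend to the multi-class setting by computing them per class from a confusion matrix and macro-averaging. A minimal sketch, not taken from the thesis (the helper `multiclass_metrics` is illustrative):

```python
import numpy as np

def multiclass_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged precision, recall and F1 score.

    Builds an n_classes x n_classes confusion matrix where rows are
    true classes and columns are predicted classes.
    """
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    predicted = np.maximum(cm.sum(axis=0), 1)  # column sums; guard against /0
    actual = np.maximum(cm.sum(axis=1), 1)     # row sums; guard against /0
    precision = tp / predicted
    recall = tp / actual
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()
```

For example, with `y_true = [0, 0, 1, 1]` and `y_pred = [0, 1, 1, 1]`, accuracy is 0.75, macro precision is (1.0 + 2/3)/2, and macro recall is (0.5 + 1.0)/2 = 0.75.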
Table of Contents:
Chinese Abstract
Abstract
Acknowledgements
List of Figures
List of Tables
1. Introduction
1.1 Research Background and Motivation
1.2 Research Objectives and Methods
1.3 Thesis Organization
2. Background
2.1 Anomaly Detection
2.2 Intrusion Detection Systems
2.2.1 Intrusion Detection Systems
2.2.2 Host-based Intrusion Detection Systems
2.2.3 Network-based Intrusion Detection Systems
2.3 Adaptive Synthetic Sampling
2.3.1 Undersampling and Oversampling
2.3.2 Random Oversampling
2.3.3 Synthetic Minority Oversampling Technique
2.3.4 Adaptive Synthetic Sampling
2.4 Deep Learning
2.4.1 Artificial Neural Networks
2.4.2 The Backpropagation Algorithm
2.4.3 Activation Functions
2.4.4 Introduction to Deep Learning
2.4.4.1 Supervised Learning
2.4.4.2 Unsupervised Learning
2.4.5 Deep Neural Networks
2.4.6 Recurrent Neural Networks
2.4.7 Long Short-Term Memory Networks
2.4.8 Variational Autoencoders
2.4.8.1 Feature Extraction
2.4.8.2 Autoencoders
2.4.8.3 Variational Autoencoders
2.5 Related Work
3. Methodology
3.1 Dataset
3.2 Proposed Method
3.2.1 Workflow and Architecture
3.2.2 Data Preprocessing
3.2.3 Data Augmentation
3.2.4 Feature Extraction
3.2.5 Model Construction and Training
3.2.6 Model Validation
4. Experimental Results and Analysis
4.1 Experimental Environment
4.2 Evaluation Metrics
4.3 Experimental Results
4.3.1 Comparison of Data Augmentation Methods
4.3.2 Comparison of Feature Extraction Methods
4.3.3 Comparison with Related Studies
5. Conclusion and Future Work
References

Full-text files:
1. Electronic full text (3,576.136 KB)
(Electronic full text: open access)
Print copy authorization: open from 2023/9/1
 
 
 
 