
Detailed Record

Author (Chinese): 李宜鎂
Author (English): Li, Yi-Mei
Thesis Title (Chinese): 適用於時變多輸入多輸出通訊系統之基於深度學習晶格簡化技術
Thesis Title (English): Deep Learning Based Lattice Reduction For Time-Varying MIMO Communication Systems
Advisor (Chinese): 黃元豪
Advisor (English): Huang, Yuan-Hao
Committee Members (Chinese): 伍紹勳、蔡佩芸、陳喬恩
Committee Members (English): Wu, Sau-Hsuan; Tsai, Pei-Yun; Chen, Chiao-En
Degree: Master's
University: National Tsing Hua University
Department: Institute of Communications Engineering
Student ID: 108064504
Year of Publication (ROC calendar): 111 (2022)
Graduation Academic Year: 110
Language: English
Number of Pages: 61
Keywords (Chinese): 晶格簡化、多輸入多輸出、時變通道、深度學習
Keywords (English): lattice reduction, MIMO, time-varying channel, deep learning
Abstract (Chinese): As modern wireless networks demand ever larger volumes of data transmission, multiple-input multiple-output (MIMO) has become a key technique for increasing channel capacity and improving spectral efficiency under limited bandwidth. The MIMO detector plays an important role in the overall MIMO communication system. To improve its performance, lattice reduction (LR) is used to mitigate the channel distortion that causes detection errors. Because the LR-aided MIMO detector corrects the channel, the system can detect signals over a more orthogonal channel rather than the original, distortion-prone correlated channel; an orthogonal channel means that a transmitted signal suffers less interference from the other transmit antennas. In recent years, deep learning (DL) has been widely applied in many fields, including optimization and performance-improvement problems in communication systems, but it has not yet been applied to lattice reduction. We therefore aim to design a deep learning model suited to lattice reduction. Referring to the architecture of AlphaGo Zero, this thesis proposes a deep learning-based lattice reduction algorithm (DLLR). The algorithm uses AlphaGo Zero's reinforcement learning technique to realize lattice reduction and is carried out through two schemes: offline training and online application. The goal of offline training is to obtain the best neural network, while the online scheme corresponds to actual use in a real environment. In other words, the best neural network obtained from the offline scheme is applied in the online scheme to produce the lattice reduction result, completing a deep learning-based lattice reduction algorithm. The simulations in this study consider time-varying channel environments. The results show that, compared with the original LR-aided MIMO detector, the proposed DLLR-aided MIMO detector improves both the channel orthogonality and the bit error rate (BER) performance of the MIMO system.
Abstract (English): With the growing demand for high-volume data transmission in modern wireless networks, multiple-input multiple-output (MIMO) has become a popular technique for increasing channel capacity and improving spectral efficiency under limited bandwidth. The MIMO detector plays an important role in a MIMO communication system. Thus, to improve the performance of the MIMO detector, lattice reduction (LR) is used to reduce the distortion of the channel matrix that causes detection errors. The LR-aided MIMO detector lets the system detect the signal in an orthogonal channel rather than in the original correlated channel, where an orthogonal channel matrix means that the transmission is free from the interference of the other transmit antennas. In recent years, deep learning (DL) techniques have been widely used in different fields, including optimization and performance-improvement problems in communication systems. However, deep learning techniques have not yet been applied to lattice reduction.
Therefore, this work focuses on designing a DL model that fits the LR algorithm. Referring to the architecture of AlphaGo Zero, this thesis proposes a deep learning-based lattice reduction (DLLR) algorithm. The proposed algorithm uses the reinforcement learning technique of AlphaGo Zero to realize the function of LR. Moreover, the DLLR process is separated into an offline scheme and an online scheme: the purpose of the former is to train the best neural network, and the latter denotes deployment in a real environment. In other words, the best neural network obtained from the offline scheme is used in the online scheme to produce the lattice reduction result, completing a deep learning-based lattice reduction algorithm. In addition, this study considers time-varying channel environments. Compared with the original LR-aided MIMO system, the simulation results show that both the orthogonality of the channel matrix and the BER performance of the DLLR-aided MIMO system are improved.
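To make the lattice-reduction-aided detection described in the abstract concrete, the sketch below shows the classical real-valued LLL reduction together with an orthogonality-defect measure and an LR-aided zero-forcing detector, written in Python. It illustrates only the standard baseline technique and is not code from the thesis: the function names (lll_reduce, orthogonality_defect, lr_aided_zf_detect) are hypothetical, a complex-valued LLL is normally used for MIMO, and the detector assumes integer-lattice symbols, omitting the constellation shifting and scaling a real M-QAM system needs.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalisation of the columns of B; returns Q and the mu coefficients."""
    n = B.shape[1]
    Q = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for k in range(n):
        Q[:, k] = B[:, k]
        for j in range(k):
            mu[k, j] = (B[:, k] @ Q[:, j]) / (Q[:, j] @ Q[:, j])
            Q[:, k] -= mu[k, j] * Q[:, j]
    return Q, mu

def lll_reduce(H, delta=0.75):
    """Plain (real-valued) LLL reduction of the columns of H.
    Returns the reduced basis H_red and a unimodular T with H_red = H @ T."""
    H = np.array(H, dtype=float)
    n = H.shape[1]
    T = np.eye(n, dtype=int)
    Q, mu = gram_schmidt(H)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):               # size reduction
            q = int(round(mu[k, j]))
            if q != 0:
                H[:, k] -= q * H[:, j]
                T[:, k] -= q * T[:, j]
                Q, mu = gram_schmidt(H)              # recompute (simple but inefficient)
        lhs = Q[:, k] @ Q[:, k]
        rhs = (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1])
        if lhs >= rhs:                               # Lovász condition satisfied
            k += 1
        else:                                        # swap the two columns and step back
            H[:, [k - 1, k]] = H[:, [k, k - 1]]
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            Q, mu = gram_schmidt(H)
            k = max(k - 1, 1)
    return H, T

def orthogonality_defect(H):
    """Product of column norms divided by the lattice volume; equals 1 for an orthogonal basis."""
    return np.prod(np.linalg.norm(H, axis=0)) / np.sqrt(np.linalg.det(H.T @ H))

def lr_aided_zf_detect(y, H):
    """LR-aided zero-forcing: invert the better-conditioned reduced basis,
    round in the reduced (z) domain, then map back with T."""
    H_red, T = lll_reduce(H)
    z_hat = np.round(np.linalg.pinv(H_red) @ y)      # per-dimension rounding after ZF
    return T @ z_hat                                 # estimate in the original symbol domain
```

For a correlated channel matrix H, orthogonality_defect(lll_reduce(H)[0]) is typically noticeably smaller than orthogonality_defect(H), which is the channel-orthogonality improvement the abstract refers to.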
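The offline-training/online-application split can also be shown structurally. The skeleton below is only a schematic sketch under stated assumptions: the state encoding, the flat action index over basis-column operations, and every class and function name (TinyPolicyValueNet, offline_scheme, online_scheme, env_step, initial_state) are hypothetical stand-ins, the "network" is an untrained linear toy, and the Monte-Carlo tree search and training update that the AlphaGo-Zero-style method actually relies on are deliberately left out.

```python
import numpy as np

class TinyPolicyValueNet:
    """Stand-in for the policy/value network: one random linear layer with a
    softmax policy head and a tanh value head. A real system would use a
    trained deep network instead."""
    def __init__(self, state_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.W_pi = rng.standard_normal((n_actions, state_dim)) * 0.01
        self.W_v = rng.standard_normal(state_dim) * 0.01

    def predict(self, state):
        logits = self.W_pi @ state
        policy = np.exp(logits - logits.max())
        policy /= policy.sum()
        value = float(np.tanh(self.W_v @ state))
        return policy, value

def offline_scheme(net, n_episodes, env_step, initial_state):
    """Offline scheme (sketch): roll out episodes with the current network and
    collect (state, policy, reward) tuples that a training step would consume.
    The tree search and the actual network update are elided here."""
    dataset = []
    for _ in range(n_episodes):
        state, done = initial_state(), False
        while not done:
            policy, _ = net.predict(state)           # full method: search-improved policy
            action = int(np.argmax(policy))
            next_state, reward, done = env_step(state, action)
            dataset.append((state, policy, reward))
            state = next_state
    return dataset

def online_scheme(net, env_step, initial_state, max_steps=50):
    """Online scheme (sketch): run the trained network greedily on a new
    channel realisation to produce a sequence of reduction operations."""
    state, actions = initial_state(), []
    for _ in range(max_steps):
        policy, _ = net.predict(state)
        action = int(np.argmax(policy))
        actions.append(action)
        state, _, done = env_step(state, action)
        if done:
            break
    return actions

if __name__ == "__main__":
    # Toy usage with a dummy 2-action environment that terminates after 3 steps.
    def init_state():
        return np.zeros(8)
    def env_step(state, action):
        nxt = state + 1.0
        return nxt, 0.0, bool(nxt[0] >= 3)
    net = TinyPolicyValueNet(state_dim=8, n_actions=2)
    print(len(offline_scheme(net, n_episodes=2, env_step=env_step, initial_state=init_state)))
    print(online_scheme(net, env_step=env_step, initial_state=init_state))
```

The point of the split is visible in the signatures: the offline scheme returns training data and would be iterated until the network is good enough, while the online scheme only runs forward passes on a new channel realisation.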
Table of Contents
1 Introduction ------ 1
1.1 MIMO Systems and Lattice Reduction Aided MIMO ------ 1
1.2 Research Motivation ------ 2
1.3 Organization of This Thesis ------ 2
1.4 Notation ------ 3
2 Channel Model and Lattice Reduction Algorithm ------ 5
2.1 Channel Model ------ 5
2.1.1 Time-Correlated Channel Model ------ 6
2.1.2 QuaDRiGa Channel Model ------ 7
2.2 Lattice Reduction Algorithm ------ 11
2.2.1 Lattice ------ 11
2.2.2 Lattice Reduction Algorithm ------ 12
2.3 Lattice Reduction-Aided MIMO Detector ------ 14
3 Deep Learning Design for Lattice Reduction Aided MIMO System ------ 17
3.1 Design Issues for Deep Learning Algorithm ------ 17
3.1.1 Lattice Reduction in Time-Varying Channel ------ 17
3.1.2 Definition of Path Matrix and Step Matrix ------ 18
3.1.3 Deep Learning Model Consideration ------ 19
3.2 AlphaGo Zero ------ 19
3.2.1 Architecture of AlphaGo Zero ------ 20
3.2.2 Monte-Carlo Tree Search ------ 20
3.2.3 Neural Network ------ 23
4 Proposed Deep Learning Based Lattice Reduction Aided MIMO System ------ 27
4.1 Proposed Deep Learning Based Lattice Reduction Algorithm ------ 27
4.1.1 Deep Learning Based Lattice Reduction Algorithm ------ 28
4.1.2 Offline Scheme ------ 29
4.1.3 Online Scheme ------ 35
4.2 Comparison of AlphaGo Zero and Deep Learning Based Lattice Reduction Algorithm ------ 36
5 Simulation and Analysis Results ------ 39
5.1 Simulation Environments ------ 39
5.2 Simulation Results in Time-Correlated Channel Model ------ 42
5.2.1 Training Results ------ 42
5.2.2 Bit Error Rate Performance ------ 46
5.3 Simulation Results in QuaDRiGa Channel Model ------ 51
5.3.1 Training Results ------ 51
5.3.2 Bit Error Rate Performance ------ 51
5.4 Discussions ------ 54
6 Conclusion ------ 57
6.1 Conclusion ------ 57
6.2 Future Works ------ 58
References ------ 59