
Detailed Record

Author (Chinese): 王瑋逸
Author (English): Wang, Wei-Yi
Title (Chinese): 混和隱藏層架構深度學習輔助之低密度奇偶檢查碼解碼器
Title (English): Deep Learning Assisted Low Density Parity Check Decoder with Hybrid Hidden Layer Architecture
Advisor (Chinese): 吳仁銘
Advisor (English): Wu, Jen-Ming
Committee members (Chinese): 洪樂文、吳卓諭
Committee members (English): Hong, Yao-Win; Wu, Jwo-Yuh
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 105061518
Year of publication (ROC calendar): 107 (2018)
Graduation academic year: 107
Language: English
Number of pages: 48
Keywords (Chinese): 低密度奇偶檢查碼、神經網路、置信度傳播、通道編碼、小迴圈對訊息重複計算的影響
Keywords (English): low density parity check code; neural network; belief propagation; channel coding; double counting effect for small cycles
In this thesis, we apply deep learning to assist the belief propagation (BP) algorithm in decoding low-density parity-check (LDPC) codes. Prior work and implementations have shown that, when the girth (the length of the smallest cycle) is long enough, BP decoding of LDPC codes approaches the performance of maximum-likelihood (ML) decoding while requiring far lower computational complexity. However, equal weights on the Tanner graph cause BP to double-count messages. The messages passed on the Tanner graph have different reliabilities, depending on the design of the parity-check matrix (e.g., the girth and the number of small cycles) and on how severely each coded bit is corrupted by the channel. BP decoding of LDPC codes relies on the passed messages being mutually independent, but small cycles in the parity-check matrix introduce correlation among the messages, violating BP's independence assumption and degrading decoding performance at high SNR. URW-BP and VFAP-BP have previously used unequal weights to compensate for this loss, but both methods adjust with only a single constant and therefore cannot compensate accurately. Moreover, the reliability of each message changes with the number of message-passing iterations and differs across check nodes, so it is difficult to express each message's weight with a single formula. We therefore design a hybrid-hidden-layer neural-network-assisted LDPC decoder that learns the weights on the Tanner graph; the learned weights compensate for the loss caused by equal weights and improve decoding performance at high SNR. We also design a new online-learning communication system to realize the neural-network-assisted decoder.
In this thesis, we propose a novel belief propagation (BP) decoder for low-density parity-check (LDPC) codes assisted by deep learning. With a long enough girth, BP has been shown to decode LDPC codes with low complexity while yielding error-correction performance close to that of maximum-likelihood (ML) decoding. However, using equal weights on the Tanner graph leads to a "double counting" effect. The messages passed along the edges have different reliabilities due to the structure of the parity-check matrix, e.g., the girth and the number of small cycles, and the channel condition each bit experiences. The performance of BP relies on the independence of messages from different nodes, but small cycles in the Tanner graph introduce correlation among messages. This dependency violates the independence requirement of BP decoding and degrades its performance. Methods such as uniformly reweighted belief propagation (URW-BP) and variable factor appearance probability belief propagation (VFAP-BP) use unequal weights to mitigate message dependency, but they compensate with a single constant weight, which is not general enough. Moreover, message reliability changes in every decoding iteration and varies from one check node to another, so it is difficult to derive a closed-form reweighting factor. Hence, we design a hybrid-hidden-layer neural-network-assisted BP algorithm that learns unequal weights on the Tanner graph. The learned weights compensate for the unreliability induced by the parity-check matrix structure, enhancing the error-correction performance in the high-SNR region. We also design an online-training communication system to realize the neural-network-assisted decoder.
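To make the idea concrete, the following is a minimal illustrative sketch of LLR-domain belief propagation with per-edge weights, in the spirit of the reweighted-BP methods discussed above. It is not the thesis's decoder: the tiny (7,4) Hamming parity-check matrix stands in for an LDPC code, and the weight matrix `w` holds placeholders for the values the neural network would learn (setting `w` to all ones recovers standard BP).

```python
import numpy as np

# Toy parity-check matrix (Hamming (7,4)) standing in for an LDPC code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def weighted_bp_decode(llr_ch, H, w, n_iter=10):
    """LLR-domain sum-product decoding with one weight per Tanner-graph edge.

    llr_ch : channel LLRs, shape (n,)
    w      : per-edge weights, same shape as H; all-ones = standard BP,
             unequal values mimic the learned/reweighted variants.
    """
    m, n = H.shape
    msg_v2c = np.where(H == 1, llr_ch, 0.0)   # variable-to-check messages
    for _ in range(n_iter):
        # Check-node update: tanh rule over the other incoming edges,
        # with the outgoing message scaled by its edge weight.
        t = np.tanh(np.clip(msg_v2c, -20.0, 20.0) / 2.0)
        msg_c2v = np.zeros_like(msg_v2c)
        for i in range(m):
            idx = np.where(H[i] == 1)[0]
            for j in idx:
                prod = np.prod([t[i, k] for k in idx if k != j])
                prod = np.clip(prod, -0.999999, 0.999999)  # keep arctanh finite
                msg_c2v[i, j] = w[i, j] * 2.0 * np.arctanh(prod)
        # Variable-node update: channel LLR plus extrinsic check messages.
        total = llr_ch + msg_c2v.sum(axis=0)
        msg_v2c = np.where(H == 1, total[None, :] - msg_c2v, 0.0)
    return (total < 0).astype(int)  # hard decision: negative LLR -> bit 1
```

With equal weights this corrects a single flipped bit of the all-zero codeword; in the thesis's setting, a neural network would instead output a different `w` per iteration and per edge, compensating for the correlation introduced by small cycles.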
Abstract (Chinese) i
Abstract ii
Contents iv
1 INTRODUCTION 1
1.1 Foreword . . . . 1
1.2 Research Motivation and Objective . . . . 3
1.3 Related works . . . . 5
1.4 Proposed method . . . . 6
1.5 Contribution and Achievement . . . . 6
1.6 Thesis Organization . . . . 7
2 BACKGROUNDS 8
2.1 Low density parity check code . . . . 8
2.1.1 Quasi cyclic (QC) LDPC . . . . 9
2.2 The decoder of low density parity check code . . . . 10
2.3 Deep Learning . . . . 13
2.4 Related Work . . . . 14
2.4.1 Reweighted Belief Propagation . . . . 15
2.4.2 Learning to Decode Linear Codes . . . . 16
3 Deep Learning Assisted LDPC Decoder 19
3.1 System Model . . . . 19
3.2 The Neural Network Structure . . . . 21
3.2.1 Prepare the Training data . . . . 21
3.2.2 The Neural Network Assisted LDPC Decoder . . . . 21
3.2.3 Neural Network Structure . . . . 24
4 SIMULATION RESULTS 34
4.1 Simulation Parameters . . . . 34
4.2 The learned reweighted factor . . . . 36
4.3 The error rate performance . . . 37
4.4 Faster Convergence . . . . 40
5 CONCLUSIONS 43