[1] E. Nachmani, Y. Be'ery, and D. Burshtein, "Learning to decode linear codes using deep learning," in Proc. IEEE Annu. Allerton Conf. Commun., Control, Comput. (Allerton), Monticello, IL, USA, Sep. 2016, pp. 341–346.
[2] C. E. Shannon, "Communication in the presence of noise," Proc. IEEE, vol. 72, no. 9, pp. 1192–1201, Sep. 1984.
[3] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA, USA: MIT Press, 1963.
[4] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, Dec. 1943.
[5] K. Fukushima and S. Miyake, "Neocognitron: A neural network model for a mechanism of visual pattern recognition," IEEE Trans. Syst., Man, Cybern., vol. SMC-13, pp. 826–834, 1983.
[6] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, pp. 504–507, Jul. 2006.
[7] N. Raveendran and S. G. Srinivasa, "An analysis into the loopy belief propagation algorithm over short cycles," in Proc. IEEE Int. Conf. Commun. (ICC), Sydney, NSW, Australia, Jun. 2014.
[8] X. Zhang and S. Chen, "A two-stage decoding algorithm to lower the error floors for LDPC codes," IEEE Commun. Lett., vol. 19, no. 4, pp. 517–520, Apr. 2015.
[9] X. Tao, P. Liu, Z. Feng, and Z. Hu, "On the construction of low error floor LDPC codes on rectangular lattices," IEEE Commun. Lett., vol. 18, pp. 2073–2076, 2014.
[10] L. Dolecek, P. Lee, Z. Zhang, V. Anantharam, B. Nikolic, and M. Wainwright, "Predicting error floors of structured LDPC codes: Deterministic bounds and estimates," IEEE J. Sel. Areas Commun., vol. 27, no. 6, pp. 908–917, Aug. 2009.
[11] H. Wymeersch, F. Penna, and V. Savic, "Uniformly reweighted belief propagation for estimation and detection in wireless networks," IEEE Trans. Wireless Commun., vol. 11, no. 4, pp. 1587–1595, Apr. 2012.
[12] J. Liu and R. C. de Lamare, "Low-latency reweighted belief propagation decoding for LDPC codes," IEEE Commun. Lett., vol. 16, no. 10, pp. 1660–1663, Oct. 2012.
[13] F. Liang, C. Shen, and F. Wu, "An iterative BP-CNN architecture for channel decoding," IEEE J. Sel. Topics Signal Process., vol. 12, no. 1, pp. 144–159, Feb. 2018.
[14] L. Lugosch and W. J. Gross, "Neural offset min-sum decoding," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, Jun. 2017.
[15] B. J. Wythoff, "Backpropagation neural networks: A tutorial," Chemometrics and Intelligent Laboratory Systems, vol. 18, pp. 115–155, 1993.
[16] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436–444, May 2015.
[17] J. Liang and R. Liu, "Stacked denoising autoencoder and dropout together to prevent overfitting in deep neural network," in Proc. 8th Int. Congr. Image Signal Process. (CISP), 2015, pp. 697–701.
[18] T. R. Halford and K. M. Chugg, "An algorithm for counting short cycles in bipartite graphs," IEEE Trans. Inf. Theory, vol. 52, no. 1, pp. 287–292, Jan. 2006.
[19] M. Karimi and A. H. Banihashemi, "A message-passing algorithm for counting short cycles in a graph," in Proc. IEEE Inf. Theory Workshop (ITW), Cairo, Egypt, 2010, pp. 1–5.
[20] K. Gracie and M.-H. Hamon, "Turbo and turbo-like codes: Principles and applications in telecommunications," Proc. IEEE, vol. 95, no. 6, pp. 1228–1254, Jun. 2007.
[21] E. Arikan, "Systematic polar coding," IEEE Commun. Lett., vol. 15, no. 8, pp. 860–862, Aug. 2011.
[22] C. Y. Wang, "Double quasi-cyclic low-density parity-check codec design for 5G New Radio," Master's thesis, National Tsing Hua University, Hsinchu, Taiwan, R.O.C., 2017.
[23] Nokia and Alcatel-Lucent Shanghai Bell, "LDPC design for eMBB," 3GPP TSG RAN WG1 Meeting, R1-1708829, Hangzhou, P.R. China, May 2017.
[24] N. T. D. Linh, G. Wang, M. Jia, and G. Rugumira, "Performance evaluation of sum product and min-sum stopping node algorithm for LDPC decoding," Information Technology Journal, vol. 11, pp. 1298–1303, 2012.
[25] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in Proc. Int. Conf. Learning Representations (ICLR), 2015.
[26] J. Duchi, E. Hazan, and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization," Journal of Machine Learning Research, vol. 12, pp. 2121–2159, 2011.
[27] G. Hinton, N. Srivastava, and K. Swersky, "Lecture 6a: Overview of mini-batch gradient descent," Coursera lecture slides, https://class.coursera.org/neuralnets-2012-001/lecture, 2012.