[1] D. J. Love, R. W. Heath, V. K. N. Lau, D. Gesbert, B. D. Rao, and M. Andrews, “An overview of limited feedback in wireless communication systems,” IEEE J. Sel. Areas Commun., vol. 26, no. 8, pp. 1341–1365, Oct. 2008.
[2] C.-K. Wen, W.-T. Shih, and S. Jin, “Deep learning for massive MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 7, no. 5, pp. 748–751, Oct. 2018.
[3] Z. Lu, J. Wang, and J. Song, “Multi-resolution CSI feedback with deep learning in massive MIMO system,” in Proc. IEEE Int. Conf. Commun. (ICC), Dublin, Ireland, Jun. 2020, pp. 1–6.
[4] Z. Liu, L. Zhang, and Z. Ding, “Exploiting bi-directional channel reciprocity in deep learning for low rate massive MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 8, no. 3, pp. 889–892, Jun. 2019.
[5] L. Liu, C. Oestges, J. Poutanen, K. Haneda, P. Vainikainen, F. Quitin, F. Tufvesson, and P. De Doncker, “The COST 2100 MIMO channel model,” IEEE Wireless Commun., vol. 19, no. 6, pp. 92–99, Dec. 2012.
[6] T. Wang, C.-K. Wen, S. Jin, and G. Y. Li, “Deep learning-based CSI feedback approach for time-varying massive MIMO channels,” IEEE Wireless Commun. Lett., vol. 8, no. 2, pp. 416–419, Apr. 2019.
[7] X. Li and H. Wu, “Spatio-temporal representation with deep neural recurrent network in MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 9, no. 5, pp. 653–657, May 2020.
[8] R. Pascanu, Ç. Gülçehre, K. Cho, and Y. Bengio, “How to construct deep recurrent neural networks,” in Proc. 2nd Int. Conf. Learn. Represent. (ICLR), Banff, Canada, Apr. 2014, pp. 1–13.
[9] N. Srivastava, E. Mansimov, and R. Salakhutdinov, “Unsupervised learning of video representations using LSTMs,” in Proc. 32nd Int. Conf. Mach. Learn. (ICML), Lille, France, Jul. 2015, vol. 37, pp. 843–852.
[10] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, Jul. 2006.
[11] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997.
[12] Y. Shen, S. Tan, A. Sordoni, and A. Courville, “Ordered neurons: Integrating tree structures into recurrent neural networks,” in Proc. 7th Int. Conf. Learn. Represent. (ICLR), New Orleans, LA, USA, May 2019, pp. 1–14.
[13] M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Trans. Signal Process., vol. 45, no. 11, pp. 2673–2681, Nov. 1997.
[14] Z. Cui, R. Ke, and Y. Wang, “Deep bidirectional and unidirectional LSTM recurrent neural network for network-wide traffic speed prediction,” in Proc. 6th Int. Workshop Urban Comput. (UrbComp), Halifax, Canada, Aug. 2017, pp. 1–11.
[15] A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional LSTM and other neural network architectures,” Neural Netw., vol. 18, no. 5–6, pp. 602–610, Jul.–Aug. 2005.
[16] A. Graves, N. Jaitly, and A. Mohamed, “Hybrid speech recognition with deep bidirectional LSTM,” in Proc. IEEE Workshop Autom. Speech Recogn. Underst. (ASRU), Olomouc, Czech Republic, Dec. 2013, pp. 273–278.
[17] K. E. Baddour and N. C. Beaulieu, “Autoregressive modeling for fading channel simulation,” IEEE Trans. Wireless Commun., vol. 4, no. 4, pp. 1650–1662, Jul. 2005.
[18] A. Jamoos, “Rayleigh fading channel simulation,” MATLAB Central File Exchange, Jun. 2006.
[19] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. ICML Workshop Deep Learn. Audio, Speech, Lang. Process., Atlanta, GA, USA, Jun. 2013, pp. 1–6.
[20] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv:1505.00853v2 [cs.LG], Nov. 2015.
[21] T. Tieleman and G. Hinton, “Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude,” COURSERA: Neural Netw. Mach. Learn., 2012.
[22] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv:1609.04747v2 [cs.LG], Jun. 2017.
[23] S. Merity, N. S. Keskar, and R. Socher, “Regularizing and optimizing LSTM language models,” in Proc. 6th Int. Conf. Learn. Represent. (ICLR), Vancouver, Canada, Apr./May 2018, pp. 1–13.
[24] Y. Gal and Z. Ghahramani, “A theoretically grounded application of dropout in recurrent neural networks,” in Proc. 30th Int. Conf. Neural Inf. Process. Syst. (NIPS), Barcelona, Spain, Dec. 2016, pp. 1027–1035.
[25] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., vol. 15, no. 56, pp. 1929–1958, Jun. 2014.
[26] M. Soltani, V. Pourahmadi, A. Mirzaei, and H. Sheikhzadeh, “Deep learning-based channel estimation,” IEEE Commun. Lett., vol. 23, no. 4, pp. 652–655, Apr. 2019.