|
[1] OpenAI, “GPT-4 technical report,” https://cdn.openai.com/papers/gpt-4.pdf, 2023. [2] S. Pichai and D. Hassabis, “Introducing gemini: our largest and most capable AI model,” Google. Retrieved December, 2023. [3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018. [4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language models are few-shot learners,” Adv. Neural Inf. Process. Syst., vol. 33, pp. 1877–1901, 2020. [5] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei, “Scaling laws for neural language models,” arXiv preprint arXiv:2001.08361, 2020. [6] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clark et al., “Training compute-optimal large language models,” arXiv preprint arXiv:2203.15556, 2022. [7] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., “Emergent abilities of large language models,” arXiv preprint arXiv:2206.07682, 2022. [8] J. Wei, Y. Tay, and Q. V. Le, “Inverse scaling can become u-shaped,” arXiv preprint arXiv:2211.02011, 2022. [9] M. Newman, Networks: an introduction. OUP Oxford, 2009. [10] C.-S. Chang, “A simple explanation for the phase transition in large language models with list decoding,” arXiv preprint arXiv:2303.13112, 2023. [11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” Adv. Neural Inf. Process. Syst., vol. 30, 2017. [12] H. Ramsauer, B. Sch¨afl, J. Lehner, P. Seidl, M. Widrich, T. Adler, L. Gruber, M. Holzleitner, M. Pavlovi´c, G. K. Sandve et al., “Hopfield networks is all you need,” arXiv preprint arXiv:2008.02217, 2020. [13] S. Arora and A. Goyal, “A theory for emergence of complex skills in language models,” arXiv:2307.15936, 2023. [14] R. Gallager, “Low-density parity-check codes,” IRE Trans. Inf. Theory, vol. 8, no. 1, pp. 21–28, 1962. [15] M. A. Shokrollahi, “New sequences of linear time erasure codes approaching the channel capacity,” in International Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes, Springer, 1999, pp. 65–76. [16] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design of capacityapproaching irregular low-density parity-check codes,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 619–637, 2001. [17] G. Liva, “Graph-based analysis and optimization of contention resolution diversity slotted ALOHA,” IEEE Trans. Commun., vol. 59, no. 2, pp. 477–487, 2011. [18] K. R. Narayanan and H. D. Pfister, “Iterative collision resolution for slotted ALOHA: An optimal uncoordinated transmission policy,” in Proc. of International Symposium on Turbo Codes and Iterative Information Processing (ISTC), 2012, pp. 136–139. [19] E. Paolini, G. Liva, and M. Chiani, “Random access on graphs: A survey and new results,” in Proc. of Asilomar Conference on Signals, Systems and Computers, 2012, pp. 1743–1747. [20] D. Jakoveti´c, D. Bajovi´c, D. Vukobratovi´c, and V. Crnojevi´c, “Cooperative slotted ALOHA for multi-base station systems,” IEEE Trans. Commun., vol. 63, no. 4, pp. 1443–1456, 2015. [21] C. Stefanovi´c and D. Vukobratovi´c, “Coded random access,” in ˇ Network Coding and Subspace Designs, Springer, 2018, pp. 339–359. [22] Y.-H. Chiang, Y.-J. Lin, C.-S. Chang, and Y.-W. P. Hong, “Parallel decoding of irsa with noise,” in Proc. of IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2022, pp. 320–326. [23] M. Luby, M. Mitzenmacher, and M. A. Shokrollahi, “Analysis of random processes via and-or tree evaluation,” in SODA, vol. 98, 1998, pp. 364–373. [24] M. Luby, M. Mitzenmacher, A. Shokrollah, and D. Spielman, “Analysis of low density codes and improved designs using irregular graphs,” in Proc. of the Annual ACM Symposium on Theory of Computing, 1998, pp. 249–258. [25] T. J. Richardson and R. L. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599–618, 2001. [26] C.-M. Chang, Y.-J. Lin, C.-S. Chang, and D.-S. Lee, “On the stability regions of coded Poisson receivers with multiple classes of users and receivers,” IEEE/ACM Trans. Netw., vol. 31, no. 1, pp. 234–247, 2022. [27] C.-H. Yu, L. Huang, C.-S. Chang, and D.-S. Lee, “Poisson receivers: a probabilistic framework for analyzing coded random access,” IEEE/ACM Trans. Netw., vol. 29, no. 2, pp. 862–875, 2021. [28] T.-H. Liu, C.-H. Yu, Y.-J. Lin, C.-M. Chang, C.-S. Chang, and D.-S. Lee, “ALOHA receivers: a network calculus approach for analyzing coded multiple access with SIC,” IEEE/ACM Trans. Netw., vol. 29, no. 2, pp. 862–875, 2021. [29] E. Paolini, G. Liva, and M. Chiani, “Graph-based random access for the collision channel without feedback: Capacity bound,” in Proc. of IEEE Global Communications Conference, 2011. [30] O. Ordentlich and Y. Polyanskiy, “Low complexity schemes for the random access Gaussian channel,” in Proc. of IEEE Int. Symp. Inf. Theory (ISIT), 2017, pp. 2528–2532. [31] W. Weaver, “Recent contributions to the mathematical theory of communication,” ETC: a review of general semantics, pp. 261–281, 1953. [32] C. E. Shannon, “Prediction and entropy of printed english,” The Bell System Technical Journal, vol. 30, no. 1, pp. 50–64, 1951. [33] H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, “Deep learning enabled semantic communication systems,” IEEE Trans. Signal Process., vol. 69, pp. 2663–2675, 2021. [34] Q. Zhou, R. Li, Z. Zhao, C. Peng, and H. Zhang, “Semantic communication with adaptive universal transformer,” IEEE Wireless Commun. Lett., vol. 11, no. 3, pp.453–457, 2022. [35] Q. Hu, G. Zhang, Z. Qin, Y. Cai, G. Yu, and G. Y. Li, “Robust semantic communications with masked VQ-VAE enabled codebook,” IEEE Trans. Wireless Commun.,pp. 1–1, 2023.
|