[1] Humble, N., and P. Mozelius. "The threat, hype, and promise of artificial intelligence in education." Discover Artificial Intelligence 2 (2022): 22.
[2] Li, Fei-Fei. "With Spatial Intelligence, AI Will Understand the Real World." TED Talk, May 2024. https://youtu.be/y8NtMZ7VGmU?si=e18PxQKMD6_XqF-e.
[3] Brown, Tom, et al. "Language models are few-shot learners." Advances in neural information processing systems 33 (2020): 1877-1901.
[4] Sahoo, Pranab, et al. "A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications." arXiv preprint arXiv:2402.07927 (2024).
[5] Wei, Jason, et al. "Chain-of-thought prompting elicits reasoning in large language models." Advances in neural information processing systems 35 (2022): 24824-24837.
[6] Yao, Yao, Zuchao Li, and Hai Zhao. "Beyond chain-of-thought, effective graph-of-thought reasoning in large language models." arXiv preprint arXiv:2305.16582 (2023).
[7] Wang, Zilong, et al. "Chain-of-table: Evolving tables in the reasoning chain for table understanding." (2024).
[8] Yao, Shunyu, et al. "Tree of thoughts: Deliberate problem solving with large language models." arXiv preprint arXiv:2305.10601 (2023).
[9] Hu, Hanxu, et al. "Chain-of-symbol prompting elicits planning in large language models." (2023).
[10] Wang, Xuezhi, et al. "Self-consistency improves chain of thought reasoning in language models." arXiv preprint arXiv:2203.11171 (2022).
[11] Zhou, Yucheng, et al. "Thread of thought unraveling chaotic contexts." arXiv preprint arXiv:2311.08734 (2023).
[12] Kojima, Takeshi, et al. "Large language models are zero-shot reasoners." Advances in neural information processing systems 35 (2022): 22199-22213.
[13] Zhang, Zhuosheng, et al. "Automatic chain of thought prompting in large language models." arXiv preprint arXiv:2210.03493 (2022).
[14] Kaelbling, Leslie Pack, Michael L. Littman, and Andrew W. Moore. "Reinforcement learning: A survey." Journal of artificial intelligence research 4 (1996): 237-285.
[15] Silviana, Silviana, Andy Hardianto, and Dadang Hermawan. "The implementation of anthropometric measurement in designing the ergonomics work furniture." EUREKA: Physics and Engineering 3 (2022): 20-27.
[16] Sydor, Maciej, and Miloš Hitka. "Chair size design based on user height." Biomimetics 8.1 (2023): 57.
[17] Ramesh, Aditya, et al. "Zero-shot text-to-image generation." International conference on machine learning. PMLR, 2021.
[18] Cobbe, Karl, et al. "Training verifiers to solve math word problems." arXiv preprint arXiv:2110.14168 (2021).
[19] Talmor, Alon, et al. "CommonsenseQA: A question answering challenge targeting commonsense knowledge." arXiv preprint arXiv:1811.00937 (2018).
[20] Achiam, Josh, et al. "GPT-4 technical report." arXiv preprint arXiv:2303.08774 (2023).
[21] Durante, Zane, et al. "Few-Shot Classification of Interactive Activities of Daily Living (InteractADL)." arXiv preprint arXiv:2406.01662 (2024).
[22] Radford, Alec, et al. "Language models are unsupervised multitask learners." OpenAI blog 1.8 (2019): 9.
[23] Brown, Tom, et al. "Language models are few-shot learners." Advances in neural information processing systems 33 (2020): 1877-1901.
[24] Reimers, Nils, and Iryna Gurevych. "Sentence-BERT: Sentence embeddings using Siamese BERT-networks." arXiv preprint arXiv:1908.10084 (2019).