國家教育研究院. (2021). 樂詞網: 下載專區 教科書名詞 [National Academy for Educational Research terminology website: Textbook terms download area]. https://terms.naer.edu.tw/download/4/
梁雲霞. (2009). 學生能夠自主學習嗎? 華人教師對自主學習觀點之探究 [Can students learn autonomously? An inquiry into Chinese teachers' perspectives on self-directed learning]. Paper presented at the International Conference on Primary Education 2009, Hong Kong Institute of Education (HKIEd), November 25-27, 2009.
陳科名. (2022). 高中彈性學習時間之學生自主學習實施現況與反思—以屏東縣為例 [Implementation and reflections on student self-directed learning during flexible learning time in senior high school: The case of Pingtung County]. 中等教育 [Secondary Education], 73(2). https://doi.org/10.6249/SE.202206_73(2).0013
Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., & Anadkat, S. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Adiwardana, D., Luong, M.-T., So, D. R., Hall, J., Fiedel, N., Thoppilan, R., Yang, Z., Kulshreshtha, A., Nemade, G., & Lu, Y. (2020). Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977.
Anthropic. (2024). Chain prompts. https://docs.anthropic.com/claude/docs/chain-prompts
Bradeško, L., & Mladenić, D. (2012). A survey of chatbot systems through a Loebner Prize competition. Proceedings of the Slovenian Language Technologies Society Eighth Conference of Language Technologies.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., & Askell, A. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
Cunningham-Nelson, S., Baktashmotlagh, M., & Boles, W. (2019). Visualizing student opinion through text analysis. IEEE Transactions on Education, 62(4), 305-311.
Davis, M. R. (2014). District's ambitious personalized learning effort shows progress. Education Week, 34(9), s13.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Devolder, A., van Braak, J., & Tondeur, J. (2012). Supporting self-regulated learning in computer-based learning environments: Systematic review of effects of scaffolding in the domain of science education. Journal of Computer Assisted Learning, 28(6), 557-573.
Dignath, C., & Büttner, G. (2008). Components of fostering self-regulated learning among students. A meta-analysis on intervention studies at primary and secondary school level. Metacognition and Learning, 3, 231-264.
Es, S., James, J., Espinosa-Anke, L., & Schockaert, S. (2023). RAGAS: Automated evaluation of retrieval augmented generation. arXiv preprint arXiv:2309.15217.
Feng, D., Shaw, E., Kim, J., & Hovy, E. (2006). An intelligent discussion-bot for answering student queries in threaded discussions. Proceedings of the International Conference on Intelligent User Interfaces.
Izacard, G., & Grave, E. (2021). Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282. https://doi.org/10.48550/arxiv.2007.01282
Goda, Y., Yamada, M., Matsukawa, H., Hata, K., & Yasunami, S. (2014). Conversation with a chatbot before an online EFL group discussion and the effects on critical thinking. The Journal of Information and Systems in Education, 13(1), 1-7.
Haake, M., & Gulz, A. (2009). A look at the roles of look & roles in embodied pedagogical agents: A user preference perspective. International Journal of Artificial Intelligence in Education, 19(1), 39-71.
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., & Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
Khan, R. A., Jawaid, M., Khan, A. R., & Sajjad, M. (2023). ChatGPT - Reshaping medical education and clinical management. Pakistan Journal of Medical Sciences, 39(2), 605-607. https://doi.org/10.12669/pjms.39.2.7653
Kim, J., Choi, S., Amplayo, R. K., & Hwang, S.-w. (2020, December). Retrieval-augmented controllable review generation. In D. Scott, N. Bel, & C. Zong (Eds.), Proceedings of the 28th International Conference on Computational Linguistics. Barcelona, Spain (Online).
Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2023). Interacting with educational chatbots: A systematic review. Education and Information Technologies, 28(1), 973-1018. https://doi.org/10.1007/s10639-022-11177-3
Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), e0000198. https://doi.org/10.1371/journal.pdig.0000198
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel, T., Riedel, S., & Kiela, D. (2021). Retrieval-augmented generation for knowledge-intensive NLP tasks. arXiv preprint arXiv:2005.11401. https://doi.org/10.48550/arxiv.2005.11401
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
Ram, O., Levine, Y., Dalmedigos, I., Muhlgay, D., Shashua, A., Leyton-Brown, K., & Shoham, Y. (2023). In-context retrieval-augmented language models. Transactions of the Association for Computational Linguistics, 11, 1316-1331. https://doi.org/10.1162/tacl_a_00605
Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27.
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., & Azhar, F. (2023). LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
van der Graaf, J., Lim, L., Fan, Y., Kilgour, J., Moore, J., Gašević, D., Bannert, M., & Molenaar, I. (2022). The dynamics between self-regulated learning and learning outcomes: An exploratory approach and implications. Metacognition and Learning, 17(3), 745-771. https://doi.org/10.1007/s11409-022-09308-9
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., & Le, Q. V. (2021). Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824-24837.
Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.
Wiley, D. A., & Edwards, E. K. (2002). Online self-organizing social systems: The decentralized future of online learning. Quarterly Review of Distance Education, 3(1).
Wu, K., Wu, E., & Zou, J. (2024). How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs' internal prior. arXiv preprint arXiv:2404.10198.
Zimmerman, B. J. (1986). Becoming a self-regulated learner: Which are the key subprocesses? Contemporary Educational Psychology, 11(4), 307-313. https://doi.org/10.1016/0361-476X(86)90027-5