
詳目顯示

作者(中文):蔡幸潔
作者(外文):Tsai, Hsing-Chieh
論文名稱(中文):科學論證結合對話機器人輔助媒體識讀之決策歷程研究:以辨別假新聞為例
論文名稱(外文):Exploring the Decision-Making Process of Scientific Argumentation Combined with Chatbot-Assisted Media Literacy: A Case Study on the Identification of Fake News
指導教授(中文):廖冠智
指導教授(外文):Liao, Guan-Ze
口試委員(中文):林倍伊、李松濤
口試委員(外文):Lin, Pei-Yi; Lee, Sung-Tao
學位類別:碩士
校院名稱:國立清華大學
系所名稱:學習科學與科技研究所
學號:110291505
出版年(民國):113
畢業學年度:112
語文別:中文
論文頁數:132
中文關鍵詞:媒體識讀、科學論證、敘事策略、對話機器人
外文關鍵詞:Media Literacy, Science Argumentation, Narrative Strategy, Chatbot
相關次數:
  • 推薦:0
  • 點閱:29
  • 評分:*****
  • 下載:0
  • 收藏:0
新型冠狀病毒肺炎(COVID-19)改變了全球的社會與生活型態,同時引發一系列虛假訊息四處流竄,造成社會大眾人心惶惶。在我國因應後真相時代的教育目標發展中,媒體識讀更被視為一項不可或缺的核心能力。近年隨著對話機器人在教育領域的技術逐漸成熟,除了能有效提高學科知識的學習成效外,其高互動的特性更使其成為培養媒體識讀能力的合適工具。因此,本研究旨在設計一套結合科學論證與媒體識讀策略、以輔助深層媒體識讀的對話框架,並創建 LINE 對話機器人「偵查小幫手」,透過互動對話引導受測者解讀假新聞,深入探討受測者在對話機器人輔助下的決策路徑,並觀察不同科系背景受測者決策的關聯及其變化。
本研究採前實驗設計之單組後測問卷,透過後台數據了解受測者的決策歷程,以及其對媒體識讀框架應用於對話機器人的知能表現與情意態度,並以半結構訪談探討受測者對偵查小幫手的看法與優化建議。研究結果發現:(1)受測者的決策觀點主要受個人的學習經驗與社會交流經驗影響,敘事策略則受個人敘述習慣及被說服的程度所影響;以「個人敘述習慣」判斷題項的受測者以理工背景居多,且較偏好連接敘事策略。(2)以「被說服的程度」判斷題項的受測者認為,在對內容知識不熟悉的情況下,先因導果的連接敘事策略較能說服個人;反之,對問題掌握度越高、對答案越有自信者,則會選擇由果導因的分類敘事策略。(3)受測者對於使用對話機器人輔助媒體識讀的科技接受度與學習動機均呈正向態度;雖然部分受測者對偵查小幫手的內容設計感到疑惑,仍認同偵查小幫手能輔助媒體識讀學習,且本研究提供的深層媒體識讀框架能有效輔助使用者進行深層思考。
The COVID-19 pandemic has transformed society and lifestyles worldwide while triggering a surge of misinformation that has caused widespread public anxiety. In response to Taiwan's educational goals in the post-truth era, media literacy is increasingly recognized as an essential core competency. In recent years, as chatbot technology has matured in education, chatbots not only effectively enhance learning outcomes in subject knowledge, but their high interactivity also makes them well suited to cultivating media literacy skills. This study therefore designs a dialogue framework that combines scientific argumentation with media literacy strategies to support deep media literacy, and implements it as a LINE chatbot, the Investigator Assistant, which guides participants through the interpretation of fake news via interactive dialogue. The study examines participants' decision-making pathways under chatbot assistance and observes how decision-making correlates and varies across academic disciplines.
This study adopts a pre-experimental, single-group post-test questionnaire design. Backend data are used to trace participants' decision-making processes and to assess their cognitive performance and affective attitudes toward applying the media literacy framework through the chatbot, while semi-structured interviews explore participants' opinions of the Investigator Assistant and their suggestions for its improvement. The results reveal: (1) Participants' decision perspectives are mainly shaped by personal learning experiences and social interaction experiences, whereas their narrative strategies are influenced by individual narrative habits and the degree to which they are persuaded; participants who judged items by personal narrative habit were mostly from STEM backgrounds and preferred the connection narrative strategy. (2) Participants who judged items by degree of persuasion found that, when unfamiliar with the content knowledge, a connection narrative strategy that reasons from cause to effect is more persuasive, whereas those with greater mastery of the problem and more confidence in their answers chose a categorizing narrative strategy that reasons from effect to cause. (3) Participants showed positive technology acceptance and learning motivation toward using the chatbot for media literacy; despite some confusion about the content design, they agreed that the Investigator Assistant supports media literacy learning and that the deep media literacy framework proposed in this study effectively assists users in deep thinking.
謝誌 ii
摘要 iii
Abstract iv
目錄 v
表目錄 vii
圖目錄 ix
第一章 緒論 1
第一節 研究動機與背景 1
第二節 研究目的與問題 4
第三節 研究範圍與限制 5
第四節 名詞釋義 6
第二章 文獻探討 8
第一節 媒體識讀教育 8
第二節 媒體識讀教學與應用 12
第三節 科學論證教學與應用 16
第四節 假新聞相關研究 21
第五節 對話機器人應用與評估表現 26
第六節 小結 33
第三章 研究方法 34
第一節 研究對象 34
第二節 研究設計 37
第三節 研究工具 40
第四節 資料蒐集與分析 53
第四章 研究結果與分析 57
第一節 不同科系背景與決策歷程之關聯性 57
第二節 使用對話機器人進行深層媒體識讀的行為表現 78
第三節 使用對話機器人輔助媒體識讀的情意與認知表現 81
第四節 使用對話機器人輔助媒體識讀的看法與建議 85
第五章 研究結論與限制 88
第一節 研究結論 88
第二節 限制與未來發展 93
參考文獻 94
附錄 103
附錄一:查核平台競品分析比較一覽表 103
附錄二:對話內容與流程設計 105
附錄三:專家內容效度之內容修改建議 112
附錄四:題項編碼與結果呈現 119
附錄五:正式後測問卷 125
附錄六:專家內容效度之問卷修改建議 129
