Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160. https://doi.org/10.1109/ACCESS.2018.2870052
Ahn, J., Kim, J., & Sung, Y. (2021). AI-powered recommendations: the roles of perceived similarity and psychological distance on persuasion. International Journal of Advertising, 40(8), 1366-1384.
Allport, G. W. (1954). The nature of prejudice. Addison-Wesley.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
Ashktorab, Z., Liao, Q., Dugan, C., Johnson, J., Pan, Q., Zhang, W., Kumaravel, S., & Campbell, M. (2020). Human-AI Collaboration in a Cooperative Game Setting. Proceedings of the ACM on Human-Computer Interaction, 4, 1-20. https://doi.org/10.1145/3415167
Balakrishnan, J., & Dwivedi, Y. K. (2021). Conversational commerce: entering the next stage of AI-powered digital assistants. Annals of Operations Research, 1-35.
Balakrishnan, J., Abed, S. S., & Jones, P. (2022). The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technological Forecasting and Social Change, 180, 121692.
Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1, 71-81.
Bhattacherjee, A., & Hikmet, N. (2007). Physicians' resistance toward healthcare information technology: a theoretical model and empirical test. European Journal of Information Systems, 16(6), 725-737.
Borshuk, C. (2004). An Interpretive Investigation into Motivations for Outgroup Activism. The Qualitative Report, 9, 300-319.
Broniarczyk, S. M., & Griffin, J. G. (2014).
Decision difficulty in the age of consumer empowerment. Journal of Consumer Psychology, 24(4), 608-625.
Brown, D., Ferris, D., Heller, D., & Keeping, L. (2007). Antecedents and consequences of the frequency of upward and downward social comparisons at work. Organizational Behavior and Human Decision Processes, 102, 59-75. https://doi.org/10.1016/J.OBHDP.2006.10.003
Bunde, E. (2021). AI-Assisted and Explainable Hate Speech Detection for Social Media Moderators - A Design Science Approach. Hawaii International Conference on System Sciences (HICSS), 1-10. https://doi.org/10.24251/HICSS.2021.154
Cai, C. J., Jongejan, J., & Holbrook, J. (2019, March). The effects of example-based explanations in a machine learning interface. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 258-262).
Chasteen, A. L. (2005). Seeing eye-to-eye: Do intergroup biases operate similarly for younger and older adults? International Journal of Aging & Human Development, 61, 123-139.
Chikobava, M., & Romeike, R. (2021). Towards an Operationalization of AI Acceptance among Pre-service Teachers. The 16th Workshop in Primary and Secondary Computing Education. https://doi.org/10.1145/3481312.3481349
Chin, W. W. (2009). How to write up and report PLS analyses. In Handbook of partial least squares: Concepts, methods and applications (pp. 655-690). Berlin, Heidelberg: Springer Berlin Heidelberg.
Chu, C., Leslie, K., Khan, S., Nyrup, R., & Grenier, A. (2022). Ageism in artificial intelligence: A review. Innovation in Aging. https://doi.org/10.1093/geroni/igac059.2446
Chu, C., Nyrup, R., Leslie, K., Shi, J., Bianchi, A., Lyn, A., McNicholl, M., Khan, S., Rahimi, S., & Grenier, A. (2022). Digital Ageism: Challenges and Opportunities in Artificial Intelligence for Older Adults. The Gerontologist, 62, 947-955. https://doi.org/10.1093/geront/gnab167
Conati, C., Barral, O., Putnam, V., & Rieger, L. (2021). Toward personalized XAI: A case study in intelligent tutoring systems. Artificial Intelligence, 298, 103503.
https://doi.org/10.1016/J.ARTINT.2021.103503
Corenblum, B., & Stephan, W. G. (2001). White fears and native apprehensions: An integrated threat theory approach to intergroup attitudes. Canadian Journal of Behavioural Science, 33, 251-268.
Cronin, M. J. (2010). Smart products, smarter services: Strategies for embedded control. Cambridge University Press.
Dai, L., & Xiao, R. (2016). The Influence of Social Comparison on Job Performance. Open Journal of Social Sciences, 4, 147-151. https://doi.org/10.4236/JSS.2016.47024
Dang, J., & Liu, L. (2022). Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Computers in Human Behavior, 134, 107300.
Dargham, M., Hachimi, H., & Boutalline, M. (2022). How AI is automating writing: The rise of robot writers. 2022 8th International Conference on Optimization and Applications (ICOA), 1-5. https://doi.org/10.1109/ICOA55659.2022.9934723
Das, A., & Rad, P. (2020). Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. ArXiv, abs/2006.11371.
Drury, J. L., Scholtz, J., & Yanco, H. A. (2003, October). Awareness in human-robot interactions. In SMC'03 Conference Proceedings. 2003 IEEE International Conference on Systems, Man and Cybernetics. Conference Theme-System Security and Assurance (Cat. No. 03CH37483) (Vol. 1, pp. 912-918). IEEE.
Dweck, C. S., Chiu, C. Y., & Hong, Y. Y. (1995). Implicit theories and their role in judgments and reactions: A word from two perspectives. Psychological Inquiry, 6(4), 267-285.
Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95(2), 256.
Dweck, C. S., & Yeager, D. S. (2019). Mindsets: A view from two eras. Perspectives on Psychological Science, 14(3), 481-496.
Eddleston, K. (2009). The effects of social comparisons on managerial career satisfaction and turnover intentions. Career Development International, 14, 87-110. https://doi.org/10.1108/13620430910933592
Eguchi, A. (2021, February). AI-robotics and AI literacy. In Educational Robotics International Conference (pp. 75-85). Cham: Springer International Publishing.
Ehsan, U., Passi, S., Liao, Q. V., Chan, L., Lee, I., Muller, M., & Riedl, M. O. (2021). The who in explainable AI: How AI background shapes perceptions of AI explanations. arXiv preprint arXiv:2107.13509.
Ezer, N., Bruni, S., Cai, Y., Hepenstal, S., Miller, C., & Schmorrow, D. (2019). Trust Engineering for Human-AI Teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63, 322-326. https://doi.org/10.1177/1071181319631264
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117-140.
Fischer, P., Kastenmüller, A., Frey, D., & Peus, C. (2009). Social comparison and information transmission in the work context. Journal of Applied Social Psychology, 39(1), 42-61.
Floyd, D. L., Prentice-Dunn, S., & Rogers, R. W. (2000). A meta-analysis of research on protection motivation theory. Journal of Applied Social Psychology, 30(2), 407-429.
Frederick, C., Havitz, M., & Shaw, S. (1994). Social comparison in aerobic exercise classes: propositions for analyzing motives and participation. Leisure Sciences, 16, 161-176. https://doi.org/10.1080/01490409409513228
Gardner, W., Gabriel, S., & Hochschild, L. (2002). When you and I are "we," you are not threatening: the role of self-expansion in social comparison. Journal of Personality and Social Psychology, 82(2), 239-251. https://doi.org/10.1037/0022-3514.82.2.239
Gilboa, I., & Schmeidler, D. (2001). A theory of case-based decisions. Cambridge University Press. https://doi.org/10.1017/CBO9780511493539
Gilpin, L., Bau, D., Yuan, B., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 80-89. https://doi.org/10.1109/DSAA.2018.00018
Gohel, P., Singh, P., & Mohanty, M. (2021).
Explainable AI: current status and future directions. ArXiv, abs/2107.07045.
Gunning, D. (2019). DARPA's explainable artificial intelligence (XAI) program. Proceedings of the 24th International Conference on Intelligent User Interfaces. https://doi.org/10.1145/3301275.3308446
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37), eaay7120.
Hadarics, M., Szabó, Z. P., & Kende, A. (2020). The relationship between collective narcissism and group-based moral exclusion: The mediating role of intergroup threat and social distance. Journal of Social and Political Psychology, 8(2), 788-804.
Hagras, H. (2018). Toward Human-Understandable, Explainable AI. Computer, 51, 28-36. https://doi.org/10.1109/MC.2018.3620965
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Pearson Education Inc.
Hargreaves Heap, S. P., Verschoor, A., & Zizzo, D. J. (2009). Out-group favouritism. Available at SSRN 1428937.
Hassani, H., Silva, E. S., Unger, S., TajMazinani, M., & Mac Feely, S. (2020). Artificial intelligence (AI) or intelligence augmentation (IA): what is the future? AI, 1(2), 8.
Henderson-King, E., Henderson-King, D., Zhermer, N., Posokhova, S., & Chiker, V. (1997). In-group favoritism and perceived similarity: A look at Russians' perceptions in the post-Soviet era. Personality and Social Psychology Bulletin, 23(10), 1013-1021.
Hoffman, D. L., & Novak, T. (2015). Emergent experience and the connected consumer in the smart home assemblage and the internet of things. Available at SSRN 2648786.
Hong, Y. Y., Chiu, C. Y., Dweck, C. S., Lin, D. M. S., & Wan, W. (1999). Implicit theories, attributions, and coping: a meaning system approach. Journal of Personality and Social Psychology, 77(3), 588.
Huang, H., Cheng, L., Sun, P., & Chou, S. (2021).
The Effects of Perceived Identity Threat and Realistic Threat on the Negative Attitudes and Usage Intentions Toward Hotel Service Robots: The Moderating Effect of the Robot’s Anthropomorphism. International Journal of Social Robotics, 13, 1599-1611. https://doi.org/10.1007/s12369-021-00752-2
Jain, H., Padmanabhan, B., Pavlou, P., & Raghu, T. (2021). Editorial for the Special Section on Humans, Algorithms, and Augmented Intelligence: The Future of Work, Organizations, and Society. Information Systems Research, 32, 675-687. https://doi.org/10.1287/isre.2021.1046
Jetten, J., Spears, R., & Manstead, A. S. (1996). Intergroup norms and intergroup discrimination: distinctive self-categorization and social identity effects. Journal of Personality and Social Psychology, 71(6), 1222.
Jetten, J., Spears, R., & Manstead, A. S. (1998). Defining dimensions of distinctiveness: Group variability makes a difference to differentiation. Journal of Personality and Social Psychology, 74(6), 1481.
Jones, C., Hine, D., & Marks, A. (2017). The Future is Now: Reducing Psychological Distance to Increase Public Engagement with Climate Change. Risk Analysis, 37. https://doi.org/10.1111/risa.12601
Jussupow, E., Spohrer, K., & Heinzl, A. (2022). Identity threats as a reason for resistance to artificial intelligence: survey study with medical students and professionals. JMIR Formative Research, 6(3), e28750.
Kashima, H., Oyama, S., Arai, H., & Mori, J. (2022). Trustworthy Human Computation: A Survey. ArXiv, abs/2210.12324. https://doi.org/10.48550/arXiv.2210.12324
Kenny, E. M., Ford, C., Quinn, M., & Keane, M. T. (2021). Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies. Artificial Intelligence, 294, 103459.
Kiesler, S., Sproull, L., & Waters, K. (1996). A prisoner’s dilemma experiment on cooperation with people and human-like computers.
Journal of Personality and Social Psychology, 70(1), 47-65.
Kim, D., Song, Y., Kim, S., Lee, S., Wu, Y., Shin, J., & Lee, D. (2023). How should the results of artificial intelligence be explained to users? Research on consumer preferences in user-centered explainable artificial intelligence. Technological Forecasting and Social Change, 188, 122343.
Klein, J., Moon, Y., & Picard, R. W. (1999, May). This computer responds to user frustration. In CHI'99 Extended Abstracts on Human Factors in Computing Systems (pp. 242-243).
Knight, K. (1990). Connectionist ideas and algorithms. Communications of the ACM, 33, 58-74. https://doi.org/10.1145/92755.92764
Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S. J., & Doshi-Velez, F. (2019, October). Human evaluation of models built for interpretability. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 7, pp. 59-67).
Lepri, B., Oliver, N., & Pentland, A. (2021). Ethical machines: The human-centric use of artificial intelligence. iScience, 24. https://doi.org/10.1016/j.isci.2021.102249
Li, X., & Sung, Y. (2021). Anthropomorphism brings us closer: The mediating role of psychological distance in User–AI assistant interactions. Computers in Human Behavior, 118, 106680.
Lim, B. Y., Dey, A. K., & Avrahami, D. (2009, April). Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2119-2128).
Liu, C. F., & Cheng, T. J. (2015). Exploring critical factors influencing physicians' acceptance of mobile electronic medical records based on the dual-factor model: A validation in Taiwan. BMC Medical Informatics and Decision Making, 15(1), 1-12.
Liu, C. F., Chen, Z. C., Kuo, S. C., & Lin, T. C. (2022). Does AI explainability affect physicians' intention to use AI? International Journal of Medical Informatics, 168, 104884.
Lucia-Palacios, L., & Pérez-López, R. (2021).
Effects of home voice assistants' autonomy on intrusiveness and usefulness: direct, indirect, and moderating effects of interactivity. Journal of Interactive Marketing, 56, 41-54.
McLean, G., Osei-Frimpong, K., & Barhorst, J. (2021). Alexa, do voice assistants influence consumer brand engagement? Examining the role of AI-powered voice assistants in influencing consumer brand engagement. Journal of Business Research, 124, 312-328. https://doi.org/10.1016/j.jbusres.2020.11.045
Meas, M., Machlev, R., Kose, A., Tepljakov, A., Loo, L., Levron, Y., Petlenkov, E., & Belikov, J. (2022). Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI). Sensors, 22. https://doi.org/10.3390/s22176338
Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2020). Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities. Information Systems Management, 39, 53-63. https://doi.org/10.1080/10580530.2020.1849465
Messé, L. A., Hymes, R. W., & MacCoun, R. J. (1986). Group categorization and distributive justice decisions. Justice in social relations, 227-248.
Mirbabaie, M., Brünker, F., Möllmann, N. R., & Stieglitz, S. (2022). The rise of artificial intelligence: understanding the AI identity threat at the workplace. Electronic Markets, 1-27.
Morry, M., & Sucharyna, T. (2016). Relationship social comparison interpretations and dating relationship quality, behaviors, and mood. Personal Relationships, 23, 554-576. https://doi.org/10.1111/PERE.12143
Mussweiler, T., Rueter, K., & Epstude, K. (2004). The man who wasn't there: Subliminal social comparison standards influence self-evaluation. Journal of Experimental Social Psychology, 40, 689-696. https://doi.org/10.1016/J.JESP.2004.01.004
Nahavandi, S. (2019). Trust in Autonomous Systems - iTrust Lab: Future Directions for Analysis of Trust With Autonomous Systems. IEEE Systems, Man, and Cybernetics Magazine, 5, 52-59.
https://doi.org/10.1109/MSMC.2019.2916239
Naiseh, M., Al-Thani, D., Jiang, N., & Ali, R. (2023). How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169, 102941.
Ng, D. T. K., Wu, W., Leung, J. K. L., & Chu, S. K. W. (2023, July). Artificial Intelligence (AI) literacy questionnaire with confirmatory factor analysis. In 2023 IEEE International Conference on Advanced Learning Technologies (ICALT) (pp. 233-235). IEEE.
Ng, D., Leung, J., Chu, K., & Qiao, M. (2021). AI Literacy: Definition, Teaching, Evaluation and Ethical Issues. Proceedings of the Association for Information Science and Technology, 58. https://doi.org/10.1002/pra2.487
Noreen, U., Shafique, A., Ahmed, Z., & Ashfaq, M. (2023). Banking 4.0: Artificial Intelligence (AI) in Banking Industry & Consumer's Perspective. Sustainability. https://doi.org/10.3390/su15043682
Nunnally, J. (1994). Psychometric theory.
Ognibene, D., Baldissarri, C., & Manfredi, A. (2023). Does ChatGPT pose a threat to human identity?
Parise, S., Kiesler, S., Sproull, L., & Waters, K. (1996, November). My partner is a real dog: Cooperation with social agents. In Proceedings of the 1996 ACM Conference on Computer Supported Cooperative Work (pp. 399-408).
Park, G., Chung, J., & Lee, S. (2023). Human vs. machine-like representation in chatbot mental health counseling: the serial mediation of psychological distance and trust on compliance intention. Current Psychology, 1-12.
Park, H., Ahn, D., Hosanagar, K., & Lee, J. (2021). Human-AI Interaction in Human Resource Management: Understanding Why Employees Resist Algorithmic Evaluation at Workplaces and How to Mitigate Burdens. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3411764.3445304
Pipp, S., Shaver, P., Jennings, S., Lamborn, S., & Fischer, K. W. (1985).
Adolescents' theories about the development of their relationships with parents. Journal of Personality and Social Psychology, 48(4), 991.
Ridley, M. (2022). Explainable Artificial Intelligence (XAI). Information Technology and Libraries. https://doi.org/10.6017/ital.v41i2.14683
Rijsdijk, S. A., & Hultink, E. J. (2009). How today's consumers perceive tomorrow's smart products. Journal of Product Innovation Management, 26(1), 24-42.
Saarela, M., & Jauhiainen, S. (2021). Comparison of feature importance measures as explanations for classification models. SN Applied Sciences, 3, 1-12. https://doi.org/10.1007/s42452-021-04148-9
Saaty, T. (1990). How to Make a Decision: The Analytic Hierarchy Process. Interfaces, 24, 19-43. https://doi.org/10.1287/INTE.24.6.19
Sankaran, S., Zhang, C., Funk, M., Aarts, H., & Markopoulos, P. (2020). Do I have a say?: Using conversational agents to re-imagine human-machine autonomy. Proceedings of the 2nd Conference on Conversational User Interfaces. https://doi.org/10.1145/3405755.3406135
Sanneman, L., & Shah, J. A. (2020). A situation awareness-based framework for design and evaluation of explainable AI. In Explainable, Transparent Autonomous Agents and Multi-Agent Systems: Second International Workshop, EXTRAAMAS 2020, Auckland, New Zealand, May 9-13, 2020, Revised Selected Papers (pp. 94-110). Springer International Publishing.
Schmitt, B. (2020). Speciesism: an obstacle to AI and robot adoption. Marketing Letters, 31, 3-6. https://doi.org/10.1007/s11002-019-09499-3
Soni, V. D. (2020). Challenges and Solution for Artificial Intelligence in Cybersecurity of the USA. Available at SSRN 3624487.
Speith, T. (2022, June). A review of taxonomies of explainable artificial intelligence (XAI) methods. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 2239-2250).
Statista. (2023). Market size and revenue comparison for artificial intelligence worldwide from 2018 to 2030.
https://www.statista.com/statistics/941835/artificial-intelligence-market-size-revenue-comparisons/
Stephan, W. G., Ybarra, O., & Rios, K. (2015). Intergroup threat theory. In Handbook of prejudice, stereotyping, and discrimination (pp. 255-278). Psychology Press.
Stephan, W., Ybarra, O., & Bachman, G. (1999). Prejudice toward immigrants. Journal of Applied Social Psychology, 29, 2221-2237. https://doi.org/10.1111/j.1559-1816.1999.tb00107.x
Summerfield, C., & Parpart, P. (2021). Normative Principles for Decision-Making in Natural Environments. Annual Review of Psychology. https://doi.org/10.1146/annurev-psych-020821-104057
Tajfel, H., & Turner, J. C. (1978). Intergroup behavior. Introducing social psychology, 401, 466.
Tajfel, H., Turner, J. C., Austin, W. G., & Worchel, S. (1979). An integrative theory of intergroup conflict. Organizational identity: A reader, 56(65).
Taylor, D. A., & Altman, I. (1987). Communication in interpersonal relationships: Social penetration processes.
Thórisson, K., & Helgason, H. (2012). Cognitive architectures and autonomy: A comparative review. Journal of Artificial General Intelligence, 3(2), 1.
Tiwari, R. (2023). Explainable AI (XAI) and its Applications in Building Trust and Understanding in AI Decision Making. International Journal of Scientific Research in Engineering and Management. https://doi.org/10.55041/ijsrem17592
Totschnig, W. (2020). Fully Autonomous AI. Science and Engineering Ethics, 26, 2473-2485. https://doi.org/10.1007/s11948-020-00243-z
Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117(2), 440-463. https://doi.org/10.1037/a0018963
Väänänen, K., Sankaran, S., Lopez, M., & Zhang, C. (2021). Editorial: Respecting Human Autonomy through Human-Centered AI. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.807566
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003).
User acceptance of information technology: Toward a unified view. MIS Quarterly, 425-478.
Waefler, T., & Schmid, U. (2020). Explainability is not enough: Requirements for human-AI partnership in complex socio-technical systems. 185-194. https://doi.org/10.34190/EAIR.20.007
Wang, C., & Peng, K. (2023). AI Experience Predicts Identification with Humankind. Behavioral Sciences, 13. https://doi.org/10.3390/bs13020089
Wang, P. X., Kim, S., & Kim, M. (2023). Robot anthropomorphism and job insecurity: The role of social comparison. Journal of Business Research, 164, 114003.
Wang, X., & Yin, M. (2021, April). Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In 26th International Conference on Intelligent User Interfaces (pp. 318-328).
Weitz, K., Schiller, D., Schlagowski, R., Huber, T., & André, E. (2019). "Do you trust me?": Increasing User Trust by Integrating Virtual Agents in Explainable AI Interaction Design. Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. https://doi.org/10.1145/3308532.3329441
Wood, E., Ange, B., & Miller, D. (2021). Are We Ready to Integrate Artificial Intelligence Literacy into Medical School Curriculum: Students and Faculty Survey. Journal of Medical Education and Curricular Development, 8. https://doi.org/10.1177/23821205211024078
Yang, G., Ye, Q., & Xia, J. (2021). Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Information Fusion, 77, 29-52. https://doi.org/10.1016/j.inffus.2021.07.016
Yogeeswaran, K., Złotowski, J., Livingstone, M., Bartneck, C., Sumioka, H., & Ishiguro, H. (2016). The interactive effects of robot anthropomorphism and robot ability on perceived threat and support for robotics research. Journal of Human-Robot Interaction, 5(2), 29-47.
Yu, L., & Li, Y. (2022).
Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort. Behavioral Sciences, 12. https://doi.org/10.3390/bs12050127
Zhao, L., Wu, X., & Luo, H. (2022). Developing AI Literacy for Primary and Middle School Teachers in China: Based on a Structural Equation Modeling Analysis. Sustainability. https://doi.org/10.3390/su142114549
Zhou, L., Paul, S., Demirkan, H., Yuan, L., Spohrer, J., Zhou, M., & Basu, J. (2021). Intelligence Augmentation: Towards Building Human-Machine Symbiotic Relationship. AIS Transactions on Human-Computer Interaction. https://doi.org/10.17705/1thci.00149