Evaluating the Current State of ChatGPT and Its Disruptive Potential: An Empirical Study of Korean Users

  • Jiwoong Choi (Management Information Systems, Business School, Seoul National University)
  • Jinsoo Park (Management Information Systems, Business School, Seoul National University)
  • Jihae Suh (Management Information Systems, Business School, Seoul National University of Science & Technology)
  • Received : 2023.08.31
  • Accepted : 2023.10.16
  • Published : 2023.12.31

Abstract

This study investigates the perception and adoption of ChatGPT, a large language model (LLM)-based chatbot created by OpenAI, among Korean users and assesses its potential as the next disruptive innovation. Drawing on prior literature, the study proposes perceived intelligence and perceived anthropomorphism as the key factors differentiating ChatGPT from earlier AI-based chatbots. Four individual motives (perceived usefulness, ease of use, enjoyment, and trust) and two societal motives (social influence and AI anxiety) were identified as antecedents of ChatGPT acceptance. A survey was conducted in two Korean online communities related to artificial intelligence. The findings confirm that ChatGPT is used for both utilitarian and hedonic purposes and that perceived usefulness and enjoyment positively affect the behavioral intention to adopt the chatbot. Contrary to expectations, however, perceived ease of use did not exert a significant influence on behavioral intention. Trust was likewise not a significant predictor of behavioral intention, and while social influence played a substantial role in both adoption intention and perceived usefulness, AI anxiety showed no significant effect. The study confirms that perceived intelligence and perceived anthropomorphism shape the individual factors that drive behavioral intention to adopt, and it highlights the need for future research to deconstruct the factors that make ChatGPT "enjoyable" and "easy to use" and to better understand its potential as a disruptive technology. Service developers and LLM providers are advised to design user-centric applications, focus on user-friendliness, acknowledge that building trust takes time, and recognize the role of social influence in adoption.
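
The hypothesized research model described above can be expressed as a set of measurement and structural relations. The sketch below is an illustrative assumption, not the authors' analysis: the study reports a PLS-SEM estimation (SmartPLS 4), whereas this example specifies an analogous covariance-based structural equation model in Python with semopy, using assumed construct abbreviations (PI = perceived intelligence, PA = perceived anthropomorphism, PU = perceived usefulness, PEOU = perceived ease of use, ENJ = enjoyment, TR = trust, SI = social influence, ANX = AI anxiety, BI = behavioral intention) and synthetic indicator data.

```python
# Minimal sketch (an assumption, not the authors' code) of the hypothesized acceptance model.
# The paper uses PLS-SEM (SmartPLS 4); semopy below fits a covariance-based SEM instead,
# on synthetic data, purely to illustrate the structure of the model.
import numpy as np
import pandas as pd
from semopy import Model

# Hypothetical survey data: three Likert-style indicators per construct.
rng = np.random.default_rng(0)
n = 300
constructs = ["PI", "PA", "PU", "PEOU", "ENJ", "TR", "SI", "ANX", "BI"]
columns = {}
for c in constructs:
    latent = rng.normal(size=n)  # shared latent score for this construct
    for i in range(1, 4):
        columns[f"{c}{i}"] = latent + rng.normal(scale=0.5, size=n)  # noisy indicator
df = pd.DataFrame(columns)

# Model description in the lavaan-style syntax accepted by semopy.
desc = """
# measurement model: each construct is reflected by three indicators
PI =~ PI1 + PI2 + PI3
PA =~ PA1 + PA2 + PA3
PU =~ PU1 + PU2 + PU3
PEOU =~ PEOU1 + PEOU2 + PEOU3
ENJ =~ ENJ1 + ENJ2 + ENJ3
TR =~ TR1 + TR2 + TR3
SI =~ SI1 + SI2 + SI3
ANX =~ ANX1 + ANX2 + ANX3
BI =~ BI1 + BI2 + BI3

# structural model: intelligence/anthropomorphism -> individual motives -> intention,
# with social influence and AI anxiety acting on intention (and SI also on usefulness)
PU ~ PI + PA + SI
PEOU ~ PI + PA
ENJ ~ PI + PA
TR ~ PI + PA
BI ~ PU + PEOU + ENJ + TR + SI + ANX
"""

model = Model(desc)
model.fit(df)           # maximum-likelihood estimation on the synthetic data
print(model.inspect())  # estimated path coefficients, standard errors, p-values
```

In the actual study, the corresponding paths would be estimated on the survey responses with PLS-SEM; the indicator counts and variable names above are hypothetical placeholders.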

Keywords

Acknowledgement

This study was supported by the Institute of Management Research at Seoul National University.

References

  1. Agarwal, R., and Karahanna, E. (2000). Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage. MIS Quarterly, 24(4), 665. https://doi.org/10.2307/3250951 
  2. Al Shamsi, J. H., Al-Emran, M., and Shaalan, K. (2022). Understanding key drivers affecting students' use of artificial intelligence-based voice assistants. Education and Information Technologies, 27(6), 8071-8091. https://doi.org/10.1007/s10639-022-10947-3 
  3. Engineered Arts. (2022, January). Ameca: The future face of robotics. Retrieved from https://www.engineeredarts.co.uk/robot/ameca/ 
  4. Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183-189. https://doi.org/10.1016/j.chb.2018.03.051 
  5. Bain & Company. (2023, February 21). Bain x OpenAI. Bain. Retrieved from https://www.bain.com/vector-digital/partnerships-alliance-ecosystem/openai-alliance/ 
  6. Bartneck, C., Kulic, D., Croft, E., and Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71-81. https://doi.org/10.1007/s12369-008-0001-3 
  7. Bawack, R., and Desveaud, K. (2022). Consumer adoption of artificial intelligence: A review of theories and antecedents. Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2022.526 
  8. Belanche, D., Casalo, L. V., and Flavian, C. (2019). Artificial Intelligence in FinTech: Understanding robo-advisors adoption among customers. Industrial Management & Data Systems, 119(7), 1411-1430. https://doi.org/10.1108/IMDS-08-2018-0368 
  9. Blagic, D. (2022, December 20). Why is the user experience of ChatGPT so powerful? Medium. Retrieved from https://uxdesign.cc/why-is-the-userexperience-of-chatgpt-so-powerful-509e803e0122 
  10. Blut, M., Wang, C., Wunderlich, N. V., and Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632-658. https://doi.org/10.1007/s11747-020-00762-y 
  11. Bonn, M. A., Kim, W. G., Kang, S., and Cho, M. (2016). Purchasing wine online: The effects of social influence, perceived usefulness, perceived ease of use, and wine involvement. Journal of Hospitality Marketing & Management, 25(7), 841-869. https://doi.org/10.1080/19368623.2016.1115382 
  12. Brown, J., Broderick, A. J., and Lee, N. (2007). Word of mouth communication within online communities: Conceptualizing the online social network. Journal of Interactive Marketing, 21(3), 2-20. https://doi.org/10.1002/dir.20082 
  13. Brownell, A. (2023, January 13). ChatGPT and You: Language Generation's Uncanny Valley. Geek Culture. Retrieved from https://medium.com/geekculture/chatgpt-you-language-generations-uncanny-valley-a91684325b0a 
  14. Cao, Y., Zhou, L., Lee, S., Cabello, L., Chen, M., and Hershcovich, D. (2023). Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study (arXiv:2303.17466). arXiv. Retrieved from http://arxiv.org/abs/2303.17466 
  15. Chandler, J., and Schwarz, N. (2010). Use does not wear ragged the fabric of friendship: Thinking of objects as alive makes people less willing to replace them. Journal of Consumer Psychology, 20(2), 138-145. https://doi.org/10.1016/j.jcps.2009.12.008 
  16. Chen, B. X., Grant, N., and Weise, K. (2023, March 15). How Siri, Alexa and Google Assistant lost the A.I. race. The New York Times. Retrieved from https://www.nytimes.com/2023/03/15/technology/siri-alexa-google-assistant-artificial-intelligence.html 
  17. Cheng, X., Zhang, X., Cohen, J., and Mou, J. (2022). Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Information Processing & Management, 59(3), 102940. https://doi.org/10.1016/j.ipm.2022.102940 
  18. Chin, W. W. (2010). Bootstrap cross-validation indices for PLS path model assessment. In Handbook of Partial Least Squares: Concepts, Methods and Applications (pp. 83-97). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-32827-8 
  19. Chiou, J. S. (2006). Service quality, trust, specific asset investment, and expertise: Direct and indirect effects in a satisfaction-loyalty framework. Journal of the Academy of Marketing Science, 34(4), 613-627. https://doi.org/10.1177/0092070306286934 
  20. Christensen, C. M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Harvard Business School Press. 
  21. Chui, M., Roberts, R., and Yee, L. (2022, December 20). How generative AI & ChatGPT will change business. QuantumBlack AI by McKinsey & Company. Retrieved from https://www.mckinsey.com/capabilities/quantumblack/our-insights/generative-ai-is-here-how-tools-like-chatgpt-could-change-your-business 
  22. Chung, M., Ko, E., Joung, H., and Kim, S. J. (2020). Chatbot e-service and customer satisfaction regarding luxury brands. Journal of Business Research, 117, 587-595. https://doi.org/10.1016/j.jbusres.2018.10.004 
  23. Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159. https://doi.org/10.1037/0033-2909.112.1.155 
  24. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008 
  25. Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22(14), 1111-1132. https://doi.org/10.1111/j.1559-1816.1992.tb00945.x 
  26. de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F., and Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331-349. https://doi.org/10.1037/xap0000092 
  27. Destephe, M., Brandao, M., Kishi, T., Zecca, M., Hashimoto, K., and Takanishi, A. (2015). Walking in the uncanny valley: Importance of the attractiveness on the acceptance of a robot as a working partner. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.00204 
  28. Eisingerich, A. B., and Bell, S. J. (2008). Perceived service quality and customer trust: Does enhancing customers' service knowledge matter? Journal of Service Research, 10(3), 256-268. https://doi.org/10.1177/1094670507310769 
  29. Fernandes, T., and Oliveira, E. (2021). Understanding consumers' acceptance of automated technologies in service encounters: Drivers of digital voice assistants adoption. Journal of Business Research, 122, 180-191. https://doi.org/10.1016/j.jbusres.2020.08.058 
  30. Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., Lally, A., Murdock, J. W., Nyberg, E., Prager, J., Schlaefer, N., and Welty, C. (2010). Building Watson: An overview of the DeepQA project. AI Magazine, 31(3), Article 3. https://doi.org/10.1609/aimag.v31i3.2303 
  31. Fornell, C., and Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research, 18(3), 382-388. https://doi.org/10.1177/002224378101800313 
  32. Fulford, I., and Ng, A. (2023, April). ChatGPT Prompt Engineering for Developers. DeepLearning.AI. Retrieved from https://www.deeplearning.ai/shortcourses/chatgpt-prompt-engineering-for-developers/ 
  33. Future of Life Institute. (2015). Autonomous Weapons Open Letter: AI & Robotics Researchers. Future of Life Institute. Retrieved from https://futureoflife.org/open-letter/open-letter-autonomous-weapons-ai-robotics/ 
  34. Future of Life Institute. (2023). Pause Giant AI Experiments: An Open Letter. Future of Life Institute. Retrieved from https://futureoflife.org/open-letter/pause-giant-ai-experiments/ 
  35. Gefen, D., and Straub, D. W. (2003). Managing User Trust in B2C e-Services. E-Service Journal, 2(2), 7-24. https://doi.org/10.2979/esj.2003.2.2.7 
  36. Gerow, J. E., Ayyagari, R., Thatcher, J. B., and Roth, P. L. (2013). Can we have fun @ work? The role of intrinsic motivation for utilitarian systems. European Journal of Information Systems, 22(3), 360-380. https://doi.org/10.1057/ejis.2012.25 
  37. Gow, G. (2023, April 9). Top 5 AI risks in the era of ChatGPT and Generative AI. Forbes. Retrieved from https://www.forbes.com/sites/glenngow/2023/04/09/top-5-ai-risks-in-the-era-of-chatgpt-and-generative-ai/ 
  38. Guggemos, J., Seufert, S., and Sonderegger, S. (2020). Humanoid robots in higher education: Evaluating the acceptance of Pepper in the context of an academic writing course using the UTAUT. British Journal of Educational Technology, 51(5), 1864-1883. https://doi.org/10.1111/bjet.13006 
  39. Ha, H. (2004). Factors influencing consumer perceptions of brand trust online. Journal of Product & Brand Management, 13(5), 329-342. https://doi.org/10.1108/10610420410554412 
  40. Hair, J. F., Risher, J. J., Sarstedt, M., and Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2-24. https://doi.org/10.1108/EBR-11-2018-0203 
  41. Han, S. (2023, February 12). ChatGPT does my work for me? A larger potential influence than AlphaGo, and perhaps a threat to our jobs [Newspaper]. Korea Lecturer News. Retrieved from https://www.lecturernews.com/news/articleView.html?idxno=118958 
  42. Han, S., and Yang, H. (2018). Understanding adoption of intelligent personal assistants: A parasocial relationship perspective. Industrial Management & Data Systems, 118(3), 618-636. https://doi.org/10.1108/IMDS-05-2017-0214 
  43. Haque, M. U., Dharmadasa, I., Sworna, Z. T., Rajapakse, R. N., and Ahmad, H. (2022). "I think this is the most disruptive technology": Exploring Sentiments of ChatGPT Early Adopters using Twitter Data (arXiv:2212.05856). arXiv. http://arxiv.org/abs/2212.05856 
  44. Hart, M., and Porter, G. (2004). The impact of cognitive and other factors on the perceived usefulness of OLAP. Journal of Computer Information Systems, 45(1), 47-56. https://doi.org/10.1080/08874417.2004.11645816 
  45. Henseler, J., Ringle, C. M., and Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115-135. https://doi.org/10.1007/s11747-014-0403-8 
  46. Hoff, M., and Zinkula, J. (2023, February 11). How 6 workers are using ChatGPT to make their jobs easier. Business Insider. Retrieved from https://www.businessinsider.com/how-to-use-chatgpt-artificialintelligence-making-these-jobs-easier-2023-2 
  47. Hsu, C. L., and Lin, J. C. C. (2016). Effect of perceived value and social influences on mobile app stickiness and in-app purchase intention. Technological Forecasting and Social Change, 108, 42-53. https://doi.org/10.1016/j.techfore.2016.04.012 
  48. Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base-Analyst note. Reuters. Retrieved from https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ 
  49. Im, I., Hong, S., and Kang, M. S. (2011). An international comparison of technology adoption. Information & Management, 48(1), 1-8. https://doi.org/10.1016/j.im.2010.09.001 
  50. Johnson, D. G., and Verdicchio, M. (2017). AI anxiety. Journal of the Association for Information Science and Technology, 68(9), 2267-2270. https://doi.org/10.1002/asi.23867 
  51. Joreskog, K. G. (1971). Simultaneous factor analysis in several populations. Psychometrika, 36(1), 409-426. https://doi.org/10.1007/BF02291366 
  52. Kamis, A., Koufaris, M., and Stern, T. (2008). Using an attribute-based decision support system for user-customized products online: An experimental investigation. MIS Quarterly, 32(1), 159. https://doi.org/10.2307/25148832 
  53. Kim, B., and Han, I. (2011). The role of utilitarian and hedonic values and their antecedents in a mobile data service environment. Expert Systems with Applications, 38(3), 2311-2318. https://doi.org/10.1016/j.eswa.2010.08.019 
  54. Kim, G. (2023, February 22). Easily automate work processes, create content, and be used for education? A movement to share daily tips and tricks on ChatGPT use on the rise. [Newspaper]. Tech M. Retrieved from https://www.techm.kr/news/articleView.html?idxno=107159 
  55. Kim, J., Merrill Jr., K., and Collins, C. (2021). AI as a friend or assistant: The mediating role of perceived usefulness in social AI vs. functional AI. Telematics and Informatics, 64, 101694. https://doi.org/10.1016/j.tele.2021.101694 
  56. Kim, Y., and Sundar, S. S. (2012). Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior, 28(1), 241-250. https://doi.org/10.1016/j.chb.2011.09.006 
  57. Kock, N. (2015). Common method bias in PLS-SEM: A full collinearity assessment approach. International Journal of E-Collaboration (IJeC), 11(4), 1-10. https://doi.org/10.4018/ijec.2015100101 
  58. Kocon, J., Cichecki, I., Kaszyca, O., Kochanek, M., Szydlo, D., Baran, J., Bielaniewicz, J., Gruza, M., Janz, A., Kanclerz, K., Kocon, A., Koptyra, B., Mieleszczenko-Kowszewicz, W., Milkowski, P., Oleksy, M., Piasecki, M., Radlinski, L., Wojtasik, K., Wozniak, S., and Kazienko, P. (2023). ChatGPT: Jack of all trades, master of none (arXiv:2302.10724). arXiv. Retrieved from http://arxiv.org/abs/2302.10724 
  59. Kose, D. B., Morschheuser, B., and Hamari, J. (2019). Is it a tool or a toy? How user's conception of a system's purpose affects their experience and use. International Journal of Information Management, 49, 461-474. https://doi.org/10.1016/j.ijinfomgt.2019.07.016 
  60. Koufaris, M. (2002). Applying the technology acceptance model and flow theory to online consumer behavior. Information Systems Research, 13(2), 205-223. https://doi.org/10.1287/isre.13.2.205.83 
  61. Kruger, J., and Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121. https://doi.org/10.1037//0022-3514.77.6.1121 
  62. Latikka, R., Turja, T., and Oksanen, A. (2019). Self-efficacy and acceptance of robots. Computers in Human Behavior, 93, 157-163. https://doi.org/10.1016/j.chb.2018.12.017 
  63. Lee, S. (2023, March 6). Korean government ramping up the pace towards transition to AI: A government guideline on ChatGPT will be released within the first half of this year. Aju News. Retrieved from https://www.ajunews.com/view/20230306150422040 
  64. Lee, S. G., Trimi, S., and Kim, C. (2013). The impact of cultural differences on technology adoption. Journal of World Business, 48(1), 20-29. https://doi.org/10.1016/j.jwb.2012.06.003 
  65. Legg, S., and Hutter, M. (2007). A collection of definitions of intelligence (arXiv:0706.3639). arXiv. Retrieved from http://arxiv.org/abs/0706.3639 
  66. Lew, S., Tan, G. W. H., Loh, X. M., Hew, J. J., and Ooi, K. B. (2020). The disruptive mobile wallet in the hospitality industry: An extended mobile technology acceptance model. Technology in Society, 63, 101430. https://doi.org/10.1016/j.techsoc.2020.101430 
  67. Lewis, W., Agarwal, R. K., and Sambamurthy, V. (2003). Sources of influence on beliefs about information technology use: An empirical study of knowledge workers. MIS Quarterly, 27(4), 657. https://doi.org/10.2307/30036552 
  68. Li, J., and Huang, J. S. (2020). Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technology in Society, 63, 101410. https://doi.org/10.1016/j.techsoc.2020.101410 
  69. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G. (2021). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing (arXiv:2107.13586). arXiv. Retrieved from http://arxiv.org/abs/2107.13586 
  70. Lowe, R., and Leike, J. (2022, January 27). Aligning language models to follow instructions [Corporate blog]. OpenAI Blog. Retrieved from https://openai.com/research/instruction-following 
  71. Mani, Z., and Chouk, I. (2018). Consumer resistance to innovation in services: Challenges and barriers in the internet of things era. Journal of Product Innovation Management, 35(5), 780-807. https://doi.org/10.1111/jpim.12463 
  72. Marks, G. (2022, December 13). On CRM: Is ChatGPT over hyped? Forbes. Retrieved from https://www.forbes.com/sites/quickerbettertech/2022/12/13/on-crm-is-chatgpt-over-hyped/ 
  73. McCarthy, J., and Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. In Readings in Artificial Intelligence (pp. 431-450). Elsevier. https://doi.org/10.1016/B978-0-934613-03-3.50033-7 
  74. McKnight, D. H., Choudhury, V., and Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334-359. https://doi.org/10.1287/isre.13.3.334.81
  75. Meshram, S., Naik, N., Vr, M., More, T., and Kharche, S. (2021). Conversational AI: Chatbots. 2021 International Conference on Intelligent Technologies (CONIT), 1-6. https://doi.org/10.1109/CONIT51480.2021.9498508 
  76. Meuter, M. L., Bitner, M. J., Ostrom, A. L., and Brown, S. W. (2005). Choosing among alternative service delivery modes: An investigation of customer trial of self-service technologies. Journal of Marketing, 69(2), 61-83. https://doi.org/10.1509/jmkg.69.2.61.60759 
  77. Milmo, D. (2023, February 2). ChatGPT reaches 100 million users two months after launch. The Guardian. Retrieved from https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-millionusers-open-ai-fastest-growing-app 
  78. Moore, G. C., and Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192-222. https://doi.org/10.1287/isre.2.3.192 
  79. Moriuchi, E. (2021). An empirical study on anthropomorphism and engagement with disembodied AIs and consumers' re-use behavior. Psychology & Marketing, 38(1), 21-42. https://doi.org/10.1002/mar.21407 
  80. Moussawi, S., Koufaris, M., and Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31(2), 343-364. https://doi.org/10.1007/s12525-020-00411-w 
  81. Moussawi, S., and Koufaris, M. (2019). Perceived intelligence and perceived anthropomorphism of personal intelligent agents: Scale development and validation. In Proceedings of the Hawaii International Conference on System Sciences (HICSS), Maui, HI. 
  82. Myatt, S. (2023, February 23). How the federal government is looking at ChatGPT - GovCon Wire. GovCon Wire. https://www.govconwire.com/2023/02/how-the-federal-government-is-looking-at-chatgpt/ 
  83. Nielsen, N. (2023, April 2). ChatGPT lifts business professionals' productivity and improves work quality. Nielsen Norman Group. Retrieved from https://www.nngroup.com/articles/chatgpt-productivity/ 
  84. Omoge, A. P., Gala, P., and Horky, A. (2022). Disruptive technology and AI in the banking industry of an emerging market. International Journal of Bank Marketing, 40(6), 1217-1247. https://doi.org/10.1108/IJBM-09-2021-0403 
  85. OpenAI. (2022, November 30). Introducing ChatGPT. Retrieved from https://openai.com/blog/chatgpt 
  86. Panagiotopoulos, I., and Dimitrakopoulos, G. (2018). An empirical investigation on consumers' intentions towards autonomous driving. Transportation Research Part C: Emerging Technologies, 95, 773-784. https://doi.org/10.1016/j.trc.2018.08.013 
  87. Pariseau, B. (2023, March 2). ChatGPT API sets stage for new wave of enterprise apps | TechTarget. Tech Target. Retrieved from https://www.techtarget.com/searchsoftwarequality/news/365531781/ChatGPT-API-sets-stage-for-new-wave-of-enterprise-apps 
  88. Pillai, A., and Mukherjee, J. (2011). User acceptance of hedonic versus utilitarian social networking web sites. Journal of Indian Business Research, 3(3), 180-191. https://doi.org/10.1108/17554191111157047 
  89. Pillai, R., and Sivathanu, B. (2020). Adoption of AI-based chatbots for hospitality and tourism. International Journal of Contemporary Hospitality Management, 32(10), 3199-3226. https://doi.org/10.1108/IJCHM-04-2020-0259 
  90. Pitardi, V., and Marriott, H. R. (2021). Alexa, she's not human but… Unveiling the drivers of consumers' trust in voice-based artificial intelligence. Psychology & Marketing, 38(4), 626-642. https://doi.org/10.1002/mar.21457 
  91. Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., and Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. https://doi.org/10.1037/0021-9010.88.5.879 
  92. Qiu, L., and Benbasat, I. (2009). Evaluating anthropomorphic product recommendation agents: A social relationship perspective to designing information systems. Journal of Management Information Systems, 25(4), 145-182. https://doi.org/10.2753/MIS0742-1222250405 
  93. Richardson, A. (2023, April 25). ChatGPT 101: Tens of thousands of people taking ChatGPT training courses to stay ahead in job market. Yahoo Finance. https://finance.yahoo.com/news/chatgpt-101-tensthousands-people-193336191.html 
  94. Ringle, C. M., Wende, S., and Becker, J. M. (2022). SmartPLS 4 [Computer software]. Oststeinbek: SmartPLS GmbH. Retrieved from http://www.smartpls.com. 
  95. Ringle, C. M., Sarstedt, M., and Straub, D. (2012). Editor's comments: A critical look at the use of PLS-SEM in "MIS Quarterly." MIS Quarterly, 36(1), iii. https://doi.org/10.2307/41410402 
  96. Rogers, E. M. (1962). Diffusion of Innovations. Simon and Schuster. 
  97. Rosenbaum, E. (2023, February 11). The ChatGPT AI hype cycle is peaking, but even tech skeptics don't expect a bust. CNBC. Retrieved from https://www.cnbc.com/2023/02/11/chatgpt-ai-hype-cycle-is-peaking-but-even-tech-skeptics-doubt-abust.html 
  98. Rotman, D. (2023, March 25). How ChatGPT will revolutionize the economy. MIT Technology Review. Retrieved from https://www.technologyreview.com/2023/03/25/1070275/chatgpt-revolutionize-economy-decide-what-looks-like/ 
  99. Russell, S. J., and Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Pearson Education, Inc. 
  100. Schmelzer, R. (2019, October 31). Should we be afraid of AI? Forbes. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/10/31/should-we-be-afraid-of-ai/ 
  101. Schmidthuber, L., Maresch, D., and Ginner, M. (2020). Disruptive technologies and abundance in the service sector-Toward a refined technology acceptance model. Technological Forecasting and Social Change, 155, 119328. https://doi.org/10.1016/j.techfore.2018.06.017 
  102. Schuetzler, R. M., Grimes, G. M., Giboney, J. S., and Rosser, H. K. (2021). Deciding whether and how to deploy chatbots. MIS Quarterly Executive, 1-15. https://doi.org/10.17705/2msqe.00039 
  103. Shahriar, S., and Hayawi, K. (2023). Let's have a chat! A conversation with ChatGPT: Technology, applications, and limitations. Artificial Intelligence and Applications. https://doi.org/10.47852/bonviewAIA3202939 
  104. Sharma, N., and Patterson, P. G. (1999). The impact of communication effectiveness and service quality on relationship commitment in consumer, professional services. Journal of Services Marketing, 13(2), 151-170. https://doi.org/10.1108/08876049910266059 
  105. Sheehan, B., Jin, H. S., and Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14-24. https://doi.org/10.1016/j.jbusres.2020.04.030 
  106. Shieh, J. (2023, April 29). Best practices for prompt engineering with OpenAI API | OpenAI Help Center. OpenAI Help Center. Retrieved from https://help.openai.com/en/articles/6654000-bestpractices-for-prompt-engineering-with-openai-api 
  107. Shin, M., Song, S. W., and Chock, T. M. (2019). Uncanny valley effects on friendship decisions in virtual social networking service. Cyberpsychology, Behavior, and Social Networking, 22(11), 700-705. https://doi.org/10.1089/cyber.2019.0122 
  108. Shinn, N., Labash, B., and Gopinath, A. (2023). Reflexion: An autonomous agent with dynamic memory and self-reflection (arXiv:2303.11366). arXiv. Retrieved from http://arxiv.org/abs/2303.11366 
  109. Shmueli, G., and Koppius, O. R. (2011). Predictive analytics in information systems research. MIS Quarterly, 35(3), 553-572.  https://doi.org/10.2307/23042796
  110. Sommer, J. (2023, March 8). Is ChatGPT making the world fall victim to the Dunning-Kruger effect? LinkedIn. Retrieved from https://www.linkedin.com/pulse/chatgpt-making-worldfall-victim-donning-kruger-effect-jesper-sommer/ 
  111. Song, S. W., and Shin, M. (2022). Uncanny valley effects on chatbot trust, purchase intention, and adoption intention in the context of e-commerce: The moderating role of avatar familiarity. International Journal of Human-Computer Interaction, 1-16. https://doi.org/10.1080/10447318.2022.2121038 
  112. Sun, H., and Zhang, P. (2006). Causal relationships between perceived enjoyment and perceived ease of use: An alternative approach. Journal of the Association for Information Systems, 7(9), 618-645. https://doi.org/10.17705/1jais.00100 
  113. Suseno, Y., Chang, C., Hudik, M., and Fang, E. S. (2022). Beliefs, anxiety and change readiness for artificial intelligence adoption among human resource managers: The moderating role of high-performance work systems. The International Journal of Human Resource Management, 33(6), 1209-1236. https://doi.org/10.1080/09585192.2021.1931408 
  114. Taherdoost, H. (2018). A review of technology acceptance and adoption models and theories. Procedia Manufacturing, 22, 960-967. https://doi.org/10.1016/j.promfg.2018.03.137 
  115. Tellez, A. (2023, March 8). These major companies, from Snap to Salesforce, are all using ChatGPT. Forbes. Retrieved from https://www.forbes.com/sites/anthonytellez/2023/03/03/thesemajor-companies-from-snap-to-instacart--are-allusing-chatgpt/ 
  116. Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433-460. Retrieved from https://academic.oup.com/mind/article/LIX/236/433/986238 
  117. Vadino, C. (2020, November 10). Council post: Why building trust is just as important as building your brand. Forbes. Retrieved from https://www.forbes.com/sites/forbescommunicationscouncil/2020/11/10/why-building-trust-is-just-as-important-as-building-your-brand/ 
  118. Van Der Heijden, H. (2004). User acceptance of hedonic information systems. MIS Quarterly, 28(4), 695. https://doi.org/10.2307/25148660 
  119. Venkatesh, V., and Brown, S. A. (2001). A longitudinal investigation of personal computers in homes: Adoption determinants and emerging challenges. MIS Quarterly, 25(1), 71. https://doi.org/10.2307/3250959 
  120. Venkatesh, V., and Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186-204. https://doi.org/10.1287/mnsc.46.2.186.11926 
  121. Venkatesh, V., Davis, F., and Morris, M. (2007). Dead or alive? The development, trajectory and future of technology adoption research. Journal of the Association for Information Systems, 8(4), 267-286. https://doi.org/10.17705/1jais.00120 
  122. Venkatesh, V., and Morris, M. G. (2000). Why don't men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior. MIS Quarterly, 24(1), 115. https://doi.org/10.2307/3250981 
  123. Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540 
  124. Venkatesh, V., Thong, J. Y. L., and Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157. https://doi.org/10.2307/41410412 
  125. Wakefield, R. L., and Whitten, D. (2006). Mobile computing: A user study on hedonic/utilitarian mobile device usage. European Journal of Information Systems, 15(3), 292-300. https://doi.org/10.1057/palgrave.ejis.3000619 
  126. Wang, L. C., Baker, J., Wagner, J. A., and Wakefield, K. (2007). Can a retail web site be social? Journal of Marketing, 71(3), 143-157. https://doi.org/10.1509/jmkg.71.3.143 
  127. Wang, Y. Y., and Wang, Y. S. (2022). Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interactive Learning Environments, 30(4), 619-634. https://doi.org/10.1080/10494820.2019.1674887 
  128. Waytz, A., Heafner, J., and Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117. https://doi.org/10.1016/j.jesp.2014.01.005 
  129. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., and Zhou, D. (2023). Chain-of-thought prompting elicits reasoning in large language models (arXiv: 2201.11903). arXiv. Retrieved from http://arxiv.org/abs/2201.11903 
  130. Westfall, C. (2023, January 26). BuzzFeed to use ChatGPT's AI for content creation, stock up 200%+. Forbes. Retrieved from https://www.forbes.com/sites/chriswestfall/2023/01/26/buzzfeed-to-use-chatgpts-ai-for-content-creation-stock-up-200/ 
  131. Wixom, B. H., and Todd, P. A. (2005). A theoretical integration of user satisfaction and technology acceptance. Information Systems Research, 16(1), 85-102. https://doi.org/10.1287/isre.1050.0042 
  132. Wu, J., and Lu, X. (2013). Effects of extrinsic and intrinsic motivators on using utilitarian, hedonic, and dual-purposed information systems: A meta-analysis. Journal of the Association for Information Systems, 14(3), 153-191. https://doi.org/10.17705/1jais.00325 
  133. Wunker, S. (2023, February 16). Disruptive innovation and ChatGPT - Three lessons from the smartphone's emergence. Forbes. Retrieved from https://www.forbes.com/sites/stephenwunker/2023/02/16/disruptive-innovation-and-chatgpt--three-lessons-from-the-smartphones-emergence/ 
  134. Xiao, B., and Benbasat, I. (2007). E-Commerce product recommendation agents: Use, characteristics, and impact. MIS Quarterly, 31(1), 137. https://doi.org/10.2307/25148784 
  135. Xu, Y., Zhou, D., and Ma, J. (2019). Scholar-friend recommendation in online academic communities: An approach based on heterogeneous network. Decision Support Systems, 119, 1-13. https://doi.org/10.1016/j.dss.2019.01.004 
  136. You, S., Kim, J. H., Lee, S., Kamat, V., and Robert, L. P. (2018). Enhancing perceived safety in human-robot collaborative construction using immersive virtual environments. Automation in Construction, 96, 161-170. https://doi.org/10.1016/j.autcon.2018.09.008 
  137. Yu, C. S. (2012). Factors affecting individuals to adopt mobile banking: Empirical evidence from the UTAUT model. Journal of Electronic Commerce Research, 13(2), 104-121. 
  138. Zak, P. J. (2017, January 1). The neuroscience of trust. Harvard Business Review. Retrieved from https://hbr.org/2017/01/the-neuroscience-of-trust 
  139. Zhang, T., Tao, D., Qu, X., Zhang, X., Lin, R., and Zhang, W. (2019). The roles of initial trust and perceived risk in public's acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies, 98, 207-220. https://doi.org/10.1016/j.trc.2018.11.018 
  140. Zhang, T., Tao, D., Qu, X., Zhang, X., Zeng, J., Zhu, H., and Zhu, H. (2020). Automated vehicle acceptance in China: Social influence and initial trust are key determinants. Transportation Research Part C: Emerging Technologies, 112, 220-233. https://doi.org/10.1016/j.trc.2020.01.027 
  141. Zhou, T., Lu, Y., and Wang, B. (2010). Integrating TTF and UTAUT to explain mobile banking user adoption. Computers in Human Behavior, 26(4), 760-767. https://doi.org/10.1016/j.chb.2010.01.013