• Title/Summary/Keyword: Trust in AI

How Trust in Human-like AI-based Service on Social Media Will Influence Customer Engagement: Exploratory Research to Develop the Scale of Trust in Human-like AI-based Service

  • Jin Jingchuan; Shali Wu
    • Asia Marketing Journal / v.26 no.2 / pp.129-144 / 2024
  • This research examines how people's trust in a human-like AI-based service influences customer engagement (CE). It discusses the relationship between trust and CE and explores how trust in AI affects CE when users lack knowledge of the company or brand. Items from the philosophical study of trust were extracted to build a scale suited to trust in AI. The scale's reliability was verified, and six components of trust in AI were merged into three dimensions: trust based on Quality Assurance, Risk-taking, and Corporate Social Responsibility. Trust based on quality assurance and on risk-taking was found to positively impact customer engagement, and feelings about the AI-based service fully mediate the relationship between all three dimensions of trust in AI and CE. The new trust scale for human-like AI-based services on social media sheds light on further research, and the relationship between trust in AI and CE provides a theoretical basis for subsequent studies.
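
The abstract reports that the scale's reliability was verified before the six trust components were merged into three dimensions. A common reliability statistic in this kind of scale development is Cronbach's alpha; the sketch below shows a minimal way to compute it in Python for one hypothetical dimension. The item names and simulated responses are placeholders, not the paper's actual instrument or data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-type items (rows = respondents)."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses for a three-item "quality assurance" trust dimension
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=200)
df = pd.DataFrame({
    "qa_item1": np.clip(base + rng.integers(-1, 2, 200), 1, 5),
    "qa_item2": np.clip(base + rng.integers(-1, 2, 200), 1, 5),
    "qa_item3": np.clip(base + rng.integers(-1, 2, 200), 1, 5),
})
print(f"Cronbach's alpha: {cronbach_alpha(df):.3f}")
```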

User Factors and Trust in ChatGPT: Investigating the Relationship between Demographic Variables, Experience with AI Systems, and Trust in ChatGPT (사용자 특성과 ChatGPT 신뢰의 관계 : 인구통계학적 변수와 AI 경험의 영향)

  • Park Yeeun; Jang Jeonghoon
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.4 / pp.53-71 / 2023
  • This study explores the relationship between various user factors and the level of trust in ChatGPT, a sophisticated language model exhibiting human-like capabilities. Specifically, we considered demographic characteristics such as age, education, gender, and major, along with factors related to previous AI experience, including duration, frequency, proficiency, perception, and familiarity. Through a survey of 140 participants (71 female, 69 male), we collected and analyzed data on how these user factors relate to trust in ChatGPT. Both descriptive and inferential statistical methods, including multiple linear regression models, were employed in the analysis. The findings reveal significant relationships between trust in ChatGPT and user factors such as gender, the perception of prior AI interactions, and self-evaluated proficiency. This research not only enhances our understanding of trust in artificial intelligence but also offers valuable insights for AI developers and practitioners.
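
The analysis described above relies on multiple linear regression relating user factors to trust in ChatGPT. The following is a minimal sketch of such a model using statsmodels; the variable names and simulated responses are hypothetical stand-ins for the paper's survey items.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 140  # matches the sample size reported in the abstract

# Hypothetical survey data: one row per respondent
df = pd.DataFrame({
    "gender":      rng.choice(["female", "male"], size=n),
    "age":         rng.integers(20, 60, size=n),
    "proficiency": rng.integers(1, 6, size=n),   # self-evaluated AI proficiency (1-5)
    "perception":  rng.integers(1, 6, size=n),   # perception of prior AI interactions (1-5)
})
# Simulated trust score loosely driven by proficiency and perception
df["trust"] = 2 + 0.3 * df["proficiency"] + 0.2 * df["perception"] + rng.normal(scale=0.5, size=n)

# Multiple linear regression: trust regressed on demographic and experience factors
model = smf.ols("trust ~ C(gender) + age + proficiency + perception", data=df).fit()
print(model.summary())
```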

A Study of the Behavioral Intention on Conversational ChatGPT for Tourism Information Search Service: Focusing on the Role of Cognitive and Affective Trust (ChatGPT, 대화형 인공지능 관광 검색 서비스의 행동의도에 대한 연구: 인지적 신뢰와 정서적 신뢰의 역할을 중심으로)

  • Minsung Kim; Chulmo Koo
    • Information Systems Review / v.26 no.1 / pp.119-149 / 2024
  • This study investigates the antecedents and mechanisms influencing the formation of trust and behavioral intentions toward new AI chatbots, such as ChatGPT, used as travel information search services. Analyzing the roles of familiarity, novelty, personal innovativeness, information quality, and perceived anthropomorphism, the research elucidates the impact of these factors on users' cognitive and affective trust, which ultimately affects their intention to adopt the information and continue using the AI chatbot. Results indicate that perceived familiarity and information quality positively influence both cognitive and affective trust, whereas perceived novelty contributes positively only to cognitive trust. Additionally, the personal innovativeness of new AI chatbot users was found to weaken the effect of familiarity on perceived trust, while the perceived anthropomorphism of the chatbot amplified the effects of novelty and familiarity on cognitive trust. These findings underscore the importance of considering familiarity, personal innovativeness, information quality, and anthropomorphism in the design and implementation of AI chatbots, as they affect trust and behavioral intention.
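
The moderating roles described in this abstract (personal innovativeness weakening, and perceived anthropomorphism amplifying, the familiarity and novelty effects on cognitive trust) correspond to interaction terms in the trust model. Below is a regression-style sketch of such a moderation test on simulated data; the paper itself presumably estimated these effects within a full structural model, so the column names and coefficients here are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300

# Hypothetical standardized predictors
df = pd.DataFrame({
    "familiarity":      rng.normal(size=n),
    "novelty":          rng.normal(size=n),
    "anthropomorphism": rng.normal(size=n),
    "innovativeness":   rng.normal(size=n),
})
# Simulated cognitive trust with a positive familiarity x anthropomorphism interaction
df["cognitive_trust"] = (
    0.4 * df["familiarity"] + 0.2 * df["novelty"]
    + 0.3 * df["familiarity"] * df["anthropomorphism"]
    + rng.normal(scale=0.5, size=n)
)

# Moderation test: the interaction terms carry the moderating effects
model = smf.ols(
    "cognitive_trust ~ familiarity * anthropomorphism + novelty * anthropomorphism"
    " + familiarity * innovativeness",
    data=df,
).fit()
print(model.params)   # significant interaction coefficients indicate moderation
```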

An Evaluation of Determinants to Viewer Acceptance of Artificial Intelligence-based News Anchor (인공지능(AI) 기술 기반의 뉴스 앵커에 대한 수용 의도의 선행요인 연구)

  • Shin, Ha-Yan; Kweon, Sang-Hee
    • The Journal of the Korea Contents Association / v.21 no.4 / pp.205-219 / 2021
  • The present study identified determinants of user acceptance of an artificial intelligence (AI)-based news anchor. The conceptual model included the three constructs of ability, benevolence, and integrity to determine whether they predict trust perceived in the AI news anchor. The work further examined the influences of social presence, anthropomorphism, perceived usefulness, understanding, and trust as immediate determinants of user acceptance. The conceptual model was validated on survey data collected from 513 respondents. A scale refinement process was conducted through examination of data normality, common method bias, the structure of latent variables, and internal consistency. In addition, a confirmatory factor analysis was performed to assess the extent to which the survey data measure the constructs adequately. The results of the structural equation model indicated that (1) the two constructs of ability and integrity significantly predicted perceived trust, and (2) anthropomorphism, perceived usefulness, and trust emerged as significant, positive predictors of user acceptance of the AI-based news anchor.
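
The measurement step described above, a confirmatory factor analysis checking that the survey items load on their intended constructs, can be sketched in lavaan-style syntax. The example below uses the Python package semopy on simulated indicator data; the construct and item names are hypothetical, since the abstract does not reproduce the instrument.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

rng = np.random.default_rng(2)
n = 500

# Simulate two correlated latent constructs with three indicators each
ability = rng.normal(size=n)
integrity = 0.5 * ability + rng.normal(scale=0.9, size=n)
df = pd.DataFrame({f"ab{i}": ability + rng.normal(scale=0.6, size=n) for i in range(1, 4)})
for i in range(1, 4):
    df[f"in{i}"] = integrity + rng.normal(scale=0.6, size=n)

# CFA specification: '=~' defines how each latent factor is measured
desc = """
ability   =~ ab1 + ab2 + ab3
integrity =~ in1 + in2 + in3
"""
model = Model(desc)
model.fit(df)
print(model.inspect())        # factor loadings and covariances
print(calc_stats(model).T)    # fit indices such as CFI and RMSEA
```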

The Structural Relationships between AI-based Voice Recognition Service Characteristics, Interactivity, and Intention to Use (AI기반 음성인식 서비스 특성과 상호 작용성 및 이용 의도 간의 구조적 관계)

  • Lee, SeoYoung
    • Journal of Information Technology Services / v.20 no.5 / pp.189-207 / 2021
  • Voice interaction combined with artificial intelligence is poised to revolutionize human-computer interaction with the advent of virtual assistants. This paper analyzes the effects of interactive elements of AI-based voice recognition services, such as sympathy, assurance, intimacy, and trust, on intention to use. A questionnaire was administered to 284 smartphone/smart TV users in Korea, and the collected data were analyzed with structural equation modeling and bootstrapping. The key results are as follows. First, AI-based voice recognition service characteristics such as sympathy, assurance, intimacy, and trust have positive effects on interactivity with the service. Second, interactivity with the service has a positive effect on intention to use. Third, characteristics such as interactional enjoyment and intimacy have direct positive effects on intention to use. Fourth, characteristics such as sympathy, assurance, intimacy, and trust have indirect positive effects on intention to use the service, mediated by interactivity. Investigating the factors that affect interactivity with, and intention to use, voice recognition assistants has both practical and academic implications.
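
The fourth finding above is a mediation result tested with structural equation modeling and bootstrapping. The sketch below illustrates, on simulated data with hypothetical variable names, how a bootstrapped confidence interval for one indirect effect (trust → interactivity → intention to use) can be obtained with simple regressions; the paper's model estimates several such paths simultaneously.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 284  # sample size reported in the abstract

# Simulated data: trust -> interactivity -> intention to use
trust = rng.normal(size=n)
interactivity = 0.5 * trust + rng.normal(scale=0.8, size=n)
intention = 0.6 * interactivity + rng.normal(scale=0.8, size=n)

def indirect_effect(idx):
    """a*b indirect effect estimated on a (bootstrap) sample of row indices."""
    a = sm.OLS(interactivity[idx], sm.add_constant(trust[idx])).fit().params[1]
    b = sm.OLS(intention[idx],
               sm.add_constant(np.column_stack([trust[idx], interactivity[idx]]))).fit().params[2]
    return a * b

# Percentile bootstrap of the indirect effect
boot = np.array([indirect_effect(rng.integers(0, n, size=n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect: {indirect_effect(np.arange(n)):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```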

Effects on the continuous use intention of AI-based voice assistant services: Focusing on the interaction between trust in AI and privacy concerns (인공지능 기반 음성비서 서비스의 지속이용 의도에 미치는 영향: 인공지능에 대한 신뢰와 프라이버시 염려의 상호작용을 중심으로)

  • Jang, Changki; Heo, Deokwon; Sung, WookJoon
    • Informatization Policy / v.30 no.2 / pp.22-45 / 2023
  • In research on the use of AI-based voice assistant services, problems related to users' trust and privacy protection arising from the experience of using the services are constantly being raised. The purpose of this study was to empirically investigate the effects of individual trust in AI and online privacy concerns on the continued use of AI-based voice assistants, and specifically the impact of their interaction. Question items were constructed based on previous studies, and an online survey was conducted among 405 respondents. The effects of users' trust in AI and privacy concerns on the adoption and continuous use intention of AI-based voice assistant services were analyzed using the Heckman selection model. The main findings are as follows. First, usage of AI-based voice assistant services was positively influenced by factors that promote technology acceptance, such as perceived usefulness, perceived ease of use, and social influence. Second, trust in AI had no statistically significant effect on usage behavior but had a positive effect on continuous use intention. Third, the level of privacy concern was confirmed to suppress continuous use intention through its interaction with trust in AI. These results suggest the need, as part of the governance for realizing digital government, to strengthen the user experience by collecting user opinions, acting to improve trust in the technology, and alleviating users' privacy concerns. When introducing AI-based policy services, the scope of application of AI technology should be disclosed transparently through a public deliberation process, and a system that can track and evaluate privacy issues ex post, along with algorithms that consider privacy protection, should be developed.
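
The Heckman selection model mentioned above accounts for the fact that continuance intention is only observed for respondents who adopted the voice assistant in the first place. Below is a compact two-step sketch (a probit selection equation, then an outcome equation augmented with the inverse Mills ratio) on simulated data with hypothetical variable names; it is an illustration of the technique, not a reproduction of the published analysis.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 405  # sample size reported in the abstract

# Hypothetical predictors
usefulness = rng.normal(size=n)
social_influence = rng.normal(size=n)
trust_in_ai = rng.normal(size=n)
privacy_concern = rng.normal(size=n)

# Simulate adoption (selection) and continuance intention (outcome)
adopt = (0.8 * usefulness + 0.5 * social_influence + rng.normal(size=n)) > 0
intention = 0.6 * trust_in_ai - 0.3 * trust_in_ai * privacy_concern + rng.normal(size=n)

# Step 1: probit selection equation for adoption
Z = sm.add_constant(np.column_stack([usefulness, social_influence]))
probit = sm.Probit(adopt.astype(int), Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)          # inverse Mills ratio

# Step 2: outcome equation on adopters, correcting for selection with the IMR
X = sm.add_constant(np.column_stack([
    trust_in_ai, privacy_concern, trust_in_ai * privacy_concern, imr
]))
outcome = sm.OLS(intention[adopt], X[adopt]).fit()
print(outcome.summary())
```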

The Effect of Motivated Consumer Innovativeness on Perceived Value and Intention to Use for Senior Customers at AI Food Service Store

  • Lee, JeungSun; Kwak, Min-Kyu; Cha, Seong-Soo
    • Journal of Distribution Science / v.19 no.9 / pp.91-100 / 2021
  • Purpose: This study investigates senior customers' intention to use artificial intelligence (AI) food service stores, which are becoming a trend in the service industry. Research design, data and methodology: The study used the extended technology acceptance model (TAM) and motivated consumer innovativeness (MCI) variables validated by existing research. In addition to the effect of motivated consumer innovativeness on customer value, we investigated the effect of customer value on trust and use intention. For the study, 520 questionnaires were distributed online by an expert survey agency, and the data were checked for validity and reliability. Results: Hypothesis testing verified that functionally motivated consumer innovativeness (fMCI), hedonically motivated consumer innovativeness (hMCI), and socially motivated consumer innovativeness (sMCI) all had positive effects on usefulness and enjoyment. Furthermore, usefulness had a statistically significant positive effect on trust, but perceived enjoyment did not; trust was found to positively affect intention to use. Conclusions: We compared the moderating effects of seniors' gender and age (split at 60) between groups. Although there was no moderating effect of age, the effect of usefulness on trust was stronger in the male group than in the female group.
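
The gender comparison reported in the conclusions (the usefulness-to-trust path being stronger for the male group) is a multi-group moderation test. One simple way to sketch such a comparison, using plain per-group regressions rather than the paper's full structural model, is to estimate the path in each group and z-test the difference; the data and names below are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

def slope_and_se(usefulness, trust):
    """Estimate the usefulness -> trust slope and its standard error for one group."""
    fit = sm.OLS(trust, sm.add_constant(usefulness)).fit()
    return fit.params[1], fit.bse[1]

# Hypothetical group data: a stronger usefulness effect simulated for the male group
n = 260
use_m, use_f = rng.normal(size=n), rng.normal(size=n)
trust_m = 0.6 * use_m + rng.normal(scale=0.8, size=n)
trust_f = 0.3 * use_f + rng.normal(scale=0.8, size=n)

b_m, se_m = slope_and_se(use_m, trust_m)
b_f, se_f = slope_and_se(use_f, trust_f)

# z-test for the difference between the two group-specific path coefficients
z = (b_m - b_f) / np.sqrt(se_m**2 + se_f**2)
print(f"male path {b_m:.3f}, female path {b_f:.3f}, z = {z:.2f}")
```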

An Exploratory Study on the Trustworthiness Analysis of Generative AI (생성형 AI의 신뢰도에 대한 탐색적 연구)

  • Soyon Kim; Ji Yeon Cho; Bong Gyou Lee
    • Journal of Internet Computing and Services / v.25 no.1 / pp.79-90 / 2024
  • This study focused on user trust in ChatGPT, a generative AI technology, and explored the factors that affect usage and intention to continue using it, as well as whether the influence of trust varies depending on the purpose of use. A survey was conducted targeting people in their 20s and 30s, who use ChatGPT the most, and statistical analysis was performed with IBM SPSS 27 and SmartPLS 4.0. A structural equation model was formulated on the foundation of Bhattacherjee's Expectation-Confirmation Model (ECM), employing path analysis and Multi-Group Analysis (MGA) for hypothesis validation. The main findings are as follows. First, ChatGPT is mainly used for specific needs or objectives rather than as a daily tool; the majority of users are aware of its hallucination effects, but this did not hinder its use. Second, hypothesis testing indicated that expectation-confirmation, perceived usefulness, and user satisfaction all exert a positive influence on the dependent variable, continuance intention. Third, the influence of trust varied with the user's purpose: trust was significant when ChatGPT was used for information retrieval but not for creative purposes. These findings can help address trust issues when generative AI is introduced in society and companies, and inform policies and improvement measures for its successful adoption.

A Study on User Continuance Intention of Conversational Generative AI Services: Focused on Task-Technology Fit (TTF) and Trust (대화형 생성AI 서비스 사용자의 지속사용의도에 관한 연구: 과업-기술적합(TTF)과 신뢰를 중심으로)

  • Seunggyu Ann; Hyunchul Ahn
    • Information Systems Review / v.26 no.1 / pp.193-218 / 2024
  • This study identified factors related to the technological characteristics of conversational generative AI services and users' task characteristics, and then analyzed the effects of task-technology fit on user satisfaction and continued use. The effects of trust, representing the degree of users' belief in the information provided by generative AI, on task-technology fit, user satisfaction, and continuance intention were also examined. A survey was conducted among users of various age groups, and 198 questionnaires were collected and analyzed using SmartPLS 4.0 to validate the proposed model. Hypothesis testing confirmed that language fluency and interactivity among the technology characteristics, and ambiguity among the task characteristics, significantly affect user satisfaction and intention to continue using via task-technology fit. However, creativity among the technology characteristics and time flexibility among the task characteristics did not significantly affect task-technology fit, and trust did not directly affect task-technology fit or intention to continue using, but only positively affected user satisfaction. The results can provide meaningful implications for vendors who want to develop and provide conversational generative AI services and for companies who want to adopt generative AI technology to improve business productivity.

A Comparative Study of Potential Job Candidates' Perceptions of an AI Recruiter and a Human Recruiter (인공지능 인사담당자와 인간 인사담당자에 대한 잠재적 입사지원자들의 인식 비교 연구)

  • Min, Jihyun; Kim, Sinae; Park, Yonguk; Sohn, Young Woo
    • Journal of the Korea Convergence Society / v.9 no.5 / pp.191-202 / 2018
  • Artificial intelligence (AI) is already being utilized in certain personnel selection processes in organizations, and AI will eventually make even the final decisions in personnel selection. The present study investigated potential job candidates' perceptions of an AI recruiter by comparing selection procedures carried out by an AI recruiter with those carried out by a human recruiter. College students in South Korea were recruited for the study. Each was shown one of two recruitment scenarios (human recruiter vs. AI recruiter; between-subject design), followed by questionnaires measuring satisfaction with the selection procedures, procedural justice, trust in the recruiter, and belief in a just world. Results show that potential job candidates were more satisfied with the selection procedures used by the AI recruiter than with those used by the human recruiter, and perceived them as fairer. In addition, potential job candidates' trust in the AI recruiter was significantly higher than their trust in the human recruiter. The study also explored whether these perceptions were contingent upon candidates' beliefs in a just world, and it suggests a direction for future research.
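
Because the study uses a between-subject design, each participant rates only one scenario, so the group comparisons (satisfaction, procedural justice, trust) reduce to independent-samples tests. The sketch below shows one such comparison on invented ratings; the actual scales and group sizes are not given in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical 7-point trust ratings from the two scenario groups
trust_ai_recruiter = np.clip(rng.normal(loc=5.2, scale=1.0, size=80), 1, 7)
trust_human_recruiter = np.clip(rng.normal(loc=4.6, scale=1.0, size=80), 1, 7)

# Independent-samples t-test (Welch's version, not assuming equal variances)
t, p = stats.ttest_ind(trust_ai_recruiter, trust_human_recruiter, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
print(f"mean trust: AI {trust_ai_recruiter.mean():.2f} vs human {trust_human_recruiter.mean():.2f}")
```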