• Title/Summary/Keyword: trust in artificial intelligence

Search Results: 57

The association between the social presence and trust of chatbots and the sociodemographic characteristics of artificial intelligence chatbot users in general hospitals: focusing on sex and age (의료기관 인공지능 챗봇 이용자의 인구사회학적 특성과 챗봇의 사회적 실재감 및 신뢰감의 관련성 연구 - 성별과 연령 중심으로)

  • Seung Won Jung;Seo Yeon Hwang;Gi Eun Choi;Eun Young Jo;Jin Wook Lee;Jin Young Nam
    • Korea Journal of Hospital Management / v.28 no.3 / pp.27-38 / 2023
  • Objectives: This study explores the impact of age group on social presence and trust among users of medical artificial intelligence chatbots and investigates whether gender differences exist within these relationships. Method: We collected data through a survey of people who had interacted with general hospital chatbot services, either by making reservations or seeking consultations. Multiple linear regression analysis was conducted to examine the relationship between the general characteristics of the study population and the social presence and trust of artificial intelligence chatbots. We additionally conducted a stratified analysis to confirm the presence of gender differences within these relationships. Results: Among 300 participants, those aged 50 and older perceived greater social presence in artificial intelligence chatbots and reported greater trust in them (social presence, 𝛽=0.543, p=0.003; trust, 𝛽=0.787, p=0.000). When stratified by sex, women aged 50 and older showed higher social presence and trust compared to women in their 30s (social presence, 𝛽=0.925, p=0.002; trust, 𝛽=0.645, p=0.007). However, there was no statistically significant relationship between age and chatbot social presence or trust among men. Conclusion: This study demonstrates that advanced age plays a significant role in users' social presence and trust in medical artificial intelligence chatbots. Furthermore, the findings reveal gender differences, with women aged 50 and older showing the highest levels of social presence and trust. These findings can serve as evidence for enhancing the satisfaction of medical institution service users and offer insights into the effective use of chatbot services.

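The entry above reports coefficients from multiple linear regression followed by a sex-stratified analysis. A minimal sketch of that kind of workflow in Python with statsmodels is shown below; the file name and column names (trust, age_group, sex) are hypothetical, not the authors' data.

```python
# Minimal sketch, not the authors' code: OLS regression of chatbot trust on age group,
# followed by sex-stratified models. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chatbot_survey.csv")   # hypothetical columns: trust, social_presence, age_group, sex

# Pooled model: age group (reference category = '30s') predicting trust, adjusting for sex
pooled = smf.ols("trust ~ C(age_group, Treatment(reference='30s')) + C(sex)", data=df).fit()
print(pooled.summary())

# Stratified analysis: fit the same model separately for women and for men
for sex, sub in df.groupby("sex"):
    m = smf.ols("trust ~ C(age_group, Treatment(reference='30s'))", data=sub).fit()
    print(sex, m.params.round(3), m.pvalues.round(3))
```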

XAI Research Trends Using Social Network Analysis and Topic Modeling (소셜 네트워크 분석과 토픽 모델링을 활용한 설명 가능 인공지능 연구 동향 분석)

  • Gun-doo Moon;Kyoung-jae Kim
    • Journal of Information Technology Applications and Management / v.30 no.1 / pp.53-70 / 2023
  • Artificial intelligence has become a familiar part of modern society rather than a technology of the distant future. As artificial intelligence and machine learning have grown more advanced and more complex, it has become difficult for people to grasp their structure and the basis of their decisions, because machine learning models show only their results, not the processes behind them. As artificial intelligence developed and became more common, people began to demand explanations that could give them grounds for trusting it. Recognizing the necessity and importance of explainable artificial intelligence (XAI), this study examined XAI research trends by applying social network analysis and topic modeling to IEEE publications from 2004, when the concept of explainable artificial intelligence was first defined, to 2022. Social network analysis reveals the overall pattern of nodes across a large number of documents, and the connections between keywords show the structure of their relationships; topic modeling identifies topics more objectively by extracting keywords from unstructured data and grouping them into topics. Both methods are well suited to trend analysis. The analysis found that the application of XAI is gradually expanding beyond machine learning and deep learning into a variety of other fields.
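
The study above combines social network analysis with topic modeling over a publication corpus. The sketch below illustrates the general idea only, not the authors' pipeline: LDA topics plus a keyword co-occurrence network using scikit-learn and networkx on a toy set of abstracts.

```python
# Illustrative sketch, not the authors' pipeline: LDA topic modeling plus a simple keyword
# co-occurrence network over a toy corpus of abstracts, using scikit-learn and networkx.
from itertools import combinations
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [                                   # toy stand-ins for IEEE paper abstracts
    "explainable artificial intelligence builds trust in machine learning decisions",
    "deep learning models need interpretation methods before users can trust them",
    "explainable artificial intelligence applications expand into medical and financial domains",
]

# Topic modeling: bag-of-words counts -> LDA topics, then print the top terms per topic
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"Topic {k}:", [terms[i] for i in comp.argsort()[-5:][::-1]])

# Keyword network: connect keywords that co-occur within the same abstract
term_set = set(terms)
G = nx.Graph()
for doc in abstracts:
    keywords = sorted(set(doc.split()) & term_set)
    for a, b in combinations(keywords, 2):
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"] + 1
        G.add_edge(a, b, weight=weight)
print(nx.degree_centrality(G))
```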

A Study on the Reliability of Voice Payment Interface (음성결제 인터페이스의 신뢰도에 관한 연구)

  • Gwon, Hyeon Jeong;Lee, Jee Yeon
    • Journal of the Korean Society for Information Management / v.38 no.3 / pp.101-140 / 2021
  • As the payment service sector actively embraces artificial intelligence technology, "Voice Payments" is becoming a trend in contactless payment services. Voice payment services can execute payments faster and more intuitively through voice, the most natural means of communication for humans. In this study, we selected richness, intimacy, and autonomy as factors for building trust with artificial intelligence agents and examined whether trust is formed when these factors are applied to voice payment services. The experimental results showed that trust was higher when the richness and autonomy of the voice payment interface were higher and intimacy was lower. In addition, the two-way interaction effect of richness and autonomy was significant. We also analyzed and synthesized the collected short-answer responses to identify users' anxieties about voice payment services and proposed speech interface design ideas to increase trust in voice payments.
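
The voice-payment study above reports main effects of richness, intimacy, and autonomy and a significant richness x autonomy interaction. A minimal sketch of such a factorial analysis with statsmodels follows; the file and column names are hypothetical.

```python
# Minimal sketch, hypothetical data and column names: main effects of richness, intimacy,
# and autonomy on trust plus the richness x autonomy interaction, tested via OLS / ANOVA.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("voice_payment_experiment.csv")   # columns: trust, richness, intimacy, autonomy (high/low)

model = smf.ols("trust ~ C(richness) * C(autonomy) + C(intimacy)", data=df).fit()
print(anova_lm(model, typ=2))   # Type II table: main effects and the two-way interaction
```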

Factors Influencing User's Satisfaction in ChatGPT Use: Mediating Effect of Reliability (ChatGPT 사용 만족도에 미치는 영향 요인: 신뢰성의 매개효과)

  • Ki Ho Park;Jun Hu Li
    • Journal of Information Technology Services / v.23 no.2 / pp.99-116 / 2024
  • Interest in ChatGPT has been increasing recently. This study investigated the factors influencing the satisfaction of users of the ChatGPT service, a chatbot system based on artificial intelligence technology. The paper empirically analyzed the causal relationships between the four major factors of service quality, system quality, information quality, and security as independent variables and ChatGPT user satisfaction as the dependent variable. In addition, the mediating effect of reliability between the independent variables and user satisfaction was analyzed. The results showed that, with the exception of information quality, the quality factors, together with security and reliability, had a positive causal relationship with use satisfaction. Reliability played a mediating role between the quality factors, security, and user satisfaction; however, the mediating effect of reliability between service quality and user satisfaction was not significant. In conclusion, to increase user satisfaction with new technology-based services, it is important to build trust among users. The results emphasize the importance of user trust in establishing development and operation strategies for artificial intelligence systems, including ChatGPT.
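
The entry above tests reliability (trust) as a mediator between quality and security factors and user satisfaction. A simple way to probe such a mediating path is to bootstrap the indirect (a*b) effect, as sketched below; the column names are hypothetical and this is not the authors' procedure.

```python
# Illustrative sketch, hypothetical column names (not the authors' procedure): bootstrap the
# indirect effect of system quality on satisfaction transmitted through reliability (a*b path).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chatgpt_survey.csv")   # columns: satisfaction, reliability, system_quality, ...

def indirect_effect(d):
    a = smf.ols("reliability ~ system_quality", data=d).fit().params["system_quality"]
    b = smf.ols("satisfaction ~ reliability + system_quality", data=d).fit().params["reliability"]
    return a * b

np.random.seed(0)
boot = [indirect_effect(df.sample(frac=1.0, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```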

How Trust in Human-like AI-based Service on Social Media Will Influence Customer Engagement: Exploratory Research to Develop the Scale of Trust in Human-like AI-based Service

  • Jin Jingchuan;Shali Wu
    • Asia Marketing Journal / v.26 no.2 / pp.129-144 / 2024
  • This research examines how people's trust in human-like AI-based services influences customer engagement (CE). The study discusses the relationship between trust and CE and explores how trust in AI affects CE when people lack knowledge of the company or brand. Items were extracted from the philosophical literature on trust to build a scale suited to trust in AI. The scale's reliability was confirmed, and six components of trust in AI were merged into three dimensions: trust based on Quality Assurance, Risk-taking, and Corporate Social Responsibility. Trust based on quality assurance and on risk-taking was verified to positively affect customer engagement, and feelings about the AI-based service fully mediate the relationship between all three dimensions of trust in AI and CE. The new trust scale for human-like AI-based services on social media sheds light on further research, and the relationship between trust in AI and CE provides a theoretical basis for subsequent studies.
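
The scale-development study above reports ensuring scale reliability before merging trust-in-AI items into three dimensions. A common check at that step is Cronbach's alpha, sketched below on invented toy data rather than the authors' items.

```python
# Minimal sketch on toy data: checking the internal consistency (Cronbach's alpha) of a set
# of trust-in-AI items before merging them into a dimension. Item values are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for one candidate dimension."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy responses: 5 respondents rating 3 quality-assurance items on a 1-7 scale
quality_assurance_items = np.array([
    [6, 7, 6],
    [5, 5, 6],
    [7, 7, 7],
    [3, 4, 3],
    [6, 6, 5],
])
print(round(cronbach_alpha(quality_assurance_items), 3))
```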

AI Voice Agent and Users' Response (AI 음성 에이전트의 음성 특성에 대한 사용자 반응 연구)

  • Beak, Seung Ju;Jung, Yoon Hyuk
    • The Journal of Information Systems / v.31 no.2 / pp.137-158 / 2022
  • Purpose: As artificial intelligence voice agents (AIVA) have been widely adopted in services, diverse forms of their voices, which are the main interface with users, have been experimented with. The purpose of this study is to examine how users evaluate the vocal characteristics (gender, voice pitch, and voice pace) of AIVA, drawing on prior research on human voice attractiveness. Design/methodology/approach: This study employed an experimental survey in which 516 people participated. Each participant was randomly assigned to one of eight conditions (e.g., male - higher pitch - faster pace) and listened to an AIVA voice sample that presented weather information. Each participant then rated three outcome factors (attractiveness, trust, and anthropomorphism). Findings: The results reveal that female AIVA voices were perceived as more attractive and trustworthy than male voices. Regarding voice pitch, lower-pitched voices were preferred for female voices, whereas higher-pitched voices were preferred for male voices. Finally, faster AIVA voices were perceived as more attractive than slower ones.

Evaluating the Current State of ChatGPT and Its Disruptive Potential: An Empirical Study of Korean Users

  • Jiwoong Choi;Jinsoo Park;Jihae Suh
    • Asia Pacific Journal of Information Systems / v.33 no.4 / pp.1058-1092 / 2023
  • This study investigates the perception and adoption of ChatGPT (a large language model (LLM)-based chatbot created by OpenAI) among Korean users and assesses its potential as the next disruptive innovation. Drawing on previous literature, the study proposes perceived intelligence and perceived anthropomorphism as key factors differentiating ChatGPT from earlier AI-based chatbots. Four individual motives (perceived usefulness, ease of use, enjoyment, and trust) and two societal motives (social influence and AI anxiety) were identified as antecedents of ChatGPT acceptance. A survey was conducted within two Korean online communities related to artificial intelligence. The findings confirm that ChatGPT is used for both utilitarian and hedonic purposes and that perceived usefulness and enjoyment positively affect the behavioral intention to adopt the chatbot. Contrary to prior expectations, however, perceived ease of use did not exert a significant influence on behavioral intention. Trust was also not a significant influence on behavioral intention, and while social influence played a substantial role in adoption intention and perceived usefulness, AI anxiety showed no significant effect. The study confirmed that perceived intelligence and perceived anthropomorphism shape the individual factors that drive behavioral intention to adopt, and it highlights the need for future research to deconstruct the factors that make ChatGPT "enjoyable" and "easy to use" and to better understand its potential as a disruptive technology. Service developers and LLM providers are advised to design user-centric applications, focus on user-friendliness, acknowledge that building trust takes time, and recognize the role of social influence in adoption.

Bankruptcy Prediction with Explainable Artificial Intelligence for Early-Stage Business Models

  • Tuguldur Enkhtuya;Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication / v.15 no.3 / pp.58-65 / 2023
  • Bankruptcy is a significant risk for start-up companies, but with the help of cutting-edge artificial intelligence technology, it can now be predicted with detailed explanations. In this paper, we implemented the Category Boosting (CatBoost) algorithm after cleaning and editing the data with OpenRefine, and we explained the resulting model using the Shapash library while incorporating domain knowledge. By leveraging the 5 Cs of credit as domain knowledge, financial analysts in banks and investors can use the detailed results provided by the model to enhance their decision-making, even without extensive knowledge of AI. This empowers investors to identify potential bankruptcy risks in their business models and to make necessary improvements or reconsider their ventures before proceeding. As a result, the model serves as a "glass-box" model, allowing end users to understand which specific financial indicators contribute to the prediction of bankruptcy. This transparency enhances trust and provides valuable insights for decision-makers in mitigating bankruptcy risks.
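
The bankruptcy study above trains a CatBoost model and explains it with the Shapash library. As a lighter-weight stand-in for that setup, the sketch below trains a CatBoost classifier and pulls SHAP-style per-feature contributions directly from CatBoost; the data file and column names are hypothetical.

```python
# Minimal sketch, not the paper's code: train a CatBoost classifier on financial indicators
# and inspect per-feature SHAP contributions from CatBoost itself (the paper uses the
# Shapash library for its explanation dashboards). File and column names are hypothetical.
import pandas as pd
from catboost import CatBoostClassifier, Pool

df = pd.read_csv("bankruptcy.csv")                     # financial ratios plus a 'bankrupt' label
X, y = df.drop(columns=["bankrupt"]), df["bankrupt"]

model = CatBoostClassifier(iterations=500, depth=6, verbose=0)
model.fit(X, y)

# SHAP-style values: which indicators push a given firm toward the bankruptcy prediction
shap_values = model.get_feature_importance(Pool(X, y), type="ShapValues")
print(dict(zip(X.columns, shap_values[0][:-1])))       # contributions for the first firm
```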

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.142-151 / 2022
  • In this paper, a model combined with explainable artificial intelligence (XAI) techniques was presented to secure the reliability of machine learning-based sentiment analysis and prediction. The applicability of the proposed model was tested and described using the IMDB dataset. This approach has the advantage that it can explain, from various perspectives, how the data affect the model's predictions. In applications of sentiment analysis such as recommendation systems, emotion analysis through facial expression recognition, and opinion analysis, it is possible to gain users' trust by presenting more specific, evidence-based analysis results.
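
The sentiment-analysis entry above pairs a classifier with XAI so users can see why a review is scored positive or negative. The sketch below is only an illustration of that idea, not the paper's model: a transparent TF-IDF plus logistic regression pipeline where per-word contributions serve as the explanation.

```python
# Illustrative sketch only, not the paper's model: a transparent sentiment classifier where
# per-word weights act as the explanation shown to users. Reviews here are toy stand-ins
# for the IMDB dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["a wonderful, moving film", "dull plot and terrible acting"]
labels = [1, 0]                                        # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, labels)

# Explanation: contribution of each word in a new review to the positive-class score
vec = clf.named_steps["tfidfvectorizer"]
lr = clf.named_steps["logisticregression"]
x = vec.transform(["wonderful acting but a dull plot"])
contributions = x.toarray()[0] * lr.coef_[0]
terms = vec.get_feature_names_out()
print(sorted(zip(terms, contributions), key=lambda t: abs(t[1]), reverse=True)[:5])
```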

User Factors and Trust in ChatGPT: Investigating the Relationship between Demographic Variables, Experience with AI Systems, and Trust in ChatGPT (사용자 특성과 ChatGPT 신뢰의 관계 : 인구통계학적 변수와 AI 경험의 영향)

  • Park Yeeun;Jang Jeonghoon
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.4 / pp.53-71 / 2023
  • This study explores the relationship between various user factors and the level of trust in ChatGPT, a sophisticated language model exhibiting human-like capabilities. Specifically, we considered demographic characteristics such as age, education, gender, and major, along with factors related to previous AI experience, including duration, frequency, proficiency, perception, and familiarity. Through a survey of 140 participants (71 female, 69 male), we collected and analyzed data on how these user factors relate to trust in ChatGPT. Both descriptive and inferential statistical methods, including multiple linear regression models, were employed in the analysis. The findings reveal significant relationships between trust in ChatGPT and user factors such as gender, the perception of prior AI interactions, and self-evaluated proficiency. This research not only enhances our understanding of trust in artificial intelligence but also offers valuable insights for AI developers and practitioners in the field.