Title/Summary/Keyword: XAI (Explainable AI)


Fault diagnosis of linear transfer robot using XAI

  • Taekyung Kim;Arum Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.121-138
    • /
    • 2024
  • Artificial intelligence is crucial to manufacturing productivity. Understanding the sources of production disruptions, especially in linear transfer robot systems, is essential for efficient operations. These mechanical tools, essential for linear movements within systems, are prone to damage and degradation, especially in the LM guide, due to repetitive motions. We examine how explainable artificial intelligence (XAI) may diagnose linear rail clearance and ball screw clearance anomalies in wafer transfer linear robots. XAI helps diagnose problems and explain anomalies, enriching management and operational strategies. By interpreting the reasons for anomaly detection through visualizations such as Class Activation Maps (CAMs) produced with techniques like Grad-CAM, FG-CAM, and FFT-CAM, and by comparing a 1D-CNN with a 2D-CNN, we illustrate the potential of XAI to enhance diagnostic accuracy. Experiments with datasets from accelerometer and torque sensors validate the high accuracy of the proposed method in binary and ternary classification. This study exemplifies how XAI can elucidate deep learning models trained on industrial signals, offering a practical approach to understanding and applying AI to maintain the integrity of critical components such as LM guides in linear feed robots.
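
A minimal sketch of the Grad-CAM step this abstract describes, applied to a 1D-CNN over vibration signals. The `model` and its final convolution layer `conv_last` are hypothetical stand-ins for the paper's architectures, and the FG-CAM and FFT-CAM variants are not reproduced here:

```python
# Grad-CAM sketch for a 1D-CNN; `model.conv_last` is an assumed layer name.
import torch
import torch.nn.functional as F

def grad_cam_1d(model, signal, target_class):
    """Return a per-timestep importance map for one (1, C, T) input tensor."""
    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["a"] = out.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["g"] = grad_out[0].detach()

    h1 = model.conv_last.register_forward_hook(fwd_hook)
    h2 = model.conv_last.register_full_backward_hook(bwd_hook)

    model.zero_grad()
    logits = model(signal)                 # shape (1, num_classes)
    logits[0, target_class].backward()
    h1.remove(); h2.remove()

    a = activations["a"]                   # (1, K, T') feature maps
    g = gradients["g"]                     # (1, K, T') gradients
    weights = g.mean(dim=2, keepdim=True)  # global-average-pooled gradients
    cam = F.relu((weights * a).sum(dim=1))           # (1, T')
    cam = F.interpolate(cam.unsqueeze(1), size=signal.shape[-1],
                        mode="linear", align_corners=False).squeeze()
    return cam / (cam.max() + 1e-8)        # normalise to [0, 1]
```

Upsampling the map back to the input length lets it be overlaid on the raw signal to show which time segments drove the anomaly call.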

Development of a Ship Engine Anomaly Detection System Using XAI (Explainable AI) Techniques (XAI(Explainable AI) 기법을 이용한 선박기관 이상탐지 시스템 개발)

  • Habtemariam Duguma Yeshitla;Agung Nugraha;Antariksa Gian
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.11a
    • /
    • pp.289-290
    • /
    • 2022
  • This study introduces a system that detects anomalies in a ship's main engine, a critical ship component, using sensor data collected from the engine. A distinctive strength of the system is that, beyond detecting anomalies, it quantifies each sensor's contribution to an anomaly, making it possible to categorize anomaly occurrences by type and carry out further analysis. In addition, a convenient web-based UI was developed so that users can work with detected anomalies more conveniently…
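
The truncated abstract does not say how the per-sensor contributions are computed, so the following is only one plausible recipe, not the paper's method: a PCA reconstruction-error detector whose squared error is attributed channel by channel. Sensor names and data are placeholders:

```python
# Hedged sketch: anomaly score plus per-sensor contribution shares via PCA
# reconstruction error. Channels and data are illustrative, not the study's.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

sensors = ["rpm", "exhaust_temp", "lube_oil_press", "coolant_temp"]  # assumed channels
train = pd.DataFrame(np.random.randn(500, 4), columns=sensors)       # placeholder data

scaler = StandardScaler().fit(train.values)
pca = PCA(n_components=2).fit(scaler.transform(train.values))

def score_sample(x_row):
    """Return (anomaly_score, per-sensor contribution shares) for one observation."""
    z = scaler.transform(x_row.values.reshape(1, -1))
    recon = pca.inverse_transform(pca.transform(z))
    err = (z - recon) ** 2                      # squared reconstruction error per sensor
    contrib = pd.Series(err.ravel(), index=sensors)
    return err.sum(), contrib / contrib.sum()   # shares sum to 1

score, contribution = score_sample(train.iloc[0])
print(f"anomaly score {score:.3f}")
print(contribution.sort_values(ascending=False))
```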

Study on the Impact of XAI Explanation Levels on Cognitive Load and User Satisfaction: Focusing on Risk Levels in Financial AI Systems

  • No-Ah Han;Yoo-Jin Hwang;Zoon-Ky Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.9
    • /
    • pp.49-59
    • /
    • 2024
  • In this paper, we examine the impact of XAI explanations on user satisfaction and cognitive load according to the risk levels defined in the EU AI Act. XAI aims to make the internal processes of complex AI models understandable to humans and is widely used in both academia and industry. Although the importance and value of XAI continue to rise, there has been little research determining the level of explanation needed for a given AI system risk level. To address this gap, we designed an experiment with 120 participants divided into 8 groups, each exposed to one of four levels of explainability (XAI) within low-risk and high-risk financial AI systems. A quantitative approach measured cognitive load, user satisfaction, mental effort, and the clarity of the material design across the different AI system interfaces. The results indicate that the amount of information in explanations significantly affects cognitive load and user satisfaction, depending on the risk level. However, the effect of explanation level on user satisfaction was mediated by the material design, which determined how easily the information was understood. This research offers practical, regulatory, and academic contributions by providing guidelines for determining the necessary level of explanation based on AI system risk levels.

Analysis of the impact of mathematics education research using explainable AI (설명가능한 인공지능을 활용한 수학교육 연구의 영향력 분석)

  • Oh, Se Jun
    • The Mathematical Education
    • /
    • v.62 no.3
    • /
    • pp.435-455
    • /
    • 2023
  • This study focused on developing an Explainable Artificial Intelligence (XAI) model to identify and analyze papers with significant impact in the field of mathematics education. Meta-information from 29 domestic and international mathematics education journals was used to construct a comprehensive academic research network for mathematics education, built by integrating five sub-networks: 'paper and its citation network', 'paper and author network', 'paper and journal network', 'co-authorship network', and 'author and affiliation network'. A Random Forest machine learning model was employed to evaluate the impact of individual papers within this network, and SHAP, an XAI technique, was used to analyze the reasons behind the AI's assessment of impactful papers. The key features identified through XAI for determining impactful papers were 'paper network PageRank', 'changes in citations per paper', 'total citations', 'changes in the author's h-index', and 'citations per paper of the journal', showing that papers, authors, and journals all play significant roles in the evaluation of an individual paper. Comparing domestic and international mathematics education research revealed variations in these patterns; notably, 'co-authorship network PageRank' carried greater weight in domestic research. The XAI model proposed in this study serves as a tool for determining the impact of papers with AI and gives researchers strategic direction when writing papers: expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major factors enhancing a paper's impact. With these findings, researchers can understand clearly how their work is perceived and evaluated in academia and identify the key factors influencing those evaluations. The study offers a novel approach to evaluating the impact of mathematics education papers, a process that has traditionally consumed significant time and resources, and presents a paradigm applicable to evaluation in academic fields beyond mathematics education.
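
A minimal sketch of the pipeline the abstract names, Random Forest plus SHAP over network-derived features; the feature names follow the abstract, while the data and target are random placeholders rather than the study's dataset:

```python
# Random Forest impact model explained with SHAP; data is synthetic.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

features = ["paper_pagerank", "citations_per_paper_change",
            "total_citations", "author_h_index_change",
            "journal_citations_per_paper"]
X = pd.DataFrame(np.random.rand(300, 5), columns=features)  # placeholder
y = X["paper_pagerank"] * 2 + np.random.rand(300)           # placeholder impact score

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the impact score across all papers
shap.summary_plot(shap_values, X, show=False)
```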

Application of XAI Models to Determine Employment Factors in the Software Field : with focus on University and Vocational College Graduates (소프트웨어 분야 취업 결정 요인에 대한 XAI 모델 적용 연구 : 일반대학교와 전문대학 졸업자를 중심으로)

  • Kwon Joonhee;Kim Sungrim
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.20 no.1
    • /
    • pp.31-45
    • /
    • 2024
  • The purpose of this study is to explain the factors that determine employment in the software field, using the Graduates Occupational Mobility Survey from the Korea Employment Information Service. The paper builds machine learning models of employment in the software field and then explains their employment factors using explainable artificial intelligence, with separate models for university graduates and for vocational college graduates. Both a black-box model and a glass-box model are explained and interpreted: SHAP explanations are used for the black-box model and EBM explanations for the glass-box model. The results show that the positive employment factors in the models are one's major, vocational education and training, the semester in which employment preparation began, and intern experience. This study provides a job preparation guide for university and vocational college students who want to work in the software field.
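
A brief sketch contrasting the two explanation routes the abstract pairs: SHAP applied post hoc to a black-box gradient-boosted model, and InterpretML's EBM as the glass-box model. The survey feature names and data are illustrative assumptions:

```python
# Black-box (XGBoost + SHAP) vs glass-box (EBM) explanation; data is synthetic.
import numpy as np
import pandas as pd
import shap
from interpret.glassbox import ExplainableBoostingClassifier
from xgboost import XGBClassifier

cols = ["major_match", "vocational_training", "job_prep_semester", "intern_experience"]
X = pd.DataFrame(np.random.randint(0, 2, (400, 4)), columns=cols)  # placeholder
y = (X.sum(axis=1) + np.random.rand(400) > 2.5).astype(int)        # placeholder: 1 = employed

# Black-box route: fit XGBoost, then explain it post hoc with SHAP
bb = XGBClassifier(n_estimators=100).fit(X, y)
shap_values = shap.TreeExplainer(bb).shap_values(X)

# Glass-box route: EBM is interpretable by construction
ebm = ExplainableBoostingClassifier().fit(X, y)
global_expl = ebm.explain_global()  # per-feature shape functions and importances
```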

Development of System for Enhancing the Quality of Power Generation Facilities Failure History Data Based on Explainable AI (XAI) (XAI 기반 발전설비 고장 기록 데이터 품질 향상 시스템 개발)

  • Kim Yu Rim;Park Jeong In;Park Dong Hyun;Kang Sung Woo
    • Journal of Korean Society for Quality Management
    • /
    • v.52 no.3
    • /
    • pp.479-493
    • /
    • 2024
  • Purpose: Differences in how power plant workers interpret failures, and inconsistency in how failures are recorded, degrade the quality of failure history data and hamper efficient plant operation. The purpose of this study is to propose a system that consistently classifies power generation facility failures from the failure history text written by workers. Methods: The study uses data collected from three coal unloaders operated by Korea Midland Power Co., Ltd. from 2012 to 2023. Failures are classified by soft voting that combines the predict_proba class probabilities of four machine learning models (Random Forest, Logistic Regression, XGBoost, and SVM) with scores obtained from word dictionaries built for each failure type using LIME, one of the XAI (Explainable Artificial Intelligence) methods. On this basis, a failure classification system is proposed to improve the quality of power generation facility failure history data. Results: When the system was applied to the failure history data of a Continuous Ship Unloader, XGBoost performed best with a Macro F1 score of 93%. Applying the proposed system raised the Macro F1 score of Logistic Regression by up to 0.17 compared with the model alone, and all four models achieved accuracy and Macro F1 scores equal to or higher than their single-model baselines. Conclusion: This study proposes a failure classification system for power generation facilities that improves the quality of failure history data, which should contribute to cost reduction, facility stability, and further gains in power plant operating efficiency.
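
A compact sketch of the soft-voting blend the Methods section outlines: averaged predict_proba outputs from the four named classifiers combined with scores from LIME-derived per-failure-type word dictionaries. The dictionary scoring and the blend weight `alpha` are illustrative assumptions, since the abstract does not give the exact weighting:

```python
# Soft voting over four classifiers plus a word-dictionary score; synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier

X = np.random.rand(200, 30)                # placeholder vectorized failure text
y = np.random.randint(0, 3, 200)           # placeholder: three failure types

models = [
    RandomForestClassifier().fit(X, y),
    LogisticRegression(max_iter=1000).fit(X, y),
    XGBClassifier(n_estimators=100).fit(X, y),
    SVC(probability=True).fit(X, y),       # probability=True enables predict_proba
]

def soft_vote(models, X):
    """Average the class-probability outputs of the fitted models."""
    return np.mean([m.predict_proba(X) for m in models], axis=0)

def dictionary_score(texts, word_dicts):
    """Score each text against LIME-built word dictionaries, one per failure type."""
    scores = np.zeros((len(texts), len(word_dicts)))
    for i, text in enumerate(texts):
        for j, words in enumerate(word_dicts):
            scores[i, j] = sum(w in text for w in words)
    totals = scores.sum(axis=1, keepdims=True)
    return np.divide(scores, totals, out=np.zeros_like(scores), where=totals > 0)

def classify(X, texts, word_dicts, alpha=0.7):
    """Blend model probabilities with dictionary scores, then pick a class."""
    blended = alpha * soft_vote(models, X) + (1 - alpha) * dictionary_score(texts, word_dicts)
    return blended.argmax(axis=1)
```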

Yoga Poses Image Classification and Interpretation Using Explainable AI (XAI) (XAI 를 활용한 설명 가능한 요가 자세 이미지 분류 모델)

  • Yu Rim Park;Hyon Hee Kim
    • Annual Conference of KIPS
    • /
    • 2023.05a
    • /
    • pp.590-591
    • /
    • 2023
  • As interest in health has grown and diverse exercise content has spread, opportunities to work out indoors have increased. Performing movements incorrectly without an expert's guidance, however, carries a high risk of serious injury. This study builds a CNN-based yoga pose classification model and applies explainable AI techniques to present interpretations of its predictions. By giving users an explainable and trustworthy model, it should help them choose the correct pose for themselves and lower the probability of injury from overexertion.
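
The abstract does not name a specific XAI technique, so as one simple possibility in the same spirit, this sketch uses occlusion sensitivity to visualize which image regions a pose classifier relies on; `model` is a hypothetical fitted Keras classifier over (H, W, 3) images scaled to [0, 1]:

```python
# Occlusion-sensitivity sketch; `model` is an assumed fitted Keras classifier.
import numpy as np

def occlusion_map(model, image, target_class, patch=16):
    """Slide a grey patch over the image and record the confidence drop."""
    h, w, _ = image.shape
    base = model.predict(image[None], verbose=0)[0, target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5   # grey patch
            p = model.predict(occluded[None], verbose=0)[0, target_class]
            heat[i // patch, j // patch] = base - p    # large drop = important region
    return heat
```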

A Case Study on the Effect of the Artificial Intelligence Storytelling(AI+ST) Learning Method (인공지능 스토리텔링(AI+ST) 학습 효과에 관한 사례연구)

  • Yeo, Hyeon Deok;Kang, Hye-Kyung
    • Journal of The Korean Association of Information Education
    • /
    • v.24 no.5
    • /
    • pp.495-509
    • /
    • 2020
  • This study is theoretical research exploring ways to learn AI effectively in the age of intelligent information driven by artificial intelligence (hereinafter AI). The emphasis is on presenting a teaching method that makes AI education accessible not only to students majoring in mathematics, statistics, or computer science, but also to other majors such as the humanities and social sciences and to the general public. Given the need for 'explainable AI (XAI)' and 'the importance of storytelling for a sensible and intelligent machine (AI)' noted by Patrick Winston at the MIT AI Institute [33], research on an AI storytelling learning model is significant. To this end, we discuss its possibility through a pilot study targeting general students at a university in Daegu. First, we introduce the AI storytelling (AI+ST) learning method [30] and review its educational goals, system of contents, learning methodology, and use of new AI tools. Then the learners' results are compared and analyzed around two research questions: 1) Can the AI+ST learning method complement algorithm-driven or developer-centered learning methods? 2) Is the AI+ST learning method effective for students, helping them develop their AI comprehension, interest, and application skills?

Understanding Customer Purchasing Behavior in E-Commerce using Explainable Artificial Intelligence Techniques (XAI 기법을 이용한 전자상거래의 고객 구매 행동 이해)

  • Lee, Jaejun;Jeong, Ii Tae;Lim, Do Hyun;Kwahk, Kee-Young;Ahn, Hyunchul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.387-390
    • /
    • 2021
  • As the e-commerce market has grown rapidly, identifying customers' fast-changing needs has come to be seen as a factor directly tied to corporate profits. Companies are therefore stepping up efforts to use their accumulated customer data to grasp those needs quickly and accurately. Previous work focused mainly on predicting purchase behavior and had difficulty interpreting the processes surrounding customer actions. This study identifies the factors at work when a customer confirms or refunds a purchased item and proposes a new model for predicting which customers will request refunds. The prediction model applies XGBoost, a tree-based ensemble method adopted for its predictive power, and SHAP, one of the representative explainable AI (XAI) techniques, is used to identify the factors influencing customer intent. This reveals not only the overall influence of each factor on a given behavior but also which factors affected each individual customer's refund decision. Companies are expected to use these customer-level drivers for personalized marketing.
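
A minimal sketch of the XGBoost-plus-SHAP recipe the abstract describes, covering both the global factor view and a per-customer explanation; feature names and data are illustrative placeholders, not the study's variables:

```python
# Refund prediction with XGBoost, explained globally and locally via SHAP.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

cols = ["price", "delivery_days", "prior_refunds", "review_score"]  # assumed features
X = pd.DataFrame(np.random.rand(500, 4), columns=cols)              # placeholder
y = (X["prior_refunds"] + np.random.rand(500) > 1.2).astype(int)    # placeholder: 1 = refund

model = XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global: which factors drive refund decisions overall
shap.summary_plot(shap_values, X, show=False)
# Local: why one specific customer is predicted to refund
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0],
                matplotlib=True, show=False)
```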

A Study on Classification Models for Predicting Bankruptcy Based on XAI (XAI 기반 기업부도예측 분류모델 연구)

  • Jihong Kim;Nammee Moon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.333-340
    • /
    • 2023
  • Efficient prediction of corporate bankruptcy is important for financial institutions to make sound lending decisions and reduce loan default rates, and many studies have used classification models built on artificial intelligence technology. In the financial industry, however excellent a new predictive model's performance, it must be accompanied by an intuitive explanation of the basis for its decisions. Recently the US, the EU, and South Korea have all introduced a right to request explanations of algorithmic decisions, so transparency in the use of AI in the financial sector must be secured. In this paper, an interpretable AI-based classification model is proposed using publicly available corporate bankruptcy data. First, data preprocessing and 5-fold cross-validation were performed, and classification performance was compared across ten optimized supervised learning classifiers, including logistic regression, SVM, XGBoost, and LightGBM. LightGBM was confirmed as the best-performing model, and SHAP, an explainable artificial intelligence technique, was applied to provide a post-hoc explanation of the bankruptcy prediction process.
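
A condensed sketch of this workflow: compare candidate classifiers under 5-fold cross-validation, keep the best performer (LightGBM in the paper), and attach a SHAP post-hoc explanation. The data is a random placeholder and only four of the ten compared models are shown:

```python
# Model comparison via 5-fold CV, then SHAP on the chosen model; synthetic data.
import numpy as np
import pandas as pd
import shap
from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from xgboost import XGBClassifier

X = pd.DataFrame(np.random.rand(600, 8),
                 columns=[f"ratio_{i}" for i in range(8)])  # placeholder financial ratios
y = np.random.randint(0, 2, 600)                            # placeholder: 1 = bankrupt

candidates = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "XGBoost": XGBClassifier(n_estimators=100),
    "LightGBM": LGBMClassifier(n_estimators=100),
}
for name, clf in candidates.items():
    score = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: mean 5-fold F1 = {score:.3f}")

best = candidates["LightGBM"].fit(X, y)
shap_values = shap.TreeExplainer(best).shap_values(X)  # basis of each prediction
```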