• Title/Summary/Keyword: Explainable AI

Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning

  • Gil-Sun Hong;Miso Jang;Sunggu Kyung;Kyungjin Cho;Jiheon Jeong;Grace Yoojin Lee;Keewon Shin;Ki Duk Kim;Seung Min Ryu;Joon Beom Seo;Sang Min Lee;Namkug Kim
    • Korean Journal of Radiology / v.24 no.11 / pp.1061-1080 / 2023
  • Artificial intelligence (AI) in radiology is a rapidly developing field, with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks of AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. This review presents and discusses possible solutions to these challenges, including training with longitudinal and multimodal datasets, dense training with multitask and multimodal learning, self-supervised contrastive learning, image modification and synthesis using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.

Performance Analysis of Explainers for Sentiment Classifiers of Movie Reviews (영화평 감성 분석기를 대상으로 한 설명자의 성능 분석)

  • Park, Cheon-Young;Lee, Kong Joo
    • Annual Conference on Human and Language Technology / 2020.10a / pp.563-568 / 2020
  • In this study, we applied explainer models that can provide supporting evidence for deep learning models, which are known as black boxes. For movie-review sentiment analysis, we built sentiment classifiers using deep learning models composed of MLPs and CNNs, as well as a Gradient Boosting model, an ensemble of decision trees. As explainer models, we used IG, which is based on gradients; CAM, which is based on the weights between layers; LIME, which uses an explainable surrogate model; and SHAP, which estimates a linear model over the input features. To examine the characteristics of the explainer models, we extracted heatmaps and the N most relevant features. We also performed a quantitative evaluation that measures the change in classifier performance while removing input features in order of the contribution assigned by each explainer. In addition, we proposed and applied a new evaluation method, "explanation-evidence accuracy," which examines agreement with the evidence humans use in judgment.

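To make the deletion-style evaluation above concrete, here is a minimal sketch, assuming a toy TF-IDF + logistic regression sentiment classifier rather than the paper's MLP/CNN models: LIME (one of the four explainers compared) attributes the prediction to tokens, then the most-attributed tokens are removed and the drop in predicted probability is measured. All texts below are invented for illustration.

```python
# Hedged sketch: LIME on a toy sentiment classifier, plus a deletion-based
# quantitative check (remove top-attributed tokens, measure the output change).
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great acting and a moving story", "dull plot and wooden acting",
               "a joyful, beautiful film", "boring, predictable, a waste of time"]
train_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
text = "a great story told with dull pacing"
exp = explainer.explain_instance(text, pipe.predict_proba, num_features=5)
print(exp.as_list())  # (token, contribution) pairs for the positive class

# Deletion test: drop the two most positively attributed tokens and
# observe how far the positive-class probability falls.
p_before = pipe.predict_proba([text])[0][1]
top_tokens = [tok for tok, w in exp.as_list() if w > 0][:2]
reduced = " ".join(t for t in text.split() if t not in top_tokens)
p_after = pipe.predict_proba([reduced])[0][1]
print(f"P(positive): {p_before:.3f} -> {p_after:.3f} after deletion")
```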

Yoga Poses Image Classification and Interpretation Using Explainable AI (XAI) (XAI 를 활용한 설명 가능한 요가 자세 이미지 분류 모델)

  • Yu Rim Park;Hyon Hee Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.590-591 / 2023
  • With growing interest in health and the spread of diverse exercise content, opportunities to work out indoors have recently increased. However, performing inaccurate movements without expert guidance carries a high risk of serious injury. This study builds a CNN-based yoga pose classification model and applies explainable AI techniques to provide interpretations of its predictions. By offering users an explainable and trustworthy model, it helps them determine the correct posture for themselves and is also expected to lower the probability of injury from overexertion.
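
The abstract does not name the XAI technique applied to the CNN; Grad-CAM is a common choice for explaining CNN image classifiers, so the sketch below assumes it purely for illustration. The ResNet-18 backbone and random input tensor are stand-ins for the paper's pose classifier and yoga images.

```python
# Hedged Grad-CAM sketch: weight the last conv block's feature maps by
# their spatially averaged gradients to get a class-evidence heatmap.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for the pose classifier
model.eval()

feats, grads = {}, {}
target = model.layer4  # last conv block: typical Grad-CAM target
target.register_forward_hook(lambda m, i, o: feats.update(a=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)   # placeholder for a yoga-pose image
score = model(x)[0].max()         # logit of the predicted class
model.zero_grad()
score.backward()

# Grad-CAM: channel weights from averaged gradients, sum, keep positive part.
w = grads["a"].mean(dim=(2, 3), keepdim=True)       # (1, C, 1, 1)
cam = F.relu((w * feats["a"]).sum(dim=1))           # (1, 7, 7) for ResNet-18
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```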

Trend in eXplainable Machine Learning for Intelligent Self-organizing Networks (지능형 Self-Organizing Network를 위한 설명 가능한 기계학습 연구 동향)

  • D.S. Kwon;J.H. Na
    • Electronics and Telecommunications Trends / v.38 no.6 / pp.95-106 / 2023
  • As artificial intelligence has become commonplace in various fields, the transparency of AI in its development and implementation has become an important issue. In safety-critical areas, the explainability and understandability of artificial intelligence are being actively studied. Machine learning has also been applied to make self-organizing networks (SON) intelligent, but transparency has been neglected in this application despite the critical decisions involved in operating mobile communication systems. We describe the concepts of eXplainable machine learning (ML), along with research trends, major issues, and research directions. After summarizing ML research on SON, we analyze the research directions for the explainable ML required by intelligent SON in beyond-5G and 6G communication.

Application of XAI Models to Determine Employment Factors in the Software Field : with focus on University and Vocational College Graduates (소프트웨어 분야 취업 결정 요인에 대한 XAI 모델 적용 연구 : 일반대학교와 전문대학 졸업자를 중심으로)

  • Kwon Joonhee;Kim Sungrim
    • Journal of Korea Society of Digital Industry and Information Management / v.20 no.1 / pp.31-45 / 2024
  • The purpose of this study is to explain the factors that determine employment in the software field, using the Graduates Occupational Mobility Survey from the Korea Employment Information Service. The paper builds machine learning models of software-field employment and then explains the employment factors in those models using explainable artificial intelligence. Separate models are built for university graduates and for vocational college graduates. Both a black-box model and a glass-box model are explained and interpreted: SHAP is used to interpret the black-box model, and EBM explanations the glass-box model. The results show that the factors with a positive impact on employment are one's major, vocational education and training, the semester in which employment preparation begins, and internship experience. The study thus provides a job-preparation guide for university and vocational college students who want to work in the software field.
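
A minimal sketch of the paper's two-track setup, using invented placeholder features (not the GOMS survey variables): a black-box gradient boosting model explained post hoc with SHAP, and a glass-box EBM from the interpret library that is interpretable by construction.

```python
import numpy as np
import pandas as pd
import shap
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "major_match": rng.integers(0, 2, 500),        # hypothetical features
    "intern_experience": rng.integers(0, 2, 500),
    "training_hours": rng.integers(0, 300, 500),
})
y = (X["major_match"] + X["intern_experience"] + rng.random(500) > 1.5).astype(int)

# Black-box track: gradient boosting explained post hoc with SHAP
black_box = GradientBoostingClassifier().fit(X, y)
shap_values = shap.TreeExplainer(black_box).shap_values(X)
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(3))))

# Glass-box track: EBM's per-feature shape functions are directly inspectable
ebm = ExplainableBoostingClassifier().fit(X, y)
ebm_global = ebm.explain_global()
# In a notebook: from interpret import show; show(ebm_global)
```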

A Study on Classification Models for Predicting Bankruptcy Based on XAI (XAI 기반 기업부도예측 분류모델 연구)

  • Jihong Kim;Nammee Moon
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.333-340 / 2023
  • Efficient prediction of corporate bankruptcy is an important part of making appropriate lending decisions at financial institutions and reducing loan default rates. Many studies have used classification models based on artificial intelligence technology. In the financial industry, however, even if a new predictive model performs excellently, it must be accompanied by an intuitive explanation of the basis on which its results were determined. Recently the US, the EU, and South Korea have all introduced a right to request explanations of algorithmic decisions, so transparency in the use of AI in the financial sector must be secured. In this paper, an interpretable AI-based classification model is proposed using publicly available corporate bankruptcy data. Data preprocessing and 5-fold cross-validation were performed, and classification performance was compared across ten optimized supervised learning classifiers, including logistic regression, SVM, XGBoost, and LightGBM. LightGBM was confirmed as the best-performing model, and SHAP, an explainable AI technique, was applied to provide a post-hoc explanation of the bankruptcy prediction process.
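
A hedged sketch of the pipeline described above, on synthetic data standing in for the public bankruptcy dataset: 5-fold cross-validation over a few of the candidate classifiers, then SHAP applied post hoc to the best model (LightGBM in the paper).

```python
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the public corporate bankruptcy data
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "lightgbm": LGBMClassifier(),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")

# Post-hoc explanation of the best model with SHAP
best = candidates["lightgbm"].fit(X, y)
shap_values = shap.TreeExplainer(best).shap_values(X)  # per-feature attributions
# shap.summary_plot(shap_values, X)  # global view, in a plotting environment
```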

Development of a Type 2 Diabetes Prediction Algorithm Based on Big Data (빅데이터 기반 2형 당뇨 예측 알고리즘 개발)

  • Hyun Sim;HyunWook Kim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.5 / pp.999-1008 / 2023
  • Early prediction of chronic diseases such as diabetes is an important issue, and improving the accuracy of diabetes prediction is especially important. Various machine learning and deep learning methodologies are being introduced for diabetes prediction, but these technologies require large amounts of data to outperform other approaches, and their training cost is high due to complex data models. In this study, we examine, using the Pima dataset and k-fold cross-validation, the claim that DNNs reduce the efficiency of diabetes diagnosis models. Machine learning classification methods such as decision trees, SVM, random forests, logistic regression, and KNN, along with various ensemble techniques, were compared to determine which algorithm produces the best predictions. After training and testing all classification models, the proposed system achieved its best results with an XGBoost classifier combined with the ADASYN oversampling method: an accuracy of 81%, an F1 score of 0.81, and an AUC of 0.84. Additionally, a domain adaptation method was implemented to demonstrate the versatility of the proposed system, and an explainable AI approach using the LIME and SHAP frameworks was applied to understand how the model arrives at its final predictions.
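
The winning configuration above (XGBoost plus ADASYN) can be sketched as follows. The CSV path and the Outcome column are assumptions based on the public Pima diabetes dataset, not artifacts from the paper.

```python
import pandas as pd
from imblearn.over_sampling import ADASYN
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("pima_diabetes.csv")          # placeholder path
X, y = df.drop(columns="Outcome"), df["Outcome"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# ADASYN synthesizes minority-class samples adaptively (more where the
# class boundary is hard); it is applied to the training split only.
X_res, y_res = ADASYN(random_state=0).fit_resample(X_tr, y_tr)

model = XGBClassifier(eval_metric="logloss").fit(X_res, y_res)
proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)
print("acc", accuracy_score(y_te, pred),
      "f1", f1_score(y_te, pred),
      "auc", roc_auc_score(y_te, proba))
```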

Trustworthy AI Framework for Malware Response (악성코드 대응을 위한 신뢰할 수 있는 AI 프레임워크)

  • Shin, Kyounga;Lee, Yunho;Bae, ByeongJu;Lee, Soohang;Hong, Heeju;Choi, Youngjin;Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.5 / pp.1019-1034 / 2022
  • Malware attacks are becoming more prevalent in the hyper-connected society of the fourth industrial revolution. Automating malware detection with artificial intelligence is attracting attention as a new way to respond, but using AI without any guarantee of its reliability brings greater risks and side effects. The EU and the United States are seeking ways to secure the reliability of artificial intelligence, and in 2021 the Korean government announced a strategy for realizing trustworthy AI built on five attributes: safety, explainability, transparency, robustness, and fairness. We develop four of these elements, safety, explainability, transparency, and fairness, excluding robustness, in a malware detection model. In particular, we demonstrated stable generalization performance (model accuracy) through verification by external agencies, and focused development on explainability, including transparency. Because an AI model's behavior is determined by changing training data, it requires life-cycle management, which is driving demand for MLOps frameworks that integrate data, model development, and service operation. Our response services for executable (EXE) and document-based malware act as data collectors and service operators at the same time, connecting to data pipelines that obtain labeling and cleansing information through external APIs. Using cloud SaaS and standard APIs, we also facilitate integration with other security services and infrastructure scaling.
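
One pipeline step described above, labeling collected samples through an external API, might look like the following sketch. The endpoint URL, response fields, and storage layout are all hypothetical placeholders; the paper does not publish its pipeline code.

```python
# Highly simplified sketch of a labeling step in an MLOps-style pipeline:
# hash a collected sample, query an external API for label metadata, and
# append the record to a training store for the next retraining cycle.
import hashlib
import json
import urllib.request

LABEL_API = "https://example.invalid/reputation"   # hypothetical endpoint

def label_sample(path: str) -> dict:
    """Hash a collected file and fetch label metadata from an external API."""
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    with urllib.request.urlopen(f"{LABEL_API}?sha256={sha256}") as resp:
        verdict = json.load(resp)                  # e.g. {"label": "malicious"}
    return {"sha256": sha256, "label": verdict.get("label", "unknown")}

def append_to_training_store(record: dict, store: str = "train.jsonl") -> None:
    """Labeled, purified records accumulate for later model retraining."""
    with open(store, "a", encoding="utf-8") as out:
        out.write(json.dumps(record) + "\n")

# Usage: append_to_training_store(label_sample("sample.exe"))
```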

Development of a Resort's Cross-selling Prediction Model and Its Interpretation using SHAP (리조트 교차판매 예측모형 개발 및 SHAP을 이용한 해석)

  • Boram Kang;Hyunchul Ahn
    • The Journal of Bigdata / v.7 no.2 / pp.195-204 / 2022
  • The tourism industry is facing a crisis due to the recent COVID-19 pandemic, and improving profitability is vital to overcoming it. In situations such as COVID-19, it is more efficient to raise the unit price by selling additional products to guests who have already visited than to pursue an aggressive sales strategy aimed at increasing room occupancy. Previous tourism studies have used machine learning for demand forecasting, but few have addressed cross-selling prediction. Although a resort belongs to the same accommodation industry as a hotel in a broad sense, no prior study has specialized in the resort industry, which operates on a membership basis and offers facilities for both lodging and cooking. In this study, we therefore propose a cross-selling prediction model built with various machine learning techniques on an actual resort company's accommodation data. By applying explainable AI (XAI) techniques, we interpret which factors affect cross-selling and empirically confirm how they do so.
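
To illustrate the interpretation step, here is a minimal sketch with invented booking features (the resort's actual variables are not public): fit a classifier and read per-guest SHAP attributions to see which factors push a cross-sell prediction up or down. Exact output shapes can vary slightly across shap versions.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "nights_stayed": rng.integers(1, 8, 400),   # hypothetical features
    "party_size": rng.integers(1, 6, 400),
    "is_member": rng.integers(0, 2, 400),
    "prior_visits": rng.integers(0, 10, 400),
})
y = ((X["is_member"] == 1) & (X["party_size"] >= 3)).astype(int)  # toy target

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.Explainer(model, X.iloc[:100])   # background sample
sv = explainer(X.iloc[:5])                        # (samples, features, classes)

# Signed contribution of each feature to the cross-sell class for guest 0
for name, val in zip(X.columns, sv[0, :, 1].values):
    print(f"{name:>14}: {val:+.3f}")
```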

IoT-Based Health Big-Data Process Technologies: A Survey

  • Yoo, Hyun;Park, Roy C.;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.974-992 / 2021
  • Recently, the healthcare field has undergone rapid changes owing to the accumulation of health big data and the development of machine learning. Data mining research in healthcare has characteristics distinct from other data analyses, such as the structural complexity of medical data, the need for medical expertise, and the security of personal medical information. Various methods have been implemented to address these issues, including machine learning models and cloud platforms. However, machine learning models suffer from opaque result interpretation, and cloud platforms require more in-depth research on security and efficiency. To address these issues, this paper presents recent technologies for Internet-of-Things-based (IoT-based) health big data processing. We present a cloud-based IoT health platform and health big data processing technology that reduce medical data management costs and enhance safety. We also present data mining technology for health-risk prediction, the core of healthcare. Finally, we propose a study that uses explainable artificial intelligence to enhance the reliability and transparency of the decision-making system, which is otherwise called a black-box model owing to its lack of transparency.