• Title/Summary/Keyword: Explainability


A Study on the Explainability of Inception Network-Derived Image Classification AI Using National Defense Data (국방 데이터를 활용한 인셉션 네트워크 파생 이미지 분류 AI의 설명 가능성 연구)

  • Kangun Cho
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.27 no.2
    • /
    • pp.256-264
    • /
    • 2024
  • In the last 10 years, AI has made rapid progress, and image classification in particular has shown excellent performance based on deep learning. Nevertheless, because deep learning models behave as black boxes, the lack of explainability of their judgments makes it difficult to use them in critical decision-making domains such as national defense, autonomous driving, medical care, and finance. To overcome this limitation, this study applies a model explanation algorithm capable of local interpretation to Inception network-derived AI models in order to analyze the grounds on which they classify national defense data. Specifically, we conduct a comparative analysis of explainability based on confidence values by performing LIME analysis on the Inception v2_resnet model, and we verify the similarity between human interpretations and LIME explanations. Furthermore, by comparing the LIME explanations of the Top-1 outputs of the Inception v3, Inception v2_resnet, and Xception models, we confirm the feasibility of using XAI to compare the efficiency and usability of deep learning networks.
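
As a rough illustration of the kind of LIME analysis this abstract describes, the sketch below applies `lime_image` to a pretrained torchvision Inception v3 classifier. The ImageNet weights and the random placeholder image are assumptions for illustration only; they are not the study's defense-trained models or data.

```python
# Illustrative sketch only: LIME explanation of an Inception-family image classifier.
import numpy as np
import torch
from torchvision import models, transforms
from lime import lime_image

model = models.inception_v3(weights="DEFAULT").eval()  # stand-in for the study's model
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def predict(images):
    # LIME passes a batch of HxWx3 arrays; return class probabilities for each.
    batch = torch.stack([preprocess(img.astype(np.uint8)) for img in images])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

explainer = lime_image.LimeImageExplainer()
image = np.random.randint(0, 255, (299, 299, 3), dtype=np.uint8)  # placeholder input image
explanation = explainer.explain_instance(image, predict, top_labels=1, num_samples=1000)
# Superpixels that most supported the Top-1 class: the kind of evidence the study
# compares against human interpretation.
img_with_mask, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True)
```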

A sensitivity analysis of machine learning models on fire-induced spalling of concrete: Revealing the impact of data manipulation on accuracy and explainability

  • Mohammad K. al-Bashiti;M.Z. Naser
    • Computers and Concrete
    • /
    • v.33 no.4
    • /
    • pp.409-423
    • /
    • 2024
  • Using an extensive database, a sensitivity analysis across fifteen machine learning (ML) classifiers was conducted to evaluate the impact of various data manipulation techniques, evaluation metrics, and explainability tools. The results reveal that the examined models achieve accuracies ranging from 72% to 93% in predicting the fire-induced spalling of concrete and identify the light gradient boosting machine, extreme gradient boosting, and random forest algorithms as the best-performing models. According to these models, the six key factors influencing spalling are maximum exposure temperature, heating rate, compressive strength of concrete, moisture content, silica fume content, and the quantity of polypropylene fiber. Our analysis also documents some conflicting results observed with the deep learning model. As such, this study highlights the necessity of selecting suitable models and carefully evaluating the presence of possible outcome biases.
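
To make the model comparison concrete, here is a minimal sketch of a cross-validated accuracy comparison across the three algorithms the abstract singles out. The synthetic feature matrix, its column meanings, and the hyperparameters are illustrative assumptions, not the paper's database or configuration.

```python
# Hypothetical sketch: comparing spalling classifiers with cross-validated accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

# Placeholder data standing in for the paper's database: columns loosely follow the
# influential factors named in the abstract.
rng = np.random.default_rng(0)
X = rng.random((500, 6))     # max temperature, heating rate, f'c, moisture, silica fume, PP fiber
y = rng.integers(0, 2, 500)  # 1 = spalling observed, 0 = no spalling

models = {
    "LightGBM": LGBMClassifier(),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "RandomForest": RandomForestClassifier(n_estimators=300),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```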

Relationships between Children's Social Development and Day Care Quality, Child-care Experience and Family Characteristics (탁아기관의 질, 탁아경험 및 가족특성과 아동의 사회성발달과의 관계)

  • Yang, Yeon Suk;Cho, Bok Hee
    • Korean Journal of Child Studies
    • /
    • v.17 no.2
    • /
    • pp.181-193
    • /
    • 1996
  • The purpose of this study was: (1) to examine relationships between social development and day care quality, child-care experience and family characteristics, and (2) to investigate the explainability of those related variables for social development. Subjects for this study were 252 4-year-old children and their mothers from 32 day care centers in Seoul. Harms & Clifford's Early Childhood Environment Rating Scale was used to measure the quality of day care. The main results were as follows: (1) Day care quality, child-care experience and family characteristics were significantly related to social development. (2) Child's gender, months of age, mother's child rearing attitude, the length of child-care experience, overall quality of day care, and group size significantly predicted social development. 33% of the variance of social development was explained by these variables. The relative influence of these variables to the prediction of social development was about the same.

Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization (Self-Attention 시각화를 사용한 기계번역 서비스의 번역 오류 요인 설명)

  • Zhang, Chenglong;Ahn, Hyunchul
    • Journal of Information Technology Services
    • /
    • v.21 no.2
    • /
    • pp.85-95
    • /
    • 2022
  • This study analyzed the translation error factors of machine translation services such as Naver Papago and Google Translate through Self-Attention path visualization. Self-Attention is a key mechanism of the Transformer and BERT NLP models and has recently been widely used in machine translation. We propose a method to explain the translation error factors of machine translation algorithms by comparing the Self-Attention paths of ST (the source text) and ST' (a transformed ST whose meaning is unchanged but whose translation output is more accurate). Through this method, it becomes possible to analyze a machine translation algorithm's internal process, which is otherwise invisible like a black box. In our experiment, we could explore the factors that caused translation errors by analyzing differences in the attention paths of key words. The study used the XLM-RoBERTa multilingual NLP model provided through exBERT for Self-Attention visualization and applied it to two examples of Korean-Chinese and Korean-English translation.
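
The following sketch shows one way to extract per-layer Self-Attention matrices from the XLM-RoBERTa model via Hugging Face `transformers`, so that the attention paths of an ST and a meaning-preserving ST' can be compared. The placeholder sentences are assumptions; the paper performed the visualization itself through exBERT.

```python
# Hypothetical sketch: extracting Self-Attention weights for two sentence variants
# (ST and a meaning-preserving ST') so their attention paths can be compared.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_attentions=True).eval()

def attention_maps(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions: one (1, heads, seq_len, seq_len) tensor per layer
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return tokens, torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, seq, seq)

tokens_st, attn_st = attention_maps("ST: the original source sentence")     # placeholder text
tokens_st2, attn_st2 = attention_maps("ST': the reworded source sentence")  # placeholder text
# Averaging over heads gives one attention path per layer, which can then be visualized
# side by side to locate where the two paths diverge around key words.
print(attn_st.mean(dim=1).shape, attn_st2.mean(dim=1).shape)
```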

Performance Evaluation of FPN-Attention Layered Model for Improving Visual Explainability of Object Recognition (객체 인식 설명성 향상을 위한 FPN-Attention Layered 모델의 성능 평가)

  • Youn, Seok Jun;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.1311-1314
    • /
    • 2022
  • Visual explainability is required to classify objects well in DNN-based object recognition. Visual explainability was proposed to interpret the basis of a prediction by expressing the prediction for an object class as a pixel-wise attribution. Backbone architectures based on pyramidal features, designed to provide scale-invariant features, are widely used in object detection and classification, but applying such a feature pyramid to a trainable attention mechanism increases computational and memory complexity. In this paper, we propose an FPN-Attention Layered Network that improves object recognition performance and explainability over a standard FPN, and we evaluate its characteristics experimentally. We quantitatively evaluated how much a scheme that improves explainability during object recognition affects recognition performance when only a conventional FPN is used. Experiments confirm that the proposed model improves visual explainability by taking multi-level features into account at a low computing overhead and, as a result, can improve object recognition performance.
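
As a loose illustration of attaching attention to FPN levels, the sketch below reweights each torchvision FPN feature map with a lightweight 1x1-convolution spatial attention layer whose map can double as a visual explanation. The layer design, channel count, and level keys are assumptions for illustration and do not reproduce the authors' FPN-Attention Layered Network.

```python
# Hypothetical sketch: applying a lightweight spatial attention layer to each FPN level.
import torch
import torch.nn as nn
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

class SpatialAttention(nn.Module):
    """1x1 conv producing a per-pixel attention map used to reweight a feature level."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):
        attn = torch.sigmoid(self.score(feat))  # (B, 1, H, W) attention map
        return feat * attn, attn                # reweighted features + explanation map

backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None)  # FPN levels have 256 channels
attention = nn.ModuleDict({k: SpatialAttention(256) for k in ["0", "1", "2", "3", "pool"]})

x = torch.randn(1, 3, 512, 512)
features = backbone(x)  # dict of multi-level FPN feature maps
explanations = {}
for level, feat in features.items():
    features[level], explanations[level] = attention[level](feat)
# `explanations` holds per-level attention maps that can be upsampled and overlaid
# on the input image as a visual explanation of the recognition result.
```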

A Research on Explainability of the Medical AI Model based on Attention and Attention Flow Graph (어텐션과 어텐션 흐름 그래프를 활용한 의료 인공지능 모델의 설명가능성 연구)

  • Lee, You-Jin;Chae, Dong-Kyu
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.520-522
    • /
    • 2022
  • Medical AI shows high accuracy for certain diagnoses, but it is not actively used because of concerns about model reliability. This has raised the need to explain the reasons behind an AI model's diagnosis, and research on explainable medical AI is being actively pursued. However, most of this work targets medical imaging AI such as MRI, while explainability research on models based on non-image electronic health record (EHR) data has progressed slowly because of the complexity of EHR data itself. In this paper, we preprocess the MIMIC-III (Medical Information Mart for Intensive Care) EHR data, represent it as a graph, and train a GCT (Graph Convolutional Transformer) model. After training, we visualize the attention flow graph to provide an intuitive explanation of the model's predictions.
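
A generic way to turn per-layer attention matrices into a cross-layer influence score is attention rollout, sketched below for a toy graph. This is only an illustrative stand-in for the paper's attention flow graph over the trained GCT; the random matrices do not represent MIMIC-III data.

```python
# Hypothetical sketch: composing per-layer attention matrices into an attention-rollout
# score that approximates how much each input node influences each output node.
import numpy as np

def attention_rollout(attentions, add_residual=True):
    """attentions: list of (n_nodes, n_nodes) row-stochastic attention matrices, one per layer."""
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for attn in attentions:
        a = attn + np.eye(n) if add_residual else attn  # account for residual connections
        a = a / a.sum(axis=-1, keepdims=True)           # renormalize rows
        rollout = a @ rollout                           # compose attention across layers
    return rollout  # rollout[i, j]: influence of input node j on output node i

# Toy example: 3 layers of attention over 4 graph nodes (e.g., EHR codes).
rng = np.random.default_rng(0)
layers = [rng.random((4, 4)) for _ in range(3)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]
print(attention_rollout(layers).round(3))
```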

Diagnosis of Calcification of Lung Nodules on the Chest X-ray Images using Gray-Level based Analysis (흉부 X-ray 영상 내 폐 결절의 석회화 여부 진단을 위한 화소 밝기 분석 기법)

  • Hyeon-Jin Choi;Dong-Yeon Yoo;Joo-Sung Sun;Jung-Won Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.681-683
    • /
    • 2023
  • Lung cancer has the highest mortality rate of any cancer worldwide, so for early detection and prompt treatment it is important not to miss malignant nodules in chest X-ray images. However, because the limited precision of chest X-ray images lowers confidence in the diagnosis, tools that assist this reading are needed. Existing aids for lung cancer diagnosis are learning-based techniques and carry the risk of providing no explainability for their results. This paper therefore proposes a statistics-based method for diagnosing whether a nodule is calcified. The proposed method judges calcification from the distribution of brightness differences between the nodule and anatomical structures, and it achieved a sensitivity of 65.22%, a specificity of 88.48%, and an accuracy of 83.41%.
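
The sketch below illustrates the general idea of a gray-level comparison between a nodule region and an anatomical reference region. The ROI masks, the fixed brightness threshold, and the synthetic image are hypothetical and are not the paper's actual decision rule.

```python
# Hypothetical sketch: flagging a nodule as calcified by comparing its pixel brightness
# to a nearby anatomical reference region on a chest X-ray.
import numpy as np

def is_calcified(image, nodule_mask, reference_mask, threshold=20.0):
    """image: 2D grayscale array; masks: boolean arrays marking the nodule and reference ROI.
    Returns True when the nodule is, on average, markedly brighter than the reference,
    which this sketch treats as evidence of calcification. The threshold is illustrative."""
    diff = image[nodule_mask].mean() - image[reference_mask].mean()
    return diff > threshold, diff

# Toy example with synthetic data standing in for a real X-ray.
rng = np.random.default_rng(0)
xray = rng.normal(100, 10, (256, 256))
nodule = np.zeros_like(xray, dtype=bool); nodule[100:120, 100:120] = True
reference = np.zeros_like(xray, dtype=bool); reference[150:170, 100:120] = True
xray[nodule] += 35  # simulate a bright (calcified-looking) nodule
print(is_calcified(xray, nodule, reference))
```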

Trustworthy AI Framework for Malware Response (악성코드 대응을 위한 신뢰할 수 있는 AI 프레임워크)

  • Shin, Kyounga;Lee, Yunho;Bae, ByeongJu;Lee, Soohang;Hong, Heeju;Choi, Youngjin;Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.5
    • /
    • pp.1019-1034
    • /
    • 2022
  • Malware attacks are becoming more prevalent in the hyper-connected society of the 4th industrial revolution. To respond to such malware, automating malware detection with artificial intelligence technology is attracting attention as a new alternative. However, using artificial intelligence without guarantees of its reliability poses greater risks and side effects. The EU and the United States are seeking ways to secure the reliability of artificial intelligence, and the Korean government announced a strategy for realizing trustworthy artificial intelligence in 2021. This strategy defines five attributes of trustworthy AI: safety, explainability, transparency, robustness, and fairness. We develop four of these elements, safety, explainability, transparency, and fairness, excluding robustness, in our malware detection model. In particular, we demonstrated stable generalization performance (model accuracy) through verification by external agencies, and we focused development on explainability, including transparency. An artificial intelligence model whose learning is determined by changing data requires life-cycle management. As a result, demand is increasing for MLOps frameworks that integrate data, model development, and service operation. Our response services for executable (EXE) and document-based malware act as data collectors as well as service operations, and they connect to data pipelines that obtain information for labeling and refinement through external APIs. We also facilitated integration with other security services and infrastructure scaling by using cloud SaaS and standard APIs.

The Prediction of Cryptocurrency Prices Using eXplainable Artificial Intelligence based on Deep Learning (설명 가능한 인공지능과 CNN을 활용한 암호화폐 가격 등락 예측모형)

  • Taeho Hong;Jonggwan Won;Eunmi Kim;Minsu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.129-148
    • /
    • 2023
  • Bitcoin is a blockchain technology-based digital currency that has come to be recognized as a representative cryptocurrency and a financial investment asset. Due to its highly volatile nature, Bitcoin has gained a lot of attention from investors and the public. Building on this popularity, numerous studies have been conducted on price and trend prediction using machine learning and deep learning. This study employed LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) models, which have shown potential for predictive performance in the finance domain, to enhance classification accuracy in Bitcoin price trend prediction. XAI (eXplainable Artificial Intelligence) techniques were applied to the predictive model to enhance its explainability and interpretability by providing a comprehensive explanation of the model. In the empirical experiment, a CNN was applied to technical indicators and Google Trends data to build a Bitcoin price trend prediction model, and the CNN model using both technical indicators and Google Trends data clearly outperformed the other models based on neural networks, SVM, and LSTM. SHAP (Shapley Additive exPlanations) was then applied to the predictive model to obtain explanations of its output values. Important prediction drivers among the input variables were extracted through global interpretation, and the predictive model's decision process for each instance was interpreted through local interpretation. The results show that the proposed research framework achieves both improved classification accuracy and explainability by using a CNN, Google Trends data, and SHAP.
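
For a concrete picture of SHAP-based global and local interpretation, the sketch below wraps a simple scikit-learn classifier with `shap.KernelExplainer`. The MLP, the synthetic labels, and the feature names (technical indicators plus a Google Trends signal) are illustrative assumptions standing in for the paper's CNN model and inputs.

```python
# Hypothetical sketch: global and local SHAP explanations for an up/down trend classifier.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

feature_names = ["RSI", "MACD", "MA_ratio", "volume_change", "google_trend"]  # assumed inputs
rng = np.random.default_rng(0)
X = rng.random((300, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 4] + rng.normal(0, 0.1, 300) > 0.8).astype(int)  # synthetic up/down label

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

# Model-agnostic KernelExplainer, explaining the predicted probability of an "up" move.
explainer = shap.KernelExplainer(lambda data: model.predict_proba(data)[:, 1], X[:50])
shap_values = explainer.shap_values(X[:5])             # local explanations for five instances
global_importance = np.abs(shap_values).mean(axis=0)   # rough global importance per feature
for name, score in zip(feature_names, global_importance):
    print(f"{name}: {score:.4f}")
```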

Understanding Interactive and Explainable Feedback for Supporting Non-Experts with Data Preparation for Building a Deep Learning Model

  • Kim, Yeonji;Lee, Kyungyeon;Oh, Uran
    • International journal of advanced smart convergence
    • /
    • v.9 no.2
    • /
    • pp.90-104
    • /
    • 2020
  • It is difficult for non-experts to build machine learning (ML) models at a level that satisfies their needs. Deep learning models are even more challenging because it is unclear how to improve them, and a trial-and-error approach is not feasible since training these models is time-consuming. To assist such novice users, we examined how interactive and explainable feedback during the training of a deep learning network can contribute to model performance and user satisfaction, focusing on the data preparation process. We conducted a user study with 31 participants without ML expertise, in which they were asked to improve the accuracy of a deep learning model under varying feedback conditions. While no significant performance gain was observed, we identified potential barriers during the process and found that interactive and explainable feedback provide complementary benefits for improving users' understanding of ML. We conclude with implications for designing an interface for building ML models for novice users.