• Title/Summary/Keyword: AI Reliability


Roadmap Toward Certificate Program for Trustworthy Artificial Intelligence

  • Han, Min-gyu;Kang, Dae-Ki
    • International journal of advanced smart convergence
    • /
    • v.10 no.3
    • /
    • pp.59-65
    • /
    • 2021
  • In this paper, we propose AI certification standardization activities for the systematic research and planning of standards for trustworthy artificial intelligence (AI). The activities are two-fold. In stage 1, we investigate the scope and possibility of standardization through research on AI reliability technology targeting international standards organizations, and we establish an AI reliability technology standard and AI reliability verification to confirm the feasibility of the AI reliability technology/certification standards. In stage 2, based on the technical specifications established in the previous stage, we establish an AI reliability certification program for the verification of products, systems, and services. Along with the establishment of the AI reliability certification system, a global InterOp (interoperability test) event and international standards meetings and seminars on AI reliability certification are to be held to spread AI reliability certification. Finally, the TAIPP (Trustworthy AI Partnership Project) will be established, with the participation of relevant standards organizations and industries, to maintain and develop the standards and certification programs and to ensure the governance of AI reliability certification standards.

The Impact of Artificial Intelligence Adoption in Candidates Screening and Job Interview on Intentions to Apply (채용 전형에서 인공지능 기술 도입이 입사 지원의도에 미치는 영향)

  • Lee, Hwanwoo;Lee, Saerom;Jung, Kyoung Chol
    • The Journal of Information Systems
    • /
    • v.28 no.2
    • /
    • pp.25-52
    • /
    • 2019
  • Purpose Despite the recent increase in the use of selection tools based on artificial intelligence (AI), far less is known about their effectiveness in recruitment and selection research. Design/methodology/approach This paper tests the impact of AI-based initial screening and AI-based interviews on intentions to apply. We also examine the moderating role of an individual difference (i.e., reliability on technology) in this relationship. Findings Using a policy-capturing design with undergraduate students at a large university in South Korea, this study showed that AI-based interviews have a negative effect on intentions to apply, whereas AI-based initial screening has no effect. These results suggest that applicants may feel negatively about AI-based interviews but not about AI-based initial screening; in other words, AI-based interviews can reduce application rates, but AI-based screening does not. Results also indicated that the relationship between AI-based initial screening and intentions to apply is moderated by the applicant's level of reliability on technology. Specifically, respondents with high levels of reliability are more likely than those with low levels to apply to firms using AI-based initial screening. However, the moderating role of reliability was not significant in the relationship between AI-based interviews and intentions to apply. Employing uncertainty reduction theory, this study indicated that the relationship between AI-based selection tools and intentions to apply is dynamic, suggesting that organizations should carefully manage their AI-based selection techniques throughout the recruitment and selection process.

The Expectation of Medical Artificial Intelligence of Students Majoring in Health in Convergence Era (융복합 시대에 일부 보건계열 전공 학생들의 의료용 인공지능에 대한 기대도)

  • Moon, Ja-Young;Sim, Seon-Ju
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.9
    • /
    • pp.97-104
    • /
    • 2018
  • The purpose of this study was to investigate the expectations of students majoring in health toward medical artificial intelligence (AI) and to provide basic data for the widespread use of medical AI, based on a survey of 500 health science majors in Cheonan. Awareness of AI was 18.6%, reliability of AI was 24.8%, and agreement with the use of medical AI was 38%. Moreover, the higher the awareness and reliability of AI, the higher the expectation of AI. Accordingly, education on medical AI within health majors should serve as a cornerstone for the development of an effective healthcare environment utilizing medical AI, by raising the awareness, reliability, and expectation of AI.

A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP

  • Park, Ji Hun;Jo, Hye Seon;Lee, Sang Hyun;Oh, Sang Won;Na, Man Gyun
    • Nuclear Engineering and Technology
    • /
    • v.54 no.4
    • /
    • pp.1271-1287
    • /
    • 2022
  • When abnormal operating conditions occur in nuclear power plants, operators must identify the cause and implement the necessary mitigation measures. Accordingly, the operator must rapidly and accurately analyze the symptom requirements of more than 200 abnormal scenarios from the trends of many variables in order to perform diagnostic tasks and implement mitigation actions quickly. However, the characteristics of these diagnostic tasks increase the probability of human error. Research on AI-based diagnostic tasks has recently been conducted to reduce the likelihood of human error; however, reliability issues arising from the black-box characteristics of AI have been pointed out. Hence, the application of eXplainable Artificial Intelligence (XAI), which can provide operators with the evidence behind AI diagnoses, is considered. Accordingly, XAI is incorporated into the AI-based diagnostic algorithm to address the reliability problem. A reliable intelligent diagnostic assistant based on the merged diagnostic algorithm is developed in the form of an operator support system, including an interface that informs operators efficiently.
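The reconstruction-error principle behind an autoencoder-based abnormality detector, the first stage of the algorithm above, can be sketched with synthetic data. This is only an illustration: a PCA projection stands in for the paper's learned GRU encoder/decoder, and all sensor counts and numbers below are made up.

```python
import numpy as np

# Synthetic "normal operating history": 8 correlated sensors driven by
# 3 hidden operating modes (all values illustrative, not plant data).
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 8))
normal = latent @ mixing

# "Train": learn the subspace that normal snapshots occupy.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:3]  # top 3 principal directions

def recon_error(x):
    """Distance between a sensor snapshot and its reconstruction."""
    centered = x - mean
    return float(np.linalg.norm(centered - centered @ basis.T @ basis))

# Threshold just above the worst reconstruction seen on normal data.
threshold = max(recon_error(x) for x in normal) + 1e-6

# An abnormal snapshot breaks the learned sensor correlations,
# so its reconstruction error exceeds the threshold.
abnormal = rng.normal(size=8) * 5.0
```

A recurrent autoencoder applies the same idea to time windows rather than single snapshots, which is why it suits trend data.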

The Enhancement of intrusion detection reliability using Explainable Artificial Intelligence(XAI) (설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안)

  • Jung Il Ok;Choi Woo Bin;Kim Su Chul
    • Convergence Security Journal
    • /
    • v.22 no.3
    • /
    • pp.101-110
    • /
    • 2022
  • As the use of artificial intelligence increases in various fields, attempts to solve issues in intrusion detection through AI are also increasing. However, the black-box nature of machine learning, which cannot explain or trace the reasons behind predicted results, presents difficulties for the security professionals who must use it. To solve this problem, research on explainable AI (XAI), which helps interpret and understand decisions in machine learning, is increasing in various fields. In this paper, we therefore propose an explainable AI approach to enhance the reliability of machine learning-based intrusion detection predictions. First, the intrusion detection model is implemented with XGBoost, and explanations of the model are produced with SHAP. Comparing and analyzing the conventional feature importance against the SHAP results then gives security experts a basis for trusting the predictions when making decisions. For the experiment, the PKDD2007 dataset was used; the association between conventional feature importance and SHAP values was analyzed, and it was verified that SHAP-based explainable AI is valid for giving security experts confidence in the prediction results of intrusion detection models.
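The attribution idea behind SHAP can be illustrated with an exact Shapley computation on a toy scoring function; SHAP's TreeSHAP variant computes the same quantities efficiently for tree ensembles such as XGBoost. The feature names and scoring function below are invented for illustration and are not from the paper.

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley attribution: each feature's share is its marginal
    contribution to value(), averaged over all feature orderings."""
    phi = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        present = set()
        for f in order:
            before = value(present)
            present.add(f)
            phi[f] += value(present) - before
    return {f: total / len(orderings) for f, total in phi.items()}

# Toy stand-in for an intrusion-detection score: two informative
# features with an interaction, one irrelevant feature.
def toy_score(present):
    score = 0.0
    if "src_port" in present:
        score += 2.0
    if "payload_len" in present:
        score += 1.0
    if {"src_port", "payload_len"} <= present:
        score += 0.5  # interaction credit, split between the two features
    return score

attributions = shapley_values(["src_port", "payload_len", "ttl"], toy_score)
# The attributions sum to the full-set score (efficiency property),
# and the irrelevant feature receives zero credit.
```

Comparing such attributions with a model's built-in feature importance is the kind of cross-check the paper performs between XGBoost importance and SHAP values.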

A Study on Factors Influencing AI Learning Continuity : Focused on Business Major Students

  • Park, So Hyun
    • The Journal of Information Systems
    • /
    • v.32 no.4
    • /
    • pp.189-210
    • /
    • 2023
  • Purpose This study aims to investigate factors that positively influence the Artificial Intelligence (AI) Learning Continuity of business major students. Design/methodology/approach To evaluate the impact of AI education, a survey was conducted among 119 business-related majors who completed a software/AI course. Frequency analysis was employed to examine the general characteristics of the sample. Furthermore, factor analysis with Varimax rotation was conducted to validate the variables derived from the survey items, and Cronbach's α coefficient was used to measure the reliability of the variables. Findings Positive correlations were observed between business major students' AI Learning Continuity and their AI Interest, AI Awareness, and major-related Data Analysis Capability. Additionally, the study identified that AI Project Awareness and AI Literacy Capability play pivotal roles as mediators in fostering AI Learning Continuity. Students who acquired problem-solving skills and related technologies through AI Project Awareness showed increased motivation for AI Learning Continuity. Lastly, AI Self-Efficacy significantly influences students' AI Learning Continuity.
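Cronbach's α, used above to measure variable reliability, is k/(k-1) times one minus the ratio of summed item variances to the variance of the total score. The sketch below computes it for a hypothetical 3-item Likert scale with made-up responses, not the paper's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point responses to a 3-item scale (values invented).
scores = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [3, 3, 4],
    [5, 5, 4],
])
alpha = cronbach_alpha(scores)  # higher alpha = more internally consistent
```

A common rule of thumb treats α ≥ 0.7 as acceptable internal consistency for survey scales.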

ETRI AI Strategy #7: Preventing Technological and Social Dysfunction Caused by AI (ETRI AI 실행전략 7: AI로 인한 기술·사회적 역기능 방지)

  • Kim, T.W.;Choi, S.S.;Yeon, S.J.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.7
    • /
    • pp.67-76
    • /
    • 2020
  • With the development and spread of artificial intelligence (AI) technology, new security threats and adverse AI functions have emerged as real problems as areas of use diversify and AI-based products and services reach users. In response, new AI-based technologies need to be developed in the field of information protection and security. This paper reviews domestic and international trends in false-information detection technology, cyber security technology, and trust distribution platform technology, and establishes a direction for promoting technology development. In addition, international trends in ethical AI guidelines, which aim to ensure the human-centered ethical validity of AI development processes and final systems in parallel with technology development, are analyzed and discussed. Based on its capabilities, ETRI has derived tasks and implementation strategies for developing AI policing, information protection, and security technologies and for preparing ethical AI development guidelines to ensure the reliability of AI.

A Study on the Problems of AI-based Security Control (AI 기반 보안관제의 문제점 고찰)

  • Ahn, Jung-Hyun;Choi, Young-Ryul;Baik, Nam-Kyun
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.452-454
    • /
    • 2021
  • Currently, the security control market operates on the basis of AI technology. AI is used to detect patterns across the large volumes of logs and big data produced by security equipment and to alleviate time and manpower constraints. However, problems still arise in the application of AI. The security control market is contending with many problems beyond those introduced in this paper; here we consider those that arise in applying AI technology to security control environments: 'AI model selection', 'AI standardization', 'big data accuracy', 'security control big data accuracy and AI reliability', 'attribution of responsibility', and 'lack of AI validity'.


The Requirements Analysis of Data Management and Model Reliability for Smart Factory Predictive Maintenance AI Model Development (스마트팩토리 예지보전 AI 모델 개발을 위한 데이터 관리 및 모델 신뢰성 요구사항 분석)

  • Jinse Kim;Jung-Won Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.644-646
    • /
    • 2023
  • A smart factory executes optimized processes through the organic collaboration of programmable equipment such as collaborative robots. The collected sensor data and the environmental conditions are therefore highly complex, making systematic, requirements-based development and verification essential when building AI software for predictive maintenance. In this paper, we define the requirements of such AI software from the user and system perspectives and analyze them in terms of the AI model development process and smart factory predictive maintenance. The derived requirements were applied to the development of a CNN-based gear wear prediction model for collaborative robots, and the requirements were analyzed and verified from the perspectives of data management and model reliability.

Research on the evaluation model for the impact of AI services

  • Soonduck Yoo
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.3
    • /
    • pp.191-202
    • /
    • 2023
  • This study proposes a framework for evaluating the impact of artificial intelligence (AI) services, based on the concept of AI service impact, and suggests an evaluation model together with the relevant factors and measurement approaches for each item of the model. The study classifies the impact of AI services into five categories: ethics, safety and reliability, compliance, user rights, and environmental friendliness. It discusses these five categories from a broad perspective and provides 21 detailed factors for evaluating each category. In the ethics category, the study introduces three additional factors (accessibility, openness, and fairness) to the ten items initially developed by KISDI. In the safety and reliability category, the study excludes factors such as dependability, policy, compliance, and awareness improvement, as they can be better addressed from a technical perspective. The compliance category includes factors such as human rights protection, privacy protection, non-infringement, publicness, accountability, safety, transparency, policy compliance, and explainability. For the user rights category, the study excludes factors such as publicness, data management, policy compliance, awareness improvement, recoverability, openness, and accuracy. The environmental friendliness category encompasses diversity, publicness, dependability, transparency, awareness improvement, recoverability, and openness. This study lays the foundation for further related research and contributes to the establishment of relevant policies by providing a model for evaluating the impact of AI services. Future research is required to assess the validity of the developed indicators and, based on expert evaluations, to provide specific evaluation items for practical use.