• Title/Summary/Keyword: AI ethics measurement indicators

3 search results

A Study on the Artificial Intelligence Ethics Measurement Indicators for the Protection of Personal Rights and Property Based on the Principles of Artificial Intelligence Ethics (인공지능 윤리원칙 기반의 인격권 및 재산보호를 위한 인공지능 윤리 측정지표에 관한 연구)

  • So, Soonju; Ahn, Seongjin
    • Journal of Internet Computing and Services / v.23 no.3 / pp.111-123 / 2022
  • Artificial intelligence, developing as the core of the intelligent information society, brings convenience and positive changes to human life. However, as artificial intelligence advances, human rights and property are threatened and ethical problems are increasing, so countermeasures are needed. This study addresses the most controversial ethical problems among the dysfunctions of artificial intelligence, aiming to research and develop artificial intelligence ethics measurement indicators that protect personality rights and property, grounded in artificial intelligence ethics principles and their components. To develop the indicators, a review of the related literature, focus group interviews (FGI), and Delphi surveys were conducted, yielding 43 ethics measurement indicator items. Through descriptive statistics, reliability analysis, and correlation analysis of the survey responses, 40 artificial intelligence ethics measurement indicator items were confirmed and proposed. The proposed indicators can be used in artificial intelligence design, development, education, certification, operation, and standardization, and can contribute to the development of safe and reliable artificial intelligence.
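
The reliability-analysis step described in the abstract (trimming a survey-based item pool by internal consistency) is commonly done with Cronbach's alpha. The sketch below is illustrative only: the response matrix is invented toy data, not the paper's survey, and the paper does not specify its exact computation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy Likert-scale responses (5 respondents x 4 indicator items) -- assumed data.
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 4, 3],
])
alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.3f}")
```

In a workflow like the one the abstract describes, items whose removal raises alpha would be candidates for dropping, which is one way a 43-item pool could be reduced to 40 confirmed items.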

Development of Measurement Indicators by Type of Risk of AI Robots (인공지능 로봇의 위험성 유형별 측정지표 개발)

  • Hyun-kyoung Song
    • Journal of Internet Computing and Services / v.25 no.4 / pp.97-108 / 2024
  • As the industrialization of artificial intelligence robots accelerates, ethical and technical problems are becoming serious, yet research on their risks remains insufficient. In this situation, the researcher developed 52 validated indicators that measure the bodily, rights, property, and social risks of artificial intelligence robots. To develop measurement indicators for each risk type, 11 experts were interviewed in depth after IRB deliberation. In addition, 328 workers in various fields where artificial intelligence robots may be introduced were surveyed to verify field applicability, and statistical tests such as exploratory factor analysis, reliability analysis, correlation analysis, and multiple regression analysis were performed to establish validity and reliability. The measurement indicators presented in this paper are expected to be widely used in the development, certification, education, and policies of standardized artificial intelligence robots, and to become a cornerstone of a socially accepted and safe artificial intelligence robot industry.
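
The multiple regression analysis mentioned above can be sketched as ordinary least squares relating risk-type sub-scores to an overall risk rating. All data below are hypothetical; the paper's actual variables, sample, and model specification are not given in the abstract.

```python
import numpy as np

# Hypothetical survey data: each row is one respondent; columns are
# sub-scores for bodily, rights, and property risk (assumed structure).
X = np.array([
    [3.0, 4.0, 2.0],
    [4.0, 3.0, 3.0],
    [2.0, 2.0, 1.0],
    [5.0, 4.0, 4.0],
    [3.0, 5.0, 2.0],
    [4.0, 4.0, 5.0],
])
y = np.array([3.2, 3.4, 1.8, 4.5, 3.6, 4.4])  # overall perceived risk

# Add an intercept column and fit ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", coef[0])
print("weights per risk type:", coef[1:])
```

In a validity check like the one the abstract describes, significant positive weights would support the claim that each risk-type indicator contributes to overall perceived risk.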

Research on the evaluation model for the impact of AI services

  • Soonduck Yoo
    • International Journal of Internet, Broadcasting and Communication / v.15 no.3 / pp.191-202 / 2023
  • This study proposes a framework for evaluating the impact of artificial intelligence (AI) services, based on the concept of AI service impact. It also suggests a model for evaluating this impact and identifies relevant factors and measurement approaches for each item of the model. The study classifies the impact of AI services into five categories: ethics, safety and reliability, compliance, user rights, and environmental friendliness. It discusses these five categories from a broad perspective and provides 21 detailed factors for evaluating each category. In terms of ethics, the study introduces three additional factors (accessibility, openness, and fairness) to the ten items initially developed by KISDI. In the safety and reliability category, the study excludes factors such as dependability, policy, compliance, and awareness improvement, as they can be better addressed from a technical perspective. The compliance category includes factors such as human rights protection, privacy protection, non-infringement, publicness, accountability, safety, transparency, policy compliance, and explainability. For the user rights category, the study excludes factors such as publicness, data management, policy compliance, awareness improvement, recoverability, openness, and accuracy. The environmental friendliness category encompasses diversity, publicness, dependability, transparency, awareness improvement, recoverability, and openness. This study lays the foundation for further related research and contributes to the establishment of relevant policies by establishing a model for evaluating the impact of AI services. Future research is needed to assess the validity of the developed indicators and, based on expert evaluations, to provide specific evaluation items for practical use.
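
One plain way to operationalize a five-category evaluation model like the one above is a weighted aggregate score. The category names come from the abstract; the weights and per-category scores below are purely illustrative assumptions, since the paper does not publish a weighting scheme.

```python
# Category weights -- assumed values, not from the paper.
weights = {
    "ethics": 0.25,
    "safety_and_reliability": 0.25,
    "compliance": 0.20,
    "user_rights": 0.20,
    "environmental_friendliness": 0.10,
}

# Hypothetical per-category scores for one AI service, on a 0-5 scale.
scores = {
    "ethics": 4.0,
    "safety_and_reliability": 3.5,
    "compliance": 4.5,
    "user_rights": 3.0,
    "environmental_friendliness": 4.0,
}

overall = sum(weights[c] * scores[c] for c in weights)
print(f"overall impact score: {overall:.2f}")
```

Each category score would itself be aggregated from its detailed factors (21 in total across the model), with the weighting subject to the expert validation the abstract calls for.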