• Title/Summary/Keyword: AI Fairness


The Regulation of AI: Striking the Balance Between Innovation and Fairness

  • Kwang-min Lee
    • 한국컴퓨터정보학회논문지 / Vol. 28, No. 12 / pp. 9-22 / 2023
  • This paper presents a balanced approach to AI regulation that preserves AI's unlimited potential for development while maintaining fairness and ethical responsibility. As AI systems become increasingly integrated into everyday life, developing regulations that prevent bias and disadvantage against particular population groups is essential. The paper examines regulatory frameworks for AI applications and conducts case-study analyses to ensure responsible development and deployment. Through this work, we aim to advance the ongoing discussion on AI regulation and propose policies that strike a balance between innovation and fairness.

패션 온라인 플랫폼의 AI 알고리즘 가격설정에 대한 가격 공정성 지각 (Price Fairness Perception on the AI Algorithm Pricing of Fashion Online Platform)

  • 정하억;추호정;윤남희
    • 한국의류학회지 / Vol. 45, No. 5 / pp. 892-906 / 2021
  • This study explores the effects of providing information on price fairness perception and continuous use intention on an online fashion platform, given a price difference caused by AI algorithm pricing. We investigated the moderating roles of price inequality (loss vs. gain) and technology insecurity. The experiment used four stimuli crossing price inequality (loss vs. gain) with information provision (provided or not) about the price inequality. We developed a mock website and presented a scenario in which products were priced by an AI algorithm. Participants in their 20s and 30s were randomly allocated to one of the stimuli. To test the hypotheses, a total of 257 responses were analyzed using PROCESS Macro 3.4. According to the results, price fairness perception mediated between information provision and continuous use intention when consumers saw the price inequality as a gain. When consumers perceived high technology insecurity, information provision affected continuous use intention, mediated by price fairness perception.
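The mediation path tested above (information provision → price fairness perception → continuous use intention) can be sketched in a few lines. This is a generic indirect-effect computation on simulated data, not the study's PROCESS Macro analysis; all variable names and coefficients here are hypothetical stand-ins.

```python
# Minimal mediation sketch (X -> M -> Y), illustrating the kind of
# indirect effect estimated with PROCESS Macro Model 4.
# Variable names and effect sizes are hypothetical; the study used
# PROCESS Macro 3.4 on 257 real responses.
import numpy as np

rng = np.random.default_rng(0)
n = 257                                         # sample size matching the study
info = rng.integers(0, 2, n).astype(float)      # X: information provided (0/1)
fairness = 0.5 * info + rng.normal(0, 1, n)     # M: price fairness perception
use_intent = 0.4 * fairness + 0.1 * info + rng.normal(0, 1, n)  # Y

def ols(y, cols):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(fairness, [info])[1]               # X -> M path
b = ols(use_intent, [info, fairness])[2]   # M -> Y path, controlling for X
indirect = a * b                           # mediated (indirect) effect
print(round(indirect, 3))
```

With the simulated coefficients above, the recovered indirect effect should fall near 0.5 × 0.4 = 0.2, up to sampling noise.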

Exploratory Analysis of AI-based Policy Decision-making Implementation

  • SunYoung SHIN
    • International Journal of Internet, Broadcasting and Communication / Vol. 16, No. 1 / pp. 203-214 / 2024
  • This study seeks to provide implications for related domestic policies through exploratory analysis supporting AI-based policy decision-making. The following should be considered when establishing an AI-based decision-making model in Korea. First, we need to understand the impact that the use of AI will have on policy and the service sector. The positive and negative impacts of AI use need to be better understood, guided by a public-value perspective and taking into account the different levels of governance and interests across public policy and service sectors. Second, reliability is essential for implementing innovative AI systems. In most organizations today, comprehensive AI model frameworks to enable and operationalize trust, accountability, and transparency are insufficient or absent, with limited access to effective guidance, key practices, or government regulations. Third, AI systems must be accountable. The OECD AI Principles set out five value-based principles for the responsible stewardship of trustworthy AI: inclusive growth, sustainable development, and well-being; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability. Based on these, Korea needs to build an AI-based decision-making system, and efforts should be made to build a system that can support policy by reflecting them. A limitation of this study is that it is an exploratory analysis of existing research data; we suggest that future research collect the opinions of experts in related fields. The expected contribution of this study is analytical research on AI-based decision-making systems, which will support policy establishment and research in related fields.

Learning fair prediction models with an imputed sensitive variable: Empirical studies

  • Kim, Yongdai;Jeong, Hwichang
    • Communications for Statistical Applications and Methods / Vol. 29, No. 2 / pp. 251-261 / 2022
  • As AI has a wide influence on human social life, issues of AI transparency and ethics are emerging. In particular, it is widely known that, because of historical bias in data relative to ethical or regulatory standards of fairness, AI models trained on such biased data can also be biased or unfair toward certain sensitive groups (e.g., non-white people, women). Demographic disparities due to AI, i.e., socially unacceptable bias in which an AI model favors certain groups (e.g., white people, men) over others (e.g., Black people, women), have been observed frequently in many applications of AI, and many recent studies have developed algorithms that remove or alleviate such disparities in trained models. In this paper, we consider the problem of using the information in the sensitive variable for fair prediction when using the sensitive variable as part of the input variables is prohibited by laws or regulations. As a way of reflecting the information in the sensitive variable in prediction, we consider a two-stage procedure. First, the sensitive variable is fully included in the learning phase to obtain a prediction model that depends on it; then an imputed sensitive variable is used in the prediction phase. The aim of this paper is to evaluate this procedure on several benchmark datasets. We illustrate that using an imputed sensitive variable helps improve prediction accuracy without much hampering the degree of fairness.
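The two-stage procedure described above can be sketched with toy data: the true sensitive variable S is used only during training, and an imputation of S from the non-sensitive features replaces it at prediction time. This is an illustrative numpy stand-in, not the paper's actual models or benchmark datasets.

```python
# Minimal sketch of the two-stage idea: the sensitive variable S is used
# when *training* the prediction model, but at *prediction* time S is
# replaced by an imputation S_hat estimated from the non-sensitive
# features X. Data and linear models here are toy stand-ins.
import numpy as np

rng = np.random.default_rng(42)
n, p = 1000, 3
X = rng.normal(size=(n, p))
S = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(float)   # sensitive variable
y = (0.8 * X[:, 1] + 0.5 * S + rng.normal(0, 1, n) > 0).astype(float)

def fit_linear(features, target):
    """Least-squares linear score model (toy stand-in for a classifier)."""
    A = np.column_stack([np.ones(len(target)), features])
    return np.linalg.lstsq(A, target, rcond=None)[0]

# Stage 1: learn the prediction model WITH the true sensitive variable.
w_pred = fit_linear(np.column_stack([X, S]), y)
# Imputation model: predict S from the non-sensitive features only.
w_imp = fit_linear(X, S)

# Stage 2: at prediction time, plug in the imputed sensitive variable.
S_hat = np.column_stack([np.ones(n), X]) @ w_imp
scores = np.column_stack([np.ones(n), X, S_hat]) @ w_pred
preds = (scores > 0.5).astype(float)
accuracy = (preds == y).mean()
print(round(accuracy, 3))
```

The key design point is that no individual's true sensitive attribute is needed at prediction time, which is what makes the approach usable when collecting S from applicants is legally prohibited.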

공학전공 대학생의 AI 로봇에 대한 윤리적 민감성 (Engineering Students' Ethical Sensitivity on Artificial Intelligence Robots)

  • 이현옥;고연주
    • 공학교육연구 / Vol. 25, No. 6 / pp. 23-37 / 2022
  • This study evaluated engineering students' ethical sensitivity to an AI emotion-recognition robot scenario and explored its characteristics. For data collection, 54 students (27 majoring in Convergence Electronic Engineering and 27 in Computer Software) were asked to list five factors regarding the AI robot scenario. For the analysis of ethical sensitivity, we checked whether the students recognized AI ethical principles in the scenario, such as safety, controllability, fairness, accountability, and transparency. We also categorized students' levels as either informed or naive, based on whether they inferred specific situations and diverse outcomes and felt a responsibility to take action as engineers. As a result, 40.0% of students' responses contained AI ethical principles: safety 57.1%, controllability 10.7%, fairness 20.5%, accountability 11.6%, and transparency 0.0%. More students demonstrated ethical sensitivity at the naive level (76.8%) than at the informed level (23.2%). This study offers an ethical sensitivity evaluation tool that can be used professionally in educational settings, and applies it to engineering students to illustrate specific cases at varying levels of ethical sensitivity.

ML-based Interactive Data Visualization System for Diversity and Fairness Issues

  • Min, Sey;Kim, Jusub
    • International Journal of Contents / Vol. 15, No. 4 / pp. 1-7 / 2019
  • As recent developments in artificial intelligence, particularly machine learning, impact every aspect of society, they are also increasingly influencing creative fields, manifested as new artistic tools and sources of inspiration. However, as more artists integrate the technology into their creative work, issues of diversity and fairness are also emerging in AI-based creative practice. The data dependency of machine-learning algorithms can amplify social injustice existing in the real world. In this paper, we present an interactive visualization system for raising awareness of diversity and fairness issues. Rather than resorting to education, campaigns, or laws, we developed a web- and ML-based interactive data visualization system. By providing an interactive visual experience of the issues in engaging ways, as web content that anyone can access from anywhere, we strive to raise public awareness and help alleviate these important ethical problems. We present the process of developing the ML-based interactive visualization system and discuss the results of the project. The proposed approach can be applied to other areas requiring attention to these issues.

인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로 (A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service)

  • 김하영;허정윤;권호창
    • 지능정보연구 / Vol. 28, No. 3 / pp. 259-278 / 2022
  • With the spread of artificial intelligence (AI), AI-based services such as product recommendation, automated customer response, fraud detection, and credit scoring are expanding in the financial sector. However, given the data-driven nature of machine learning, reliability problems and unexpected social controversies have also arisen. The need for trustworthy AI that maximizes AI's benefits while minimizing its risks and side effects is growing. Against this background, this study aims to contribute to improving trust in AI-based financial services by proposing a checklist for ensuring the fairness of AI-based personal credit scoring, which directly affects consumers' financial lives. Among the key elements of AI trustworthiness (transparency, safety, accountability, and fairness), fairness was selected as the research subject so that, from the perspective of inclusive finance, everyone can enjoy the benefits of automated algorithms without social discrimination. Through a literature review, the entire service operation process affected by fairness was divided into three areas (data, algorithm, and user), and the checklist was composed of 12 sub-items with detailed recommendations for each item. The relative importance and priority of the checklist items were derived through an analytic hierarchy process (AHP) for each stakeholder group (financial practitioners, AI practitioners, and general users). Analyzing the three groups by stakeholder importance identified concrete practical checkpoints, such as the need to validate the use of training data and non-financial information and to monitor newly incoming data, and confirmed that general users, as financial consumers, rated checking for misinterpretation of results and for bias as highly important. We hope that the results of this study will contribute to building and operating fairer AI-based financial services.
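The AHP step used above to rank checklist items boils down to extracting the principal eigenvector of a pairwise comparison matrix and checking its consistency. The 3×3 matrix below over the study's three areas (data, algorithm, user) is purely illustrative, not the study's survey data.

```python
# Small sketch of the analytic hierarchy process (AHP) priority
# computation used to rank checklist items. The pairwise comparison
# values below are illustrative, not the study's stakeholder data.
import numpy as np

# Comparisons among the three checklist areas: data, algorithm, user.
# A[i, j] = how much more important area i is than area j (Saaty 1-9 scale).
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()                    # normalized priority weights

# Consistency index and ratio (random index RI = 0.58 for a 3x3 matrix);
# CR < 0.1 is the usual threshold for acceptable judgment consistency.
ci = (eigvals[k].real - 3) / (3 - 1)
cr = ci / 0.58
print(np.round(priorities, 3), round(cr, 3))
```

In a full AHP study, such priorities are computed per stakeholder group and per level of the hierarchy, then combined, which is how the paper derives group-specific item rankings.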

On Power Splitting under User-Fairness for Correlated Superposition Coding NOMA in 5G System

  • Chung, Kyuhyuk
    • International journal of advanced smart convergence / Vol. 9, No. 2 / pp. 68-75 / 2020
  • Non-orthogonal multiple access (NOMA) has gained significant attention in fifth generation (5G) mobile communication, which enables the advanced smart convergence of artificial intelligence (AI), the internet of things (IoT), and many other state-of-the-art technologies. Recently, correlated superposition coding (SC) has been proposed for NOMA to achieve near-perfect successive interference cancellation (SIC) bit-error rate (BER) performance for stronger-channel users and to mitigate the severe BER degradation for weaker-channel users. In the correlated SC NOMA scheme, the stronger-channel user's BER is even better than the perfect-SIC BER over some range of the power allocation factor. However, such excessively good BER performance harms user fairness (i.e., more power to the weaker-channel user and less to the stronger-channel user), because the excessively good BER of the stronger-channel user comes at the cost of worse BER for the weaker-channel user. Therefore, in this paper, we propose power splitting to establish user fairness between the two users. First, we derive a closed-form expression for the power splitting factor. We then show that, in terms of BER performance, user fairness is established between the two users. As a result, the power splitting scheme could be considered in correlated SC NOMA for user fairness.
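The fairness tradeoff described above can be illustrated with a generic two-user downlink NOMA rate model: the weak user gets the larger power share, and a fair split balances the two users' achievable rates. This is a textbook-style sketch with assumed channel gains and a rate (not BER) criterion, not the paper's closed-form power splitting factor for correlated SC.

```python
# Illustrative two-user NOMA power-split search: more power goes to the
# weaker-channel user, and we pick the split that equalizes achievable
# rates (a max-min fairness criterion). Generic NOMA sketch with assumed
# values; not the paper's closed-form BER-based power splitting factor.
import numpy as np

P, N0 = 1.0, 0.1              # total power and noise power (assumed)
g_weak, g_strong = 0.3, 1.0   # channel gains (assumed)

def rates(alpha):
    """alpha = fraction of total power allocated to the weak user."""
    # Weak user decodes its own signal, treating the strong user's as noise.
    r_weak = np.log2(1 + alpha * P * g_weak / ((1 - alpha) * P * g_weak + N0))
    # Strong user removes the weak user's signal via SIC, then decodes.
    r_strong = np.log2(1 + (1 - alpha) * P * g_strong / N0)
    return r_weak, r_strong

alphas = np.linspace(0.5, 0.99, 500)   # weak user gets the larger share
pairs = [rates(a) for a in alphas]
fair_alpha = alphas[np.argmax([min(pair) for pair in pairs])]
print(round(fair_alpha, 3))
```

Since the weak user's rate rises and the strong user's rate falls as alpha grows, the max-min optimum sits where the two rate curves cross, which is the rate-domain analogue of the BER-balancing split the paper derives.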

인공지능기술 윤리성 인식 척도개발 연구 (Development and Validation of Ethical Awareness Scale for AI Technology)

  • 김도연;고영화
    • 디지털융복합연구 / Vol. 20, No. 1 / pp. 71-86 / 2022
  • The purpose of this study is to develop and validate a scale for measuring the ethical awareness of users who adopt AI technologies or services. To this end, the constructs and their attributes were identified through an analysis of the literature on AI ethics. Online surveys were conducted nationwide with men and women in their teens to seventies: 133 respondents (open-ended survey: first item pool), 273 (pilot survey: second item pool), and 500 (main survey: final items). The results were refined by confirmatory factor analysis to develop the final ethical awareness scale for AI technology. The scale consists of 16 items across four factors (transparency, safety, fairness, and accountability), enabling measurement of general ethical awareness of AI technology by sub-factor. The developed scale can be used to examine relationships with measured variables in various fields, and is expected to play an important role in providing baseline data for raising ethical awareness in the early stages of AI technology development.

On Power Calculation for First and Second Strong Channel Users in M-user NOMA System

  • Chung, Kyuhyuk
    • International journal of advanced smart convergence / Vol. 9, No. 3 / pp. 49-58 / 2020
  • Non-orthogonal multiple access (NOMA) has been recognized as a significant technology in fifth generation (5G) and beyond mobile communication, which encompasses the advanced smart convergence of artificial intelligence (AI) and the internet of things (IoT). In NOMA, since channel resources are shared by many users, it is essential to establish user fairness. Such fairness is achieved by power allocation among the users; accordingly, less power is allocated to the stronger-channel users. In particular, the first and second strongest-channel users have to share an extremely small amount of power. In this paper, we consider power optimization for these two low-power users. First, the closed-form expression for the power allocation is derived, and the results are validated numerically. Furthermore, using the derived analytical expression, the optimal power allocation is investigated for various channel environments, and the impact of the channel gain difference on the power allocation is analyzed.
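The M-user setting described above, where stronger-channel users receive progressively smaller power shares and cancel weaker users' signals via SIC, can be sketched generically. The gains and power shares below are illustrative assumptions, not the paper's derived optimal allocation.

```python
# Generic M-user downlink NOMA sketch: users sorted from weakest to
# strongest channel get decreasing power shares, and each user applies
# SIC to cancel all weaker users' (higher-power) signals first. Gains
# and shares are illustrative, not the paper's optimized values.
import numpy as np

P, N0 = 1.0, 0.05
gains = np.array([0.2, 0.5, 1.0, 2.0])        # weakest -> strongest channel
shares = np.array([0.55, 0.25, 0.13, 0.07])   # power share, decreasing

def user_rate(k):
    # User k cancels users 0..k-1 (weaker channels, higher power) via SIC;
    # users k+1.. (stronger channels, lower power) remain as interference.
    interference = shares[k + 1:].sum() * P * gains[k]
    return np.log2(1 + shares[k] * P * gains[k] / (interference + N0))

user_rates = [user_rate(k) for k in range(len(gains))]
print(np.round(user_rates, 3))
```

Note how the two strongest users (indices 2 and 3) are left to share only 0.20 of the total power; optimizing how that small residual is split between them is exactly the problem the paper addresses in closed form.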