• Title/Summary/Keyword: AI ethics

95 search results

Analysis of Domestic Research Trends in AI Ethics Education (인공지능윤리교육의 국내 연구 동향 분석)

  • Kim Kyeongju
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.4 / pp.29-44 / 2023
  • This study examined research trends in AI ethics education and attempted to suggest a direction for the field. The analysis found two studies published in 2017, none in 2018 and 2019, and six in 2020; research then increased, with 19 studies in 2021 and 18 in 2022. A total of 37 lead authors were identified: six had published papers in two or more years, and two in three or more years. In addition, to examine the details of AI ethics education, a total of 265 keywords that went through a refining process were grouped into education-related, ethics-related, AI-related, and other categories. Although the necessity and importance of research on AI ethics education is expected to grow, few researchers study the topic continuously, so ways to sustain such research need to be found. AI ethics education is conducted under various names, such as moral education, ethics education, liberal arts education, and AI education. Accordingly, research on AI ethics education should be conducted at various levels and in various forms, not only as research on AI ethics within regular school subjects.
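As an illustration of the keyword-categorization step described in the abstract above, here is a minimal Python sketch; the category term lists and sample keywords are hypothetical placeholders, not the study's 265 refined keywords.

```python
# Minimal sketch of bucketing refined keywords into categories, assuming
# hypothetical category term lists; keywords shown are illustrative only.
from collections import Counter

category_terms = {
    "education": {"curriculum", "teaching", "learner", "education"},
    "ethics": {"ethics", "moral", "responsibility", "fairness"},
    "ai": {"artificial intelligence", "machine learning", "algorithm"},
}

keywords = ["curriculum", "fairness", "algorithm", "digital literacy"]

def categorize(keyword: str) -> str:
    """Assign a keyword to the first matching category, else 'other'."""
    for category, terms in category_terms.items():
        if keyword in terms:
            return category
    return "other"

counts = Counter(categorize(k) for k in keywords)
print(counts)  # e.g. Counter({'education': 1, 'ethics': 1, 'ai': 1, 'other': 1})
```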

Exploring AI Principles in Global Top 500 Enterprises: A Delphi Technique of LDA Topic Modeling Results

  • Hyun BAEK
    • Korean Journal of Artificial Intelligence / v.11 no.2 / pp.7-17 / 2023
  • Artificial Intelligence (AI) technology has already penetrated deeply into our daily lives, and we enjoy its convenience anytime, anywhere, sometimes without even noticing it. However, because AI is imitative intelligence modeled on human intelligence, it inevitably reflects both the good and the evil sides of humans, which is why ethical principles are essential. The starting point of this study is the AI principles that companies and organizations adopt when developing products. Since the late 2010s, studies on AI ethics and principles have been actively published. This study focused on the AI principles declared by global companies currently developing products with AI technology. We surveyed the global top 500 companies by market capitalization at a specific point in time and collected the AI principles explicitly declared by 46 of them. This text data was first analyzed with LDA (Latent Dirichlet Allocation) topic modeling, a machine learning (ML) technique, and a Delphi technique was then conducted to reach a meaningful consensus on the primary analysis results. Based on our results, we expect to provide meaningful guidance for AI-related government policy, corporate ethics declarations, and academic research, where debates on AI ethics and principles frequently arise.
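The abstract above names LDA topic modeling as the primary analysis step. A minimal sketch, assuming scikit-learn and a toy corpus of principle statements (not the paper's actual data, topic count, or preprocessing):

```python
# Minimal sketch of LDA topic modeling over a tiny, made-up set of declared
# AI principles; documents, topic count, and settings are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

principles = [
    "AI systems should be fair, transparent and accountable to users",
    "We design AI with privacy, security and human oversight in mind",
    "Our AI must avoid bias and respect human rights and dignity",
]

# Bag-of-words representation of the declared principles
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(principles)

# Fit LDA with a hypothetical number of topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Top words per topic become candidate themes for later Delphi review
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top)}")
```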

A Study on Policy Instrument for the Development of Ethical AI-based Services for Enterprises: An Exploratory Analysis Using AHP (기업의 윤리적 인공지능 기반 서비스 개발을 위한 정책수단 연구: AHP를 활용한 탐색적 분석)

  • Changki Jang; MinSang Yi; WookJoon Sung
    • Journal of Information Technology Services / v.22 no.2 / pp.23-40 / 2023
  • Despite growing interest in and normative discussions of AI ethics, there is a lack of discussion of the policy instruments companies need in order to develop AI-based services in compliance with ethical principles. The purpose of this study is therefore to explore policy instruments that can encourage companies to voluntarily comply with and adopt AI ethical standards and self-checklists. The study reviews previous research and similar cases on AI ethics, conducts interviews with AI-related companies, and analyzes the data using AHP (Analytic Hierarchy Process) to derive action plans. In terms of desirability and feasibility, the findings show that policy instruments that induce companies to develop AI-based services ethically should be prioritized, while regulatory instruments require a cautious approach. The study also found that incentive policies are needed, such as consulting support from experts in various fields who can help companies apply AI ethics, and support for developing solutions that adhere to AI ethical standards. Additionally, the participation and agreement of various stakeholders in establishing AI ethical standards are crucial, and policy instruments need to be continuously supplemented through implementation and feedback. This study is significant in that it uses an analytical methodology to present the policy instruments companies need to develop ethical AI-based services, moving beyond discursive discussions of AI ethical principles. Further analysis of the effectiveness of policy instruments linked to AI ethical principles is needed to establish ethical AI-based service development.
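To illustrate the AHP step mentioned above, here is a minimal sketch of deriving priority weights and a consistency ratio from a pairwise comparison matrix, assuming NumPy; the matrix values and criterion labels are illustrative, not the study's survey data.

```python
# Minimal AHP sketch: priority weights from a 3x3 pairwise comparison matrix.
# The comparisons (inducement vs. incentive vs. regulatory instruments) are
# hypothetical, not results from the study.
import numpy as np

criteria = ["inducement", "incentive", "regulatory"]
# A[i, j] = how much more important criterion i is than j (Saaty 1-9 scale)
A = np.array([
    [1.0, 2.0,   5.0],
    [0.5, 1.0,   3.0],
    [0.2, 1 / 3, 1.0],
])

# Priority weights from the principal eigenvector
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (CR < 0.1 is conventionally acceptable);
# 0.58 is the commonly cited random index for n = 3.
n = A.shape[0]
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.58

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")
```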

Navigating Ethical AI: A Comprehensive Analysis of Literature and Future Directions in Information Systems (AI와 윤리: 문헌의 종합적 분석과 정보시스템 분야의 향후 연구 방향)

  • Jinyoung Min
    • Knowledge Management Research / v.25 no.3 / pp.1-22 / 2024
  • As the use of AI becomes a reality in many aspects of daily life, the opportunities and benefits it brings are being highlighted, while concerns about the ethical issues it may cause are also increasing. The field of information systems, which studies the impact of technology on business and society, must contribute to ensuring that AI has a positive influence on human society. To achieve this, it is necessary to explore research directions for the information systems field by examining studies related to AI and ethics. For this purpose, this study collected literature published from 2020 to the present and analyzed its research topics through researcher coding and topic modeling. The analysis categorized the research topics into AI ethics principles, ethical AI design and development, ethical AI deployment and application, and ethical AI use. After reviewing the literature in each category to grasp the current state of research, the study suggests future research directions for AI ethics in the field of information systems.

A Study on How to Set up a Standard Framework for AI Ethics and Regulation (AI 윤리와 규제에 관한 표준 프레임워크 설정 방안 연구)

  • Nam, Mun-Hee
    • Journal of the Korea Convergence Society / v.13 no.4 / pp.7-15 / 2022
  • As we move toward an intelligent world shaped by the decentralization, sharing, opening, and connection of information and technology in an age of individual customization, expectations and concerns about artificial intelligence intersect in technological discourse more than ever, and futurists increasingly claim that an AI singularity will appear around 2045. As part of preparing a paradigm of coexistence and co-prosperity with AI, a standard framework for setting sound AI ethics and regulations is required: avoiding the omission of major guidelines and establishing reasonable guideline items and evaluation standards are becoming major research issues. To address these problems, and to build continuous experience and learning in setting AI ethics and regulations, this study collects guideline data on AI ethics and regulation from international organizations, countries, and companies, and proposes ways to set up a standard framework (SF) through a framework-setting research model and exploratory text-mining analysis. The results can serve as basic prior research for more advanced AI ethics and regulatory guideline items and evaluation methods in the future.
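As a sketch of the exploratory text-mining pass described above, the following assumes scikit-learn and uses three placeholder snippets in place of the collected international-organization, national, and corporate guideline documents; the ranking shown is one simple way to surface candidate framework items, not the study's actual method.

```python
# Minimal exploratory text-mining sketch: rank terms across guideline
# documents by total TF-IDF weight as candidate standard-framework items.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

guidelines = [
    "AI must respect human rights, fairness and accountability",          # intl. org. (placeholder)
    "National strategy requires transparency, safety and privacy in AI",  # country (placeholder)
    "Our products follow responsible AI, privacy and security reviews",   # company (placeholder)
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(guidelines)
terms = vectorizer.get_feature_names_out()

# Sum TF-IDF weight per term across all documents and rank
scores = np.asarray(tfidf.sum(axis=0)).ravel()
ranked = sorted(zip(terms, scores), key=lambda x: x[1], reverse=True)
for term, score in ranked[:5]:
    print(f"{term}: {score:.2f}")
```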

Seoul PACT : Principles of Artificial Intelligence Ethics and its Application Example to Intelligent E-Government Service (인공지능 윤리 원칙 Seoul PACT를 적용한 지능형 전자정부 서비스 윤리 가이드라인)

  • Kim, Myuhng Joo
    • Journal of Information Technology Services / v.18 no.3 / pp.117-128 / 2019
  • The remarkable achievements of artificial intelligence in recent years are also raising awareness of its potential risks. Several governments and public organizations have proposed AI ethics for the sustainable development of artificial intelligence while minimizing those risks. However, most existing proposals focus on developer-centered ethics, which is not sufficient for the comprehensive ethics required by the intelligent information society. In addition, they take a large number of principles as the starting point of AI ethics, so it is not easy to flexibly derive guidelines for a specific member reflecting its own situation. In this paper, we classify the primitive members who need AI ethics in the intelligent information society into three types: Developer, Supplier, and User. We propose a new AI ethics, Seoul PACT, built on the minimal principles of publicness (P), accountability (A), controllability (C), and transparency (T), and provide 38 canonical guidelines based on these four principles that are applicable to each primitive member. Because a specific member may combine the roles of several primitive members, AI ethics guidelines can be derived flexibly according to that member's characteristics and situation. As an application example, in preparation for applying AI to e-government services, we derive a full set of AI ethics guidelines from Seoul PACT that can be adopted by a special member, the Korean Government.
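The flexible derivation of guidelines from the four Seoul PACT principles can be pictured as a lookup over a principle-by-role mapping. A minimal sketch follows, with hypothetical guideline texts standing in for the paper's 38 canonical guidelines.

```python
# Minimal sketch: derive a guideline set for a member that combines primitive
# roles (Developer / Supplier / User). Guideline texts are placeholders.
from typing import Dict, List

# Hypothetical mapping: Seoul PACT principle -> primitive role -> guidelines
guidelines: Dict[str, Dict[str, List[str]]] = {
    "publicness":      {"Developer": ["design for public benefit"],
                        "Supplier":  ["disclose intended public use"],
                        "User":      ["use services for lawful purposes"]},
    "accountability":  {"Developer": ["log model decisions"],
                        "Supplier":  ["define liability for failures"],
                        "User":      ["report observed harms"]},
    "controllability": {"Developer": ["provide override mechanisms"],
                        "Supplier":  ["offer opt-out options"],
                        "User":      ["keep a human in the loop"]},
    "transparency":    {"Developer": ["document training data sources"],
                        "Supplier":  ["explain service limitations"],
                        "User":      ["verify information provenance"]},
}

def derive(roles: List[str]) -> List[str]:
    """Union of guidelines for a member that plays several primitive roles."""
    result = []
    for principle, by_role in guidelines.items():
        for role in roles:
            for g in by_role.get(role, []):
                result.append(f"[{principle}] {g}")
    return result

# A government running an intelligent e-government service may act as both
# supplier and user of AI systems, so its guideline set is the union of both.
for item in derive(["Supplier", "User"]):
    print(item)
```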

The Threat of AI and Our Response: The AI Charter of Ethics in South Korea

  • Hwang, Ha; Park, Min-Hye
    • Asian Journal of Innovation and Policy / v.9 no.1 / pp.56-78 / 2020
  • Changes in our lives due to Artificial Intelligence (AI) are ongoing, and there is little dispute about AI's effectiveness. However, active discussions have been held on minimizing the side effects of AI and using it responsibly, and the publication of AI Charters of Ethics (AICEs) is one result. This study examines how our society is responding to threats that AI may pose in the future by examining the various AICEs in the Republic of Korea. First, we summarize seven AI threats and classify them into three categories: AI's value judgment, malicious use of AI, and human alienation. Second, from Korea's seven AICEs, we draw fourteen topics in three categories: protection of social values, AI control, and fostering digital citizenship. Finally, we review these topics against the seven AI threats to evaluate any gaps between the threats and our responses. The analysis indicates that Korea has not yet properly responded to the threat of AI usurping human occupations (jobs). In addition, although Korea's AICEs present appropriate responses to lethal AI weapons, these provisions will be difficult to realize because competition for AI weapons among military powers is intensifying.

A Study on the Process of Policy Change of Hyper-scale Artificial Intelligence: Focusing on the ACF (초거대 인공지능 정책 변동과정에 관한 연구 : 옹호연합모형을 중심으로)

  • Seok Won, Choi; Joo Yeoun, Lee
    • Journal of the Korean Society of Systems Engineering / v.18 no.2 / pp.11-23 / 2022
  • Although artificial intelligence (AI) is a key technology in the digital transformation among emerging technologies, there are concerns about its use, so many countries have been trying to set up proper regulation systems. This study analyzes AI regulation policies in the USA, the EU, and Korea with the aim of establishing and improving AI policies and strategies in Korea. In the USA, the establishment of a code of ethics for the use of AI is led by the private sector. Europe, by contrast, is strengthening the competitiveness of its AI industry by consolidating regulations that had been dispersed among EU members. Korea has also prepared and promoted national-level policies for AI ethics, copyright, and privacy protection, and is trying to shift to a negative regulation system and improve regulations to close the gap with the leading countries in AI. The study also analyzes the course of AI regulation policy change using Sabatier's Advocacy Coalition Framework (ACF) and, on that basis, proposes hyper-scale AI regulation policy recommendations for improving competitiveness and commercialization in Korea. The study is significant in that it can help increase predictability for policy makers who face uncertainty and ambiguity in establishing regulatory policies prompted by the emergence of hyper-scale artificial intelligence.

An Artificial Intelligence Ethics Education Model for Practical Power Strength (실천력 강화를 위한 인공지능 윤리 교육 모델)

  • Bae, Jinah; Lee, Jeonghun; Cho, Jungwon
    • Journal of Industrial Convergence / v.20 no.5 / pp.83-92 / 2022
  • As social and ethical problems caused by artificial intelligence technology have emerged, AI ethics is drawing attention along with social interest in the risks and side effects of AI. AI ethics should not merely be known and felt; it should be actionable and practiced. This study therefore proposes an AI ethics education model for strengthening the ability to practice AI ethics. The model derives educational goals and a problem-solving process using AI from an analysis of existing research, applies teaching and learning methods that strengthen practical skills, and is compared with existing AI education models. The proposed model aims to cultivate computational thinking and strengthen the practice of AI ethics. To this end, the problem-solving process using AI is presented in six stages, and AI ethical factors reflecting the characteristics of AI are derived and applied to that process. In addition, the model is designed so that learners check AI ethical standards as a matter of course through pre- and post-evaluation of AI ethics, and learner-centered teaching and learning methods are applied to make ethical practice a habit. The AI ethics education model developed through this study is expected to support AI education that leads to practice by developing computational thinking.

Research on institutional improvement measures to strengthen artificial intelligence ethics (인공지능 윤리 강화를 위한 제도적 개선방안 연구)

  • Gun-Sang Cha
    • Convergence Security Journal / v.24 no.2 / pp.63-70 / 2024
  • With the development of artificial intelligence technology, our lives are changing in innovative ways, but new ethical issues are emerging at the same time. In particular, discrimination caused by algorithmic and data bias, deepfakes, and personal information leakage are judged to be social priorities that must be resolved as AI services expand. To this end, this paper examines the concept of artificial intelligence and its ethical issues from the perspective of AI ethics, and reviews each country's ethical guidelines and laws, AI impact assessment systems, AI certification systems, and the current status of technologies related to AI algorithm transparency. On this basis, it suggests institutional improvement measures to strengthen artificial intelligence ethics.