• Title/Abstract/Keyword: AI Regulations

초거대 인공지능 정책 변동과정에 관한 연구 : 옹호연합모형을 중심으로 (A Study on the Process of Policy Change of Hyper-scale Artificial Intelligence: Focusing on the ACF)

  • 최석원;이주연
    • 시스템엔지니어링학술지, Vol. 18, No. 2, pp. 11-23, 2022
  • Although artificial intelligence (AI) is a key emerging technology in the digital transformation, concerns about its use have led many countries to try to establish proper regulatory systems. This study analyzes AI regulation policies in the USA, the EU, and Korea with the aim of establishing and improving appropriate AI policies and strategies in Korea. In the USA, the establishment of codes of ethics for the use of AI is led by the private sector. Europe, on the other hand, is strengthening the competitiveness of its AI industry by consolidating regulations dispersed across EU member states. Korea has also prepared and promoted national-level policies for AI ethics, copyright, and privacy protection, and is trying to shift to a negative regulation system and improve regulations to close the gap with the leading countries in AI. Moreover, this study analyzes the course of AI regulation policy change using Sabatier's Advocacy Coalition Framework (ACF) model. On this basis, it proposes hyper-scale AI regulation policy recommendations for improving competitiveness and commercialization in Korea. The study is significant in that it can help increase predictability for policy makers, who face difficulties from the uncertainty and ambiguity involved in establishing regulatory policies for emerging hyper-scale artificial intelligence.

The Regulation of AI: Striking the Balance Between Innovation and Fairness

  • Kwang-min Lee
    • 한국컴퓨터정보학회논문지, Vol. 28, No. 12, pp. 9-22, 2023
  • This paper presents a balanced approach to AI regulation that preserves the boundless potential for AI development while upholding fairness and ethical responsibility. As AI systems become increasingly integrated into everyday life, it is essential to develop regulations that prevent bias and disadvantage against particular population groups. To ensure responsible development and deployment, the paper examines regulatory frameworks for AI applications together with case studies. Through this work, it aims to stimulate ongoing discussion on AI regulation and proposes policies that strike a balance between innovation and fairness.

안전한 AI 서비스를 위한 국내 정책 및 가이드라인 개선방안 연구 (A Study on the Improvement of Domestic Policies and Guidelines for Secure AI Services)

  • 김지연;석병진;김역;이창훈
    • 정보보호학회논문지, Vol. 33, No. 6, pp. 975-987, 2023
  • As advances in artificial intelligence accelerate, AI services that enable data-driven automation and intelligent operation are increasingly provided across industries, and concerns about the AI security risks that may arise from their use are growing. Recognizing the necessity and importance of AI regulation, countries abroad are focusing on preparing related policies and regulations. Korea is showing similar movement, but its AI regulation has yet to take concrete form. It is therefore necessary to compare and analyze existing policy proposals and guidelines, identify common elements and gaps, and discuss the direction of domestic AI regulation. This paper surveys the AI security risks that can arise across the AI life cycle and, by analyzing each risk, derives six considerations that should inform domestic AI regulation. On this basis, it analyzes domestic AI policy proposals and guidelines and identifies points requiring supplementation. It also reviews the main provisions of AI legislation in the U.S. and the EU and, building on the results of this analysis, proposes improvements to domestic AI policy proposals and guidelines.

Examining the Generative Artificial Intelligence Landscape: Current Status and Policy Strategies

  • Hyoung-Goo Kang;Ahram Moon;Seongmin Jeon
    • Asia Pacific Journal of Information Systems, Vol. 34, No. 1, pp. 150-190, 2024
  • This article proposes a framework to elucidate the structural dynamics of the generative AI ecosystem. It also outlines the practical application of the proposed framework through illustrative policies, with a specific emphasis on the development of the Korean generative AI ecosystem and its implications for platform strategies at the AI platform-squared level. We propose a comprehensive classification scheme within generative AI ecosystems, comprising app builders, technology partners, app stores, foundational AI models operating as operating systems, cloud services, and chip manufacturers. The market competitiveness of both app builders and technology partners will be highly contingent on their ability to effectively navigate the customer decision journey (CDJ) while offering localized services that fill the gaps left by foundational models. The strategically important platform of platforms in the generative AI ecosystem (i.e., the AI platform-squared) is constituted by app stores, foundational AIs as operating systems, and cloud services. A few companies, primarily in the U.S. and China, are projected to dominate this AI platform-squared, and consequently they are likely to become the primary targets of non-market strategies by diverse governments and communities. Korea still has chances in the AI platform-squared, but the window of opportunity is narrowing. A cautious approach is necessary when considering potential regulations for domestic large AI models and platforms. Hastily importing foreign regulatory frameworks and non-market strategies, such as those from Europe, could overlook the essential hierarchical structure that our framework underscores. Our study suggests a clear strategic pathway for Korea to emerge as a generative AI powerhouse. As one of the few countries with significant companies in both the foundational AI model sector (whose players need to collaborate with each other) and the chip manufacturing sector, Korea must leverage its unique position and strategically penetrate the platform-squared segment: app stores, operating systems, and cloud services. Given the potential network effects and winner-takes-all dynamics in the AI platform-squared, this endeavor is of immediate urgency. To facilitate this transition, it is recommended that the government implement promotional policies that strategically nurture these AI platform-squared segments, rather than restrict them through regulations and stakeholder pressures.
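
The layered classification described above can be made concrete as a small data model. The following Python sketch is illustrative only: the layer names follow the abstract, while the example entries and the is_platform_squared helper are hypothetical additions.

```python
from dataclasses import dataclass
from enum import Enum, auto


class EcosystemLayer(Enum):
    """Layers of the generative AI ecosystem named in the abstract."""
    APP_BUILDER = auto()
    TECHNOLOGY_PARTNER = auto()
    APP_STORE = auto()
    FOUNDATIONAL_MODEL_OS = auto()   # foundational AI models acting as operating systems
    CLOUD_SERVICE = auto()
    CHIP_MANUFACTURER = auto()


# The "AI platform-squared" is the platform of platforms formed by these three layers.
PLATFORM_SQUARED = {
    EcosystemLayer.APP_STORE,
    EcosystemLayer.FOUNDATIONAL_MODEL_OS,
    EcosystemLayer.CLOUD_SERVICE,
}


@dataclass
class EcosystemPlayer:
    name: str              # hypothetical example entries, not from the paper
    layer: EcosystemLayer

    def is_platform_squared(self) -> bool:
        """True if the player sits in the strategically critical platform of platforms."""
        return self.layer in PLATFORM_SQUARED


if __name__ == "__main__":
    players = [
        EcosystemPlayer("LocalAppStartup", EcosystemLayer.APP_BUILDER),
        EcosystemPlayer("FoundationModelVendor", EcosystemLayer.FOUNDATIONAL_MODEL_OS),
        EcosystemPlayer("FabCompany", EcosystemLayer.CHIP_MANUFACTURER),
    ]
    for p in players:
        role = "platform-squared" if p.is_platform_squared() else "complement"
        print(p.name, p.layer.name, role)
```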

국내 하수처리시설에 인공지능기술 적용을 위한 사례 연구 (The Case Studies of Artificial Intelligence Technology for apply at The Sewage Treatment Plant)

  • 김태우;이호식
    • 한국물환경학회지, Vol. 35, No. 4, pp. 370-378, 2019
  • In recent years, various studies have sought stable and economical ways to meet increasingly strict regulations and compliance requirements at sewage treatment plants. At some plants, effluent concentrations have exceeded regulatory limits or have even been manipulated, indicating that current operation and control processes are inefficient. Conventional operation and control methods model the physical and chemical mechanisms expected during operation mathematically, and they have limitations, such as mismatches with conditions at the actual plant. A more stable and economical way to enhance operation and control is therefore needed, and AI (Artificial Intelligence) technology is selected among the candidate methods. There are very few cases of applying AI technology at domestic sewage treatment plants, and how AI technologies should be applied has not been clearly defined. The purpose of this study is to present how AI technology can be applied to domestic sewage treatment plants by comparing and analyzing various cases. The study presents the AI algorithm system, verification method, data collection, and energy and operating costs as the elements of applying AI technology.
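
To make the kind of AI application discussed above concrete, the sketch below trains a simple data-driven model to predict an effluent quality indicator from routine sensor readings. It is a minimal illustration on synthetic data with assumed column names (inflow_rate, do_mg_l, mlss_mg_l, effluent_tn), not a reproduction of any plant model from the study.

```python
# Minimal sketch: predicting effluent total nitrogen from routine sensor data.
# Column names and data are assumptions for illustration, not from the paper.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "inflow_rate": rng.uniform(800, 1200, n),   # m3/h
    "do_mg_l": rng.uniform(1.0, 4.0, n),        # dissolved oxygen
    "mlss_mg_l": rng.uniform(2000, 4000, n),    # mixed liquor suspended solids
})
# Synthetic target: effluent total nitrogen responding to the inputs plus noise.
df["effluent_tn"] = (
    0.01 * df["inflow_rate"] - 1.5 * df["do_mg_l"]
    - 0.001 * df["mlss_mg_l"] + rng.normal(0, 0.5, n)
)

X, y = df.drop(columns="effluent_tn"), df["effluent_tn"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(f"MAE on held-out data: {mean_absolute_error(y_test, pred):.3f} mg/L")
```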

인공지능 수용의도에서 정부신뢰의 역할 (The Role of Confidence in Government in Acceptance Intention towards Artificial Intelligence)

  • 황서이;남영자
    • 디지털융복합연구, Vol. 18, No. 8, pp. 217-224, 2020
  • This study aims to offer policy implications for increasing the intention to accept artificial intelligence. To that end, it examined how the level of knowledge about AI and emotional factors affect acceptance intention, and used hierarchical regression analysis to test whether confidence in government moderates these effects. The findings are as follows. First, higher knowledge of AI was associated with greater acceptance intention, while more negative emotions toward AI were associated with lower acceptance intention; the magnitude of influence on acceptance intention was largest for emotions toward AI, followed by confidence in government and then knowledge. Second, higher confidence in government regulation was associated with greater acceptance intention, and in groups with low confidence in government regulation, emotions toward AI had a stronger effect on acceptance intention. Among demographic factors, religion also had a significant effect on acceptance intention, suggesting the need for follow-up research. The significance of this study lies in providing baseline data for AI research by empirically analyzing perceptions and judgments of AI through the variables of overall knowledge of and emotions toward AI and confidence in government regulation.
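
The moderation analysis described above can be sketched as a hierarchical regression with an interaction term. The sketch below uses statsmodels on synthetic data; the variable names (knowledge, emotion, gov_trust, acceptance) are assumptions standing in for the study's survey measures, not its actual dataset.

```python
# Minimal sketch of a hierarchical (moderated) regression on synthetic survey-style data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400
df = pd.DataFrame({
    "knowledge": rng.normal(0, 1, n),   # knowledge of AI
    "emotion": rng.normal(0, 1, n),     # negative emotion toward AI (higher = more negative)
    "gov_trust": rng.normal(0, 1, n),   # confidence in government regulation
})
# Synthetic acceptance intention with gov_trust moderating the effect of emotion.
df["acceptance"] = (
    0.3 * df["knowledge"] - 0.5 * df["emotion"] + 0.2 * df["gov_trust"]
    + 0.25 * df["emotion"] * df["gov_trust"] + rng.normal(0, 1, n)
)

# Step 1: main effects only.
step1 = smf.ols("acceptance ~ knowledge + emotion + gov_trust", data=df).fit()
# Step 2: add the interaction term to test moderation by confidence in government.
step2 = smf.ols("acceptance ~ knowledge + emotion * gov_trust", data=df).fit()

print("R^2 step 1:", round(step1.rsquared, 3), "-> step 2:", round(step2.rsquared, 3))
print("interaction coefficient:", round(step2.params["emotion:gov_trust"], 3))
```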

AI·DATA 서비스 분야 정부 규제혁신 노력 및 규제 불합리 인식이 기업들의 사업 지속을 위한 규제대응 노력에 미치는 영향 분석 (Analysis of the impact of government regulatory innovation efforts and regulatory irrationality perceptions in AI and DATA services on companies' regulatory response efforts to continue their businesses)

  • 송혜림;정명석;이주연
    • 시스템엔지니어링학술지, Vol. 20, No. 1, pp. 1-15, 2024
  • This study analyzes whether the government's regulatory innovation efforts affect the efforts that firms in new industries make to keep their new product and service businesses running, such as regulatory compliance and response, even when they perceive regulatory difficulties as business barriers. Previous studies on the impact of regulation on companies in new industries were limited in yielding implications for the regulatory issues and characteristics of each field, because they simplified regulatory indicators and aggregated across fields. To compensate for this, this study focuses on the field of AI and DATA services, subdivides regulatory issues into variables that capture practical inconvenience, and performs model fit assessment and hypothesis testing by applying Structural Equation Model analysis to survey results from related companies. The results show that, in AI and DATA services, "Perceived regulatory irrationality" and "Perceived government regulatory innovation efforts" significantly affect the "Regulatory environment satisfaction" of the regulated, and "Perceived regulatory irrationality" and "Regulatory environment satisfaction" in turn affect "Regulatory response efforts for companies in new industries to continue their businesses." The significance of this study is that it examines the factors affecting business continuity for companies in the AI and DATA service sector by linking the relationship between satisfaction and continuous use intention, which has mainly been studied in the Policy Acceptance Model and the IT service sector, to firms' efforts to continue their business in a new industrial regulatory environment. In addition, by presenting a new empirical model for new-industry regulation, the study provides a research foundation from which practical implications can be drawn in related fields.
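
A structural model of the kind described above can be written in lavaan-style syntax and fitted with the semopy package. The sketch below is purely illustrative: the latent constructs follow the abstract, but the indicator names (irr1, inn1, sat1, res1, ...) and the synthetic data are assumptions rather than the study's survey items.

```python
# Minimal SEM sketch using semopy (pip install semopy); data and indicator names are
# synthetic stand-ins for the survey constructs named in the abstract.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 300

# Latent scores following the structural relations reported in the abstract.
irrational = rng.normal(0, 1, n)      # perceived regulatory irrationality
innovation = rng.normal(0, 1, n)      # perceived government regulatory innovation efforts
satisfaction = -0.4 * irrational + 0.5 * innovation + rng.normal(0, 0.6, n)
response = 0.3 * irrational + 0.6 * satisfaction + rng.normal(0, 0.6, n)

def indicators(score, prefix):
    """Three noisy survey items per latent construct (hypothetical item names)."""
    return {f"{prefix}{i}": loading * score + rng.normal(0, 0.4, n)
            for i, loading in enumerate((1.0, 0.9, 0.8), start=1)}

data = pd.DataFrame({**indicators(irrational, "irr"),
                     **indicators(innovation, "inn"),
                     **indicators(satisfaction, "sat"),
                     **indicators(response, "res")})

desc = """
Irrationality =~ irr1 + irr2 + irr3
InnovationEffort =~ inn1 + inn2 + inn3
Satisfaction =~ sat1 + sat2 + sat3
ResponseEffort =~ res1 + res2 + res3
Satisfaction ~ Irrationality + InnovationEffort
ResponseEffort ~ Irrationality + Satisfaction
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # path and loading estimates
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```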

Exploratory Analysis of AI-based Policy Decision-making Implementation

  • SunYoung SHIN
    • International Journal of Internet, Broadcasting and Communication, Vol. 16, No. 1, pp. 203-214, 2024
  • This study seeks to provide implications for related domestic policies through an exploratory analysis in support of AI-based policy decision-making. The following should be considered when establishing an AI-based decision-making model in Korea. First, we need to understand the impact that the use of AI will have on policy and the service sector. The positive and negative impacts of AI use need to be better understood, guided by a public value perspective and taking into account the different levels of governance and interests across public policy and service sectors. Second, reliability is essential for implementing innovative AI systems. In most organizations today, comprehensive AI model frameworks for enabling and operationalizing trust, accountability, and transparency are insufficient or absent, with limited access to effective guidance, key practices, or government regulations. Third, the AI system must be accountable. The OECD AI Principles set out five value-based principles for the responsible stewardship of trustworthy AI: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability. Based on these, Korea needs to build an AI-based decision-making system, and efforts should be made to build one that can support policy by reflecting these principles. The limitation of this study is that it is an exploratory analysis of existing research data; future research collecting opinions from experts in related fields is suggested. The expected contribution of this study is analytical research on AI-based decision-making systems, which will support policy establishment and research in related fields.

Natural Selection in Artificial Intelligence: Exploring Consequences and the Imperative for Safety Regulations

  • Seokki Cha
    • Asian Journal of Innovation and Policy, Vol. 12, No. 2, pp. 261-267, 2023
  • In the paper 'Natural Selection Favors AIs over Humans,' Dan Hendrycks applies principles of Darwinian evolution to forecast potential trajectories of AI development. He proposes that competitive pressures within corporate and military realms could lead to AI replacing human roles and exhibiting self-interested behaviors. However, such claims risk oversimplifying the complex issues of competition and natural selection without clear criteria for judging whether an AI is selfish or altruistic, and they call for more in-depth analysis and critique. Other studies, such as 'The Threat of AI and Our Response: The AI Charter of Ethics in South Korea,' offer diverse views on natural selection in artificial intelligence, examining major threats that may arise from AI, including AI value judgments and malicious use, and emphasizing the need for immediate discussion of social solutions. Such contemplation is not merely a technical issue but is also significant from an ethical standpoint, requiring thoughtful consideration of how the development of AI can be harmonized with human welfare and values. It is also essential to emphasize the importance of cooperation between artificial intelligence and humans. Hendrycks's work, while speculative, is supported by historical observations that evolution proceeds inevitably given the right conditions, and it prompts deep contemplation of these issues, setting the stage for future research focused on AI safety, regulation, and ethical considerations.

Learning fair prediction models with an imputed sensitive variable: Empirical studies

  • Kim, Yongdai;Jeong, Hwichang
    • Communications for Statistical Applications and Methods, Vol. 29, No. 2, pp. 251-261, 2022
  • As AI exerts a wide-ranging influence on social life, issues of AI transparency and ethics are emerging. In particular, it is widely known that because data can carry historical bias that conflicts with ethics or regulatory frameworks for fairness, AI models trained on such biased data can also impose bias or unfairness on certain sensitive groups (e.g., non-white people, women). Demographic disparities due to AI, that is, socially unacceptable bias in which an AI model favors certain groups (e.g., white people, men) over others (e.g., black people, women), have been observed frequently in many applications of AI, and many recent studies have developed AI algorithms that remove or alleviate such demographic disparities in trained models. In this paper, we consider the problem of using the information in the sensitive variable for fair prediction when laws or regulations prohibit using the sensitive variable as an input variable in order to avoid unfairness. As a way of reflecting the information in the sensitive variable in prediction, we consider a two-stage procedure: the sensitive variable is fully included in the learning phase so that the prediction model depends on it, and an imputed sensitive variable is then used in the prediction phase. The aim of this paper is to evaluate this procedure on several benchmark datasets. We illustrate that using an imputed sensitive variable helps improve prediction accuracy without hampering the degree of fairness much.
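
The two-stage procedure described in the abstract can be sketched roughly as follows: fit the prediction model with the sensitive variable included, fit a separate imputation model for the sensitive variable from the non-sensitive features, and at prediction time feed the imputed value in place of the real one. The sketch below uses scikit-learn on synthetic data and illustrates the general idea only; it is not the authors' implementation or their benchmark setup.

```python
# Rough sketch of the two-stage procedure: train with the true sensitive variable,
# predict with an imputed one. Synthetic data; not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(0, 1, (n, 5))                          # non-sensitive features
s = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)   # sensitive variable, correlated with X
y = (X[:, 1] + 0.5 * s + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(
    X, s, y, test_size=0.3, random_state=0
)

# Stage 1: prediction model trained WITH the true sensitive variable.
clf = LogisticRegression().fit(np.column_stack([X_tr, s_tr]), y_tr)

# Imputation model: predict the sensitive variable from the non-sensitive features.
imputer = LogisticRegression().fit(X_tr, s_tr)

# Stage 2: at prediction time the true sensitive variable may not be used,
# so plug in the imputed value instead.
s_hat = imputer.predict(X_te)
y_pred = clf.predict(np.column_stack([X_te, s_hat]))

print("accuracy with imputed sensitive variable:", accuracy_score(y_te, y_pred))
```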