• Title/Summary/Keyword: AI Security

Search results: 399

A Methodology for SDLC of AI-based Defense Information System (AI 기반 국방정보시스템 개발 생명주기 단계별 보안 활동 수행 방안)

  • Gyu-do Park;Young-ran Lee
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.577-589 / 2023
  • The Ministry of National Defense plans to harness AI as a key technology for bolstering overall defense capability and cultivating an advanced, science- and technology-based military under the Defense Innovation 4.0 Plan. However, security threats arising from the characteristics of AI pose a real risk to AI-based defense information systems. To address them, systematic security activities must be carried out from the development stage onward. This paper proposes the security activities and considerations required at each stage of the development life cycle of an AI-based defense information system. The results are expected to help prevent security threats caused by applying AI technology to the defense field and to secure the safety and reliability of defense information systems.

A Study on the Problems of AI-based Security Control (AI 기반 보안관제의 문제점 고찰)

  • Ahn, Jung-Hyun;Choi, Young-Ryul;Baik, Nam-Kyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.452-454 / 2021
  • The security control (security monitoring) market currently operates on the basis of AI technology. AI is used to detect threats in the large volumes of logs and big data exchanged between security devices, and to ease time and staffing constraints. However, problems still occur when AI is applied. The market is responding to many issues beyond those introduced in this paper; this paper deals with five problems that arise when applying AI technology to security control environments: AI model selection, AI standardization, the accuracy of security control big data and the reliability of AI, attribution of responsibility, and lack of AI validity.
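
To make the log-volume problem above concrete, the following is a minimal illustrative sketch (not the method of this or any cited paper) of unsupervised anomaly scoring over aggregated security-log features using scikit-learn's IsolationForest; the feature layout, data, and threshold are hypothetical.

```python
# Illustrative sketch only: unsupervised anomaly scoring over security-log
# features, the kind of AI-based detection the abstract refers to.
# Feature names and data are hypothetical, not taken from the paper.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-source features: [events/min, distinct dst ports, failed logins]
normal = rng.normal(loc=[120, 8, 2], scale=[20, 2, 1], size=(1000, 3))
suspect = rng.normal(loc=[900, 60, 40], scale=[50, 10, 5], size=(10, 3))
X = np.vstack([normal, suspect])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.decision_function(X)           # lower score = more anomalous
alerts = np.where(model.predict(X) == -1)[0]  # indices flagged for analyst review

print(f"{len(alerts)} of {len(X)} log sources flagged for review")
```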


Analysis of the Security Requirements of the Chatbot Service Implementation Model (챗봇서비스 구현 모델의 보안요구사항 분석)

  • Kyu-min Cho;Jae-il Lee;Dong-kyoo Shin
    • Journal of Internet Computing and Services / v.25 no.1 / pp.167-176 / 2024
  • Chatbot services are used in various fields in connection with AI services. Security research on AI itself is still in its infancy, and research on practical security at the service implementation stage is even scarcer. This paper analyzes the security requirements for chatbot services linked to AI services. First, recently published papers and articles on AI security are reviewed. A general implementation model is then established by investigating chatbot services available on the market; the model comprises five components, including a chatbot management system and an AI engine. Based on this model, the assets to be protected and the threats specific to chatbot services are summarized. The threats are organized through a survey of managers of chatbot services in operation, yielding ten major threats. The security areas needed to cope with these threats are derived, and the security requirements for each area are analyzed. The results can be used as evaluation criteria when reviewing and improving the security level of a chatbot service.
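
The asset-threat-requirement mapping described above can be recorded as a simple traceability structure so that a reviewer can pull the checklist for any component of the implementation model. The sketch below only illustrates that idea; the threats, areas, and requirements shown are hypothetical placeholders, not the ten threats or the requirements actually derived in the paper.

```python
# Illustrative sketch: a traceability mapping from chatbot-service threats to
# security areas and requirements. All entries are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Requirement:
    threat: str          # threat specific to the chatbot service
    security_area: str   # component/area that must cover the threat
    requirement: str     # control to evaluate during a security review

REQUIREMENTS = [
    Requirement("Prompt/input injection via chat messages",
                "AI engine interface",
                "Validate and sanitize user input before it reaches the AI engine"),
    Requirement("Leakage of stored conversation logs",
                "Chatbot management system",
                "Encrypt stored conversations and restrict administrator access"),
]

def checklist_for(area: str) -> list[str]:
    """Return the requirements to review for one component of the model."""
    return [r.requirement for r in REQUIREMENTS if r.security_area == area]

print(checklist_for("Chatbot management system"))
```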

Study on the AI Speaker Security Evaluations and Countermeasure (AI 스피커의 보안성 평가 및 대응방안 연구)

  • Lee, Ji-seop;Kang, Soo-young;Kim, Seung-joo
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.6 / pp.1523-1537 / 2018
  • AI speakers provide useful functions such as music playback and online search through simple voice operation, so the AI speaker market is growing at a very fast pace. However, AI speakers constantly listen for the user's voice, which can lead to serious problems such as eavesdropping and exposure of personal information if they are subject to security threats. Therefore, to improve the overall security of AI speakers, potential security threats must be identified and analyzed systematically. In this paper, threat modeling is performed on four products with high market share. Data Flow Diagrams combined with the STRIDE and LINDDUN threat-modeling methodologies are used to derive a systematic and objective checklist for vulnerability assessment. Finally, we propose a method to improve the security of AI speakers by comparing the vulnerability analysis results and the vulnerabilities of each product.
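
STRIDE-per-element enumeration, which the paper combines with Data Flow Diagrams and LINDDUN, lends itself to mechanical checklist generation: each DFD element type is paired with the threat categories that apply to it. The sketch below illustrates this under stated assumptions; the element inventory is invented and does not describe the four AI speakers analyzed in the paper.

```python
# Illustrative sketch of STRIDE-per-element checklist generation from a DFD.
# The DFD element inventory below is hypothetical; the paper applies this kind
# of modeling (plus LINDDUN for privacy threats) to four commercial AI speakers.
STRIDE_PER_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process":         ["Spoofing", "Tampering", "Repudiation",
                        "Information disclosure", "Denial of service",
                        "Elevation of privilege"],
    "data_store":      ["Tampering", "Repudiation",
                        "Information disclosure", "Denial of service"],
    "data_flow":       ["Tampering", "Information disclosure", "Denial of service"],
}

# Hypothetical DFD elements for a voice-assistant pipeline.
DFD_ELEMENTS = [
    ("User voice command", "external_entity"),
    ("Speaker firmware / wake-word process", "process"),
    ("Voice recording cache", "data_store"),
    ("Speaker <-> cloud API traffic", "data_flow"),
]

def build_checklist(elements):
    """Pair every DFD element with the STRIDE categories that apply to its type."""
    return [(name, threat)
            for name, kind in elements
            for threat in STRIDE_PER_ELEMENT[kind]]

for name, threat in build_checklist(DFD_ELEMENTS):
    print(f"[ ] {name}: check for {threat}")
```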

ETRI AI Strategy #7: Preventing Technological and Social Dysfunction Caused by AI (ETRI AI 실행전략 7: AI로 인한 기술·사회적 역기능 방지)

  • Kim, T.W.;Choi, S.S.;Yeon, S.J.
    • Electronics and Telecommunications Trends / v.35 no.7 / pp.67-76 / 2020
  • With the development and spread of artificial intelligence (AI) technology, new security threats and adverse effects of AI have become a real problem as areas of use diversify and AI-based products and services reach users. In response, new AI-based technologies need to be developed in the field of information protection and security. This paper reviews domestic and international trends in false-information detection technology, cybersecurity technology, and trust distribution platform technology, and sets the direction for promoting their development. In addition, international trends in ethical AI guidelines, which aim to ensure the human-centered ethical validity of AI development processes and final systems in parallel with technology development, are analyzed and discussed. Based on its capabilities, ETRI has derived tasks and implementation strategies for developing AI policing, information protection, and security technologies, and for preparing ethical AI development guidelines to ensure the reliability of AI.

Artificial Intelligence and Fintech Security (인공지능과 핀테크 보안)

  • Choi, Daeseon
    • Review of KIISC / v.26 no.2 / pp.35-38 / 2016
  • This paper examines deep learning techniques that can be applied to fintech security. AI-related security issues are first divided into three categories: protecting people from threats posed by AI (Security FROM AI), protecting AI systems and services from malicious attacks (Security OF AI), and using AI technology to solve security problems (Security BY AI). As part of Security BY AI, deep-learning-based anomaly detection and regression techniques are explained, along with ways to apply them to fintech security problems such as fraudulent transaction detection, biometric authentication, phishing and pharming detection, identity verification, identity theft detection, and counterparty trustworthiness analysis.
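
As an illustration of the "Security BY AI" category, the following minimal sketch shows deep-learning-based anomaly detection via autoencoder reconstruction error, the family of techniques the article describes for fraudulent transaction detection. The feature layout, training data, and threshold are hypothetical assumptions for the example, not the article's implementation.

```python
# Illustrative sketch of "Security BY AI": an autoencoder whose reconstruction
# error flags anomalous transactions. Features, data, and threshold are
# hypothetical; the article surveys this family of techniques, not this code.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical transaction features: amount, hour-of-day, merchant risk, velocity.
normal_tx = torch.randn(2000, 4) * 0.5

autoencoder = nn.Sequential(
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, 2),            # bottleneck forces a compact model of "normal"
    nn.Linear(2, 8), nn.ReLU(),
    nn.Linear(8, 4),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):            # train only on presumed-normal transactions
    opt.zero_grad()
    loss = loss_fn(autoencoder(normal_tx), normal_tx)
    loss.backward()
    opt.step()

def anomaly_score(tx: torch.Tensor) -> torch.Tensor:
    """Per-transaction reconstruction error; large values suggest fraud."""
    with torch.no_grad():
        return ((autoencoder(tx) - tx) ** 2).mean(dim=1)

threshold = anomaly_score(normal_tx).quantile(0.99)   # hypothetical cut-off
suspicious = torch.tensor([[8.0, 3.0, 5.0, 9.0]])     # clearly off-distribution
print(bool(anomaly_score(suspicious) > threshold))
```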

Cybersecurity Audit of 5G Communication-based IoT, AI, and Cloud Applied Information Systems (5G 통신기반 IoT, AI, Cloud 적용 정보시스템의 사이버 보안 감리 연구)

  • Im, Hyeong-Do;Park, Dea-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.428-434 / 2020
  • Recently, with the development of ICT, changes to the convergence service platforms of information systems are accelerating, and convergence services extended to cyber systems with 5G communication, IoT, AI, and cloud are being deployed in the real world. However, cybersecurity auditing for responding to cyber attacks and security threats and for strengthening security technology remains insufficient. In this paper, we analyze international standards for information security management systems, existing security audit practices, and the security of related systems in light of the expansion of 5G-, IoT-, AI-, and cloud-based information systems. We also design cybersecurity audit checklists and audit contents for the expanded security scope arising from cyber attacks and security threats against these systems. This study can serve as basic data on audit methods and contents for coping with cyber attacks and security threats as convergence services based on 5G, IoT, AI, and cloud expand.

Guideline on Security Measures and Implementation of Power System Utilizing AI Technology (인공지능을 적용한 전력 시스템을 위한 보안 가이드라인)

  • Choi, Inji;Jang, Minhae;Choi, Moonsuk
    • KEPCO Journal on Electric Power and Energy / v.6 no.4 / pp.399-404 / 2020
  • There are many attempts to apply AI technology to diagnose facilities or improve work efficiency in the power industry, and the emergence of new machine learning technologies such as deep learning is accelerating the digital transformation of the power sector. The problem is that traditional power systems face security risks when adopting state-of-the-art AI systems: the adoption has convergence characteristics and exposes the power system to new cybersecurity threats and vulnerabilities. This paper deals with security measures and their implementation for power systems that use machine learning. By building a commercial facility operations forecasting system based on machine learning over power big data, it identifies and addresses the security vulnerabilities that must be remedied to protect customer information and the safety of the power system. It then generalizes these measures into security guidelines to be considered when applying AI.
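
For context, a consumption-based forecasting pipeline of the kind the guideline is built around might resemble the minimal sketch below; the features and data are hypothetical and are not the paper's system. The security-relevant point is that such a pipeline ingests customer power-usage data, which is why measures such as access control and data protection must be applied around it.

```python
# Illustrative sketch: a machine-learning forecast over (hypothetical) power
# big data, the kind of AI application the security guideline addresses.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
hour = rng.integers(0, 24, n)
weekday = rng.integers(0, 7, n)
prev_kwh = rng.gamma(2.0, 50.0, n)            # previous-interval consumption
X = np.column_stack([hour, weekday, prev_kwh])
# Synthetic target: higher usage on weekday business hours plus noise.
y = 0.6 * prev_kwh + 10 * (weekday < 5) * ((9 <= hour) & (hour < 18)) + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.2f}")
```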

A Model of Artificial Intelligence in Cyber Security of SCADA to Enhance Public Safety in UAE

  • Omar Abdulrahmanal Alattas Alhashmi;Mohd Faizal Abdullah;Raihana Syahirah Abdullah
    • International Journal of Computer Science & Network Security / v.23 no.2 / pp.173-182 / 2023
  • The UAE government has set its sights on creating a smart, electronic-based government system that utilizes AI. The country's collaboration with India aims to bring substantial returns through AI innovation, with a target of over $20 billion in the coming years. To achieve this goal, the UAE launched its AI strategy in 2017, focused on improving performance in key sectors and becoming a leader in AI investment. To ensure public safety as the role of AI in government grows, the country is working on developing integrated cyber security solutions for SCADA systems. A questionnaire-based study was conducted, using the AI IQ Threat Scale to measure the variables in the research model. The sample consisted of 200 individuals from the UAE government, private sector, and academia, and data was collected through online surveys and analyzed using descriptive statistics and structural equation modeling. The results indicate that the AI IQ Threat Scale was effective in measuring the four main attacks and defense applications of AI. Additionally, the study reveals that AI governance and cyber defense have a positive impact on the resilience of AI systems. This study makes a valuable contribution to the UAE government's efforts to remain at the forefront of AI and technology exploitation. The results emphasize the need for appropriate evaluation models to ensure a resilient economy and improved public safety in the face of automation. The findings can inform future AI governance and cyber defense strategies for the UAE and other countries.

Current Status of and Solutions to Ethical Issues of AI in the Financial Sector (금융분야 AI의 윤리적 문제 현황과 해결방안)

  • Lee, Su Ryeon;Lee, Hyun Jung;Lee, Aram;Choi, Eun Jung
    • Review of KIISC / v.32 no.3 / pp.57-64 / 2022
  • As the use of AI becomes more widespread in our society, the social demand for trustworthy AI has also grown. In particular, the recent incident involving the conversational AI "Iruda" (이루다) has intensified the debate on AI ethics. In the financial sector, AI is used in many ways, such as robo-advisors and insurance underwriting, but ethical issues remain a major obstacle to its wider adoption. This paper classifies the ethical problems that AI can cause according to the application domain and the data analysis pipeline. Ethical issues are categorized by area of financial AI technology, ethics cases are presented for each area, and countermeasures are proposed for each category, together with how these issues are handled overseas and in Korea. This study is expected to raise awareness of ethical issues alongside the development of financial AI technology. We hope that financial AI technology will grow in harmony with AI ethics, and that keeping ethical issues in mind when establishing financial AI policy will reduce problems stemming from non-compliance with AI ethical norms, such as discrimination and personal information leakage, and will further promote the use of AI in the financial sector.