• Title/Summary/Keyword: adversarial attacks

High-Capacity Robust Image Steganography via Adversarial Network

  • Chen, Beijing; Wang, Jiaxin; Chen, Yingyue; Jin, Zilong; Shim, Hiuk Jae; Shi, Yun-Qing
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.1, pp.366-381, 2020
  • Steganography has been successfully employed in various applications, e.g., copyright control of materials, smart identity cards, and video error correction during transmission. Deep learning-based steganography models can hide information adaptively through network learning and have drawn much attention. However, the capacity, security, and robustness of existing deep learning-based steganography models are still not fully satisfactory. In this paper, three models are proposed for different cases: a basic model, a secure model, and a secure and robust model. In the basic model, high-capacity secret-information hiding and extraction are realized through an encoding network and a decoding network, respectively. The high-capacity steganography is implemented by hiding a secret image in a carrier image of the same resolution with the help of concatenation operations, InceptionBlocks, and convolutional layers. Moreover, the secret image is hidden only in the B channel of the carrier image to resolve the problem of color distortion. In the secure model, a steganalysis network is added to the basic model to form an adversarial network and enhance security. In the secure and robust model, an attack network is inserted into the secure model to further improve robustness. The experimental results demonstrate that the proposed secure model and secure and robust model perform better overall than some existing high-capacity deep learning-based steganography models: the secure model performs best in invisibility and security, and the secure and robust model is the most robust against several attacks.
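The division of labor between the encoding and decoding networks can be pictured with a minimal sketch. This is a simplified PyTorch illustration of the general encoder/decoder idea under assumed shapes, not the paper's InceptionBlock-based architecture; `StegoEncoder`, `StegoDecoder`, the layer widths, and the loss weighting are all hypothetical.

```python
# Minimal encoder/decoder steganography sketch (illustrative only; the paper's
# model uses InceptionBlocks plus steganalysis/attack networks on top of this).
import torch
import torch.nn as nn

class StegoEncoder(nn.Module):
    """Embeds a 3-channel secret image into the carrier's B channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),     # 1 (B) + 3 (secret)
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # stego B channel
        )

    def forward(self, carrier_b, secret):
        return self.net(torch.cat([carrier_b, secret], dim=1))

class StegoDecoder(nn.Module):
    """Recovers the secret image from the stego B channel alone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, stego_b):
        return self.net(stego_b)

encoder, decoder = StegoEncoder(), StegoDecoder()
carrier = torch.rand(8, 3, 64, 64)           # RGB carrier batch
secret = torch.rand(8, 3, 64, 64)            # same-resolution secret images
stego_b = encoder(carrier[:, 2:3], secret)   # modify channel B only
recovered = decoder(stego_b)

# Joint objective: keep the carrier's B channel intact (invisibility) while
# making the secret recoverable (capacity).
loss = nn.functional.mse_loss(stego_b, carrier[:, 2:3]) \
     + nn.functional.mse_loss(recovered, secret)
loss.backward()
```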

A Research on Adversarial Example-based Passive Air Defense Method against Object Detectable AI Drone (객체인식 AI적용 드론에 대응할 수 있는 적대적 예제 기반 소극방공 기법 연구)

  • Simun Yuk; Hweerang Park; Taisuk Suh; Youngho Cho
    • Journal of Internet Computing and Services, v.24 no.6, pp.119-125, 2023
  • The Ukraine-Russia war has prompted a reassessment of the military importance of drones, and North Korea demonstrated the threat directly through a drone provocation against South Korea in 2022. Furthermore, North Korea is actively integrating artificial intelligence (AI) technology into drones, heightening the threat they pose. In response, the Republic of Korea military has established the Drone Operations Command (DOC) and deployed various drone defense systems. However, there is concern that capability-enhancement efforts are disproportionately focused on strike systems, making it difficult to counter swarm drone attacks effectively. In particular, air force bases adjacent to urban areas face significant limitations on the use of traditional air defense weapons because of the risk of civilian casualties. Therefore, this study proposes a new passive air defense method that disrupts the object detection capability of AI models to enhance the survivability of friendly aircraft against the threat posed by AI-based swarm drones. Using laser-based adversarial examples, the study seeks to degrade the recognition accuracy of the object recognition AI installed on enemy drones. Experiments with synthetic images and precision-reduced models confirmed that the proposed method decreased the recognition accuracy of the object recognition AI from approximately 95% to around 0-15%, validating its effectiveness.
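For context, the adversarial-example mechanism the paper builds on can be sketched with a standard FGSM step (Goodfellow et al.). This digital-perturbation sketch is only a stand-in: the paper's actual attack uses physically projected laser patterns, and the model below is an assumed placeholder.

```python
# Standard FGSM sketch: one signed-gradient step that degrades a recognition
# model's accuracy. Purely illustrative; the paper's method crafts physically
# realizable laser-based perturbations instead of pixel-space noise.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # stand-in recognition model
x = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([0])                      # assumed true class

loss = torch.nn.functional.cross_entropy(model(x), label)
loss.backward()

eps = 8 / 255                                  # perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
# x_adv locally maximizes the loss, lowering recognition accuracy.
```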

Efficient Poisoning Attack Defense Techniques Based on Data Augmentation (데이터 증강 기반의 효율적인 포이즈닝 공격 방어 기법)

  • So-Eun Jeon; Ji-Won Ock; Min-Jeong Kim; Sa-Ra Hong; Sae-Rom Park; Il-Gu Lee
    • Convergence Security Journal, v.22 no.3, pp.25-32, 2022
  • Recently, the image processing industry has expanded as deep learning-based technology has been introduced into image recognition and detection. With the development of deep learning technology, vulnerabilities of learning models to adversarial attacks continue to be reported. However, studies on countermeasures against poisoning attacks, which inject malicious data during training, are insufficient. The conventional countermeasure against poisoning attacks is limited in that the training data must be examined each time in a separate detection and removal step. Therefore, in this paper, we propose a technique for reducing the attack success rate by applying modifications to the training data and inference data, without a separate detection and removal process for the poisoned data. The one-shot kill poison attack, a clean-label poison attack proposed in previous studies, was used as the attack model. Attack performance was evaluated for both a general attacker and an intelligent attacker, according to the attacker's strategy. The experimental results show that, when the proposed defense mechanism is applied, the attack success rate can be reduced by up to 65% compared with the conventional method.
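The gist of the defense, modifying both training and inference data so a clean-label poison's crafted features no longer line up, can be sketched with off-the-shelf augmentations. The specific transforms and parameters below are assumptions for illustration, not the paper's configuration.

```python
# Augmentation-based defense sketch: random transforms applied to training and
# inference data disrupt the feature collision a clean-label poison relies on.
# Transform choice and magnitudes are illustrative assumptions.
from PIL import Image
import torchvision.transforms as T

defense_transform = T.Compose([
    T.RandomResizedCrop(32, scale=(0.8, 1.0)),    # spatial jitter
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2),  # photometric jitter
    T.ToTensor(),
])

img = Image.new("RGB", (32, 32))    # placeholder CIFAR-size image
augmented = defense_transform(img)  # apply at train AND inference time
```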

A Study on Improving Data Poisoning Attack Detection against Network Data Analytics Function in 5G Mobile Edge Computing (5G 모바일 에지 컴퓨팅에서 빅데이터 분석 기능에 대한 데이터 오염 공격 탐지 성능 향상을 위한 연구)

  • Ji-won Ock; Hyeon No; Yeon-sup Lim; Seong-min Kim
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.3, pp.549-559, 2023
  • As mobile edge computing (MEC) gains attention as a core technology of 5G networks, edge AI based on mobile user data is being used in various fields of the 5G network environment. However, as in traditional AI security, the standard 5G network functions within the core network that are responsible for core edge AI functions are exposed to possible adversarial interference. In addition, research on data poisoning attacks that can occur in the standalone-mode MEC environment defined in the 3GPP 5G standards is still insufficient compared with that on existing LTE networks. In this study, we explore a threat model for the MEC environment using NWDAF, the network function responsible for the core edge AI functionality in 5G, and, as a proof of concept, propose a feature selection method that improves the detection of data poisoning attacks against Leaf NWDAF. With the proposed methodology, we achieved a maximum detection rate of 94.9% for Slowloris-based data poisoning attacks on NWDAF.
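As a rough picture of a select-features-then-detect pipeline of this kind, one might write something like the following; the mutual-information selector, random forest classifier, and synthetic features are all assumptions, not the paper's NWDAF data or exact method.

```python
# Sketch of feature selection followed by a poisoning detector. The selector,
# classifier, and random data are stand-ins for the paper's NWDAF analytics
# features and chosen techniques.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.random((1000, 20))    # stand-in network-analytics features
y = rng.integers(0, 2, 1000)  # 1 = poisoned traffic, 0 = benign

selector = SelectKBest(mutual_info_classif, k=8).fit(X, y)  # keep top features
detector = RandomForestClassifier().fit(selector.transform(X), y)
predictions = detector.predict(selector.transform(X))
```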

Top 10 Key Standardization Trends and Perspectives on Artificial Intelligence in Medicine (의료 인공지능 10대 표준화 동향 및 전망)

  • Jeon, J.H.; Lee, K.C.
    • Electronics and Telecommunications Trends, v.35 no.2, pp.1-16, 2020
  • "Artificial Intelligence+" is a key strategic direction that has garnered the attention of several global medical device manufacturers and internet companies. Large hospitals are actively involved in different types of medical AI research and cooperation projects. Medical AI is expected to create numerous opportunities and advancements in areas such as medical imaging, computer aided diagnostics and clinical decision support, new drug development, personal healthcare, pathology analysis, and genetic disease prediction. On the contrary, some studies on the limitations and problems in current conditions such as lack of clinical validation, difficulty in performance comparison, lack of interoperability, adversarial attacks, and computational manipulations are being published. Overall, the medical AI field is in a paradigm shift. Regarding international standardization, the work on the top 10 standardization issues is witnessing rapid progress and the competition for standard development has become fierce.

Research Trends of Adversarial Attacks in Image Segmentation (Segmentation 기반 적대적 공격 동향 조사)

  • Hong, Yoon-Young; Shin, Yeong-Jae; Choi, Chang-Woo; Kim, Ho-Won
    • Proceedings of the Korea Information Processing Society Conference, 2022.05a, pp.631-634, 2022
  • Deep learning-based image segmentation is one of the core areas of computer vision. As image segmentation techniques are used across diverse domains, defenses against, and robustness to, adversarial attacks that cause deep learning networks to malfunction are increasingly required, and adversarial attacks attract particular attention in areas where a model's security vulnerabilities can cause serious accidents, such as autonomous driving and disease analysis. This paper explains how attacks are distinguished according to the segmentation technique, describes the direction of recently studied adversarial attacks, and outlines the research topics under focused review to improve the efficiency of future computer vision research.
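To make the survey's subject concrete: an untargeted attack on a segmentation model typically takes a gradient step that inflates the per-pixel loss. The sketch below assumes a torchvision FCN model and an arbitrary perturbation budget, neither taken from the surveyed works.

```python
# Untargeted one-step attack sketch against semantic segmentation: increase
# the per-pixel cross-entropy so predicted masks degrade. Model choice and
# epsilon are illustrative assumptions.
import torch
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=21).eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)
mask = torch.zeros(1, 224, 224, dtype=torch.long)  # ground-truth labels

logits = model(x)["out"]                           # (1, 21, 224, 224)
loss = torch.nn.functional.cross_entropy(logits, mask)
loss.backward()
x_adv = (x + (4 / 255) * x.grad.sign()).clamp(0, 1).detach()
```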

Improving the Performance of Certified Defense Against Adversarial Attacks (적대적인 공격에 대한 인증 가능한 방어 방법의 성능 향상)

  • Go, Hyojun; Park, Byeongjun; Kim, Changick
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2020.11a, pp.100-102, 2020
  • Deep neural networks can easily be made to malfunction by adversarial examples generated through adversarial attacks. Various defense methods have accordingly been proposed, but the possibility remains that stronger adversarial attacks will be devised that neutralize them. This possibility underlines the need for certified defense methods, which can guarantee defense against any adversarial attack within a given perturbation bound. This paper therefore studies how to improve the performance of Interval Bound Propagation (IBP), known as one of the most effective certified defense methods. Specifically, we propose a modification of the training procedure of the existing IBP method and show that it improves performance while preserving the original training time. In experiments on the MNIST dataset, the proposed method lowered the verified error relative to the existing IBP method by 1.77% for the Large model and 0.96% for the Small model.
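For readers unfamiliar with IBP, the core bound-propagation rule (not the training modification proposed in the paper) can be sketched in a few lines: propagate an elementwise interval [l, u] through a linear layer via its center and radius, then through ReLU by monotonicity.

```python
# Standard IBP rule sketch: sound output bounds for Linear + ReLU given
# elementwise input bounds [l, u]. This shows the propagation itself, not the
# paper's modified training procedure.
import torch

def ibp_linear(l, u, W, b):
    mu, r = (l + u) / 2, (u - l) / 2       # interval center and radius
    mu_out = mu @ W.t() + b
    r_out = r @ W.abs().t()                # radius can only grow with |W|
    return mu_out - r_out, mu_out + r_out

def ibp_relu(l, u):
    return l.clamp(min=0), u.clamp(min=0)  # ReLU is monotone: apply to bounds

eps = 0.1
x = torch.zeros(1, 4)                      # nominal input
W, b = torch.randn(3, 4), torch.randn(3)
l1, u1 = ibp_relu(*ibp_linear(x - eps, x + eps, W, b))
assert (l1 <= u1).all()                    # bounds remain sound
# Certified training minimizes the worst-case loss implied by the final bounds.
```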

A Survey on Adversarial Attacks Against ASR Systems (ASR 시스템에 대한 오디오 적대적 공격 연구 동향 분석)

  • Na Hyun Kim; Yeon Joon Lee
    • Proceedings of the Korea Information Processing Society Conference, 2023.05a, pp.215-216, 2023
  • Research on audio adversarial attacks has advanced rapidly over the past few years. Previously, attacks were typically performed by directly modifying or adding to the speech signal, but adversarial attack techniques based on deep learning models have recently attracted attention. Such attacks can pose a serious security threat to ASR systems, which are now widely deployed in many fields. This paper therefore analyzes the research flow of speech-signal adversarial attack and defense techniques to date, aiming to contribute to building more robust ASR systems.

Performance Comparison of Neural Network Models for Adversarial Attacks by Autonomous Ships (자율주행 선박의 적대적 공격에 대한 신경망 모델의 성능 비교)

  • Tae-Hoon Her; Ju-Hyeong Kim; Na-Hyun Kim; So-Yeon Kim
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.1106-1107, 2023
  • As autonomous ship technology advances, the risk of adversarial attacks is growing. To address this, this study systematically compared and analyzed the performance of various neural network models in detecting adversarial attacks. Experiments were conducted with CNN, GRU, LSTM, and VGG16 models, among which VGG16 showed the highest detection performance. Based on these results, this study aims to suggest a reliable direction for building security models applicable to autonomous ships.
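As an illustration of the kind of detector being compared, VGG16 can be repurposed for binary clean-vs-adversarial classification by swapping its final layer; the input shapes, labels, and training details below are assumptions, not the study's dataset or protocol.

```python
# Sketch of a VGG16-based binary detector ("clean" vs. "adversarial" input).
# Data, labels, and training-loop details are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

detector = models.vgg16(weights=None)
detector.classifier[6] = nn.Linear(4096, 2)  # replace 1000-way head with 2-way

x = torch.rand(4, 3, 224, 224)  # stand-in shipboard sensor images
y = torch.tensor([0, 1, 0, 1])  # 0 = clean, 1 = adversarial
loss = nn.functional.cross_entropy(detector(x), y)
loss.backward()                 # one illustrative training step
```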

Development of Augmentation Method of Ballistic Missile Trajectory using Variational Autoencoder (변이형 오토인코더를 이용한 탄도미사일 궤적 증강기법 개발)

  • Dong Kyu Lee; Dong Wg Hong
    • Journal of the Korean Society of Systems Engineering, v.19 no.2, pp.145-156, 2023
  • The trajectory of a ballistic missile is determined by its inherent flight dynamics, which decide its range and maneuvering characteristics. Predicting the range and maneuvering characteristics of ballistic missiles is crucial in KAMD (Korea Air and Missile Defense) to minimize the damage caused by ballistic missile attacks. Nowadays, the need to apply AI (Artificial Intelligence) technologies is increasing owing to the rapid development of DNN (Deep Neural Network) technologies. Applying DNN technologies with supervised learning requires a large amount of data, but ballistic missile trajectory data are limited because of security issues. Trajectory data can be regarded as a multivariate time series containing many variables, and augmentation of time series data is a developing area of research. In this paper, we augmented ballistic missile trajectory data using recently developed methods: TimeVAE (Time Variational AutoEncoder) and TimeGAN (Time Generative Adversarial Networks) were used to synthesize missile trajectory data. We also compared the results of the two methods and analyzed them for future work.
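A minimal VAE over trajectory windows conveys the augmentation idea. This flat MLP sketch stands in for TimeVAE, which uses a sequence-aware architecture; every dimension and the class name below are assumptions.

```python
# Minimal trajectory-VAE sketch standing in for TimeVAE: encode a multivariate
# trajectory window, sample the latent, decode, and (after training) decode
# z ~ N(0, I) to synthesize new trajectories. All sizes are assumptions.
import torch
import torch.nn as nn

T_STEPS, N_VARS, LATENT = 100, 6, 16  # e.g., 100 steps x 6 state variables

class TrajectoryVAE(nn.Module):
    def __init__(self):
        super().__init__()
        d = T_STEPS * N_VARS
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT)
        self.logvar = nn.Linear(128, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                 nn.Linear(128, d))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z).view(-1, T_STEPS, N_VARS), mu, logvar

vae = TrajectoryVAE()
traj = torch.randn(32, T_STEPS, N_VARS)  # stand-in trajectory batch
recon, mu, logvar = vae(traj)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
loss = nn.functional.mse_loss(recon, traj) + kl  # ELBO-style objective
loss.backward()
```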