• Title/Abstract/Keyword: Adversarial Defense


적대적 공격에 견고한 Perceptual Ad-Blocker 기법 (Perceptual Ad-Blocker Design For Adversarial Attack)

  • 김민재;김보민;허준범
    • 정보보호학회논문지 / Vol. 30, No. 5 / pp. 871-879 / 2020
  • Perceptual ad-blocking is a new ad-blocking technique that detects online advertisements using an AI-based ad-image classification model. Recent work has shown that perceptual ad-blocking is vulnerable to adversarial attacks using adversarial examples, i.e., images with noise added so that the classification model misclassifies them. In this paper, we demonstrate the vulnerability of the existing perceptual ad-blocking scheme with a variety of adversarial examples, and show that Defense-GAN and MagNet, which defend successfully on datasets such as MNIST and CIFAR-10, are also effective on ad images. Using Defense-GAN and MagNet, we then present a new ad-image classification model that is robust to adversarial attacks. Experiments with a range of existing adversarial attack techniques show that the proposed approach recovers the accuracy and performance the classification model had before the attack, and that it also provides a degree of protection against white-box attacks in which the attacker knows the details of the defense. (A minimal purification sketch in the spirit of Defense-GAN follows this entry.)
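
A minimal sketch, not the paper's implementation, of the Defense-GAN-style purification this entry relies on: an input is projected onto the range of a pretrained generator by optimizing the latent code, and the projection is classified instead of the raw (possibly adversarial) image. The generator `G`, its latent dimension, and the classifier `clf` in the usage comment are placeholders.

```python
# Defense-GAN-style purification sketch; `G` and `clf` are assumed pretrained models.
import torch

def defense_gan_purify(x, G, latent_dim=128, steps=200, lr=0.05, restarts=4):
    """Project x onto the generator's range before classification."""
    best_z, best_err = None, float("inf")
    for _ in range(restarts):                      # random restarts reduce bad local minima
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            err = torch.mean((G(z) - x) ** 2)      # reconstruction error in pixel space
            err.backward()
            opt.step()
        if err.item() < best_err:
            best_err, best_z = err.item(), z.detach()
    return G(best_z)                               # purified image fed to the classifier

# usage (placeholder names): logits = clf(defense_gan_purify(x_adv, G))
```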

Defending and Detecting Audio Adversarial Example using Frame Offsets

  • Gong, Yongkang;Yan, Diqun;Mao, Terui;Wang, Donghua;Wang, Rangding
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 4 / pp. 1538-1552 / 2021
  • Machine learning models are vulnerable to adversarial examples generated by adding a deliberately designed perturbation to a benign sample. In particular, for an automatic speech recognition (ASR) system, a benign-sounding audio clip can be decoded as a harmful command under an adversarial attack. In this paper, we focus on countermeasures against audio adversarial examples. By analyzing the characteristics of ASR systems, we find that the frame offset introduced by appending a silence clip to the beginning of an audio signal can degrade adversarial perturbations into ordinary noise. For various scenarios, we exploit frame offsets with different strategies, such as defense, detection, and a hybrid of the two. Compared with previous methods, the proposed method defends against audio adversarial examples in a simpler, more generic, and more efficient way. Evaluated against three state-of-the-art adversarial attacks on different ASR systems, the experimental results demonstrate that the proposed method effectively improves the robustness of ASR systems. (A minimal sketch of the silence-offset idea follows this entry.)
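
A minimal sketch of the frame-offset idea, assuming a generic `asr_decode` callable (a placeholder, not any specific ASR system): silence is prepended so the framing of the waveform shifts, and inconsistent transcriptions across offsets are treated as a sign of attack. The offset lengths are illustrative.

```python
# Frame-offset defense/detection sketch; `asr_decode` is a placeholder ASR transcription function.
import numpy as np

def shift_with_silence(waveform, sample_rate=16000, offset_ms=20):
    """Prepend `offset_ms` of silence so frame boundaries move by that offset."""
    pad = np.zeros(int(sample_rate * offset_ms / 1000), dtype=waveform.dtype)
    return np.concatenate([pad, waveform])

def detect_adversarial(waveform, asr_decode, offsets_ms=(10, 20, 30)):
    """Flag an input whose transcription changes once the frames are offset."""
    base = asr_decode(waveform)
    shifted = [asr_decode(shift_with_silence(waveform, offset_ms=o)) for o in offsets_ms]
    return any(t != base for t in shifted)   # inconsistent decodings suggest an attack
```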

객체탐지 모델에 대한 위장형 적대적 패치 공격 (Camouflaged Adversarial Patch Attack on Object Detector)

  • 김정훈;양훈민;오세윤
    • 한국군사과학기술학회지 / Vol. 26, No. 1 / pp. 44-53 / 2023
  • Adversarial attacks have received great attention for their capacity to distract state-of-the-art neural networks by modifying objects in the physical domain. Patch-based attacks in particular have attracted attention because they optimize effectively and can be attached to arbitrary objects to attack neural-network-based object detectors. However, despite their strong attack performance, the generated patches are highly perceptible to humans, violating a fundamental assumption of adversarial examples. In this paper, we propose a camouflaged adversarial patch optimization method that uses military camouflage assessment metrics to produce naturalistic patch attacks. We also investigate camouflaged attack loss functions and the application of various camouflaged patches to army tank images, and validate the proposed approach with extensive experiments attacking the YOLOv5 detection model. Our method produces more natural and realistic-looking camouflaged patches while achieving competitive attack performance. (A rough patch-optimization sketch follows this entry.)
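
A rough sketch of how a camouflage-constrained patch loop might look; the detector loss `det_loss`, the patch placement `mask`, and the camouflage colour `palette` are all placeholders, and the simple palette-distance plus total-variation regularizer stands in for the military camouflage assessment metrics used in the paper.

```python
# Camouflage-constrained adversarial patch sketch; det_loss/mask/palette are placeholders.
import torch

def total_variation(p):
    """Smoothness term that discourages high-frequency, conspicuous noise."""
    return (p[:, :, 1:, :] - p[:, :, :-1, :]).abs().mean() + \
           (p[:, :, :, 1:] - p[:, :, :, :-1]).abs().mean()

def optimize_patch(image, mask, det_loss, palette, steps=500, lr=0.01, lam_tv=0.1, lam_cam=1.0):
    patch = torch.rand(1, 3, *image.shape[-2:], requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        patched = image * (1 - mask) + patch.clamp(0, 1) * mask   # paste the patch into the scene
        # camouflage term: keep each patch pixel close to some colour in the palette (K x 3 tensor)
        dists = torch.cdist(patch.clamp(0, 1).flatten(2).transpose(1, 2), palette[None])
        loss = det_loss(patched) + lam_tv * total_variation(patch) + lam_cam * dists.min(-1).values.mean()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```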

국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구 (Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks)

  • 양훈민
    • 한국군사과학기술학회지 / Vol. 22, No. 1 / pp. 49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional, complex data distributions implicitly and to generate new data samples from the model distribution. This paper investigates the training methodology, architectures, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense applications using two types of GANs: a deep convolutional generative adversarial network (DCGAN) for military image generation, and a cycle-consistent generative adversarial network (CycleGAN) for visible-to-infrared image translation. Each model can yield a great diversity of high-fidelity synthetic images compared with the training images. This result opens up the possibility of using inexpensive synthetic images to train neural networks while avoiding the enormous expense of collecting large amounts of hand-annotated real data. (A minimal DCGAN generator sketch follows this entry.)
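
A minimal sketch of a DCGAN-style generator of the kind such a pipeline would train; the 100-dimensional latent, 64×64 RGB output, and layer widths are common defaults rather than the paper's exact architecture.

```python
# DCGAN-style generator sketch; sizes are common defaults, not the paper's settings.
import torch.nn as nn

def dcgan_generator(latent_dim=100, feat=64):
    return nn.Sequential(
        nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),  # 1x1 -> 4x4
        nn.BatchNorm2d(feat * 8), nn.ReLU(True),
        nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),    # 4x4 -> 8x8
        nn.BatchNorm2d(feat * 4), nn.ReLU(True),
        nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),    # 8x8 -> 16x16
        nn.BatchNorm2d(feat * 2), nn.ReLU(True),
        nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),        # 16x16 -> 32x32
        nn.BatchNorm2d(feat), nn.ReLU(True),
        nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),               # 32x32 -> 64x64
        nn.Tanh(),
    )

# usage: fake = dcgan_generator()(torch.randn(16, 100, 1, 1))
```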

딥러닝 기반 적외선 객체 검출을 위한 적대적 공격 기술 연구 (Adversarial Attacks for Deep Learning-Based Infrared Object Detection)

  • 김호성;현재국;유현정;김춘호;전현호
    • 한국군사과학기술학회지 / Vol. 24, No. 6 / pp. 591-601 / 2021
  • Recently, infrared object detection (IOD) has been studied extensively owing to the rapid growth of deep neural networks (DNNs). Adversarial attacks using imperceptible perturbations can dramatically deteriorate the performance of a DNN. However, most work on adversarial attacks has focused on visible image recognition (VIR), and there are few methods for IOD. We propose deep-learning-based adversarial attacks for IOD by extending several state-of-the-art adversarial attacks for VIR, and validate our claim through comprehensive experiments on two challenging IOD datasets, FLIR and MSOD. (A minimal detector-attack sketch follows this entry.)
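
A minimal sketch of how a single-step FGSM-style attack carries over to a detector, assuming a placeholder `detection_loss(model, image, targets)` that returns whatever loss the attacked detector exposes; the epsilon is illustrative.

```python
# FGSM-style attack on a detector; `detection_loss` is a placeholder for the detector's loss.
import torch

def fgsm_on_detector(model, image, targets, detection_loss, eps=4 / 255):
    image = image.clone().detach().requires_grad_(True)
    loss = detection_loss(model, image, targets)   # e.g. sum of box/objectness/class losses
    loss.backward()
    adv = image + eps * image.grad.sign()          # one signed-gradient step
    return adv.clamp(0, 1).detach()
```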

High Representation based GAN defense for Adversarial Attack

  • Sutanto, Richard Evan;Lee, Suk Ho
    • International journal of advanced smart convergence / Vol. 8, No. 1 / pp. 141-146 / 2019
  • These days, many applications use neural networks as parts of their systems. At the same time, adversarial examples have become an important issue concerning the security of neural networks: a neural-network classifier can be fooled into misclassification by an adversarial example. Much research counters adversarial examples with denoising methods, some of which use a GAN (Generative Adversarial Network) to remove adversarial noise from input images. By producing an image from the generator network that is close enough to the original clean image, the effect of the adversarial example can be reduced. However, because adversarial noise is unlike ordinary noise, some of it can survive this approximation process. We therefore propose an approach that utilizes the high-level representation in the classifier by combining a GAN with a trained U-Net network. The approach minimizes a loss defined on high-representation terms, i.e., the difference between the high-level representation of the clean data and that of the approximated output of the noisy data in the training dataset. Furthermore, the generated output is checked for whether it yields minimum error with respect to the true label; the U-Net is trained with the true labels to ensure this. As a result, the adversarial noise that remains after the low-level approximation can be removed by the U-Net thanks to the minimization of the high-representation terms. (A minimal sketch of such a combined loss follows this entry.)
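
A minimal sketch of a combined low-level and high-representation loss of the kind described, assuming placeholder `denoiser` (e.g. a U-Net) and `feature_extractor` (the classifier truncated at a late layer) modules; the weighting is illustrative.

```python
# Pixel reconstruction plus high-representation matching; denoiser/feature_extractor are placeholders.
import torch.nn.functional as F

def high_representation_loss(denoiser, feature_extractor, noisy, clean, lam=1.0):
    restored = denoiser(noisy)
    pixel_term = F.mse_loss(restored, clean)                 # low-level approximation term
    feat_term = F.mse_loss(feature_extractor(restored),
                           feature_extractor(clean))         # high-representation term
    return pixel_term + lam * feat_term
```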

BM3D and Deep Image Prior based Denoising for the Defense against Adversarial Attacks on Malware Detection Networks

  • Sandra, Kumi;Lee, Suk-Ho
    • International journal of advanced smart convergence / Vol. 10, No. 3 / pp. 163-171 / 2021
  • Recently, visualization-based machine learning approaches have been proposed for malware detection. Unfortunately, these techniques are exposed to adversarial examples: noise that can deceive a deep-learning-based malware detection network so that the malware becomes unrecognizable. To address this shortcoming, we present a denoising technique based on the Block-Matching and 3D Filtering (BM3D) algorithm and the deep image prior to defend visualization-based malware detection systems against adversarial examples. The BM3D-based denoising eliminates most of the adversarial noise; the deep-image-prior-based denoising then removes the remaining subtle noise. Experimental results on the MS BIG malware dataset and benign samples show that the proposed denoising-based defense recovers, to some extent, the performance of the attacked CNN malware detection model. (A minimal two-stage denoising sketch follows this entry.)
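
A minimal sketch of the two-stage pipeline, assuming the `bm3d` PyPI package exposes `bm3d.bm3d(image, sigma_psd)` (an assumption about that library) and using a tiny untrained CNN fitted for a fixed number of steps as the deep-image-prior stage; the network and iteration count are illustrative, not the paper's settings.

```python
# Two-stage denoising sketch: BM3D (assumed library API) followed by a deep-image-prior fit.
import bm3d
import torch
import torch.nn as nn

def denoise(gray_image, sigma=0.1, dip_steps=300):
    stage1 = bm3d.bm3d(gray_image, sigma_psd=sigma)            # stage 1: BM3D on a float 2D array
    target = torch.from_numpy(stage1).float()[None, None]      # shape (1, 1, H, W)
    net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1))
    z = torch.randn_like(target)                               # fixed random input to the prior net
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(dip_steps):                                 # limited steps act as the image prior
        opt.zero_grad()
        loss = ((net(z) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach().squeeze().numpy()                   # stage 2: DIP-smoothed output
```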

적대적 공격을 방어하기 위한 StarGAN 기반의 탐지 및 정화 연구 (StarGAN-Based Detection and Purification Studies to Defend against Adversarial Attacks)

  • 박성준;류권상;최대선
    • 정보보호학회논문지 / Vol. 33, No. 3 / pp. 449-458 / 2023
  • Artificial intelligence, powered by big data and deep learning, brings convenience to many areas of life. However, deep learning is highly vulnerable to adversarial examples, which induce misclassification in classification models. This study proposes a method that uses StarGAN to detect and purify a variety of adversarial attacks. The proposed method trains a StarGAN model augmented with a categorical entropy loss on adversarial examples generated by multiple attack methods, so that the discriminator detects adversarial examples and the generator purifies them. Experiments on the CIFAR-10 dataset showed an average detection rate of about 68.77%, an average purification rate of about 72.20%, and an average defense rate, derived from the detection and purification results, of about 93.11%. (A sketch of a categorical-entropy term follows this entry.)
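
A minimal sketch of a categorical-entropy term of the kind this entry adds to StarGAN: a purified image should yield a confident (low-entropy) class distribution from the target classifier. How the term is weighted into the full StarGAN objective is not reproduced here; the names in the usage comment are placeholders.

```python
# Categorical entropy of classifier outputs, usable as an extra generator loss term.
import torch
import torch.nn.functional as F

def categorical_entropy(logits):
    """Shannon entropy of the softmax distribution, averaged over the batch."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-12)).sum(dim=1).mean()

# usage inside generator training (placeholder names):
# loss_G = loss_adv + lam_ent * categorical_entropy(classifier(generator(x_adv)))
```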

Adversarial Attacks and Defense Strategy in Deep Learning

  • Sarala D.V;Thippeswamy Gangappa
    • International Journal of Computer Science & Network Security / Vol. 24, No. 1 / pp. 127-132 / 2024
  • With the rapid evolution of the Internet, artificial intelligence is being applied ever more widely, and the era of AI has arrived. At the same time, adversarial attacks in the AI field are frequent, so research into the security implications of adversarial attacks is urgent, and an increasing number of researchers are working in this field. We provide a comprehensive review of the theories and methods that enable researchers to enter the field of adversarial attacks. The article is organized along a "Why? → What? → How?" line of inquiry: first we explain the significance of adversarial attacks; then we introduce their concepts, types, and hazards; finally, we review the typical attack algorithms and defense techniques in each application area. In view of increasingly complex neural network models, the paper focuses on the image, text, and malicious-code domains and on the classification of adversarial attack methods for these three data types, so that researchers can quickly locate the work relevant to their own studies. At the end of the review, we also raise some discussion points and open issues and compare this survey with other similar reviews.

객체인식 AI적용 드론에 대응할 수 있는 적대적 예제 기반 소극방공 기법 연구 (A Research on Adversarial Example-based Passive Air Defense Method against Object Detectable AI Drone)

  • 육심언;박휘랑;서태석;조영호
    • 인터넷정보학회논문지 / Vol. 24, No. 6 / pp. 119-125 / 2023
  • The military value of drones has been reassessed through the Russia-Ukraine war, and North Korea completed a real-world demonstration with its drone incursion against South Korea at the end of 2022. North Korea has also been shown to be pursuing the application of artificial intelligence (AI) to drones, so the drone threat grows by the day. In response, the South Korean military is building counter-drone capabilities, including establishing a Drone Operations Command and introducing various counter-drone systems, but this buildup is concentrated on kinetic strike systems, raising concerns about an effective response to swarm-drone attacks. In particular, air force wings adjacent to urban areas face severe restrictions on the use of conventional air defense weapons because of the risk of civilian damage. This study therefore proposes a passive air defense technique that degrades the object detection capability of AI models in order to improve the survivability of friendly aircraft against AI-equipped enemy swarm drones. By using a laser to project an adversarial example, one of the representative adversarial machine learning techniques, onto an aircraft, we aim to lower the recognition rate of the object detection AI mounted on enemy drones. Experiments with synthetic images and a precision scale model confirmed that the recognition rate of an object detection AI, about 95% before applying the proposed technique, dropped to roughly 0-15% afterward, validating its effectiveness. (A crude software emulation of this idea is sketched below.)
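
A crude software emulation, not the paper's method: overlay a bright pattern (a stand-in for projected laser light) on the target region and compare a generic torchvision detector's confidence before and after. The pattern, its placement, and the random stand-in image are illustrative assumptions; the paper optimizes an actual laser-projected adversarial example.

```python
# Crude emulation of bright-light projection and its effect on a stock detector's confidence.
import torch
import torchvision

def max_confidence(model, image):
    with torch.no_grad():
        return max((s.item() for s in model([image])[0]["scores"]), default=0.0)

def laser_overlay(image, box, intensity=0.9):
    """Blend a bright rectangular pattern into the region given by (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    out = image.clone()
    out[:, y1:y2, x1:x2] = (1 - intensity) * out[:, y1:y2, x1:x2] + intensity
    return out.clamp(0, 1)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)                # stand-in for an aircraft photo, values in [0, 1]
drop = max_confidence(model, image) - max_confidence(model, laser_overlay(image, (200, 150, 440, 330)))
```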