• Title/Summary/Keyword: adversarial attacks

Adversarial Attacks and Defense Strategy in Deep Learning

  • Sarala D.V; Thippeswamy Gangappa
    • International Journal of Computer Science & Network Security, v.24 no.1, pp.127-132, 2024
  • With the rapid evolution of the Internet, artificial intelligence has found ever wider application and the era of AI has arrived; at the same time, adversarial attacks on AI systems have become frequent, making research into adversarial attack security urgent, and a growing number of researchers are working in this field. We provide a comprehensive review of the theories and methods that enable researchers to enter the field of adversarial attacks. The article follows a "Why? → What? → How?" line of exposition: first, we explain the significance of adversarial attacks; then, we introduce their concepts, types, and hazards; finally, we review the typical attack algorithms and defense techniques in each application area. Facing increasingly complex neural network models, the paper concentrates on the image, text, and malicious-code domains, and on the adversarial attack classifications and methods for these three data types, so that researchers can quickly locate work on their own type of data. The review closes with discussion, open issues, and a comparison with other similar reviews.
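
To ground the concepts this review introduces, here is a minimal sketch of the canonical one-step gradient attack (FGSM) in PyTorch; the model, loss function, and epsilon budget are generic placeholders rather than anything specified in the paper.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    """One-step Fast Gradient Sign Method: nudge each input pixel in the
    direction that increases the loss, bounded by epsilon in L-infinity."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step along the gradient sign, then clamp back to a valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```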

Secure Self-Driving Car System Resistant to the Adversarial Evasion Attacks (적대적 회피 공격에 대응하는 안전한 자율주행 자동차 시스템)

  • Seungyeol Lee; Hyunro Lee; Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.6, pp.907-917, 2023
  • Recently, self-driving cars have applied deep learning technology to advanced driver assistance systems, providing convenience to drivers, but deep learning has been shown to be vulnerable to adversarial evasion attacks. In this paper, we perform five adversarial evasion attacks, including MI-FGSM (Momentum Iterative Fast Gradient Sign Method), against the object detection algorithm YOLOv5 (You Only Look Once) and measure detection performance in terms of mAP (mean Average Precision). In particular, we present a method that applies morphology operations, removing noise and extracting boundaries, so that YOLO can detect objects normally. Experimental analysis shows that under adversarial attack YOLO's mAP drops by at least 7.9%, while YOLO with the proposed method preserves detection performance of up to 87.3% mAP.
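
The morphology-based pre-filtering step could look roughly like the OpenCV sketch below: an opening to suppress small high-frequency noise and a morphological gradient to extract boundaries. The kernel size and the exact operations are illustrative assumptions, not the paper's configuration.

```python
import cv2
import numpy as np

def morphological_preprocess(image_bgr, kernel_size=3):
    """Denoise a frame before it reaches the detector (e.g., YOLOv5).
    Opening (erosion then dilation) removes small perturbation speckle;
    the morphological gradient (dilation minus erosion) extracts boundaries."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    denoised = cv2.morphologyEx(image_bgr, cv2.MORPH_OPEN, kernel)
    boundary = cv2.morphologyEx(denoised, cv2.MORPH_GRADIENT, kernel)
    return denoised, boundary
```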

AI Security Vulnerabilities in Fully Unmanned Stores: Adversarial Patch Attacks on Object Detection Model & Analysis of the Defense Effectiveness of Data Augmentation (완전 무인 매장의 AI 보안 취약점: 객체 검출 모델에 대한 Adversarial Patch 공격 및 Data Augmentation의 방어 효과성 분석)

  • Won-ho Lee; Hyun-sik Na; So-hee Park; Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology, v.34 no.2, pp.245-261, 2024
  • The COVID-19 pandemic has led to the widespread adoption of contactless transactions and a noticeable rise in fully unmanned stores, in which all operational processes are automated, primarily with artificial intelligence (AI) technology. However, AI technology has several security vulnerabilities, which can be critical in a fully unmanned store. This paper analyzes the security vulnerabilities that AI-based fully unmanned stores may face, focusing on the object detection model YOLO, and demonstrates that Hiding Attacks and Altering Attacks using adversarial patches are possible: objects with adversarial patches attached may go undetected by the model or be misrecognized as other objects. The paper further analyzes how Data Augmentation techniques can mitigate these threats by providing a defensive effect against adversarial patch attacks. Based on these results, we emphasize the need for proactive research into defenses against the security threats inherent in the AI technology used in fully unmanned stores.
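
A patch-aware data augmentation of the kind whose defensive effect the paper analyzes can be sketched as pasting random-noise squares onto training images, so the model learns to tolerate local patch-like occluders; the patch size, placement, and noise distribution below are illustrative assumptions.

```python
import torch

def random_patch_augment(images, patch_frac=0.2):
    """Paste one random-noise square per image; images: (N, C, H, W) in [0, 1]."""
    n, c, h, w = images.shape
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    out = images.clone()
    for i in range(n):
        top = torch.randint(0, h - ph + 1, (1,)).item()
        left = torch.randint(0, w - pw + 1, (1,)).item()
        # Overwrite the region with uniform noise, mimicking an arbitrary patch.
        out[i, :, top:top + ph, left:left + pw] = torch.rand(c, ph, pw)
    return out
```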

Perceptual Ad-Blocker Design For Adversarial Attack (적대적 공격에 견고한 Perceptual Ad-Blocker 기법)

  • Kim, Min-jae; Kim, Bo-min; Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.5, pp.871-879, 2020
  • Perceptual Ad-Blocking is a new advertisement-blocking technique that detects online advertising with an AI-based ad-image classification model. A recent study showed that such Perceptual Ad-Blocking models are vulnerable to adversarial attacks that add noise to images so that they are misclassified. In this paper, we show that the existing Perceptual Ad-Blocking technique is weak against several adversarial examples, and that Defense-GAN and MagNet, which performed well on the MNIST and CIFAR-10 datasets, also work well on an advertising dataset. Using Defense-GAN and MagNet, we then present a new advertising image classification model that is robust against adversarial attacks. In experiments with various existing adversarial attack techniques, the proposed approach preserved accuracy and performance through robust image classification and, furthermore, defended to a certain level against white-box attacks by adversaries who knew the details of the defense.
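
Of the two defenses applied here, MagNet's detector component has a particularly compact form: flag inputs whose reconstruction error under an autoencoder trained on clean data is large. The sketch below assumes a fitted autoencoder and a threshold calibrated on clean ad images; MagNet's reformer would additionally feed the reconstruction, rather than the raw input, to the classifier.

```python
import torch

def magnet_detect(autoencoder, x, threshold):
    """Return True for inputs that look adversarial: clean images near the
    training manifold reconstruct well, off-manifold ones do not."""
    with torch.no_grad():
        recon = autoencoder(x)
        err = ((recon - x) ** 2).flatten(1).mean(dim=1)  # per-example MSE
    return err > threshold
```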

Query-Efficient Black-Box Adversarial Attack Methods on Face Recognition Model (얼굴 인식 모델에 대한 질의 효율적인 블랙박스 적대적 공격 방법)

  • Seo, Seong-gwan; Son, Baehoon; Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.6, pp.1081-1090, 2022
  • Face recognition models are used for identity verification on smartphones, providing convenience to many users, so security review of such DNN models is becoming important, and adversarial attacks are a well-known vulnerability of DNNs. Adversarial attacks have evolved into decision-based techniques that perform the attack using only the recognition results of the deep learning model. However, the existing decision-based attack technique [14] requires a large number of queries to generate adversarial examples; in particular, approximating the gradient consumes many queries. In this paper, we therefore propose a method that generates adversarial examples using orthogonal-space sampling and dimensionality-reduction sampling, avoiding the queries the existing technique [14] wastes on gradient approximation. Experiments show that our method reduces the perturbation size of adversarial examples by about 2.4 and raises the attack success rate by 14% compared to the existing technique [14], demonstrating superior attack performance.
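
The dimensionality-reduction idea can be sketched as drawing candidate perturbations in a downscaled space and upsampling them, so each query explores a lower-dimensional search space. The hard-label loop below is a generic decision-based step, not the paper's exact algorithm; the scale factor, step size, and contraction rule are assumptions.

```python
import torch
import torch.nn.functional as F

def lowdim_perturbation(shape, factor=4):
    """Sample noise at reduced resolution and upsample to image size."""
    c, h, w = shape
    low = torch.randn(1, c, h // factor, w // factor)
    return F.interpolate(low, size=(h, w), mode='bilinear',
                         align_corners=False)[0]

def decision_attack_step(is_adversarial, x_adv, x_orig, step=0.01):
    """One hard-label step: keep a proposed perturbation only if the model's
    top-1 decision stays wrong, then contract toward the clean image."""
    candidate = (x_adv + step * lowdim_perturbation(x_adv.shape)).clamp(0, 1)
    if is_adversarial(candidate):
        x_adv = candidate
    mid = 0.5 * (x_adv + x_orig)  # shrink the perturbation when possible
    return mid if is_adversarial(mid) else x_adv
```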

Study on Neuron Activities for Adversarial Examples in Convolutional Neural Network Model by Population Sparseness Index (개체군 희소성 인덱스에 의한 컨벌루션 신경망 모델의 적대적 예제에 대한 뉴런 활동에 관한 연구)

  • Youngseok Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.16 no.1, pp.1-7, 2023
  • In the image processing area, convolutional neural networks have already been applied to various tasks beyond human visual processing capabilities. However, they face a severe risk of performance degradation from adversarial attacks, and defense techniques that are effective against one attack tend to be vulnerable to other types of attack. Responding to adversarial attacks therefore requires analyzing how an attack degrades performance through the internal processing of the convolutional neural network. In this study, adversarial attacks on the AlexNet and VGG11 models were analyzed using the population sparseness index, a neurophysiological measure of neuronal activity. The study observed, layer by layer, that the population sparseness index for adversarial examples differs from that for benign examples.
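
A standard estimator of population sparseness is the Treves-Rolls index, S = (mean r)^2 / mean(r^2), computed over the non-negative activations of one layer for one stimulus; assuming the paper uses this common form, it could be computed as follows (the epsilon guard is an implementation detail).

```python
import numpy as np

def population_sparseness(activations):
    """Treves-Rolls population sparseness of one layer's response.
    Values near 1/N mean a few highly active neurons (sparse code);
    values near 1 mean most neurons are similarly active (dense code)."""
    r = np.asarray(activations, dtype=float).ravel()  # e.g., post-ReLU outputs
    return (r.mean() ** 2) / (np.mean(r ** 2) + 1e-12)
```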

Adversarial Wall: Physical Adversarial Attack on Cityscape Pretrained Segmentation Model (도시 환경에서의 이미지 분할 모델 대상 적대적 물리 공격 기법)

  • Suryanto, Naufal; Larasati, Harashta Tatimma; Kim, Yongsu; Kim, Howon
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.402-404, 2022
  • Recent research has shown that deep learning models are vulnerable to adversarial attacks not only in the digital but also in the physical domain, which is critical for applications with very high safety requirements such as self-driving cars. In this study, we propose a physical adversarial attack technique for one of the common tasks in self-driving cars, segmentation of the urban scene. Our method creates a texture on a wall so that the wall is misclassified as road. A demonstration of the technique on a state-of-the-art Cityscapes-pretrained model shows a fairly high success rate, which should raise awareness of further potential attacks on self-driving cars.
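
The core optimization, pushing the pixels under a wall mask toward the "road" class of a segmentation model, might be sketched as below. All names are placeholders; a real physical attack would also need expectation-over-transformation and printability constraints, which are omitted here.

```python
import torch

def optimize_wall_texture(seg_model, scene, wall_mask, road_idx,
                          steps=200, lr=0.01):
    """scene: (1, 3, H, W) in [0, 1]; wall_mask: (1, 1, H, W) of {0, 1};
    seg_model returns per-pixel class logits of shape (1, K, H, W)."""
    texture = scene.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([texture], lr=lr)
    for _ in range(steps):
        x = torch.where(wall_mask.bool(), texture, scene)  # texture on wall only
        logits = seg_model(x)
        # Maximize the mean 'road' logit inside the masked wall region.
        loss = -(logits[:, road_idx] * wall_mask[:, 0]).sum() / wall_mask.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            texture.clamp_(0, 1)
    return torch.where(wall_mask.bool(), texture, scene).detach()
```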

GAN Based Adversarial CAN Frame Generation Method for Physical Attack Evading Intrusion Detection System (Intrusion Detection System을 회피하고 Physical Attack을 하기 위한 GAN 기반 적대적 CAN 프레임 생성방법)

  • Kim, Dowan; Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology, v.31 no.6, pp.1279-1290, 2021
  • As vehicle technology has advanced, autonomous driving that does not require driver intervention has developed, and accordingly the security of CAN, the in-vehicle network, has also become important. CAN is vulnerable to hacking attacks, and machine-learning-based IDSs have been introduced to detect them; however, despite its high accuracy, machine learning has shown vulnerability to adversarial examples. In this paper, we propose an adversarial CAN frame generation method that evades the IDS by adding noise to features, with feature selection and re-packeting so the frame can be used for a physical attack on the vehicle. Through experiments, we check how well the adversarial CAN frames evade the IDS in each case: frames generated by modulating all features, by modulation after feature selection, and by preprocessing after re-packeting.
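
The generator's role in such an attack can be sketched as emitting small perturbations on a selected subset of IDS input features, leaving the rest untouched so the frame can still be re-packed into a valid CAN message. The layer sizes, mask semantics, and bounded Tanh output below are assumptions; the IDS-feedback (discriminator) side of the GAN training loop is omitted.

```python
import torch
import torch.nn as nn

class CanFrameGenerator(nn.Module):
    """Map a benign CAN-frame feature vector to an evasive variant by adding
    a masked, bounded perturbation (mask = 1 where a feature may change)."""
    def __init__(self, n_features, mask, scale=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_features), nn.Tanh())
        self.scale = scale
        self.register_buffer("mask", mask)

    def forward(self, frame_features):
        # Perturb only selected features so physical meaning is preserved.
        return frame_features + self.scale * self.mask * self.net(frame_features)
```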

Adversarial Example Detection Based on Symbolic Representation of Image (이미지의 Symbolic Representation 기반 적대적 예제 탐지 방법)

  • Park, Sohee; Kim, Seungjoo; Yoon, Hayeon; Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.5, pp.975-986, 2022
  • Deep learning attracts great attention for its excellent performance in image processing, but it is vulnerable to adversarial attacks that cause a model to misclassify through perturbation of the input data. Adversarial examples carry minimal perturbation that is difficult to identify, so the visual features of the image are generally unchanged; unlike deep learning models, people are not fooled by adversarial examples, because they classify images based on those visual features. This paper proposes an adversarial attack detection method using Symbolic Representation, that is, visual and symbolic features of an image such as color and shape. We detect adversarial examples by comparing the Symbolic Representation implied by the model's classification result with the Symbolic Representation extracted from the input image itself. In measurements on adversarial examples produced by various attack methods, detection rates varied with the attack target and method but reached up to 99.02% for a specific targeted attack.
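
A minimal version of the detection idea, checking whether the symbolic features implied by the predicted class agree with features extracted from the pixels, might look like the following. The class-to-color table, the crude color summary, and the threshold are hypothetical stand-ins for the paper's richer symbol set (color, shape, and so on).

```python
import numpy as np

# Hypothetical symbolic feature per class: a prototypical RGB color.
CLASS_COLOR = {"fire_truck": np.array([200.0, 30.0, 30.0]),
               "school_bus": np.array([230.0, 180.0, 30.0])}

def detect_by_symbols(image_rgb, predicted_class, tol=80.0):
    """Return True (likely adversarial) when the image's dominant color
    disagrees with the color implied by the model's prediction."""
    dominant = image_rgb.reshape(-1, 3).mean(axis=0)  # crude color summary
    mismatch = np.linalg.norm(dominant - CLASS_COLOR[predicted_class])
    return mismatch > tol
```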

Adversarial Machine Learning: A Survey on the Influence Axis

  • Alzahrani, Shahad; Almalki, Taghreed; Alsuwat, Hatim; Alsuwat, Emad
    • International Journal of Computer Science & Network Security, v.22 no.5, pp.193-203, 2022
  • With the everyday use of artificial intelligence systems and applications, machine learning technologies have come to offer exceptional capabilities and distinguished performance in many areas. However, these applications and systems are vulnerable to adversaries who can induce wrong classifications by introducing distorted samples; in particular, adversarial examples crafted during the training and test phases can severely degrade the performance of machine learning models. This paper provides a comprehensive review of recent research on adversarial machine learning, examining only techniques released between 2018 and 2021. Diverse system models are investigated and discussed with respect to the types of attacks, together with possible security suggestions for these attacks, to highlight the risks of adversarial machine learning.