• Title/Abstract/Keyword: adversarial examples


Korean Paraphrase Sentence Recognition Model Robust Against Adversarial Examples (적대적 예제에 강건한 한국어 패러프레이즈 문장 인식 모델)

  • Kim, Minho;Hur, Jeong;Kim, Hyun;Lim, Joonho
    • Annual Conference on Human and Language Technology / The 32nd Annual Conference on Human and Language Technology (SIG on Language Engineering, KIISE), 2020 / pp.453-454 / 2020
  • This study addresses Korean paraphrase sentence recognition that is robust against adversarial examples. Google has released the PAWS-X multilingual corpus, which contains adversarial examples, providing a starting point for handling adversarial examples in Korean as well. The adversarial examples in PAWS-X are dominated by the entity-swap type. Through a series of experiments, we examine whether this corpus alone suffices to build recognition models for adversarial example types other than entity swapping, whether such models also carry over to recognizing various types of real paraphrase sentences, and whether additional types of paraphrase data are needed for training.
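
A minimal sketch of how such a paraphrase recognition model could be fine-tuned on the Korean split of PAWS-X, assuming the Hugging Face datasets and transformers libraries; the bert-base-multilingual-cased checkpoint and all hyperparameters are illustrative choices, not those used in the paper.

```python
# Sketch: fine-tune a multilingual encoder on the Korean split of PAWS-X
# for binary paraphrase classification. Hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

dataset = load_dataset("paws-x", "ko")          # fields: sentence1, sentence2, label
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def encode(batch):
    # Encode each sentence pair jointly so the model sees both sentences.
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, max_length=128)

dataset = dataset.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

args = TrainingArguments(output_dir="pawsx-ko-paraphrase",
                         per_device_train_batch_size=32,
                         num_train_epochs=3)

Trainer(model=model, args=args,
        train_dataset=dataset["train"],
        eval_dataset=dataset["validation"],
        tokenizer=tokenizer).train()
```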


Towards General Purpose Korean Paraphrase Sentence Recognition Model (범용의 한국어 패러프레이즈 문장 인식 모델을 위한 연구)

  • Kim, Minho;Hur, Jeong;Lim, Joonho
    • Annual Conference on Human and Language Technology / The 33rd Annual Conference on Human and Language Technology (SIG on Language Engineering, KIISE), 2021 / pp.450-452 / 2021
  • This paper presents research toward a general-purpose Korean paraphrase sentence recognition model. One of the biggest obstacles to generality is robustness against adversarial examples, because adversarial examples for paraphrase sentence recognition can neutralize a recognition model trained only on a general-type corpus. A further difficulty is that adversarial examples come in diverse types, and the model must be able to handle all of them. This paper presents a deep neural network model that can decide whether two sentences are paraphrases for both various adversarial example types and general types.


StarGAN-Based Detection and Purification Studies to Defend against Adversarial Attacks (적대적 공격을 방어하기 위한 StarGAN 기반의 탐지 및 정화 연구)

  • Sungjune Park;Gwonsang Ryu;Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / Vol. 33, No. 3 / pp.449-458 / 2023
  • Artificial intelligence provides convenience in various fields using big data and deep learning technologies. However, deep learning is highly vulnerable to adversarial examples, which can cause classification models to misclassify. This study proposes a method to detect and purify various adversarial attacks using StarGAN. The proposed method trains a StarGAN model with an added categorical entropy loss on adversarial examples generated by various attack methods, so that the Discriminator can detect adversarial examples and the Generator can purify them. Experimental results on the CIFAR-10 dataset showed an average detection performance of approximately 68.77%, an average purification performance of approximately 72.20%, and an average defense performance of approximately 93.11%, derived from the restoration and detection performance.
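
A hedged sketch of the detect-then-purify flow described in the abstract, assuming a StarGAN-style generator G and discriminator D that have already been trained as the paper describes; the module interfaces, the sigmoid scoring, and the detection threshold are placeholders.

```python
# Sketch: use a trained StarGAN-style discriminator to flag adversarial
# inputs and the generator to purify them before classification.
# G, D, classifier, and the threshold are placeholders, not the paper's models.
import torch

@torch.no_grad()
def defend(x, G, D, classifier, threshold=0.5):
    """x: batch of images, shape (N, C, H, W)."""
    adv_score = torch.sigmoid(D(x)).view(-1)   # assumed: D scores each image as adversarial/clean
    is_adv = adv_score > threshold             # detection step
    purified = x.clone()
    if is_adv.any():
        # Purification: translate suspected adversarial images back toward the
        # clean domain (a real StarGAN generator would also take a target-domain
        # label; omitted here for brevity).
        purified[is_adv] = G(x[is_adv])
    return classifier(purified), is_adv
```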

Adversarial Example Detection Based on Symbolic Representation of Image (이미지의 Symbolic Representation 기반 적대적 예제 탐지 방법)

  • Park, Sohee;Kim, Seungjoo;Yoon, Hayeon;Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology / Vol. 32, No. 5 / pp.975-986 / 2022
  • Deep learning is attracting great attention and shows excellent performance in image processing, but it is vulnerable to adversarial attacks that cause a model to misclassify through perturbations of the input data. Adversarial examples generated by adversarial attacks are perturbed so minimally that the perturbation is difficult to identify, so the visual features of the images are generally unchanged. Unlike deep learning models, people are not fooled by adversarial examples, because they classify images based on these visual features. This paper proposes an adversarial attack detection method using a Symbolic Representation, that is, visual and symbolic features such as the color and shape of the image. We detect adversarial examples by comparing the Symbolic Representation derived from the classification result for an input image with the Symbolic Representation extracted directly from that image. When measuring performance on adversarial examples produced by various attack methods, detection rates differed depending on the attack target and method, but reached up to 99.02% for a specific targeted attack.
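
A simplified sketch of the comparison idea, assuming a hypothetical lookup table CLASS_SYMBOLS that maps each class to its expected symbolic attributes and a crude dominant-color extractor standing in for the paper's symbolic feature extraction.

```python
# Sketch: flag an input as adversarial when the symbolic attributes implied
# by the predicted class disagree with attributes extracted from the image.
# CLASS_SYMBOLS and the color-only extractor are illustrative placeholders.
import numpy as np

CLASS_SYMBOLS = {            # expected dominant color channel per class (hypothetical)
    "fire_truck": "red",
    "frog": "green",
}

def dominant_color(image):
    """image: HxWx3 uint8 RGB array. Very crude stand-in for symbolic extraction."""
    mean = image.reshape(-1, 3).mean(axis=0)
    names = ["red", "green", "blue"]
    return names[int(np.argmax(mean))]

def is_adversarial(image, predicted_label):
    expected = CLASS_SYMBOLS.get(predicted_label)
    if expected is None:
        return False                  # no symbolic knowledge for this class
    return dominant_color(image) != expected   # mismatch => suspected adversarial
```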

Detecting Adversarial Examples Using Edge-based Classification

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information / Vol. 28, No. 10 / pp.67-76 / 2023
  • Although deep learning models are making innovative achievements in the field of computer vision, their vulnerability to adversarial examples continues to be raised as a problem. Adversarial examples are an attack method that injects fine noise into images to induce misclassification, which can pose a serious threat to the application of deep learning models in the real world. In this paper, we propose a model that detects adversarial examples using differences in predictions between an edge-trained classification model and the underlying classification model. The simple process of extracting object edges and reflecting them in training can increase the robustness of the classification model, and detecting adversarial examples through prediction differences between models allows economical and efficient detection. In our experiments, the general model showed accuracies of {49.9%, 29.84%, 18.46%, 4.95%, 3.36%} on adversarial examples (eps = {0.02, 0.05, 0.1, 0.2, 0.3}), whereas the Canny edge model showed accuracies of {82.58%, 65.96%, 46.71%, 24.94%, 13.41%}, and the other edge models showed similar levels of accuracy, indicating that the edge models were more robust against adversarial examples. In addition, adversarial example detection using prediction differences between models achieved detection rates of {85.47%, 84.64%, 91.44%, 95.47%, 87.61%} for the adversarial examples at each epsilon. This study is expected to contribute to improving the reliability of deep learning models in related research and application industries such as medicine, autonomous driving, security, and national defense.
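
A minimal sketch of the detection rule, assuming OpenCV's Canny detector and two already-trained classifiers (base_model on ordinary images, edge_model on edge maps); the thresholds and preprocessing are illustrative.

```python
# Sketch: flag an input as adversarial when the base model and the
# edge-trained model disagree on the predicted class.
# base_model and edge_model are assumed to be trained PyTorch classifiers.
import cv2
import numpy as np
import torch

def to_edge_tensor(image_bgr):
    """image_bgr: HxWx3 uint8 array -> 1x1xHxW float tensor of Canny edges."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0
    return torch.from_numpy(edges)[None, None]

@torch.no_grad()
def detect(image_bgr, image_tensor, base_model, edge_model):
    base_pred = base_model(image_tensor).argmax(dim=1)
    edge_pred = edge_model(to_edge_tensor(image_bgr)).argmax(dim=1)
    return bool((base_pred != edge_pred).item())   # disagreement => adversarial
```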

Empirical Study on Correlation between Performance and PSI According to Adversarial Attacks for Convolutional Neural Networks (컨벌루션 신경망 모델의 적대적 공격에 따른 성능과 개체군 희소 지표의 상관성에 관한 경험적 연구)

  • Youngseok Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / Vol. 17, No. 2 / pp.113-120 / 2024
  • The population sparseness index (PSI) is used to describe the functioning of the internal layers of artificial neural networks from the perspective of individual neurons, shedding light on the black-box nature of the network's internal operations. Prior research indicates a positive correlation between the PSI and performance in each layer of convolutional neural network models for image classification. In this study, we observed the internal operations of a convolutional neural network when adversarial examples were applied. The experiments revealed a similar pattern of positive correlation for adversarial examples, which were crafted so that the model retained only 5% accuracy relative to benign data. Thus, although individual adversarial attacks may differ, the PSI observed for adversarial examples showed positive correlations across layers consistent with those for benign data.
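
A small sketch of how a per-layer population sparseness value could be computed, using the Treves-Rolls sparseness measure as an assumed stand-in for the PSI used in the paper; the paper's exact definition may differ.

```python
# Sketch: population sparseness of one layer's response to one input,
# using the Treves-Rolls sparseness measure as an assumed proxy for the PSI.
import torch

def population_sparseness(activations: torch.Tensor) -> float:
    """activations: responses of one layer to one input, any shape."""
    r = activations.flatten().clamp(min=0)        # e.g., post-ReLU activations
    n = r.numel()
    numerator = (r.sum() / n) ** 2
    denominator = (r ** 2).sum() / n + 1e-12      # avoid division by zero
    return float(numerator / denominator)

# Usage idea: register a forward hook on each layer of interest and compare
# the per-layer sparseness for benign inputs with that for adversarial inputs.
```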

An Adversarial Attack Type Classification Method Using Linear Discriminant Analysis and k-means Algorithm (선형 판별 분석 및 k-means 알고리즘을 이용한 적대적 공격 유형 분류 방안)

  • Choi, Seok-Hwan;Kim, Hyeong-Geon;Choi, Yoon-Ho
    • Journal of the Korea Institute of Information Security & Cryptology / Vol. 31, No. 6 / pp.1215-1225 / 2021
  • Although artificial intelligence (AI) techniques have shown impressive performance in various fields, they are vulnerable to adversarial examples, which induce misclassification by adding human-imperceptible perturbations to the input. Previous studies defending against adversarial examples can be classified into three categories: (1) model retraining methods; (2) input transformation methods; and (3) adversarial example detection methods. However, even though defense methods against adversarial examples have constantly been proposed, there has been no research on classifying the type of adversarial attack. In this paper, we propose an adversarial attack family classification method based on dimensionality reduction and clustering. Specifically, after extracting the adversarial perturbation from an adversarial example, we perform Linear Discriminant Analysis (LDA) to reduce the dimensionality of the perturbation and apply the k-means algorithm to classify the adversarial attack family. From experimental results on the MNIST and CIFAR-10 datasets, we show that the proposed method can efficiently classify five types of adversarial attacks (FGSM, BIM, PGD, DeepFool, C&W). We also show that the proposed method provides good classification performance even when the legitimate input corresponding to the adversarial example is unknown.
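
A compact sketch of the LDA-plus-k-means pipeline using scikit-learn, assuming the perturbations have already been extracted as flattened (adversarial minus clean) vectors and labeled by attack during a fitting phase; the number of families and other settings are illustrative.

```python
# Sketch: reduce adversarial perturbations with LDA, then cluster the
# reduced vectors with k-means to assign attack-family labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def fit_family_classifier(perturbations, attack_labels, n_families=5):
    """perturbations: (N, D) array; attack_labels: (N,) ints used to fit LDA."""
    lda = LinearDiscriminantAnalysis(n_components=n_families - 1)
    reduced = lda.fit_transform(perturbations, attack_labels)
    kmeans = KMeans(n_clusters=n_families, n_init=10).fit(reduced)
    return lda, kmeans

def predict_family(lda, kmeans, perturbation):
    reduced = lda.transform(np.asarray(perturbation).reshape(1, -1))
    return int(kmeans.predict(reduced)[0])        # cluster id = attack family
```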

Effective Adversarial Training by Adaptive Selection of Loss Function in Federated Learning (연합학습에서의 손실함수의 적응적 선택을 통한 효과적인 적대적 학습)

  • Suchul Lee
    • Journal of Internet Computing and Services / Vol. 25, No. 2 / pp.1-9 / 2024
  • Although federated learning is designed to be safer than centralized methods in terms of security and privacy, it still has many vulnerabilities. An attacker performing an adversarial attack intentionally manipulates the deep learning model by injecting carefully crafted input data, that is, adversarial examples, into a client's training data to induce misclassification. A common defense strategy against this is so-called adversarial training, in which the model learns the characteristics of adversarial examples in advance. Existing research assumes a scenario where all clients are under adversarial attack, but considering that the number of clients in federated learning is very large, this is far from realistic. In this paper, we experimentally examine adversarial training in a scenario where only some of the clients are under attack. Through experiments, we found a trade-off in which classification accuracy on normal samples decreases as classification accuracy on adversarial examples increases. To exploit this trade-off effectively, we present a method that performs adversarial training by adaptively selecting the loss function depending on whether a client is under attack.
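
A schematic sketch of the adaptive selection, assuming each client knows (or estimates) whether it is under attack; the PGD-based adversarial loss is an illustrative choice, not necessarily the set of loss functions compared in the paper.

```python
# Sketch: each federated client picks its local loss adaptively.
# Normal clients use plain cross-entropy; attacked clients use an
# adversarial (PGD-based) cross-entropy. PGD settings are illustrative.
import torch
import torch.nn.functional as F

def pgd_examples(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def client_loss(model, x, y, under_attack: bool):
    if under_attack:
        return F.cross_entropy(model(pgd_examples(model, x, y)), y)
    return F.cross_entropy(model(x), y)   # normal client: standard loss
```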

A Study on the Efficacy of Edge-Based Adversarial Example Detection Model: Across Various Adversarial Algorithms

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information / Vol. 29, No. 2 / pp.31-41 / 2024
  • Deep learning models show excellent performance in tasks such as image classification and object detection in the field of computer vision, and are widely used in real industrial settings. Recently, it has been pointed out that these deep learning models are vulnerable to adversarial examples, and research on improving robustness has been actively conducted. An adversarial example is an image to which small noise has been added to induce misclassification, and it can pose a significant threat when a deep learning model is applied in a real environment. In this paper, we evaluated the robustness of edge-trained classification models, and the performance of an adversarial example detection model built on them, against adversarial examples generated by various algorithms. In the robustness experiments, the basic classification model showed about 17% accuracy under the FGSM algorithm while the edge-trained models maintained accuracy in the 60-70% range, and the basic classification model showed accuracy in the 0-1% range under the PGD, DeepFool, and CW algorithms while the edge-trained models maintained accuracy of 80-90%. In the adversarial example detection experiment, a high detection rate of 91-95% was confirmed for all of the FGSM, PGD, DeepFool, and CW algorithms. By demonstrating defense against various adversarial algorithms, this study is expected to improve the safety and reliability of deep learning models in industries that use computer vision.
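
A brief sketch of how adversarial examples for the four algorithms could be generated for such an evaluation, assuming the torchattacks library; all attack hyperparameters are illustrative.

```python
# Sketch: build FGSM/PGD/DeepFool/CW adversarial examples for a trained
# classifier, to evaluate robustness and detection across algorithms.
# Hyperparameters are illustrative, not those used in the paper.
import torchattacks

def make_attacks(model):
    return {
        "FGSM": torchattacks.FGSM(model, eps=8/255),
        "PGD": torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10),
        "DeepFool": torchattacks.DeepFool(model, steps=50),
        "CW": torchattacks.CW(model, c=1, steps=100),
    }

# Usage: adv_images = make_attacks(model)["PGD"](images, labels)
```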

Study on the White Noise effect Against Adversarial Attack for Deep Learning Model for Image Recognition (영상 인식을 위한 딥러닝 모델의 적대적 공격에 대한 백색 잡음 효과에 관한 연구)

  • Lee, Youngseok;Kim, Jongweon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / Vol. 15, No. 1 / pp.27-35 / 2022
  • In this paper, we propose a white-noise-adding method to prevent misclassification of deep learning systems under adversarial attacks. The proposed method adds white noise to the input image, whether it is benign or an adversarial example. The experimental results show that the proposed method is robust against three adversarial attacks: FGSM, BIM, and CW. The recognition accuracies of ResNet models with 18, 34, 50, and 101 layers are improved when white noise is added to the adversarial test set, while classification of the benign test set is not affected. The proposed method can be applied as a defense against adversarial attacks and can replace time-consuming and expensive defense methods such as adversarial training and model replacement.
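
A minimal sketch of the white-noise defense, assuming additive Gaussian noise on inputs normalized to [0, 1]; the noise level sigma is a placeholder, not the value used in the paper.

```python
# Sketch: add white (Gaussian) noise to every input, benign or adversarial,
# before classification. sigma is a placeholder noise level.
import torch

@torch.no_grad()
def classify_with_white_noise(model, x, sigma=0.05):
    """x: batch of images with values in [0, 1]."""
    noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
    return model(noisy).argmax(dim=1)
```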