Title/Summary/Keyword: Adversarial Detection

HiGANCNN: A Hybrid Generative Adversarial Network and Convolutional Neural Network for Glaucoma Detection

  • Alsulami, Fairouz;Alseleahbi, Hind;Alsaedi, Rawan;Almaghdawi, Rasha;Alafif, Tarik;Ikram, Mohammad;Zong, Weiwei;Alzahrani, Yahya;Bawazeer, Ahmed
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.23-30 / 2022
  • Glaucoma is a chronic neuropathy of the optic nerve that can lead to blindness. Detecting and predicting glaucoma has become possible with deep neural networks; however, detection performance relies on the availability of a large amount of data. We therefore propose several frameworks, including a hybrid of a generative adversarial network and a convolutional neural network, to automate and improve glaucoma detection. The proposed frameworks are evaluated using five public glaucoma datasets. The framework that uses a Deep Convolutional Generative Adversarial Network (DCGAN) and a pre-trained DenseNet model achieves classification accuracies of 99.6%, 99.08%, 99.4%, 98.69%, and 92.95% on the RIMONE, Drishti-GS, ACRIMA, ORIGA-light, and HRF datasets, respectively. Based on the experimental results and evaluation, the proposed framework competes closely with state-of-the-art methods on the five public glaucoma datasets without requiring any manual preprocessing step.
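
As a rough illustration of the framework described above, the sketch below pairs a DCGAN-style generator with a pre-trained DenseNet classifier in PyTorch. The generator architecture, the 32x32 image size, the batch size, and the random labels for synthetic samples are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative DCGAN-style generator: latent vector -> 32x32 RGB image.
class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                                 # 32x32
        )

    def forward(self, z):
        return self.net(z)

# Pre-trained DenseNet with a binary head (glaucoma vs. normal).
densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
densenet.classifier = nn.Linear(densenet.classifier.in_features, 2)

# One augmented training step: mix a real fundus batch with GAN samples.
G = Generator()
real_x = torch.rand(16, 3, 32, 32)                      # placeholder for real images
real_y = torch.randint(0, 2, (16,))
with torch.no_grad():
    fake_x = G(torch.randn(16, 100, 1, 1)) * 0.5 + 0.5  # rescale Tanh output to [0, 1]
fake_y = torch.randint(0, 2, (16,))                     # a class-conditional GAN would supply these
logits = densenet(torch.cat([real_x, fake_x]))
loss = nn.CrossEntropyLoss()(logits, torch.cat([real_y, fake_y]))
```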

Detecting Adversarial Examples Using Edge-based Classification

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.67-76 / 2023
  • Although deep learning models are making innovative achievements in the field of computer vision, the problem of their vulnerability to adversarial examples continues to be raised. Adversarial examples are an attack method that injects subtle noise into images to induce misclassification, and they can pose a serious threat to the application of deep learning models in the real world. In this paper, we propose a model that detects adversarial examples using the difference in predictions between an edge-learned classification model and the underlying classification model. The simple process of extracting object edges and reflecting them in training can increase the robustness of the classification model, and detecting adversarial examples through the difference in predictions between models enables economical and efficient detection. In our experiments, the baseline model showed accuracies of {49.9%, 29.84%, 18.46%, 4.95%, 3.36%} on adversarial examples (eps = {0.02, 0.05, 0.1, 0.2, 0.3}), whereas the Canny edge model showed accuracies of {82.58%, 65.96%, 46.71%, 24.94%, 13.41%}, and the other edge models showed similar accuracy, indicating that the edge models were more robust against adversarial examples. In addition, adversarial example detection using the difference in predictions between models achieved detection rates of {85.47%, 84.64%, 91.44%, 95.47%, 87.61%} for the adversarial examples at each epsilon. This study is expected to contribute to improving the reliability of deep learning models in related research and in application industries such as medicine, autonomous driving, security, and national defense.
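
A minimal sketch of the detection idea, assuming two already-trained PyTorch classifiers: `base_model` takes RGB images and `edge_model` takes single-channel Canny edge maps. The L1 distance between softmax outputs, the Canny thresholds, and the detection threshold are assumptions; the paper's exact scoring rule may differ.

```python
import cv2
import torch

def detect_adversarial(base_model, edge_model, img_rgb_uint8, threshold=0.5):
    """Flag an input when the base classifier and the edge-trained
    classifier disagree by more than a threshold."""
    # Base-model input: HxWx3 uint8 -> 1x3xHxW float in [0, 1].
    x = torch.from_numpy(img_rgb_uint8).float().div(255).permute(2, 0, 1).unsqueeze(0)
    # Edge-model input: Canny edge map (thresholds 100/200 are illustrative).
    gray = cv2.cvtColor(img_rgb_uint8, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    e = torch.from_numpy(edges).float().div(255).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        p_base = torch.softmax(base_model(x), dim=1)
        p_edge = torch.softmax(edge_model(e), dim=1)
    score = (p_base - p_edge).abs().sum().item()  # disagreement between the models
    return score > threshold
```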

Anomaly detection in particulate matter sensor using hypothesis pruning generative adversarial network

  • Park, YeongHyeon;Park, Won Seok;Kim, Yeong Beom
    • ETRI Journal / v.43 no.3 / pp.511-523 / 2021
  • The World Health Organization provides guidelines for managing the particulate matter (PM) level because a higher PM level is a threat to human health. Managing the PM level first requires a procedure for measuring it. We use a PM sensor that collects the PM level by the laser-based light scattering (LLS) method because it is more cost-effective than a beta attenuation monitor-based or tapered element oscillating microbalance-based sensor. However, an LLS-based sensor has a higher probability of malfunctioning than the higher-cost sensors. In this paper, we regard overall malfunctions, including the collection of abnormal values or missing data, as anomalies, and we aim to detect these anomalies for the maintenance of PM measuring sensors. To this end, we propose a novel architecture that we call the hypothesis pruning generative adversarial network (HP-GAN). Through comparative experiments, we achieve AUROC and AUPRC values of 0.948 and 0.967, respectively, in detecting anomalies in LLS-based PM measuring sensors. We conclude that our HP-GAN is a cutting-edge model for anomaly detection.
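
The paper's headline metrics can be reproduced mechanically once per-reading anomaly scores exist. The sketch below uses synthetic scores purely to show how AUROC and AUPRC are computed with scikit-learn; it does not reproduce HP-GAN's scoring rule.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Placeholder anomaly scores, standing in for a GAN-based score
# (e.g. reconstruction error) over PM-sensor readings.
rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(0.2, 0.10, 900),   # normal readings: low scores
    rng.normal(0.7, 0.15, 100),   # malfunctions: high scores
])
labels = np.concatenate([np.zeros(900), np.ones(100)])

print("AUROC:", roc_auc_score(labels, scores))            # ranking quality, threshold-free
print("AUPRC:", average_precision_score(labels, scores))  # emphasizes the rare anomaly class
```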

GAN Based Adversarial CAN Frame Generation Method for Physical Attack Evading Intrusion Detection System

  • Kim, Dowan;Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1279-1290 / 2021
  • As vehicle technology has advanced, autonomous driving that does not require driver intervention has developed, and the security of the CAN, the in-vehicle network, has become correspondingly important. CAN is vulnerable to hacking attacks, and machine learning-based intrusion detection systems (IDS) have been introduced to detect them. However, despite their high accuracy, machine learning models are vulnerable to adversarial examples. In this paper, we propose an adversarial CAN frame generation method that evades the IDS by adding noise to features, applying feature selection, and re-packeting the frame so that a physical attack on the vehicle remains possible. Through experiments, we check how well the adversarial CAN frames evade the IDS in each case: frames generated by modulating all features, by modulation after feature selection, and by preprocessing after re-packeting.
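
A sketch of the noise-plus-feature-selection step, under the assumption that CAN payload bytes are normalized to [0, 1] features; the indices, bounds, and the helper name `perturb_can_features` are hypothetical, and the re-packeting and IDS components are not shown.

```python
import numpy as np

def perturb_can_features(features, selected_idx, eps=0.05, rng=None):
    """Add bounded noise only to the selected feature positions; fields that
    must stay valid after re-packeting (e.g. ID, DLC) are left untouched."""
    rng = rng or np.random.default_rng()
    adv = features.astype(float).copy()
    adv[selected_idx] += rng.uniform(-eps, eps, size=len(selected_idx))
    return np.clip(adv, 0.0, 1.0)

# Example: an 8-byte payload normalized to [0, 1]; only bytes 2-5 are modulated.
frame = np.array([0.10, 0.90, 0.30, 0.50, 0.20, 0.70, 0.00, 1.00])
adv_frame = perturb_can_features(frame, selected_idx=[2, 3, 4, 5])
```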

Adversarial-Mixup: Increasing Robustness to Out-of-Distribution Data and Reliability of Inference

  • Gwon, Kyungpil;Yo, Joonhyuk
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.1 / pp.1-8 / 2021
  • Detecting out-of-distribution (OOD) data is fundamentally required when deep neural networks (DNNs) are applied to real-world AI such as autonomous driving. However, modern DNNs are quite vulnerable to over-confidence even when the test data are far from the training distribution. To solve this problem, this paper proposes a novel Adversarial-Mixup training method that makes the DNN model more robust by detecting OOD data effectively. Experimental results show that the proposed Adversarial-Mixup method improves the overall performance of OOD detection by 78% compared with state-of-the-art methods. Furthermore, we show that the proposed method alleviates the over-confidence problem by reducing the confidence scores assigned to OOD data further than previous methods do, resulting in more reliable and robust DNNs.
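
For orientation, the sketch below shows plain mixup training in PyTorch. The paper's Adversarial-Mixup additionally involves adversarial perturbations, which are omitted here, so this is only the mixup half under assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def mixup_batch(x, y, num_classes, alpha=0.4):
    """Blend each sample with a randomly paired one; labels blend the same way."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_soft = F.one_hot(y, num_classes).float()
    y_mix = lam * y_soft + (1 - lam) * y_soft[perm]
    return x_mix, y_mix

# Usage in a training step (model is any classifier returning logits):
x, y = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))
x_mix, y_mix = mixup_batch(x, y, num_classes=10)
# loss = -(y_mix * F.log_softmax(model(x_mix), dim=1)).sum(dim=1).mean()
```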

AI Security Vulnerabilities in Fully Unmanned Stores: Adversarial Patch Attacks on Object Detection Model & Analysis of the Defense Effectiveness of Data Augmentation

  • Won-ho Lee;Hyun-sik Na;So-hee Park;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.2 / pp.245-261 / 2024
  • The COVID-19 pandemic has led to the widespread adoption of contactless transactions, resulting in a noticeable increase in the trend towards fully unmanned stores. In such stores, all operational processes are automated, primarily using artificial intelligence (AI) technology. However, this AI technology has several security vulnerabilities, which can be critical in the environment of a fully unmanned store. This paper analyzes the security vulnerabilities that AI-based fully unmanned stores may face, focusing particularly on the object detection model YOLO, and demonstrates that Hiding and Altering attacks using adversarial patches are possible. We confirm that objects with adversarial patches attached may not be recognized by the detection model or may be misrecognized as other objects. Furthermore, the paper analyzes how data augmentation techniques can mitigate these security threats by providing a defensive effect against adversarial patch attacks. Based on these results, we emphasize the need for proactive research into defensive measures against the inherent security threats in the AI technology used in fully unmanned stores.
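
The overlay step of a patch attack is simple to state in code. The sketch below only pastes a (here randomly initialized) patch onto an image tensor; actually optimizing the patch against YOLO's loss, and the data-augmentation defenses, are beyond its scope, and the sizes and position are assumptions.

```python
import torch

def apply_patch(image, patch, top, left):
    """Paste a patch onto a CxHxW image tensor at the given position."""
    out = image.clone()
    _, ph, pw = patch.shape
    out[:, top:top + ph, left:left + pw] = patch.clamp(0, 1)
    return out

image = torch.rand(3, 416, 416)  # a typical YOLO input resolution
patch = torch.rand(3, 64, 64)    # in a real attack these pixels are optimized
attacked = apply_patch(image, patch, top=176, left=176)
```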

Detecting Malicious Social Robots with Generative Adversarial Networks

  • Wu, Bin;Liu, Le;Dai, Zhengge;Wang, Xiujuan;Zheng, Kangfeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5594-5615 / 2019
  • Malicious social robots, which disseminate malicious information on social networks, seriously affect information security and network environments. Their detection is a hot topic and a significant concern for researchers. Classification-based methods have been widely used for social robot detection. However, classification is limited by imbalanced data sets in which legitimate accounts (negative samples) outnumber malicious robots (positive samples), which leads to unsatisfactory detection results. This paper proposes using generative adversarial networks (GANs) to extend the imbalanced data sets before training classifiers to improve the detection of social robots. Five popular oversampling algorithms were compared in the experiments, and the effects of the imbalance degree and the expansion ratio of the original data on oversampling were studied. The experimental results showed that the proposed method achieved better detection performance than the other algorithms in terms of the F1 measure. The GAN method also performed well when the imbalance degree was smaller than 15%.
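
A minimal sketch of the oversampling step, with a stand-in `generator` function in place of a trained GAN; the feature dimensions and class ratio below are invented for illustration.

```python
import numpy as np

def balance_with_generator(X, y, generator, minority_label=1):
    """Expand the minority (malicious-robot) class with generated samples
    until both classes are equally represented."""
    n_needed = int(np.sum(y != minority_label) - np.sum(y == minority_label))
    if n_needed <= 0:
        return X, y
    X_new = generator(n_needed)                # a trained GAN in the paper's setting
    y_new = np.full(n_needed, minority_label)
    return np.vstack([X, X_new]), np.concatenate([y, y_new])

# Toy usage: 950 legitimate accounts vs. 50 malicious robots, 20 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = np.concatenate([np.zeros(950), np.ones(50)])
stand_in_gan = lambda n: rng.normal(loc=1.0, size=(n, 20))  # placeholder generator
X_bal, y_bal = balance_with_generator(X, y, stand_in_gan)
```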

A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models

  • Si-on Jeong;Tae-hyun Han;Seung-bum Lim;Tae-jin Lee
    • Journal of Internet Computing and Services / v.24 no.4 / pp.25-36 / 2023
  • Today, as AI (Artificial Intelligence) technology is introduced in various fields, including security, its development is accelerating. At the same time, attack techniques that cleverly bypass malicious-behavior detection are also evolving. In the classification process of AI models, adversarial attacks have emerged that induce misclassification and reduce reliability through fine adjustments of input values. Future attacks are likely to be not entirely new attacks but, like adversarial attacks, methods of evading detection systems by slightly modifying existing attacks, so developing a robust model that can respond to these malware variants is necessary. In this paper, we propose two efficient adversarial attack generation techniques for improving the robustness of AI models: an XAI-based attack that uses explainable-AI techniques, and a reference-based attack that searches the model's decision boundary. We then build a classification model on a malicious-code dataset and compare performance with PGD, one of the existing adversarial attacks. In terms of generation speed, the XAI-based attack and the reference-based attack take 0.35 and 0.47 seconds, respectively, whereas the existing PGD attack takes 20 minutes; in addition, the reference-based attack achieves a generation rate of 97.7%, higher than PGD's 75.5%. The proposed techniques therefore enable more efficient adversarial attacks and are expected to contribute to future research on building robust AI models.
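
The PGD baseline the paper compares against is standard enough to sketch. The hyperparameters below are illustrative, and a malware-feature setting would additionally need domain constraints to keep perturbed samples valid.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.01, steps=40):
    """Projected Gradient Descent in the L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back into the ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```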

Security Vulnerability Verification for Open Deep Learning Libraries (공개 딥러닝 라이브러리에 대한 보안 취약성 검증)

  • Jeong, JaeHan;Shon, Taeshik
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.1 / pp.117-125 / 2019
  • Deep learning, which has recently been applied in various fields, is threatened by adversarial attacks. In this paper, we experimentally verify that adversarial samples generated by malicious attackers lower the classification accuracy of image classification models. Using the MNIST dataset, we measured classification accuracy after injecting adversarial samples into an autoencoder-based classifier and a CNN (convolutional neural network) classifier, built with the TensorFlow and PyTorch libraries. The adversarial samples were generated by transforming the MNIST test set with JSMA (Jacobian-based Saliency Map Attack) and FGSM (Fast Gradient Sign Method). When they were injected into the classification models, accuracy decreased by at least 21.82% and by as much as 39.08%.
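
For reference, FGSM, one of the two generation methods used, is a single gradient step. The sketch assumes a PyTorch classifier and an eps chosen for MNIST's [0, 1] pixel range; JSMA is more involved and is not shown.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.25):
    """Fast Gradient Sign Method: one step of size eps along the sign
    of the loss gradient with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```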

A Study on the Efficacy of Edge-Based Adversarial Example Detection Model: Across Various Adversarial Algorithms

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.31-41 / 2024
  • Deep learning models show excellent performance in computer vision tasks such as image classification and object detection and are used in various ways at actual industrial sites. Recently, it has been pointed out that these models are vulnerable to adversarial examples, and research on improving their robustness has been actively conducted. An adversarial example is an image to which small noise has been added to induce misclassification, and it can pose a significant threat when a deep learning model is deployed in a real environment. In this paper, we examine the robustness of edge-learned classification models, and the performance of an adversarial example detection model based on them, against adversarial examples from various algorithms. In the robustness experiments, the basic classification model showed about 17% accuracy on the FGSM algorithm, while the edge-learned models maintained accuracy in the 60-70% range; on the PGD/DeepFool/CW algorithms, the basic model showed accuracy in the 0-1% range, while the edge-learned models maintained 80-90%. In the adversarial example detection experiment, a high detection rate of 91-95% was confirmed for all of the FGSM/PGD/DeepFool/CW algorithms. By demonstrating the possibility of defending against various adversarial algorithms, this study is expected to improve the safety and reliability of deep learning models in the many industries that use computer vision.
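
Given per-input disagreement scores like those sketched for the earlier edge-based detection paper, the reported detection rates reduce to a threshold sweep. The sketch below is one assumed way to compute them; the toy scores are invented.

```python
import numpy as np

def detection_rates(scores_adv, scores_clean, threshold):
    """Detection rate on adversarial inputs and false-positive rate on
    clean inputs for a given disagreement-score threshold."""
    tpr = float(np.mean(np.asarray(scores_adv) > threshold))
    fpr = float(np.mean(np.asarray(scores_clean) > threshold))
    return tpr, fpr

# Toy scores: adversarial inputs tend to produce larger disagreement.
rng = np.random.default_rng(0)
tpr, fpr = detection_rates(rng.normal(0.8, 0.2, 500), rng.normal(0.2, 0.2, 500), 0.5)
```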