• Title/Summary/Keyword: 공격 모델 (attack model)


Query-Efficient Black-Box Adversarial Attack Methods on Face Recognition Model (얼굴 인식 모델에 대한 질의 효율적인 블랙박스 적대적 공격 방법)

  • Seo, Seong-gwan;Son, Baehoon;Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.6
    • /
    • pp.1081-1090
    • /
    • 2022
  • Face recognition models are used for identity verification on smartphones, providing convenience to many users. As a result, security review of DNN models is becoming important, and adversarial attacks are a well-known vulnerability of such models. Adversarial attacks have evolved into decision-based techniques that use only the recognition result of the deep learning model to perform the attack. However, the existing decision-based attack technique [14] requires a large number of queries to generate adversarial examples; in particular, approximating the gradient consumes many queries. In this paper, we therefore propose a method for generating adversarial examples using orthogonal-space sampling and dimensionality-reduction sampling, avoiding the queries that the existing decision-based technique [14] spends on gradient approximation. Experiments show that, compared to the existing attack [14], our method reduces the perturbation size of adversarial examples by a factor of about 2.4 and increases the attack success rate by 14%, demonstrating the superior attack performance of the proposed adversarial example generation method (see the sketch below).
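
The sketch below illustrates one way dimensionality-reduced noise sampling can drive a hard-label (decision-based) attack step without gradient approximation; the `query_label` oracle, the zoom factor, and the step size are assumptions for illustration, not the authors' exact algorithm.

```python
# Minimal sketch (not the paper's algorithm): one step of a hard-label,
# decision-based attack that samples perturbations in a reduced space,
# so no queries are spent on gradient approximation.
import numpy as np
from scipy.ndimage import zoom

def low_dim_noise(shape, factor=4, rng=np.random.default_rng()):
    """Sample noise in a downscaled space and upsample it to image size
    (assumes height/width are divisible by `factor`)."""
    h, w, c = shape
    small = rng.normal(size=(h // factor, w // factor, c))
    return zoom(small, (factor, factor, 1), order=1)

def attack_step(x_adv, x_orig, true_label, query_label, step=0.01):
    """Move the adversarial image toward the original while it stays
    misclassified; `query_label(x)` is a hypothetical hard-label oracle."""
    noise = low_dim_noise(x_adv.shape)
    noise /= np.linalg.norm(noise) + 1e-12
    direction = x_orig - x_adv                      # contraction direction
    candidate = x_adv + step * direction + step * np.linalg.norm(direction) * noise
    candidate = np.clip(candidate, 0.0, 1.0)
    if query_label(candidate) != true_label:        # one query; still adversarial
        return candidate, True                      # accept the step
    return x_adv, False                             # reject, keep previous point
```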

DDoS Attack Detection Scheme based on the System Resource Consumption Rate in Linux Systems (리눅스시스템에서 서비스자원소비율을 이용한 분산서비스거부공격 탐지 기법)

  • Ko, Kwang-Sun;Kang, Yong-Hyeog;Eom, Young-Ik
    • Annual Conference of KIPS
    • /
    • 2003.05c
    • /
    • pp.2041-2044
    • /
    • 2003
  • Among the many intrusions that occur on networks, a denial-of-service (DoS) attack is one in which an attacker sends a large volume of packets to maliciously consume the system and network resources of a target system, so that legitimate users cannot use the services the system provides. Previous work detected DoS attacks by analyzing the packets received by the system and network and generating network session information. However, when an attacker mounts a distributed denial-of-service (DDoS) attack, the session information becomes scattered, so this approach is unsuitable for detecting the intrusion in real time. In this paper, we propose a model that detects DDoS attacks in real time by monitoring the system resource that reacts most sensitively when the system is under a DDoS attack. The proposed model is a network-based anomaly detection model that uses, as audit data, the amount of kernel memory consumed while the system processes packets received from the network (see the monitoring sketch below).
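
A minimal sketch of the monitoring idea, under stated assumptions: it polls kernel slab memory from /proc/meminfo and alerts when the consumption rate exceeds a threshold; the metric choice and threshold are illustrative, not the paper's calibrated values.

```python
# Minimal sketch: poll kernel slab memory from /proc/meminfo and alert when
# its growth rate exceeds a threshold. Metric and threshold are illustrative.
import time

def read_slab_kb(path="/proc/meminfo"):
    """Return the kernel 'Slab:' usage in kB."""
    with open(path) as f:
        for line in f:
            if line.startswith("Slab:"):
                return int(line.split()[1])
    raise RuntimeError("Slab entry not found in /proc/meminfo")

def monitor(interval=1.0, rate_threshold_kb_per_s=5000.0):
    """Flag a possible DDoS when kernel memory is consumed unusually fast."""
    prev = read_slab_kb()
    while True:
        time.sleep(interval)
        cur = read_slab_kb()
        rate = (cur - prev) / interval
        if rate > rate_threshold_kb_per_s:
            print(f"[ALERT] kernel slab memory growing at {rate:.0f} kB/s")
        prev = cur

if __name__ == "__main__":
    monitor()
```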

Adversarial Attack against Deep Learning Based Vulnerability Detection (딥러닝 기반의 코드 취약점 탐지 모델의 적대적 공격)

  • Eun Jung;Hyoungshick Kim
    • Annual Conference of KIPS
    • /
    • 2024.05a
    • /
    • pp.352-353
    • /
    • 2024
  • As a result of efforts to address security vulnerabilities, a fundamental problem in software security, deep learning-based code vulnerability detection models have shown high detection accuracy. However, because deep learning models are sensitive to small perturbations, they are vulnerable to adversarial attacks. We propose an adversarial attack method against deep learning-based code vulnerability detection models (an illustrative perturbation example follows below).
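
Since the abstract does not describe the attack itself, the snippet below only illustrates one common semantics-preserving perturbation used against code models, identifier renaming; it is an assumption, not the authors' method.

```python
# Illustration only: identifier renaming, a semantics-preserving perturbation
# commonly used against code models. The paper's actual method is not
# described in the abstract.
import re

def rename_identifier(source: str, old: str, new: str) -> str:
    """Rename a variable token without changing program behavior."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

snippet = "int buf_len = read(fd, buf, sizeof(buf));"
print(rename_identifier(snippet, "buf_len", "n_read"))
# Same semantics, different token sequence seen by the detection model.
```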

Model Type Inference Attack Using Output of Black-Box AI Model (블랙 박스 모델의 출력값을 이용한 AI 모델 종류 추론 공격)

  • An, Yoonsoo;Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.5
    • /
    • pp.817-826
    • /
    • 2022
  • AI technology is being successfully adopted in many fields, and models deployed as a service are served in a black-box environment that hides the model's internal information in order to protect intellectual property and training data. In a black-box environment, attackers try to steal the data or parameters used during training by analyzing the model's outputs. Noting that no existing attack infers the type of a deep learning model, this paper proposes a method of inferring the model type in order to determine the layer composition of the target model. Using ResNet, VGGNet, AlexNet, and a simple convolutional neural network trained on the MNIST dataset, we show that the model type can be inferred from the output values of each model in both gray-box and black-box settings. Furthermore, when the proposed magnitude-relation feature is also used for training, the model type is inferred with approximately 83% accuracy in the black-box setting, showing that the model type can be inferred even when attackers are given only partial information rather than raw probability vectors (see the meta-classifier sketch below).
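
A minimal sketch of the general idea, not the paper's exact pipeline: probe models of known architectures, collect their output probability vectors, and train a meta-classifier to predict the architecture family of an unknown target from its outputs. The probe set and classifier choice are assumptions.

```python
# Minimal sketch, not the paper's pipeline: build a meta-dataset of output
# probability vectors from models with known architectures, then train a
# meta-classifier that predicts the architecture family of an unknown model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_meta_dataset(models, probe_inputs, family_labels):
    """`models` are callables returning a probability vector per input;
    `family_labels` holds the known architecture family of each model."""
    X, y = [], []
    for model, label in zip(models, family_labels):
        features = np.concatenate([np.asarray(model(x)).ravel() for x in probe_inputs])
        X.append(features)
        y.append(label)
    return np.array(X), np.array(y)

# X, y = build_meta_dataset(shadow_models, probe_inputs, family_labels)
# meta_clf = RandomForestClassifier(n_estimators=200).fit(X, y)
# target_features = np.concatenate([np.asarray(target(x)).ravel() for x in probe_inputs])
# print(meta_clf.predict(target_features.reshape(1, -1)))
```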

Improving the Robustness of Deepfake Detection Models Against Adversarial Attacks (적대적 공격에 따른 딥페이크 탐지 모델 강화)

  • Lee, Sangyeong;Hou, Jong-Uk
    • Annual Conference of KIPS
    • /
    • 2022.11a
    • /
    • pp.724-726
    • /
    • 2022
  • Digital crime based on deepfakes is becoming increasingly sophisticated and is causing major social repercussions. Meanwhile, the emergence of adversarial attacks, which induce errors in deep learning-based models, is increasing the vulnerability of deepfake detection models, and this can have very serious consequences. In this work, we aim to build a model that remains robust under adversarial attacks using two approaches: adversarial training, a model-hardening technique, and image-processing-based defenses, namely resizing and JPEG compression, and we demonstrate the resulting robustness against adversarial attacks (see the preprocessing sketch below).
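
A minimal sketch of the two image-processing defenses named above, resizing and JPEG re-compression, applied to an input frame before detection; the scale factor and JPEG quality are illustrative assumptions.

```python
# Minimal sketch of the two defenses named in the abstract: downscale/upscale
# (resizing) and JPEG re-compression before detection, which tends to wash
# out high-frequency adversarial noise. Scale and quality are assumptions.
import io
from PIL import Image

def preprocess_defense(img: Image.Image, scale: float = 0.5, jpeg_quality: int = 75) -> Image.Image:
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)
    restored = small.resize((w, h), Image.BILINEAR)
    buf = io.BytesIO()
    restored.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# detector_input = preprocess_defense(Image.open("suspect_frame.png"))
```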

A Study on Non-query Based Model Extraction Attacks (쿼리를 사용하지 않는 딥러닝 모델 탈취 공격 연구)

  • Cho, Yungi;Lee, Younghan;Jun, Sohee;Paek, Yunheung
    • Annual Conference of KIPS
    • /
    • 2021.05a
    • /
    • pp.219-222
    • /
    • 2021
  • Artificial intelligence is driving innovation in every field, and at the same time it raises a number of security concerns for AI models. A representative problem is that a malicious user can steal a model developed with substantial human and material resources. When model extraction occurs, it not only causes economic losses but can also expose vulnerabilities of the model itself. Much current research steals a model's decision boundary or its functionality by analyzing the inputs and outputs obtained through queries. However, query-based extraction attacks cannot achieve a complete extraction because the information they can obtain is limited. Accordingly, research is under way on stealing additional information, or even the complete model, through data sniffing or cache side-channel attacks during deep learning model computation. This paper analyzes recent research trends and how they differ from query-based attacks (a sketch of the query-based baseline follows below).
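
For contrast with the non-query approaches surveyed here, the sketch below shows the query-based functionality-stealing baseline the abstract describes: label attacker-chosen inputs with the victim model and train a surrogate to imitate it. The `victim`, `surrogate`, `query_loader`, and `optimizer` objects are assumed placeholders.

```python
# Minimal sketch of the query-based baseline the abstract contrasts with:
# query the victim for soft labels and train a surrogate to imitate it.
import torch
import torch.nn.functional as F

def extract_by_queries(victim, surrogate, query_loader, optimizer, epochs=5):
    surrogate.train()
    for _ in range(epochs):
        for (x,) in query_loader:
            with torch.no_grad():                      # black-box query only
                soft_labels = F.softmax(victim(x), dim=1)
            loss = F.kl_div(F.log_softmax(surrogate(x), dim=1),
                            soft_labels, reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return surrogate
```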

A Study on the Model for Determining the Deceptive Status of Attackers using Markov Chain (Markov Chain을 이용한 기만환경 칩입 공격자의 기만 여부 예측 모델에 대한 연구)

  • Sunmo Yoo;Sungmo Wi;Jonghwa Han;Yonghyoun Kim;Jungsik Cho
    • Convergence Security Journal
    • /
    • v.23 no.2
    • /
    • pp.37-45
    • /
    • 2023
  • Cyber deception technology plays a crucial role in monitoring attacker activity and detecting new types of attacks. However, alongside advances in deception technology, the development of anti-honeypot techniques has allowed attackers who recognize the deceptive environment to either cease their activity or exploit the environment in reverse. Current deception technology cannot identify or respond to such situations. In this study, we propose a predictive model based on Markov chain analysis that determines whether an attacker who has infiltrated a deceptive environment has recognized it. The proposed deception-status determination model is the first attempt of its kind and is expected to overcome the limitation of existing deception-based attacker analysis, which does not account for attackers who identify the deceptive environment. The proposed classification model identified and categorized attackers operating in deceptive environments with a high accuracy of 97.5%. By predicting whether an attacker has recognized the deceptive environment, the model is expected to provide refined data for the many studies that analyze intrusions into deceptive environments (see the Markov-chain sketch below).
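
A minimal sketch of how a Markov-chain classifier of this kind could be built, assuming attacker activity is logged as discrete action sequences; the state set and smoothing value are hypothetical, not the paper's.

```python
# Minimal sketch: estimate one transition matrix per class from logged
# attacker action sequences, then classify a new sequence by likelihood.
# The action states and smoothing value are hypothetical.
import numpy as np

STATES = ["recon", "exploit", "lateral_move", "exfiltrate", "disengage"]
IDX = {s: i for i, s in enumerate(STATES)}

def fit_transition_matrix(sequences, smoothing=1.0):
    counts = np.full((len(STATES), len(STATES)), smoothing)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)   # row-stochastic

def log_likelihood(seq, P):
    return sum(np.log(P[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

def classify(seq, P_aware, P_unaware):
    """Return whether the sequence better fits the 'recognized deception' chain."""
    return "aware" if log_likelihood(seq, P_aware) > log_likelihood(seq, P_unaware) else "unaware"
```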

Machine Learning-based Detection of DoS and DRDoS Attacks in IoT Networks

  • Yeo, Seung-Yeon;Jo, So-Young;Kim, Jiyeon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.7
    • /
    • pp.101-108
    • /
    • 2022
  • We propose an intrusion detection model that detects denial-of-service (DoS) and distributed reflection denial-of-service (DRDoS) attacks based on empirical data from each Internet of Things (IoT) device, by training on system and network metrics that can be commonly collected from various IoT devices. First, we collect 37 system and network metrics from each IoT device under IoT attack scenarios; we then train six types of machine learning models on them to identify the most effective models as well as the metrics most important for detecting and distinguishing IoT attacks. Our experimental results show that the Random Forest model performs best, with an accuracy of over 96%, followed by the K-Nearest Neighbor and Decision Tree models. Of the 37 metrics, we identified five CPU, memory, and network metrics that best capture the characteristics of the attacks across all experimental scenarios. Furthermore, we found that high packet transmission rates characterize DoS and DRDoS attacks in IoT networks more clearly than large packet sizes (see the pipeline sketch below).
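
A minimal sketch of the described pipeline, assuming the 37 metrics are available as a CSV with a `label` column (hypothetical file and column names): train a Random Forest and rank the metrics by importance.

```python
# Minimal sketch of the described pipeline; "iot_metrics.csv" and the
# "label" column are hypothetical placeholders for the 37 collected metrics.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("iot_metrics.csv")                 # 37 metric columns + label
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# Rank metrics by importance (the abstract singles out a handful of CPU,
# memory, and network-rate metrics as most informative).
importances = pd.Series(clf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(5))
```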

Autoencoder-Based Defense Technique against One-Pixel Adversarial Attacks in Image Classification (이미지 분류를 위한 오토인코더 기반 One-Pixel 적대적 공격 방어기법)

  • Jeong-hyun Sim;Hyun-min Song
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1087-1098
    • /
    • 2023
  • The rapid advancement of artificial intelligence (AI) technology has led to its active use across many fields. However, the widespread adoption of AI-based systems has raised concerns about the growing threat of attacks on these systems. In particular, deep neural networks commonly used in deep learning have been found vulnerable to adversarial attacks that intentionally manipulate input data to induce model errors. In this study, we propose a method to protect image classification models from One-Pixel attacks, in which only a single, visually imperceptible pixel of an image is altered. The proposed defense uses an autoencoder model to remove potential threat elements from input images before forwarding them to the classification model. Experimental results on the CIFAR-10 dataset show that the autoencoder-based defense significantly improves the robustness of pretrained image classification models against One-Pixel attacks, with an average defense-rate improvement of 81.2%, without any modification to the existing models (see the defense sketch below).
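
A minimal sketch of the defense described above: reconstruct each input with a convolutional autoencoder before handing it to the unmodified classifier. The autoencoder architecture shown here is an assumption sized for CIFAR-10, not the paper's exact model.

```python
# Minimal sketch of the defense: reconstruct each input with a denoising
# autoencoder before the unmodified classifier, smoothing away the single
# perturbed pixel. Architecture below is an assumption sized for CIFAR-10.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())  # 16x16 -> 8x8
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 8x8 -> 16x16
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid())  # 16x16 -> 32x32

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defended_predict(autoencoder, classifier, x):
    """Reconstruct first, then classify; the classifier itself is unchanged."""
    with torch.no_grad():
        return classifier(autoencoder(x)).argmax(dim=1)
```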

Economic Damage Model on Industries due to Internet Attack and A Case Study (인터넷 침해사고로 인한 기업의 경제적인 피해 산출 모델 및 케이스 연구)

  • Jang, Jong-Ho;Chung, Ki-Hyun;Choi, Kyung-Hee
    • The KIPS Transactions:PartC
    • /
    • v.15C no.3
    • /
    • pp.191-198
    • /
    • 2008
  • Thanks to the development of the Internet, most people can acquire information freely, but the Internet also has drawbacks, a notable one being Internet attacks. Internet attacks have caused damage in many fields, and the economic damage in particular is severe; companies are especially vulnerable to it. However, little research has been conducted on models that estimate the economic damage caused by Internet attacks. This paper presents a model for estimating that economic damage, together with a case study.