• Title/Summary/Keyword: MNIST Dataset


An Adversarial Attack Type Classification Method Using Linear Discriminant Analysis and k-means Algorithm (선형 판별 분석 및 k-means 알고리즘을 이용한 적대적 공격 유형 분류 방안)

  • Choi, Seok-Hwan; Kim, Hyeong-Geon; Choi, Yoon-Ho
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1215-1225 / 2021
  • Although Artificial Intelligence (AI) techniques have shown impressive performance in various fields, they are vulnerable to adversarial examples, which induce misclassification by adding human-imperceptible perturbations to the input. Previous studies on defending against adversarial examples can be classified into three categories: (1) model retraining methods; (2) input transformation methods; and (3) adversarial example detection methods. However, even though defense methods against adversarial examples have constantly been proposed, there has been no research on classifying the type of adversarial attack. In this paper, we propose an adversarial attack family classification method based on dimensionality reduction and clustering. Specifically, after extracting the adversarial perturbation from an adversarial example, we perform Linear Discriminant Analysis (LDA) to reduce the dimensionality of the perturbation and apply the k-means algorithm to classify the adversarial attack family. From experimental results on the MNIST and CIFAR-10 datasets, we show that the proposed method can efficiently classify five types of adversarial attacks (FGSM, BIM, PGD, DeepFool, C&W). We also show that the proposed method provides good classification performance even when the legitimate input corresponding to the adversarial example is unknown.
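The paper itself gives no code; the following is a minimal sketch of the pipeline described above (LDA to reduce the dimensionality of extracted perturbations, then k-means to group them into attack families), assuming the perturbations are flattened into vectors and that attack-family labels are available to fit the LDA. The placeholder data and sizes are illustrative only, not values from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

# Hypothetical inputs: perturbation = adversarial_example - legitimate_input, flattened;
# labels identify the attack family (FGSM, BIM, PGD, DeepFool, C&W).
perturbations = np.random.randn(500, 28 * 28)      # placeholder data
attack_labels = np.random.randint(0, 5, size=500)  # placeholder labels

# Reduce dimensionality with LDA (at most n_classes - 1 = 4 components for 5 families).
lda = LinearDiscriminantAnalysis(n_components=4)
reduced = lda.fit_transform(perturbations, attack_labels)

# Cluster the reduced perturbations into 5 groups, one per attack family.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(reduced)
```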

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young; You, Eun-Kyung; Kim, Hyeock-Jin
    • Journal of Digital Convergence / v.20 no.4 / pp.389-396 / 2022
  • Deep learning is based on the perceptron and is currently being used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics of the commonly used SGD, momentum, AdaGrad, RMSProp, and Adam methods according to the number of neurons. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was applied as the activation function, cross-entropy error (CEE) was applied as the loss function, and MNIST was used as the experimental dataset. As a result, it was concluded that 100-300 neurons, the Adam algorithm, and 200 training iterations would be the most efficient for deep learning training. This study will provide implications for choosing the algorithm and a reference value for the number of neurons when new training data are given in the future.
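As a rough illustration of the experimental setting described above (one input layer, three hidden layers, ReLU activations, cross-entropy loss, the Adam optimizer, MNIST), the following PyTorch sketch uses a hidden width of 200, inside the 100-300 range the study found efficient; the width, batch size, and learning rate are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# MLP with 3 hidden layers and ReLU, trained with cross-entropy loss and Adam on MNIST.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

for images, targets in loader:          # one pass over the data for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
```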

Clock Glitch-based Fault Injection Attack on Deep Neural Network (Deep Neural Network에 대한 클럭 글리치 기반 오류 주입 공격)

  • Hyoju Kang; Seongwoo Hong; Youngju Lee; Jeacheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.5 / pp.855-863 / 2024
  • The use of Deep Neural Networks (DNNs) is gradually increasing in various fields due to their high efficiency in data analysis and prediction. However, as deep neural networks are used more frequently, the security threats associated with them are also increasing. In particular, if a fault occurs in the forward propagation process or the activation function, which can directly affect the prediction of a deep neural network, it can cause fatal damage to the prediction accuracy of the model. In this paper, we performed fault injection attacks on the forward propagation process of each layer except the input layer in a deep neural network and on the Softmax function used in the output layer, and analyzed the experimental results. As a result of fault injection on the MNIST dataset using a clock glitch, we confirmed that fault injection into the iteration statements can cause deterministic misclassification depending on the network parameters.
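The attack itself is carried out in hardware with a glitched clock; the following is a software-only sketch of the described effect, in which a fault in a layer's iteration loop causes part of the forward pass to be skipped and can change the predicted class. The network shapes and the "abort the loop early" fault model are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Simulate an early-aborted neuron loop in one layer of a small fully connected network.
def forward(x, weights, biases, faulty_layer=None, break_at=None):
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = x @ W + b
        if i == faulty_layer and break_at is not None:
            z[break_at:] = 0.0  # loop aborted early: neurons past break_at never computed
        x = np.maximum(z, 0.0) if i < len(weights) - 1 else z  # ReLU on hidden layers
    return np.argmax(x)         # predicted class (softmax omitted; argmax is unchanged)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((784, 128)), rng.standard_normal((128, 10))]
biases = [np.zeros(128), np.zeros(10)]
x = rng.standard_normal(784)
print(forward(x, weights, biases))                               # fault-free prediction
print(forward(x, weights, biases, faulty_layer=0, break_at=16))  # faulty prediction
```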

Application Scenario of Integrated Development Environment for Autonomous IoT Applications based on Neuromorphic Architecture (뉴로모픽 아키텍처 기반 자율형 IoT 응용 통합개발환경 응용 시나리오)

  • Park, Jisu; Kim, Seoyeon; Kim, Hoinam; Jeong, Jaehyeok; Kim, Kyeongsoo; Jung, Jinman; Yun, Young-Sun
    • Smart Media Journal / v.11 no.2 / pp.63-69 / 2022
  • As the use of various IoT devices increases, the importance of IoT platforms is also rising. Recently, artificial intelligence technology has been combined with IoT devices, and research applying neuromorphic architectures to low-power IoT devices is also increasing. In this paper, an application scenario is proposed based on NA-IDE (Neuromorphic Architecture-based autonomous IoT application Integrated Development Environment), which combines IoT devices and FPGA devices in a GUI format. The proposed scenario connects a camera module to an IoT device, collects MNIST dataset images online, recognizes the collected images through a neuromorphic board, and displays the recognition results through a device module connected to other IoT devices. If the neuromorphic architecture is applied to many IoT devices and used for various application services, the proposed integrated development environment is expected to emerge as a core technology leading the fourth industrial revolution.

Improving the Performance of Certified Defense Against Adversarial Attacks (적대적인 공격에 대한 인증 가능한 방어 방법의 성능 향상)

  • Go, Hyojun; Park, Byeongjun; Kim, Changick
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.100-102 / 2020
  • Deep neural networks can easily be made to malfunction by adversarial examples generated through adversarial attacks. Although various defense methods have been proposed, the possibility remains that stronger adversarial attacks will be proposed that neutralize these defenses. This possibility highlights the need for certified defense methods, which can guarantee defense against any adversarial attack within a given perturbation bound. In this paper, we therefore study how to improve the performance of Interval Bound Propagation (IBP), known as one of the most effective certified defense methods. Specifically, we propose a modification of the training procedure of the existing IBP method and show that it improves performance while maintaining the training time of the original method. In experiments on the MNIST dataset with the proposed method, we reduced the verified error compared to the original IBP method by 1.77% for the Large model and 0.96% for the Small model.
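For readers unfamiliar with IBP, the following sketch shows how interval bounds are propagated through a single linear layer and a ReLU, which is the core operation certified training builds on; the layer size and perturbation budget epsilon are illustrative assumptions, and the paper's proposed training-procedure modification is not reproduced here.

```python
import torch
import torch.nn as nn

# Given elementwise input bounds [lower, upper] (e.g. x ± epsilon for an L-infinity budget),
# propagate them through a linear layer using the positive/negative parts of the weights.
def ibp_linear(layer: nn.Linear, lower, upper):
    w_pos = layer.weight.clamp(min=0)
    w_neg = layer.weight.clamp(max=0)
    new_lower = lower @ w_pos.t() + upper @ w_neg.t() + layer.bias
    new_upper = upper @ w_pos.t() + lower @ w_neg.t() + layer.bias
    return new_lower, new_upper

layer = nn.Linear(784, 100)
x = torch.rand(1, 784)
eps = 0.1
lower, upper = ibp_linear(layer, x - eps, x + eps)
lower, upper = torch.relu(lower), torch.relu(upper)  # ReLU is monotone, so bounds pass through
```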


A Method for Fashion Clothing Image Classification (패션 의류 영상 분류 방법)

  • Ichinkhorloo, Gotovsuren; Shin, Seong-Yoon; Lee, Hyun-Chang
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.559-560 / 2020
  • To achieve fast and accurate classification of fashion clothing images, we propose a new method based on a deep learning model with an optimized dynamically decaying learning rate and an improved model structure. We conducted experiments with the proposed model on the Fashion-MNIST dataset and compared it with CNN, LeNet, LSTM, and BiLSTM methods.
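The abstract does not specify the form of the dynamically decaying learning rate; the following sketch assumes a simple exponential decay applied once per epoch to a small CNN on Fashion-MNIST, purely to illustrate the mechanism.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Small CNN on Fashion-MNIST with an exponentially decaying learning rate (an assumption).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 14 * 14, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)  # lr *= 0.95 per epoch
loss_fn = nn.CrossEntropyLoss()

train_set = datasets.FashionMNIST("data", train=True, download=True,
                                  transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

for epoch in range(5):
    for images, targets in loader:
        optimizer.zero_grad()
        loss_fn(model(images), targets).backward()
        optimizer.step()
    scheduler.step()  # decay the learning rate once per epoch
```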


A Noise-Tolerant Hierarchical Image Classification System based on Autoencoder Models (오토인코더 기반의 잡음에 강인한 계층적 이미지 분류 시스템)

  • Lee, Jong-kwan
    • Journal of Internet Computing and Services / v.22 no.1 / pp.23-30 / 2021
  • This paper proposes a noise-tolerant image classification system using multiple autoencoders. The development of deep learning technology has dramatically improved the performance of image classifiers. However, if the images are contaminated by noise, the performance degrades rapidly. Noise is inevitably added to an image in the process of obtaining and transmitting it, so to use a classifier in a real environment, the noise must be dealt with. The autoencoder, on the other hand, is an artificial neural network model trained to produce outputs similar to its inputs. If the input data is similar to the training data, the error between the autoencoder's input and output will be small; if not, the error will be large. The proposed system exploits this relationship between the autoencoder's input and output and classifies images in two phases: in the first phase, the classes with the highest likelihood are selected and then subjected to the procedure again in the second phase. For the performance analysis of the proposed system, classification accuracy was tested on a Gaussian noise-contaminated MNIST dataset. The experiment confirmed that the proposed system achieves higher accuracy in noisy environments than a CNN-based classification technique.
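A minimal sketch of the per-class autoencoder idea follows: one autoencoder per class, with a test image assigned to the class whose autoencoder reconstructs it with the lowest error. The two-phase narrowing step and the exact architectures are not given in the abstract, so the sizes below (and the omission of training) are assumptions.

```python
import torch
import torch.nn as nn

# One autoencoder per MNIST class; classification by minimum reconstruction error.
def make_autoencoder():
    return nn.Sequential(
        nn.Linear(784, 64), nn.ReLU(),    # encoder
        nn.Linear(64, 784), nn.Sigmoid()  # decoder
    )

autoencoders = [make_autoencoder() for _ in range(10)]  # training on per-class data omitted

def classify(image_vec):
    errors = [nn.functional.mse_loss(ae(image_vec), image_vec) for ae in autoencoders]
    return int(torch.argmin(torch.stack(errors)))  # class with the best reconstruction

x = torch.rand(784)  # placeholder noisy image, flattened
print(classify(x))
```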

Perceptual Ad-Blocker Design For Adversarial Attack (적대적 공격에 견고한 Perceptual Ad-Blocker 기법)

  • Kim, Min-jae; Kim, Bo-min; Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.871-879 / 2020
  • Perceptual Ad-Blocking is a new advertisement blocking technique that detects online advertisements using an artificial intelligence-based advertising image classification model. A recent study has shown that these Perceptual Ad-Blocking models are vulnerable to adversarial attacks that use adversarial examples, adding noise to images so that they are misclassified. In this paper, we show that the existing Perceptual Ad-Blocking technique is weak against several adversarial examples, and that Defense-GAN and MagNet, which performed well on the MNIST and CIFAR-10 datasets, also work well on an advertising dataset. Based on this, we present a new advertising image classification model that is robust against adversarial attacks using the Defense-GAN and MagNet techniques. According to experiments with various existing adversarial attack techniques, the techniques proposed in this paper secured accuracy and performance through robust image classification, and furthermore defended to a certain level against white-box attacks by attackers who knew the details of the defense techniques.
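As a hint of how MagNet-style detection works, the following sketch flags an input as adversarial when an autoencoder trained on clean images reconstructs it with unusually high error; the architecture and threshold are illustrative assumptions, and Defense-GAN's generator-based reconstruction is not shown.

```python
import torch
import torch.nn as nn

# MagNet-style detector: large reconstruction error on an autoencoder trained with clean
# images suggests the input lies off the clean-data manifold, i.e. is likely adversarial.
autoencoder = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Sigmoid(),
)  # assumed to have been trained on clean images beforehand

def is_adversarial(image_vec, threshold=0.05):
    reconstruction = autoencoder(image_vec)
    error = nn.functional.mse_loss(reconstruction, image_vec).item()
    return error > threshold  # above-threshold error -> reject as adversarial

print(is_adversarial(torch.rand(784)))
```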

Dynamic Adjustment of the Pruning Threshold in Deep Compression (Deep Compression의 프루닝 문턱값 동적 조정)

  • Lee, Yeojin; Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.22 no.3 / pp.99-103 / 2021
  • Recently, convolutional neural networks (CNNs) have been widely utilized due to their outstanding performance in various computer vision fields. However, because of their computation-intensive nature and high memory requirements, it is difficult to deploy CNNs on hardware platforms with limited resources, such as mobile and IoT devices. To address these limitations, neural network compression research is underway to reduce the size of neural networks while maintaining their performance. This paper proposes a CNN compression technique that dynamically adjusts the thresholds of pruning, one of the neural network compression techniques. Unlike conventional pruning, which experimentally or heuristically sets the thresholds that determine the weights to be pruned, the proposed technique dynamically finds the optimal thresholds that prevent accuracy degradation and outputs the lightweight neural network in less time. To validate the performance of the proposed technique, LeNet was trained on the MNIST dataset, and the lightweight LeNet could be obtained automatically 1.3 to 3 times faster without loss of accuracy.
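The following sketch illustrates magnitude pruning with a threshold and a simple "raise the threshold until accuracy drops" search; this loop is a stand-in for the paper's dynamic adjustment scheme, which the abstract does not detail, and evaluate() is a hypothetical user-supplied accuracy function.

```python
import torch
import torch.nn as nn

# Magnitude pruning as in Deep Compression: weights whose absolute value falls below
# the threshold are zeroed out.
def prune_below(model: nn.Module, threshold: float):
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() >= threshold).float())  # zero out small-magnitude weights

# Simplified dynamic search (an assumption, not the paper's algorithm): grow the threshold
# in small steps and stop just before accuracy falls more than max_drop below the baseline.
def find_threshold(model, evaluate, max_drop=0.005, step=1e-3):
    baseline = evaluate(model)       # evaluate() returns accuracy on a validation set
    threshold = 0.0
    while True:
        candidate = threshold + step
        snapshot = {k: v.clone() for k, v in model.state_dict().items()}
        prune_below(model, candidate)
        if evaluate(model) < baseline - max_drop:
            model.load_state_dict(snapshot)  # revert the last, too-aggressive step
            return threshold
        threshold = candidate
```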

Detecting Adversarial Example Using Ensemble Method on Deep Neural Network (딥뉴럴네트워크에서의 적대적 샘플에 관한 앙상블 방어 연구)

  • Kwon, Hyun; Yoon, Joonhyeok; Kim, Junseob; Park, Sangjun; Kim, Yongchul
    • Convergence Security Journal / v.21 no.2 / pp.57-66 / 2021
  • Deep neural networks (DNNs) provide excellent performance for image, speech, and pattern recognition. However, DNNs sometimes misrecognize certain adversarial examples. An adversarial example is a sample created by adding optimized noise to the original data, which causes the DNN to misclassify it even though it looks unchanged to the human eye. Therefore, studies on defense against adversarial example attacks are required. In this paper, we experimentally analyzed the detection success rate for adversarial examples while adjusting various parameters. The performance of the ensemble defense method was analyzed using the fast gradient sign method, DeepFool, and the Carlini & Wagner method as adversarial example attack methods. Moreover, we used MNIST as the experimental data and TensorFlow as the machine learning library. As an experimental method, we carried out performance analysis based on the three adversarial attack methods, the threshold, the number of models, and random noise. As a result, with 7 models and a threshold of 1, the detection rate for adversarial examples was 98.3%, while 99.2% accuracy on original samples was maintained.
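A minimal sketch of the ensemble-disagreement idea follows: several independently trained models vote on the same input, and too much disagreement with the majority flags the input as adversarial. The stub models, the predict() interface, and the threshold semantics are assumptions; the paper's ensemble uses real trained classifiers and also varies random noise.

```python
import numpy as np

# Hypothetical stand-in for an independently trained classifier exposing predict().
class StubModel:
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)
    def predict(self, x):
        return int(self.rng.integers(0, 10))  # placeholder for a real trained model

def detect_adversarial(models, x, threshold=1):
    votes = np.array([m.predict(x) for m in models])         # one class label per model
    disagreements = len(models) - np.bincount(votes).max()   # models outside the majority
    return disagreements > threshold  # too much disagreement -> likely adversarial

models = [StubModel(seed) for seed in range(7)]  # 7 models, as in the reported best setting
print(detect_adversarial(models, np.zeros(784)))
```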