• Title/Summary/Keyword: Adversarial Data Generation

Effective Analysis of GAN based Fake Data for the Deep Learning Model (딥러닝 훈련을 위한 GAN 기반 거짓 영상 분석효과에 대한 연구)

  • Seungmin, Jang; Seungwoo, Son; Bongsuck, Kim
    • KEPCO Journal on Electric Power and Energy, v.8 no.2, pp.137-141, 2022
  • To inspect power facility faults using artificial intelligence, the accuracy of the diagnostic model must be improved. Data augmentation using a generative adversarial network (GAN) is one of the most effective ways to improve deep learning performance. A GAN can create realistic-looking fake images using two competing networks, a generator and a discriminator. In this study, we verify the effectiveness of virtual data generation by including GAN-generated fake images of power facilities in the deep learning training set. GAN-based fake images were created for damaged LP insulators, and a ResNet-based normal/defect classification model was developed to verify the effect. Through this, we analyzed model accuracy according to the ratio of normal to defective training data.
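
A minimal sketch of the augmentation setup this abstract describes: GAN-generated defect images are mixed into a real training set at a chosen ratio and a ResNet classifier is trained on the mixture. The directory names, mixing ratio, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Hedged sketch (not the paper's code): augment a normal/defect training set
# with GAN-generated fake images at a chosen ratio, then train a ResNet classifier.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, Subset
from torchvision import datasets, transforms, models

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
real_ds = datasets.ImageFolder("data/real", transform=tf)      # normal / defect subfolders
fake_ds = datasets.ImageFolder("data/gan_fake", transform=tf)   # same class subfolders, GAN outputs

fake_ratio = 0.5                                    # fraction of fake samples to mix in (assumption)
n_fake = int(len(real_ds) * fake_ratio)
fake_subset = Subset(fake_ds, range(min(n_fake, len(fake_ds))))
train_loader = DataLoader(ConcatDataset([real_ds, fake_subset]),
                          batch_size=32, shuffle=True)

model = models.resnet18(weights=None, num_classes=2)  # normal vs. defect
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for x, y in train_loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

Sweeping `fake_ratio` is one way to study how the mix of real and generated images affects accuracy, the kind of ratio analysis the abstract reports.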

Generation of virtual mandibular first molar teeth and accuracy analysis using deep convolutional generative adversarial network (심층 합성곱 생성적 적대 신경망을 활용한 하악 제1대구치 가상 치아 생성 및 정확도 분석)

  • Eun-Jeong Bae; Sun-Young Ihm
    • Journal of Technologic Dentistry, v.46 no.2, pp.36-41, 2024
  • Purpose: This study aimed to generate virtual mandibular left first molar teeth using deep convolutional generative adversarial networks (DCGANs) and analyze their matching accuracy with actual tooth morphology to propose a new paradigm for using medical data. Methods: Occlusal surface images of the mandibular left first molar scanned using a dental model scanner were analyzed using DCGANs. Overall, 100 training sets comprising 50 original and 50 background-removed images were created, generating 1,000 virtual teeth. These virtual teeth were classified based on the number of cusps and the occlusal surface ratio, and subsequently analyzed for consistency by expert dental technicians over three rounds of examination. Statistical analysis was conducted using IBM SPSS Statistics ver. 23.0 (IBM), including the intraclass correlation coefficient for intrarater reliability, one-way ANOVA, and Tukey's post-hoc analysis. Results: Virtual mandibular left first molars exhibited high consistency in the occlusal surface ratio but varied on the other criteria. Consistency was highest for the occlusal buccal-lingual criterion at 91.9%, whereas discrepancies were observed most often for the occlusal buccal cusp criterion at 85.5%. Significant differences were observed among all groups (p<0.05). Conclusion: Based on the classification of the virtually generated left mandibular first molars according to several criteria, DCGANs can generate virtual data highly similar to real data. Thus, subsequent research in the dental field, including the development of improved neural network structures, is necessary.
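
Purely as an illustration of the DCGAN generator this study relies on, the sketch below maps a latent vector to a small grayscale image such as an occlusal-surface scan. The layer widths and the 64x64 output size are assumptions, not the authors' architecture.

```python
# Hedged sketch of a minimal DCGAN generator for grayscale images.
import torch
from torch import nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),  # 4x4
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1, bias=False),           # 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

fake = Generator()(torch.randn(8, 100))   # 8 virtual tooth images, shape (8, 1, 64, 64)
```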

A Study on the Synthetic ECG Generation for User Recognition (사용자 인식을 위한 가상 심전도 신호 생성 기술에 관한 연구)

  • Kim, Min Gu; Kim, Jin Su; Pan, Sung Bum
    • Smart Media Journal, v.8 no.4, pp.33-37, 2019
  • Because ECG signals are time-series data acquired over time, it is important to obtain comparison data of the same size as the enrolled data each time. This paper proposes a GAN (generative adversarial network) model based on an auxiliary classifier to generate synthetic ECG signals, which can address the data-size mismatch. Cosine similarity and cross-correlation are used to examine the similarity of the synthetic ECG signals. The analysis shows an average cosine similarity of 0.991 and an average cross-correlation-based Euclidean distance of 0.25. These results indicate that the data-size mismatch can be resolved and that the generated synthetic ECG signals, which closely resemble real ECG signals, can supply synthetic data even when the enrolled data and the comparison data differ in size.
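
A small sketch of the two similarity checks named in the abstract, cosine similarity and normalized cross-correlation, applied to a real and a synthetic ECG segment. The signals here are random placeholders, not ECG data from the paper.

```python
# Hedged sketch: similarity measures between a real and a synthetic 1-D signal.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_cross_correlation(a, b):
    # Normalized cross-correlation, maximized over all lags (1.0 = perfect match).
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0, 2 * np.pi, 500)
real_ecg = np.sin(5 * t) + 0.1 * np.random.randn(500)       # stand-in for a real beat
synthetic_ecg = np.sin(5 * t) + 0.1 * np.random.randn(500)  # stand-in for a GAN output

print(cosine_similarity(real_ecg, synthetic_ecg))
print(max_cross_correlation(real_ecg, synthetic_ecg))
```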

Evaluation of Sentimental Texts Automatically Generated by a Generative Adversarial Network (생성적 적대 네트워크로 자동 생성한 감성 텍스트의 성능 평가)

  • Park, Cheon-Young; Choi, Yong-Seok; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering, v.8 no.6, pp.257-264, 2019
  • Recently, deep neural network based approaches have shown good performance in various fields of natural language processing. A huge amount of training data is essential for building a deep neural network model, but collecting a large amount of training data is a costly and time-consuming job. Data augmentation is one solution to this problem. Augmenting text data is more difficult than augmenting image data because texts consist of tokens with discrete values. Generative adversarial networks (GANs) are widely used for image generation. In this work, we generate sentimental texts using CS-GAN, a GAN model that has a classifier as well as a discriminator. We evaluate the usefulness of the generated sentimental texts with various measurements. The CS-GAN model not only generates texts with greater diversity but also improves the performance of its classifier.
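
The abstract does not list its exact measurements, so the sketch below shows one commonly used stand-in for generated-text diversity, distinct-n (the ratio of unique n-grams to all n-grams in the generated corpus), purely as an illustration of how diversity can be scored.

```python
# Hedged sketch: distinct-n diversity score for a set of generated sentences.
from collections import Counter

def distinct_n(sentences, n=2):
    ngrams = Counter()
    total = 0
    for s in sentences:
        tokens = s.split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
            total += 1
    return len(ngrams) / total if total else 0.0

generated = ["the movie was great", "the movie was boring", "i loved the acting"]
print(distinct_n(generated, n=1), distinct_n(generated, n=2))
```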

Rapid Misclassification Sample Generation Attack on Deep Neural Network (딥뉴럴네트워크 상에 신속한 오인식 샘플 생성 공격)

  • Kwon, Hyun; Park, Sangjun; Kim, Yongchul
    • Convergence Security Journal, v.20 no.2, pp.111-121, 2020
  • Deep neural networks (DNNs) provide good performance for machine learning tasks such as image recognition and object recognition. However, DNNs are vulnerable to adversarial examples. An adversarial example is an attack sample that causes a neural network to misclassify it by adding minimal noise to the original sample. Its drawback is that generating such an adversarial example takes a long time, and in some cases an attack is needed that makes the neural network misclassify quickly. In this paper, we propose a fast misclassification sample that can rapidly attack neural networks. The proposed method does not consider the distortion of the original sample when adding noise. We used MNIST and CIFAR-10 as experimental data and TensorFlow as the machine learning library. Experimental results show that the fast misclassification samples generated by the proposed method require 50% and 80% fewer iterations for MNIST and CIFAR-10, respectively, than the conventional Carlini method, while achieving a 100% attack success rate.
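
A hedged sketch of an iterative gradient-sign attack that, like the method described, keeps adding noise until the model misclassifies and applies no distortion penalty. It is written in PyTorch rather than the paper's TensorFlow, and the toy model, step size, and iteration cap are assumptions, not the authors' algorithm.

```python
# Hedged sketch: iterate gradient-sign steps until the model's prediction changes.
import torch
from torch import nn

def fast_misclassify(model, x, label, step=0.05, max_iters=100):
    x_adv = x.clone().detach()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), label)
        loss.backward()
        # Ascend the loss; no distortion penalty is applied.
        x_adv = (x_adv + step * x_adv.grad.sign()).detach()
        if model(x_adv).argmax(dim=1).item() != label.item():
            break
    return x_adv

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST-sized classifier
x = torch.rand(1, 1, 28, 28)
adv = fast_misclassify(model, x, torch.tensor([3]))
```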

An Edge Detection Technique for Performance Improvement of eGAN (eGAN 모델의 성능개선을 위한 에지 검출 기법)

  • Lee, Cho Youn; Park, Ji Su; Shon, Jin Gon
    • KIPS Transactions on Software and Data Engineering, v.10 no.3, pp.109-114, 2021
  • A GAN (generative adversarial network) is an image generation model composed of a generator network and a discriminator network that generates images similar to real images. Because the images generated by a GAN should resemble real images, a loss function is used to minimize the error of the generated images. However, the GAN loss function makes the training that generates images unstable and thereby degrades image quality. To solve this problem, this paper analyzes GAN-related studies and proposes an edge GAN (eGAN) that uses edge detection. Experimental results show that the eGAN model improves on the performance of the existing GAN model.
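
One plausible way to wire edge detection into a GAN loss, shown only as a sketch and not the authors' implementation: a Sobel edge map is computed for real and generated images and their L1 difference is added to the usual adversarial term. The weighting is an arbitrary assumption.

```python
# Hedged sketch: add an edge-consistency term (Sobel edges) to a generator loss.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    # img: (N, 1, H, W) grayscale batch
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def generator_loss(d_fake_logits, fake_imgs, real_imgs, edge_weight=10.0):
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    edge = F.l1_loss(sobel_edges(fake_imgs), sobel_edges(real_imgs))
    return adv + edge_weight * edge
```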

PathGAN: Local path planning with attentive generative adversarial networks

  • Dooseop Choi; Seung-Jun Han; Kyoung-Wook Min; Jeongdan Choi
    • ETRI Journal, v.44 no.6, pp.1004-1019, 2022
  • For autonomous driving without high-definition maps, we present a model capable of generating multiple plausible paths from egocentric images for autonomous vehicles. Our generative model comprises two neural networks: feature extraction network (FEN) and path generation network (PGN). The FEN extracts meaningful features from an egocentric image, whereas the PGN generates multiple paths from the features, given a driving intention and speed. To ensure that the paths generated are plausible and consistent with the intention, we introduce an attentive discriminator and train it with the PGN under a generative adversarial network framework. Furthermore, we devise an interaction model between the positions in the paths and the intentions hidden in the positions and design a novel PGN architecture that reflects the interaction model for improving the accuracy and diversity of the generated paths. Finally, we introduce ETRIDriving, a dataset for autonomous driving, in which the recorded sensor data are labeled with discrete high-level driving actions, and demonstrate the state-of-the-art performance of the proposed model on ETRIDriving in terms of accuracy and diversity.
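
A loose sketch of the FEN/PGN split described above, not the PathGAN code: a ResNet backbone stands in for the feature extraction network, and a small conditional MLP generates several candidate waypoint sequences from the image features, an intention one-hot vector, a speed value, and noise. All dimensions and the waypoint count are assumptions.

```python
# Hedged sketch: image features + intention + speed + noise -> multiple candidate paths.
import torch
from torch import nn
from torchvision import models

class PathGenerator(nn.Module):
    def __init__(self, feat_dim=512, n_intentions=4, n_points=20, z_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + n_intentions + 1 + z_dim, 256), nn.ReLU(),
            nn.Linear(256, n_points * 2),           # (x, y) for each waypoint
        )
        self.n_points, self.z_dim = n_points, z_dim

    def forward(self, feat, intention, speed, n_samples=5):
        outs = []
        for _ in range(n_samples):                  # multiple plausible paths
            z = torch.randn(feat.size(0), self.z_dim)
            h = torch.cat([feat, intention, speed, z], dim=1)
            outs.append(self.mlp(h).view(-1, self.n_points, 2))
        return torch.stack(outs, dim=1)             # (N, n_samples, n_points, 2)

# ResNet backbone as a stand-in feature extraction network (FEN).
fen = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1], nn.Flatten())
img = torch.rand(1, 3, 224, 224)
paths = PathGenerator()(fen(img), torch.eye(4)[[1]], torch.tensor([[8.0]]))
```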

Bagging deep convolutional autoencoders trained with a mixture of real data and GAN-generated data

  • Hu, Cong; Wu, Xiao-Jun; Shu, Zhen-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.11, pp.5427-5445, 2019
  • While deep neural networks have achieved remarkable performance in representation learning, supervised deep models such as convolutional neural networks usually require a huge amount of labeled training data. In this paper, we propose a new representation learning method, generative adversarial network (GAN) based bagging deep convolutional autoencoders (GAN-BDCAE), which can map data to diverse hierarchical representations in an unsupervised fashion. Boosting the size of the training data, training deep models, and aggregating diverse learning machines are the three principal avenues toward increasing the representation learning capability of neural networks, and we focus on combining these three techniques. To this end, we adopt a GAN for realistic unlabeled sample generation and bagging deep convolutional autoencoders (BDCAE) for robust feature learning. The proposed method improves the discriminative ability of the learned feature embedding for subsequent pattern recognition problems. We evaluate our approach on three standard benchmarks and demonstrate its superiority over traditional unsupervised learning methods.
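
A minimal sketch of the core idea, training a convolutional autoencoder on a mixture of real and GAN-generated samples; the full GAN-BDCAE approach would repeat this over several bootstrapped mixtures and aggregate the learned encoders. The tensors, shapes, and hyperparameters below are placeholders, not the paper's setup.

```python
# Hedged sketch: one convolutional autoencoder trained on real + GAN-generated images.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

real = torch.rand(256, 1, 28, 28)        # placeholder real images
gan_fake = torch.rand(128, 1, 28, 28)    # placeholder GAN-generated images
loader = DataLoader(TensorDataset(torch.cat([real, gan_fake])),
                    batch_size=64, shuffle=True)

ae = ConvAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for epoch in range(5):
    for (x,) in loader:
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(x), x)   # reconstruction objective
        loss.backward()
        opt.step()
```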

A Study on Virtual Tooth Image Generation Using Deep Learning - Based on the number of learning (심층 학습을 활용한 가상 치아 이미지 생성 연구 -학습 횟수를 중심으로)

  • Bae, EunJeong; Jeong, Junho; Son, Yunsik; Lim, JoonYeon
    • Journal of Technologic Dentistry, v.42 no.1, pp.1-8, 2020
  • Purpose: Among the virtual teeth generated by deep convolutional generative adversarial networks (DCGANs), the optimal data were analyzed with respect to the number of training epochs. Methods: We extracted 50 mandibular first molar occlusal surfaces and trained a DCGAN for 4,000 epochs. The training output was saved every 50 epochs and evaluated on a 5-point Likert scale according to five classification criteria. Results were analyzed by one-way ANOVA and Tukey HSD post-hoc analysis (α = 0.05). Results: The score was highest, at 83.90±6.32, for group 3 (epochs 2,050-3,000), with statistically significant differences from group 1 (epochs 50-1,000) and group 2 (epochs 1,050-2,000). Conclusion: Since the optimal virtual tooth generation differs according to the number of training epochs, it is necessary to analyze the training-epoch range in various ways.
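
A sketch of the train-and-snapshot pattern the Methods section describes: train a GAN for 4,000 epochs and save a grid of generated samples every 50 epochs so each epoch range can be rated afterwards. The optimizer settings and the logit-output discriminator are assumptions, and `generator`, `discriminator`, and `loader` are assumed to be defined elsewhere (with `loader` yielding batches of image tensors).

```python
# Hedged sketch: standard GAN training loop with periodic sample snapshots.
import torch
from torchvision.utils import save_image

def train_and_snapshot(generator, discriminator, loader, z_dim=100,
                       epochs=4000, snapshot_every=50):
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    bce = torch.nn.BCEWithLogitsLoss()        # discriminator assumed to output (N, 1) logits
    fixed_z = torch.randn(16, z_dim)

    for epoch in range(1, epochs + 1):
        for real in loader:
            z = torch.randn(real.size(0), z_dim)
            fake = generator(z)
            # Discriminator step: real -> 1, fake -> 0
            d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
                     bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
            d_opt.zero_grad()
            d_loss.backward()
            d_opt.step()
            # Generator step: try to fool the discriminator
            g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
            g_opt.zero_grad()
            g_loss.backward()
            g_opt.step()
        if epoch % snapshot_every == 0:
            save_image(generator(fixed_z), f"samples/epoch_{epoch:04d}.png",
                       nrow=4, normalize=True)
```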

Automatic Generation of Fashion Image Dataset by Using Progressive Growing GAN (PG-GAN을 이용한 패션이미지 데이터 자동 생성)

  • Kim, Yanghee; Lee, Chanhee; Whang, Taesun; Kim, Gyeongmin; Lim, Heuiseok
    • Journal of Internet of Things and Convergence, v.4 no.2, pp.1-6, 2018
  • Techniques for generating new sample data from high-dimensional data such as images have been used in a variety of applications, including speech synthesis, image conversion, and image restoration. This paper adopts Progressive Growing of Generative Adversarial Networks (PG-GAN) as an implementation model to generate high-resolution images and to increase the variation of the generated images, and applies it to fashion image data. PG-GAN lets the generator and discriminator learn progressively at the same time, continuously adding new layers so that training starts from low-resolution images and ends with high-resolution images. We also propose a minibatch discrimination method to increase the diversity of the generated data and propose a sliced Wasserstein distance (SWD) evaluation method in place of the existing MS-SSIM to evaluate the GAN model.
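
A small sketch of a sliced Wasserstein distance estimate between descriptor sets from real and generated images, the kind of metric the paper proposes in place of MS-SSIM. The projection count and the random placeholder inputs are assumptions, not the paper's evaluation pipeline.

```python
# Hedged sketch: sliced Wasserstein distance via 1-D Wasserstein distances
# along random projections (valid when both sets have the same sample count).
import numpy as np

def sliced_wasserstein_distance(real_feats, fake_feats, n_projections=128, seed=0):
    rng = np.random.default_rng(seed)
    dim = real_feats.shape[1]
    total = 0.0
    for _ in range(n_projections):
        direction = rng.normal(size=dim)
        direction /= np.linalg.norm(direction)
        # 1-D Wasserstein-1 distance = mean gap between sorted projections.
        p_real = np.sort(real_feats @ direction)
        p_fake = np.sort(fake_feats @ direction)
        total += np.mean(np.abs(p_real - p_fake))
    return total / n_projections

real = np.random.randn(1000, 64)   # e.g., patch descriptors from real fashion images
fake = np.random.randn(1000, 64)   # descriptors from generated images
print(sliced_wasserstein_distance(real, fake))
```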