• Title/Summary/Keyword: Adversarial Networks

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.11
    • /
    • pp.465-472
    • /
    • 2022
  • In this paper, a style synthesis network is trained with StyleGAN, together with a video synthesis network, to generate style-synthesized videos. To address the problem that gaze and expression do not transfer stably, 3D face reconstruction technology is applied to control important features such as the head pose, gaze, and expression using 3D face information. In addition, by training the discriminators of the Head2head network for dynamics, mouth shape, image, and gaze, a stable style-synthesized video with greater plausibility and consistency can be created. Using the FaceForensics dataset and the MetFaces dataset, it was confirmed that performance improved: one video was converted into another while the consistent movement of the target face was maintained, and natural data was generated through video synthesis using 3D face information from the source video's face.

Generation of wind turbine blade surface defect dataset based on StyleGAN3 and PBGMs

  • W.R. Li;W.H. Zhao;T.T. Wang;Y.F. Du
    • Smart Structures and Systems
    • /
    • v.34 no.2
    • /
    • pp.129-143
    • /
    • 2024
  • In recent years, with the vigorous development of visual algorithms, a large amount of research has been conducted on blade surface defect detection methods based on deep learning. Detection methods based on deep learning models must rely on a large and rich dataset. However, the geographical location and working environment of wind turbines make it difficult to capture images of blade surface defects effectively, which inevitably hinders visual detection. In response to the challenge of collecting such hard-to-obtain surface defect data, a multi-class blade surface defect generation method based on the StyleGAN3 (Style Generative Adversarial Networks 3) deep learning model and PBGMs (Physics-Based Graphics Models) is proposed. First, a small set of real blade surface defect images is used to train the StyleGAN3 adversarial network, which then generates a large number of high-resolution blade surface defect images. Second, the generated images are processed with Matting and Resize operations to create defect foreground images. These are randomly fused with blade background images produced using PBGM technology, resulting in a diverse, high-resolution blade surface defect dataset with multiple types of backgrounds. Finally, experimental validation shows that this method can generate images with defect characteristics and high resolution, achieving a proportion of over 98.5%. Additionally, the EISeg annotation method reduces annotation time to just 1/7 of that required by traditional methods. The generated images and annotations of blade surface defects provide robust support for blade surface defect detection.
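The random fusion step described above can be sketched as a simple alpha-blend of a matted defect foreground onto a PBGM-rendered background. The function below is an illustrative assumption (names and interfaces are not from the paper): it takes the alpha mask produced by the Matting step and pastes the defect patch at a given position.

```python
import numpy as np

def fuse_defect(background, defect, alpha, top, left):
    """Alpha-blend a defect foreground patch onto a background image.

    background: (H, W, 3) float array, the rendered blade background
    defect:     (h, w, 3) float array, the matted defect patch
    alpha:      (h, w) float array in [0, 1], the matting mask
    top, left:  paste position of the patch's top-left corner
    """
    out = background.copy()
    h, w = defect.shape[:2]
    region = out[top:top + h, left:left + w]
    a = alpha[..., None]  # broadcast the mask over the colour channels
    out[top:top + h, left:left + w] = a * defect + (1.0 - a) * region
    return out

# Toy example: a 4x4 grey background, a 2x2 white defect patch
bg = np.full((4, 4, 3), 0.5)
fg = np.ones((2, 2, 3))
mask = np.ones((2, 2))
fused = fuse_defect(bg, fg, mask, top=1, left=1)
```

Random positions and background choices, applied over many generated patches, would then yield the diverse multi-background dataset the abstract describes.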

Model Type Inference Attack against AI-Based NIDS (AI 기반 NIDS에 대한 모델 종류 추론 공격)

  • Yoonsoo An;Dowan Kim;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.5
    • /
    • pp.875-884
    • /
    • 2024
  • The proliferation of IoT networks has led to an increase in cyber attacks, highlighting the importance of Network Intrusion Detection Systems (NIDS). To overcome the limitations of traditional NIDS and cope with more sophisticated cyber attacks, there is a trend towards integrating artificial intelligence models into NIDS. However, AI-based NIDS are vulnerable to adversarial attacks, which exploit the weaknesses of the underlying algorithms. A model type inference attack is one type of attack that infers information about the internals of a model. This paper proposes an optimized framework for model type inference attacks against NIDS models under more realistic assumptions. The proposed method trained an attack model that infers the type of a NIDS model with an accuracy of approximately 0.92, presenting a new security threat to AI-based NIDS and emphasizing the importance of developing defence methods against such attacks.

Automatic Generation of Korean Poetry using Sequence Generative Adversarial Networks (SeqGAN 모델을 이용한 한국어 시 자동 생성)

  • Park, Yo-Han;Jeong, Hye-Ji;Kang, Il-Min;Park, Cheon-Young;Choi, Yong-Seok;Lee, Kong Joo
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.580-583
    • /
    • 2018
  • In this paper, we automatically generate Korean poetry using the SeqGAN model. For sentence generation, SeqGAN applies a recurrent neural network together with policy gradient, a reinforcement learning algorithm, and Monte Carlo (MC) search in its generator. Poems written on the theme of love were used as training data. Although the poems generated by the SeqGAN model showed the problem of repeating the same phrase several times, we confirmed that the SeqGAN model is applicable to Korean text generation.
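The Monte Carlo search step that SeqGAN uses to score a partial sequence can be sketched as follows; the function and its toy discriminator are illustrative assumptions, not the authors' implementation. The partial sequence is completed several times with a rollout policy, and the discriminator's scores on the completions are averaged to estimate a reward for the generator's policy-gradient update.

```python
import random

def mc_rollout_reward(partial, rollout_policy, discriminator, seq_len, n_rollouts=16):
    """Estimate the reward of a partial token sequence, SeqGAN-style:
    complete it n_rollouts times with the rollout policy and average
    the discriminator's scores on the completed sequences."""
    total = 0.0
    for _ in range(n_rollouts):
        seq = list(partial)
        while len(seq) < seq_len:
            seq.append(rollout_policy(seq))
        total += discriminator(seq)
    return total / n_rollouts

# Toy example: a two-token vocabulary; the discriminator rewards sequences
# with many 1s, and the rollout policy picks tokens uniformly at random.
random.seed(0)
policy = lambda seq: random.choice([0, 1])
disc = lambda seq: sum(seq) / len(seq)
r = mc_rollout_reward([1, 1], policy, disc, seq_len=4)
```

In the real model the rollout policy is the generator network itself and the discriminator is a trained classifier over token sequences.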


Style Transfer in Korean Text using Auto-encoder and Adversarial Networks (오토인코더와 적대 네트워크를 활용한 한국어 문체 변환)

  • Yang, Kisu;Lee, Dongyub;Lee, Chanhee;Lim, Heuiseok
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.658-660
    • /
    • 2018
  • As the artificial intelligence industry develops, demand for technologies that interact with users according to their characteristics is also increasing. Although text style transfer could greatly improve the user experience, the lack of parallel training data makes modeling and performance improvement difficult. Accordingly, building on a prior model [1] capable of text style transfer using only non-parallel data, this paper proposes a model with a sentence representation suited to Korean and a random-domain prediction technique for improving performance.


Synthetic Data Augmentation for Plant Disease Image Generation using GAN (GAN을 이용한 식물 병해 이미지 합성 데이터 증강)

  • Nazki, Haseeb;Lee, Jaehwan;Yoon, Sook;Park, Dong Sun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2018.05a
    • /
    • pp.459-460
    • /
    • 2018
  • In this paper, we present a data augmentation method that generates synthetic plant disease images using Generative Adversarial Networks (GANs). We propose a training scheme that first uses classical data augmentation techniques to enlarge the training set and then further enlarges the data size and its diversity by applying GAN techniques for synthetic data augmentation. Our method is demonstrated on a limited dataset of 2789 images of tomato plant diseases (Gray mold, Canker, Leaf mold, Plague, Leaf miner, Whitefly etc.).
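The two-stage training scheme described in the abstract (classical augmentation first, then GAN-based synthetic augmentation) can be sketched as a plain pipeline; all names below are illustrative assumptions, and the trained GAN is abstracted as a sampling function.

```python
def augment_dataset(images, classical_ops, gan_sample, n_synthetic):
    """Two-stage augmentation: first enlarge the training set with
    classical transforms, then append GAN-generated synthetic samples."""
    enlarged = list(images)
    for img in images:
        for op in classical_ops:
            enlarged.append(op(img))
    synthetic = [gan_sample() for _ in range(n_synthetic)]
    return enlarged + synthetic

# Toy example with strings standing in for images: one classical
# transform and three synthetic samples from a stand-in "GAN".
dataset = augment_dataset(["a", "b"],
                          classical_ops=[str.upper],
                          gan_sample=lambda: "g",
                          n_synthetic=3)
```

In practice the classical operations would be flips, crops, and colour jitter on the disease images, and `gan_sample` would draw from the trained generator.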


Oversampling scheme using Conditional GAN (Conditional GAN을 활용한 오버샘플링 기법)

  • Son, Minjae;Jung, Seungwon;Hwang, Eenjun
    • Annual Conference of KIPS
    • /
    • 2018.10a
    • /
    • pp.609-612
    • /
    • 2018
  • In machine learning, various algorithms have been studied to solve classification problems. However, most existing classification algorithms are trained under the assumption that each class contains roughly the same number of samples, so classification accuracy drops when the class sizes are imbalanced. To address this problem, this paper proposes an oversampling technique that balances the class sizes using Conditional Generative Adversarial Networks (CGAN). The CGAN learns the features of the data in the minority class and generates data similar to the real data. By equalizing the number of samples per class, this raises the accuracy of the classification algorithm. Using real collected data, we show that the proposed CGAN-based oversampling technique is effective and outperforms existing oversampling techniques.
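The oversampling idea can be sketched as follows, with the trained CGAN abstracted as a class-conditional sampling function; the code is an illustrative assumption, not the authors' implementation. Each minority class is topped up with generated samples until every class matches the majority-class count.

```python
from collections import Counter

def oversample_with_generator(X, y, generate):
    """Balance class counts by asking a class-conditional generator
    (here a stand-in for a trained CGAN) for extra minority samples."""
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        for _ in range(target - n):
            X_out.append(generate(label))
            y_out.append(label)
    return X_out, y_out

# Toy example: class 0 has three samples, class 1 has one, so two
# synthetic class-1 samples are generated to balance the set.
X = ["r0", "r1", "r2", "r3"]
y = [0, 0, 0, 1]
X_bal, y_bal = oversample_with_generator(X, y, generate=lambda c: ("synth", c))
```

A real CGAN would draw `generate(label)` from the generator conditioned on the class label, producing feature vectors that resemble the minority-class distribution.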

Image Generation based on Text and Sketch with Generative Adversarial Networks (생성적 적대 네트워크를 활용한 텍스트와 스케치 기반 이미지 생성 기법)

  • Lee, Je-Hoon;Lee, Dong-Ho
    • Annual Conference of KIPS
    • /
    • 2018.05a
    • /
    • pp.293-296
    • /
    • 2018
  • Research on generating images from various sources such as text and sketches using generative adversarial networks is being actively conducted, and many practical studies exist. However, because existing studies generate an image from a single source, either text or a sketch, they fail to produce a proper image when the source information is incomplete, such as an under-specified text description or a sketch that differs from the real image. To overcome this limitation, this paper proposes TS-GAN, a new generation technique that uses text and a sketch together to generate an image. TS-GAN consists of two stages, each of which produces a more realistic image. The proposed technique demonstrates the quality of its generated images on the CUB dataset, which is widely used in computer vision.

Influence Maximization Scheme against Various Social Adversaries

  • Noh, Giseop;Oh, Hayoung;Lee, Jaehoon
    • Journal of information and communication convergence engineering
    • /
    • v.16 no.4
    • /
    • pp.213-220
    • /
    • 2018
  • With the exponential development of social networks, their fundamental role as a medium for spreading information, ideas, and influence has gained importance. This role can be expressed through the relationships and interactions within a group of individuals. Accordingly, models and studies from various domains have addressed the influence maximization problem for the "word of mouth" effect of new products. In reality, for example, two or more related social groups, such as commercial companies and service providers, exist within the same market. Under such a scenario, these so-called social adversaries compete to capture market influence from each other. To address the influence maximization (IM) problem between them, we propose a novel IM problem for social adversarial players (IM-SA) that exploits social network attributes to infer the unknown adversary's network configuration. We define a mathematical closed form to demonstrate that the proposed scheme yields a near-optimal solution for a player.

GENERATION OF FUTURE MAGNETOGRAMS FROM PREVIOUS SDO/HMI DATA USING DEEP LEARNING

  • Jeon, Seonggyeong;Moon, Yong-Jae;Park, Eunsu;Shin, Kyungin;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.82.3-82.3
    • /
    • 2019
  • In this study, we generate future full-disk magnetograms 12, 24, 36, and 48 hours in advance from SDO/HMI images using deep learning. To perform this generation, we apply the convolutional generative adversarial network (cGAN) algorithm to a series of SDO/HMI magnetograms. We use SDO/HMI data from 2011 to 2016 to train four models. The models produce AI-generated images for 2017 HMI data, which are compared with the actual HMI magnetograms for evaluation. The AI-generated images from each model are very similar to the actual images. The average correlation coefficient between the two images, over about 600 data sets, is about 0.85 for all four models. We are examining hundreds of active regions for a more detailed comparison. In the future, we will use pix2pix HD and video-to-video translation networks for image prediction.
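The evaluation metric reported above, the correlation coefficient between a generated and an actual magnetogram, can be sketched with NumPy; the toy arrays below stand in for real HMI data.

```python
import numpy as np

def image_correlation(generated, actual):
    """Pearson correlation coefficient between two images, computed
    over their flattened pixel values."""
    g = np.asarray(generated, dtype=float).ravel()
    a = np.asarray(actual, dtype=float).ravel()
    return np.corrcoef(g, a)[0, 1]

# Toy example: a "generated" image that closely tracks the "actual" one
rng = np.random.default_rng(0)
real = rng.normal(size=(8, 8))
fake = real + 0.1 * rng.normal(size=(8, 8))  # a close reconstruction
r = image_correlation(fake, real)
```

Averaging this value over a held-out set of image pairs gives the kind of summary statistic (about 0.85 over about 600 data sets) the abstract reports.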
