• Title/Summary/Keyword: WGAN

Comparison of Seismic Data Interpolation Performance using U-Net and cWGAN (U-Net과 cWGAN을 이용한 탄성파 탐사 자료 보간 성능 평가)

  • Yu, Jiyun;Yoon, Daeung
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.3
    • /
    • pp.140-161
    • /
    • 2022
  • Seismic data with missing traces are often obtained, regularly or irregularly, due to environmental and economic constraints in their acquisition. Accordingly, seismic data interpolation is an essential step in seismic data processing. Recently, research on machine learning-based seismic data interpolation has been flourishing. In particular, the convolutional neural network (CNN) and the generative adversarial network (GAN), widely used for super-resolution problems in image processing, are also used for seismic data interpolation. In this study, the CNN-based U-Net and the GAN-based conditional Wasserstein GAN (cWGAN) were used as seismic data interpolation methods. The results and performance of the two methods were evaluated thoroughly to find the interpolation method that reconstructs missing seismic data with the highest accuracy. The work process for model training and performance evaluation was divided into two cases (Cases I and II). In Case I, we trained the model using only regularly sampled data with 50% missing traces, and evaluated its performance on six different test datasets covering combinations of regular and irregular sampling at different sampling ratios. In Case II, six different models were generated using training datasets sampled in the same way as the six test datasets, and these models were applied to the same test datasets used in Case I to compare the results. We found that cWGAN showed better prediction performance than U-Net, with higher PSNR and SSIM. However, cWGAN added noise to the prediction results, so an ensemble technique was applied to remove the noise and improve the accuracy. The cWGAN ensemble model successfully removed the noise and showed improved PSNR and SSIM compared with the individual models.
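
The study above scores interpolation quality with PSNR and SSIM and averages several cWGAN predictions to suppress generator noise. The sketch below illustrates that kind of evaluation in Python, assuming 2-D seismic gathers stored as NumPy arrays and a simple mean over ensemble members; the paper's code is not given here, so the function names and the exact ensemble rule are illustrative.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_interpolation(true_gather, predicted_gathers):
    """Score interpolated seismic gathers against the fully sampled reference.

    true_gather:        2-D array (time samples x traces), fully sampled.
    predicted_gathers:  list of 2-D arrays from individually trained models.
    """
    # Hypothetical ensemble rule: average the predictions sample-by-sample.
    ensemble = np.mean(np.stack(predicted_gathers, axis=0), axis=0)

    data_range = true_gather.max() - true_gather.min()
    psnr = peak_signal_noise_ratio(true_gather, ensemble, data_range=data_range)
    ssim = structural_similarity(true_gather, ensemble, data_range=data_range)
    return psnr, ssim
```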

Combining multi-task autoencoder with Wasserstein generative adversarial networks for improving speech recognition performance (음성인식 성능 개선을 위한 다중작업 오토인코더와 와설스타인식 생성적 적대 신경망의 결합)

  • Kao, Chao Yuan;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.6
    • /
    • pp.670-677
    • /
    • 2019
  • As the presence of background noise in an acoustic signal degrades the performance of speech or acoustic event recognition, extracting noise-robust acoustic features from noisy signals remains challenging. In this paper, we propose a combined structure of a Wasserstein Generative Adversarial Network (WGAN) and a MultiTask AutoEncoder (MTAE) as a deep learning architecture that integrates the respective strengths of MTAE and WGAN so that it estimates not only the noise but also the speech features from a noisy acoustic source. The proposed MTAE-WGAN structure estimates the speech signal and the residual noise by employing a gradient penalty and a weight initialization method for the Leaky Rectified Linear Unit (LReLU) and the Parametric ReLU (PReLU). The proposed MTAE-WGAN structure with the adopted gradient penalty loss function enhances the speech features and subsequently achieves substantial Phoneme Error Rate (PER) improvements over the stand-alone Deep Denoising Autoencoder (DDAE), MTAE, Redundant Convolutional Encoder-Decoder (R-CED), and Recurrent MTAE (RMTAE) models for robust speech recognition.
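
The MTAE-WGAN abstract above couples a gradient penalty with a weight initialization method for LReLU/PReLU units. One common realization of such an initialization is the He/Kaiming scheme with the negative-slope correction gain sqrt(2 / (1 + a^2)); the PyTorch sketch below assumes that variant, since the paper's exact formulation is not given here.

```python
import torch.nn as nn

def init_lrelu_weights(module, negative_slope=0.01):
    """He-style initialization corrected for a leaky/parametric ReLU slope `a`:
    weights ~ N(0, 2 / ((1 + a^2) * fan_in)).  Assumed variant, not the paper's."""
    if isinstance(module, (nn.Conv1d, nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, a=negative_slope,
                                nonlinearity='leaky_relu')
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Usage: model.apply(init_lrelu_weights)
```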

Technique Proposal to Stabilize Lipschitz Continuity of WGAN Based on Regularization Terms (정칙화 항에 기반한 WGAN의 립쉬츠 연속 안정화 기법 제안)

  • Hahn, Hee-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.1
    • /
    • pp.239-246
    • /
    • 2020
  • The recently proposed Wasserstein generative adversarial network (WGAN) has improved some of the tricky and unstable training processes that are chronic problems of the generative adversarial network (GAN), but there are still cases where it generates poor samples or fails to converge. To solve these problems, this paper proposes algorithms that improve the sampling process so that the discriminator can more accurately estimate the data probability distribution to be modeled, and that stably maintain the Lipschitz continuity of the discriminator. Through various experiments, we analyze the characteristics of the proposed techniques and verify their performance.

Proposing Effective Regularization Terms for Improvement of WGAN (WGAN의 성능개선을 위한 효과적인 정칙항 제안)

  • Hahn, Hee Il
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.1
    • /
    • pp.13-20
    • /
    • 2021
  • A Wasserstein GAN (WGAN), although optimal in terms of minimizing the Wasserstein distance, still suffers from inconsistent convergence or unexpected output due to inherent training instability. It is widely known that some kind of restriction on the discriminative function should be imposed to solve such problems, which implies the importance of Lipschitz continuity. Unfortunately, few known methods satisfactorily maintain the Lipschitz continuity of the discriminative function. In this paper we propose techniques that stably maintain the Lipschitz continuity of the discriminative function by adding effective regularization terms to the objective function, which limit the magnitude of the discriminator's gradient vectors to one or less. Extensive experiments conducted to evaluate the proposed techniques show that the single-sided penalty improves convergence compared with the gradient penalty early in training, while the proposed additional penalty increases the inception score by 0.18 after 100,000 training iterations.
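
The abstract above contrasts the standard (two-sided) gradient penalty with a single-sided penalty that only punishes gradient norms exceeding one. A minimal PyTorch sketch of both terms is shown below, assuming WGAN-GP-style interpolation between real and generated samples and image-like 4-D batches; it is an illustration of the general technique, not the paper's implementation.

```python
import torch

def gradient_penalty(critic, real, fake, one_sided=False):
    """Penalty on the critic gradient norm along real/fake interpolates.

    two-sided (WGAN-GP):  E[(||grad D(x_hat)|| - 1)^2]
    single-sided:         E[max(0, ||grad D(x_hat)|| - 1)^2]
    """
    # Random interpolation points; shape assumes N x C x H x W batches.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_hat = critic(x_hat)
    grads, = torch.autograd.grad(outputs=d_hat.sum(), inputs=x_hat,
                                 create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    excess = grad_norm - 1.0
    if one_sided:
        excess = excess.clamp(min=0.0)
    return (excess ** 2).mean()
```

A typical critic objective would then add a weighted `gradient_penalty(...)` term to the Wasserstein loss, e.g. `loss_d = critic(fake).mean() - critic(real).mean() + lam * gradient_penalty(critic, real, fake)`.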

Personalized Multi-Turn Chatbot Based on Dual WGAN (Dual WGAN 기반 페르소나 Multi-Turn 챗봇)

  • Oh, Shinhyeok;Kim, JinTae;Kim, Harksoo;Lee, Jeong-Eom;Kim, Seona;Park, Youngmin;Noh, Myungho
    • Annual Conference on Human and Language Technology
    • /
    • 2019.10a
    • /
    • pp.49-53
    • /
    • 2019
  • A chatbot is a system in which a human and a computer exchange conversation in natural language. As research on chatbots has intensified, interest has moved beyond purely mechanical responses toward chatbots that reflect the personal characteristics a user wants. Previous work used a single vector to inject one form of persona information into the model. However, a persona cannot be defined in only one form, so research is needed on incorporating persona information into chatbot models in various forms. Accordingly, building on a state-of-the-art generation-based multi-turn chatbot system, this paper proposes a method that lets the chatbot reflect personas in diverse forms.

Training Optimization for Fringe Pattern Generation Network Based on Deep Learning (딥러닝 기반의 프린지 패턴 생성 네트워크 학습에 대한 최적화)

  • Park, Sun-Jong;Kim, Woosuk;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.858-859
    • /
    • 2022
  • In this paper, we propose an optimization method for a deep-learning-based WGAN-GP network that generates fringe patterns. Existing GAN models for complex fringe pattern generation have lacked not only generation accuracy but also training stability. Upgraded methods such as WGAN-GP have therefore been adopted, but optimization with respect to the network structure and its parameters is still needed. To generate fringe patterns with higher accuracy, we train the network with learning rate decay, compare the per-epoch results against the results before optimization, and compare the PSNR of the holograms and their reconstructions.
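
The abstract above trains the fringe-pattern WGAN-GP with learning rate decay and compares results per epoch. The sketch below shows one way to attach an exponential decay schedule to both optimizers in PyTorch; the placeholder networks, decay factor, and optimizer settings are assumptions, not the paper's values.

```python
import torch
import torch.nn as nn

# Placeholder generator/critic; the paper's fringe-pattern networks are larger.
generator = nn.Sequential(nn.Linear(64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1024))
critic = nn.Sequential(nn.Linear(1024, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.0, 0.9))

# Exponential learning rate decay: lr <- lr * gamma after every epoch.
sched_g = torch.optim.lr_scheduler.ExponentialLR(opt_g, gamma=0.98)
sched_d = torch.optim.lr_scheduler.ExponentialLR(opt_d, gamma=0.98)

for epoch in range(100):
    # ... run the WGAN-GP critic and generator updates for this epoch ...
    sched_g.step()
    sched_d.step()
```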

Experimental Analysis of Equilibrization in Binary Classification for Non-Image Imbalanced Data Using Wasserstein GAN

  • Wang, Zhi-Yong;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.37-42
    • /
    • 2019
  • In this paper, we explore the details of three classic data augmentation methods and two oversampling methods based on generative models. The three classic data augmentation methods are random sampling (RANDOM), the Synthetic Minority Over-sampling Technique (SMOTE), and Adaptive Synthetic Sampling (ADASYN). The two generative-model-based oversampling methods are the Conditional Generative Adversarial Network (CGAN) and the Wasserstein Generative Adversarial Network (WGAN). In imbalanced data, the instances are divided into a majority class, which occupies most of the training set, and a minority class, which contains only a few instances. Generative models have an advantage when used to generate more plausible samples that follow the distribution of the minority class. We also adopt CGAN to compare its data augmentation performance with the other methods. The experimental results show that WGAN-based oversampling is more stable than the other approaches (RANDOM, SMOTE, ADASYN, and CGAN), even with very limited training data. However, when the imbalance ratio is too small, the generative-model-based approaches cannot achieve performance as satisfactory as the conventional data augmentation techniques. These results suggest a direction for future research.
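
The abstract above uses WGAN as an oversampling technique: a generator is fit to the minority class and then sampled until the training set is balanced. The condensed PyTorch sketch below illustrates that idea for a small tabular feature matrix, using the weight-clipping WGAN variant; the architecture and hyperparameters are placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn

def wgan_oversample(minority_x, n_needed, latent_dim=16, steps=2000):
    """Fit a small WGAN on minority-class rows (a float tensor of shape
    [n_rows, n_features]) and return `n_needed` synthetic rows."""
    n_features = minority_x.shape[1]
    gen = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features))
    critic = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_g = torch.optim.RMSprop(gen.parameters(), lr=5e-5)
    opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

    for step in range(steps):
        # Critic update: maximize E[D(real)] - E[D(fake)]; clip weights for Lipschitz.
        real = minority_x[torch.randint(len(minority_x), (64,))]
        fake = gen(torch.randn(64, latent_dim)).detach()
        loss_c = critic(fake).mean() - critic(real).mean()
        opt_c.zero_grad()
        loss_c.backward()
        opt_c.step()
        for p in critic.parameters():
            p.data.clamp_(-0.01, 0.01)

        if step % 5 == 0:  # Generator update every few critic steps.
            fake = gen(torch.randn(64, latent_dim))
            loss_g = -critic(fake).mean()
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()

    with torch.no_grad():
        return gen(torch.randn(n_needed, latent_dim))  # synthetic minority rows
```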

Fraud Detection System Model Using Generative Adversarial Networks and Deep Learning (생성적 적대 신경망과 딥러닝을 활용한 이상거래탐지 시스템 모형)

  • Ye Won Kim;Ye Lim Yu;Hong Yong Choi
    • Information Systems Review
    • /
    • v.22 no.1
    • /
    • pp.59-72
    • /
    • 2020
  • Artificial intelligence is establishing itself as a familiar tool rather than an intractable concept. Following this trend, the financial sector is also seeking to improve existing systems, including the Fraud Detection System (FDS). Detecting sophisticated cyber financial fraud with the original rule-based FDS has become difficult, because the payment environment has diversified and the number of electronic financial transactions has increased. To overcome the limitations of the present FDS, this paper examines three types of artificial intelligence models: the Generative Adversarial Network (GAN), the Deep Neural Network (DNN), and the Convolutional Neural Network (CNN). The GAN shows how the data imbalance problem can be alleviated, while the DNN and CNN show how abnormal financial trading patterns can be precisely detected. In conclusion, among the experiments in this paper, WGAN has the greatest effect on alleviating the data imbalance problem, and the DNN model contributes comparatively more to fraud classification.

Conditional Variational Autoencoder-based Generative Model for Gene Expression Data Augmentation (유전자 발현량 데이터 증대를 위한 Conditional VAE 기반 생성 모델)

  • Hyunsu Bong;Minsik Oh
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.275-284
    • /
    • 2023
  • Gene expression data can be used in various studies, including the prediction of disease prognosis. However, collecting enough data is challenging due to cost constraints. In this paper, we propose a gene expression data generation model based on a Conditional Variational Autoencoder. Our results demonstrate that the proposed model generates synthetic data of superior quality compared with other state-of-the-art generative models, namely a model based on the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) and the structured-data generation models CTGAN and TVAE.
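
The abstract above conditions a VAE on auxiliary labels so that synthetic expression profiles can be generated per class. The minimal PyTorch sketch below shows the usual conditioning pattern, concatenating the condition vector to both the encoder input and the latent code; the layer sizes and label encoding are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Minimal conditional VAE: the condition (e.g., a one-hot class label)
    is concatenated to the encoder input and to the latent code for decoding."""
    def __init__(self, n_genes, n_conditions, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes + n_conditions, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_conditions, 256), nn.ReLU(),
            nn.Linear(256, n_genes))

    def forward(self, x, c):
        h = self.encoder(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(torch.cat([z, c], dim=1)), mu, logvar

def cvae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon = nn.functional.mse_loss(x_hat, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld
```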