• Title/Summary/Keyword: Generative Adversarial Networks (생성적 적대적 네트워크)

Combining multi-task autoencoder with Wasserstein generative adversarial networks for improving speech recognition performance (음성인식 성능 개선을 위한 다중작업 오토인코더와 와설스타인식 생성적 적대 신경망의 결합)

  • Kao, Chao Yuan; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.38 no.6 / pp.670-677 / 2019
  • As the presence of background noise in an acoustic signal degrades the performance of speech or acoustic event recognition, it is still challenging to extract noise-robust acoustic features from a noisy signal. In this paper, we propose a combined structure of a Wasserstein Generative Adversarial Network (WGAN) and a Multi-Task AutoEncoder (MTAE) as a deep learning architecture that integrates the respective strengths of MTAE and WGAN so that it estimates not only the noise but also the speech features from a noisy acoustic source. The proposed MTAE-WGAN structure is used to estimate the speech signal and the residual noise by employing a gradient penalty and a weight initialization method for the Leaky Rectified Linear Unit (LReLU) and the Parametric ReLU (PReLU). With the adopted gradient penalty loss function, the proposed MTAE-WGAN structure enhances the speech features and consequently achieves substantial Phoneme Error Rate (PER) improvements over the stand-alone Deep Denoising Autoencoder (DDAE), MTAE, Redundant Convolutional Encoder-Decoder (R-CED), and Recurrent MTAE (RMTAE) models for robust speech recognition.
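
The WGAN gradient penalty mentioned in this abstract is a standard term that pushes the critic's gradient norm toward 1 on interpolated samples. As an illustration only (not the authors' implementation; the `critic` module, 2-D feature tensors, and the LReLU slope of 0.2 are assumptions), a minimal PyTorch sketch could look like this:

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, real_feat, fake_feat):
    """WGAN-GP term: penalize deviation of the critic's gradient norm from 1
    on random interpolations between real and generated feature batches."""
    eps = torch.rand(real_feat.size(0), 1, device=real_feat.device)
    interp = (eps * real_feat + (1.0 - eps) * fake_feat).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def init_for_leaky(module, slope=0.2):
    """Kaiming initialization adjusted for LReLU/PReLU negative slopes,
    in the spirit of the weight initialization the paper mentions."""
    if isinstance(module, (nn.Linear, nn.Conv1d, nn.Conv2d)):
        nn.init.kaiming_normal_(module.weight, a=slope, nonlinearity="leaky_relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)
```

During critic updates, the penalty is typically added to the Wasserstein loss with a weight of around 10.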

A Study on the Video Quality Improvement of National Intangible Cultural Heritage Documentary Film (국가무형문화재 기록영상 화질 개선에 관한 연구)

  • Kwon, Do-Hyung; Yu, Jeong-Min
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.439-441 / 2020
  • This paper studies image quality improvement for documentary films recording National Intangible Cultural Heritage. To improve the quality of the recorded footage, we propose applying an SRGAN-based super-resolution restoration framework. Based on a dataset preprocessed with image augmentation and a median filter, we build a deep learning network based on a Generative Adversarial Network (GAN) that generates high-resolution restored video from low-resolution input images. Through this study, we present the possibility of improving the quality of photographic and video records not only of National Intangible Cultural Heritage but of cultural heritage in general, and aim for this work to serve as foundational research for their continued use through the construction of video record archives.
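
As a rough illustration of the data preparation described above (median filtering plus image augmentation to build low/high-resolution training pairs for an SRGAN-style network), the following sketch assumes OpenCV, 8-bit frames, and a ×4 scale factor; the function name is hypothetical and not from the paper:

```python
import cv2
import numpy as np

def make_lr_hr_pair(frame: np.ndarray, scale: int = 4):
    """Build one (low-resolution, high-resolution) training pair from an
    archival frame: median filter to suppress film noise, a simple flip as
    augmentation, then bicubic downscaling to create the degraded input."""
    hr = cv2.medianBlur(frame, 3)
    if np.random.rand() < 0.5:
        hr = cv2.flip(hr, 1)  # horizontal flip as a minimal augmentation
    h, w = hr.shape[:2]
    lr = cv2.resize(hr, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    return lr, hr
```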

Adversarial Framework for Joint Light Field Super-resolution and Deblurring (라이트필드 초해상도와 블러 제거의 동시 수행을 위한 적대적 신경망 모델)

  • Lumentut, Jonathan Samuel; Baek, Hyungsun; Park, In Kyu
    • Journal of Broadcast Engineering / v.25 no.5 / pp.672-684 / 2020
  • Restoring low-resolution, motion-blurred light fields has become essential owing to the growing body of work on parallax-based image processing; these tasks are known as the light field enhancement process. Unfortunately, only a few state-of-the-art methods have been introduced to solve the multiple problems jointly. In this work, we design a framework that jointly solves the light field spatial super-resolution and motion deblurring tasks. In particular, we build a straightforward neural network trained on a low-resolution, 6-degree-of-freedom (6-DOF) motion-blurred light field dataset. Furthermore, we propose a strategy of local region optimization on the adversarial network to boost performance. We evaluate our method through both quantitative and qualitative measurements and exhibit superior performance compared to state-of-the-art methods.
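
The abstract does not spell out how the local region optimization is applied to the adversarial network. One possible reading, shown purely as a hedged sketch (the `disc` module, crop size, and loss form are assumptions, not the authors' method), is to evaluate the adversarial loss on random local patches of the restored view:

```python
import torch
import torch.nn.functional as F

def local_adversarial_loss(disc, restored, patch=64, n_patches=4):
    """Generator-side adversarial loss computed on random local crops of the
    restored light-field view rather than on the full image."""
    _, _, h, w = restored.shape
    losses = []
    for _ in range(n_patches):
        y = torch.randint(0, h - patch + 1, (1,)).item()
        x = torch.randint(0, w - patch + 1, (1,)).item()
        crop = restored[:, :, y:y + patch, x:x + patch]
        # non-saturating GAN loss on raw discriminator logits
        losses.append(F.softplus(-disc(crop)).mean())
    return torch.stack(losses).mean()
```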

Development of a Hangul Classifier Robust to Font Variation Using Domain Adaptation (도메인 어댑테이션을 이용한 폰트 변화에 강인한 한글 분류기 개발)

  • Park, Jaewoo; Lee, Eunji; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.50-53 / 2019
  • In this paper, we propose a method for training a Hangul classifier robust to font variation using domain adaptation. The proposed model consists of seven networks: an encoder that extracts font-independent information from an image; a decoder used to resynthesize the image in order to check the validity of the extracted information; a character classifier and a font classifier for the resynthesized image; a discriminator that judges how realistic the resynthesized characters are; and a character classifier and a font classifier for the information extracted by the encoder. Using a domain adaptation technique that follows the training scheme of generative adversarial networks, the encoder is trained so that its extracted features fool the font classifier while raising the accuracy of character classification. As a result, the features extracted by the encoder are font-independent yet yield high character classification accuracy, and in addition the decoder is able to generate images that match the original font.
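
The paper trains the encoder with a GAN-style domain adaptation objective so that its features fool the font classifier while remaining useful for character classification. A common way to realize this kind of objective is a gradient reversal layer (as in DANN); the sketch below uses that trick only as an illustration, with hypothetical module names, and is not the authors' exact training scheme:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass, so the encoder is pushed to confuse the font classifier while the
    character classifier is trained normally."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def heads_forward(encoder, char_head, font_head, image, lam=1.0):
    feat = encoder(image)
    char_logits = char_head(feat)                           # trained normally
    font_logits = font_head(GradReverse.apply(feat, lam))   # adversarial branch
    return char_logits, font_logits
```

Cross-entropy losses on both heads can then be summed and backpropagated once; the reversed gradient drives the encoder toward font-invariant features.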

The Spatial Diffusion of War: The Case of World War I (전쟁의 공간적 확산에 관한 연구: 제1차 세계대전을 사례로)

  • Chi, Sang-Hyun; Flint, Colin; Diehl, Paul; Vasquez, John; Scheffran, Jurgen; Radil, Steven M.; Rider, Toby J.
    • Journal of the Korean Geographical Society / v.49 no.1 / pp.57-76 / 2014
  • Conventional treatments of war diffusion focus extensively on dyadic relationships, whose impact is thought to be immutable over the course of the conflict. This study indicates that such conceptions are at best incomplete, and more likely misleading, in explaining the spatial diffusion of wars. Using social network analysis, we examine war-joining behavior during World War I. By employing social network analysis, we attempt to overcome the dichotomous understanding of geography as space versus network in the discipline of conflict studies. Empirically, networked structural elements of state relationships (e.g., rivalry, alliances) have explanatory and predictive value that must be included alongside dyadic considerations in analyzing war-joining behavior. In addition, our analysis demonstrates that the diffusion of conflict involves different driving forces over time.

A Study on Lightweight and Optimizing with Generative Adversarial Network Based Video Super-resolution Model (생성적 적대 신경망 기반의 딥 러닝 비디오 초 해상화 모델 경량화 및 최적화 기법 연구)

  • Kim, Dong-hwi; Lee, Su-jin; Park, Sang-hyo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.1226-1228 / 2022
  • As the supply of and demand for UHD-grade high-resolution video content beyond FHD grows, industry at large has become interested in delivering video content while using network resources efficiently. Conventional methods such as bi-cubic and bi-linear interpolation capture the features of the input image relatively poorly compared with deep learning based models. Since deep learning based super-resolution requires more computational resources than conventional methods, the aim of this paper is to apply lightweighting techniques to a deep learning super-resolution model so that it can use comparatively fewer resources than previously used models. As our method, we lightweight the model structure itself using structure pruning and reduce the number of trainable parameters to lower the hardware resource requirements. In addition, we compare PSNR, LPIPS, and tOF results while reducing the number of residual blocks.
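
As a sketch of the structure pruning step described above (suppressing the lowest-norm convolution channels before fine-tuning), the following uses PyTorch's built-in pruning utilities; the pruning ratio and layer selection are assumptions, and the masked channels still have to be physically removed from the layers to realize the hardware savings:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_conv_channels(model: nn.Module, amount: float = 0.3) -> nn.Module:
    """Structured pruning sketch: zero the lowest-L2-norm output channels of
    every Conv2d layer in the super-resolution generator, then make the
    pruning permanent so the model can be fine-tuned."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=amount, n=2, dim=0)
            prune.remove(module, "weight")  # bake the mask into the weights
    return model
```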

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho; Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study was carried out to generate various images of railroad surfaces with random defects as training data for better defect detection. Defects on the surface of railroads are caused by various factors, such as friction between track binding devices and adjacent tracks, and can lead to accidents such as broken rails, so railroad maintenance against defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and to reduce railroad maintenance costs. In general, the performance of image processing analysis methods and machine learning techniques is affected by the quantity and quality of the data. For this reason, some studies require specific devices or vehicles that acquire images of the track surface at regular intervals to obtain a database of various railway surface images. In contrast, in this study, in order to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies on Generative Adversarial Networks (GAN). We thus aimed to detect defects on the railroad surface even without a dedicated database. The constructed model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, taking the ground truth of the railroad defects into account. The generated images of the railroad surface were used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate its performance, we clustered and divided the railroad data into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, we used only the original texture images as the training set for the defect detection model. In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other subsets. Each defect detection model was evaluated in terms of intersection over union (IoU) and F1-score against the ground truths. As a result, the scores increased by about 10-15% when the generated images were used, compared to the case in which only the original images were used. This shows that it is possible to detect defects using the existing data and a few images of different textures, even for railroad surface images for which no dedicated training database has been constructed.
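
The defect detection models above are scored with intersection over union (IoU) and F1 against ground-truth masks. A minimal evaluation sketch for one binary defect mask (NumPy only; the function name is ours, not the paper's) could be:

```python
import numpy as np

def iou_f1(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Binary-mask IoU and F1 between a predicted defect map and its ground truth."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn + eps)
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return float(iou), float(f1)
```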

Optimization And Performance Analysis Via GAN Model Layer Pruning (레이어 프루닝을 이용한 생성적 적대 신경망 모델 경량화 및 성능 분석 연구)

  • Kim, Dong-hwi; Park, Sang-hyo; Bae, Byeong-jun; Cho, Suk-hee
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.80-81 / 2021
  • Because the hardware resources available to ordinary users of deep learning models are limited, pruning methods that lightweight existing models make it possible to use those limited resources effectively. As such a method, we apply network pruning to the GAN architecture, which is known to have a relatively large number of parameters among deep learning models, and present a way to train this relatively heavy model with fewer parameters. In addition, by actually reducing the number of residual blocks from the 16 presented as the most effective configuration in the original SRGAN paper, we describe how the results differ from those reported in that paper.
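
The layer pruning described here amounts to instantiating fewer residual blocks than the 16 used in the original SRGAN generator. The simplified sketch below (a bare residual trunk without the batch normalization and upsampling stages of the full SRGAN model) only illustrates how the parameter count shrinks with the block count:

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """Minimal residual block: two 3x3 convolutions with a PReLU in between."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

def build_sr_trunk(n_blocks: int = 16, ch: int = 64) -> nn.Sequential:
    """SRGAN-style residual trunk; 'layer pruning' here simply means
    instantiating fewer than the original 16 blocks."""
    return nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])

# compare trainable parameters of the 16-block trunk and an 8-block trunk
full = sum(p.numel() for p in build_sr_trunk(16).parameters())
pruned = sum(p.numel() for p in build_sr_trunk(8).parameters())
```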

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie; Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.4 / pp.487-498 / 2022
  • A depth map is an image that contains distance information of 3D space on a 2D plane and is used in various 3D vision tasks. Many existing depth estimation studies mainly use narrow-FoV images, in which a significant portion of the entire scene is lost. In this paper, we propose a technique for generating 360° omnidirectional RGBD images from a sparse set of narrow-FoV images. The proposed generative adversarial network based image generation model estimates the relative FoV within the entire panoramic image from a small number of non-overlapping images and produces 360° RGB and depth images simultaneously. In addition, it shows improved performance by configuring the network to reflect the spherical characteristics of the 360° image.
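
One common way to reflect the spherical characteristics of an equirectangular 360° image in a convolutional network is horizontal circular padding, since the left and right image borders are physically adjacent on the sphere. The sketch below shows only that general idea and is not the authors' network:

```python
import torch.nn as nn
import torch.nn.functional as F

class WrapConv2d(nn.Module):
    """Convolution whose receptive field wraps around the horizontal seam of
    an equirectangular panorama instead of seeing a hard image border."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.pad = k // 2
        self.conv = nn.Conv2d(in_ch, out_ch, k)

    def forward(self, x):
        x = F.pad(x, (self.pad, self.pad, 0, 0), mode="circular")   # wrap left-right
        x = F.pad(x, (0, 0, self.pad, self.pad), mode="replicate")  # clamp at the poles
        return self.conv(x)
```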

Raindrop Removal and Background Information Recovery in Coastal Wave Video Imagery using Generative Adversarial Networks (적대적생성신경망을 이용한 연안 파랑 비디오 영상에서의 빗방울 제거 및 배경 정보 복원)

  • Huh, Dong; Kim, Jaeil; Kim, Jinah
    • Journal of the Korea Computer Graphics Society / v.25 no.5 / pp.1-9 / 2019
  • In this paper, we propose a video enhancement method using generative adversarial networks to remove raindrops and restore the background information in the removed regions of coastal wave video imagery distorted by raindrops during rainfall. Two experimental models are implemented: the Pix2Pix network, widely used for image-to-image translation, and Attentive GAN, which currently performs well for raindrop removal on single images. The models are trained with a public dataset of paired natural images with and without raindrops, and the trained models are evaluated on their performance in raindrop removal and background information recovery for rain-distorted coastal wave video imagery. To improve the performance, we acquired a paired video dataset with and without raindrops at a real coast and applied transfer learning to the pre-trained models with this new dataset. The fine-tuned models show improved performance compared with the pre-trained models. The performance is evaluated using the peak signal-to-noise ratio and the structural similarity index, and the Pix2Pix network fine-tuned by transfer learning shows the best performance in reconstructing coastal wave video imagery distorted by raindrops.
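
The comparison above relies on PSNR and SSIM per restored frame. A minimal evaluation sketch (assuming 8-bit RGB frames and a recent scikit-image; the function name is ours, not the paper's) might be:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_quality(restored: np.ndarray, reference: np.ndarray):
    """PSNR and SSIM of one restored coastal-wave frame against its
    rain-free reference frame."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored,
                                 channel_axis=-1, data_range=255)
    return float(psnr), float(ssim)
```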