• Title/Abstract/Keyword: Cycle Generative Adversarial Networks (GAN)


다수 화자 한국어 음성 변환 실험 (Many-to-many voice conversion experiments using a Korean speech corpus)

  • 육동석;서형진;고봉구;유인철
    • 한국음향학회지 / Vol. 41, No. 3 / pp.351-358 / 2022
  • Generative Adversarial Networks (GAN) and Variational AutoEncoders (VAE), both deep generative models, have opened new approaches to voice conversion with non-parallel training data. In particular, the Conditional Cycle-Consistent Generative Adversarial Network (CC-GAN) and the Cycle-Consistent Variational AutoEncoder (CycleVAE) show strong performance for voice conversion among multiple speakers. However, CC-GAN and CycleVAE have so far been studied with relatively small numbers of speakers. In this paper, we experimentally analyze the voice conversion performance and scalability of CC-GAN and CycleVAE using data from 100 Korean speakers. The experiments show that CC-GAN performs 4.5 % better in terms of Mel-Cepstral Distortion (MCD) for a small number of speakers, whereas CycleVAE performs 12.7 % better for a large number of speakers within a limited training time.
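For reference, the MCD metric cited in this entry has a standard closed form. The sketch below is a minimal Python illustration, assuming the two mel-cepstral sequences are already time-aligned (e.g., by dynamic time warping) and have the 0th energy coefficient removed; it is not the authors' evaluation code.

```python
import numpy as np

def mel_cepstral_distortion(mc_ref, mc_conv):
    """Average frame-wise Mel-Cepstral Distortion in dB.

    mc_ref, mc_conv: (frames, coeffs) arrays of mel-cepstra, already
    time-aligned and with the 0th (energy) coefficient excluded.
    """
    diff = mc_ref - mc_conv
    # MCD = (10 / ln 10) * sqrt(2 * sum_d (c_d - c'_d)^2), averaged over frames
    return float(np.mean((10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff**2, axis=1))))
```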

수중 선박엔진 음향 변환을 위한 향상된 CycleGAN 알고리즘 (Improved CycleGAN for underwater ship engine audio translation)

  • 아쉬라프 히나;정윤상;이종현
    • 한국음향학회지 / Vol. 39, No. 4 / pp.292-302 / 2020
  • Machine learning algorithms are used in a variety of fields, including sonar and radar. The Cycle-Consistent Generative Adversarial Network (CycleGAN), a recently developed variant of Generative Adversarial Networks (GAN), is a proven network for unpaired image-to-image translation. This paper proposes a modified CycleGAN that can translate underwater ship engine sounds with high quality. The proposed network consists of a generator model that translates underwater audio from the source domain to the target domain, an improved discriminator that classifies data as real or fake, and a modified cycle-consistency loss function. Quantitative and qualitative analyses of the proposed CycleGAN were carried out on the publicly available underwater dataset ShipsEar by evaluating and comparing Mel-cepstral distributions, the structural similarity index, least-distance comparison, and mean opinion scores against existing algorithms, and the results demonstrate the validity of the proposed network.
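The cycle-consistency term mentioned above is the standard CycleGAN reconstruction penalty. A minimal TensorFlow sketch, with generators G (source to target) and F (target to source) assumed to be Keras models operating on spectrogram batches, is given below; the paper's specific modifications to the discriminator and loss are not reproduced here, only the baseline term they build on.

```python
import tensorflow as tf

def cycle_consistency_loss(real_x, real_y, G, F, lam=10.0):
    """L_cyc = lam * (E[|F(G(x)) - x|] + E[|G(F(y)) - y|]).

    G maps domain X -> Y (e.g., source -> target engine sound spectra),
    F maps Y -> X; both are assumed to be Keras models.
    """
    cycled_x = F(G(real_x))   # x -> y -> x should return to the original
    cycled_y = G(F(real_y))   # y -> x -> y likewise
    return lam * (tf.reduce_mean(tf.abs(cycled_x - real_x)) +
                  tf.reduce_mean(tf.abs(cycled_y - real_y)))
```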

Cycle GAN 기반 벽지 인테리어 이미지 변환 기법 (A Cycle GAN-based Wallpaper Image Transformation Method for Interior Simulation)

  • 김성훈;김요한;김선용
    • 한국전자통신학회논문지 / Vol. 18, No. 2 / pp.349-354 / 2023
  • As the number of people interested in interior design grows, the interior market is expanding rapidly worldwide, and global interior companies have been developing and offering simulation services for various interior elements. Although wallpaper design is one of the most important interior elements, existing wallpaper simulation services are hard to use because of the gap between expected and actual results, long simulation times, and the need for specialized skills. This paper proposes a CycleGAN (Cycle Generative Adversarial Networks)-based wallpaper image transformation method for interior simulation. The proposed method trains a model on interior image data containing wallpapers of various patterns and can provide users with a wallpaper interior simulation within a short time.

국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구 (Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks)

  • 양훈민
    • 한국군사과학기술학회지 / Vol. 22, No. 1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional and complex data distributions implicitly and to generate new data samples from the model distribution. This paper investigates the training methodology, architecture, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GANs: the deep convolutional generative adversarial network (DCGAN) for military image generation, and the cycle-consistent generative adversarial network (CycleGAN) for visible-to-infrared image translation. Each model can yield a great diversity of high-fidelity synthetic images compared to the training images. This result opens up the possibility of using inexpensive synthetic images to train neural networks while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
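As a concrete point of reference for the DCGAN used in this entry, here is a minimal Keras-style generator sketch for 64x64 RGB output; the layer widths and kernel sizes follow the common DCGAN recipe, not necessarily the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_dcgan_generator(latent_dim=100):
    """Minimal DCGAN generator: project a noise vector to a 4x4 feature
    map, then upsample with strided transposed convolutions to 64x64x3."""
    return tf.keras.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(4 * 4 * 512, use_bias=False),
        layers.Reshape((4, 4, 512)),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(256, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])
```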

Single Image-based Enhancement Techniques for Underwater Optical Imaging

  • Kim, Do Gyun;Kim, Soo Mee
    • 한국해양공학회지 / Vol. 34, No. 6 / pp.442-453 / 2020
  • Underwater color images suffer from low visibility and color cast effects caused by light attenuation by water and floating particles. This study applied single-image enhancement techniques to improve the quality of underwater images and compared their performance on real underwater images taken in Korean waters. Dark channel prior (DCP), gradient transform, image fusion, and generative adversarial networks (GAN), such as CycleGAN and underwater GAN (UGAN), were considered for single-image enhancement. Their performance was evaluated in terms of the underwater image quality measure, underwater color image quality evaluation, the gray-world assumption, and a blur metric. The DCP saturated the underwater images to a specific greenish or bluish color tone and reduced the brightness of the background signal. The gradient transform method with two transmission maps was sensitive to the light source and highlighted the regions exposed to light. Although image fusion enabled reasonable color correction, object details were lost in the last fusion step. CycleGAN corrected the overall color tone relatively well but generated artifacts in the background. UGAN showed good visual quality and obtained the highest scores on all figures of merit (FOMs) by compensating for color and visibility better than the other single-enhancement methods.
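Among the FOMs above, the gray-world assumption is the simplest to state in code: a color-balanced image should have roughly equal mean values in its R, G, and B channels. The numpy sketch below uses one plausible deviation measure, not necessarily the paper's exact formulation.

```python
import numpy as np

def gray_world_deviation(img):
    """Deviation from the gray-world assumption for an RGB image of
    shape (H, W, 3) with values in [0, 1]; 0 means the channel means
    are perfectly balanced, larger values indicate a color cast."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    return float(np.std(means))               # spread of the means
```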

CycleGAN을 이용한 인터랙티브 웹페이지 (Interactive Web using CycleGAN)

  • 김지원;정해정;김동호
    • 한국방송∙미디어공학회 학술대회논문집 / 2021 Fall Conference / pp.280-282 / 2021
  • Recently, research on GANs (Generative Adversarial Networks), a deep learning technique, has been active in the field of image-to-image translation. Building on this technology, services that offer users convenience and entertainment are being developed as applications and websites. This paper studies a method for translating images with a CycleGAN model and serving the resulting images through an interactive web page that responds to users in real time. TensorFlow and Keras were used to implement the model, and the website was built with Django, HTML5, CSS, and JavaScript.
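As a sketch of the glue between the web page and the model described above: a single Django view that accepts an uploaded image, runs it through a trained CycleGAN generator, and returns the translated image. The model file name, input size, and URL wiring are hypothetical, and the generator is assumed to take and produce images scaled to [-1, 1]; this is not the authors' code.

```python
# views.py -- minimal sketch with hypothetical names.
import io

import numpy as np
import tensorflow as tf
from django.http import HttpResponse
from PIL import Image

# Hypothetical saved generator; loaded once at import time.
generator = tf.keras.models.load_model("cyclegan_generator.h5", compile=False)

def translate(request):
    """Accept a POSTed 'image' file, return the CycleGAN output as PNG."""
    img = Image.open(request.FILES["image"]).convert("RGB").resize((256, 256))
    x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0   # scale to [-1, 1]
    y = generator(x[np.newaxis])[0].numpy()               # (256, 256, 3)
    out = Image.fromarray(np.uint8((y + 1.0) * 127.5))
    buf = io.BytesIO()
    out.save(buf, format="PNG")
    return HttpResponse(buf.getvalue(), content_type="image/png")
```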

Evaluating Chest Abnormalities Detection: YOLOv7 and Detection Transformer with CycleGAN Data Augmentation

  • Yoshua Kaleb Purwanto;Suk-Ho Lee;Dae-Ki Kang
    • International journal of advanced smart convergence / Vol. 13, No. 2 / pp.195-204 / 2024
  • In this paper, we investigate the comparative performance of two leading object detection architectures, YOLOv7 and Detection Transformer (DETR), across varying levels of data augmentation using CycleGAN. Our experiments focus on chest scan images within the context of biomedical informatics, specifically targeting the detection of abnormalities. The study reveals that YOLOv7 consistently outperforms DETR across all levels of augmented data, maintaining better performance even with 75% augmented data. Additionally, YOLOv7 demonstrates significantly faster convergence, requiring approximately 30 epochs compared to DETR's 300 epochs. These findings underscore the superiority of YOLOv7 for object detection tasks, especially in scenarios with limited data and when rapid convergence is essential. Our results provide valuable insights for researchers and practitioners in the field of computer vision, highlighting the effectiveness of YOLOv7 and the importance of data augmentation in improving model performance and efficiency.
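One plausible reading of the augmentation levels above is that CycleGAN-translated images make up a fixed fraction of the training list. The sketch below mixes real and synthetic file lists at such a ratio; the directory layout and this interpretation of "75% augmented data" are assumptions, not details from the paper.

```python
import random
from pathlib import Path

def build_training_list(real_dir, synth_dir, aug_fraction=0.75, seed=0):
    """Return a shuffled file list in which roughly `aug_fraction` of the
    entries are CycleGAN-augmented images (hypothetical directory layout;
    aug_fraction must be < 1)."""
    real = sorted(Path(real_dir).glob("*.png"))
    synth = sorted(Path(synth_dir).glob("*.png"))
    n_synth = int(len(real) * aug_fraction / (1.0 - aug_fraction))
    rng = random.Random(seed)
    files = real + rng.sample(synth, min(n_synth, len(synth)))
    rng.shuffle(files)
    return files
```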

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • 한국해양공학회지 / Vol. 36, No. 1 / pp.32-40 / 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation according to the wavelength of light and reflection by very small floating objects cause low contrast, blurry clarity, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates the enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality qualitatively and quantitatively in various underwater environments, while the performance of FUnIE-GAN varied with the underwater environment. Image Fusion performed well in terms of color correction and sharpness enhancement. These methods are expected to be useful for monitoring underwater work and for the autonomous operation of unmanned vehicles by improving the visibility of underwater situations.
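For quick comparison, the averages reported in this abstract can be tabulated directly; the numbers below are copied verbatim from the text above.

```python
# Average scores over 100 Korean-sea images, as reported in the abstract.
reported = {
    "CycleGAN":     {"UIQM": 3.91, "UCIQE": 29.90},
    "UGAN":         {"UIQM": 3.42, "UCIQE": 26.77},
    "FUnIE-GAN":    {"UIQM": 2.66, "UCIQE": 22.88},
    "Image Fusion": {"UIQM": 3.63, "UCIQE": 23.59},
}
for method, s in sorted(reported.items(), key=lambda kv: -kv[1]["UIQM"]):
    print(f"{method:>12}  UIQM {s['UIQM']:.2f}  UCIQE {s['UCIQE']:.2f}")
```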

SSIM 목적 함수와 CycleGAN을 이용한 적외선 이미지 데이터셋 생성 기법 연구 (Synthetic Infra-Red Image Dataset Generation by CycleGAN based on SSIM Loss Function)

  • 이하늘;이현재
    • 한국군사과학기술학회지 / Vol. 25, No. 5 / pp.476-486 / 2022
  • Generating synthetic dynamic infrared images from a given virtual environment is the primary goal in simulating the output of an infrared (IR) camera installed on a vehicle, in order to evaluate control algorithms for various search and reconnaissance missions. Because actual IR data are difficult to obtain in complex environments, artificial intelligence (AI) has recently been used in the field of image data generation. In this paper, the CycleGAN technique is applied to obtain more realistic synthetic IR images. We add a Structural Similarity Index Measure (SSIM) loss term to the L1 loss function so that the CycleGAN generates more realistic synthetic IR images. Simulations show that the synthetic IR images generated by the proposed technique are applicable to guided-missile flight simulation tests.
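A minimal TensorFlow sketch of the combined objective described above: the usual L1 cycle-reconstruction term plus a (1 - SSIM) term. The weighting factors are illustrative, not the paper's tuned values; tf.image.ssim needs the images' dynamic range, here assumed scaled to [0, 1].

```python
import tensorflow as tf

def l1_ssim_cycle_loss(real, cycled, lam_l1=10.0, lam_ssim=1.0):
    """Cycle loss combining L1 with a structural-similarity term.

    real, cycled: image batches scaled to [0, 1]; the lambda weights
    are illustrative assumptions.
    """
    l1 = tf.reduce_mean(tf.abs(real - cycled))
    ssim = tf.reduce_mean(tf.image.ssim(real, cycled, max_val=1.0))
    return lam_l1 * l1 + lam_ssim * (1.0 - ssim)   # SSIM = 1 -> no penalty
```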

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen;Rong Li
    • Journal of Information Processing Systems / Vol. 19, No. 5 / pp.663-672 / 2023
  • Most vehicle detection methods extract vehicle features poorly at night, reducing their robustness; hence, this study proposes a night vehicle detection method based on style-transfer image enhancement. First, a style transfer model is constructed using cycle generative adversarial networks (cycleGANs), and the daytime data in the BDD100K dataset are converted into nighttime data to form a style dataset. The dataset is then divided using its labels. Finally, a YOLOv5s network detects vehicles in the nighttime images for reliable recognition of vehicle information in complex environments. Experimental results on the BDD100K dataset show that the transferred night vehicle images are clear and meet the requirements. The precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
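The detection stage of this pipeline can be sketched with the publicly available YOLOv5 hub model; the weights are the generic pretrained ones and the image path is a placeholder, with the cycleGAN day-to-night translation assumed to have been run beforehand.

```python
import torch

# Load the small pretrained YOLOv5 model from the official hub
# (downloaded on first use; requires internet access).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Placeholder path to a cycleGAN day-to-night translated frame.
results = model("night_translated_frame.jpg")
detections = results.pandas().xyxy[0]          # one row per detected box
vehicles = detections[detections["name"].isin(["car", "bus", "truck"])]
print(vehicles[["name", "confidence"]])
```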