• Title/Summary/Keyword: cycle-consistent adversarial network


Many-to-many voice conversion experiments using a Korean speech corpus

  • Yook, Dongsuk; Seo, HyungJin; Ko, Bonggu; Yoo, In-Chul
    • The Journal of the Acoustical Society of Korea, v.41 no.3, pp.351-358, 2022
  • Recently, Generative Adversarial Networks (GAN) and Variational AutoEncoders (VAE) have been applied to voice conversion that can make use of non-parallel training data. In particular, Conditional Cycle-Consistent Generative Adversarial Networks (CC-GAN) and Cycle-Consistent Variational AutoEncoders (CycleVAE) show promising results in many-to-many voice conversion among multiple speakers. However, the number of speakers has been relatively small in conventional voice conversion studies using CC-GANs and CycleVAEs. In this paper, we extend the number of speakers to 100 and analyze the performance of the many-to-many voice conversion methods experimentally. The experiments show that the CC-GAN yields 4.5% lower Mel-Cepstral Distortion (MCD) for a small number of speakers, whereas the CycleVAE yields 12.7% lower MCD under a limited training time for a large number of speakers.
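
The MCD percentages above compare converted speech against target speech. For reference, here is a minimal sketch of the standard frame-averaged MCD computation (NumPy); it assumes the mel-cepstral sequences are already time-aligned (e.g., via DTW), since the abstract does not specify the feature extraction or alignment settings used.

```python
import numpy as np

def mel_cepstral_distortion(mcep_ref: np.ndarray, mcep_conv: np.ndarray) -> float:
    """Frame-averaged MCD in dB between two aligned mel-cepstral sequences.

    Both inputs have shape (num_frames, num_coeffs). The 0th (energy)
    coefficient is conventionally excluded from the distortion.
    """
    diff = mcep_ref[:, 1:] - mcep_conv[:, 1:]
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(np.mean(per_frame))
```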

Reverting Gene Expression Pattern of Cancer into Normal-Like Using Cycle-Consistent Adversarial Network

  • Lee, Chan-hee; Ahn, TaeJin
    • International Journal of Advanced Culture Technology, v.6 no.4, pp.275-283, 2018
  • Cancer shows a distinct pattern of gene expression compared to normal tissue, and this difference underlies the malignant characteristics of cancer. Many cancer drugs target this difference so that they can selectively kill cancer cells. One recent demand of personalized cancer treatment is retrieving normal tissue from a patient so that the gene expression difference between cancer and normal can be assessed. However, in most clinical situations it is hard to retrieve normal tissue from a patient, because a biopsy of normal tissue may damage organ function or expose the patient to a risk of infection or other side effects. Thus, there is a challenge to estimate the gene expression of the normal cells from which a cancer originated without taking an additional biopsy. In this paper, we propose an in-silico prediction of a normal cell's gene expression from the gene expression data of a tumor sample. We call this challenge reverting the cancer into normal, and we divided it into two parts. The first part is building a generator that can fool a pretrained discriminator. The pretrained discriminator was trained on public data (9,601 cancers, 7,240 normals) and shows 0.997 accuracy in discriminating whether a given gene expression pattern is cancer or normal. Deceiving this pretrained discriminator means our method is capable of generating very normal-like gene expression data. The second part of the challenge is to assess whether the generated normal is similar to the true reverse form of the input cancer data. We used cycle-consistent adversarial networks to approach these challenges, since this network can translate one domain to the other while maintaining the original domain's features and at the same time adding the new domain's features. We evaluated whether, when cancer data are put into a cycle-consistent adversarial network, it retains most of the information from the input (cancer) while changing the data into normal. We also evaluated whether the generated gene expression of normal tissue is the biological reverse form of the gene expression of the cancer used as input.
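
The approach rests on the CycleGAN cycle-consistency idea: a cancer-to-normal mapping composed with a normal-to-cancer mapping should reconstruct the original input, which is what lets the network preserve sample-specific information while swapping domain features. A minimal PyTorch sketch of that loss term follows; the generator modules G and F are hypothetical stand-ins, not the authors' architectures.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           cancer: torch.Tensor, normal: torch.Tensor,
                           lam: float = 10.0) -> torch.Tensor:
    """L1 cycle loss over both translation directions.

    G: hypothetical cancer -> normal generator over expression vectors.
    F: hypothetical normal -> cancer generator.
    lam: cycle-loss weight (10.0 is the common CycleGAN default, an
    assumption here since the abstract gives no hyperparameters).
    """
    l1 = nn.L1Loss()
    loss_cancer = l1(F(G(cancer)), cancer)  # cancer -> normal -> cancer
    loss_normal = l1(G(F(normal)), normal)  # normal -> cancer -> normal
    return lam * (loss_cancer + loss_normal)
```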

Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis

  • Chenggong Yan; Jie Lin; Haixia Li; Jun Xu; Tianjing Zhang; Hao Chen; Henry C. Woodruff; Guangyao Wu; Siqi Zhang; Yikai Xu; Philippe Lambin
    • Korean Journal of Radiology, v.22 no.6, pp.983-993, 2021
  • Objective: To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods: Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and the Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results: With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality for optimal visibility of anatomic structures and pathological findings, with a lower level of image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield units [HU]) than that of hybrid iterative reconstruction (66.3 ± 10.5 HU, p < 0.001) and a noise level similar to that of model-based iterative reconstruction (19.6 ± 2.6 HU, p > 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by model-based and hybrid iterative reconstruction. The mean effective radiation dose of ULDCT was 0.12 mSv, a mean 93.9% reduction compared to standard-dose CT. Conclusion: The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
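
The reported 2.0 dB gain refers to the standard peak signal-to-noise ratio. A minimal NumPy sketch is given below; the choice of data_range (e.g., the HU evaluation window width) is an assumption, since the abstract does not state how the metric was normalized. The structural similarity index can be computed analogously, for example with skimage.metrics.structural_similarity.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB.

    data_range is the maximum possible pixel-value span of the images
    (for CT, typically the width of the HU window used for evaluation).
    """
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```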

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi; Seo, Jung-Min; Kim, Soo Mee
    • Journal of Ocean Engineering and Technology, v.36 no.1, pp.32-40, 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation that depends on the wavelength of light, together with reflection by very small floating particles, causes low contrast, blurry clarity, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images using the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality both qualitatively and quantitatively in various underwater environments, while the performance of FUnIE-GAN varied with the underwater environment. Image Fusion showed good performance in terms of color correction and sharpness enhancement. These methods are expected to support the monitoring of underwater work and the autonomous operation of unmanned vehicles by more accurately improving the visibility of underwater scenes.
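
UCIQE, one of the two metrics reported above, is a fixed linear combination of chroma standard deviation, luminance contrast, and mean saturation computed in CIELab space. The sketch below uses the commonly cited Yang & Sowmya (2015) coefficients; the exact Lab normalization used by the paper is not stated in the abstract, so absolute scores from this sketch may differ from the reported values.

```python
import numpy as np
import cv2

def uciqe(bgr: np.ndarray) -> float:
    """UCIQE = c1*sigma_c + c2*con_l + c3*mu_s for an 8-bit BGR image.

    sigma_c: standard deviation of chroma; con_l: luminance contrast
    (mean of the top 1% of L minus mean of the bottom 1%); mu_s: mean
    saturation (chroma / luminance). Coefficients follow Yang & Sowmya.
    """
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).astype(np.float64)
    L = lab[..., 0]
    a = lab[..., 1] - 128.0  # OpenCV stores a/b offset by 128 for uint8
    b = lab[..., 2] - 128.0
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()
    l_sorted = np.sort(L.ravel())
    top = max(1, int(0.01 * l_sorted.size))
    con_l = l_sorted[-top:].mean() - l_sorted[:top].mean()
    mu_s = np.mean(chroma / (L + 1e-8))
    return 0.4680 * sigma_c + 0.2745 * con_l + 0.2576 * mu_s
```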