• Title/Summary/Keyword: conditional generative adversarial network

7 search results

Image-to-Image Translation Based on U-Net with R2 and Attention (R2와 어텐션을 적용한 유넷 기반의 영상 간 변환에 관한 연구)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services / v.21 no.4 / pp.9-16 / 2020
  • In image processing and computer vision, the problem of translating one image into another or generating a new image has drawn steady attention as hardware has advanced. However, computer-generated images often still look unnatural to the human eye. With the recent surge of deep-learning research, image generation and enhancement are being actively studied, and among these methods the Generative Adversarial Network (GAN) performs particularly well at image generation. Since the original GAN was proposed, many GAN variants have been presented, enabling the generation of more natural images than earlier image-generation research achieved. Among them, pix2pix is a conditional GAN model, a general-purpose network that shows good performance on various datasets. pix2pix is based on U-Net, but several U-Net-derived networks show better performance. Therefore, in this study, images are generated by replacing the U-Net in pix2pix with various networks, and the results are compared and evaluated. The generated images confirm that pix2pix with the Attention, R2, and Attention-R2 networks outperforms the original U-Net-based pix2pix; examining the limitations of the strongest network is suggested as future work.
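The Attention variant mentioned above gates U-Net skip connections with an attention coefficient. As a rough numerical sketch (not the paper's implementation; the scalar weights here are illustrative stand-ins for trained convolutions), an additive attention gate scales skip features by sigmoid(psi(relu(Wx·x + Wg·g))):

```python
import math

def attention_gate(x, g, w_x, w_g, w_psi):
    """Additive attention gate (Attention U-Net style) for one feature vector.

    x: skip-connection features, g: gating signal from the decoder.
    Each feature is scaled by sigmoid(w_psi * relu(w_x*x_i + w_g*g_i))."""
    coeffs = []
    for xi, gi in zip(x, g):
        a = max(0.0, w_x * xi + w_g * gi)                   # ReLU
        coeffs.append(1.0 / (1.0 + math.exp(-w_psi * a)))   # sigmoid
    return [xi * c for xi, c in zip(x, coeffs)]

gated = attention_gate([1.0, -2.0, 0.5], [0.5, 0.5, 0.5], 1.0, 1.0, 2.0)
```

Features whose combination with the gating signal is suppressed by the ReLU receive the neutral coefficient 0.5, while strongly aligned features pass through nearly unchanged.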

A Study on the Emotional Text Generation using Generative Adversarial Network (Generative Adversarial Network 학습을 통한 감정 텍스트 생성에 관한 연구)

  • Kim, Woo-seong;Kim, Hyeoncheol
    • Annual Conference of KIPS / 2019.05a / pp.380-382 / 2019
  • A GAN (Generative Adversarial Network) is a machine-learning framework in which, on a fixed training dataset, a generator and a discriminator maintain an adversarial yet mutually productive relationship, each exerting a positive influence on the other's learning. Traditional sentence generation is trained with Markov Decision Processes based on the statistical distribution of words and with Recurrent Neural Networks; these approaches have become the standard models for sequential data such as sentences. GANs are being tried as a new model in this field, but despite these attempts, various problems remain unsolved. This paper focuses on two of them. First, GANs have difficulty handling sequential data. Second, because generation is random, the output is not limited to the data the user wants. To address these problems, we design a conditional generative adversarial network that provides partial ground-truth answers. First, a Sequence-to-Sequence model is introduced so that sequential data can be processed, enabling the generation of raw text. Second, providing partial ground-truth answers separates the conditions under which sentences are generated. As a result, the proposed techniques were able to generate raw emotional text.
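The "partial ground-truth" conditioning described above amounts to building the generator's input from a condition token plus a prefix of the desired answer. A minimal sketch, assuming hypothetical token names that are not the authors' vocabulary:

```python
def condition_input(source_tokens, emotion, answer_prefix):
    """Build a conditional generator input: an emotion condition token,
    a partial ground-truth answer prefix, a separator, then the source.
    Token names (<emotion>, <sep>) are illustrative, not from the paper."""
    return [f"<{emotion}>"] + answer_prefix + ["<sep>"] + source_tokens

seq = condition_input(["I", "am"], "joy", ["so"])
```

The decoder is then trained to continue from the given prefix, which constrains the otherwise random output toward the requested emotion class.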

Generation of High-Resolution Chest X-rays using Multi-scale Conditional Generative Adversarial Network with Attention (주목 메커니즘 기반의 멀티 스케일 조건부 적대적 생성 신경망을 활용한 고해상도 흉부 X선 영상 생성 기법)

  • Ann, Kyeongjin;Jang, Yeonggul;Ha, Seongmin;Jeon, Byunghwan;Hong, Youngtaek;Shim, Hackjoon;Chang, Hyuk-Jae
    • Journal of Broadcast Engineering / v.25 no.1 / pp.1-12 / 2020
  • In the medical field, numerical imbalance of data due to differences in disease prevalence is a common problem. It reduces the performance of an artificial-intelligence network, making it difficult to train a network that performs well. Recently, generative adversarial network (GAN) technology has been introduced to address this problem, and its ability has been demonstrated by successful applications in various fields. However, it is still difficult to achieve good results on problems degraded by numerical imbalance, because the image resolution of previous studies is not yet good enough and the structure in the image is modeled only locally. In this paper, we propose a multi-scale conditional generative adversarial network based on an attention mechanism that can produce high-resolution images to solve the numerical-imbalance problem of chest X-ray image data. The network can produce images for various diseases by controlling condition variables with a single network. It is efficient and effective in that the network does not need to be trained independently for each disease class, and it solves the long-range-dependency problem in image generation with a self-attention mechanism.
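The self-attention mechanism that handles long-range dependencies lets every position attend to every other position in one step. A minimal scaled dot-product sketch over short vectors (a generic definition, not the paper's network):

```python
import math

def self_attention(q, k, v):
    """Scaled dot-product self-attention: softmax(QK^T / sqrt(d)) V.
    q, k, v are lists of equal-length float vectors (one per position)."""
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        m = max(scores)                      # subtract max for numeric stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out

# Each query attends almost entirely to its matching key.
out = self_attention([[10.0, 0.0], [0.0, 10.0]],
                     [[10.0, 0.0], [0.0, 10.0]],
                     [[1.0, 0.0], [0.0, 1.0]])
```

Because the weights are computed between all position pairs, a distant pixel can influence the output directly, which is the long-range property the abstract relies on.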

Optimization of Abdominal X-ray Images using Generative Adversarial Network to Realize Minimized Radiation Dose (방사선 조사선량의 최소화를 위한 생성적 적대 신경망을 활용한 복부 엑스선 영상 최적화 연구)

  • Sangwoo Kim;Jae-Dong Rhim
    • Journal of the Korean Society of Radiology / v.17 no.2 / pp.191-199 / 2023
  • This study aimed to propose minimized radiation doses with an optimized abdominal X-ray image, realized with a Deep Blind Image Super-Resolution Generative Adversarial Network (BSRGAN) technique. Entrance surface doses (ESD) were measured while changing exposure conditions. Under the same exposures, abdominal images were acquired and processed with the BSRGAN. The images reconstructed by the BSRGAN were compared to a reference image taken at 80 kVp and 320 mA, evaluated by mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). In addition, signal-profile analysis was employed to validate the effect of the reconstructed images. The lowest MSE (about 0.285) was obtained at 90 kVp, 125 mA and at 100 kVp, 100 mA, which decreased the ESD by about 52 to 53% while exhibiting PSNR = 37.694 and SSIM = 0.999. The signal-intensity variations under the optimized conditions were even lower than those of the reference image. This means that the optimized exposure conditions can obtain reasonable image quality with a substantial decrease in radiation dose, sufficiently reflecting the As Low As Reasonably Achievable (ALARA) principle of radiation protection.
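The MSE and PSNR comparisons the study relies on can be sketched in plain Python. This is the generic textbook definition with an assumed peak value; the study's exact normalization is not stated, so these numbers are illustrative rather than a reproduction of its results:

```python
import math

def mse(a, b):
    """Mean squared error between two flattened images of equal length."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(max^2 / MSE).
    Higher means closer to the reference; identical images give infinity."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(max_val ** 2 / e)

p = psnr([0.0, 0.0], [16.0, 16.0])   # MSE = 256 on an 8-bit scale
```

A BSRGAN-reconstructed low-dose image would be scored against the 80 kVp / 320 mA reference in exactly this way, with SSIM added for structural comparison.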

A Study on Lightweight and Optimizing with Generative Adversarial Network Based Video Super-resolution Model (생성적 적대 신경망 기반의 딥 러닝 비디오 초 해상화 모델 경량화 및 최적화 기법 연구)

  • Kim, Dong-hwi;Lee, Su-jin;Park, Sang-hyo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.1226-1228 / 2022
  • As demand for and supply of UHD-resolution video content beyond FHD grow, industries across the board have become interested in delivering video content while using network resources efficiently. Conventional methods such as bi-cubic and bi-linear interpolation capture the features of the input image less well than deep-learning-based models do. Because deep-learning-based super-resolution requires more computational resources than conventional methods, this paper aims to apply lightweight techniques to a deep super-resolution model so that it uses fewer resources, and uses them more efficiently, than existing models. As the method, structured pruning was used to lighten the model's architecture, and the number of trainable parameters was reduced to lower hardware requirements. In addition, PSNR, LPIPS, and tOF results were compared while reducing the number of Residual Network blocks.
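Structured pruning, as used above, removes whole filters rather than individual weights. A common criterion (an assumption here; the paper does not state its ranking rule) is the filter's L1 norm. A minimal sketch:

```python
def prune_filters(filters, keep_ratio):
    """Structured (filter-level) pruning sketch: rank conv filters by
    L1 norm and keep only the strongest fraction, preserving their order.
    Removing a whole filter shrinks the layer's output channels, which is
    what makes this 'structured' and hardware-friendly."""
    norms = [(sum(abs(w) for w in f), i) for i, f in enumerate(filters)]
    k = max(1, int(len(filters) * keep_ratio))
    keep = sorted(i for _, i in sorted(norms, reverse=True)[:k])
    return [filters[i] for i in keep]

kept = prune_filters([[0.1, 0.1], [1.0, -1.0], [0.5, 0.5]], 2 / 3)
```

Unlike unstructured weight pruning, the resulting smaller dense layers need no sparse kernels, so the saved parameters translate directly into less compute and memory.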


Voice Conversion using Generative Adversarial Nets conditioned by Phonetic Posterior Grams (Phonetic Posterior Grams에 의해 조건화된 적대적 생성 신경망을 사용한 음성 변환 시스템)

  • Lim, Jin-su;Kang, Cheon-seong;Kim, Dong-Ha;Kim, Kyung-sup
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.369-372 / 2018
  • This paper proposes a non-parallel voice-conversion network that converts between an unpaired source voice and target voice. Conventional voice-conversion research used learning methods that minimize spectrogram distance error. These methods lose spectrogram resolution because they average pixels, and they also rely on parallel data that is hard to collect. This research uses Phonetic Posterior Grams (PPGs), the phonetic representation of the input voice, together with a GAN learning method to generate clearer voices. To evaluate the suggested method, we conducted a MOS test against a GMM-based model and found that performance improved compared to the conventional methods.
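The over-smoothing attributed above to distance-minimizing losses can be illustrated numerically: when several targets are equally plausible, the prediction that minimizes squared error is their average, which blurs detail. A small demonstration (a generic property of the L2 loss, not the paper's experiment):

```python
def l2_optimal_prediction(targets):
    """The single prediction minimizing mean squared error against several
    equally likely targets is their element-wise average. For multimodal
    spectrogram targets this average is a blurred compromise, which is why
    an adversarial loss can yield sharper output."""
    n = len(targets)
    return [sum(t[i] for t in targets) / n for i in range(len(targets[0]))]

pred = l2_optimal_prediction([[0.0, 1.0], [2.0, 3.0]])
```

A GAN discriminator instead penalizes such averages for being unlike any real sample, pushing the generator toward one sharp mode.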


A study on speech disentanglement framework based on adversarial learning for speaker recognition (화자 인식을 위한 적대학습 기반 음성 분리 프레임워크에 대한 연구)

  • Kwon, Yoohwan;Chung, Soo-Whan;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.447-453 / 2020
  • In this paper, we propose a system to extract effective speaker representations from a speech signal using a deep-learning method. Based on the fact that a speech signal contains identity-unrelated information such as text content, emotion, and background noise, we train the model so that the extracted features represent only speaker-related information and not speaker-unrelated information. Specifically, we propose an auto-encoder-based disentanglement method that outputs both speaker-related and speaker-unrelated embeddings using effective loss functions. To further improve the reconstruction performance in the decoding process, we also introduce a discriminator, as popularly used in the Generative Adversarial Network (GAN) structure. Since improving the decoding capability helps preserve speaker information and disentanglement, it improves speaker-verification performance. Experimental results demonstrate the effectiveness of the proposed method by improving the Equal Error Rate (EER) on the benchmark dataset VoxCeleb1.
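The EER metric used above is the operating point where the false-acceptance rate equals the false-rejection rate. A minimal sketch of computing it from verification scores (a generic threshold scan, not the paper's evaluation code):

```python
def equal_error_rate(genuine, impostor):
    """Scan candidate thresholds taken from the scores themselves and
    return the EER: the point where the false-acceptance rate (impostor
    scores above threshold) equals the false-rejection rate (genuine
    scores below threshold). Returns the average of FAR and FRR at the
    threshold where they are closest."""
    best = (1.0, 1.0, 1.0)  # (|FAR - FRR|, FAR, FRR)
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), far, frr)
    return (best[1] + best[2]) / 2.0

eer = equal_error_rate([0.9, 0.8, 0.7, 0.4], [0.5, 0.3, 0.2, 0.1])
```

A lower EER means the speaker-related embedding separates genuine and impostor trials more cleanly, which is how the disentanglement gain is quantified on VoxCeleb1.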