• Title/Summary/Keyword: Adversarial Networks


Convergence of Artificial Intelligence Techniques and Domain Specific Knowledge for Generating Super-Resolution Meteorological Data (기상 자료 초해상화를 위한 인공지능 기술과 기상 전문 지식의 융합)

  • Ha, Ji-Hun; Park, Kun-Woo; Im, Hyo-Hyuk; Cho, Dong-Hee; Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.63-70 / 2021
  • Generating super-resolution meteorological data with a deep neural network can support precise research and useful real-life services. We propose a new technique for generating improved training data for super-resolution deep neural networks. To generate high-resolution meteorological data with domain-specific knowledge, a Lambert conformal conic projection and objective analysis were applied to observation data and ERA5 reanalysis field data from specialized institutions. As a result, temperature and humidity analysis data based on domain-specific knowledge improved RMSE by up to 42% and 46%, respectively. Next, a super-resolution generative adversarial network (SRGAN), one of the artificial intelligence techniques, was used to automate the manual data generation procedure based on the domain-specific techniques described above. Experiments were conducted to generate data with 1 km resolution from global model data with 10 km resolution. Finally, the results generated with SRGAN have a higher resolution than the global model input and show an analysis pattern similar to the manually generated high-resolution analysis data, although with smoother boundaries.
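
For context, a minimal PyTorch sketch of the SRGAN idea described above is given below: an upsampling generator trained with a content loss plus an adversarial loss from a discriminator. The network sizes, the 4x scale factor, and the single-channel (e.g., temperature) field are illustrative assumptions, not the paper's configuration.

```python
# Hedged SRGAN-style sketch for gridded meteorological fields (illustrative shapes/names).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """SRResNet-style generator: conv head, residual trunk, PixelShuffle upsampling."""
    def __init__(self, in_channels=1, channels=64, n_blocks=8, scale=4):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(in_channels, channels, 9, padding=4), nn.PReLU())
        self.trunk = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        up = []
        for _ in range(scale // 2):  # two x2 PixelShuffle stages -> x4 overall
            up += [nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU()]
        self.up = nn.Sequential(*up)
        self.tail = nn.Conv2d(channels, in_channels, 9, padding=4)

    def forward(self, x):
        h = self.head(x)
        h = h + self.trunk(h)
        return self.tail(self.up(h))

class Discriminator(nn.Module):
    """Scores whether a high-resolution field looks like a real analysis field."""
    def __init__(self, in_channels=1, channels=64):
        super().__init__()
        layers, c = [], in_channels
        for out_c in (channels, channels * 2, channels * 4):
            layers += [nn.Conv2d(c, out_c, 3, stride=2, padding=1), nn.LeakyReLU(0.2)]
            c = out_c
        layers += [nn.Conv2d(c, 1, 3, padding=1)]  # per-patch real/fake logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# One generator training step with content + adversarial terms (optimizer steps omitted).
lr_field = torch.randn(2, 1, 32, 32)    # low-resolution input field
hr_field = torch.randn(2, 1, 128, 128)  # high-resolution analysis target
G, D = Generator(), Discriminator()
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
fake = G(lr_field)
d_logits = D(fake)
g_loss = l1(fake, hr_field) + 1e-3 * bce(d_logits, torch.ones_like(d_logits))
```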

A Study on Webtoon Background Image Generation Using CartoonGAN Algorithm (CartoonGAN 알고리즘을 이용한 웹툰(Webtoon) 배경 이미지 생성에 관한 연구)

  • Saekyu Oh; Juyoung Kang
    • The Journal of Bigdata / v.7 no.1 / pp.173-185 / 2022
  • Nowadays, Korean webtoons are leading the global digital comic market. Webtoons are serviced in various languages around the world, dramas and movies produced from webtoon IP (intellectual property) have become big hits, and more and more webtoons are being adapted to other media. However, alongside this success, the working environment of webtoon creators is emerging as an important issue. According to the 2021 Cartoon User Survey, webtoon creators spend an average of 10.5 hours a day on creative activities. Creators have to produce a large number of drawings every week, competition among webtoons is getting fiercer, and the number of drawings required per episode is increasing. Therefore, this study proposes generating webtoon background images with deep learning algorithms and using them for webtoon production. The main character of a webtoon is an area that requires much of the creator's originality, but the background is relatively repetitive and does not require originality, so a model that can create backgrounds similar to the creator's drawing style can be useful for webtoon production. Background generation uses CycleGAN, which shows good performance in image-to-image translation, and CartoonGAN, which is specialized in cartoon-style image generation. This deep learning-based image generation is expected to shorten the working hours of creators in an excessive work environment and contribute to the convergence of webtoons and technology.
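
A minimal sketch of the unpaired image-to-image translation objective behind CycleGAN (and, with modified losses, CartoonGAN) may help clarify how photos could be restyled into webtoon backgrounds without paired photo/background data. The tiny networks, loss weights, and tensor shapes below are placeholders, not the study's actual models.

```python
# Hedged sketch of a CycleGAN-style objective for photo -> webtoon-background translation.
import torch
import torch.nn as nn

def tiny_net(in_c, out_c):
    """Small stand-in for a real generator/discriminator backbone."""
    return nn.Sequential(
        nn.Conv2d(in_c, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_c, 3, padding=1),
    )

G_photo2toon = tiny_net(3, 3)   # photo domain -> webtoon-background domain
G_toon2photo = tiny_net(3, 3)   # webtoon-background domain -> photo domain
D_toon = tiny_net(3, 1)         # judges whether an image looks like a webtoon background

l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()
photo = torch.rand(4, 3, 64, 64)  # unpaired photo batch (placeholder data)
toon = torch.rand(4, 3, 64, 64)   # unpaired webtoon background batch (placeholder data)

fake_toon = G_photo2toon(photo)
recon_photo = G_toon2photo(fake_toon)

# Adversarial term: the translated photo should fool the webtoon-domain discriminator.
logits = D_toon(fake_toon)
adv_loss = bce(logits, torch.ones_like(logits))

# Cycle-consistency term: translating forward and back should recover the input,
# which is what lets CycleGAN learn from unpaired photo/background sets.
cycle_loss = l1(recon_photo, photo)

# CartoonGAN instead adds a content (VGG feature) loss and an edge-smoothing adversarial
# term tailored to cartoon images; only the shared adversarial + reconstruction idea is shown.
g_loss = adv_loss + 10.0 * cycle_loss
```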

Advancing Process Plant Design: A Framework for Design Automation Using Generative Neural Network Models

  • Minhyuk JUNG; Jaemook CHOI; Seonu JOO; Wonseok CHOI; Hwikyung Chun
    • International conference on construction engineering and project management / 2024.07a / pp.1285-1285 / 2024
  • In process plant construction, the implementation of design automation technologies is pivotal in reducing the timeframes associated with the design phase and in enabling the generation and evaluation of a variety of design alternatives, thereby facilitating the identification of optimal solutions. These technologies can play a crucial role in ensuring the successful delivery of projects. Previous research in the domain of design automation has primarily focused on parametric design in architectural contexts and on the automation of equipment layout and pipe routing within plant engineering, predominantly employing rule-based algorithms. Nevertheless, these studies are constrained by the limited flexibility of their models, which narrows the scope for generating alternative solutions and complicates the process of exploring comprehensive solutions using nonlinear optimization techniques as the number of design and engineering parameters increases. This research introduces a framework for automating plant design through the use of generative neural network models to overcome these challenges. The framework is applicable to the layout problems of process plants, covering the equipment necessary for production processes and the facilities for essential resources and their interconnections. The development of the proposed Neural-network (NN) based Generative Design Model unfolds in four stages: (a) Rule-based Model Development: This initial phase involves the development of rule-based models for layout generation and evaluation, where the generation model produces layouts based on predefined parameters, and the evaluation model assesses these layouts using various performance metrics. (b) Neural Network Model Development: This phase transitions towards neural network models, establishing a NN-based layout generation model utilizing Generative Adversarial Network (GAN)-based methods and a NN-based layout evaluation model. (c) Model Optimization: The third phase is dedicated to optimizing the models through Bayesian Optimization, aiming to extend the exploration space beyond the limitations of rule-based models. (d) Inverse Design Model Development: The concluding phase employs an inverse design method to merge the generative and evaluative networks, resulting in a model that outputs layout designs to meet specific performance objectives. This study aims to augment the efficiency and effectiveness of the design process in process plant construction, transcending the limitations of conventional rule-based approaches and contributing to the achievement of successful project outcomes.
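
The inverse-design step in stage (d) can be illustrated with a small sketch: freeze a layout generator and a learned evaluator, then optimize a latent code so the predicted performance approaches a target. The dimensions, networks, and target below are illustrative assumptions; the untrained networks here are stand-ins for the models produced in stages (b) and (c), not the authors' implementation.

```python
# Hedged sketch of latent-space inverse design over a generative layout model.
import torch
import torch.nn as nn

N_EQUIP = 8       # number of equipment items to place (assumption)
LATENT_DIM = 16   # latent code size (assumption)

generator = nn.Sequential(      # latent vector -> (x, y) coordinates for each item
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_EQUIP * 2),
)
evaluator = nn.Sequential(      # layout -> scalar performance metric (e.g., routing cost)
    nn.Linear(N_EQUIP * 2, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Freeze both networks; only the latent code is optimized during inverse design.
for p in list(generator.parameters()) + list(evaluator.parameters()):
    p.requires_grad_(False)

target = torch.tensor([[0.0]])  # desired (normalized) performance value
z = torch.randn(1, LATENT_DIM, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    layout = generator(z)                  # candidate layout from the generative model
    score = evaluator(layout)              # predicted performance of that layout
    loss = (score - target).pow(2).mean()  # drive the prediction toward the target
    opt.zero_grad()
    loss.backward()
    opt.step()

best_layout = generator(z).detach().reshape(N_EQUIP, 2)  # (x, y) per equipment item
```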

A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee; Seung-Ho Lee
    • Journal of IKEEE / v.27 no.4 / pp.464-471 / 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation using a StyleGAN Encoder. The generation process first reconstructs a face image with the StyleGAN Encoder and then changes the expression by moving the latent vector along a boundary learned with an SVM. However, when the boundary for a smiling expression is learned, age distortion arises from the expression change: the smile boundary obtained from SVM training includes the wrinkles caused by the expression change as learned features, so age characteristics are learned along with it. To solve this problem, the proposed method computes the correlation coefficient between the smile boundary and the age boundary and adjusts the smile boundary by the age boundary in proportion to that coefficient. To confirm the effectiveness of the proposed method, experiments were conducted on FFHQ, a publicly available standard face dataset, and FID scores were measured. For smile images, the FID between the ground truth and the images generated by the proposed method improved by about 0.46 over the existing method, and the FID between the StyleGAN Encoder reconstructions and the generated smile images improved by about 1.031. For non-smile images, the corresponding FID scores improved by about 2.25 and about 1.908, respectively. In addition, the age of each generated facial expression image was estimated and the MSE against the age estimated from the StyleGAN Encoder reconstruction was measured; compared to the existing method, the proposed method improved this measure by about 1.5 for smile images and about 1.63 for non-smile images, demonstrating its effectiveness.
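
One plausible reading of the correlation-based adjustment described above is to subtract the age direction from the smile direction in proportion to their correlation before editing the latent code. The sketch below illustrates that interpretation with placeholder boundary vectors and latent codes; the paper's exact formulation and edit strength may differ.

```python
# Hedged NumPy sketch of boundary-based latent editing with a correlation-based age correction.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512                              # StyleGAN W-space dimensionality
w = rng.normal(size=latent_dim)               # latent code from a StyleGAN Encoder (placeholder)
smile_boundary = rng.normal(size=latent_dim)  # SVM normal separating smile / non-smile (placeholder)
age_boundary = rng.normal(size=latent_dim)    # SVM normal separating young / old (placeholder)

smile_boundary /= np.linalg.norm(smile_boundary)
age_boundary /= np.linalg.norm(age_boundary)

# Correlation between the two boundaries: for unit vectors this is their cosine similarity,
# i.e., how much of the "age" direction is entangled in the "smile" direction.
corr = float(smile_boundary @ age_boundary)

# Remove the age component from the smile direction in proportion to that correlation,
# so that moving toward "smile" changes the expression while distorting age less.
adjusted_smile = smile_boundary - corr * age_boundary
adjusted_smile /= np.linalg.norm(adjusted_smile)

alpha = 3.0                                   # edit strength along the boundary (assumption)
w_smile = w + alpha * adjusted_smile          # edited latent, to be decoded by a StyleGAN generator
```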