• Title/Abstract/Keywords: Style transfer

Search results: 135

Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

  • Xing, Baixi; Dou, Jian; Huang, Qing; Si, Huahao
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.4, pp.1464-1485, 2021
  • The emotional style of a multimedia artwork is abstract content information. This study explores an emotional style transfer method and a possible way of matching music with appropriate images with respect to emotional style. Deep Convolutional Neural Networks (DCNNs) can capture style and provide an iterative emotional style transfer solution for affective image generation. Here, we learn image emotion features via DCNNs and map the affective style onto other images. We set the image emotion feature as the style target in this style transfer problem and conducted experiments on affective image generation for eight emotion categories: dignified, dreaming, sad, vigorous, soothing, exciting, joyous, and graceful. A user study tested the synesthesia emotional image style transfer results against ground-truth user perception triggered by music-image pair stimuli. According to the user study results, the transferred affective images proved effective for music-image emotional synesthesia perception.
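
For context, the iterative DCNN-based transfer this abstract describes builds on the general Gatys-style optimization loop. The sketch below illustrates that generic loop in PyTorch, with an ordinary Gram-matrix style target standing in for the paper's learned emotion feature; the VGG layer indices, weights, and step count are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of iterative (Gatys-style) style transfer in PyTorch.
# Layers, weights, and iteration count are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def gram(feat):
    # Gram matrix of a (B, C, H, W) feature map, used as the style target.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

vgg = vgg19(weights="DEFAULT").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

style_layers, content_layer = [1, 6, 11, 20], 21  # assumed indices

def features(x):
    feats, h = {}, x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in style_layers or i == content_layer:
            feats[i] = h
    return feats

def transfer(content, style, steps=300, style_weight=1e4):
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    c_feats, s_feats = features(content), features(style)
    s_grams = {i: gram(s_feats[i]) for i in style_layers}
    for _ in range(steps):
        opt.zero_grad()
        t = features(target)
        loss = F.mse_loss(t[content_layer], c_feats[content_layer])
        for i in style_layers:
            loss = loss + style_weight * F.mse_loss(gram(t[i]), s_grams[i])
        loss.backward()
        opt.step()
    return target.detach()
```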

A Normalized Loss Function of Style Transfer Network for More Diverse and More Stable Transfer Results (다양성 및 안정성 확보를 위한 스타일 전이 네트워크 손실 함수 정규화 기법)

  • Choi, Insung; Kim, Yong-Goo
    • Journal of Broadcast Engineering, v.25 no.6, pp.980-993, 2020
  • Deep-learning based style transfer has recently attracted great attention because it provides high quality transfer results by appropriately reflecting the high-level structural characteristics of images. This paper deals with the problem of providing more stable and more diverse results from such deep-learning based style transfer methods. Based on an investigation of experimental results over a wide range of hyper-parameter settings, this paper defines the stability and diversity problems of style transfer and proposes a partial loss normalization method to solve them. Style transfer using the proposed normalization not only stabilizes control over the degree of style reflection regardless of input image characteristics, but also, unlike the existing method, yields diverse transfer results when controlling the weight of the partial style loss, and remains stable under differences in input image resolution.
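
The abstract does not spell out the normalization itself; the following sketch shows one plausible reading, in which each partial (per-layer) style loss is divided by its own detached magnitude so that user-set weights control relative contributions independent of raw loss scale. It is a generic illustration, not the paper's exact scheme.

```python
# Illustrative normalization of partial (per-layer) style losses. This is a
# generic scheme, not necessarily the one proposed in the paper above.
import torch
import torch.nn.functional as F

def normalized_style_loss(target_grams, style_grams, layer_weights):
    total = 0.0
    for i, w in layer_weights.items():
        raw = F.mse_loss(target_grams[i], style_grams[i])
        # Detach the normalizer so it rescales the term without the
        # denominator itself contributing to the gradient.
        total = total + w * raw / (raw.detach() + 1e-8)
    return total
```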

Optimization of attention map based model for improving the usability of style transfer techniques

  • Junghye Min
    • Journal of the Korea Society of Computer and Information, v.28 no.8, pp.31-38, 2023
  • Style transfer is one of the deep learning-based image processing techniques that has been actively researched recently, and these research efforts have led to significant improvements in the quality of result images. Style transfer takes a content image and a style image as inputs and generates a transformed result image by applying the characteristics of the style image to the content image. It is becoming increasingly important for exploiting the diversity of digital content, and ensuring stable performance is crucial to improving its usability. Recently, in the field of natural language processing, the Transformer has been actively utilized; attention maps, which form the basis of Transformers, are also being actively applied and researched in the development of style transfer techniques. In this paper, we analyze the representative techniques SANet and AdaAttN and propose a novel attention map-based structure that generates improved style transfer results. The results demonstrate that the proposed technique effectively preserves the structure of the content image while applying the characteristics of the style image.
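
As background, the attention map shared by SANet- and AdaAttN-style methods lets each content location attend over style locations. The module below is a minimal PyTorch sketch of that mechanism; the 1x1 projections, scaling, and residual connection are simplifying assumptions, not either paper's exact architecture.

```python
# Minimal sketch of content-to-style attention as used in SANet/AdaAttN-like
# transfer. Shapes and normalization choices are simplified assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)  # query from content
        self.k = nn.Conv2d(channels, channels, 1)  # key from style
        self.v = nn.Conv2d(channels, channels, 1)  # value from style

    def forward(self, content, style):
        b, c, h, w = content.shape
        q = self.q(content).flatten(2).transpose(1, 2)  # (B, HWc, C)
        k = self.k(style).flatten(2)                    # (B, C, HWs)
        v = self.v(style).flatten(2).transpose(1, 2)    # (B, HWs, C)
        attn = F.softmax(q @ k / c ** 0.5, dim=-1)      # (B, HWc, HWs)
        out = (attn @ v).transpose(1, 2).view(b, c, h, w)
        return content + out  # residual keeps the content structure
```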

Exploring the Artistic Style of the Oriental Paintings (동양화의 예술적 스타일 탐구)

  • Li, Suli; Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference, 2019.05a, pp.475-478, 2019
  • Although neural style transfer has been applied successfully to certain types of artistic painting, it is less effective at transferring the style of Oriental paintings. In this paper, we explore three methods that are effective for transferring Oriental painting styles, take a representative network from each method for our experiments, and discuss how well each of the three methods transfers the style of Oriental paintings.

Super High-Resolution Image Style Transfer (초-고해상도 영상 스타일 전이)

  • Kim, Yong-Goo
    • Journal of Broadcast Engineering, v.27 no.1, pp.104-123, 2022
  • Style transfer based on neural networks provides very high quality results by reflecting the high-level structural characteristics of images, and has thereby recently attracted great attention. This paper deals with the resolution limit that GPU memory imposes on such neural style transfer. Because the network's receptive field has a fixed size, a gradient computation for style transfer performed on a partial image can be expected to produce the same result as a gradient computation over the entire image. Based on this idea, this paper analyzes each component of the style transfer loss function to derive the conditions required for partitioning and padding, and identifies which of the information needed for the gradient calculation depends on the entire input. By structuring that information as an auxiliary constant input to the partition-based gradient calculation, the paper develops a recursive algorithm for super high-resolution image style transfer. Since the proposed method partitions the input into pieces of a size the GPU can handle, it performs style transfer without the input-resolution limit imposed by GPU memory, and with such super high-resolution support it can render the distinctive style characteristics of fine details that can only be appreciated at super high resolution.
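
The core idea, processing tiles with padding sized to the receptive field so that only the valid center region is kept, can be illustrated independently of the paper's recursive machinery. Below is a minimal overlapping-tile sketch in PyTorch; the tile and pad sizes are arbitrary assumptions, and the auxiliary global-statistics handling the paper describes is omitted.

```python
# Conceptual sketch of partition-based processing for very large images:
# split into overlapping tiles (padding ~ receptive field), process each
# tile, and write back only the valid center region.
import torch

def process_tiled(image, step_fn, tile=512, pad=64):
    # image: (C, H, W) tensor; step_fn runs one transfer step on a tile.
    c, h, w = image.shape
    out = torch.empty_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y0, x0 = max(y - pad, 0), max(x - pad, 0)
            y1, x1 = min(y + tile + pad, h), min(x + tile + pad, w)
            result = step_fn(image[:, y0:y1, x0:x1].unsqueeze(0)).squeeze(0)
            # Copy back only the un-padded center region of the tile.
            out[:, y:min(y + tile, h), x:min(x + tile, w)] = \
                result[:, y - y0:y - y0 + min(tile, h - y),
                       x - x0:x - x0 + min(tile, w - x)]
    return out
```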

A Multi-domain Style Transfer by Modified Generator of GAN

  • Lee, Geum-Boon
    • Journal of the Korea Society of Computer and Information, v.27 no.7, pp.27-33, 2022
  • In this paper, we propose a novel generator architecture for multi-domain style transfer that generates a styled image by transferring a style onto a content image, rather than performing image-to-image translation. A latent vector and Gaussian noise are added to the GAN generator so that high quality images are generated while accounting for the characteristics of each domain's data distribution and preserving the features of the content data. With the proposed generator architecture, the networks are configured so that the content image learns the style of each domain well; applied to a domain composed of images of the four seasons, the method shows high resolution style transfer results.
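
A rough sketch of the kind of generator modification described, a per-domain latent vector broadcast over the feature map plus Gaussian noise injection, is given below. The layer sizes, injection points, and noise scale are assumptions for illustration, not the paper's architecture.

```python
# Sketch of a generator conditioned on a domain latent vector with Gaussian
# noise injection. Sizes and injection points are illustrative assumptions.
import torch
import torch.nn as nn

class MultiDomainGenerator(nn.Module):
    def __init__(self, channels=64, latent_dim=16, num_domains=4):
        super().__init__()
        self.embed = nn.Embedding(num_domains, latent_dim)  # one code per domain
        self.enc = nn.Conv2d(3, channels, 3, padding=1)
        self.fuse = nn.Conv2d(channels + latent_dim, channels, 3, padding=1)
        self.dec = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, content, domain):
        h = torch.relu(self.enc(content))
        # Broadcast the domain latent over spatial dims and fuse it in.
        z = self.embed(domain)[:, :, None, None].expand(-1, -1, *h.shape[2:])
        h = torch.relu(self.fuse(torch.cat([h, z], dim=1)))
        h = h + 0.1 * torch.randn_like(h)  # Gaussian noise injection
        return torch.tanh(self.dec(h))
```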

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen; Rong Li
    • Journal of Information Processing Systems, v.19 no.5, pp.663-672, 2023
  • Most vehicle detection methods extract vehicle features poorly at night, reducing their robustness; hence, this study proposes a nighttime vehicle detection method based on style transfer image enhancement. First, a style transfer model is constructed using cycle generative adversarial networks (CycleGANs). The daytime data in the BDD100K dataset are converted into nighttime data to form a style dataset, which is then split using its labels. Finally, nighttime vehicle images are detected with a YOLOv5s network for reliable recognition of vehicle information in complex environments. Experimental results on the BDD100K dataset show that the transferred nighttime vehicle images are clear and meet the requirements; the precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
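
For reference, the CycleGAN objective behind such day-to-night conversion pairs two generators with a cycle-consistency term. The sketch below shows only that term; the generators, discriminators, and adversarial losses are omitted, and lambda_cyc is an illustrative value.

```python
# Cycle-consistency term of CycleGAN for day<->night conversion: G maps
# day->night, F_inv maps night->day, and each image should survive a
# round trip (measured with an L1 loss).
import torch.nn.functional as F

def cycle_loss(G, F_inv, day_batch, night_batch, lambda_cyc=10.0):
    fake_night = G(day_batch)
    fake_day = F_inv(night_batch)
    loss = F.l1_loss(F_inv(fake_night), day_batch) \
         + F.l1_loss(G(fake_day), night_batch)
    return lambda_cyc * loss
```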

Real-time Style Transfer for Video (실시간 비디오 스타일 전이 기법에 관한 연구)

  • Seo, Sang Hyun
    • Smart Media Journal, v.5 no.4, pp.63-68, 2016
  • Texture transfer is a method that transfers the texture of an input image onto a target image, and it is also used to transfer the artistic style of the input image. This study presents a real-time texture transfer method for generating artistically styled video. To enhance performance, this paper proposes a GPU-parallel framework built on the T-shaped kernel used in general texture transfer. To accelerate the motion computation required to maintain temporal coherence, a multi-scale motion field is likewise computed in parallel. Through these approaches, artistic texture transfer for video is achieved with real-time performance.
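
The coarse-to-fine motion field mentioned here can be sketched with off-the-shelf tools: estimate flow at a coarse scale, upsample, and refine at the next scale. The snippet below uses OpenCV's Farneback estimator purely as a stand-in for the paper's GPU-parallel computation.

```python
# Coarse-to-fine (multi-scale) motion field sketch: estimate flow at the
# coarsest level, then upsample and refine level by level. OpenCV's
# Farneback flow stands in for the paper's GPU-parallel estimator.
import cv2

def multiscale_flow(prev_gray, next_gray, levels=3):
    flow = None
    for lvl in reversed(range(levels)):
        scale = 1 / (2 ** lvl)
        p = cv2.resize(prev_gray, None, fx=scale, fy=scale)
        n = cv2.resize(next_gray, None, fx=scale, fy=scale)
        if flow is not None:
            # Upsample the coarser flow and scale its vectors accordingly.
            flow = 2.0 * cv2.resize(flow, (p.shape[1], p.shape[0]))
        flow = cv2.calcOpticalFlowFarneback(
            p, n, flow, pyr_scale=0.5, levels=1, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2,
            flags=cv2.OPTFLOW_USE_INITIAL_FLOW if flow is not None else 0)
    return flow
```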

Motion Style Transfer using Variational Autoencoder (변형 자동 인코더를 활용한 모션 스타일 이전)

  • Ahn, Jewon; Kwon, Taesoo
    • Journal of the Korea Computer Graphics Society, v.27 no.5, pp.33-43, 2021
  • In this paper, we propose a framework that transfers the information of style motions to content motions based on a variational autoencoder network combined with a style encoding in the latent space. Because we transfer a style to a content motion sampled from the variational autoencoder, we can increase the diversity of existing motion data. In addition, we mitigate the unnatural motions caused by decoding a new latent variable after style transfer; this improvement is achieved by additionally using the velocity information of motions when generating subsequent frames.
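
A toy version of a decoder conditioned on a style code, the latent-space idea this abstract describes, might look as follows; the pose dimensionality, concatenation-based conditioning, and single-frame treatment are simplifying assumptions (the paper's additional use of velocity information is omitted here).

```python
# Toy VAE whose decoder is conditioned on a style code: encode the content
# pose, then decode its latent together with a (possibly different) style.
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    def __init__(self, pose_dim=63, latent_dim=32, style_dim=8):
        super().__init__()
        self.enc = nn.Linear(pose_dim, 2 * latent_dim)  # outputs mu, logvar
        self.dec = nn.Linear(latent_dim + style_dim, pose_dim)

    def forward(self, pose, style_code):
        mu, logvar = self.enc(pose).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparam.
        # Style transfer: decode the content latent with a new style code.
        return self.dec(torch.cat([z, style_code], dim=-1)), mu, logvar
```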

Temporal Transfer of Locomotion Style

  • Kim, Yejin; Kim, Myunggyu; Neff, Michael
    • ETRI Journal, v.37 no.2, pp.406-416, 2015
  • Timing plays a key role in expressing the qualitative aspects of a character's motion; that is, conveying emotional state, personality, and character role, all potentially without changing spatial positions. Temporal editing of locomotion style is particularly difficult for a novice animator, since observers are not well attuned to the sense of weight and energy conveyed by motion timing, and the interface for adjusting timing is far less intuitive than that for adjusting pose. In this paper, we propose an editing system that captures the timing variations in an example locomotion set and uses them for style transfer from one motion to another via both global and upper-body timing transfers. The global timing transfer matches the input motion to the body speed of the selected example motion, while the upper-body timing transfer propagates the sense of movement flow, or succession, through the torso and arms. Our transfer process is based on key times detected in the example set and on transferring the relative changes of joint rotation in the upper body from a timing source to an input target motion. We demonstrate that our approach is practical in an interactive application, where a set of short locomotion cycles can be applied to generate a longer sequence with continuously varied timing.
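
The global timing transfer can be pictured as piecewise-linear time warping between detected key times. The sketch below resamples a target motion so its key times land on a source's key times; key-time detection and the upper-body succession transfer are assumed given and are omitted.

```python
# Global timing transfer sketch: piecewise-linear time warping that maps
# the source timeline onto the target's key times, then resamples poses.
import numpy as np

def retime(motion, target_keys, source_keys):
    # motion: (T, D) array of poses; *_keys: increasing key-frame indices.
    # For each output frame t, find the target-motion time warp(t) such
    # that warp(source_keys[i]) == target_keys[i].
    warped_t = np.interp(np.arange(len(motion)), source_keys, target_keys)
    frames = np.arange(len(motion))
    # Linearly resample every pose channel at the warped time positions.
    return np.stack([np.interp(warped_t, frames, motion[:, d])
                     for d in range(motion.shape[1])], axis=1)
```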