• Title/Summary/Keyword: Image Style Transfer


Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

  • Xing, Baixi; Dou, Jian; Huang, Qing; Si, Huahao
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.4, pp.1464-1485, 2021
  • The emotional style of multimedia artworks is abstract content information. This study explores an emotional style transfer method and investigates how music can be matched with appropriate images in terms of emotional style. Deep convolutional neural networks (DCNNs) can capture style and provide an iterative emotional style transfer solution for affective image generation. Here, we learn image emotion features via DCNNs and map the affective style onto other images. We set the image emotion feature as the style target in this style transfer problem and conducted experiments on affective image generation for eight emotion categories: dignified, dreaming, sad, vigorous, soothing, exciting, joyous, and graceful. A user study was conducted to test the synesthesia emotional image style transfer results against ground-truth user perception triggered by the music-image pair stimuli. According to the user study results, the transferred affective images proved effective for music-image emotional synesthesia perception.
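
For orientation, the sketch below shows the generic iterative (Gatys-style) optimization that such DCNN-based transfer builds on: a content image is optimized so that its VGG-19 Gram matrices match those of a style target, which here would be an image carrying the desired emotion. The layer indices, weights, and the omission of input normalization are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal Gatys-style iterative style transfer sketch (illustrative only).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

def gram(feat):                       # (B, C, H, W) -> (B, C, C) Gram matrix
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

style_layers, content_layers = {0, 5, 10, 19, 28}, {21}

def extract(x):
    # inputs assumed to be (1, 3, H, W) in [0, 1]; ImageNet normalization omitted for brevity
    feats, cur = {}, x
    for i, layer in enumerate(vgg):
        cur = layer(cur)
        if i in style_layers or i in content_layers:
            feats[i] = cur
    return feats

def transfer(content_img, emotion_style_img, steps=300, style_weight=1e6):
    target = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    c_feats = extract(content_img)
    s_feats = extract(emotion_style_img)
    for _ in range(steps):
        opt.zero_grad()
        t_feats = extract(target)
        c_loss = sum(F.mse_loss(t_feats[i], c_feats[i]) for i in content_layers)
        s_loss = sum(F.mse_loss(gram(t_feats[i]), gram(s_feats[i])) for i in style_layers)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return target.detach()
```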

Optimization of attention map based model for improving the usability of style transfer techniques

  • Junghye Min
    • Journal of the Korea Society of Computer and Information, v.28 no.8, pp.31-38, 2023
  • Style transfer is one of the deep learning-based image processing techniques that has been actively researched recently, and these research efforts have led to significant improvements in the quality of the resulting images. Style transfer takes a content image and a style image as inputs and generates a transformed result by applying the characteristics of the style image to the content image. It is becoming increasingly important for exploiting the diversity of digital content, and ensuring stable performance is crucial to improving its usability. Recently, the Transformer concept has been actively utilized in natural language processing, and the attention maps that form its basis are also being actively applied and researched in the development of style transfer techniques. In this paper, we analyze the representative techniques SANet and AdaAttN and propose a novel attention-map-based structure that generates improved style transfer results. The results demonstrate that the proposed technique effectively preserves the structure of the content image while applying the characteristics of the style image.
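
As background for the attention-map structures discussed above, the following is a minimal sketch of a SANet-like style-attention module: normalized content features act as queries and style features as keys/values, so each content position pulls in statistics from stylistically similar style positions. The layer sizes and normalization are illustrative assumptions, not the proposed architecture.

```python
# Sketch of a style-attention module in the spirit of SANet (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def mean_std_norm(x, eps=1e-5):
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True) + eps
    return (x - mean) / std

class StyleAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, content_feat, style_feat):
        b, c, h, w = content_feat.shape
        q = self.q(mean_std_norm(content_feat)).view(b, c, -1)   # (B, C, Nc)
        k = self.k(mean_std_norm(style_feat)).view(b, c, -1)     # (B, C, Ns)
        v = self.v(style_feat).view(b, c, -1)                    # (B, C, Ns)
        attn = F.softmax(q.transpose(1, 2) @ k, dim=-1)          # (B, Nc, Ns)
        fused = (v @ attn.transpose(1, 2)).view(b, c, h, w)      # style routed to content layout
        return content_feat + self.out(fused)
```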

A Multi-domain Style Transfer by Modified Generator of GAN

  • Lee, Geum-Boon
    • Journal of the Korea Society of Computer and Information, v.27 no.7, pp.27-33, 2022
  • In this paper, we propose a novel generator architecture for multi-domain style transfer, as distinct from image-to-image translation, that generates a styled image by transferring a style to a content image. A latent vector and Gaussian noise are added to the generator of a GAN so that high-quality images are generated while accounting for the characteristics of each domain's data distribution and preserving the features of the content data. With the proposed generator architecture, the networks are configured so that the content image can learn the style of each domain well, and the method is applied to a domain composed of images of the four seasons to demonstrate high-resolution style transfer results.
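
A rough sketch of the idea described in the abstract follows: a generator that conditions an encoded content image on a per-domain latent vector and injects Gaussian noise before decoding. The layer sizes and the way the latent is injected are illustrative guesses, not the paper's actual network.

```python
# Sketch of a multi-domain generator with latent conditioning and noise injection.
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.scale * torch.randn_like(x)   # learned-scale Gaussian noise

class MultiDomainGenerator(nn.Module):
    def __init__(self, latent_dim=64, base=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.ReLU(inplace=True),
        )
        self.latent_proj = nn.Linear(latent_dim, base * 2)   # per-domain style code
        self.noise = NoiseInjection()
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, content_img, domain_latent):
        feat = self.encode(content_img)
        style = self.latent_proj(domain_latent).unsqueeze(-1).unsqueeze(-1)
        feat = self.noise(feat + style)                       # latent + Gaussian noise
        return self.decode(feat)

# g = MultiDomainGenerator(); out = g(torch.randn(1, 3, 128, 128), torch.randn(1, 64))
```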

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen; Rong Li
    • Journal of Information Processing Systems, v.19 no.5, pp.663-672, 2023
  • Most vehicle detection methods extract vehicle features poorly at night, which reduces their robustness; hence, this study proposes a night vehicle detection method based on style transfer image enhancement. First, a style transfer model is constructed using cycle generative adversarial networks (CycleGANs), and the daytime data in the BDD100K dataset are converted into nighttime data to form a style dataset, which is then divided using its labels. Finally, nighttime vehicle images are detected with a YOLOv5s network for reliable recognition of vehicle information in complex environments. Experimental results on the BDD100K dataset show that the transferred night vehicle images are clear and meet the requirements: precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
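
The detection stage of such a pipeline can be sketched as follows, assuming a pre-trained CycleGAN day-to-night generator is available (loading it is not shown) and using the public ultralytics/yolov5 torch.hub interface; this is illustrative plumbing, not the paper's training setup.

```python
# Sketch: use a day-to-night generator for data augmentation, then run YOLOv5s.
import torch

def day_to_night(generator, day_batch):
    """Translate a batch of daytime images with a CycleGAN generator (placeholder model)."""
    generator.eval()
    with torch.no_grad():
        return generator(day_batch)            # (B, 3, H, W), typically in [-1, 1]

# Detection on the translated images (torch.hub download requires internet access).
yolo = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
# results = yolo(night_images)   # night_images: list of file paths or numpy arrays
# results.print()                # per-image detections (class, box, confidence)
```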

Super High-Resolution Image Style Transfer (초-고해상도 영상 스타일 전이)

  • Kim, Yong-Goo
    • Journal of Broadcast Engineering, v.27 no.1, pp.104-123, 2022
  • Neural style transfer provides very high-quality results by reflecting the high-level structural characteristics of images and has therefore recently attracted great attention. This paper addresses the resolution limit imposed by GPU memory when performing such neural style transfer. Because the receptive field has a fixed size, the gradient computation for style transfer on a partial image can be expected to produce the same result as the gradient computation over the entire image. Based on this idea, each component of the style transfer loss function is analyzed to derive the conditions required for partitioning and padding, and to identify which of the information needed for the gradient calculation depends on the entire input. By structuring this information as an auxiliary constant input for partition-based gradient calculation, this paper develops a recursive algorithm for super high-resolution image style transfer. Since the proposed method partitions the input image into sizes a GPU can handle, it performs style transfer without the input-resolution limit imposed by GPU memory, and can thus reveal the fine-grained style characteristics of detailed areas that can only be appreciated at super high resolution.
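
The partitioning idea can be illustrated roughly as below: the image is split into tiles padded by an assumed receptive-field margin, per-tile losses are backpropagated, and only each tile's interior gradient is written back. The loss_fn stub would have to carry the full-image statistics that the abstract calls an auxiliary constant input; the exact conditions derived in the paper are not reproduced here.

```python
# Sketch of tile-wise gradient accumulation for memory-bounded style transfer.
import torch

def tiled_style_gradient(image, tile=512, margin=64, loss_fn=None):
    """image: (1, 3, H, W) tensor; loss_fn(tile_tensor) -> scalar style-transfer loss."""
    _, _, H, W = image.shape
    grad = torch.zeros_like(image)
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            # pad each tile by the assumed receptive-field margin
            y0, x0 = max(y - margin, 0), max(x - margin, 0)
            y1, x1 = min(y + tile + margin, H), min(x + tile + margin, W)
            patch = image[:, :, y0:y1, x0:x1].detach().clone().requires_grad_(True)
            loss_fn(patch).backward()
            # keep only the gradient of the un-padded interior of this tile
            iy0, ix0 = y - y0, x - x0
            iy1, ix1 = iy0 + min(tile, H - y), ix0 + min(tile, W - x)
            grad[:, :, y:y + (iy1 - iy0), x:x + (ix1 - ix0)] = \
                patch.grad[:, :, iy0:iy1, ix0:ix1]
    return grad
```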

Real-time Style Transfer for Video (실시간 비디오 스타일 전이 기법에 관한 연구)

  • Seo, Sang Hyun
    • Smart Media Journal, v.5 no.4, pp.63-68, 2016
  • Texture transfer is a method for transferring the texture of an input image to a target image and is also used to transfer the artistic style of the input image. This study presents a real-time texture transfer method for generating artistic-style video. To enhance performance, this paper proposes a parallel framework that runs the T-shaped kernel used in general texture transfer on the GPU. To accelerate the motion computation required for maintaining temporal coherence, a multi-scale motion field is also computed in parallel. Through this approach, artistic texture transfer for video is achieved in real time.
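
One common way to realize the temporal-coherence step mentioned above is sketched below: a dense motion field between consecutive frames warps the previously stylized frame into the current frame's coordinates so the new stylization can stay consistent with it. OpenCV's pyramid-based Farneback flow stands in for the multi-scale motion field; the T-shaped-kernel GPU parallelization is not reproduced.

```python
# Sketch: warp the previous stylized frame along a dense motion field.
import cv2
import numpy as np

def warp_previous_stylized(prev_gray, cur_gray, prev_stylized):
    # backward flow (current -> previous): where each current pixel came from
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # previous stylized frame aligned to the current frame's coordinates
    return cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)
```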

A Normalized Loss Function of Style Transfer Network for More Diverse and More Stable Transfer Results (다양성 및 안정성 확보를 위한 스타일 전이 네트워크 손실 함수 정규화 기법)

  • Choi, Insung; Kim, Yong-Goo
    • Journal of Broadcast Engineering, v.25 no.6, pp.980-993, 2020
  • Deep learning-based style transfer has recently attracted great attention because it provides high-quality transfer results by appropriately reflecting the high-level structural characteristics of images. This paper addresses the problem of making the results of such deep learning-based style transfer more stable and more diverse. Based on experimental results over a wide range of hyper-parameter settings, this paper defines the stability and diversity problems of style transfer and proposes a partial loss normalization method to solve them. Style transfer using the proposed normalization not only stabilizes control over the degree of style reflection regardless of the input image characteristics, but also, unlike the existing method, yields diverse transfer results when the weights of the partial style losses are varied, and remains stable across differences in input image resolution.
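
One plausible form of partial-loss normalization, shown only for illustration, divides each layer's style term by the squared norm of its own Gram target so that a layer's effective weight no longer depends on the scale of the input images; the paper's actual normalizer may differ.

```python
# Sketch: per-layer style-loss normalization by the Gram target's magnitude.
import torch
import torch.nn.functional as F

def gram(feat):                       # (B, C, H, W) -> (B, C, C)
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def normalized_style_loss(target_feats, style_feats):
    loss = 0.0
    for t, s in zip(target_feats, style_feats):
        g_t, g_s = gram(t), gram(s)
        # divide by the style Gram's own scale so layer weights behave consistently
        loss = loss + F.mse_loss(g_t, g_s) / (g_s.pow(2).mean() + 1e-8)
    return loss
```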

Algorithm development for texture and color style transfer of cultural heritage images (문화유산 이미지의 질감과 색상 스타일 전이를 위한 알고리즘 개발 연구)

  • Baek Seohyun; Cho Yeeun; Ahn Sangdoo; Choi Jongwon
    • Conservation Science in Museum, v.31, pp.55-70, 2024
  • Style transfer algorithms are currently being actively researched and are used, for example, to convert ordinary images into classical painting styles. However, such algorithms have yet to produce satisfactory results when applied to Korean cultural heritage images, and the number of such applications also remains small. Accordingly, this study develops a style transfer algorithm that can be applied to styles found in Korean cultural heritage. The algorithm improves data comprehension by learning meaningful characteristics of the styles through representation learning, and separates the cultural heritage object from the background in the target image so that the style-relevant areas with the desired color and texture can be extracted from the style image. This study confirmed that, in this way, a new image can be created that effectively transfers the characteristics of the style image while maintaining the form of the target image, enabling the transfer of a variety of cultural heritage styles.
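
The region-restricted transfer described above can be sketched as a masked composite, assuming a segmentation mask and a stylized image are already available; the representation-learning and segmentation stages of the paper are not reproduced.

```python
# Sketch: apply the transferred style only inside the segmented heritage region.
import torch
import torch.nn.functional as F

def feather(mask, k=11):
    # soften the mask edge so the style blends smoothly into the background
    return F.avg_pool2d(mask, k, stride=1, padding=k // 2)

def masked_composite(content_img, stylized_img, mask):
    """mask: (1, 1, H, W) in [0, 1]; 1 = heritage object, 0 = background."""
    soft = feather(mask)
    return soft * stylized_img + (1.0 - soft) * content_img
```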

A Study for Generation of Artificial Lunar Topography Image Dataset Using a Deep Learning Based Style Transfer Technique (딥러닝 기반 스타일 변환 기법을 활용한 인공 달 지형 영상 데이터 생성 방안에 관한 연구)

  • Na, Jong-Ho; Lee, Su-Deuk; Shin, Hyu-Soung
    • Tunnel and Underground Space, v.32 no.2, pp.131-143, 2022
  • A lunar exploration autonomous vehicle operates based on lunar topography information obtained from real-time image characterization. Highly accurate topography characterization requires a large number of training images with various background conditions, but real lunar topography images are difficult to obtain, so it is helpful to be able to generate mimic lunar image data artificially from available planetary analog site images and real lunar images. In this study, we artificially create lunar topography images using the location-information-based style transfer algorithm known as wavelet-corrected transfer (WCT2). We conducted comparative experiments using lunar analog site images and real lunar topography images taken during China's and the United States' lunar exploration projects (Chang'e and Apollo) to assess the efficacy of the suggested approach. The results show that the proposed technique can create realistic images that preserve the topography information of the analog site image while appearing as if taken on the lunar surface. The proposed algorithm also outperforms a conventional algorithm, deep photo style transfer (DPST), in both processing time and visual quality. For future work, we intend to combine the generated styled images with real images to train models for lunar topographic detection and segmentation, which is expected to significantly improve detection and segmentation performance on real lunar topography images.
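
For context, the whitening-and-coloring transform (WCT) at the core of WCT2-style methods is sketched below: content features are decorrelated and then re-colored with the style feature covariance. The wavelet pooling and the location-information handling of WCT2 itself are not reproduced here.

```python
# Sketch of the whitening-and-coloring transform on encoder feature maps.
import torch

def whitening_coloring(content_feat, style_feat, eps=1e-5):
    """content_feat, style_feat: (C, H, W) feature maps from the same encoder layer."""
    c, h, w = content_feat.shape
    cf = content_feat.view(c, -1)
    sf = style_feat.view(c, -1)

    cf = cf - cf.mean(dim=1, keepdim=True)
    sf_mean = sf.mean(dim=1, keepdim=True)
    sf = sf - sf_mean

    # whitening: remove the content feature correlations
    c_cov = cf @ cf.t() / (cf.shape[1] - 1) + eps * torch.eye(c)
    e_c, v_c = torch.linalg.eigh(c_cov)
    whiten = v_c @ torch.diag(e_c.clamp(min=eps).rsqrt()) @ v_c.t()

    # coloring: impose the style feature correlations
    s_cov = sf @ sf.t() / (sf.shape[1] - 1) + eps * torch.eye(c)
    e_s, v_s = torch.linalg.eigh(s_cov)
    color = v_s @ torch.diag(e_s.clamp(min=eps).sqrt()) @ v_s.t()

    out = color @ (whiten @ cf) + sf_mean
    return out.view(c, h, w)
```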

CNN-based Opti-Acoustic Transformation for Underwater Feature Matching (수중에서의 특징점 매칭을 위한 CNN기반 Opti-Acoustic변환)

  • Jang, Hyesu; Lee, Yeongjun; Kim, Giseop; Kim, Ayoung
    • The Journal of Korea Robotics Society, v.15 no.1, pp.1-7, 2020
  • In this paper, we introduce a methodology that utilizes a deep learning-based front end to enhance underwater feature matching. Both optical cameras and sonar are widely used sensors in underwater research, but each has its own weaknesses, such as lighting conditions and turbidity for the optical camera and noise for sonar. To overcome these problems, we propose an opti-acoustic transformation method. Since feature detection in sonar images is challenging, we convert the sonar image into an optic-style image: while maintaining the main content of the sonar image, a CNN-based style transfer method changes the style of the image in a way that facilitates feature detection. Finally, we verify the result using cosine similarity comparison and feature matching against the original optical image.
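
The verification step mentioned at the end can be sketched as follows, assuming the sonar-to-optic conversion model is already available: standard feature detection and matching run between the converted image and a real optical image, with a cosine similarity score as a simple measure of overall agreement.

```python
# Sketch: feature matching and cosine-similarity verification with OpenCV/NumPy.
import cv2
import numpy as np

def match_features(converted_img, optic_img, max_matches=50):
    """converted_img, optic_img: grayscale uint8 images; returns the best ORB matches."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(converted_img, None)
    kp2, des2 = orb.detectAndCompute(optic_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return matches[:max_matches]

def cosine_similarity(vec_a, vec_b):
    # e.g. pooled deep-feature vectors of the two images
    a = vec_a.ravel().astype(np.float64)
    b = vec_b.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```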