• Title/Summary/Keyword: Video Texture Transfer

Real-time Style Transfer for Video (실시간 비디오 스타일 전이 기법에 관한 연구)

  • Seo, Sang Hyun
    • Smart Media Journal / v.5 no.4 / pp.63-68 / 2016
  • Texture transfer is a method that transfers the texture of an input image onto a target image, and it is also used to transfer the artistic style of the input image. This study presents a real-time texture transfer technique for generating artistically styled video. To improve performance, the paper proposes a GPU-parallel framework built around the T-shaped kernel used in conventional texture transfer. To accelerate the motion computation required for maintaining temporal coherence, a multi-scale motion field is likewise computed in parallel. Through this approach, artistic texture transfer for video is achieved at real-time rates.
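
As a rough illustration of the T-shaped-kernel matching this abstract builds on, below is a minimal, single-threaded NumPy sketch of greedy per-pixel texture transfer with a causal T-shaped neighborhood. It assumes grayscale float images; the paper's GPU parallelization and multi-scale motion field are not reproduced, and every function name and parameter here is illustrative.

```python
import numpy as np

def t_offsets(r=2):
    """Causal T-shaped neighborhood: the r rows above, plus the r
    already-synthesized pixels to the left on the current row."""
    offs = [(dy, dx) for dy in range(-r, 0) for dx in range(-r, r + 1)]
    offs += [(0, dx) for dx in range(-r, 0)]
    return offs

def texture_transfer(source, target, alpha=0.5, r=2):
    """Greedy scanline texture transfer on grayscale images: each output
    pixel copies the source pixel whose T-shaped neighborhood best matches
    the output synthesized so far, blended with a luminance term that
    preserves the target's structure (alpha trades texture vs. structure)."""
    offs = t_offsets(r)
    src = source.astype(np.float32)
    out = target.astype(np.float32).copy()
    # Candidate source positions whose whole neighborhood stays in bounds
    cy, cx = np.mgrid[r:src.shape[0] - r, r:src.shape[1] - r]
    cy, cx = cy.ravel(), cx.ravel()
    # Precompute every candidate's T-shaped neighborhood once
    n_src = np.stack([src[cy + dy, cx + dx] for dy, dx in offs], axis=1)
    src_center = src[cy, cx]
    for y in range(r, out.shape[0] - r):
        for x in range(r, out.shape[1] - r):
            n_out = np.array([out[y + dy, x + dx] for dy, dx in offs])
            d_tex = ((n_src - n_out) ** 2).mean(axis=1)
            d_lum = (src_center - out[y, x]) ** 2
            best = np.argmin(alpha * d_tex + (1.0 - alpha) * d_lum)
            out[y, x] = src_center[best]
    return out
```

The inner best-match search is independent per pixel block, which is the part a GPU implementation such as the paper's would parallelize.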

Texture Transfer Based on Video (비디오 기반의 질감 전이 기법)

  • Kong, Phutphalla;Lee, Ho-Chang;Yoon, Kyung-Hyun
    • Proceedings of the Korean Information Science Society Conference / 2012.06c / pp.406-407 / 2012
  • Texture transfer is an NPR (non-photorealistic rendering) technique for expressing various styles according to a source (reference) image. By the late 2000s many texture transfer studies had appeared, but video-based work remained scarce, and existing methods did not exploit important features such as directional information, which is needed to express the detailed characteristics of the target. We therefore propose a new method for generating texture-transfer animation from video that maintains temporal coherence and controls the direction of the transferred texture. To maintain temporal coherence, we use optical flow together with a confidence map that adapts to occlusion/disocclusion boundaries, and we steer the texture direction to follow the structure of the input. To express different texture effects in different regions, we compute gradients with a directional weight. With these techniques, our algorithm produces animation that maintains temporal coherence, exhibits directional texture effects, reflects the characteristics of both the source and target images well, and expresses various texture directions automatically.
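
To make the temporal-coherence step concrete, here is a minimal sketch that warps the previously stylized frame along dense optical flow and keeps warped pixels only where a forward-backward consistency check deems the flow reliable, playing the same role as the confidence map at occlusion/disocclusion boundaries described above. OpenCV's Farneback flow is a stand-in for the paper's flow; all names and thresholds are illustrative.

```python
import cv2
import numpy as np

def warp_with_confidence(prev_gray, curr_gray, prev_stylized, curr_stylized, tol=1.0):
    """Blend the flow-warped previous stylized frame with the freshly
    stylized current frame, trusting the warp only where flow is consistent."""
    fwd = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Where each current pixel came from in the previous frame
    map_x, map_y = xs + bwd[..., 0], ys + bwd[..., 1]
    warped = cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)
    # Forward-backward check: the round-trip flow should cancel out;
    # large residuals mark occlusions/disocclusions (low confidence)
    fwd_at = cv2.remap(fwd, map_x, map_y, cv2.INTER_LINEAR)
    err = np.linalg.norm(fwd_at + bwd, axis=2)
    confidence = (err < tol).astype(np.float32)[..., None]
    # Confident pixels reuse the warped result; others take fresh synthesis
    return (confidence * warped + (1 - confidence) * curr_stylized).astype(np.uint8)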

Dynamic Reconstruction Algorithm of 3D Volumetric Models (3D 볼류메트릭 모델의 동적 복원 알고리즘)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.2 / pp.207-215 / 2022
  • The high geometric accuracy and realism of the latest volumetric capture technology ensure a close correspondence between a real object and its captured 3D model. Nevertheless, because each frame of such a sequence is reconstructed as a completely independent 3D model, the consistency of the model's surface structure (geometry) is not guaranteed from frame to frame, the vertex density is very high, and the edge connectivity becomes very complicated. Models created this way differ fundamentally from those produced in film or video game production pipelines and are not suitable for direct use in applications such as real-time rendering, animation, simulation, and compression. In contrast, our method achieves consistent quality across a volumetric 3D model sequence by combining re-meshing, which enforces a consistent surface structure between frames, with gradual deformation and texture transfer through correspondence and matching of non-rigid surfaces. It thereby maintains consistent sequence quality and automates post-processing.
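
The surface-consistency idea can be sketched as follows: rather than keeping independent per-frame meshes, one template mesh with fixed connectivity is deformed toward each new frame's scan. The snippet below uses a crude closest-point pull with Laplacian smoothing as a stand-in for the paper's non-rigid correspondence and matching; every name and parameter is an assumption, and the re-meshing and texture-transfer stages are not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def track_template(template_v, scan_points, neighbors, iters=10, step=0.5, smooth=0.3):
    """template_v: (N, 3) vertices of the tracked mesh (connectivity fixed).
    scan_points: (M, 3) array of points from the next frame's independent scan.
    neighbors: list where neighbors[i] is the index array of vertex i's 1-ring.
    Returns deformed vertices in the same order/connectivity as the template."""
    v = template_v.astype(np.float64).copy()
    tree = cKDTree(scan_points)
    for _ in range(iters):
        # Data term: pull each vertex part-way toward its closest scan point
        _, idx = tree.query(v)
        v += step * (scan_points[idx] - v)
        # Regularizer: Laplacian smoothing keeps the deformation coherent
        centroids = np.stack([v[n].mean(axis=0) for n in neighbors])
        v += smooth * (centroids - v)
    return v
```

Because the vertex order never changes, texture coordinates assigned once on the template carry over to every frame, which is what makes per-frame texture transfer and compression tractable.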

Video-based Stained Glass

  • Kang, Dongwann;Lee, Taemin;Shin, Yong-Hyeon;Seo, Sanghyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2345-2358 / 2022
  • This paper presents a method to generate stained-glass animation from video inputs. The method initially segments an input video volume into several regions, considered as fragments of glass, by mean-shift segmentation. However, the segmentation predominantly results in over-segmentation, producing many tiny segments in highly textured areas. In practice, assembling significantly tiny or large glass fragments is avoided to ensure architectural stability in stained-glass manufacturing. Therefore, we use low-frequency components in the segmentation to prevent over-segmentation and subdivide segmented regions that are oversized. The subdivision must be coherent between adjacent frames to prevent temporal artefacts such as flickering and the shower-door effect. To subdivide regions coherently over time, we obtain a panoramic image from the segmented regions of the input frames, subdivide it using a weighted Voronoi diagram, and then project the subdivided regions onto the input frames. To render a stained-glass fragment for each coherent region, we select the best-matching glass fragment from a dataset of real stained-glass fragment images and transfer its color and texture to the region. Finally, applying lead came at the boundaries of the regions in each frame yields temporally coherent stained-glass animation.
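
As an illustration of the weighted Voronoi subdivision step, the sketch below splits one oversized region mask with an additively weighted (power) diagram: each region pixel goes to the seed minimizing squared distance minus a per-seed weight. Seed placement and weights are illustrative assumptions, not the paper's choices, and the panoramic projection that keeps the subdivision temporally coherent is not shown.

```python
import numpy as np

def subdivide_region(mask, n_seeds=8, seed=0):
    """mask: boolean HxW array marking one over-large glass region.
    Returns an int32 label map: fragment index per pixel, -1 outside."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(mask)
    pix = np.stack([ys, xs], axis=1).astype(np.float32)
    # Random seeds inside the region; weights bias fragment sizes
    picks = rng.choice(len(pix), size=n_seeds, replace=False)
    seeds, weights = pix[picks], rng.uniform(0.0, 30.0, n_seeds)
    # Power-diagram assignment: argmin_i ||p - s_i||^2 - w_i
    d2 = ((pix[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2) - weights
    labels = np.full(mask.shape, -1, dtype=np.int32)
    labels[ys, xs] = np.argmin(d2, axis=1)
    return labels
```

Larger weights enlarge a seed's cell, which gives a simple handle on keeping fragment sizes within the manufacturable range the abstract mentions.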

Mosaic Technique on Panning Video Images using Interpolation Search (보간 검색을 이용한 Panning 비디오 영상에서의 모자이크 기법)

  • Jang, Sung-Gab;Kim, Jae-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.63-72 / 2005
  • This paper proposes a new method for constructing a panorama image from video sequences captured by a camcorder rotating about the central axis of a tripod. The proposed method consists of two algorithms: frame selection and image mosaicking. To select the frames used to build the panorama, we employ an interpolation search driven by the information in the overlapped areas, which finds suitable frames quickly. We then construct the image mosaic using a projective transform induced from four pairs of quasi-features. Conventional methods select feature points using texture information only, whereas the method presented here also uses the position of each feature point. Experiments on real video sequences show that the proposed method yields better image quality than the conventional one.
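
To illustrate the interpolation-search idea for frame selection, here is a minimal sketch that assumes the overlap with the reference frame shrinks roughly linearly as the camera pans, and homes in on the frame whose overlap is closest to a target ratio. `overlap()` stands in for the paper's overlapped-area measurement; all names and defaults are illustrative.

```python
def select_frame(frames, ref, overlap, target=0.5, tol=0.02):
    """frames: candidate frames after `ref`; overlap(a, b) returns a ratio
    in [0, 1]. Returns the index whose overlap is closest to `target`."""
    lo, hi = 0, len(frames) - 1
    while lo < hi:
        o_lo, o_hi = overlap(ref, frames[lo]), overlap(ref, frames[hi])
        if o_lo <= o_hi:            # monotonicity assumption broken: stop
            break
        # Interpolation step: linear guess of where `target` falls
        pos = lo + int(round((o_lo - target) / (o_lo - o_hi) * (hi - lo)))
        pos = min(max(pos, lo), hi)
        o = overlap(ref, frames[pos])
        if abs(o - target) <= tol:
            return pos
        if o > target:              # still too much overlap: try later frames
            lo = pos + 1
        else:                       # overshot: try earlier frames
            hi = pos - 1
    return max(0, min(lo, len(frames) - 1))
```

Against a linear scan over all frames, interpolation search needs only about O(log log n) overlap measurements when the overlap really does fall off near-linearly, which is what makes the frame selection fast.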