• Title/Summary/Keyword: video composition

Poisson Video Composition Using Shape Matching (형태 정합을 이용한 포아송 동영상 합성)

  • Heo, Gyeongyong; Choi, Hun; Kim, Jihong
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.4 / pp.617-623 / 2018
  • In this paper, we propose a novel seamless video composition method based on shape matching and the Poisson equation. The method consists of a video segmentation process and a video blending process. In the segmentation process, the user first sets a trimap for the first frame, and the grab-cut algorithm is then applied. Because segmentation performance may degrade when the color, brightness, and texture of the object and the background are similar, the object region segmented in the current frame is corrected through shape matching between the objects of the current frame and the previous frame. In the blending process, the object of the source video and the background of the target video are blended seamlessly using the Poisson equation, and the object is placed along a movement path set by the user. Simulation results show that the proposed method performs better in terms of both the naturalness of the composite video and computational time.
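
To make the two-stage pipeline concrete, here is a minimal Python/OpenCV sketch of grab-cut segmentation, Hu-moment shape matching between consecutive frames, and Poisson blending. It is an illustrative reconstruction under stated assumptions (a rectangle seed instead of a full trimap, a fixed shape-distance tolerance), not the authors' implementation.

```python
import cv2
import numpy as np

def segment_object(frame, rect, iters=5):
    """Grab-cut segmentation seeded by a user rectangle (stand-in for a trimap)."""
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, rect, bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)
    # Pixels marked (probably) foreground form the object mask.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)

def shape_consistent(prev_mask, cur_mask, tol=0.1):
    """Compare object shapes of consecutive frames via Hu-moment matching."""
    d = cv2.matchShapes(prev_mask, cur_mask, cv2.CONTOURS_MATCH_I1, 0.0)
    return d < tol

def compose(src_frame, dst_frame, mask, center):
    """Poisson (seamless) blending of the source object into the target frame."""
    return cv2.seamlessClone(src_frame, dst_frame, mask, center, cv2.NORMAL_CLONE)
```

In a full pipeline, a failed `shape_consistent` check would trigger correction of the current mask using the previous frame's object shape, as the abstract describes.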

Multi-View Video Processing: IVR, Graphics Composition, and Viewer

  • Kwon, Jun-Sup; Hwang, Won-Young; Choi, Chang-Yeol; Chang, Eun-Young; Hur, Nam-Ho; Kim, Jin-Woong; Kim, Man-Bae
    • Journal of Broadcast Engineering / v.12 no.4 / pp.333-341 / 2007
  • Multi-view video has recently attracted much attention from academic and commercial fields because it can deliver immersive viewing of natural scenes. This paper presents a multi-view video processing pipeline composed of intermediate view reconstruction (IVR), graphics composition, and a multi-view video viewer. First, we generate virtual views between multi-view cameras using the depth and texture images of the input videos. Then we mix graphics objects into the generated view images. The multi-view video viewer is developed to examine the reconstructed and composite images; it can also provide users with special effects for multi-view video. We present experimental results that validate the proposed method and show that graphics objects can become an integral part of the multi-view video.
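
As a rough illustration of the IVR step, the following numpy sketch forward-warps one camera's texture into an intermediate view using per-pixel depth, under a rectified, horizontal-baseline assumption; the disparity model and the `f_times_b` constant are simplifying assumptions, not the paper's renderer.

```python
import numpy as np

def render_virtual_view(texture, depth, baseline_frac, f_times_b=2000.0):
    """Forward-warp a texture into a virtual view between two rectified cameras.

    disparity = f * B / depth; shifting by a fraction of the full disparity
    places the virtual view between the two physical cameras.
    """
    h, w = depth.shape
    out = np.zeros_like(texture)
    disparity = f_times_b / np.maximum(depth.astype(np.float32), 1e-3)
    xs = np.arange(w)
    for y in range(h):
        x_new = np.clip((xs + baseline_frac * disparity[y]).astype(int), 0, w - 1)
        # Later writes win here; a z-buffer would resolve occlusions properly.
        out[y, x_new] = texture[y, xs]
    return out
```

Graphics composition then amounts to overlaying rendered objects on `out`, with holes from disoccluded regions filled before mixing.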

Intelligent Composition of CG and Dynamic Scene (CG와 동영상의 지적합성)

  • 박종일; 정경훈; 박경세; 송재극
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1995.06a / pp.77-81 / 1995
  • Video composition integrates multiple image materials into one scene, considerably enhancing the degree of freedom in producing various scenes. However, the viewing points and image planes of the image materials must be adjusted for high-quality video composition. In this paper, we propose an intelligent video composition technique concentrating on the composition of CG and real scenes. We first model the camera system: the projection is assumed to be perspective, and the camera motion is assumed to consist of 3D rotation and 3D translation. We then automatically extract the camera parameters of this model from the real scene using a dedicated algorithm. After that, the CG scene is generated according to the camera parameters of the real scene. Finally, the two are composed into one scene. Experimental results justify the validity of the proposed method.
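
The assumed camera model (perspective projection with 3D rotation and translation) can be written compactly; once its parameters are estimated from the real footage, CG geometry projected with the same model aligns with the scene. A minimal sketch, with placeholder parameter values:

```python
import numpy as np

def project(points_3d, R, t, f, cx, cy):
    """Perspective projection of CG points with the estimated camera pose."""
    cam = points_3d @ R.T + t           # world -> camera coordinates
    x = f * cam[:, 0] / cam[:, 2] + cx  # perspective divide
    y = f * cam[:, 1] / cam[:, 2] + cy
    return np.stack([x, y], axis=1)

# Example: once R, t, f are estimated from the real scene, CG geometry
# projected this way lines up with the footage before compositing.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
cube = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
print(project(cube, R, t, f=800.0, cx=320.0, cy=240.0))
```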

Automatic Object Segmentation and Background Composition for Interactive Video Communications over Mobile Phones

  • Kim, Daehee; Oh, Jahwan; Jeon, Jieun; Lee, Junghyun
    • IEIE Transactions on Smart Processing and Computing / v.1 no.3 / pp.125-132 / 2012
  • This paper proposes an automatic object segmentation and background composition method for video communication over consumer mobile phones. The object regions are extracted based on the motion and color variance of the first two frames. To combine the motion and variance information, the Euclidean distance between each motion-boundary pixel and the neighboring color-variance edge pixels is calculated, and the nearest edge pixel is labeled as the object boundary. The labeling results are refined using morphology for a more accurate and natural-looking boundary. The grow-cut segmentation algorithm begins with the expanded label map, whose inner and outer boundaries belong to the foreground and background, respectively. The segmented object region and a new background image stored a priori in the mobile phone are then composed. In the background composition process, the background motion is measured using optical flow, and the final result is synthesized by accurately locating the object region according to the motion information. This study can be considered an extended, improved version of the existing background composition algorithm in that it considers motion information in a video. The proposed segmentation algorithm reduces the computational complexity significantly by choosing the minimum resolution at each segmentation step. The experimental results show that the proposed algorithm generates a fast, accurate, and natural-looking background composition.
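
A compressed sketch of two of the stages, using OpenCV: a motion mask from the first two frames refined by morphology, and background composition offset by optical-flow motion. The thresholds, the Farneback flow parameters, and the global-translation motion model are assumptions for illustration, not the paper's exact algorithm.

```python
import cv2
import numpy as np

def object_mask_from_motion(frame0, frame1, motion_thresh=25):
    """Rough object mask from the first two frames: frame difference + cleanup."""
    g0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g0, g1)
    _, mask = cv2.threshold(diff, motion_thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # morphology refinement

def compose_on_background(frame, prev_frame, mask, background):
    """Place the segmented object on a stored background, offset by the
    background motion measured with dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(
        cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    bg = mask == 0
    dx, dy = flow[..., 0][bg].mean(), flow[..., 1][bg].mean()
    out = background.copy()
    ys, xs = np.nonzero(mask)
    out[np.clip(ys + int(dy), 0, out.shape[0] - 1),
        np.clip(xs + int(dx), 0, out.shape[1] - 1)] = frame[ys, xs]
    return out
```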

A Functional Modeling of Composition Manager for Service Composition Based on TINA (개방형 정보통신망 기반의 서비스 컴포지션을 위한 컴포지션 관리자 모델링)

  • 신영석; 임선환
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.2 / pp.344-351 / 2004
  • This paper describes a model of a service composition manager based on TINA (Telecommunication Information Networking Architecture). The service composition function is mainly motivated by the desire to easily generate new services from existing services offered by retailers or third-party service providers. The TINA-C (Consortium) specification for service composition does not include detailed composition procedures or their object models. In this paper, we propose a model of components for service composition that adopts a static composition feature in a single-provider domain. To validate the proposed model, we implemented a prototype service composition function that combines two multimedia services: a VOD (Video on Demand) service and a VCS (Video Conference Service). As a result, we obtain a specification of the detailed composition architecture between a retailer domain and a third-party service provider domain.
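
TINA-C specifies its interfaces in ODL/IDL rather than code; the following Python sketch only illustrates the static-composition idea in a single provider domain, with one composition manager binding two existing service sessions. All class and method names here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceSession:
    """Stand-in for a service session offered by a provider (names invented)."""
    name: str
    def start(self): print(f"{self.name}: session started")
    def stop(self): print(f"{self.name}: session stopped")

@dataclass
class CompositionManager:
    """Statically composes existing services into one new service offering."""
    components: list = field(default_factory=list)
    def compose(self, *sessions: ServiceSession):
        self.components.extend(sessions)  # static: fixed before service start
    def start(self):
        for s in self.components:
            s.start()

# Composite service combining the two multimedia services from the paper.
manager = CompositionManager()
manager.compose(ServiceSession("VOD"), ServiceSession("VCS"))
manager.start()
```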

Development of Online Video Mash-up System based on Automatic Scene Elements Composition using Storyboard (스토리보드에 따라 장면요소를 자동 조합하는 주제모델링 기반 온라인 비디오 매쉬업 시스템 개발)

  • Park, Jongbin; Kim, Kyung-Won; Jung, Jong-Jin; Lim, Tae-Beom
    • Journal of Broadcast Engineering / v.21 no.4 / pp.525-537 / 2016
  • In this paper, we develop an online video mash-up system that automatically composes scene elements according to a storyboard. There are two conventional online video production schemes. The video collage method is simple and easy, but it is difficult to reflect a narrative or story. The other is the template-based method, in which the user selects a template and replaces its resources, such as photos or videos. However, if no related template exists, the desired output cannot be created; in addition, the quality and atmosphere of the output depend heavily on the template. To solve these problems, we propose a video mash-up scheme driven by a storyboard, and we also implement a classification and recommendation scheme based on topic modeling.
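
The paper does not spell out its topic-modeling implementation; a minimal sketch with a common choice, scikit-learn's LatentDirichletAllocation, shows how scene-element descriptions could be classified and matched to storyboard slots. The documents and the two-topic setting are toy assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy metadata for candidate scene elements (texts are invented examples).
docs = [
    "sunrise beach waves intro calm",
    "city traffic night neon fast",
    "family picnic park laughter",
    "skyline drone aerial city dusk",
]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each element's topic mixture can be matched against the topic a
# storyboard slot asks for, and the best-scoring clip recommended.
print(lda.transform(X).round(2))
```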

Hybrid Blending for Video Composition (동영상 합성을 위한 혼합 블랜딩)

  • Kim, Jihong; Heo, Gyeongyong
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.2 / pp.231-237 / 2020
  • In this paper, we provide an efficient hybrid video blending scheme to improve the naturalness of composite video in Poisson equation-based composition methods. In the image blending process, various blending methods are used depending on the purpose of the composition. The hybrid blending method proposed in this paper leaves no seam in the composite video and reduces the color distortion of the object by properly combining the advantages of Poisson blending and alpha blending. First, the source object is blended using Poisson blending, and the color difference between the blended object and the original object is measured. If the color difference is equal to or greater than a threshold, the object of the source video is alpha-blended and combined with the Poisson-blended object. Simulation results show that the proposed method not only achieves better naturalness than Poisson blending or alpha blending alone, but also requires a relatively small amount of computation.
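
The decision rule in the abstract condenses to a few lines: Poisson-blend first, measure the color shift inside the object region, and mix in the alpha-blended original when the shift is too large. The mean-RGB metric, the threshold, and the 0.5 alpha below are assumptions, and the source and target frames are assumed to be the same size so one mask indexes both.

```python
import cv2
import numpy as np

def hybrid_blend(src, dst, mask, center, color_thresh=20.0, alpha=0.5):
    """Hybrid of Poisson and alpha blending, as described in the abstract."""
    poisson = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
    region = mask > 0
    # Mean color distortion introduced by Poisson blending in the object region.
    # NOTE: assumes src and dst have equal size so the mask indexes both.
    diff = np.abs(poisson[region].astype(np.float32)
                  - src[region].astype(np.float32)).mean()
    if diff < color_thresh:
        return poisson
    out = poisson.copy()
    out[region] = (alpha * src[region]
                   + (1 - alpha) * poisson[region]).astype(np.uint8)
    return out
```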

Robust Estimation of Camera Parameters from Video Signals for Video Composition (영상합성을 위한 영상으로부터의 견실한 카메라피라미터 확정법)

  • 박종일; 이충웅
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.10 / pp.1305-1313 / 1995
  • In this paper, we propose a robust method for estimating camera parameters from an image sequence for high-quality video composition. We first establish correspondences of feature points between consecutive image fields. We then formulate a nonlinear least-squares data-fitting problem. When the image sequence contains moving objects, and/or when correspondence establishment fails for some feature points, we obtain bad observations (outliers), which must be properly eliminated for a good estimate. We therefore propose an iterative algorithm that alternates between rejecting the outliers and fitting the camera parameters. We show the validity of the proposed method using computer-generated data sets and real image sequences.
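
The alternation between fitting and outlier rejection can be sketched with scipy.optimize.least_squares; a line model stands in here for the paper's camera-parameter model, and the 2.5-sigma rejection rule is a placeholder, not the authors' criterion.

```python
import numpy as np
from scipy.optimize import least_squares

def robust_fit(residual_fn, x0, data, n_rounds=5, sigma_k=2.5):
    """Alternate least-squares fitting with residual-based outlier rejection."""
    inliers = np.ones(len(data), dtype=bool)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_rounds):
        res = least_squares(lambda p: residual_fn(p, data[inliers]), x)
        x = res.x
        r = np.abs(residual_fn(x, data))          # residuals on ALL observations
        inliers = r < sigma_k * r[inliers].std()  # reject gross outliers
    return x, inliers

# Toy stand-in for the camera-parameter model: fit a line y = a*x + b to
# correspondences contaminated by outliers (moving objects, bad matches).
rng = np.random.default_rng(0)
xs = np.linspace(0, 1, 100)
ys = 2.0 * xs + 1.0 + 0.01 * rng.standard_normal(100)
ys[::10] += 5.0                                   # gross outliers
data = np.stack([xs, ys], axis=1)
resid = lambda p, d: d[:, 1] - (p[0] * d[:, 0] + p[1])
params, inl = robust_fit(resid, [0.0, 0.0], data)
print(params)  # close to [2.0, 1.0] once outliers are rejected
```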

Seamless Video Switching System for Service Compatible 3DTV Broadcasting

  • Kim, Sangjin; Jeon, Taehyun
    • ETRI Journal / v.38 no.5 / pp.847-857 / 2016
  • Broadcasting services such as multi/single-channel HDTV and 3DTV/2DTV use a multi-channel encoder that changes the bitrate and composition of the video service over time. However, this type of multi-channel encoder can cause longer latency owing to the variable bitrate and the relatively large buffers, which results in the same delay as 3DTV even for a conventional DTV service. On the other hand, systems built from separate encoders, each optimized for its target service, do not have such latency problems. Nevertheless, image and sound distortion can occur at the moment of switchover between two encoders with different output bitrates and group-of-pictures (GOP) structures. This paper proposes a system that realizes seamless video service conversion using two video encoders, each optimized for its video service. An overall functional description of the video service change control server, the main control block of the proposed system, is also provided. The experimental results confirm seamless switchover and reduced broadcasting latency for DTV services compared with a broadcasting system built around a multi-channel encoder.
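
Conceptually, glitch-free conversion means honoring a pending switch request only when the target stream reaches a clean random-access point (a GOP boundary), so the decoder never sees a mid-GOP cut. A toy sketch of such a controller, with all names invented; the paper's video service change control server is considerably more involved.

```python
from dataclasses import dataclass

@dataclass
class EncodedFrame:
    pts: int          # presentation timestamp
    keyframe: bool    # True at a GOP boundary (e.g., an IDR frame)
    payload: bytes

class SwitchController:
    """Emit frames from the active encoder; switch only at a GOP boundary."""
    def __init__(self):
        self.active = "2D"
        self.pending = None  # requested target service, e.g. "3D"

    def request_switch(self, target):
        self.pending = target

    def route(self, frames_by_service):
        frame = frames_by_service[self.active]
        # Honor a pending request only when the target stream is at a
        # keyframe, so downstream decoders start on a random-access point.
        if self.pending and frames_by_service[self.pending].keyframe:
            self.active, self.pending = self.pending, None
            frame = frames_by_service[self.active]
        return frame
```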

Extensible Hierarchical Method of Detecting Interactive Actions for Video Understanding

  • Moon, Jinyoung; Jin, Junho; Kwon, Yongjin; Kang, Kyuchang; Park, Jongyoul; Park, Kyoung
    • ETRI Journal / v.39 no.4 / pp.502-513 / 2017
  • For video understanding, namely analyzing who did what in a video, actions along with objects are the primary elements. Most studies on actions have addressed recognition for well-trimmed videos and focused on improving classification performance. However, action detection, which includes localization as well as recognition, is required because actions generally intersect in time and space. In addition, most studies have not considered extensibility to newly added actions beyond those previously trained. Therefore, this paper proposes an extensible hierarchical method for detecting generic actions, which combine object movements and the spatial relations between two objects, and inherited actions, which are determined from the related objects through an ontology- and rule-based methodology. The hierarchical design enables the method to detect any interactive action based on the spatial relations between two objects. Using object information, the method achieves an F-measure of 90.27%. Moreover, this paper describes the extensibility of the method to a new action contained in a video from a domain different from that of the dataset used.
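
A toy sketch of the two-level design: a generic action is derived from two object tracks and the change in their spatial relation, and a rule table standing in for the ontology specializes it by object class. The relation test, the rules, and the tracks below are invented for illustration.

```python
def spatial_relation(box_a, box_b):
    """Coarse spatial relation between two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    overlap = not (ax + aw < bx or bx + bw < ax or
                   ay + ah < by or by + bh < ay)
    return "touching" if overlap else "apart"

def generic_action(track_a, track_b):
    """Generic action from object movement plus the relation's change."""
    before = spatial_relation(track_a[0], track_b[0])
    after = spatial_relation(track_a[-1], track_b[-1])
    if before == "apart" and after == "touching":
        return "approach_and_contact"
    return "no_interaction"

# Rule table standing in for the ontology: a generic action is inherited
# as a specific action depending on the classes of the objects involved.
RULES = {("person", "cup", "approach_and_contact"): "pick_up",
         ("person", "door", "approach_and_contact"): "open"}

track_person = [(0, 0, 10, 20), (42, 0, 10, 20)]
track_cup = [(50, 5, 5, 5), (50, 5, 5, 5)]
g = generic_action(track_person, track_cup)
print(RULES.get(("person", "cup", g), g))  # -> "pick_up"
```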