• Title/Summary/Keyword: 비디오 객체 분할 (video object segmentation)

Object Segmentation/Detection through learned Background Model and Segmented Object Tracking Method using Particle Filter (배경 모델 학습을 통한 객체 분할/검출 및 파티클 필터를 이용한 분할된 객체의 움직임 추적 방법)

  • Lim, Su-chang; Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.8 / pp.1537-1545 / 2016
  • In real-time video sequences, object segmentation and tracking are actively applied in various application tasks such as surveillance systems, mobile robots, and augmented reality. This paper proposes a robust object tracking method. Background models are constructed by learning the initial part of each video sequence; the moving objects are then detected via object segmentation using background subtraction. The regions of the detected objects are continuously tracked using an HSV color histogram with a particle filter. The proposed segmentation method is superior to the average background model in terms of moving object detection. In addition, the proposed tracking method provides continuous tracking results even when multiple objects of similar color are present and severe occlusions occur among them. The experiments on two video sequences yielded an average object overlap rate of 85.9% and an average object tracking rate of 96.3%.
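
A minimal sketch of the HSV-histogram particle-filter tracking step described in the abstract above, assuming OpenCV and NumPy. The particle count, motion-noise scale, and likelihood mapping are illustrative assumptions rather than the authors' parameters, and the background-subtraction stage that would supply `init_box` is omitted.

```python
import cv2
import numpy as np

N_PARTICLES = 200   # assumed particle count
NOISE = 8.0         # assumed motion-noise standard deviation (pixels)

def hsv_hist(frame_hsv, box):
    x, y, w, h = box
    patch = frame_hsv[y:y + h, x:x + w]
    hist = cv2.calcHist([patch], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def track(frames, init_box):
    """frames: list of BGR images; init_box: (x, y, w, h) from the segmentation stage."""
    x, y, w, h = init_box
    H, W = frames[0].shape[:2]
    ref = hsv_hist(cv2.cvtColor(frames[0], cv2.COLOR_BGR2HSV), init_box)  # reference histogram
    particles = np.tile([float(x), float(y)], (N_PARTICLES, 1))
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        particles += np.random.normal(0.0, NOISE, particles.shape)   # predict: random walk
        particles[:, 0] = np.clip(particles[:, 0], 0, W - w - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, H - h - 1)
        weights = np.empty(N_PARTICLES)
        for i, (px, py) in enumerate(particles):
            cand = hsv_hist(hsv, (int(px), int(py), w, h))
            d = cv2.compareHist(ref, cand, cv2.HISTCMP_BHATTACHARYYA)  # histogram distance
            weights[i] = np.exp(-20.0 * d * d)                         # distance -> likelihood
        weights /= weights.sum()
        est_x, est_y = weights @ particles                             # weighted mean state
        particles = particles[np.random.choice(N_PARTICLES, N_PARTICLES, p=weights)]  # resample
        yield int(est_x), int(est_y), w, h
```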

Object-Based Video Segmentation Using Spatio-temporal Entropic Thresholding and Camera Panning Compensation (시공간 엔트로피 임계법과 카메라 패닝 보상을 이용한 객체 기반 동영상 분할)

  • 백경환; 곽노윤
    • Journal of the Korea Academia-Industrial cooperation Society / v.4 no.3 / pp.126-133 / 2003
  • This paper presents a morphological segmentation method for extracting moving objects from video sequences using global motion compensation and two-dimensional spatio-temporal entropic thresholding. First, global motion is compensated with a camera panning vector estimated in a hierarchical pyramid structure constructed by a wavelet transform. Second, the regions likely to contain the moving object between two consecutive frames are extracted block by block from the globally motion-compensated image using two-dimensional spatio-temporal entropic thresholding. A look-up table (LUT) is then built that classifies each block as a changed block, an uncertain block, or a stationary block according to the thresholding result. Next, by adaptively selecting the initial search layer and the search range with reference to the LUT, the proposed hierarchical block matching algorithm (HBMA) efficiently performs fast motion estimation and extracts the object-included region within the hierarchical pyramid structure. Finally, a thresholded gradient image is defined in the object-included region, and the morphological segmentation method is applied to that region pixel by pixel to extract the moving object contained in the video sequence. Computer simulations show that the proposed method provides relatively good segmentation results for moving objects and, in particular, reasonable results in edge areas with low contrast.
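
A rough sketch of the "compensate camera panning, then classify blocks" idea from the abstract above, assuming OpenCV and NumPy. Phase correlation stands in for the paper's wavelet-pyramid panning estimation, a plain mean-difference test stands in for two-dimensional spatio-temporal entropic thresholding, and the block size and thresholds are assumptions.

```python
import cv2
import numpy as np

BLOCK = 16                        # assumed block size
T_CHANGED, T_STATIC = 18.0, 6.0   # assumed per-block mean-difference thresholds

def classify_blocks(prev_gray, curr_gray):
    # 1) estimate the global panning vector (dx, dy) between the two frames
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(curr_gray))
    # 2) warp the previous frame to compensate the panning
    #    (the sign of the shift may need flipping depending on the convention)
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    comp = cv2.warpAffine(prev_gray, M, prev_gray.shape[::-1])
    diff = cv2.absdiff(curr_gray, comp)
    # 3) build the LUT that labels each block as changed / uncertain / stationary
    h, w = diff.shape
    lut = np.empty((h // BLOCK, w // BLOCK), dtype='U10')
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            m = diff[by * BLOCK:(by + 1) * BLOCK, bx * BLOCK:(bx + 1) * BLOCK].mean()
            lut[by, bx] = ('changed' if m > T_CHANGED
                           else 'stationary' if m < T_STATIC else 'uncertain')
    return lut
```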

An Automatic Segmentation Method for Video Object Plane Generation (비디오 객체 생성을 위한 자동 영상 분할 방법)

  • 최재각; 김문철; 이명호; 안치득; 김성대
    • Journal of Broadcast Engineering / v.2 no.2 / pp.146-155 / 1997
  • The new video coding standard MPEG-4 enables content-based functionalities. It requires a prior decomposition of sequences into video object planes (VOPs) so that each VOP represents a moving object. This paper addresses an image segmentation method for separating moving objects from the still background (non-moving area) in video sequences using a statistical hypothesis test. In the proposed method, three consecutive image frames are exploited and a hypothesis test is performed by comparing the two means of the two consecutive difference images, which results in a t-test. This hypothesis test yields a change detection mask that indicates moving areas (foreground) and non-moving areas (background). Moreover, an effective method for extracting …
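
A minimal sketch of the three-frame change detection test described in the abstract above, assuming NumPy and SciPy: a two-sample t-test compares the means of corresponding local windows in the two consecutive difference images. The window size and significance level are assumptions, not the paper's values.

```python
import numpy as np
from scipy import stats

WIN = 8          # assumed local window size
ALPHA = 0.01     # assumed significance level

def change_mask(f0, f1, f2):
    """f0, f1, f2: three consecutive grayscale frames (arrays of equal shape)."""
    d1 = np.abs(f1.astype(np.float64) - f0)   # first difference image
    d2 = np.abs(f2.astype(np.float64) - f1)   # second difference image
    h, w = d1.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - WIN + 1, WIN):
        for x in range(0, w - WIN + 1, WIN):
            a = d1[y:y + WIN, x:x + WIN].ravel()
            b = d2[y:y + WIN, x:x + WIN].ravel()
            # two-sample t-test on the means of the two local difference windows;
            # perfectly flat windows give a NaN p-value and stay background
            _, p = stats.ttest_ind(a, b, equal_var=False)
            mask[y:y + WIN, x:x + WIN] = p < ALPHA   # significant change -> foreground
    return mask
```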

A Robust Object Extraction Method for Immersive Video Conferencing (몰입형 화상 회의를 위한 강건한 객체 추출 방법)

  • Ahn, Il-Koo; Oh, Dae-Young; Kim, Jae-Kwang; Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.11-23 / 2011
  • In this paper, an accurate and fully automatic video object segmentation method is proposed for video conferencing systems in which real-time performance is required. The proposed method consists of two steps: 1) accurate object extraction on the initial frame, and 2) real-time object extraction from subsequent frames using the result of the first step. Object extraction on the initial frame starts by generating a cumulative edge map from the frame differences at the beginning of the sequence, since the initial shape of the foreground object can be estimated from the cumulative motion. This estimated shape is used to assign the object and background seeds needed for Graph-Cut segmentation. Once the foreground object is extracted by Graph-Cut segmentation, real-time object extraction is conducted using the extracted object and the double edge map obtained from the difference between two successive frames. Experimental results show that, unlike previous methods, the proposed method is suitable for real-time processing even on VGA-resolution videos, making it a useful tool for immersive video conferencing systems.
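
A simplified sketch of the initial-frame step described above, assuming OpenCV and NumPy: frame differences are accumulated into a cumulative edge map, which then seeds a graph-cut segmentation. OpenCV's GrabCut is used here as a stand-in for the paper's Graph-Cut formulation; the warm-up length and seed thresholds are assumptions.

```python
import cv2
import numpy as np

def initial_object_mask(frames, n_warmup=10, diff_thresh=20):
    """frames: list of grayscale uint8 frames; returns a binary foreground mask."""
    acc = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, curr in zip(frames[:n_warmup - 1], frames[1:n_warmup]):
        # cumulative edge map: count how often each pixel shows a large frame difference
        acc += (cv2.absdiff(curr, prev) > diff_thresh).astype(np.float32)
    # seeds: frequently moving pixels -> probable foreground, never-moving -> background
    mask = np.full(acc.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[acc >= 3] = cv2.GC_PR_FGD
    mask[acc == 0] = cv2.GC_BGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    color = cv2.cvtColor(frames[n_warmup - 1], cv2.COLOR_GRAY2BGR)   # grabCut needs 3 channels
    cv2.grabCut(color, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```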

Unsupervised Segmentation of Objects using Genetic Algorithms (유전자 알고리즘 기반의 비지도 객체 분할 방법)

  • 김은이; 박세현
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.4 / pp.9-21 / 2004
  • The current paper proposes a genetic algorithm (GA)-based segmentation method that can automatically extract and track moving objects. The proposed method mainly consists of spatial and temporal segmentation: the spatial segmentation divides each frame into regions with accurate boundaries, and the temporal segmentation divides each frame into background and foreground areas. The spatial segmentation is performed using chromosomes that evolve through distributed genetic algorithms (DGAs). However, unlike standard DGAs, the chromosomes are initialized from the segmentation result of the previous frame, and only the unstable chromosomes corresponding to actual moving object parts are evolved by mating operators. For the temporal segmentation, adaptive thresholding is performed based on the intensity difference between two consecutive frames. The spatial and temporal segmentation results are then combined for object extraction, and tracking is performed using the natural correspondence established by the proposed spatial segmentation method. The main advantages of the proposed method are twofold: first, the proposed video segmentation method does not require any a priori information; second, the proposed GA-based segmentation method enhances the search efficiency and incorporates a tracking algorithm within its own architecture. These advantages were confirmed by experiments in which the proposed method was successfully applied to well-known and natural video sequences.
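
A small sketch of the temporal-segmentation step only, assuming OpenCV and NumPy; Otsu's method stands in here for the paper's adaptive threshold on the inter-frame intensity difference, and the GA-based spatial segmentation is not reproduced.

```python
import cv2

def temporal_mask(prev_gray, curr_gray):
    """prev_gray, curr_gray: consecutive grayscale uint8 frames."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    # data-driven threshold on the difference image (Otsu as a stand-in)
    _, fg = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return fg   # 255 = foreground (moving), 0 = background
```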

A Study on Face Object Detection System using spatial color model (공간적 컬러 모델을 이용한 얼굴 객체 검출 시스템 연구)

  • Baek, Deok-Soo; Byun, Oh-Sung; Baek, Young-Hyun
    • Journal of the Institute of Electronics Engineers of Korea IE / v.43 no.2 / pp.30-38 / 2006
  • This paper uses the HMMD color space model defined in MPEG-7 to segment and detect the desired image regions in real time, without manual intervention by the user, for video object segmentation. Wavelet-based morphology is applied to remove small regions regarded as noise as well as regions other than the face, and an optimal composition is obtained using rough sets. By applying the proposed and conventional algorithms to images of different sizes, the proposed video object detection algorithm is confirmed to detect the face object more accurately than the conventional algorithm.
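
A minimal sketch of converting an RGB image to the MPEG-7 HMMD components (Hue, Max, Min, Diff, Sum) referenced in the abstract above, assuming NumPy; the face/skin classification, wavelet morphology, and rough-set steps built on top of it are not reproduced here.

```python
import numpy as np

def rgb_to_hmmd(rgb):
    """rgb: H x W x 3 uint8 array -> dict of float arrays (hue in degrees)."""
    rgb = rgb.astype(np.float64)
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    diff = mx - mn
    ssum = (mx + mn) / 2.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    hue = np.zeros_like(mx)
    nz = diff > 0
    # hue is defined as in HSV
    is_r = nz & (mx == r)
    is_g = nz & (mx == g) & ~is_r
    is_b = nz & ~is_r & ~is_g
    hue[is_r] = (60.0 * (g[is_r] - b[is_r]) / diff[is_r]) % 360.0
    hue[is_g] = 60.0 * (b[is_g] - r[is_g]) / diff[is_g] + 120.0
    hue[is_b] = 60.0 * (r[is_b] - g[is_b]) / diff[is_b] + 240.0
    return {"hue": hue, "max": mx, "min": mn, "diff": diff, "sum": ssum}
```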

Data Augmentation Scheme for Semi-Supervised Video Object Segmentation (준지도 비디오 객체 분할 기술을 위한 데이터 증강 기법)

  • Kim, Hojin; Kim, Dongheyon; Kim, Jeonghoon; Im, Sunghoon
    • Journal of Broadcast Engineering / v.27 no.1 / pp.13-19 / 2022
  • The video object segmentation (VOS) task requires a large amount of labeled sequence data, which limits the performance of current VOS methods trained on public datasets. In this paper, we propose two effective data augmentation schemes for VOS. The first augmentation method swaps the background segment with the background from another image, and the other plays the sequence in reverse. These two augmentation schemes enable current VOS methods to predict segmentation labels robustly and improve VOS performance.
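
A minimal sketch of the two augmentations described above, assuming NumPy, with frames as H x W x 3 arrays and masks as binary H x W arrays where 1 marks the object; the helper names and array conventions are illustrative.

```python
import numpy as np

def swap_background(frame, mask, other_image):
    """Keep the masked object, replace the background with pixels from another image."""
    mask3 = mask[..., None].astype(bool)          # broadcast mask over the color channels
    return np.where(mask3, frame, other_image)

def reverse_sequence(frames, masks):
    """Play the labeled sequence backwards; the labels simply follow their frames."""
    return frames[::-1], masks[::-1]
```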

Automatic Extraction of Focused Video Object from Low Depth-of-Field Image Sequences (낮은 피사계 심도의 동영상에서 포커스 된 비디오 객체의 자동 검출)

  • Park, Jung-Woo; Kim, Chang-Ick
    • Journal of KIISE: Software and Applications / v.33 no.10 / pp.851-861 / 2006
  • The paper proposes a novel unsupervised video object segmentation algorithm for image sequences with low depth-of-field (DOF), a popular photographic technique that conveys the photographer's intention by keeping only the object of interest (OOI) in sharp focus. The proposed algorithm largely consists of two modules. The first module automatically extracts OOIs from the first frame by separating sharply focused OOIs from out-of-focus foreground or background objects. The second module tracks the OOIs for the rest of the video sequence, aiming to run the system in real time or, at least, semi-real time. The experimental results indicate that the proposed algorithm provides an effective tool that can serve as a basis for applications such as video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
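
A rough sketch of the first module's idea of separating the sharply focused OOI, assuming OpenCV and NumPy: a local focus measure (variance of the Laplacian) is thresholded with Otsu's method and cleaned morphologically. The focus measure and kernel sizes are assumptions, not the paper's exact operators.

```python
import cv2
import numpy as np

def focused_object_mask(gray, win=15):
    """gray: grayscale uint8 frame; returns a binary mask of sharply focused pixels."""
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    # local variance of the Laplacian as a per-pixel sharpness score
    mean = cv2.blur(lap, (win, win))
    sq_mean = cv2.blur(lap * lap, (win, win))
    sharpness = sq_mean - mean * mean
    score = cv2.normalize(sharpness, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(score, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((7, 7), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes in the OOI
```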

Fast Computer-Generated Hologram Technique for Digital Holographic Video (홀로그래픽 비디오를 위한 고속 컴퓨터-생성 홀로그램 기술)

  • Choi, Hyun-Jun; Lee, Yoon-Hyuk; Seo, Young-Ho; Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.322-324 / 2011
  • The computer-generated hologram (CGH) technique can produce a digital hologram by computation from a real or a virtual object. However, computing a single HD-resolution digital hologram frame on an ordinary PC takes about ten minutes, which is one of the obstacles to real-time holographic video services. In this paper, we propose a method that exploits the spatial redundancy between depth-map video frames to reduce the excessive computational load of CGH. The method takes the difference between adjacent depth-map frames and skips the CGH computation for coordinates whose depth values are unchanged. Applying the proposed method improved the computation speed by about 52%.
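
A toy sketch of the incremental idea in the abstract above, assuming NumPy: only object points whose depth value changed between consecutive depth-map frames have their fringe recomputed (the old contribution is subtracted, the new one added). The point-source Fresnel kernel, wavelength, and pixel pitch are generic assumptions rather than the paper's exact CGH formulation, and the per-point loop is kept deliberately simple for illustration.

```python
import numpy as np

WAVELEN = 532e-9      # assumed wavelength (m)
PITCH = 10e-6         # assumed hologram pixel pitch (m)

def point_contribution(shape, x, y, z):
    """Fresnel-approximation fringe of a single object point on the hologram plane."""
    v, u = np.indices(shape)                     # v: row index, u: column index
    r2 = (u * PITCH - x) ** 2 + (v * PITCH - y) ** 2
    return np.cos(np.pi * r2 / (WAVELEN * z))

def update_hologram(holo, depth_prev, depth_curr):
    """Recompute only the points whose depth value changed between frames.

    holo: float array of the same shape as the depth maps; depth values are
    assumed to be nonzero distances in meters.
    """
    changed = np.argwhere(depth_prev != depth_curr)
    for j, i in changed:
        x, y = i * PITCH, j * PITCH
        holo -= point_contribution(holo.shape, x, y, depth_prev[j, i])  # remove old fringe
        holo += point_contribution(holo.shape, x, y, depth_curr[j, i])  # add new fringe
    return holo
```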

An Efficient Face Region Detection for Content-based Video Summarization (내용기반 비디오 요약을 위한 효율적인 얼굴 객체 검출)

  • Kim Jong-Sung; Lee Sun-Ta; Baek Joong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.7C / pp.675-686 / 2005
  • In this paper, we propose an efficient face region detection technique for content-based video summarization. To segment the video, shot changes are detected from the video sequence and key frames are selected from the shots; in each shot we select the frame that has the least difference from its neighboring frames. The proposed face detection algorithm detects face regions in the selected key frames, and we then provide the user with summarized frames containing face regions, which carry important meaning in dramas or movies. Using the Bayes classification rule and the statistical characteristics of skin pixels, face regions are detected in the frames. After skin detection, a projection method is adopted to segment an image (frame) into face and non-face regions. The segmented regions are candidates for the face object and include many falsely detected regions, so we design a classifier using CART to minimize false detections. From SGLD matrices, we extract textural feature values such as Inertia, Inverse Difference, and Correlation. Experimental results show that the proposed face detection algorithm performs well on key frames with complex and varying backgrounds, and our system provides the user with key frames containing face regions as video summary information.
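
A simplified sketch of the skin-detection-plus-projection localization step described above, assuming OpenCV and NumPy. A fixed YCrCb skin range replaces the paper's Bayes classifier, and the CART/SGLD texture verification stage is omitted; all thresholds are assumptions.

```python
import cv2
import numpy as np

SKIN_LO = np.array([0, 133, 77], np.uint8)      # assumed YCrCb skin bounds
SKIN_HI = np.array([255, 173, 127], np.uint8)

def face_region(frame_bgr, min_ratio=0.1):
    """Return a candidate face bounding box (x, y, w, h) or None."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI) // 255      # binary skin-pixel map
    # horizontal and vertical projections of the skin map
    col_proj = skin.sum(axis=0)
    row_proj = skin.sum(axis=1)
    cols = np.where(col_proj > min_ratio * col_proj.max())[0]
    rows = np.where(row_proj > min_ratio * row_proj.max())[0]
    if cols.size == 0 or rows.size == 0:
        return None
    # candidate region; texture-based verification (e.g., CART on SGLD features) would follow
    return cols[0], rows[0], cols[-1] - cols[0] + 1, rows[-1] - rows[0] + 1
```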