• Title/Abstract/Keyword: Motion segmentation

Search results: 203 items (processing time: 0.026 s)

A Framework for Human Motion Segmentation Based on Multiple Information of Motion Data

  • Zan, Xiaofei;Liu, Weibin;Xing, Weiwei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 9
    • /
    • pp.4624-4644
    • /
    • 2019
  • With the development of the film, game, and animation industries, the analysis and reuse of human motion capture data are becoming more and more important. Human motion segmentation, which divides a long motion sequence into fragments of different types, is a key part of mocap-based techniques. However, most segmentation methods take into account only the low-level physical information (motion characteristics) or the high-level data information (statistical characteristics) of motion data, and therefore cannot exploit the data fully. In this paper, we propose an unsupervised framework that uses both the low-level physical information and the high-level data information of human motion data to solve the human motion segmentation problem. First, we introduce the CFSFDP algorithm and optimize it to carry out an initial segmentation and quickly obtain a good result. Second, we use the ACA method to refine this initial segmentation and improve the result. The experiments demonstrate that our framework achieves excellent performance.
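
The CFSFDP step named above is the density-peaks clustering of Rodriguez and Laio. A minimal NumPy sketch of that clustering rule, assuming the motion sequence has already been reduced to per-frame feature vectors (the feature extraction, the authors' optimizations, and the ACA refinement are not shown), might look like this:

```python
import numpy as np

def cfsfdp(features, d_c, n_clusters):
    """Cluster per-frame features by Fast Search and Find of Density Peaks.

    features   : (N, D) array of per-frame descriptors (hypothetical input).
    d_c        : cutoff distance used for the local-density estimate.
    n_clusters : number of density peaks kept as cluster centres.
    """
    n = len(features)
    # Pairwise Euclidean distances between frames.
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)

    # Local density rho_i: number of frames closer than the cutoff d_c.
    rho = (dist < d_c).sum(axis=1) - 1

    # delta_i: distance to the nearest frame with a higher density.
    order = np.argsort(-rho)                  # frames sorted by decreasing density
    delta = np.full(n, dist.max())            # densest frame keeps the maximum
    nearest_denser = np.arange(n)
    for rank in range(1, n):
        i = order[rank]
        higher = order[:rank]                 # all frames denser than frame i
        j = higher[np.argmin(dist[i, higher])]
        delta[i] = dist[i, j]
        nearest_denser[i] = j

    # Cluster centres: frames where both rho and delta are large.
    # The densest frame always ranks first, since its rho and delta are maximal.
    centres = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centres] = np.arange(n_clusters)

    # Assign every other frame to the cluster of its nearest denser neighbour.
    for i in order:
        if labels[i] == -1:
            labels[i] = labels[nearest_denser[i]]
    return labels
```

In the framework described above, these labels would only serve as the initial segmentation that the ACA method then refines.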

RGB Motion Segmentation using Background Subtraction based on AMF

  • 김윤호
    • 한국정보전자통신기술학회논문지
    • /
    • Vol. 6, No. 2
    • /
    • pp.81-87
    • /
    • 2013
  • Motion segmentation is a fundamental technique for analysing image sequences of real scenes. Identifying moving objects in the data is a typical task in many computer vision applications. In this paper, we propose a motion segmentation method that generally consists of background subtraction followed by foreground pixel segmentation. The Approximated Median Filter (AMF) was chosen to perform background modeling. The motion segmentation in this paper covers RGB video data.
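
The Approximated Median Filter is the classic running median estimate of McFarlane and Schofield: each background pixel is nudged one intensity step toward the incoming frame, which converges to an approximate temporal median. A minimal grayscale sketch is given below; the threshold value and the initialisation are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def amf_background_subtraction(frames, threshold=30):
    """Approximated Median Filter background subtraction.

    frames    : iterable of uint8 grayscale frames of shape (H, W).
    threshold : |frame - background| above this marks a foreground pixel
                (illustrative value, not taken from the paper).
    Yields a boolean foreground mask for each frame after the first.
    """
    frames = iter(frames)
    background = next(frames).astype(np.int16)   # initialise with the first frame
    for frame in frames:
        frame = frame.astype(np.int16)
        # Nudge the background estimate one step toward the current frame;
        # over time this approximates the per-pixel temporal median.
        background += np.sign(frame - background)
        # Pixels far from the background model are labelled foreground.
        yield np.abs(frame - background) > threshold
```

For RGB data, the same update can be run per channel and the per-channel masks combined.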

RGB Motion Segmentation using Background Subtraction based on AMF

  • 김윤호
    • 한국정보전자통신기술학회논문지
    • /
    • Vol. 7, No. 1
    • /
    • pp.61-67
    • /
    • 2014
  • Motion segmentation is a fundamental technique for analysing image sequences of real scenes. Identifying moving objects in the data is a typical task in many computer vision applications. In this paper, we propose a motion segmentation method that generally consists of background subtraction followed by foreground pixel segmentation. The Approximated Median Filter (AMF) was chosen to perform background modeling. The motion segmentation in this paper covers RGB video data.

연속영상에서 motion 기반의 새로운 분할 알고리즘 (A new motion-based segmentation algorithm in image sequences)

  • 정철곤;김중규
    • 한국통신학회논문지
    • /
    • Vol. 27, No. 3A
    • /
    • pp.240-248
    • /
    • 2002
  • In this paper, we propose a new algorithm that segments images based on the motion of moving objects in image sequences. The overall segmentation process consists of two stages: a pixel-labeling stage and a motion-segmentation stage. In the pixel-labeling stage, each pixel of the image is assigned a label according to the magnitude of the velocity vectors produced by optical flow. In the motion-segmentation stage, to remove the unnecessary noise introduced in the first stage, the motion field is modeled as a Markov random field and the motion is segmented through energy minimization. Experimental results confirm that the proposed algorithm efficiently segments the motion of moving objects in image sequences.
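
A sketch of the two stages with OpenCV: dense optical flow gives per-pixel velocity magnitudes that are thresholded into initial labels, and a few ICM-style sweeps then minimise a simple data-plus-smoothness energy to suppress isolated noisy labels. The Farneback flow, the magnitude threshold, the smoothness weight, and the use of ICM are illustrative choices, not details taken from the paper:

```python
import cv2
import numpy as np

def motion_labels(prev_gray, curr_gray, mag_thresh=1.0, beta=2.0, sweeps=5):
    """prev_gray, curr_gray: uint8 grayscale frames of the same size."""
    # Stage 1: pixel labelling from optical-flow magnitude.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    labels = (mag > mag_thresh).astype(np.uint8)      # 1 = moving, 0 = static

    # Stage 2: MRF-style smoothing by parallel ICM sweeps. Per-pixel energy:
    #   data term       : |mag - mean magnitude of the candidate class|
    #   smoothness term : beta * (number of disagreeing 4-neighbours)
    kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], np.float32)
    for _ in range(sweeps):
        mu = [mag[labels == k].mean() if (labels == k).any() else 0.0
              for k in (0, 1)]
        # Count 4-neighbours currently labelled "moving".
        moving_nb = cv2.filter2D(labels.astype(np.float32), -1, kernel,
                                 borderType=cv2.BORDER_REPLICATE)
        energy = []
        for k in (0, 1):
            disagree = moving_nb if k == 0 else (4.0 - moving_nb)
            energy.append(np.abs(mag - mu[k]) + beta * disagree)
        labels = np.uint8(energy[1] < energy[0])
    return labels
```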

MOTION DETECTION USING CURVATURE MAP AND TWO-STEP BIMODAL SEGMENTATION

  • Lee, Suk-Ho
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • Vol. 13, No. 4
    • /
    • pp.247-256
    • /
    • 2009
  • In this paper, a motion detection algorithm that works well in low-illumination environments is proposed. Using level-set-based bimodal motion segmentation, the algorithm obtains an automatic segmentation of the motion region, and the spurious regions caused by the large CCD noise in low-illumination environments are removed effectively.
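
The paper's curvature map and two-step formulation are not reproduced here; as a rough stand-in for level-set-based bimodal motion segmentation, the sketch below runs scikit-image's morphological Chan-Vese (a two-phase, region-based level-set method) on a frame-difference image, which is where a bimodal split into motion region and background would come from:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def bimodal_motion_mask(prev_frame, curr_frame, iterations=60):
    """Two-phase level-set segmentation of a frame-difference image.

    A stand-in for the paper's method: the actual algorithm uses a curvature
    map and a two-step bimodal segmentation, neither of which is shown here.
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    # Morphological Chan-Vese evolves a level set so that the two resulting
    # regions have maximally different mean intensities -- a bimodal split.
    mask = morphological_chan_vese(diff, iterations,
                                   init_level_set="checkerboard", smoothing=2)
    return mask.astype(bool)
```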

계층적 모션 추정을 통한 장면 분할 기법 (Scene Segmentation using a Hierarchical Motion Estimation Technique)

  • 김모곤;우종선;정순기
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2002년도 하계종합학술대회 논문집(3)
    • /
    • pp.203-206
    • /
    • 2002
  • We propose a new algorithm for scene segmentation. The proposed system consists of a motion estimation module and a motion segmentation module. The former estimates a 2D motion value for each pixel position from two wavelet-transformed images. The latter determines scene segments that fit well to dominant affine motion models. What distinguishes the proposed algorithm from other methods is that it requires no additional post-processing for scene segmentation. Using the proposed algorithm, we can manipulate both multimedia data and objects in a virtual environment.
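
The second module, finding scene segments that fit a dominant affine motion model, can be illustrated with a plain least-squares fit: a six-parameter affine model is fitted to a dense flow field and the pixels it explains well form one segment. The flow source and the residual threshold are assumptions; the paper estimates motion hierarchically on wavelet-transformed images rather than from a single dense flow field:

```python
import numpy as np

def dominant_affine_segment(flow, residual_thresh=0.5):
    """Fit an affine motion model u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y
    to a dense (H, W, 2) flow field and return the mask of pixels whose flow
    is consistent with that dominant motion."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])   # (N, 3)
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()

    # Least-squares affine parameters for each flow component.
    pu, _, _, _ = np.linalg.lstsq(A, u, rcond=None)
    pv, _, _, _ = np.linalg.lstsq(A, v, rcond=None)

    # Pixels whose flow the affine model predicts well belong to the segment.
    residual = np.hypot(A @ pu - u, A @ pv - v).reshape(h, w)
    return residual < residual_thresh
```

Repeating the fit on the remaining pixels would peel off further affine segments.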

Automatic Object Segmentation and Background Composition for Interactive Video Communications over Mobile Phones

  • Kim, Daehee;Oh, Jahwan;Jeon, Jieun;Lee, Junghyun
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 1, No. 3
    • /
    • pp.125-132
    • /
    • 2012
  • This paper proposes an automatic object segmentation and background composition method for video communication over consumer mobile phones. The object regions were extracted based on the motion and color variance of the first two frames. To combine the motion and variance information, the Euclidean distance between each motion-boundary pixel and the neighboring color-variance edge pixels was calculated, and the nearest edge pixel was labeled as the object boundary. The labeling results were refined using morphology for a more accurate and natural-looking boundary. The grow-cut segmentation algorithm begins from the expanded label map, where the inner and outer boundaries belong to the foreground and background, respectively. The segmented object region and a new background image stored a priori on the mobile phone were then composed. In the background composition process, the background motion was measured using optical flow, and the final result was synthesized by accurately locating the object region according to the motion information. This study can be considered an extended, improved version of an existing background composition algorithm in that it considers the motion information in a video. The proposed segmentation algorithm reduces the computational complexity significantly by choosing the minimum resolution at each segmentation step. The experimental results showed that the proposed algorithm can generate a fast, accurate, and natural-looking background composition.
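
The labeling step that snaps each motion-boundary pixel to its nearest color-variance edge pixel can be done in one pass with a Euclidean distance transform, which also returns the coordinates of the nearest edge for every pixel. The masks below are assumed to come from generic detectors; the paper's own operators, the grow-cut stage, and the background composition are not shown:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def snap_motion_boundary_to_edges(motion_boundary, variance_edges):
    """For every motion-boundary pixel, find the nearest colour-variance edge
    pixel and mark it as part of the object boundary.

    motion_boundary, variance_edges : boolean (H, W) masks (assumed inputs).
    Returns a boolean mask of the snapped object-boundary pixels.
    """
    # distance_transform_edt measures distance to the nearest zero pixel, so
    # invert the edge mask; return_indices gives that pixel's coordinates.
    _, (near_r, near_c) = distance_transform_edt(~variance_edges,
                                                 return_indices=True)
    object_boundary = np.zeros_like(motion_boundary, dtype=bool)
    rows, cols = np.nonzero(motion_boundary)
    object_boundary[near_r[rows, cols], near_c[rows, cols]] = True
    return object_boundary
```

In the method described above, this snapped boundary would then be expanded into the label map that seeds the grow-cut segmentation.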

확률적 클러스터링에 의한 움직임 파라미터 추정과 세그멘테이션 (Motion Parameter Estimation and Segmentation with Probabilistic Clustering)

  • 정차근
    • 방송공학회논문지
    • /
    • Vol. 3, No. 1
    • /
    • pp.50-60
    • /
    • 1998
  • This paper describes a parameter estimation and segmentation technique for a parametric motion model, aimed at compact video representation and object-based generic video compression. To extract a compact structural representation of the image from local motion information such as optical flow together with the characteristics of a parametric motion model, we propose a new algorithm consisting of two steps: extracting initial regions, and simultaneously estimating the parametric motion parameters and performing segmentation. Initial regions that reflect the motion and shape of the moving objects are extracted by probabilistic clustering based on ML estimation of a mixture model, and the motion parameters are then estimated and segmentation is performed for each initial region using the parametric motion model. The effectiveness of the proposed algorithm is evaluated through simulations with CIF standard video sequences.
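
The first step, probabilistic clustering by ML estimation of a mixture model, can be sketched with an EM-fitted Gaussian mixture over per-pixel position and flow features; the resulting components play the role of the initial regions whose parametric motion parameters are then estimated. The feature choice and the use of scikit-learn's GaussianMixture are illustrative, not the paper's exact formulation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def initial_motion_regions(flow, n_regions=4):
    """Probabilistic clustering of a dense flow field into initial regions.

    flow : (H, W, 2) optical-flow field.
    Returns an (H, W) label map; each label is one candidate motion region.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Feature per pixel: position and flow, so regions are coherent in both.
    feats = np.column_stack([xs.ravel(), ys.ravel(),
                             flow[..., 0].ravel(), flow[..., 1].ravel()])
    # ML estimation of the mixture via EM; predict() gives the hard labels.
    gmm = GaussianMixture(n_components=n_regions, covariance_type="full",
                          random_state=0).fit(feats)
    return gmm.predict(feats).reshape(h, w)
```

Each resulting region would then receive its own parametric (e.g. affine) motion parameters, re-estimated jointly with the segmentation as described above.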

Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu;Hui Yin;Yanting Liu;Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 4
    • /
    • pp.938-958
    • /
    • 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision, as no annotation of the target object in the test video is available at all. The main difficulty is to effectively handle the complicated and changeable motion state of the target object and the confusion with similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which takes full advantage of motion cues and effectively integrates features at different levels to produce high-quality segmentation masks. DC-Net is a dual-stream architecture in which the two streams are co-enhanced by each other. One is a motion stream with a Motion-cues Refine Module (MRM), which learns from bidirectional optical-flow images and produces a fine-grained, complete, and distinctive motion saliency map; the other is an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM), which are designed to integrate the features at different levels effectively. Specifically, the motion saliency map obtained by the motion stream is fused with each stage of the decoder in the appearance stream to improve the segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance the motion refinement. Experimental results on three datasets (DAVIS2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable to some state-of-the-art methods.
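
How the motion saliency map is fused into each decoder stage of the appearance stream is only described at a high level above; the PyTorch sketch below shows one plausible fusion block (gating the appearance features with the resized saliency map, then concatenating and convolving). The layer shapes and the gating-plus-concatenation choice are assumptions for illustration, not DC-Net's actual design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyFusion(nn.Module):
    """Fuse a single-channel motion saliency map into appearance features."""

    def __init__(self, channels):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat, saliency):
        # feat: (N, C, H, W) decoder features; saliency: (N, 1, h, w) map.
        # Resize the saliency map to the decoder stage's spatial resolution.
        sal = F.interpolate(saliency, size=feat.shape[-2:],
                            mode="bilinear", align_corners=False)
        gated = feat * (1.0 + sal)                 # emphasise moving regions
        return self.merge(torch.cat([gated, sal], dim=1))

# Illustrative use at one decoder stage (shapes assumed):
# fused = SaliencyFusion(256)(decoder_feat, motion_saliency)
```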

Motion Segmentation from Color Video Sequences based on AMF

  • 알라김;김윤호
    • 한국정보전자통신기술학회논문지
    • /
    • Vol. 2, No. 3
    • /
    • pp.31-38
    • /
    • 2009
  • Identifying moving objects in data is a typical task in many computer vision applications. In this paper, we propose a motion segmentation method that generally consists of background subtraction followed by foreground pixel segmentation. The Approximated Median Filter (AMF) was chosen to perform background modelling. To demonstrate the effectiveness of the proposed approach, we tested it on gray-scale video data as well as in the RGB color space.
