• Title/Summary/Keyword: Motion segmentation

A Framework for Human Motion Segmentation Based on Multiple Information of Motion Data

  • Zan, Xiaofei; Liu, Weibin; Xing, Weiwei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.9 / pp.4624-4644 / 2019
  • With the development of the film, game and animation industries, the analysis and reuse of human motion capture data are becoming more and more important. Human motion segmentation, which divides a long motion sequence into fragments of different types, is a key part of mocap-based techniques. However, most segmentation methods take into account only the low-level physical information (motion characteristics) or the high-level data information (statistical characteristics) of motion data, and therefore do not exploit the data fully. In this paper, we propose an unsupervised framework that uses both the low-level physical information and the high-level data information of human motion data to solve the human motion segmentation problem. First, we introduce and optimize the CFSFDP algorithm to carry out an initial segmentation and obtain a good result quickly. Second, we use the ACA method to refine the segmentation result. The experiments demonstrate that our framework achieves excellent performance.
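
The initial-segmentation step named in this abstract relies on CFSFDP (clustering by fast search and find of density peaks). As a rough illustration only, and not the authors' implementation, the sketch below applies plain CFSFDP to per-frame motion-capture feature vectors; the feature extraction, the cutoff distance `d_c`, the number of clusters and the ACA refinement step are all assumptions or omissions here.

```python
import numpy as np

def cfsfdp(features, d_c=0.5, n_clusters=4):
    """Clustering by Fast Search and Find of Density Peaks (Rodriguez & Laio).

    features : (n_frames, n_dims) array of per-frame pose/motion descriptors.
    d_c      : cutoff distance of the Gaussian local-density kernel (assumed value).
    Returns one integer cluster label per frame; contiguous runs of a label can be
    read off as an initial temporal segmentation.
    """
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    rho = np.exp(-(d / d_c) ** 2).sum(axis=1) - 1.0        # local density (minus the self term)

    order = np.argsort(-rho)                               # frame indices by decreasing density
    n = len(features)
    delta = np.zeros(n)
    nearest_denser = np.zeros(n, dtype=int)
    for rank, i in enumerate(order):
        if rank == 0:                                      # densest frame: use the largest distance
            delta[i], nearest_denser[i] = d[i].max(), i
            continue
        denser = order[:rank]                              # every frame denser than frame i
        j = denser[np.argmin(d[i, denser])]
        delta[i], nearest_denser[i] = d[i, j], j

    centers = np.argsort(-(rho * delta))[:n_clusters]      # peaks: high density and high isolation
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                        # assign in decreasing-density order
        if labels[i] < 0:
            labels[i] = labels[nearest_denser[i]]
    return labels
```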

RGB Motion Segmentation using Background Subtraction based on AMF

  • Kim, Yoon-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.6 no.2 / pp.81-87 / 2013
  • Motion segmentation is a fundamental technique for analysing image sequences of real scenes. Identifying moving objects in image data is a typical task in many computer vision applications. In this paper, we propose a motion segmentation method that consists of background subtraction followed by foreground pixel segmentation. The Approximated Median Filter (AMF) is chosen to perform the background modeling. The motion segmentation in this paper covers RGB video data.
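
The AMF background model described here (and in the related entries below) is simple enough to sketch. The following is a minimal illustration of approximated median filtering with a thresholded foreground mask, not the authors' code; the update step and the difference threshold are assumed values.

```python
import numpy as np

def amf_motion_masks(frames, step=1, threshold=30):
    """Background subtraction with an Approximated Median Filter (AMF).

    frames    : iterable of uint8 frames, either (H, W) gray or (H, W, 3) RGB.
    step      : per-frame increment/decrement of the background estimate (assumed).
    threshold : minimum |frame - background| for a foreground pixel (assumed).
    Yields one boolean foreground mask per frame after the first.
    """
    frames = iter(frames)
    background = next(frames).astype(np.int16)         # initialise with the first frame
    for frame in frames:
        frame = frame.astype(np.int16)
        # AMF update: nudge each background pixel one step towards the current frame,
        # so it drifts towards the temporal median of that pixel over time.
        background += step * np.sign(frame - background)
        diff = np.abs(frame - background)
        if diff.ndim == 3:                              # RGB: a pixel moves if any channel moves
            diff = diff.max(axis=2)
        yield diff > threshold
```

Feeding these masks into a connected-component or region-growing step would complete the foreground pixel segmentation the abstract mentions.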

RGB Motion Segmentation using Background Subtraction based on AMF

  • Kim, Yoon-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.7 no.1 / pp.61-67 / 2014
  • Motion segmentation is a fundamental technique for analysing image sequences of real scenes. Identifying moving objects in image data is a typical task in many computer vision applications. In this paper, we propose a motion segmentation method that consists of background subtraction followed by foreground pixel segmentation. The Approximated Median Filter (AMF) is chosen to perform the background modeling. The motion segmentation in this paper covers RGB video data.

A new motion-based segmentation algorithm in image sequences (연속영상에서 motion 기반의 새로운 분할 알고리즘)

  • 정철곤; 김중규
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.3A / pp.240-248 / 2002
  • This paper presents a new motion-based segmentation algorithm for moving objects in image sequences. The procedure toward complete segmentation consists of two steps: pixel labeling and motion segmentation. In the first step, we assign a label to each pixel according to the magnitude of its velocity vector, which is computed from optical flow. In the second step, we model the motion field as a Markov random field to suppress noise and segment the motion through energy minimization. We demonstrate the efficiency of the presented method through experimental results.
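
The two steps described here (labeling by flow magnitude, then MRF-based smoothing through energy minimization) can be approximated compactly. The sketch below uses OpenCV's Farnebäck optical flow and a synchronous ICM-style update with a Potts smoothness term as a stand-in for the paper's energy minimization; the magnitude thresholds, label centres and smoothness weight are illustrative assumptions.

```python
import cv2
import numpy as np

def label_and_smooth(prev_gray, gray, bins=(1.0, 4.0), beta=2.0, n_iter=5):
    """Step 1: label pixels by optical-flow magnitude. Step 2: smooth the label
    field with a Potts (MRF) prior via a synchronous ICM-style update.

    bins : magnitude thresholds splitting pixels into 3 labels (static/slow/fast).
    beta : weight of the smoothness term. Both are illustrative values.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    labels = np.digitize(mag, bins)                    # initial pixel labeling by magnitude

    centres = np.array([0.0, 0.5 * (bins[0] + bins[1]), 2.0 * bins[1]])
    for _ in range(n_iter):
        costs = []
        for k, c in enumerate(centres):
            data = (mag - c) ** 2                      # data term: fit to the label's magnitude
            disagree = np.zeros_like(mag)
            for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                # Potts term: count 4-neighbours whose current label differs from k
                # (image borders wrap around here, which is ignored for brevity).
                disagree += (np.roll(labels, shift, axis=(0, 1)) != k)
            costs.append(data + beta * disagree)
        labels = np.argmin(np.stack(costs), axis=0)    # pick the cheapest label per pixel
    return labels
```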

MOTION DETECTION USING CURVATURE MAP AND TWO-STEP BIMODAL SEGMENTATION

  • Lee, Suk-Ho
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.13 no.4 / pp.247-256 / 2009
  • In this paper, a motion detection algorithm that works well in low-illumination environments is proposed. By using level-set-based bimodal motion segmentation, the algorithm obtains an automatic segmentation of the motion region, and the spurious regions caused by large CCD noise in low-illumination environments are removed effectively.
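
The level-set formulation itself is beyond a short sketch, but the "two-step bimodal segmentation" idea can be approximated with a two-pass, two-class threshold on the frame difference: the first pass separates candidate motion from background, the second re-thresholds the candidates so that low-amplitude responses such as sensor noise are dropped. The function names, the ISODATA-style threshold and the absence of any curvature term are simplifying assumptions, not the paper's method.

```python
import numpy as np

def bimodal_threshold(values, n_iter=20):
    """ISODATA-style two-class threshold (midpoint of the two class means)."""
    t = float(values.mean())
    for _ in range(n_iter):
        lo, hi = values[values <= t], values[values > t]
        if lo.size == 0 or hi.size == 0:
            break
        t = 0.5 * (lo.mean() + hi.mean())
    return t

def two_step_motion_mask(prev_frame, frame):
    """Two-pass bimodal segmentation of the frame difference (illustrative only)."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    if diff.ndim == 3:
        diff = diff.mean(axis=2)
    t1 = bimodal_threshold(diff)                   # pass 1: motion candidates vs. background
    candidates = diff[diff > t1]
    t2 = bimodal_threshold(candidates) if candidates.size else t1
    return diff > max(t1, t2)                      # pass 2: keep only the strong responses
```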

Scene Segmentation using a Hierarchical Motion Estimation Technique (계층적 모션 추정을 통한 장면 분할 기법)

  • 김모곤; 우종선; 정순기
    • Proceedings of the IEEK Conference / 2002.06c / pp.203-206 / 2002
  • We propose a new algorithm for scene segmentation. The proposed system consists of a motion estimation module and a motion segmentation module. The former estimates a 2D motion value for each pixel position from two wavelet-transformed images. The latter determines the scene segments that fit dominant affine motion models well. What distinguishes the proposed algorithm from other methods is that it requires no additional post-processing for scene segmentation. With the proposed algorithm, we can manipulate both multimedia data and objects in a virtual environment.
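
The segmentation module described here groups pixels that obey a dominant affine motion model. The sketch below shows only that part: a least-squares fit of a 6-parameter affine model to a dense flow field plus a residual test for the pixels it explains. The hierarchical, wavelet-domain estimation of the flow itself and the handling of multiple models are omitted, and the inlier tolerance is an assumed value.

```python
import numpy as np

def dominant_affine_support(flow, inlier_tol=1.0):
    """Fit one 6-parameter affine motion model to a dense flow field and return
    the pixels consistent with it.

    flow  : (H, W, 2) per-pixel displacement field (e.g. from optical flow).
    Model : u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    X = np.stack([np.ones(h * w), xs.ravel(), ys.ravel()], axis=1)  # (N, 3) design matrix
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()

    params_u, *_ = np.linalg.lstsq(X, u, rcond=None)                # least-squares fit per component
    params_v, *_ = np.linalg.lstsq(X, v, rcond=None)

    residual = np.hypot(X @ params_u - u, X @ params_v - v).reshape(h, w)
    support = residual < inlier_tol        # pixels explained by the dominant affine motion
    return (params_u, params_v), support
```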

Automatic Object Segmentation and Background Composition for Interactive Video Communications over Mobile Phones

  • Kim, Daehee; Oh, Jahwan; Jeon, Jieun; Lee, Junghyun
    • IEIE Transactions on Smart Processing and Computing / v.1 no.3 / pp.125-132 / 2012
  • This paper proposes an automatic object segmentation and background composition method for video communication over consumer mobile phones. The object regions were extracted based on the motion and color variance of the first two frames. To combine the motion and variance information, the Euclidean distance between each motion boundary pixel and the neighboring color-variance edge pixels was calculated, and the nearest edge pixel was labeled as the object boundary. The labeling results were refined using morphology for a more accurate and natural-looking boundary. The grow-cut segmentation algorithm begins from the expanded label map, whose inner and outer boundaries belong to the foreground and background, respectively. The segmented object region and a new background image stored a priori in the mobile phone were then composited. In the background composition process, the background motion was measured using optical flow, and the final result was synthesized by accurately locating the object region according to the motion information. This study can be considered an extended and improved version of an existing background composition algorithm in that it takes the motion information in a video into account. The proposed segmentation algorithm reduces the computational complexity significantly by choosing the minimum resolution at each segmentation step. The experimental results showed that the proposed algorithm can generate a fast, accurate and natural-looking background composition.
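
The labeling step described here (each motion-boundary pixel snapped to its nearest color-variance edge pixel by Euclidean distance) can be expressed with a distance transform. The sketch below is one interpretation of that single step, not the full pipeline: the grow-cut segmentation, morphological refinement and background composition are not shown, and SciPy is an assumed dependency.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def snap_boundary_to_edges(motion_boundary, variance_edges):
    """For every motion-boundary pixel, mark its nearest color-variance edge pixel
    as object boundary.

    motion_boundary, variance_edges : boolean (H, W) masks.
    """
    # The distance transform of the complement of the edge mask also returns, for
    # every pixel, the coordinates of its nearest edge pixel.
    _, indices = distance_transform_edt(~variance_edges, return_indices=True)
    near_y, near_x = indices
    boundary = np.zeros_like(variance_edges, dtype=bool)
    by, bx = np.nonzero(motion_boundary)
    boundary[near_y[by, bx], near_x[by, bx]] = True
    return boundary
```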

Motion Parameter Estimation and Segmentation with Probabilistic Clustering (확률적 클러스터링에 의한 움직임 파라미터 추정과 세그멘테이션)

  • 정차근
    • Journal of Broadcast Engineering / v.3 no.1 / pp.50-60 / 1998
  • This paper addresses the problem of parametric motion estimation and structural motion segmentation for compact image sequence representation and object-based generic video coding. In order to extract a meaningful motion structure from image sequences, a direct parametric motion estimation based on a pre-segmentation is proposed. The pre-segmentation, which considers the motion of the moving objects, is carried out by probabilistic clustering with mixture models using optical flow and image intensities. Parametric motion segmentation is then obtained by iterated estimation of the motion model parameters and region reassignment according to a criterion, using the Gauss-Newton iterative optimization algorithm. The efficiency of the proposed method is verified through computer simulations using CIF real image sequences.
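
The pre-segmentation step described here clusters per-pixel observations under a mixture model. As a loose analogue only, the sketch below clusters (flow vector, intensity) features with a Gaussian mixture; the particular mixture formulation, the feature scaling and the number of regions are assumptions, and the Gauss-Newton parametric refinement that follows in the paper is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_presegmentation(flow, gray, n_regions=3):
    """Probabilistic pre-segmentation of a frame from per-pixel (u, v, intensity)
    features using a Gaussian mixture model.

    flow : (H, W, 2) optical-flow field;  gray : (H, W) intensity image.
    Returns an (H, W) map of region labels.
    """
    h, w = gray.shape
    feats = np.column_stack([flow[..., 0].ravel(),
                             flow[..., 1].ravel(),
                             gray.ravel() / 255.0])            # crude feature scaling (assumed)
    gmm = GaussianMixture(n_components=n_regions,
                          covariance_type="full", random_state=0).fit(feats)
    return gmm.predict(feats).reshape(h, w)
```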

Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu; Hui Yin; Yanting Liu; Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.938-958 / 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision, since no annotation of the target object in the test video is available. The main difficulty is to effectively handle the complicated and changeable motion state of the target object and the confusion caused by similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which can take full advantage of motion cues and effectively integrate features at different levels to produce high-quality segmentation masks. DC-Net is a dual-stream architecture in which the two streams are co-enhanced by each other. One is a motion stream with a Motion-cues Refine Module (MRM), which learns from bidirectional optical-flow images and produces a fine-grained, complete and distinctive motion saliency map; the other is an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM), which are designed to integrate features at different levels effectively. Specifically, the motion saliency map obtained by the motion stream is fused with each stage of the decoder in the appearance stream to improve the segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance the motion refinement. Experimental results on three datasets (DAVIS2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable to some state-of-the-art methods.
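
The dual-stream idea (a motion stream whose saliency map is fused into the decoder of an appearance stream) can be shown structurally. The PyTorch skeleton below is only a structural sketch under heavy assumptions: the layer sizes, the two tiny encoders and fusion by element-wise gating are illustrative choices, and the MRM, MFAM and CAM modules of DC-Net are not reproduced.

```python
import torch
import torch.nn as nn

class DualStreamSketch(nn.Module):
    """Appearance stream (RGB) + motion stream (optical-flow image); the motion
    stream's saliency map gates each appearance decoder stage."""

    def __init__(self, ch=32):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.app_enc1, self.app_enc2 = block(3, ch), block(ch, 2 * ch)
        self.mot_enc1, self.mot_enc2 = block(3, ch), block(ch, 2 * ch)
        self.mot_head = nn.Conv2d(2 * ch, 1, 1)     # coarse motion saliency map
        self.dec2, self.dec1 = block(2 * ch, ch), block(ch, ch)
        self.seg_head = nn.Conv2d(ch, 1, 1)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, rgb, flow_img):
        a1 = self.app_enc1(rgb)
        a2 = self.app_enc2(self.pool(a1))
        m1 = self.mot_enc1(flow_img)
        m2 = self.mot_enc2(self.pool(m1))
        saliency = torch.sigmoid(self.mot_head(m2))
        d2 = self.dec2(a2 * saliency)                          # fuse saliency at the coarse stage
        d1 = self.dec1(self.up(d2) + a1 * self.up(saliency))   # ...and again at full resolution
        return torch.sigmoid(self.seg_head(d1)), saliency
```

The joint training in which the segmentation loss feeds back into the motion stream, as the abstract describes, is left out of this sketch.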

Motion Segmentation from Color Video Sequences based on AMF

  • Kim, Alla; Kim, Yoon-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.31-38 / 2009
  • Identifying moving objects in image data is a typical task in many computer vision applications. In this paper, we propose a motion segmentation method that consists of background subtraction followed by foreground pixel segmentation. The Approximated Median Filter (AMF) is chosen to perform the background modeling. To demonstrate the effectiveness of the proposed approach, we tested it on gray-scale video data as well as in the RGB color space.
