• Title/Summary/Keyword: Motion segmentation

Dynamic Scene Segmentation Algorithm Using a Cross Mask and Edge Information (Cross Mask와 에지 정보를 사용한 동영상 분할)

  • 강정숙;박래홍;이상욱
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.8
    • /
    • pp.1247-1256
    • /
    • 1989
  • In this paper, we propose a dynamic scene segmentation algorithm using a cross mask and edge information. This method, a combination of the conventional feature-based and pixel-based approaches, uses edges as features and determines moving pixels, with a cross mask centered on each edge pixel, by computing a similarity measure between two consecutive image frames. With simple calculation, the proposed method works well for images consisting of a complex background or several moving objects. It also works satisfactorily in the case of rotational motion.
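
As a rough illustration of the idea only (not the authors' code), the sketch below marks an edge pixel as moving when a cross-shaped neighborhood around it differs noticeably between two consecutive frames; the Canny parameters, cross half-length, and dissimilarity threshold are assumed values.

```python
import cv2
import numpy as np

def moving_edge_pixels(prev_gray, curr_gray, half_len=3, thresh=15.0):
    edges = cv2.Canny(curr_gray, 100, 200)               # edge pixels of the current frame
    ys, xs = np.nonzero(edges)
    h, w = curr_gray.shape
    moving = np.zeros_like(edges)
    prev = prev_gray.astype(np.float32)
    curr = curr_gray.astype(np.float32)
    for y, x in zip(ys, xs):
        # cross mask: horizontal and vertical arms centered on the edge pixel
        x0, x1 = max(0, x - half_len), min(w, x + half_len + 1)
        y0, y1 = max(0, y - half_len), min(h, y + half_len + 1)
        arm_h = np.abs(curr[y, x0:x1] - prev[y, x0:x1])
        arm_v = np.abs(curr[y0:y1, x] - prev[y0:y1, x])
        dissimilarity = (arm_h.sum() + arm_v.sum()) / (arm_h.size + arm_v.size)
        if dissimilarity > thresh:                        # low similarity -> moving pixel
            moving[y, x] = 255
    return moving
```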

Multi-View Video Generation from 2 Dimensional Video (2차원 동영상으로부터 다시점 동영상 생성 기법)

  • Baek, Yun-Ki;Choi, Mi-Nam;Park, Se-Whan;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.1C
    • /
    • pp.53-61
    • /
    • 2008
  • In this paper, we propose an algorithm for the generation of multi-view video from conventional 2-dimensional video. Color and motion information of an object are used for segmentation, and multi-view video is generated from the segmented objects. In particular, color information is used to extract object boundaries that can hardly be extracted using motion information alone. To classify homogeneous regions by color, luminance and chrominance components are used. Pixel-based motion estimation with a measurement window is also performed to obtain motion information. We then combine the results from motion estimation and color segmentation and obtain depth information by assigning a motion-intensity value to each segmented region. Finally, we generate multi-view video by applying a rotation transformation to the 2-dimensional input images using the depth information obtained for each object. The experimental results show that the proposed algorithm outperforms conventional conversion methods.
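
The following is a minimal sketch of the general approach, not the paper's implementation: each segmented region receives a depth proportional to its mean motion magnitude, and a new view is synthesized by a simple horizontal shift rather than the rotation transformation described above. The Farneback flow, label map, and disparity scale are assumptions.

```python
import cv2
import numpy as np

def depth_from_motion(prev_gray, curr_gray, labels):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    depth = np.zeros_like(mag)
    for lab in np.unique(labels):
        mask = labels == lab
        depth[mask] = mag[mask].mean()        # faster-moving regions appear closer
    return cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX)

def synthesize_view(image, depth, max_disparity=8):
    h, w = depth.shape
    shifted = np.zeros_like(image)
    disparity = (depth / 255.0 * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if nx < w:
                shifted[y, nx] = image[y, x]  # shift each pixel by its depth-derived disparity
    return shifted
```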

Very Low Rate Coding of Motion Video Using 3-D Segmentation with Two Change Detection Masks (두 변화검출 마스크를 이용한 3차원 영상분할 초저속 동영상 부호화)

  • Lee, Sang-Mi;Kim, Nam-Chul;Son, Hyon
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.27 no.10
    • /
    • pp.146-153
    • /
    • 1990
  • A new 3-D segmentation-based coding technique is proposed for transmitting motion video with reasonably acceptable quality even at a very low bit rate. Only meaningful motion areas are extracted by using two change detection masks, and the current frame, rather than a difference frame, is directly segmented so that good image quality can be obtained at high compression ratios. Through the experiments, the Miss America sequence is reconstructed with visually acceptable quality at the very high compression ratio of 360:1.
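
A minimal sketch of the two-mask idea, with assumed thresholds and morphology (not the paper's coder): change-detection masks are built from the differences to the previous and next frames, and only pixels flagged by both masks are kept as the meaningful motion area within which the current frame would then be segmented.

```python
import cv2
import numpy as np

def motion_area(frame_prev, frame_curr, frame_next, thresh=20):
    d1 = cv2.absdiff(frame_curr, frame_prev)     # change mask: previous -> current
    d2 = cv2.absdiff(frame_next, frame_curr)     # change mask: current -> next
    _, m1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, m2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    both = cv2.bitwise_and(m1, m2)               # pixels changing in both masks
    return cv2.morphologyEx(both, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```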

A Study on Measurement of Repetitive Work using Digital Image Processing (영상처리를 이용한 반복적 작업의 측정에 관한 연구)

  • Lee, Jeong-Cheol;Sim, Eok-Su;Kim, Nam-Joo;Park, Chan-Kwon;Park, Jin-Woo
    • IE interfaces
    • /
    • v.14 no.1
    • /
    • pp.95-105
    • /
    • 2001
  • Previous work measurement methods require much time and effort from time-study analysts because they have to measure the required time through direct observation. In this study, we propose a method that efficiently measures standard times without the involvement of human analysts, using digital image processing techniques. This method consists of two main steps: a motion representation step and a cycle segmentation step. In the motion representation step, we first detect the motion of any object distinct from its background by differencing two consecutive images separated by a constant time interval. The images thus obtained then pass through an edge detector filter. Finally, the mean coordinates of the significant pixels of the edge image are obtained. Through these processes, the motions of the observed worker are represented by two time-series data of worker location along the horizontal and vertical axes. In the second step, called the cycle segmentation step, we extract the frames that have maximum or minimum coordinates in one cycle, store them in a stack, and calculate each cycle time using these frames. In this step we also consider how to detect work delays due to unexpected events such as the operator leaving the work area or interruptions. To conclude, the experimental results show that the proposed method is very cost-effective and useful for measuring time standards for various work environments.
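
A minimal sketch of the two steps under assumed parameters (frame-differencing threshold, Canny settings, minimum cycle length); the delay-detection logic described in the abstract is omitted.

```python
import cv2
import numpy as np
from scipy.signal import find_peaks

def motion_trace(frames, diff_thresh=25):
    """frames: list of gray-scale images; returns mean x/y of moving edge pixels per frame."""
    xs_mean, ys_mean = [], []
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = cv2.absdiff(curr, prev)                      # motion by frame differencing
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        edges = cv2.Canny(mask, 50, 150)                    # edge detector filter
        ys, xs = np.nonzero(edges)
        xs_mean.append(xs.mean() if xs.size else 0.0)
        ys_mean.append(ys.mean() if ys.size else 0.0)
    return np.array(xs_mean), np.array(ys_mean)

def cycle_times(y_series, fps, min_frames_per_cycle=30):
    # cycle boundaries are taken at local maxima of the vertical-coordinate series
    peaks, _ = find_peaks(y_series, distance=min_frames_per_cycle)
    return np.diff(peaks) / fps                             # seconds per observed cycle
```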

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.3
    • /
    • pp.129-135
    • /
    • 2011
  • In this paper, we propose an algorithm using optical flow and machine learning-based segmentation for the 3D conversion of 2D video. For segmentation that enables successful 3D conversion, we design a new energy function in which color/texture features are incorporated through a machine learning method and optical flow is introduced in order to focus on regions with motion. The depth map is then calculated according to the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
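
A minimal sketch of the kind of energy described, not the paper's learned energy: each pixel is assigned the region label that minimizes a color term plus a weighted motion term. The feature choices and the weight lambda are assumptions.

```python
import numpy as np

def assign_labels(colors, flow, region_means_color, region_means_flow, lam=0.5):
    """colors: HxWx3 array, flow: HxWx2 array, region means: lists of length K."""
    h, w, _ = colors.shape
    k = len(region_means_color)
    energy = np.zeros((h, w, k))
    for i in range(k):
        color_term = np.linalg.norm(colors - region_means_color[i], axis=2)
        motion_term = np.linalg.norm(flow - region_means_flow[i], axis=2)
        energy[:, :, i] = color_term + lam * motion_term    # combined per-pixel energy
    return energy.argmin(axis=2)                            # label with minimum energy
```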

Video object segmentation and frame preprocessing for real-time and high compression MPEG-4 encoding (실시간 고압축 MPEG-4 부호화를 위한 비디오 객체 분할과 프레임 전처리)

  • 김준기;이호석
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.2C
    • /
    • pp.147-161
    • /
    • 2003
  • Video object segmentation is one of the core technologies for a content-based real-time MPEG-4 encoding system. To meet the real-time requirement, the segmentation algorithm should be fast and accurate, but almost all existing algorithms are computationally intensive and not suitable for real-time applications. The MPEG-4 VM (Verification Model) has provided basic algorithms for MPEG-4 encoding, but it has many limitations with respect to practical software development, real-time camera input systems, and compression efficiency. In this paper, we implemented a preprocessing system for real-time camera input and VOP extraction for content-based video coding, and also implemented motion detection to achieve a 180:1 compression ratio for real-time, high-compression MPEG-4 encoding.
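
The motion-detection component might look like the following minimal sketch (assumed thresholds, not the paper's encoder): frames whose change ratio against the previous coded frame is negligible are skipped, which is how a very high effective compression ratio can be reached on mostly static camera input.

```python
import cv2
import numpy as np

def frame_has_motion(prev_gray, curr_gray, pixel_thresh=25, area_ratio=0.01):
    diff = cv2.absdiff(curr_gray, prev_gray)
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed > area_ratio * diff.size       # encode only if enough pixels moved
```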

Non-Prior Training Active Feature Model-Based Object Tracking for Real-Time Surveillance Systems (실시간 감시 시스템을 위한 사전 무학습 능동 특징점 모델 기반 객체 추적)

  • 김상진;신정호;이성원;백준기
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.23-34
    • /
    • 2004
  • In this paper we propose a feature point tracking algorithm using optical flow under a non-prior training active feature model (NPT-AFM). The proposed algorithm mainly focuses on the analysis of non-rigid objects [1], and provides real-time, robust tracking by NPT-AFM. The NPT-AFM algorithm can be divided into two steps: (i) localization of an object of interest and (ii) prediction and correction of the object position by utilizing inter-frame information. The localization step is realized by using a modified Shi-Tomasi feature tracking algorithm [2] after motion-based segmentation. In the prediction-correction step, the given feature points are continuously tracked by using an optical flow method [3], and if a feature point cannot be properly tracked, temporal and spatial prediction schemes are employed for that point until it becomes uncovered again. Feature points inside an object are estimated instead of its shape boundary and are updated as elements of the training set for the AFM. Experimental results show that the proposed NPT-AFM-based algorithm can robustly track non-rigid objects in real time.
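
A minimal sketch of the two tracking ingredients named in the abstract, Shi-Tomasi corners [2] and pyramidal Lucas-Kanade optical flow [3], using OpenCV; the NPT-AFM model update itself is not reproduced, and the detector parameters are assumptions.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts=None):
    if prev_pts is None:
        # Shi-Tomasi corners, ideally restricted to the motion-segmented object region
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                           qualityLevel=0.01, minDistance=7)
    # pyramidal Lucas-Kanade optical flow between consecutive frames
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    good = status.reshape(-1) == 1                # keep only successfully tracked points
    return prev_pts[good], next_pts[good]
```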

Touching Pigs Segmentation and Tracking Verification Using Motion Information (움직임 정보를 이용한 근접 돼지 분리와 추적 검증)

  • Park, Changhyun;Sa, Jaewon;Kim, Heegon;Chung, Yongwha;Park, Daihee;Kim, Hakjae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.4
    • /
    • pp.135-144
    • /
    • 2018
  • The domestic pigsty environment is highly vulnerable to the spread of respiratory diseases such as foot-and-mouth disease because of its small space. In order to manage this issue, a variety of studies have been conducted to automatically analyze the behavior of individual pigs in a pig pen through a video surveillance system using a camera. Even though touching pigs must be correctly segmented in order to track each pig in complex situations such as aggressive behavior, detecting the correct boundaries among touching pigs using the lower-accuracy depth information of a Kinect is a challenging issue. In this paper, we propose a segmentation method using motion information of the touching pigs. In addition, our proposed method can be applied to detect tracking errors when tracking individual pigs in a complex environment. In the experimental results, we confirmed that the touching pigs in a pig farm were separated with an accuracy of 86%, and also confirmed that the tracking errors were detected accurately.
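
One plausible reading of segmentation using motion information, shown only as an assumed sketch rather than the paper's method: the optical-flow vectors of the pixels inside a merged blob are clustered into two motion groups to separate the two touching animals.

```python
import cv2
import numpy as np

def split_touching_blob(prev_gray, curr_gray, blob_mask):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.nonzero(blob_mask)
    samples = flow[ys, xs].astype(np.float32)     # one 2-D flow vector per blob pixel
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 0.5)
    _, labels, _ = cv2.kmeans(samples, 2, None, criteria, 5,
                              cv2.KMEANS_RANDOM_CENTERS)
    split = np.zeros_like(blob_mask, dtype=np.uint8)
    split[ys, xs] = labels.reshape(-1) + 1        # 1 and 2 mark the two separated pigs
    return split
```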

Parametrized Construction of Virtual Drivers' Reach Motion to Seat Belt (매개변수로 제어가능한 운전자의 안전벨트 뻗침 모션 생성)

  • Seo, Hye-Won;Cordier, Frederic;Choi, Woo-Jin;Choi, Hyung-Yun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.16 no.4
    • /
    • pp.249-259
    • /
    • 2011
  • In this paper we present our work on the parameterized construction of virtual drivers' reach motion to the seat belt, using motion capture data. A user can generate a new reach motion by controlling a number of parameters. We approach the problem by using multiple sets of example reach motions and learning the relation between the labeling parameters and the motion data. The work is composed of three tasks. First, we construct a motion database using multiple sets of labeled motion clips obtained with a motion capture device. This involves removing the redundancy of each motion clip by using PCA (Principal Component Analysis) and establishing temporal correspondence among different motion clips by automatic segmentation and piecewise time warping of each clip. Next, we compute motion blending functions by learning the relation between the labeling parameters (age, hip base point (HBP), and height) and the motion parameters as represented by a set of PC coefficients. At runtime, on-line motion synthesis is accomplished by evaluating the motion blending function with the user-supplied control parameters.
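
A minimal sketch of the PCA-plus-blending pipeline under the assumption of a simple linear blending function (the abstract does not specify this exact learner): example clips are reduced to PC coefficients, a map from the labeling parameters to those coefficients is fitted, and a new motion is synthesized for unseen parameters.

```python
import numpy as np

def fit_motion_model(motions, params, n_pc=5):
    """motions: (N, D) flattened, time-warped clips; params: (N, 3) labels (age, HBP, height)."""
    mean = motions.mean(axis=0)
    centered = motions - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_pc]                              # principal motion components
    coeffs = centered @ basis.T                    # (N, n_pc) PC coefficients
    p = np.hstack([params, np.ones((len(params), 1))])
    w, *_ = np.linalg.lstsq(p, coeffs, rcond=None) # linear blending function
    return mean, basis, w

def synthesize_motion(mean, basis, w, age, hbp, height):
    c = np.array([age, hbp, height, 1.0]) @ w      # predicted PC coefficients
    return mean + c @ basis                        # reconstructed motion vector
```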

Tracking Regional Left Ventricular Wall Motion With Color Kinesis in Echocardiography (심초음파에서 국소 좌심실벽 운동 추적을 위한 Color Kinesis 구현에 관한 연구)

  • Shin, D.K.;Kim, D.Y.;Choi, K.H.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1997 no.11
    • /
    • pp.579-582
    • /
    • 1997
  • Two-dimensional echocardiography is widely used to evaluate regional wall motion abnormality because of its ability to depict left ventricular wall motion. A new method, color kinesis, is a technology for the echocardiographic assessment of left ventricular wall motion. In this paper, we propose an algorithm for color kinesis that is based on acoustic quantification and automatically detects endocardial motion during systole on a frame-by-frame basis. The echocardiograms were obtained in short-axis views in normal subjects. An automated edge detection and endocardial contour tracing algorithm was applied to each frame, quantitative analysis based on segmentation was performed, and pre-defined color overlays were superimposed on the gray-scale images. Segmental analysis of color kinesis provided automated, quantitative diagnosis of regional wall motion abnormality.
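
A minimal sketch of the color kinesis principle, with simple intensity thresholding standing in for acoustic quantification (an assumption, not the paper's detector): each pixel is stamped with the systolic frame at which the endocardial border passes over it, and the timing map is rendered as a color overlay.

```python
import cv2
import numpy as np

def color_kinesis_map(systole_frames, blood_thresh=60):
    """systole_frames: gray-scale short-axis frames from end-diastole to end-systole."""
    h, w = systole_frames[0].shape
    timing = np.zeros((h, w), dtype=np.float32)
    prev_blood = systole_frames[0] < blood_thresh           # blood pool at end-diastole
    for i, frame in enumerate(systole_frames[1:], start=1):
        blood = frame < blood_thresh
        newly_tissue = prev_blood & ~blood                  # pixels the wall moved over
        timing[newly_tissue & (timing == 0)] = i            # stamp first transition frame
        prev_blood = blood
    scale = 255.0 / max(len(systole_frames) - 1, 1)
    overlay = cv2.applyColorMap((timing * scale).astype(np.uint8), cv2.COLORMAP_JET)
    return overlay                                          # color-coded wall-motion timing
```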
