• Title/Summary/Keyword: Motion segmentation


Video Object Extraction Using Contour Information (윤곽선 정보를 이용한 동영상에서의 객체 추출)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.33-45 / 2011
  • In this paper, we present a method for extracting video objects efficiently by using a modified graph cut algorithm based on contour information. First, we extract the object in the first frame by an automatic object extraction algorithm or by user interaction. To estimate the object's contour in the current frame, the motion of the object's contour in the previous frame is analyzed. Block-based histogram back-projection is conducted along the estimated contour points, and color models of the object and the background are generated from the back-projection images. The probabilities of links between neighboring pixels are determined by a logarithm-based distance transform map obtained from the estimated contour image. The energy of the graph is defined by the color models and the logarithmic distance transform map, and the object is finally extracted by minimizing this energy. Experimental results on various test images show that our algorithm works more accurately than other methods.
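
The abstract's pipeline (contour propagation, color models from back-projection, a distance-transform term, graph-cut energy minimization) can be illustrated with the minimal Python/OpenCV sketch below. It is not the authors' modified graph cut: it substitutes OpenCV's GrabCut for the custom energy, and the band width `band_px` and helper `refine_object` are hypothetical.

```python
# Sketch only: propagate the previous frame's object mask and refine it with a
# graph cut (OpenCV GrabCut), treating a distance-transform band around the old
# contour as the uncertain region -- an approximation of the paper's idea.
import cv2
import numpy as np

def refine_object(frame_bgr, prev_mask, band_px=15):
    """frame_bgr: current frame; prev_mask: binary (0/255) object mask from the previous frame."""
    # Pixel distance from the previous object boundary.
    boundary = cv2.morphologyEx(prev_mask, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
    dist = cv2.distanceTransform((boundary == 0).astype(np.uint8), cv2.DIST_L2, 3)

    # Build a trimap: sure foreground/background far from the contour, unknown near it.
    gc_mask = np.full(prev_mask.shape, cv2.GC_PR_BGD, np.uint8)
    gc_mask[(prev_mask > 0) & (dist > band_px)] = cv2.GC_FGD
    gc_mask[(prev_mask == 0) & (dist > band_px)] = cv2.GC_BGD
    gc_mask[dist <= band_px] = cv2.GC_PR_FGD

    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, gc_mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    fg = (gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD)
    return (fg * 255).astype(np.uint8)
```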

Gait Recognition Using Multiple Feature Detection (다중 특징점 검출을 이용한 보행인식)

  • Cho, Woon;Kim, Dong-Hyeon;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.84-92 / 2007
  • Gait recognition is presented for human identification from a sequence of noisy silhouettes segmented from video captured at a distance. The proposed gait recognition algorithm outperforms the baseline algorithm because it segments the object using multiple modules: i) motion detection, ii) object region detection, iii) head detection, and iv) active shape models, which address the baseline algorithm's weaknesses in background modeling and shadow removal and improve the recognition rate. For the experiments, we used the HumanID Gait Challenge data set, the largest gait benchmarking data set, with 122 subjects. For realistic simulation we vary the following parameters: i) viewpoint, ii) shoe, iii) surface, iv) carrying condition, and v) time.
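
As a rough illustration of the silhouette-extraction stage such a pipeline relies on, the sketch below uses background subtraction for motion detection and keeps the largest connected component as the walker's region; the paper's head detection and active shape model modules are omitted, and all parameter values are illustrative.

```python
# Sketch: silhouette extraction via background subtraction (motion detection)
# followed by largest-component selection (object region detection).
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=True)

def walker_silhouette(frame_bgr):
    fg = subtractor.apply(frame_bgr)
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]       # drop the shadow label (127)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    if n < 2:
        return None                                              # no moving object found
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])         # skip background label 0
    return (labels == biggest).astype(np.uint8) * 255
```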

A Smoke Detection Method based on Video for Early Fire-Alarming System (조기 화재 경보 시스템을 위한 비디오 기반 연기 감지 방법)

  • Truong, Tung X.;Kim, Jong-Myon
    • The KIPS Transactions:PartB / v.18B no.4 / pp.213-220 / 2011
  • This paper proposes an effective four-stage smoke detection method based on video that provides an emergency response to unexpected hazards in early fire-alarming systems. In the first stage, an approximate median method is used to segment moving regions in the current frame of the video. In the second stage, color segmentation of smoke is performed to select candidate smoke regions from these moving regions. In the third stage, a feature extraction algorithm extracts five feature parameters of smoke by analyzing characteristics of the candidate smoke regions, such as area randomness and the motion of smoke. In the fourth stage, the five extracted parameters are used as input to a K-nearest neighbor (KNN) algorithm to identify whether the candidate regions are smoke or non-smoke. Experimental results indicate that the proposed four-stage method outperforms other algorithms in terms of smoke detection, providing a low false alarm rate and high reliability in open and large spaces.
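
Two of the four stages translate directly into short code: the approximate-median background model for segmenting moving regions, and a K-nearest-neighbor classifier over the five-dimensional smoke feature vectors. The sketch below is an interpretation of the abstract, not the authors' implementation; thresholds and feature contents are placeholders.

```python
# Sketch: approximate median background update + KNN classification of candidates.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def update_approx_median(background, frame):
    """Approximate median filter: nudge each background pixel one level toward the new frame."""
    background = background + np.sign(frame.astype(np.int16) - background)
    return np.clip(background, 0, 255)

def moving_regions(background, frame, thresh=20):
    """Binary mask of pixels that differ noticeably from the background estimate."""
    return (np.abs(frame.astype(np.int16) - background) > thresh).astype(np.uint8)

# Stage 4: classify candidate regions from their 5-D feature vectors
# (area randomness, motion, ... -- the exact features are defined in the paper).
knn = KNeighborsClassifier(n_neighbors=5)
# knn.fit(train_features, train_labels)        # train_features: (N, 5), labels: smoke / non-smoke
# is_smoke = knn.predict(candidate_features)
```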

A Forest Fire Detection Algorithm Using Image Information (영상정보를 이용한 산불 감지 알고리즘)

  • Seo, Min-Seok;Lee, Choong Ho
    • Journal of the Institute of Convergence Signal Processing / v.20 no.3 / pp.159-164 / 2019
  • Detecting wildfire using only the color information in images is very difficult. This paper proposes an algorithm that detects forest fire regions by analyzing the color and motion of regions in video containing forest fire. The proposed algorithm removes the background using a Gaussian-mixture-based background segmentation algorithm, which does not depend on lighting conditions. In addition, the RGB channels are converted to HSV to extract flame candidates based on color. An extracted flame candidate is judged not to be a flame if its region moves while it is labeled and tracked. If the flame candidate regions extracted in this way remain in the same position for more than two minutes, they are regarded as flames. Experimental results using the implemented algorithm confirmed its validity.
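
A minimal sketch of the detection chain, assuming OpenCV's MOG2 as the Gaussian-mixture background model and an illustrative HSV range for flame colors (the paper's exact thresholds are not given in the abstract):

```python
# Sketch: Gaussian-mixture background removal + HSV flame-color gating.
import cv2
import numpy as np

mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def flame_candidates(frame_bgr, min_area=50):
    fg = mog2.apply(frame_bgr)                                 # foreground mask, robust to lighting
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    color = cv2.inRange(hsv, (0, 100, 150), (35, 255, 255))    # hypothetical reddish/orange range
    cand = cv2.bitwise_and(color, color, mask=fg)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(cand)
    return [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > min_area]

# Per the abstract, a candidate whose tracked label stays at the same position for
# more than two minutes is finally declared a flame; a moving candidate is rejected.
```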

The Modified Block Matching Algorithm for a Hand Tracking of an HCI system (HCI 시스템의 손 추적을 위한 수정 블록 정합 알고리즘)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services / v.4 no.4 / pp.9-14 / 2003
  • A GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). GUI-based interaction has made computers simpler and easier to use; however, it does not easily support the range of interaction necessary to meet users' needs for natural, intuitive, and adaptive interfaces. In this paper, a modified BMA (block matching algorithm) is proposed to track a hand in an image sequence and to recognize it in each video frame, so that the hand can replace a mouse as a pointing device for virtual reality. An HCI system running at 30 frames per second is realized. The modified BMA estimates the hand position and performs segmentation using the orientation of motion and the color distribution of the hand region for real-time processing. The experimental results show that the modified BMA with the YCbCr (luminance Y, component blue, component red) color space guarantees real-time processing and a good recognition rate. Hand tracking by the modified BMA can be applied to virtual reality, games, or HCI systems for the disabled.
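
The core idea, skin-color segmentation in YCbCr followed by block matching to re-locate the hand in the next frame, can be sketched as below. This is a simplified illustration, not the paper's modified BMA; the skin bounds and the exhaustive SAD search are standard textbook choices.

```python
# Sketch: YCbCr skin mask + minimum-SAD block matching within a small search window.
import cv2
import numpy as np

def skin_mask(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))    # common Cr/Cb skin bounds

def track_block(prev_gray, cur_gray, top_left, block=32, search=16):
    """Find the block in cur_gray that best matches the hand block in prev_gray."""
    y0, x0 = top_left
    template = prev_gray[y0:y0 + block, x0:x0 + block].astype(np.int32)
    best_sad, best_pos = None, top_left
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0:
                continue
            cand = cur_gray[y:y + block, x:x + block]
            if cand.shape != template.shape:
                continue
            sad = np.abs(cand.astype(np.int32) - template).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos
```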


Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model (모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템)

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.471-476 / 2016
  • In this paper, a recognition system for continuous human actions based on depth information is described, using motion history images and histograms of oriented gradients together with a spotter model; the spotter model, which performs action spotting, is proposed to improve recognition performance. The system consists of pre-processing, human action and spotter modeling, and continuous human action recognition. In the pre-processing step, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and the human action and spotter modeling step generates sequences from the extracted features. Human action models for each defined action and the proposed spotter model are created from these sequences using hidden Markov models. Continuous human action recognition performs action spotting with the spotter model to separate meaningful from meaningless actions in the continuous action sequence, and then continuously recognizes actions by comparing the probability values of the models for the meaningful action sequences. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system.
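
The feature step (a motion history image summarized by HOG) can be sketched as follows; the action and spotter hidden Markov models themselves are omitted, and the duration and HOG geometry used here are illustrative rather than the paper's values.

```python
# Sketch: motion history image (MHI) update and a HOG descriptor of it as the
# per-frame observation that would be fed to the action/spotter HMMs.
import cv2
import numpy as np

DURATION = 20   # how many frames a motion trace persists in the MHI (illustrative)
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)   # win, block, stride, cell, bins

def update_mhi(mhi, motion_mask, timestamp):
    """Set moving pixels to the current timestamp and forget traces older than DURATION."""
    mhi[motion_mask > 0] = timestamp
    mhi[mhi < timestamp - DURATION] = 0
    return mhi

def mhi_hog_feature(mhi, timestamp):
    vis = np.clip((mhi - (timestamp - DURATION)) / DURATION, 0, 1)   # normalize recency to [0, 1]
    vis = cv2.resize((vis * 255).astype(np.uint8), (64, 64))
    return hog.compute(vis).ravel()                                  # observation vector
```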

Three-Dimensional Conversion of Two-Dimensional Movie Using Optical Flow and Normalized Cut (Optical Flow와 Normalized Cut을 이용한 2차원 동영상의 3차원 동영상 변환)

  • Jung, Jae-Hyun;Park, Gil-Bae;Kim, Joo-Hwan;Kang, Jin-Mo;Lee, Byoung-Ho
    • Korean Journal of Optics and Photonics / v.20 no.1 / pp.16-22 / 2009
  • We propose a method to convert a two-dimensional movie to a three-dimensional movie using normalized cut and optical flow. In this paper, we first segment each image of the two-dimensional movie into objects and then estimate the depth of each object. Normalized cut is an image segmentation algorithm; to improve its speed and accuracy, we use a watershed algorithm and a weight function based on optical flow. We then estimate the depth of the objects segmented by the improved normalized cut using optical flow. Ordinal depth is estimated from the change of the segmented object labels in occluded regions, which correspond to differences in the absolute value of the optical flow. To compensate the ordinal depth, we generate a relational depth, the absolute value of the optical flow treated as motion parallax. The final depth map is determined by multiplying the ordinal depth by the relational depth and dividing by the average optical flow. The proposed two-dimensional/three-dimensional movie conversion method is applicable to all three-dimensional display devices and all two-dimensional movie formats. We present experimental results using sample two-dimensional movies.
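
The depth-assignment rule stated at the end (ordinal depth multiplied by relational depth, divided by average optical flow) can be sketched as below; the segment labels and ordinal ranks are assumed to be given by the improved normalized cut, and `segment_depth` is a hypothetical helper.

```python
# Sketch: per-segment depth from dense optical flow magnitude (motion parallax).
import cv2
import numpy as np

def segment_depth(prev_gray, cur_gray, labels, ordinal_rank):
    """labels: integer segment map; ordinal_rank: dict segment id -> ordinal depth rank."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)                  # |optical flow| per pixel
    mean_flow = mag.mean() + 1e-6
    depth = np.zeros_like(mag)
    for seg, rank in ordinal_rank.items():
        rel = mag[labels == seg].mean()                 # relational depth of this segment
        depth[labels == seg] = rank * rel / mean_flow   # ordinal x relational / average flow
    return depth
```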

Development of Velocity Imaging Method for Motility of Left Ventricle in Gated SPECT (게이트 심근 SPECT에서 좌심실의 운동성 분석을 위한 속도영상화 기법 개발)

  • Jo, Mi-Jung;Lee, Byeong-Il;Choi, Hyun-Ju;Hwang, Hae-Gil;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.9 no.7 / pp.808-817 / 2006
  • Although the commonly used Doppler velocity index is a very significant factor in the functional evaluation of the left ventricle, it depends on the subjective evaluation of the inspector. Objective motility data can be obtained from gated myocardial SPECT images by quantitative analysis; however, it is difficult to visualize the velocity of the motion. The aim of our study is to develop a new method for imaging velocity using gated myocardial SPECT images and to use it as an evaluation index for analyzing motility. First, we visualized the left ventricle in three dimensions using the coordinates of points obtained by segmenting the myocardium. Each point was rendered in a different color according to its velocity. We performed a validation study using 7 normal subjects and 15 myocardial infarction patients, analyzing motility with the average moved distance and the average velocity. In the normal cases, the average moved distance was 4.3 mm and the average velocity was 11.9 mm; in the patient cases, they were 3.9 mm and 10.5 mm, respectively. These results show that the motility of the normal subjects is higher than that of the abnormal subjects. We expect that the proposed method could improve the accuracy and reproducibility of the functional evaluation of the myocardial wall.
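
The motility measures reported above (moved distance and velocity of the segmented surface points across gated frames) reduce to a few lines of array arithmetic. The sketch below assumes point coordinates in millimetres and a known frame interval; `motility` is a hypothetical helper, not code from the paper.

```python
# Sketch: per-point moved distance and mean velocity across gated SPECT frames.
import numpy as np

def motility(points_per_gate, frame_interval_s):
    """points_per_gate: array of shape (n_gates, n_points, 3), coordinates in mm."""
    steps = np.diff(points_per_gate, axis=0)              # displacement between consecutive gates
    step_dist = np.linalg.norm(steps, axis=2)             # mm moved per gate interval
    moved_distance = step_dist.sum(axis=0)                # total path length per point
    velocity = step_dist / frame_interval_s               # mm/s per point per interval
    return moved_distance, velocity.mean(axis=0)          # values that could be mapped to colors
```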


Hidden Markov Model for Gesture Recognition (제스처 인식을 위한 은닉 마르코프 모델)

  • Park, Hye-Sun;Kim, Eun-Yi;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.1 s.307 / pp.17-26 / 2006
  • This paper proposes a novel hidden Markov model (HMM)-based gesture recognition method and applies it to an HCI for controlling a computer game. The novelty of the proposed method is two-fold: 1) it uses a continuous stream of human motion as the input to the HMM instead of isolated or pre-segmented data sequences, and 2) gesture segmentation and recognition are performed simultaneously. The proposed method consists of a single HMM composed of thirteen gesture-specific HMMs that independently recognize certain gestures. It takes a continuous stream of pose symbols as input, where a pose is composed of coordinates indicating the face, left hand, and right hand. Whenever a new input pose arrives, the HMM continuously updates its state probabilities and recognizes a gesture if the probability of a distinctive state exceeds a predefined threshold. To assess the validity of the proposed method, it was applied to a real game, Quake II, and the results demonstrated that the proposed HMM provides very useful information to enhance the discrimination between different classes and to reduce the computational cost.
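
The recognition rule, continuously updating state probabilities for each incoming pose symbol and firing when a distinctive state crosses a threshold, is illustrated below with a generic discrete-observation HMM forward update. The class and its parameters are placeholders, not the paper's thirteen gesture-specific HMMs.

```python
# Sketch: streaming forward update of HMM state probabilities over pose symbols.
import numpy as np

class StreamingHMM:
    def __init__(self, trans, emit, start, distinctive_state, threshold=0.8):
        self.A, self.B = trans, emit            # transition (S x S) and emission (S x K) matrices
        self.alpha = start.copy()               # current state probabilities
        self.distinctive_state, self.threshold = distinctive_state, threshold

    def step(self, symbol):
        """Consume one pose symbol; return True if the gesture is spotted."""
        self.alpha = (self.alpha @ self.A) * self.B[:, symbol]
        self.alpha /= self.alpha.sum() + 1e-12              # normalize to avoid underflow
        return self.alpha[self.distinctive_state] > self.threshold
```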

Propriety Analysis of Depth-Map Production Methods for Depth-Map-Based 2D to 3D Conversion - The Lost Bladesman (2D to 3D Conversion에서 Depth-Map 기반 제작 사례연구 - '명장 관우' 제작 중심으로 -)

  • Kim, Hyo In;Kim, Hyung Woo
    • Smart Media Journal / v.3 no.1 / pp.52-62 / 2014
  • As three-dimensional displays become widespread, the demand for three-dimensional content is increasing. Since 2010, 2D-to-3D conversion has been presented as an alternative because newly produced content is insufficient to meet this demand. However, 2D-to-3D conversion that emphasizes only the stereo effect has been criticized for causing visual fatigue and degrading quality. In this study, 13 selected scenes from 'The Lost Bladesman' (released in 2011) were converted to three-dimensional content, and expert group interviews and surveys were conducted on the quality of the depth-map-based conversion, the resulting visual fatigue, and its adequacy. The depth-map composition techniques applied to scenes with large motion were analyzed, including the relationship between preceding and following shots, and the depth maps were created in a cascaded, step-by-step configuration. The experiments show that the proposed depth-map-based conversion can lower visual fatigue and improve quality; more than half of the expert group responded positively and accepted the results as reasonable. The study concludes that applying a cascaded depth-map configuration when converting 2D images with rapid motion into 3D reduces visual fatigue, increases efficiency, and preserves three-dimensional perception.