• Title/Summary/Keyword: Motion Detection (모션 검출)


Fire-Smoke Detection Based on Video using Dynamic Bayesian Networks (동적 베이지안 네트워크를 이용한 동영상 기반의 화재연기감지)

  • Lee, In-Gyu;Ko, Byung-Chul;Nam, Jae-Yeol
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.4C / pp.388-396 / 2009
  • This paper proposes a new fire-smoke detection method using features extracted from camera images and a pattern recognition technique. First, moving regions are detected by analyzing the frame difference between two consecutive images, and candidate smoke regions are generated by applying a smoke color model. A smoke region generally has a few characteristic properties, such as uniform color, simple texture, and upward motion. Based on these characteristics, we extract brightness, wavelet high-frequency energy, and motion vectors as features. Probability density functions of the three features are estimated from training data. These probabilistic models of the smoke region are then applied to the observation nodes of the proposed Dynamic Bayesian Network (DBN) to account for temporal continuity. The proposed algorithm was successfully applied to various fire-smoke tasks, covering both forest smoke and other real-world smoke scenes, and showed better detection performance than previous methods.
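
The frame-difference and smoke-color stage described in this abstract can be approximated with a few OpenCV calls. The sketch below is a minimal illustration rather than the authors' implementation; the HSV smoke-color thresholds (low saturation, mid-to-high brightness for grayish smoke), the difference threshold, and the minimum region area are assumed values.

```python
import cv2
import numpy as np

def candidate_smoke_regions(prev_frame, curr_frame,
                            diff_thresh=15, min_area=200):
    """Return bounding boxes of moving, smoke-colored regions (illustrative)."""
    # 1. Moving regions from the difference of two consecutive frames.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(prev_gray, curr_gray)
    _, motion_mask = cv2.threshold(motion, diff_thresh, 255, cv2.THRESH_BINARY)

    # 2. Smoke color model: grayish pixels (low saturation, mid/high brightness).
    hsv = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, (0, 0, 80), (180, 60, 255))

    # 3. Candidate smoke regions = moving AND smoke-colored.
    candidate = cv2.bitwise_and(motion_mask, color_mask)
    candidate = cv2.morphologyEx(candidate, cv2.MORPH_CLOSE,
                                 np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```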

Soccer Video Highlight Building Algorithm using Structural Characteristics of Broadcasted Sports Video (스포츠 중계 방송의 구조적 특성을 이용한 축구동영상 하이라이트 생성 알고리즘)

  • 김재홍;낭종호;하명환;정병희;김경수
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.727-743 / 2003
  • This paper proposes an automatic highlight building algorithm for soccer video that exploits a structural characteristic of broadcast sports video: an interesting (or important) event, such as a goal or a foul, is followed by a replay shot surrounded by gradual shot-change effects such as wipes. This shot-editing rule is used to analyze the structure of broadcast soccer video and to extract the shots containing important events for building a highlight. The algorithm first uses the spatio-temporal image of the video to detect wipe transition effects and zoom-out/in shot changes, which are then used to detect replay shots. However, using the spatio-temporal image alone to detect wipe transitions requires too many computational resources, and the algorithm must be changed whenever the wipe pattern changes. To solve these problems, a two-pass detection algorithm and a pixel sub-sampling technique are proposed. Furthermore, to detect zoom-out/in shot changes and replay shots more precisely, the green-area ratio and the motion energy are also computed in the proposed scheme. Finally, highlight shots composed of event and player shots are extracted using the pre-detected replay shots and zoom-out/in shot-change points. The proposed algorithm will be useful for web or broadcasting services that require abstracted soccer video.
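
One of the cues mentioned above, the green-area ratio that helps separate long field shots from close-ups and replays, can be computed directly from an HSV color mask. The snippet below is a rough sketch; the hue range for the pitch (roughly 35-85 on OpenCV's 0-179 hue scale) and the 0.5 decision threshold are assumed values, not taken from the paper.

```python
import cv2

def green_area_ratio(frame_bgr, lo=(35, 60, 40), hi=(85, 255, 255)):
    """Fraction of pixels whose HSV color falls in an assumed grass-green range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo, hi)
    return cv2.countNonZero(mask) / float(mask.size)

# A long (field) shot typically has a high ratio, a close-up or replay a low one.
# shot_type = "field" if green_area_ratio(frame) > 0.5 else "close-up"
```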

Shot Motion Classification Using Partial Decoding of INTRA Picture in Compressed Video (압축비디오에서 인트라픽쳐 부분 복호화를 이용한 샷 움직임 분류)

  • Kim, Kang-Wook;Kwon, Seong-Geun
    • Journal of Korea Multimedia Society / v.14 no.7 / pp.858-865 / 2011
  • To allow users to efficiently browse, select, and retrieve a desired part of a video without having to deal directly with gigabytes of compressed data, shot motion characteristics must be classified as a preparation for such interaction. Organizing video information in a video database requires segmenting a video into its constituent shots and then characterizing each shot in terms of content and camera movement. The conventional approach to classifying shot motion is to use motion vector components; however, this has limited ability to estimate global camera motion because motion vectors only represent local movement. For shot classification based on motion information, we propose a new scheme that partially decodes INTRA pictures and compares the x and y displacement-vector curves between the decoded I-frame and the following P-frame in the compressed video data.
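
A rough way to mimic the displacement comparison between a decoded I-frame and the following P-frame is to correlate the row and column intensity projections of the two frames. The sketch below is only an approximation of that idea: it estimates a single global x/y shift rather than the full displacement curves the paper describes, and the classification thresholds are assumed.

```python
import numpy as np

def global_shift(frame_a, frame_b, max_shift=32):
    """Estimate global (dx, dy) between two grayscale frames by
    matching their column and row intensity projections."""
    def best_shift(p, q):
        shifts = range(-max_shift, max_shift + 1)
        scores = []
        for s in shifts:
            if s >= 0:
                a, b = p[s:], q[:len(q) - s]
            else:
                a, b = p[:s], q[-s:]
            scores.append(-np.mean((a - b) ** 2))   # smaller error = better match
        return list(shifts)[int(np.argmax(scores))]

    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    dx = best_shift(a.mean(axis=0), b.mean(axis=0))  # column projections -> horizontal shift
    dy = best_shift(a.mean(axis=1), b.mean(axis=1))  # row projections -> vertical shift
    return dx, dy

def classify_shot_motion(dx, dy, thresh=4):
    """Crude pan/tilt/static decision from the estimated global shift."""
    if abs(dx) < thresh and abs(dy) < thresh:
        return "static"
    return "pan" if abs(dx) >= abs(dy) else "tilt"
```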

Capture of Foot Motion for Real-time Virtual Wearing by Stereo Cameras (스테레오 카메라로부터 실시간 가상 착용을 위한 발동작 검출)

  • Jung, Da-Un;Yun, Yong-In;Choi, Jong-Soo
    • Journal of Korea Multimedia Society / v.11 no.11 / pp.1575-1591 / 2008
  • In this paper, we propose a new method for capturing foot motion so that a 3D virtual foot model can be overlaid in real time using stereo cameras. To overlay the virtual model at the exact position of the foot, the foot's joints must be detected and their motion tracked continuously, and accurately registering the virtual model to the user's foot during complicated motion is the most important problem in this technology. We propose a dynamic registration scheme that uses two types of marker groups. The plane information of the ground handles the relationship between the virtual foot model and the user's foot and provides the foot's pose and location, while the foot's rotation is predicted from two marker groups attached along the center framework of the instep. We implemented the proposed system and evaluated the accuracy of the method through various experiments.
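
The core registration step, aligning the virtual foot model to the detected marker positions, amounts to estimating a rigid transform between two 3D point sets. The following sketch shows a standard SVD-based (Kabsch-style) solution; it is a generic illustration that assumes corresponding marker positions have already been triangulated from the stereo pair, not the paper's exact two-marker-group scheme.

```python
import numpy as np

def rigid_transform(model_pts, observed_pts):
    """Least-squares rotation R and translation t mapping model_pts onto
    observed_pts (both Nx3 arrays of corresponding 3D marker positions)."""
    cm = model_pts.mean(axis=0)
    co = observed_pts.mean(axis=0)
    H = (model_pts - cm).T @ (observed_pts - co)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # fix a reflection, if any
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = co - R @ cm
    return R, t

# The virtual foot model is then rendered with pose (R, t) so it overlaps the foot.
```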


Optical Flow Based Vehicle Counting and Speed Estimation in CCTV Videos (Optical Flow 기반 CCTV 영상에서의 차량 통행량 및 통행 속도 추정에 관한 연구)

  • Kim, Jihae;Shin, Dokyung;Kim, Jaekyung;Kwon, Cheolhee;Byun, Hyeran
    • Journal of Broadcast Engineering / v.22 no.4 / pp.448-461 / 2017
  • This paper proposes a vehicle counting and speed estimation method for traffic situation analysis in road CCTV videos. The proposed method removes perspective distortion from the images using inverse perspective mapping and obtains specific regions for counting and speed estimation using a lane detection algorithm. Vehicle counts and speed estimates are then obtained by applying optical flow within these regions. The proposed method achieves a stable accuracy of 88.94% on CCTV footage from several regional groups, applied to 106,993 frames in total, or about 3 hours of video.
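
The flow-in-a-region step can be illustrated with dense Farneback optical flow restricted to a lane mask. The sketch below is a simplified stand-in for the paper's pipeline: it assumes the frames were already rectified by inverse perspective mapping, and the meters-per-pixel scale, motion threshold, and lane mask are assumed inputs.

```python
import cv2
import numpy as np

def lane_flow_speed(prev_bgr, curr_bgr, lane_mask, m_per_px, fps):
    """Average speed (km/h) of motion inside lane_mask between two frames.
    Assumes both frames were already warped by inverse perspective mapping."""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)           # pixel displacement per frame
    moving = (mag > 1.0) & (lane_mask > 0)       # assumed motion threshold
    if not np.any(moving):
        return 0.0
    mean_px = mag[moving].mean()
    return mean_px * m_per_px * fps * 3.6        # px/frame -> m/s -> km/h
```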

Development of Dental Light Robotic System using Image Processing Technology (영상처리 기술을 이용한 치과용 로봇 조명장치의 개발)

  • Moon, Hyun-Il;Kim, Myoung-Nam;Lee, Kyu-Bok
    • Journal of Dental Rehabilitation and Applied Science / v.26 no.3 / pp.285-296 / 2010
  • Robot-assisted illuminating equipment based on image-processing technology was developed and its accuracy was measured. The system was designed to detect the patient's face with a camera and to illuminate it using a robot-assisted light. It consists of a motion control component, a light control component, and an image-processing component. Images were captured with a camera, and from the acquired images those showing motion change were extracted using the AdaBoost algorithm. In the detection experiment targeting the patient's oral cavity, a higher facial recognition rate was obtained from the frontal view, and the light robot arm was controlled stably.
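
The face-detection stage can be reproduced with OpenCV's Haar cascade classifier, which is itself an AdaBoost-based detector. The sketch below is a generic illustration rather than the authors' system: it only finds the face center toward which a light robot arm would be steered, and it uses OpenCV's standard frontal-face cascade file.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_target(frame_bgr):
    """Return the (x, y) center of the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face box
    return (x + w // 2, y + h // 2)   # point for the light arm to aim at
```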

3D Object's shape and motion recovery using stereo image and Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB / v.10B no.2 / pp.135-142 / 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. The object's 21 feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object using a stereo paraperspective camera model and a sequential SVD (Singular Value Decomposition) factorization method. The inherent depth-recovery error of the paraperspective camera model is removed by stereo image analysis. A 3D synthetic object with 21 features placed at various positions was designed and tested to show the performance of the proposed algorithm by comparing the recovered shape and motion data with measured values.
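
The factorization step can be sketched with the classical rank-3 SVD decomposition of the measurement matrix (Tomasi-Kanade style): under a paraperspective/affine approximation, the centered 2F x P matrix of tracked feature coordinates factors into a motion part and a shape part. This is a generic affine-factorization illustration, not the paper's stereo paraperspective formulation.

```python
import numpy as np

def factorize_shape_motion(W):
    """W: 2F x P measurement matrix (x-rows then y-rows of P tracked points
    over F frames). Returns an affine motion matrix M (2F x 3) and shape
    S (3 x P), up to an unresolved 3x3 affine ambiguity."""
    W_centered = W - W.mean(axis=1, keepdims=True)   # remove per-row translation
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    M = U[:, :3] @ np.diag(np.sqrt(s[:3]))           # camera/motion factor
    S = np.diag(np.sqrt(s[:3])) @ Vt[:3, :]          # 3D shape factor
    return M, S
```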

Video Digital Doorlock System for Recognition and Transmission of Approaching Objects (접근객체 인식 및 전송을 위한 영상 디지털 도어락 시스템 설계)

  • Lee, Sang-Rack;Park, Jin-Tae;Woo, Byoung-Hyoun;Choi, Han-Go
    • KIPS Transactions on Software and Data Engineering / v.3 no.6 / pp.237-242 / 2014
  • Current digital door lock systems are designed mainly for user convenience, so they are weak in terms of security. This paper therefore proposes a video digital doorlock system composed of a relay device, a server, and a digital doorlock equipped with a camera, sensors, and communication modules, which detects and recognizes objects approaching the front of the doorlock and sends images and door-opening information to users' smart devices. Experiments showed that the proposed system achieves a 96~98% recognition rate for approaching objects and requires 17.1~23.9 seconds for transmission on average, depending on the network. The system is therefore considered capable of real-time security response by monitoring the area in front of the doorlock.
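
The detection step on the doorlock side can be approximated with a background-subtraction motion detector that triggers a snapshot when something large enough enters the camera's view. This is a minimal sketch under assumed thresholds, not the paper's recognition and transmission pipeline.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

def approaching_object(frame_bgr, min_area_ratio=0.05):
    """True if a moving object covers at least min_area_ratio of the frame."""
    fg = subtractor.apply(frame_bgr)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop MOG2 shadow pixels
    return cv2.countNonZero(fg) >= min_area_ratio * fg.size

# When approaching_object(frame) is True, the doorlock would save the frame
# and forward it through the relay device and server to the user's smart device.
```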

People Tracking and Accompanying Algorithm for Mobile Robot Using Kinect Sensor and Extended Kalman Filter (키넥트센서와 확장칼만필터를 이용한 이동로봇의 사람추적 및 사람과의 동반주행)

  • Park, Kyoung Jae;Won, Mooncheol
    • Transactions of the Korean Society of Mechanical Engineers A / v.38 no.4 / pp.345-354 / 2014
  • In this paper, we propose a real-time algorithm for estimating the relative position and velocity of a person with respect to a robot using a Kinect sensor and an extended Kalman filter (EKF). Additionally, we propose an algorithm for controlling the robot in the proximity of a person in a variety of modes. The algorithm detects the head and shoulder regions of the person using a histogram of oriented gradients (HOG) and a support vector machine (SVM). The EKF then estimates the relative positions and velocities of the person with respect to the robot from the data acquired by the Kinect sensor. We tested the various proximity movement modes with a human subject in indoor environments, and the accuracy of the algorithm was verified using a motion capture system.
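
The perception side of this pipeline can be sketched with OpenCV's pretrained HOG+SVM people detector feeding a constant-velocity Kalman filter. Note that this simplification uses a linear Kalman filter on image-plane coordinates rather than the paper's EKF on Kinect depth data, and all noise parameters are assumed values.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Constant-velocity Kalman filter on the (x, y) image position of the person.
kf = cv2.KalmanFilter(4, 2)                        # state: x, y, vx, vy
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2    # assumed noise levels
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1.0

def track_person(frame_bgr):
    """Predict the person's position; correct it when HOG+SVM finds a person."""
    prediction = kf.predict()
    rects, _ = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    if len(rects) > 0:
        x, y, w, h = max(rects, key=lambda r: r[2] * r[3])  # largest detection
        measurement = np.array([[x + w / 2], [y + h / 2]], np.float32)
        kf.correct(measurement)
    return prediction[:2].ravel()                  # estimated (x, y)
```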

Statistical Model for Emotional Video Shot Characterization (비디오 셧의 감정 관련 특징에 대한 통계적 모델링)

  • 박현재;강행봉
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.12C / pp.1200-1208 / 2003
  • Affective computing plays an important role in intelligent human-computer interaction (HCI). To detect emotional events, it is desirable to construct a computational model for extracting emotion-related features from video. In this paper, we propose a statistical model based on the probabilistic distribution of low-level features in video shots. The proposed method extracts low-level features from video shots and then forms a GMM (Gaussian Mixture Model) over them to detect emotional shots. As low-level features, we use color, camera motion, and the sequence of shot lengths. The features are modeled as a GMM using the EM (Expectation-Maximization) algorithm, and the relations between time and emotions are estimated by MLE (Maximum Likelihood Estimation). Finally, the two statistical models are combined in a Bayesian framework to detect emotional events in video.
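
The EM-fitted GMM over low-level shot features can be illustrated with scikit-learn's GaussianMixture, which is estimated by EM. The feature vector layout (mean color, motion magnitude, shot length), the placeholder data, and the number of mixture components are assumptions for this sketch, not the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Each row is one shot: assumed features [mean hue, mean saturation,
# camera-motion magnitude, shot length in seconds].
shot_features = np.random.rand(200, 4)            # placeholder training data

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(shot_features)                            # parameters estimated by EM

# Per-shot log-likelihood; shots scoring high under an "emotional" component
# (combined with a temporal model in a Bayesian framework) would be flagged.
log_likelihood = gmm.score_samples(shot_features)
```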