• Title/Summary/Keyword: Faint Recognition


A Study on the Recognition System of Faint Situation based on Bimodal Information (바이모달 정보를 이용한 기절상황인식 시스템에 관한 연구)

  • So, In-Mi;Jung, Sung-Tae
    • Journal of Korea Multimedia Society / v.13 no.2 / pp.225-236 / 2010
  • This study proposes a method for recognizing emergency situations from the bimodal information of a camera image sensor and a gravity sensor. Through mutual cooperation and compensation between the sensors, the method can recognize an emergency even when one sensor malfunctions, when the user does not carry the gravity sensor, or in places such as a bathroom where camera images are hard to acquire. An HMM (Hidden Markov Model) based learning and recognition algorithm was implemented to recognize actions such as walking, sitting on the floor, sitting on a sofa, lying down, and fainting. The recognition rate improved when image feature vectors and gravity feature vectors were combined in the learning and recognition process. The method also maintains a high recognition rate under varying illumination by detecting the moving object through an adaptive background model.
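The key technique named in the abstract, HMM-based recognition over combined image and gravity feature vectors, can be illustrated with a minimal sketch. This is not the authors' code: it assumes the hmmlearn library, stubs feature extraction with synthetic data, and treats the feature dimensions, state count, and training set sizes as placeholders.

```python
# Minimal sketch of bimodal HMM action recognition (not the paper's implementation).
# Assumptions: hmmlearn is available; per-frame features are already extracted;
# synthetic data stands in for real image/gravity features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

ACTIONS = ["walking", "sitting_floor", "sitting_sofa", "lying", "fainting"]

def combine(image_feats, gravity_feats):
    """Concatenate per-frame image and gravity feature vectors (bimodal fusion)."""
    return np.hstack([image_feats, gravity_feats])

def train_models(sequences_by_action, n_states=4):
    """Fit one Gaussian HMM per action class on its training sequences."""
    models = {}
    for action, seqs in sequences_by_action.items():
        X = np.vstack(seqs)                      # all frames stacked
        lengths = [len(s) for s in seqs]         # per-sequence frame counts
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[action] = m
    return models

def classify(models, sequence):
    """Return the action whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda a: models[a].score(sequence))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical training data: 5 sequences per action, 30 frames each,
    # 8 image features + 3 gravity features per frame.
    train = {a: [combine(rng.normal(i, 1.0, (30, 8)), rng.normal(i, 1.0, (30, 3)))
                 for _ in range(5)]
             for i, a in enumerate(ACTIONS)}
    models = train_models(train)
    test = combine(rng.normal(4, 1.0, (30, 8)), rng.normal(4, 1.0, (30, 3)))
    print("Predicted action:", classify(models, test))
```

Classification picks the model with the highest log-likelihood for the test sequence, the usual decision rule for HMM-based action recognition, and concatenating the two feature streams is where the bimodal combination enters.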

Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio (모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of Korea Multimedia Society / v.17 no.5 / pp.556-565 / 2014
  • This paper proposes a method for tracking an object in real-time video transferred through a single web camera and for recognizing human actions and risk situations. The proposed method recognizes basic actions that a person performs in daily life and detects risk situations such as fainting and falling down, thereby separating usual actions from risk situations. It models the background, obtains the difference image between the input image and the modeled background, extracts the human object from the input image, tracks the object's motion, and recognizes the action. Tracking uses the moment information of the extracted object, and the recognition features are the change of moments and the ratio of the object's size between frames. Four everyday actions are classified, namely walking, walking diagonally, sitting down, and standing up, while a sudden fall is classified as a risk situation. To evaluate the method, it was applied to web-camera video of eight participants to classify actions and recognize risk situations. The test results showed a recognition rate above 97 percent for each action and 100 percent for the risk situation.
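The pipeline the abstract describes (background modeling, difference image, moment-based tracking, and a size-ratio check between frames) can be sketched with OpenCV. This is an illustrative sketch, not the paper's code: the MOG2 background model and the numeric thresholds are stand-ins for the paper's own background model and decision values.

```python
# Sketch of moment-based tracking with a bounding-box ratio check for sudden falls.
# Not the paper's implementation; thresholds are illustrative only.
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def analyze_frame(frame, prev):
    """Return ((centroid, ratio), risk_flag) for the largest moving object."""
    fg = bg.apply(frame)                                   # difference vs. background model
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return prev, False
    obj = max(contours, key=cv2.contourArea)
    m = cv2.moments(obj)                                   # object moments
    if m["m00"] == 0:
        return prev, False
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]      # centroid from moments
    x, y, w, h = cv2.boundingRect(obj)
    ratio = w / float(h)                                   # object size (width/height) ratio
    risk = False
    if prev is not None:
        (_, pcy), pratio = prev
        # Heuristic: a large vertical jump of the centroid plus the box flipping
        # from tall to wide between frames suggests a sudden fall.
        if abs(cy - pcy) > 40 and pratio < 1.0 and ratio > 1.3:
            risk = True
    return ((cx, cy), ratio), risk

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # single web camera, as in the paper
    state = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        state, risk = analyze_frame(frame, state)
        if risk:
            print("Risk situation: possible fall or faint detected")
    cap.release()
```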

Emergency situations Recognition System Using Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템)

  • Kim, Young-Un;Kang, Sun-Kyung;So, In-Mi;Han, Dae-Kyung;Kim, Yoon-Jin;Jung, Sung-Tae
    • Proceedings of the IEEK Conference / 2008.06a / pp.757-758 / 2008
  • This paper proposes an emergency recognition system that uses multimodal information extracted by an image processing module, a voice processing module, and a gravity sensor processing module. Each module detects predefined events such as moving, stopping, and fainting and transfers them to the multimodal integration module. The integration module recognizes an emergency situation from the transferred events and rechecks it by asking the user a question and recognizing the answer. Experiments were conducted for fainting motions in a living room and a bathroom. The results show that the proposed system is more robust than previous methods and effectively recognizes emergency situations in various settings.
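A hedged sketch of the integration step follows: per-modality modules emit events, and the integrator treats a detected faint as an emergency only if a spoken confirmation check fails. The Event structure and the ask_user callback are hypothetical interfaces, not the paper's.

```python
# Sketch of multimodal event integration with a voice recheck (hypothetical interfaces).
from dataclasses import dataclass
import time

@dataclass
class Event:
    source: str      # "image", "voice", or "gravity"
    kind: str        # e.g. "moving", "stopping", "fainting"
    timestamp: float

def integrate(events, ask_user):
    """Fuse module events; recheck a suspected faint by questioning the user.

    ask_user(question) should return the recognized spoken answer, or None
    when no intelligible answer is heard (hypothetical audio callback).
    """
    if not any(e.kind == "fainting" for e in events):
        return "normal"
    answer = ask_user("Are you all right?")
    if answer is None or "help" in answer.lower():
        return "emergency"
    return "normal"

if __name__ == "__main__":
    now = time.time()
    events = [Event("image", "stopping", now - 2.0),
              Event("gravity", "fainting", now - 1.0)]
    # Simulate a user who does not answer the spoken question.
    print(integrate(events, ask_user=lambda q: None))   # -> emergency
```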

Design and Implementation of Emergency Recognition System based on Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템의 설계 및 구현)

  • Kim, Eoung-Un;Kang, Sun-Kyung;So, In-Mi;Kwon, Tae-Kyu;Lee, Sang-Seol;Lee, Yong-Ju;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.14 no.2 / pp.181-190 / 2009
  • This paper presents a multimodal emergency recognition system based on visual, audio, and gravity sensor information. It consists of a video processing module, an audio processing module, a gravity sensor processing module, and a multimodal integration module. The video processing module and the gravity sensor processing module each detect actions such as moving, stopping, and fainting and transfer them to the multimodal integration module. The integration module detects an emergency by fusing the transferred information and verifies it by asking a question and recognizing the answer over the audio channel. The experimental results show a recognition rate of 91.5% for the video processing module alone and 94% for the gravity sensor processing module alone, rising to 100% when both sources are combined.
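The jump from 91.5% and 94% for the individual modules to 100% when combined suggests a fusion rule in which either detector can raise a candidate alarm that the audio channel then verifies. The sketch below assumes such an OR-then-verify rule; the function signature and the answer keyword are hypothetical, not taken from the paper.

```python
# Sketch of an OR-then-verify fusion rule (assumed, not the paper's exact logic).
from typing import Optional

def fuse(video_says_faint: bool,
         gravity_says_faint: bool,
         spoken_answer: Optional[str]) -> str:
    """OR-fuse the two detectors, then verify over the audio channel.

    spoken_answer is the recognized reply to a spoken question, or None when
    the user gives no intelligible answer (hypothetical audio-module output).
    """
    candidate = video_says_faint or gravity_says_faint   # modules compensate for each other
    if not candidate:
        return "normal"
    if spoken_answer and "fine" in spoken_answer.lower():
        return "normal"                                   # user confirmed they are all right
    return "emergency"

if __name__ == "__main__":
    # The gravity sensor misses the event (e.g. not worn), the camera catches it,
    # and no spoken answer is heard.
    print(fuse(video_says_faint=True, gravity_says_faint=False, spoken_answer=None))
```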