• Title/Summary/Keyword: Action Direction Recognition


Human Behavior Recognition based on Gaze Direction In Office Environment (실내 환경에서 시선 방향을 고려한 사람 행동 인식)

  • Kong, Byung-Yong;Jung, Do-Joon;Kim, Hang-Joon
    • Proceedings of the KIEE Conference / 2007.10a / pp.119-120 / 2007
  • In this paper, we propose a system for recognizing human actions from a video stream acquired by a single fixed color camera in an indoor environment. The proposed system recognizes actions with rule-based reasoning over a person's spatio-temporal state changes and gaze direction. A meaningful state change of the person is defined as an event, and a sequence of events, i.e. a human action, is defined as a scenario. The system therefore detects events from the person's state changes in the input video stream and recognizes the action from the sequence of detected events. The person's gaze is found by a gaze-direction estimation method that uses color information of the face and head regions, and state changes are detected using cues such as the person's position and height. The system was tested on video acquired in an indoor environment, and the experimental results show that it could distinguish and recognize different actions according to gaze direction.

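
The event-and-scenario idea in the abstract above can be sketched in a few lines: detect events as meaningful state changes, then match the event stream against scenarios defined as event sequences. The event names, states, and scenarios below are invented for illustration; the paper's actual rules are not given here.

```python
# A scenario is defined as an ordered sequence of events (hypothetical names).
SCENARIOS = {
    "read_book": ["sit_down", "gaze_down"],
    "watch_tv":  ["sit_down", "gaze_forward"],
}

def detect_events(states):
    """Turn a sequence of person states into events (meaningful state changes)."""
    events = []
    for prev, cur in zip(states, states[1:]):
        if prev["posture"] == "standing" and cur["posture"] == "sitting":
            events.append("sit_down")
        if prev["gaze"] != cur["gaze"]:
            events.append("gaze_" + cur["gaze"])
    return events

def recognize(states):
    events = detect_events(states)
    for name, pattern in SCENARIOS.items():
        # A scenario matches if its events occur in order within the event stream.
        it = iter(events)
        if all(e in it for e in pattern):
            return name
    return "unknown"

states = [
    {"posture": "standing", "gaze": "forward"},
    {"posture": "sitting",  "gaze": "forward"},
    {"posture": "sitting",  "gaze": "down"},
]
print(recognize(states))  # prints "read_book"
```

Note how the gaze event is what separates the two scenarios, mirroring the paper's claim that gaze direction disambiguates otherwise identical actions.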

Recognizing the Direction of Action using Generalized 4D Features (일반화된 4차원 특징을 이용한 행동 방향 인식)

  • Kim, Sun-Jung;Kim, Soo-Wan;Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.518-528 / 2014
  • In this paper, we propose a method to recognize the action direction of a human by developing 4D space-time (4D-ST, [x,y,z,t]) features. For this, we propose 4D space-time interest points (4D-STIPs, [x,y,z,t]), which are extracted using 3D space (3D-S, [x,y,z]) volumes reconstructed from images of a finite number of different views. Since the proposed features are constructed using volumetric information, the features for an arbitrary 2D space (2D-S, [x,y]) viewpoint can be generated by projecting the 3D-S volumes and 4D-STIPs onto the corresponding image planes in the training step. We can recognize the directions of actors in the test video since our training sets, which are projections of 3D-S volumes and 4D-STIPs onto various image planes, contain the direction information. The process for recognizing action direction is divided into two steps: first we recognize the class of the action, and then we recognize its direction using the direction information. For action and action-direction recognition, from the projected 3D-S volumes and 4D-STIPs we construct motion history images (MHIs) and non-motion history images (NMHIs), which encode the moving and non-moving parts of an action respectively. For action recognition, features are trained by support vector data description (SVDD) according to the action class and recognized by support vector domain density description (SVDDD). For action-direction recognition after recognizing the action, each action is trained using SVDD according to the direction class and then recognized by SVDDD. In experiments, we train the models using 3D-S volumes from the INRIA Xmas Motion Acquisition Sequences (IXMAS) dataset and recognize action direction on a new SNU dataset built for evaluating action direction recognition.
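
The motion history image (MHI) used above has a simple recurrence: pixels that move are set to a maximum duration tau, and all other pixels decay toward zero, so recent motion appears brightest. A minimal sketch, with the NMHI shown as a plain complement (an assumption; the paper's exact construction may differ):

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=32):
    # Moving pixels are reset to tau; all others decay by delta, floored at 0.
    return np.where(motion_mask, tau, np.maximum(mhi - delta, 0))

# Three toy motion masks: motion at (0,0) in the oldest frame, (2,2) in the newest.
frames = np.zeros((3, 4, 4), dtype=bool)
frames[0, 0, 0] = True
frames[2, 2, 2] = True

mhi = np.zeros((4, 4), dtype=np.int32)
for mask in frames:
    mhi = update_mhi(mhi, mask)

nmhi = 255 - mhi  # complement: bright where nothing moved recently
print(mhi[0, 0], mhi[2, 2])  # prints "191 255": older motion has decayed more
```

The decay is what encodes temporal ordering into a single 2D template, which is why MHIs can be fed to static descriptors downstream.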

Human Activity Recognition using Model-based Gaze Direction Estimation (모델 기반의 시선 방향 추정을 이용한 사람 행동 인식)

  • Jung, Do-Joon;Yoon, Jeong-Oh
    • Journal of Korea Society of Industrial Information Systems / v.16 no.4 / pp.9-18 / 2011
  • In this paper, we propose a method that recognizes human activity using model-based gaze direction estimation in an indoor environment. The method consists of two steps. First, we detect the head region and estimate its gaze direction as prior information for human activity recognition. We use color and shape information to detect the head region, and a Bayesian network model representing the relationship between the head and the face to estimate gaze direction. Second, we recognize the events and scenarios describing the human activity. We use changes of human state for event recognition, and a rule-based method combining events with some constraints for scenario recognition. We define 4 types of scenarios related to gaze direction. We show the performance of the gaze direction estimation and human activity recognition with experimental results.
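
A toy version of the Bayesian head-and-face relationship above: treat face visibility and head turn as observations and compute a posterior over gaze directions. All states, observations, and probabilities here are invented for illustration; the paper's actual network is richer than this.

```python
DIRECTIONS = ["left", "front", "right"]
PRIOR = {"left": 1 / 3, "front": 1 / 3, "right": 1 / 3}

# P(face visible | gaze): a frontal face is most visible when gazing forward.
P_FACE = {"left": 0.3, "front": 0.9, "right": 0.3}
# P(head turned | gaze): a turned head suggests a sideways gaze.
P_HEAD = {"left": 0.8, "front": 0.1, "right": 0.8}

def posterior(face_visible, head_turned):
    scores = {}
    for d in DIRECTIONS:
        pf = P_FACE[d] if face_visible else 1 - P_FACE[d]
        ph = P_HEAD[d] if head_turned else 1 - P_HEAD[d]
        scores[d] = PRIOR[d] * pf * ph
    z = sum(scores.values())  # normalize so the posterior sums to 1
    return {d: s / z for d, s in scores.items()}

post = posterior(face_visible=True, head_turned=False)
print(max(post, key=post.get))  # prints "front"
```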

A Falling Direction Detection Method Using Smartphone Accelerometer and Deep Learning Multiple Layers (스마트폰 가속도 센서와 딥러닝 다중 레이어를 이용한 넘어짐 방향 판단 방법)

  • Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.8 / pp.1165-1171 / 2022
  • Human behavior recognition using an accelerometer has been applied to various fields. As smartphones have come into common use, methods for human behavior recognition using the acceleration sensor built into the smartphone are being studied. For the elderly, falling often leads to serious injuries, and falls are also one of the major causes of accidents at construction sites. In this article, we propose a method for recognizing the direction of a human fall using the acceleration and orientation sensors built into a smartphone. In the past, using the magnitude of the acceleration vector was the common way to recognize human behavior; these days, deep learning is being actively studied and applied to various areas, so we propose a method for recognizing the direction of a fall by applying a deep learning multilayer technique.
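
The classic magnitude-based baseline that the article contrasts with its deep learning approach can be sketched directly: threshold the acceleration magnitude to detect an impact, then use the dominant horizontal axis to guess the direction. The threshold and axis conventions below are illustrative assumptions, not the paper's values.

```python
import math

FALL_THRESHOLD = 25.0  # m/s^2; well above 1 g, an assumed impact threshold

def magnitude(ax, ay, az):
    return math.sqrt(ax * ax + ay * ay + az * az)

def falling_direction(ax, ay, az):
    if magnitude(ax, ay, az) < FALL_THRESHOLD:
        return "no_fall"
    # Sign and dominance of the horizontal axes indicate the direction.
    if abs(ax) >= abs(ay):
        return "right" if ax > 0 else "left"
    return "forward" if ay > 0 else "backward"

print(falling_direction(1.0, 2.0, 9.8))    # prints "no_fall" (roughly at rest)
print(falling_direction(-30.0, 5.0, 3.0))  # prints "left"
```

A deep multilayer model replaces the hand-set threshold and axis rules with weights learned from labeled fall recordings, which is the shift the article describes.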

Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model (모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템)

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.471-476 / 2016
  • In this paper, we describe a depth-based recognition system for continuous human actions that uses motion history images and histograms of oriented gradients together with a spotter model; the spotter model, which performs action spotting, is proposed to improve the system's recognition performance. The system consists of pre-processing, human action and spotter modeling, and continuous human action recognition. In pre-processing, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and the human action and spotter modeling step generates sequences from the extracted features. Human action models appropriate to each defined action, along with the proposed spotter model, are created from these sequences using hidden Markov models. Continuous human action recognition performs action spotting with the spotter model to separate meaningful actions from meaningless motion in a continuous action sequence, and then continuously recognizes actions by comparing the models' probability values over the meaningful action sequences. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system.
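
The HOG part of the Depth-MHI-HOG feature above reduces to a small computation: take image gradients, quantize their orientations into bins, and accumulate gradient magnitudes per bin. A single-cell, unnormalized sketch (a real HOG adds cells, blocks, and normalization):

```python
import numpy as np

def hog_histogram(img, bins=8):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # unsigned orientation in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())  # accumulate magnitude per bin
    return hist

img = np.tile(np.arange(8), (8, 1))  # horizontal ramp: purely horizontal gradient
h = hog_histogram(img)
print(np.argmax(h))  # prints "0": the bin for gradients along the x axis
```

Applied to an MHI instead of a raw image, the same histogram describes the orientation of the motion trails, which is what the system's templates encode.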

Classification and Recognition of Movement Behavior of Animal based on Decision Tree (의사결정나무를 이용한 생물의 행동 패턴 구분과 인식)

  • Lee, Seng-Tai;Kim, Sung-Shin
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.11a / pp.225-228 / 2005
  • In this paper, four features were extracted from 2D images of an organism, and a decision tree was applied to recognize and classify its behavioral response patterns to a chemical treatment. From the physical characteristics representing the behavior pattern, namely speed, turning angle, and movement distance, features were extracted using the ratio of above-median speed, FFT (Fast Fourier Transformation), 2D histogram area, fractal measures, and center of gravity. A decision tree model was then constructed with these four feature variables to analyze the organism's response to the added chemical. The results also show that the decision tree applied in this study offers better interpretability of the physical factors of the behavior patterns than the conventional methods previously used for classifying behavior, making it easier to analyze movement behavior in specific environments.
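
The physical characteristics named above (speed and turning angle) come straight from the 2D trajectory. A minimal sketch of extracting them from a point track, with the frame interval and units as illustrative assumptions; in the paper these raw series are further summarized (median-speed ratio, FFT, etc.) before reaching the decision tree.

```python
import math

def speeds(points, dt=1.0):
    # Distance covered between consecutive frames divided by the frame interval.
    return [math.dist(a, b) / dt for a, b in zip(points, points[1:])]

def turning_angles(points):
    angles = []
    for p0, p1, p2 in zip(points, points[1:], points[2:]):
        h1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        h2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
        # Wrap the heading change into (-pi, pi].
        angles.append((h2 - h1 + math.pi) % (2 * math.pi) - math.pi)
    return angles

track = [(0, 0), (1, 0), (2, 0), (2, 1)]  # straight, then a 90-degree left turn
print(speeds(track))          # prints "[1.0, 1.0, 1.0]"
print(turning_angles(track))  # second angle is pi/2
```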


Driver Assistance System By the Image Based Behavior Pattern Recognition (영상기반 행동패턴 인식에 의한 운전자 보조시스템)

  • Kim, Sangwon;Kim, Jungkyu
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.12 / pp.123-129 / 2014
  • With the development of various convergence devices, cameras are being used in many kinds of systems, such as security systems and driver assistance devices, and many people are exposed to these systems. Such a system should therefore be able to recognize human behavior and provide useful functions based on the information obtained from the detected behavior. In this paper we use a machine learning approach based on 2D images and propose human behavior pattern recognition methods that can provide valuable information to support useful functions for the user. The first proposed method is "phone call behavior" recognition: if a black-box camera focused on the driver in a car recognizes a phone-call pose, it can warn the driver for safe driving. The second is "looking ahead" recognition for driving safety, for which we propose a decision rule and method to decide whether the driver is looking ahead or not. This paper also shows the usefulness of the proposed recognition methods with real-time experimental results.
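
The shape of the two decision rules above can be sketched as a small dispatcher: a phone-call pose triggers one warning, and a head yaw outside a tolerance triggers another. The yaw threshold and warning strings are invented; the paper derives its actual rule from image features.

```python
YAW_TOLERANCE_DEG = 15.0  # assumed tolerance for "looking ahead"

def looking_ahead(head_yaw_deg):
    return abs(head_yaw_deg) <= YAW_TOLERANCE_DEG

def assist(head_yaw_deg, phone_pose_detected):
    if phone_pose_detected:
        return "warn: phone call while driving"
    if not looking_ahead(head_yaw_deg):
        return "warn: eyes off the road"
    return "ok"

print(assist(5.0, False))   # prints "ok"
print(assist(40.0, False))  # prints "warn: eyes off the road"
print(assist(3.0, True))    # prints "warn: phone call while driving"
```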

Human Activity Pattern Recognition Using Motion Information and Joints of Human Body (인체의 조인트와 움직임 정보를 이용한 인간의 행동패턴 인식)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.6 / pp.1179-1186 / 2012
  • In this paper, we propose an algorithm that recognizes human activity patterns using the joints of the human body and their motion information. The proposed method extracts the object from the input video, automatically extracts joints using the proportions of the human body, applies a block-matching algorithm to each joint, and obtains the motion information of the joints. The method uses the moving joints, the directional vectors of the joints' motions, and the signs representing the increase or decrease of the joints' x and y coordinates as basic parameters for activity recognition. The proposed method was tested on 8 human activities in video input from a web camera and achieved a good recognition rate for the human activities.
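
The block-matching step above can be sketched as a sum-of-absolute-differences (SAD) search: take a block around a joint in frame t and find the displacement in frame t+1 that minimizes the SAD. Block and search sizes below are illustrative.

```python
import numpy as np

def block_match(prev, cur, y, x, block=3, search=2):
    h = block // 2
    ref = prev[y - h:y + h + 1, x - h:x + h + 1].astype(int)
    best, best_dxy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = cur[cy - h:cy + h + 1, cx - h:cx + h + 1].astype(int)
            if cand.shape != ref.shape:
                continue  # candidate block falls outside the frame
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_dxy = sad, (dy, dx)
    return best_dxy  # motion vector (dy, dx) of the joint

prev = np.zeros((10, 10), dtype=np.uint8)
cur = np.zeros((10, 10), dtype=np.uint8)
prev[4:7, 4:7] = 255   # bright block around a "joint" at (5, 5)
cur[5:8, 5:8] = 255    # same block shifted by (+1, +1)
print(block_match(prev, cur, 5, 5))  # prints "(1, 1)"
```

The sign of each component of the returned vector corresponds to the increase/decrease signs of the joint coordinates that the method uses as parameters.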

Human Activity Recognition using an Image Sensor and a 3-axis Accelerometer Sensor (이미지 센서와 3축 가속도 센서를 이용한 인간 행동 인식)

  • Nam, Yun-Young;Choi, Yoo-Joo;Cho, We-Duke
    • Journal of Internet Computing and Services / v.11 no.1 / pp.129-141 / 2010
  • In this paper, we present a wearable intelligent device based on multiple sensors for monitoring human activity. In order to recognize multiple activities, we developed activity recognition algorithms utilizing an image sensor and a 3-axis accelerometer sensor. We proposed a grid-based optical flow method and used an SVM classifier to analyze the data acquired from the sensors. From the image sensor we used the direction and magnitude of the extracted motion vectors; from the 3-axis accelerometer sensor we computed the correlation between axes and the magnitude of the FFT. In the experimental results, we showed that the accuracy of activity recognition based on the image sensor only, the 3-axis accelerometer sensor only, and the proposed multi-sensor method was 55.57%, 89.97%, and 89.97% respectively.
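
The accelerometer features named above (correlation between axes and FFT magnitude) can be sketched over a short window. The synthetic signals below stand in for real sensor data; window length and sampling are illustrative assumptions.

```python
import numpy as np

def axis_features(ax, ay, az):
    corr_xy = np.corrcoef(ax, ay)[0, 1]
    corr_xz = np.corrcoef(ax, az)[0, 1]
    corr_yz = np.corrcoef(ay, az)[0, 1]
    # FFT magnitude spectrum per axis, with the DC component removed.
    fft_mags = [np.abs(np.fft.rfft(s - s.mean())) for s in (ax, ay, az)]
    return (corr_xy, corr_xz, corr_yz), fft_mags

t = np.linspace(0, 1, 64, endpoint=False)         # 1-second window, 64 Hz
ax = np.sin(2 * np.pi * 2 * t)                    # 2 Hz oscillation (e.g. walking)
ay = np.sin(2 * np.pi * 2 * t)                    # perfectly correlated with ax
az = 9.8 + 0.1 * np.sin(2 * np.pi * 5 * t)        # mostly gravity

corrs, mags = axis_features(ax, ay, az)
print(round(corrs[0], 3))     # prints "1.0": x and y move together
print(np.argmax(mags[0]))     # prints "2": spectral peak at the 2 Hz bin
```

Such correlation and spectral-peak values are the kind of fixed-length feature vector an SVM classifier can consume.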

Implementation of a Human Body Motion Pattern Classifier using Extensions of Primitive Pattern Sequences (프리미티브 패턴 나열의 확장에 의한 사람 몸 동작 패턴 분류기의 구현)

  • 조경은;조형제
    • Proceedings of the Korea Multimedia Society Conference / 2000.11a / pp.475-478 / 2000
  • As the need to recognize human body motions grows in many application areas, research in this field has become active. This paper concerns the development of a recognizer that can automatically analyze human non-verbal behavior, and introduces a method of implementing a human body motion pattern classifier that takes real-world 3D coordinates as input. Since a single body motion can be defined as a combination of the movements of the body components (hand, forearm, upper arm, shoulder, head, torso, etc.), the method recognizes the movement of each component individually and combines them to identify an arbitrary motion. The classifier quantizes the measured real-world 3D coordinate data and projects it onto the xy and zy planes. Each projection is converted into an 8-direction chain code, two stages of chain-code smoothing are applied, followed by reduction to 4-direction codes and compression into representative codes. The resulting primitive pattern sequences are grouped by motion class, and an identifier is built for each class as an extension of the primitive pattern sequences to classify the motions of each body component. A series of experiments confirmed the validity of the approach, and this classifier will later serve as a preprocessing step for a human body motion recognizer for non-verbal behavior analysis.

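
The 8-direction chain code and run compression described above can be sketched directly: each step between consecutive quantized points maps to one of 8 compass directions (here 0 = east, counting counterclockwise, an assumed convention), and runs of identical codes collapse to a representative code.

```python
# Unit steps mapped to 8-direction chain codes (0 = east, counterclockwise).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        step = (x1 - x0, y1 - y0)
        codes.append(DIRS[step])  # assumes 8-connected unit steps
    return codes

def compress(codes):
    """Collapse runs of identical codes into representative codes."""
    out = []
    for c in codes:
        if not out or out[-1] != c:
            out.append(c)
    return out

pts = [(0, 0), (1, 0), (2, 0), (3, 1), (3, 2)]
print(chain_code(pts))            # prints "[0, 0, 1, 2]"
print(compress(chain_code(pts)))  # prints "[0, 1, 2]"
```

Applying this to each projected body-component trajectory yields the primitive pattern sequences that the paper's per-class identifiers are built from.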