• Title/Summary/Keyword: Human Action Recognition (인간 행동 인식)


Human Activity Pattern Recognition Using Motion Information and Joints of Human Body (인체의 조인트와 움직임 정보를 이용한 인간의 행동패턴 인식)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.6
    • /
    • pp.1179-1186
    • /
    • 2012
  • In this paper, we propose an algorithm that recognizes human activity patterns using the joints of the human body and their motion information. The proposed method extracts the object from the input video, automatically extracts joints using the proportions of the human body, and applies a block-matching algorithm to each joint to obtain its motion information. The method uses the moving joints, the directional vectors of the joint motions, and the signs representing the increase or decrease of the x and y coordinates of the joints as the basic parameters for activity recognition. The method was tested on 8 human activities in video captured from a web camera and achieved a good recognition rate.
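
The per-joint pipeline this abstract describes (block-match each joint between frames, then derive the motion vector and the signs of its x/y changes) can be sketched as below. The block size, search range, and grayscale frame layout are illustrative assumptions, not the authors' parameters:

```python
# Sketch of per-joint block matching: for a joint at (jx, jy) in the
# previous frame, search a small window in the current frame for the
# block with minimum sum of absolute differences (SAD), and report the
# motion vector plus the signs of its coordinate changes.

def block_match_joint(prev, curr, jx, jy, block=2, search=2):
    """Return (dx, dy) motion of the block centred on joint (jx, jy)."""
    def sad(ox, oy):
        total = 0
        for y in range(-block, block + 1):
            for x in range(-block, block + 1):
                total += abs(prev[jy + y][jx + x] - curr[jy + y + oy][jx + x + ox])
        return total

    best, best_cost = (0, 0), sad(0, 0)
    for oy in range(-search, search + 1):
        for ox in range(-search, search + 1):
            cost = sad(ox, oy)
            if cost < best_cost:
                best_cost, best = cost, (ox, oy)
    return best

def motion_signs(dx, dy):
    """Signs of the x/y changes, one of the basic parameters in the paper."""
    sign = lambda v: (v > 0) - (v < 0)
    return sign(dx), sign(dy)
```

A bright patch shifted one pixel to the right between two frames yields the motion vector (1, 0) and signs (1, 0).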

Deep learning-based Human Action Recognition Technique Considering the Spatio-Temporal Relationship of Joints (관절의 시·공간적 관계를 고려한 딥러닝 기반의 행동인식 기법)

  • Choi, Inkyu;Song, Hyok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.413-415
    • /
    • 2022
  • Since human joints can be used as useful information for analyzing human behavior as a component of the human body, many studies have been conducted on human action recognition using joint information. However, it is a very complex problem to recognize human action that changes every moment using only each independent joint information. Therefore, an additional information extraction method to be used for learning and an algorithm that considers the current state based on the past state are needed. In this paper, we propose a human action recognition technique considering the positional relationship of connected joints and the change of the position of each joint over time. Using the pre-trained joint extraction model, position information of each joint is obtained, and bone information is extracted using the difference vector between the connected joints. In addition, a simplified neural network is constructed according to the two types of inputs, and spatio-temporal features are extracted by adding LSTM. As a result of the experiment using a dataset consisting of 9 behaviors, it was confirmed that when the action recognition accuracy was measured considering the temporal and spatial relationship features of each joint, it showed superior performance compared to the result using only single joint information.
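
The bone-information step in this abstract (difference vectors between connected joints) can be sketched as follows; the 4-joint chain topology below is an illustrative assumption, not the paper's joint set:

```python
# Bone features as difference vectors between connected joints:
# for each (parent, child) pair, the bone is child - parent.

BONES = [(0, 1), (1, 2), (2, 3)]  # hypothetical parent -> child index pairs

def bone_vectors(joints, bones=BONES):
    """Return the difference vector (child - parent) for each bone."""
    return [tuple(c - p for p, c in zip(joints[a], joints[b]))
            for a, b in bones]
```

These bone vectors form the second input stream the abstract mentions, alongside the raw joint positions.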

A Falling Direction Detection Method Using Smartphone Accelerometer and Deep Learning Multiple Layers (스마트폰 가속도 센서와 딥러닝 다중 레이어를 이용한 넘어짐 방향 판단 방법)

  • Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.8
    • /
    • pp.1165-1171
    • /
    • 2022
  • Human behavior recognition using an accelerometer has been applied to various fields. As smartphones have become commonplace, methods for human behavior recognition using the acceleration sensor built into the smartphone are being studied. For the elderly, falling often leads to serious injuries, and falls are one of the major causes of accidents at construction sites. In this article, we propose a method for recognizing the direction of a human fall using the acceleration and orientation sensors built into a smartphone. In the past, it was common to use the magnitude of the acceleration vector to recognize human behavior; more recently, deep learning has been actively studied and applied to various areas. We therefore recognize the falling direction by applying a deep learning multilayer technique.
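
The classic magnitude-based cue the abstract contrasts with its deep-learning approach can be sketched as a simple rule: trigger on the norm of the 3-axis acceleration vector and guess the direction from the dominant horizontal axis. The threshold value and axis convention are assumptions for illustration only:

```python
import math

# Rule-of-thumb fall cue: large acceleration magnitude signals a fall;
# the sign of the dominant horizontal axis gives a crude direction.

def accel_magnitude(ax, ay, az):
    """Euclidean norm of the 3-axis acceleration vector."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(ax, ay, az, threshold=2.5 * 9.8):
    """Return a direction label if magnitude exceeds the threshold, else None."""
    if accel_magnitude(ax, ay, az) < threshold:
        return None
    if abs(ax) >= abs(ay):
        return "right" if ax > 0 else "left"
    return "forward" if ay > 0 else "backward"
```

The paper replaces this hand-written rule with a trained multilayer network, which is what lets it generalize beyond a fixed threshold.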

STAGCN-based Human Action Recognition System for Immersive Large-Scale Signage Content (몰입형 대형 사이니지 콘텐츠를 위한 STAGCN 기반 인간 행동 인식 시스템)

  • Jeongho Kim;Byungsun Hwang;Jinwook Kim;Joonho Seon;Young Ghyu Sun;Jin Young Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.6
    • /
    • pp.89-95
    • /
    • 2023
  • In recent decades, human action recognition (HAR) has demonstrated potential applications in sports analysis, human-robot interaction, and large-scale signage content. In this paper, a spatial-temporal attention graph convolutional network (STAGCN)-based HAR system is proposed. Spatio-temporal features of skeleton sequences are assigned different weights by STAGCN, enabling the consideration of key joints and viewpoints. Simulation results show that the proposed model improves classification accuracy on the NTU RGB+D dataset.
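
The weighting idea in this abstract (key joints receive larger weights than others) can be illustrated with a generic softmax attention pool over per-joint features. This is a toy stand-in, not the authors' STAGCN layer:

```python
import math

# Generic attention pooling: per-joint scores are normalized with a
# softmax, and the per-joint feature vectors are summed with those
# weights, so high-scoring (key) joints dominate the pooled feature.

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention_pool(features, scores):
    """Weight per-joint feature vectors by softmax(scores) and sum them."""
    w = softmax(scores)
    dim = len(features[0])
    return [sum(w[j] * features[j][d] for j in range(len(features)))
            for d in range(dim)]
```

With equal scores every joint contributes equally; raising one joint's score shifts the pooled feature toward that joint.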

Driver Assistance System By the Image Based Behavior Pattern Recognition (영상기반 행동패턴 인식에 의한 운전자 보조시스템)

  • Kim, Sangwon;Kim, Jungkyu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.12
    • /
    • pp.123-129
    • /
    • 2014
  • With the development of various convergence devices, cameras are being used in many types of systems, such as security systems and driver assistance devices, and many people are exposed to these systems. Therefore, such a system should be able to recognize human behavior and support useful functions with the information obtained from the detected behavior. In this paper, we use a machine learning approach based on 2D images and propose human behavior pattern recognition methods. The proposed methods can provide valuable information to support useful functions based on the recognized behavior. The first is "phone call behavior" recognition: if the black-box camera focused on the driver recognizes a phone-call pose, it can warn the driver for safe driving. The second is "looking ahead" recognition for driving safety, for which we propose a decision rule and a method to decide whether the driver is looking ahead or not. The paper also demonstrates the usefulness of the proposed methods with real-time experimental results.
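
A decision rule of the kind the abstract proposes for "looking ahead" can be sketched as a threshold on an estimated head yaw angle. The angle source and threshold value here are assumptions, not the paper's rule:

```python
# Toy "looking ahead" decision rule: the driver counts as looking ahead
# when the estimated face yaw stays inside a forward cone.

def is_looking_ahead(yaw_deg, threshold=20.0):
    """True when the driver's face yaw (degrees) is within the forward cone."""
    return abs(yaw_deg) <= threshold
```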

Tempo-oriented music recommendation system based on human activity recognition using accelerometer and gyroscope data (가속도계와 자이로스코프 데이터를 사용한 인간 행동 인식 기반의 템포 지향 음악 추천 시스템)

  • Shin, Seung-Su;Lee, Gi Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.4
    • /
    • pp.286-291
    • /
    • 2020
  • In this paper, we propose a system that recommends music through tempo-oriented music classification and sensor-based human activity recognition. The proposed method indexes music files using tempo-oriented music classification and recommends suitable music according to the recognized user's activity. For accurate music classification, a dynamic classification based on a modulation spectrum and a sequence classification based on a Mel-spectrogram are used in combination. In addition, simple accelerometer and gyroscope sensor data of the smartphone are applied to deep spiking neural networks to improve activity recognition performance. Finally, music recommendation is performed through a mapping table considering the relationship between the recognized activity and the indexed music file. The experimental results show that the proposed system is suitable for use in any practical mobile device with a music player.
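
The mapping-table step in this abstract (recognized activity → tempo class → indexed music) can be sketched as a pair of lookups. The activity names and tempo classes below are illustrative assumptions, not the paper's actual table:

```python
# Tempo-oriented recommendation via a mapping table: the recognized
# activity selects a tempo class, and tracks indexed under that class
# are recommended.

ACTIVITY_TO_TEMPO = {  # hypothetical mapping table
    "resting": "slow",
    "walking": "medium",
    "running": "fast",
}

def recommend(activity, music_index):
    """Return the tracks indexed under the tempo class for this activity."""
    tempo = ACTIVITY_TO_TEMPO.get(activity)
    return music_index.get(tempo, [])
```

An unmapped activity or an empty tempo bin simply yields no recommendations, which keeps the table easy to extend.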

Activity Recognition based on Multi-modal Sensors using Dynamic Bayesian Networks (동적 베이지안 네트워크를 이용한 멀티모달 센서 기반 사용자 행동인식)

  • Yang, Sung-Ihk;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.1
    • /
    • pp.72-76
    • /
    • 2009
  • Recently, as interest in ubiquitous computing has increased, there has been a lot of research on recognizing human activities to provide services in this environment. In particular, in mobile environments, unlike conventional vision-based recognition research, much of the work is sensor-based. In this paper, we propose to recognize the user's activity from multi-modal sensors using hierarchical dynamic Bayesian networks. The dynamic Bayesian networks are trained with the OVR (One-Versus-Rest) strategy. The inference step reduces computation cost by first selecting the activities with the highest scores from a simpler Bayesian network. In the experiment, we used an accelerometer and a physiological sensor to recognize eight kinds of activities, and achieved 97.4% accuracy in recognizing the user's activity.
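
The One-Versus-Rest inference described here can be sketched generically: one model per activity scores "this activity vs. the rest", and the highest-scoring activity wins. The scoring functions below are toy stand-ins for the paper's dynamic Bayesian networks:

```python
# OVR inference: each activity has its own scorer trained to separate
# that activity from all others; prediction picks the best scorer.

def ovr_predict(scorers, observation):
    """scorers: dict mapping activity name -> callable(observation) -> score."""
    return max(scorers, key=lambda activity: scorers[activity](observation))
```

With real models, each scorer would be the likelihood from that activity's DBN; the cascade in the paper additionally prunes with a cheaper network first.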

Control of Ubiquitous Environment using Sensors Module (센서모듈을 이용한 유비쿼터스 환경의 제어)

  • Jung, Tae-Min;Choi, Woo-Kyung;Kim, Seong-Joo;Jeon, Hong-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.2
    • /
    • pp.190-195
    • /
    • 2007
  • As the ubiquitous era arrives, it has become necessary to construct environments that can provide more useful information to people in the spaces where they live, such as homes or offices. For this, networks of peripheral ubiquitous devices should be organized efficiently. To that end, this paper studies human patterns by classifying motion recognition using sensor-module data, processed with neural networks and a fuzzy algorithm. This pattern classification can help control home-network communication. We suggest a system that can control a home network more easily through patterned movement, and control the ubiquitous environment by grasping a person's movement and condition.

Preprocessing Methods for Action Recognition Model in 360-degree ERP Video (360 도 ERP 영상에서 행동 인식 모델 성능 향상을 위한 전처리 기법)

  • Park, Eun-Soo;Ryu, Jaesung;Kim, Seunghwan;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.11a
    • /
    • pp.252-255
    • /
    • 2019
  • In this paper, we show that performance improves when Equirectangular Projection (ERP) video is preprocessed with the proposed technique before being fed into an action recognition model. Due to the nature of ERP video, it contains more regions irrelevant to action recognition than video captured by a conventional 2D camera. Moreover, in action recognition the human is the Object of Interest (OOI). Therefore, after detecting the human object with an object detection model, a Region of Interest (ROI) is extracted, which removes the unnecessary regions and also reduces distortion. We tested performance with a CNN-LSTM model after applying the proposed preprocessing. Comparing action recognition accuracy on data with and without the proposed preprocessing, the preprocessed data improved performance by up to 61%, depending on the characteristics of the data.
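
The ROI-cropping step this abstract describes (detect the person, then crop the ERP frame to that box) can be sketched as below. The detector itself is out of scope here, and the box-clamping behavior is an assumption:

```python
# Crop an ERP frame to a detected person's bounding box so the action
# model sees less irrelevant (and less distorted) area. The box is
# clamped to the frame so out-of-range detections do not raise.

def crop_roi(frame, box):
    """frame: 2-D list of pixel rows; box: (x, y, w, h) from a person detector."""
    x, y, w, h = box
    h_img, w_img = len(frame), len(frame[0])
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(w_img, x + w), min(h_img, y + h)
    return [row[x0:x1] for row in frame[y0:y1]]
```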

Deep Learning-based Action Recognition using Skeleton Joints Mapping (스켈레톤 조인트 매핑을 이용한 딥 러닝 기반 행동 인식)

  • Tasnim, Nusrat;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.24 no.2
    • /
    • pp.155-162
    • /
    • 2020
  • Recently, with the development of computer vision and deep learning technology, research on human action recognition has been actively conducted for video analysis, video surveillance, interactive multimedia, and human-machine interaction applications. Diverse techniques have been introduced by many researchers for human action understanding and classification using RGB images, depth images, skeletons, and inertial data. However, skeleton-based action discrimination is still a challenging research topic for human-machine interaction. In this paper, we propose an end-to-end skeleton joints mapping of actions to generate a spatio-temporal image, a so-called dynamic image. Then, an efficient deep convolutional neural network is devised to perform classification among the action classes. We use the publicly accessible UTD-MHAD skeleton dataset to evaluate the performance of the proposed method. Experimental results show that the proposed system outperforms existing methods with a high accuracy of 97.45%.
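
The joints-to-image mapping this abstract builds on can be sketched as flattening a skeleton sequence into a 2-D array, one frame per row, with coordinates normalized to the 0..255 pixel range. The layout convention (rows = frames, columns = joint coordinates) is an assumption for illustration:

```python
# Map a skeleton sequence into a "dynamic image": each frame becomes one
# row, joint coordinates are min-max normalized to 0..255 so the result
# can be fed to an image CNN.

def skeleton_to_image(sequence):
    """sequence: list of frames; each frame is a list of (x, y) joints."""
    flat = [[v for joint in frame for v in joint] for frame in sequence]
    lo = min(v for row in flat for v in row)
    hi = max(v for row in flat for v in row)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[int(round((v - lo) * scale)) for v in row] for row in flat]
```

A degenerate sequence (all coordinates equal) maps to an all-zero image rather than dividing by zero.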