• Title/Abstract/Keyword: Human activity Recognition

Search results: 197 items

Real-world multimodal lifelog dataset for human behavior study

  • Chung, Seungeun;Jeong, Chi Yoon;Lim, Jeong Mook;Lim, Jiyoun;Noh, Kyoung Ju;Kim, Gague;Jeong, Hyuntae
    • ETRI Journal / Vol. 44, No. 3 / pp.426-437 / 2022
  • To understand the multilateral characteristics of human behavior and the physiological markers related to physical, emotional, and environmental states, extensive lifelog data collection in a real-world environment is essential. Here, we propose a data collection method using multimodal mobile sensing and present a long-term dataset from 22 subjects and 616 days of experimental sessions. The dataset contains over 10,000 hours of data, including physiological data such as photoplethysmography, electrodermal activity, and skin temperature, in addition to multivariate behavioral data. Furthermore, it contains 10,372 user labels of emotional states and 590 days of sleep quality data. To demonstrate feasibility, human activity recognition was applied to the sensor data using a convolutional neural network-based deep learning model, achieving 92.78% recognition accuracy. From the activity recognition results, we extracted daily behavior patterns and discovered five representative models by applying spectral clustering. This demonstrates that the dataset contributes toward understanding human behavior using multimodal data accumulated throughout daily life under natural conditions.
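As an illustration of the kind of sensor-window classifier the abstract describes, the sketch below shows a minimal 1D convolutional network over multimodal time-series windows. It is not the authors' model; the channel count, window length, and number of activity classes are assumptions chosen for the example.

```python
# Minimal sketch of a 1D-CNN activity classifier over multimodal sensor windows.
# Shapes and hyperparameters are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class SensorCNN(nn.Module):
    def __init__(self, in_channels: int = 6, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global average pooling over time
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. PPG/EDA/skin-temperature/accelerometer channels
        return self.classifier(self.features(x).squeeze(-1))

# Example: a batch of 8 ten-second windows sampled at 32 Hz from 6 sensor channels.
logits = SensorCNN()(torch.randn(8, 6, 320))
print(logits.shape)  # torch.Size([8, 5])
```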

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems / Vol. 19, No. 6 / pp.730-744 / 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also be helpful in assisting elderly and frail persons and improving their lives. Human activity recognition remains problematic because of the large variations in how actions are executed, especially when recognition is realized through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online, to assist the person, and offline, to support the personal assistant. The main purpose of this paper is to present an efficient and simple recognition method, robust against the various factors of variability in action execution, that uses only egocentric camera data with a convolutional neural network and deep learning. In terms of accuracy improvement, simulation results outperform the current state of the art by a significant margin of 61% when using egocentric camera data only, by more than 44% when using egocentric camera and several stationary cameras, and by more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan;Park, Sange-yun;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 23, No. 8 / pp.977-985 / 2020
  • Aiming at the problems of complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure that combines the batch normalization algorithm with the GoogLeNet network model. Applying the batch normalization idea from image classification to action recognition, the algorithm is improved by normalizing the network's input training samples in mini-batches. For the convolutional network, RGB images are the spatial input and stacked optical flows are the temporal input. The spatial and temporal networks are then fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly improves the recognition rate and has clear advantages in action recognition.
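The abstract describes fusing a spatial (RGB) stream with a temporal (stacked optical flow) stream. The sketch below illustrates only the late-fusion step with a weighted average of per-class scores; the stream networks themselves, the fusion weight, and the class count are placeholders, not the paper's GoogLeNet-with-batch-normalization configuration.

```python
# Sketch of two-stream late fusion: average the class scores of a spatial (RGB)
# network and a temporal (stacked optical flow) network.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_two_stream(spatial_logits: np.ndarray,
                    temporal_logits: np.ndarray,
                    w_spatial: float = 0.5) -> np.ndarray:
    """Weighted average of per-class probabilities from both streams, then argmax."""
    p = w_spatial * softmax(spatial_logits) + (1.0 - w_spatial) * softmax(temporal_logits)
    return p.argmax(axis=-1)

# Example with random scores for 4 clips and 101 classes (UCF101-sized output).
rng = np.random.default_rng(0)
pred = fuse_two_stream(rng.normal(size=(4, 101)), rng.normal(size=(4, 101)))
print(pred)
```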

Posture and activity monitoring using a 3-axis accelerometer (3축 가속도 센서를 이용한 자세 및 활동 모니터링)

  • 정도운;정완영
    • 센서학회지 / Vol. 16, No. 6 / pp.467-474 / 2007
  • Real-time monitoring of human activity provides useful information about activity quantity and ability. The present study implemented a small, low-power acceleration monitoring system for convenient monitoring of activity quantity and recognition of emergency situations such as falls during daily life. For wireless transmission of the acceleration sensor signal, we developed a transmission system based on a wireless sensor network, together with a program for storing and monitoring the wirelessly transmitted signals on a PC in real time. The performance of the implemented system was evaluated by assessing its output characteristics according to changes in posture, and parameters and a context recognition algorithm were developed to monitor activity volume during daily life and to recognize emergency situations such as falls. In particular, recognition error under sudden changes of acceleration was minimized by applying a fall correction algorithm.
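As a rough illustration of accelerometer-based fall recognition of the kind discussed above, the sketch below flags an impact spike followed by a near-stationary period. The thresholds, sampling rate, and window lengths are assumptions; the paper's fall correction algorithm is not specified in the abstract.

```python
# Generic sketch of fall detection from 3-axis acceleration: flag a fall when the
# acceleration magnitude spikes and is followed by a near-stationary period.
import numpy as np

def detect_fall(acc: np.ndarray, fs: int = 50,
                impact_g: float = 2.5, still_g: float = 1.1,
                still_window_s: float = 1.0) -> bool:
    """acc: (N, 3) acceleration in units of g."""
    mag = np.linalg.norm(acc, axis=1)
    impacts = np.where(mag > impact_g)[0]
    win = int(still_window_s * fs)
    for i in impacts:
        after = mag[i + fs : i + fs + win]          # skip ~1 s after the impact
        if len(after) == win and after.mean() < still_g:
            return True                              # impact followed by stillness
    return False

# Example: simulated standing (1 g), an impact spike, then lying still.
acc = np.tile([0.0, 0.0, 1.0], (300, 1))
acc[100] = [0.0, 0.0, 3.5]
print(detect_fall(acc))  # True
```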

Three-dimensional human activity recognition by forming a movement polygon using posture skeletal data from depth sensor

  • Vishwakarma, Dinesh Kumar;Jain, Konark
    • ETRI Journal / Vol. 44, No. 2 / pp.286-299 / 2022
  • Human activity recognition in real time is a challenging task. Recently, a plethora of studies using deep learning architectures has been proposed. The implementation of these architectures requires high computing power and massive databases. In contrast, machine learning models based on handcrafted features need less computing power and are very accurate when the features are effectively extracted. In this study, we propose a handcrafted model based on three-dimensional sequential skeleton data. The movement of the human body skeleton over a frame is computed through the joint positions in that frame. The joints of these skeletal frames are projected into two-dimensional space, forming a "movement polygon." These polygons are further transformed into a one-dimensional space by computing amplitudes at different angles from the centroid of each polygon. The feature vector is formed by sampling these amplitudes at different angles. The performance of the algorithm is evaluated using a support vector machine on four public datasets: MSR Action3D, Berkeley MHAD, TST Fall Detection, and NTU-RGB+D; the highest accuracies achieved on these datasets are 94.13%, 93.34%, 95.7%, and 86.8%, respectively. These accuracies compare favorably with similar state-of-the-art methods and show superior performance.
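A minimal sketch of the movement-polygon idea described above: project the 3-D joints of one frame to 2-D, take the centroid, and sample amplitudes (distances from the centroid) over a fixed set of angles to obtain a 1-D feature vector. The joint count, number of angle bins, and per-bin aggregation are assumptions, not the authors' exact construction.

```python
# Sketch of a "movement polygon" descriptor for one skeletal frame.
import numpy as np

def movement_polygon_features(joints_3d: np.ndarray, n_angles: int = 36) -> np.ndarray:
    """joints_3d: (J, 3) joint positions for one frame -> (n_angles,) feature vector."""
    pts = joints_3d[:, :2]                       # drop depth: simple 2-D projection
    centroid = pts.mean(axis=0)
    rel = pts - centroid
    angles = np.arctan2(rel[:, 1], rel[:, 0])    # angle of each joint around the centroid
    radii = np.linalg.norm(rel, axis=1)          # amplitude of each joint
    bins = np.linspace(-np.pi, np.pi, n_angles + 1)
    feat = np.zeros(n_angles)
    for a, r in zip(angles, radii):
        k = min(np.searchsorted(bins, a, side="right") - 1, n_angles - 1)
        feat[k] = max(feat[k], r)                # keep the largest amplitude per angle bin
    return feat

# Example: 20 random joints -> a 36-dimensional frame descriptor (frame features
# could then be concatenated over time and fed to an SVM).
print(movement_polygon_features(np.random.rand(20, 3)).shape)  # (36,)
```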

Human activity recognition with analysis of angles between skeletal joints using a RGB-depth sensor

  • Ince, Omer Faruk;Ince, Ibrahim Furkan;Yildirim, Mustafa Eren;Park, Jang Sik;Song, Jong Kwan;Yoon, Byung Woo
    • ETRI Journal / Vol. 42, No. 1 / pp.78-89 / 2020
  • Human activity recognition (HAR) has become an effective computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained using an RGB-depth sensor are used as features. Because HAR operates in the time domain, angle information is stored using the sliding kernel method. The Haar wavelet transform (HWT) is applied to preserve the information in the features before reducing the data dimension. Dimension reduction using an averaging algorithm is also applied to decrease the computational cost, which provides faster performance while maintaining high accuracy. Before classification, a proposed thresholding method with the inverse HWT is applied to extract the final feature set. Finally, the k-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The method compares favorably with the results obtained using other machine learning algorithms.
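The pipeline above (joint angles, sliding window, Haar wavelet transform, averaging-based dimension reduction, k-NN) can be sketched roughly as below. The window length, wavelet level, pooling factor, and k are assumptions, and the thresholding step is omitted.

```python
# Rough sketch: joint angle per frame -> windowed Haar transform -> averaged
# coefficients -> k-NN classification.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b formed by the segments b->a and b->c (radians)."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def window_features(angle_series: np.ndarray, pool: int = 4) -> np.ndarray:
    """Haar-transform a window of angles and reduce dimension by block averaging."""
    coeffs = np.concatenate(pywt.wavedec(angle_series, "haar", level=2))
    trimmed = coeffs[: len(coeffs) // pool * pool]
    return trimmed.reshape(-1, pool).mean(axis=1)

# Toy example: two classes of synthetic 64-frame angle windows, classified with k-NN.
rng = np.random.default_rng(1)
X = np.array([window_features(np.sin(np.linspace(0, f, 64)) + 0.05 * rng.normal(size=64))
              for f in [2, 2, 2, 12, 12, 12]])
y = [0, 0, 0, 1, 1, 1]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict(X[:1]))  # [0]
```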

Gabor Filter-based Feature Extraction for Human Activity Recognition (인간의 활동 인정 가보 필터 기반의 특징 추출)

  • 윈안 누;이영구;이승룡
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2011년도 한국컴퓨터종합학술대회논문집 Vol.38 No.1(C) / pp.429-432 / 2011
  • Recognizing human activities from image sequences is an active area of research in computer vision. Most previous work on activity recognition focuses on recognition from a single view and ignores the issue of view invariance. In this paper, we present an independent Gabor features (IGFs) method that derives independent Gabor features in the feature extraction stage. The Gabor-transformed human images exhibit strong characteristics of spatial locality, scale, and orientation selectivity.
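A minimal sketch of Gabor-filter feature extraction in the spirit of the method above: convolve an image with a small bank of Gabor kernels at several orientations and wavelengths and keep simple response statistics. The bank parameters are illustrative, and the independent-component step of IGFs is not included.

```python
# Sketch of Gabor filter-bank feature extraction for a grayscale image.
import cv2
import numpy as np

def gabor_features(gray: np.ndarray,
                   orientations: int = 4,
                   wavelengths=(4.0, 8.0)) -> np.ndarray:
    feats = []
    for lam in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                        lambd=lam, gamma=0.5, psi=0.0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            feats.extend([resp.mean(), resp.std()])   # mean/std of each filter response
    return np.array(feats)

# Example: a 2 * 4 * 2 = 16-dimensional descriptor for a random 64x64 patch.
print(gabor_features(np.random.rand(64, 64)).shape)  # (16,)
```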

Tempo-oriented music recommendation system based on human activity recognition using accelerometer and gyroscope data (가속도계와 자이로스코프 데이터를 사용한 인간 행동 인식 기반의 템포 지향 음악 추천 시스템)

  • 신승수;이기용;김형국
    • 한국음향학회지 / Vol. 39, No. 4 / pp.286-291 / 2020
  • This paper proposes a system that recommends music through tempo-based music classification and sensor-based human activity recognition. The proposed method indexes music files through tempo-based music classification and recommends music suited to the recognized activity. For accurate music classification, a modulation-spectrum-based dynamic classifier and a mel-spectrogram-based sequence classifier are used together. In addition, data from a smartphone's simple accelerometer and gyroscope sensors are applied to a deep spiking neural network to improve activity recognition performance. Finally, music recommendation is performed through a mapping table that considers the relationship between the recognized activity and the indexed music files. Experimental results show that the proposed system is suitable for use on real mobile devices equipped with a music player.
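The sketch below illustrates only the final mapping-table step described above: music files indexed by tempo class are matched to the recognized activity. The activity labels, tempo classes, and file names are placeholders; the tempo classifiers and the spiking-network recognizer are out of scope here.

```python
# Sketch of the activity-to-tempo mapping table for recommendation.
import random

TEMPO_INDEX = {                      # tempo class -> indexed music files (illustrative)
    "slow":   ["calm_01.mp3", "calm_02.mp3"],
    "medium": ["walkbeat_01.mp3"],
    "fast":   ["run_01.mp3", "run_02.mp3"],
}

ACTIVITY_TO_TEMPO = {                # recognized activity -> preferred tempo class
    "lying": "slow", "sitting": "slow",
    "walking": "medium",
    "running": "fast",
}

def recommend(activity: str) -> str:
    tempo = ACTIVITY_TO_TEMPO.get(activity, "medium")
    return random.choice(TEMPO_INDEX[tempo])

print(recommend("running"))  # e.g. "run_02.mp3"
```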

A Study on Visual Perception based Emotion Recognition using Body-Activity Posture (사용자 행동 자세를 이용한 시각계 기반의 감정 인식 연구)

  • 김진옥
    • 정보처리학회논문지B / Vol. 18B, No. 5 / pp.305-314 / 2011
  • Research on visually recognizing emotion to understand human intentions has traditionally concentrated on recognizing facial expressions that reveal emotion. Recently, new possibilities for emotion recognition have been sought in body language, that is, in how emotion is expressed through body activity and posture. This study proposes a method for recognizing basic emotional intent from actions by building a neural model that applies the visual-system processing mechanisms of neurophysiology. To this end, a biologically based neural-model detector is constructed according to an information-processing model of the visual cortex and used to discriminate six major basic emotions from static postures of body activity. The feasibility of the proposed model, which is robust to parameter changes, is demonstrated by comparing its results with those of human observers on a set of body-activity postures.

Dynamic Human Activity Recognition Based on Improved FNN Model

  • Xu, Wenkai;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 15, No. 4 / pp.417-424 / 2012
  • In this paper, we propose an automatic system that recognizes dynamic human gesture activity, including the Arabic numerals from 0 to 9. We assume that the gesture trajectory lies almost in a plane, called the principal gesture plane; the least squares method is used to estimate this plane, and the 3-D trajectory model is projected onto it. An improved FNN model combined with an HMM is proposed for dynamic gesture recognition, combining the HMM's ability to model temporal data with that of a fuzzy neural network. The proposed algorithm shows satisfactory performance and a high recognition rate.
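A minimal sketch of the plane-estimation step described above, assuming a least-squares fit via SVD: fit the principal gesture plane to a 3-D trajectory and project the trajectory onto it, producing 2-D coordinates for a downstream recognizer (the FNN/HMM part is not implemented here).

```python
# Sketch: least-squares fit of the principal gesture plane and projection onto it.
import numpy as np

def project_to_principal_plane(traj: np.ndarray) -> np.ndarray:
    """traj: (N, 3) gesture trajectory -> (N, 2) coordinates in the best-fit plane."""
    centroid = traj.mean(axis=0)
    centered = traj - centroid
    # Rows of Vt are principal directions; the last one is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:2]                      # two in-plane axes
    return centered @ basis.T

# Example: a noisy circle drawn roughly in a tilted plane (a "0"-like gesture).
t = np.linspace(0, 2 * np.pi, 100)
traj = np.c_[np.cos(t), np.sin(t), 0.3 * np.cos(t)] + 0.01 * np.random.randn(100, 3)
print(project_to_principal_plane(traj).shape)  # (100, 2)
```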