• Title/Summary/Keyword: Human Activity Recognition

Development of a Hybrid Deep-Learning Model for the Human Activity Recognition based on the Wristband Accelerometer Signals

  • Jeong, Seungmin; Oh, Dongik
    • Journal of Internet Computing and Services / v.22 no.3 / pp.9-16 / 2021
  • This study aims to develop a human activity recognition (HAR) system as a deep-learning (DL) classification model that distinguishes various human activities. For the user's convenience, we rely solely on the signals from a wristband accelerometer worn by the person. Sequential 3-axis acceleration signals are gathered within a predefined time-window slice and used as input to the classification system. We are particularly interested in developing a deep-learning model that can outperform conventional machine-learning classifiers. A total of 13 activities from laboratory-experiment data are used for the initial performance comparison. We improved classification performance using a Convolutional Neural Network (CNN) combined with auto-encoder feature reduction and parameter tuning. With various publicly available HAR datasets, we also achieved significant improvement in HAR classification. Our CNN model is further compared against a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) to demonstrate its superiority. Notably, our model could distinguish both general activities and near-identical activities, such as sitting down on a chair versus on the floor, with almost perfect classification accuracy.
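
A minimal sketch of the windowing-plus-1D-CNN idea described above, written in PyTorch. The 150-sample window (e.g., 3 s at 50 Hz), layer sizes, and 13-class head are assumptions for illustration, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class Acc1DCNN(nn.Module):
    """Classify a fixed-length window of 3-axis accelerometer samples."""
    def __init__(self, n_classes=13, window=150):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2),   # 3 channels: x, y, z
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window // 2 // 2), n_classes)

    def forward(self, x):                    # x: (batch, 3, window)
        return self.classifier(self.features(x).flatten(1))

model = Acc1DCNN()
logits = model(torch.randn(8, 3, 150))       # 8 dummy time-window slices
print(logits.shape)                          # torch.Size([8, 13])
```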

Logical Activity Recognition Model for Smart Home Environment

  • Choi, Jung-In; Lim, Sung-Ju; Yong, Hwan-Seung
    • Journal of the Korea Society of Computer and Information / v.20 no.9 / pp.67-72 / 2015
  • Recently, studies on interaction between humans and things through motion recognition have been increasing due to the expansion of the IoT (Internet of Things). This paper proposes a system that recognizes the user's logical activity in a home environment by attaching sensors to various objects. We employ Arduino sensors and infer the logical activity using the physical activity model developed in our previous research. The system can recognize activities such as watching TV, listening to music, talking, eating, cooking, sleeping, and using a computer. After producing experimental data from virtual scenarios, the average recognition rate was 95%, although the result may vary depending on the sensor configuration and physical-activity recognition errors. To present the recognized results to the user, we visualized them in various graphs.
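
A hypothetical sketch of the logical-activity layer described above: a recognized physical activity is combined with the currently active object sensor through a rule table. The sensor names and rules are illustrative; the paper's actual model and Arduino sensor set may differ:

```python
# Rule table: (physical activity, active object sensor) -> logical activity.
# All entries are hypothetical examples, not the paper's actual rules.
RULES = {
    ("sitting", "tv_remote"): "watching TV",
    ("sitting", "speaker"):   "listening to music",
    ("sitting", "keyboard"):  "using computer",
    ("sitting", "dish"):      "eating",
    ("standing", "stove"):    "cooking",
    ("lying", None):          "sleeping",
}

def logical_activity(physical: str, active_object: str | None) -> str:
    return RULES.get((physical, active_object), "unknown")

print(logical_activity("sitting", "tv_remote"))  # watching TV
print(logical_activity("lying", None))           # sleeping
```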

Performance of Exercise Posture Correction System Based on Deep Learning (딥러닝 기반 운동 자세 교정 시스템의 성능)

  • Hwang, Byungsun; Kim, Jeongho; Lee, Ye-Ram; Kyeong, Chanuk; Seon, Joonho; Sun, Young-Ghyu; Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.177-183 / 2022
  • Recently, interest in home training has grown due to COVID-19. Accordingly, research on applying HAR (human activity recognition) technology to home training has been conducted. However, existing HAR studies have targeted static activities rather than dynamic ones. In this paper, a deep learning model that analyzes dynamic exercise postures and reports the accuracy of the user's posture is proposed. Fitness images from AI-Hub are analyzed with BlazePose. The experiment compares three types of deep learning models: RNN (recurrent neural network), LSTM (long short-term memory), and CNN (convolutional neural network). In the simulation results, the F1-scores of the RNN, LSTM, and CNN were 0.49, 0.87, and 0.98, respectively. These results confirm that the CNN is more suitable for human activity recognition than the other models. More exercise postures can be analyzed using a wider variety of training data.
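
The reported comparison reduces to per-model macro F1 scores on held-out posture labels. A small scikit-learn illustration follows; the label sequences are made up and do not reproduce the paper's 0.49/0.87/0.98 results:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2, 2, 1]          # ground-truth posture classes
predictions = {
    "RNN":  [0, 1, 1, 2, 2, 0, 2, 1],
    "LSTM": [0, 0, 1, 2, 2, 0, 2, 1],
    "CNN":  [0, 0, 1, 1, 2, 2, 2, 1],
}
for name, y_pred in predictions.items():
    print(name, round(f1_score(y_true, y_pred, average="macro"), 2))
```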

Real-world multimodal lifelog dataset for human behavior study

  • Chung, Seungeun; Jeong, Chi Yoon; Lim, Jeong Mook; Lim, Jiyoun; Noh, Kyoung Ju; Kim, Gague; Jeong, Hyuntae
    • ETRI Journal / v.44 no.3 / pp.426-437 / 2022
  • To understand the multilateral characteristics of human behavior and the physiological markers related to physical, emotional, and environmental states, extensive lifelog data collection in a real-world environment is essential. Here, we propose a data collection method using multimodal mobile sensing and present a long-term dataset from 22 subjects and 616 days of experimental sessions. The dataset contains over 10,000 hours of data, including physiological data such as photoplethysmography, electrodermal activity, and skin temperature, in addition to multivariate behavioral data. Furthermore, it contains 10,372 user labels with emotional states and 590 days of sleep quality data. To demonstrate feasibility, human activity recognition was applied to the sensor data using a convolutional neural network-based deep learning model, reaching 92.78% recognition accuracy. From the activity recognition results, we extracted daily behavior patterns and discovered five representative patterns by applying spectral clustering. This demonstrates that the dataset can contribute to understanding human behavior through multimodal data accumulated in daily life under natural conditions.
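
A sketch of the pattern-discovery step: spectral clustering of per-day activity summaries into five representative patterns. The 8-activity daily histograms below are synthetic stand-ins for the real recognition output:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# One row per day: fraction of the day spent in each of 8 assumed activities.
daily_patterns = rng.dirichlet(np.ones(8), size=590)

clustering = SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                                random_state=0)
labels = clustering.fit_predict(daily_patterns)
print(np.bincount(labels))   # number of days assigned to each pattern
```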

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache; Badra Nawal Benmoussat
    • Journal of Information Processing Systems / v.19 no.6 / pp.730-744 / 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, particularly video captured by body-worn cameras, which could be useful for video surveillance, automatic search, and video indexing. It could also help assist elderly and frail persons, improving their quality of life. Human activity recognition remains problematic because of the large variations in how actions are executed, especially when recognition is performed through an external device, such as a robot serving as a personal assistant. The inferred information is used both online to assist the person and offline to support the personal assistant. The main purpose of this paper is therefore an efficient and simple recognition method, robust against the various sources of variability in action execution, that uses only egocentric camera data with a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera data together with several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
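
A minimal sketch of frame-level activity classification from egocentric video with a pretrained CNN backbone. The ResNet-18 backbone and 10-class head are assumptions standing in for the authors' network:

```python
import torch
import torch.nn as nn
from torchvision import models

n_classes = 10   # assumed number of activity classes
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, n_classes)  # new head

frames = torch.randn(4, 3, 224, 224)   # a batch of egocentric frames
logits = backbone(frames)
print(logits.shape)                    # torch.Size([4, 10])
```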

Posture and activity monitoring using a 3-axis accelerometer (3축 가속도 센서를 이용한 자세 및 활동 모니터링)

  • Jeong, Do-Un; Chung, Wan-Young
    • Journal of Sensor Science and Technology / v.16 no.6 / pp.467-474 / 2007
  • Real-time monitoring of human activity provides useful information about activity quantity and capability. The present study implemented a small, low-power acceleration monitoring system for convenient monitoring of activity quantity and recognition of emergency situations, such as falls, during daily life. For wireless transmission of the acceleration sensor signal, we developed a transmission system based on a wireless sensor network, along with a program for storing and monitoring the wirelessly transmitted signals on a PC in real time. The performance of the implemented system was evaluated by assessing its output characteristics under posture changes, and parameters and a context recognition algorithm were developed to monitor activity volume during daily life and to recognize emergency situations such as falls. In particular, recognition errors under sudden changes in acceleration were minimized by applying a fall correction algorithm.
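
A sketch of the fall-detection idea: flag samples where the acceleration signal vector magnitude spikes past a threshold. The 2.5 g threshold and signal are assumptions, not the paper's tuned parameters or correction algorithm:

```python
import numpy as np

def detect_falls(acc_xyz: np.ndarray, threshold_g: float = 2.5) -> np.ndarray:
    """acc_xyz: (n_samples, 3) acceleration in g; returns indices of spikes."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)    # signal vector magnitude
    return np.flatnonzero(magnitude > threshold_g)

acc = np.tile([0.0, 0.0, 1.0], (200, 1))   # quiet standing: ~1 g on one axis
acc[120] = [2.1, 1.5, 2.0]                 # simulated impact spike
print(detect_falls(acc))                   # [120]
```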

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan; Park, Sange-yun; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.977-985 / 2020
  • Aiming at the problem of complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure combining the batch normalization algorithm with the GoogLeNet network model. Applying the batch normalization idea from image classification to action recognition, the algorithm normalizes the network's input training samples by mini-batch. For the convolutional network, RGB images serve as the spatial input and stacked optical flow serves as the temporal input. The spatial and temporal networks are then fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly raises the recognition rate and has clear advantages in action recognition.
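
A compact sketch of the two-stream idea: a spatial stream on RGB frames and a temporal stream on stacked optical flow, each with batch normalization, fused by averaging class scores. The tiny backbones below stand in for GoogLeNet:

```python
import torch
import torch.nn as nn

def stream(in_channels: int, n_classes: int = 101) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.BatchNorm2d(32),           # mini-batch normalization of activations
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, n_classes),
    )

spatial = stream(3)      # RGB frame input
temporal = stream(20)    # 10 stacked flow fields (x and y components)

rgb = torch.randn(2, 3, 224, 224)
flow = torch.randn(2, 20, 224, 224)
scores = (spatial(rgb) + temporal(flow)) / 2   # late spatio-temporal fusion
print(scores.shape)                            # torch.Size([2, 101])
```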

Three-dimensional human activity recognition by forming a movement polygon using posture skeletal data from depth sensor

  • Vishwakarma, Dinesh Kumar; Jain, Konark
    • ETRI Journal / v.44 no.2 / pp.286-299 / 2022
  • Human activity recognition in real time is a challenging task. Recently, a plethora of studies using deep learning architectures has been proposed. The implementation of these architectures requires high machine computing power and a massive database. In contrast, machine learning models based on handcrafted features need less computing power and are very accurate when the features are effectively extracted. In this study, we propose a handcrafted model based on three-dimensional sequential skeleton data. Human body skeleton movement is computed from the joint positions in each frame. The joints of these skeletal frames are projected into two-dimensional space, forming a "movement polygon." These polygons are further transformed into a one-dimensional space by computing amplitudes at different angles from the polygon centroid, and the feature vector is formed by sampling these amplitudes. The performance of the algorithm is evaluated using a support vector machine on four public datasets: MSR Action3D, Berkeley MHAD, TST Fall Detection, and NTU-RGB+D, with highest accuracies of 94.13%, 93.34%, 95.7%, and 86.8%, respectively. These accuracies compare favorably with similar state-of-the-art methods.
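
A sketch of the movement-polygon feature for a single frame: project the joints into 2D, then record the largest joint distance from the polygon centroid within each angular bin. The 36-bin angular sampling is an assumption; per-frame vectors would then feed the SVM:

```python
import numpy as np

def polygon_amplitudes(joints_2d: np.ndarray, n_angles: int = 36) -> np.ndarray:
    """joints_2d: (n_joints, 2) joint positions projected into 2D."""
    offsets = joints_2d - joints_2d.mean(axis=0)           # center on centroid
    radii = np.linalg.norm(offsets, axis=1)                # amplitude per joint
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])
    bins = np.linspace(-np.pi, np.pi, n_angles + 1)
    idx = np.clip(np.digitize(angles, bins) - 1, 0, n_angles - 1)
    feature = np.zeros(n_angles)
    np.maximum.at(feature, idx, radii)                     # max amplitude per bin
    return feature

joints = np.random.default_rng(1).normal(size=(20, 2))     # dummy skeleton
print(polygon_amplitudes(joints).shape)                    # (36,)
```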

Human activity recognition with analysis of angles between skeletal joints using a RGB-depth sensor

  • Ince, Omer Faruk; Ince, Ibrahim Furkan; Yildirim, Mustafa Eren; Park, Jang Sik; Song, Jong Kwan; Yoon, Byung Woo
    • ETRI Journal / v.42 no.1 / pp.78-89 / 2020
  • Human activity recognition (HAR) has become effective as a computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained using an RGB-depth sensor are used as features. Because HAR operates in the time domain, angle information is stored using a sliding kernel method. The Haar-wavelet transform (HWT) is applied to preserve feature information before reducing the data dimension. Dimension reduction using an averaging algorithm is then applied to decrease the computational cost, providing faster performance while maintaining high accuracy. Before classification, the proposed thresholding method with inverse HWT is applied to extract the final feature set. Finally, the k-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The method compares favorably with results using other machine learning algorithms.
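
A sketch of the feature pipeline: a Haar-wavelet transform of a joint-angle window (via PyWavelets), averaging-based dimension reduction, then k-NN classification. The window length, block size, k, and data are assumptions:

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def features(angle_window: np.ndarray) -> np.ndarray:
    """angle_window: (64,) one joint angle over a sliding window."""
    approx, _ = pywt.dwt(angle_window, "haar")   # Haar-wavelet transform
    return approx.reshape(-1, 4).mean(axis=1)    # averaging reduction: 32 -> 8

rng = np.random.default_rng(2)
X = np.stack([features(rng.normal(size=64)) for _ in range(100)])
y = rng.integers(0, 5, size=100)                 # 5 assumed activity classes
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:3]))
```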

Gabor Filter-based Feature Extraction for Human Activity Recognition (인간의 활동 인정 가보 필터 기반의 특징 추출)

  • AnhTu, Nguyen; Lee, Young-Koo; Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference / 2011.06c / pp.429-432 / 2011
  • Recognizing human activities from image sequences is an active area of research in computer vision. Most previous work on activity recognition focuses on recognition from a single view and ignores the issue of view invariance. In this paper, we present an independent Gabor features (IGFs) method, which derives independent Gabor features in the feature extraction stage. The Gabor-transformed human image exhibits strong characteristics of spatial locality, scale, and orientation selectivity.
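
A sketch of the Gabor feature-extraction stage using scikit-image: filter an image at several orientations and pool the responses. The frequency and pooling are illustrative; the IGF method further derives independent features from such Gabor responses:

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image: np.ndarray, n_orientations: int = 4) -> np.ndarray:
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations           # orientation selectivity
        real, _ = gabor(image, frequency=0.2, theta=theta)
        feats += [real.mean(), real.var()]           # simple response pooling
    return np.array(feats)

image = np.random.default_rng(3).random((64, 64))    # dummy human image
print(gabor_features(image))                         # 8-D feature vector
```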