• Title/Summary/Keyword: Human Activity Recognition (HAR)

Human Activity Recognition in Smart Homes Based on a Difference of Convex Programming Problem

  • Ghasemi, Vahid;Pouyan, Ali A.;Sharifi, Mohsen
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.1, pp.321-344, 2017
  • Smart homes are the new generation of homes where pervasive computing is employed to make the lives of the residents more convenient. Human activity recognition (HAR) is a fundamental task in these environments. Since critical decisions will be made based on HAR results, accurate recognition of human activities with low uncertainty is of crucial importance. In this paper, a novel HAR method based on a difference of convex programming (DCP) problem is presented, which manages to handle uncertainty. For this purpose, given an input sensor data stream, a primary belief in each activity is calculated for the sensor events. Since the primary beliefs are calculated based on some abstractions, they naturally bear an amount of uncertainty. To mitigate the effect of the uncertainty, a DCP problem is defined and solved to yield secondary beliefs. In this procedure, the uncertainty stemming from a sensor event is alleviated by its neighboring sensor events in the input stream. The final activity inference is based on the secondary beliefs. The proposed method is evaluated using a well-known and publicly available dataset. It is compared to four HAR schemes based on temporal probabilistic graphical models and to a convex optimization-based HAR procedure as benchmarks. The proposed method outperforms the benchmarks, achieving an accuracy of 82.61% and an average F-measure of 82.3%.
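The abstract does not give the exact DCP objective, but the general recipe (primary beliefs refined into secondary beliefs by a difference-of-convex program solved with the DC algorithm, DCA) can be illustrated with a toy sketch. The objective, the `dca_belief_smoothing` helper, and all parameter values below are illustrative assumptions, not the authors' formulation: fidelity to the primary beliefs plus temporal smoothness (convex) minus a confidence-rewarding quadratic (concave), solved by repeatedly linearizing the concave part.

```python
import numpy as np

def dca_belief_smoothing(primary, lam=1.0, mu=0.5, n_iter=20):
    """Toy DCA loop: refine per-event activity beliefs by trading off
    fidelity to the primary beliefs, temporal smoothness along the event
    stream, and a concave term that rewards confident (peaked) beliefs.

    Objective (illustrative, not the paper's):
        min_X ||X - B||^2 + lam * sum_i ||X_i - X_{i-1}||^2 - mu * ||X||^2
    which is a difference of two convex functions.
    """
    B = np.asarray(primary, dtype=float)
    n, k = B.shape

    # Path-graph Laplacian encoding the temporal smoothness term.
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0

    A = np.eye(n) + lam * L          # Hessian factor of the convex part
    X = B.copy()
    for _ in range(n_iter):
        # Linearize the concave part (-mu * ||X||^2) at the current iterate
        # and solve the resulting convex quadratic subproblem in closed form.
        X = np.linalg.solve(A, B + mu * X)

    # Renormalize rows so each event's secondary beliefs sum to one.
    X = np.clip(X, 1e-9, None)
    return X / X.sum(axis=1, keepdims=True)

# Example: three activities, a noisy stream of eight sensor events.
rng = np.random.default_rng(0)
primary = rng.dirichlet(np.ones(3), size=8)
secondary = dca_belief_smoothing(primary)
print(secondary.argmax(axis=1))      # final activity inference per event
```

Each DCA step has a closed form here only because both convex terms are quadratic; the paper's actual subproblems would in general require a convex solver.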

Human Activity Recognition Using Body Joint-Angle Features and Hidden Markov Model

  • Uddin, Md. Zia;Thang, Nguyen Duc;Kim, Jeong-Tai;Kim, Tae-Seong
    • ETRI Journal, v.33 no.4, pp.569-579, 2011
  • This paper presents a novel approach for human activity recognition (HAR) using the joint angles from a 3D model of a human body. Unlike conventional approaches in which the joint angles are computed from inverse kinematic analysis of the optical marker positions captured with multiple cameras, our approach utilizes the body joint angles estimated directly from time-series activity images acquired with a single stereo camera by co-registering a 3D body model to the stereo information. The estimated joint-angle features are then mapped into codewords to generate discrete symbols for a hidden Markov model (HMM) of each activity. With these symbols, each activity is trained through the HMM, and later, all the trained HMMs are used for activity recognition. The performance of our joint-angle-based HAR has been compared to that of a conventional binary and depth silhouette-based HAR, producing significantly better results in the recognition rate, especially for the activities that are not discernible with the conventional approaches.
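As a rough sketch of the codeword pipeline described above, the snippet below vector-quantizes joint-angle frames with k-means and scores test sequences under per-activity sequence models. A Laplace-smoothed first-order Markov chain over codewords is used as a lightweight stand-in for the per-activity HMMs (a discrete HMM library could be dropped in at the same point); the feature dimension, codebook size, and toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
N_ANGLES, CODEBOOK = 10, 16

# Toy data: {activity: list of joint-angle sequences, each (T, N_ANGLES)}.
train = {
    "walking": [rng.normal(0.0, 1.0, (40, N_ANGLES)) for _ in range(5)],
    "sitting": [rng.normal(2.0, 1.0, (40, N_ANGLES)) for _ in range(5)],
}

# 1) Vector-quantize joint-angle frames into discrete codewords.
all_frames = np.vstack([s for seqs in train.values() for s in seqs])
codebook = KMeans(n_clusters=CODEBOOK, n_init=10, random_state=0).fit(all_frames)
to_symbols = lambda seq: codebook.predict(seq)

# 2) Per activity, fit a Laplace-smoothed first-order transition model over
#    codewords (a lightweight stand-in for the per-activity HMMs).
def fit_chain(seqs):
    counts = np.ones((CODEBOOK, CODEBOOK))          # Laplace smoothing
    for s in seqs:
        sym = to_symbols(s)
        np.add.at(counts, (sym[:-1], sym[1:]), 1.0)
    return counts / counts.sum(axis=1, keepdims=True)

models = {a: fit_chain(seqs) for a, seqs in train.items()}

# 3) Recognize a test sequence by its log-likelihood under each model.
def log_likelihood(trans, seq):
    sym = to_symbols(seq)
    return np.log(trans[sym[:-1], sym[1:]]).sum()

test = rng.normal(2.0, 1.0, (40, N_ANGLES))
scores = {a: log_likelihood(m, test) for a, m in models.items()}
print(max(scores, key=scores.get))                  # expected: "sitting"
```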

Development of a Machine-Learning based Human Activity Recognition System including Eastern-Asian Specific Activities

  • Jeong, Seungmin;Choi, Cheolwoo;Oh, Dongik
    • Journal of Internet Computing and Services, v.21 no.4, pp.127-135, 2020
  • The purpose of this study is to develop a human activity recognition (HAR) system that distinguishes 13 activities, including five activities commonly addressed in conventional HAR research and eight activities specific to Eastern-Asian culture. The eight special activities include floor-sitting/standing, chair-sitting/standing, floor-lying/up, and bed-lying/up. We used a 3-axis accelerometer sensor on the wrist for data collection and designed a machine learning model for activity classification. Data clustering through preprocessing and feature extraction/reduction is performed. We then tested six machine learning algorithms to compare recognition accuracy. As a result, we achieved an average accuracy of 99.7% for the 13 activities. This result is far better than the average accuracy of current smartwatch-based HAR research (89.4%). The strength of the developed HAR system is further supported by the 98.7% accuracy achieved on the publicly available 'pamap2' dataset of 12 activities, for which the best previously reported accuracy is 96.6%.
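The abstract does not specify the window length, features, or the six algorithms; the sketch below only shows the general shape of such a pipeline under assumed values: fixed-length windows of 3-axis acceleration, simple time-domain features, PCA-based reduction, and a cross-validated comparison of a few scikit-learn classifiers on toy data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(signal, fs=50, win_s=2.0):
    """Slice a (T, 3) acceleration signal into windows and compute simple
    time-domain features (mean, std, min, max, magnitude stats) per window."""
    win = int(fs * win_s)
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0),
                                     [mag.mean(), mag.std()]]))
    return np.array(feats)

# Toy stand-in for labelled wristband recordings of two activities.
rng = np.random.default_rng(2)
X = np.vstack([window_features(rng.normal(0, 0.3, (3000, 3))),
               window_features(rng.normal(0, 1.5, (3000, 3)))])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

# Compare a few classifiers after scaling and PCA-based feature reduction.
for name, clf in [("k-NN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=8), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```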

A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.2, pp.1118-1133, 2017
  • Computer vision-based human activity recognition (HAR) has become very popular due to its applications in various fields such as smart home healthcare for elderly people. A video-based activity recognition system has many goals, such as reacting to people's behavior so that the system can proactively assist them with their tasks. A novel approach is proposed in this work for depth video-based human activity recognition using joint-based motion features of depth body shapes and a Deep Belief Network (DBN). From depth video, the different body parts involved in human activities are first segmented by means of a trained random forest. Then, motion features representing the magnitude and direction of each joint in the next frame are extracted. Finally, the features are used to train a DBN, which is later used for recognition. The proposed HAR approach showed superior performance over conventional approaches on private and public datasets, indicating a promising approach for practical applications in smartly controlled environments.
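A minimal sketch of the joint-motion feature idea, assuming skeleton joint positions are already available (the random-forest body-part segmentation is not reproduced): per-frame displacement magnitude and unit direction for each joint are computed and fed to a classifier. A single BernoulliRBM feature layer followed by logistic regression stands in for the paper's stacked-RBM deep belief network; the joint count, layer sizes, and toy data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def joint_motion_features(joints):
    """joints: (T, J, 3) array of 3-D joint positions per frame.
    Returns per-frame motion features: magnitude and unit direction of each
    joint's displacement to the next frame, flattened to one vector."""
    disp = np.diff(joints, axis=0)                       # (T-1, J, 3)
    mag = np.linalg.norm(disp, axis=2, keepdims=True)    # (T-1, J, 1)
    direction = disp / np.maximum(mag, 1e-8)             # unit vectors
    return np.concatenate([mag, direction], axis=2).reshape(len(disp), -1)

# Toy stand-in for two activities captured as 20-joint skeleton streams.
rng = np.random.default_rng(3)
slow = joint_motion_features(np.cumsum(rng.normal(0, 0.01, (200, 20, 3)), axis=0))
fast = joint_motion_features(np.cumsum(rng.normal(0, 0.05, (200, 20, 3)), axis=0))
X = np.vstack([slow, fast])
y = np.array([0] * len(slow) + [1] * len(fast))

# Single-RBM feature layer + logistic regression as a shallow stand-in
# for the stacked-RBM deep belief network used in the paper.
model = Pipeline([("scale", MinMaxScaler()),             # RBM expects [0, 1] inputs
                  ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                                       n_iter=20, random_state=0)),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.3f}")
```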

Trends in Activity Recognition Using Smartphone Sensors (스마트폰 기반 행동인식 기술 동향)

  • Kim, M.S.;Jeong, C.Y.;Sohn, J.M.;Lim, J.Y.;Chung, S.E.;Jeong, H.T.;Shin, H.C.
    • Electronics and Telecommunications Trends, v.33 no.3, pp.89-99, 2018
  • Human activity recognition (HAR) is a technology that aims to automatically recognize what a person is doing based on their body motion and gestures. HAR is essential in many applications such as human-computer interaction, health care, rehabilitation engineering, video surveillance, and artificial intelligence. Smartphones are becoming the most popular platform for activity recognition owing to their convenience, portability, and ease of use. The most noticeable change in smartphone-based activity recognition is the adoption of deep learning algorithms, leading to successful learning outcomes. In this article, we analyze technology trends in activity recognition using smartphone sensors, challenging issues for future development, and a change in strategy for generating activity recognition datasets.

Performance of Exercise Posture Correction System Based on Deep Learning (딥러닝 기반 운동 자세 교정 시스템의 성능)

  • Hwang, Byungsun;Kim, Jeongho;Lee, Ye-Ram;Kyeong, Chanuk;Seon, Joonho;Sun, Young-Ghyu;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.5, pp.177-183, 2022
  • Recently, interest in home training has been growing due to COVID-19. Accordingly, research on applying HAR (human activity recognition) technology to home training has been conducted. However, existing HAR studies have addressed static activities rather than dynamic ones. In this paper, a deep learning model is proposed that can analyze dynamic exercise postures and report the accuracy of the user's exercise posture. Fitness images from AI-Hub are analyzed with BlazePose. The experiment compares three types of deep learning models: RNN (recurrent neural network), LSTM (long short-term memory), and CNN (convolutional neural network). In the simulation results, the F1-scores of the RNN, LSTM, and CNN were 0.49, 0.87, and 0.98, respectively. These results confirm that the CNN is more suitable for human activity recognition than the other models. More exercise postures can be analyzed using a wider variety of training data.
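A minimal sketch of the CNN variant, assuming fixed-length sequences of BlazePose keypoints as input; the sequence length, layer sizes, and toy data are assumptions rather than the authors' configuration. The RNN and LSTM baselines can be obtained by replacing the Conv1D layers with tf.keras.layers.SimpleRNN or tf.keras.layers.LSTM.

```python
# Assumes TensorFlow 2.x; the sequence length (30 frames), 33 BlazePose
# keypoints with (x, y, visibility), and the layer sizes are illustrative.
import numpy as np
import tensorflow as tf

T, F, N_CLASSES = 30, 33 * 3, 5          # frames, keypoint features, postures

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy pose sequences standing in for the AI-Hub fitness clips.
rng = np.random.default_rng(4)
X = rng.normal(size=(256, T, F)).astype("float32")
y = rng.integers(0, N_CLASSES, size=256)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [loss, accuracy] on the toy data
```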

Development of a Hybrid Deep-Learning Model for the Human Activity Recognition based on the Wristband Accelerometer Signals

  • Jeong, Seungmin;Oh, Dongik
    • Journal of Internet Computing and Services, v.22 no.3, pp.9-16, 2021
  • This study aims to develop a human activity recognition (HAR) system as a Deep-Learning (DL) classification model that distinguishes various human activities. We rely solely on the signals from a wristband accelerometer worn by the person, for the user's convenience. 3-axis sequential acceleration signal data are gathered within a predefined time-window slice and used as input to the classification system. We are particularly interested in developing a Deep-Learning model that can outperform conventional machine learning classification performance. A total of 13 activities based on the laboratory experiments' data are used for the initial performance comparison. We improved classification performance using a Convolutional Neural Network (CNN) combined with auto-encoder feature reduction and parameter tuning. With various publicly available HAR datasets, we could also achieve significant improvement in HAR classification. Our CNN model is also compared against a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) to demonstrate its superiority. Notably, our model could distinguish both general activities and near-identical activities, such as sitting down on a chair versus on the floor, with almost perfect classification accuracy.
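A rough sketch of the auto-encoder feature-reduction idea under assumed shapes and layer widths (the paper's exact hybrid CNN architecture is not reproduced): an autoencoder is trained to compress each acceleration window into a small latent vector, and a separate classifier is then trained on those reduced features.

```python
# Assumes TensorFlow 2.x. Window length, latent size, and layer widths are
# illustrative; the paper's exact hybrid architecture is not reproduced here.
import numpy as np
import tensorflow as tf

WIN, AXES, LATENT, N_CLASSES = 128, 3, 16, 13

# Toy windowed 3-axis wristband data standing in for the laboratory recordings.
rng = np.random.default_rng(5)
X = rng.normal(size=(512, WIN, AXES)).astype("float32")
y = rng.integers(0, N_CLASSES, size=512)

# 1) Autoencoder: compress each window into a small latent feature vector.
encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WIN, AXES)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(LATENT, activation="relu"),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(WIN * AXES),
    tf.keras.layers.Reshape((WIN, AXES)),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=3, batch_size=32, verbose=0)

# 2) Classifier on the reduced features (a dense stand-in for the CNN stage).
clf = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LATENT,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
clf.fit(encoder.predict(X, verbose=0), y, epochs=3, batch_size=32, verbose=0)
```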

Human activity recognition with analysis of angles between skeletal joints using a RGB-depth sensor

  • Ince, Omer Faruk;Ince, Ibrahim Furkan;Yildirim, Mustafa Eren;Park, Jang Sik;Song, Jong Kwan;Yoon, Byung Woo
    • ETRI Journal, v.42 no.1, pp.78-89, 2020
  • Human activity recognition (HAR) has become an effective computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. In order to implement HAR, joint angles obtained using an RGB-depth sensor are used as features. Because HAR operates in the time domain, angle information is stored using the sliding kernel method. The Haar wavelet transform (HWT) is applied to preserve the information of the features before reducing the data dimension. Dimension reduction using an averaging algorithm is also applied to decrease the computational cost, which provides faster performance while maintaining high accuracy. Before classification, a proposed thresholding method with inverse HWT is applied to extract the final feature set. Finally, the k-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The method compares favorably with the results of other machine learning algorithms.
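A simplified sketch of the wavelet-based feature pipeline, assuming windows of joint-angle time series as input: each angle channel is Haar-decomposed with PyWavelets, small detail coefficients are hard-thresholded (a rough stand-in for the paper's thresholding with inverse HWT), and the resulting features are classified with k-NN. The window length, threshold rule, and toy data are assumptions.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.neighbors import KNeighborsClassifier

def haar_features(angle_seq, level=3, keep=0.5):
    """angle_seq: (T, A) sliding window of A joint angles over T frames.
    Haar-decompose each angle channel, zero out small detail coefficients,
    and concatenate the surviving coefficients as the feature vector."""
    feats = []
    for ch in angle_seq.T:
        coeffs = pywt.wavedec(ch, "haar", level=level)
        approx, details = coeffs[0], coeffs[1:]
        thresh = keep * max(np.abs(np.concatenate(details)).max(), 1e-8)
        details = [pywt.threshold(d, thresh, mode="hard") for d in details]
        feats.append(np.concatenate([approx] + details))
    return np.concatenate(feats)

# Toy joint-angle windows for two activities with different motion rhythms.
rng = np.random.default_rng(6)
make = lambda f: np.sin(f * np.linspace(0, 6, 64))[:, None] + rng.normal(0, 0.1, (64, 8))
X = np.array([haar_features(make(f)) for f in ([1.0] * 30 + [3.0] * 30)])
y = np.array([0] * 30 + [1] * 30)

knn = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])   # train on half
print(f"held-out accuracy: {knn.score(X[1::2], y[1::2]):.2f}")
```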

Human Activity Recognition with LSTM Using the Egocentric Coordinate System Key Points

  • Wesonga, Sheilla;Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence, v.24 no.6_1, pp.693-698, 2021
  • As technology advances, there is an increasing need for research in the different fields where it is applied. One of the most researched topics in computer vision is human activity recognition (HAR), which has been widely applied in fields including healthcare, video surveillance, and education. In this paper, we present a scale- and rotation-invariant human activity recognition system that employs the Kinect depth sensor to obtain the human skeleton joints. In contrast to previous approaches that use joint angles, we propose using the angle of each limb with the X, Y, and Z axes as feature vectors. The use of these limb angles makes our system scale invariant. We further calculate the body's relative direction in egocentric coordinates in order to provide rotation invariance. For the system parameters, we employ 8 limbs, each with its corresponding angles to the X, Y, and Z axes of the coordinate system, as feature vectors. The extracted features are trained and tested with a long short-term memory (LSTM) network, which gives us an average accuracy of 98.3%.
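A minimal sketch of limb-axis-angle features with an LSTM classifier, assuming 3-D joint positions are already expressed in the egocentric frame (the rotation normalization itself is not shown); the limb list, sequence length, and network width are assumptions.

```python
# Assumes TensorFlow 2.x; the limb list, sequence length, and LSTM width are
# illustrative. Features are the angles of each limb vector with the X, Y,
# and Z axes, computed from 3-D joint positions.
import numpy as np
import tensorflow as tf

LIMBS = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 5), (5, 6), (6, 7), (7, 8)]  # joint index pairs

def limb_axis_angles(joints):
    """joints: (T, J, 3) skeleton stream -> (T, 8 * 3) angles (radians)
    between each limb vector and the X, Y, Z axes."""
    feats = []
    for a, b in LIMBS:
        v = joints[:, b] - joints[:, a]                       # (T, 3)
        v = v / np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-8)
        feats.append(np.arccos(np.clip(v, -1.0, 1.0)))        # components of a unit vector are its cosines with the axes
    return np.concatenate(feats, axis=1)

# Toy skeleton sequences for three activities.
rng = np.random.default_rng(7)
X = np.stack([limb_axis_angles(rng.normal(size=(40, 9, 3))) for _ in range(300)])
y = rng.integers(0, 3, size=300)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40, len(LIMBS) * 3)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X.astype("float32"), y, epochs=3, batch_size=32, verbose=0)
```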

Human Activity Recognition Using Sensor Fusion and Kernel Discriminant Analysis on Smartphones (스마트폰에서 센서 융합과 커널 판별 분석을 이용한 인간 활동 인식)

  • Cho, Jung-Gil
    • Journal of the Korea Convergence Society, v.11 no.5, pp.9-17, 2020
  • Human activity recognition (HAR) using smartphones is a hot research topic in computational intelligence. Smartphones are equipped with a variety of sensors, and fusing the data from these sensors can enable applications to recognize a large number of activities. However, these devices have limited resources and a limited number of sensors available, so feature selection and classification methods are required to achieve optimal performance and efficient feature extraction. This paper proposes a smartphone-based HAR scheme that meets these requirements. The proposed method extracts time-domain features from acceleration, gyroscope, and barometer sensors and recognizes activities with high accuracy by applying kernel discriminant analysis (KDA) and an SVM. This approach selects the most relevant features of each sensor for each activity. Our comparison results show that the proposed system outperforms previous smartphone-based HAR systems.
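A sketch of the sensor-fusion idea under assumed window statistics and toy data. scikit-learn has no kernel discriminant analysis class, so an explicit kernel feature map (Nystroem) followed by linear discriminant analysis is used here as a stand-in for KDA, with an SVM as the final classifier.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def time_domain_features(accel, gyro, baro):
    """Fuse simple per-window statistics from the three smartphone sensors."""
    parts = []
    for sig in (accel, gyro):                       # (T, 3) each
        parts += [sig.mean(0), sig.std(0), sig.min(0), sig.max(0)]
    parts += [[baro.mean(), baro.std()]]            # (T,) barometer
    return np.concatenate(parts)

# Toy windows for three activities (e.g., standing, walking, stair climbing).
rng = np.random.default_rng(8)
X, y = [], []
for label, (a_std, g_std, b_trend) in enumerate([(0.05, 0.05, 0.0),
                                                 (0.3, 0.2, 0.0),
                                                 (0.4, 0.3, 0.02)]):
    for _ in range(60):
        accel = rng.normal(0, a_std, (100, 3))
        gyro = rng.normal(0, g_std, (100, 3))
        baro = 1000 + b_trend * np.arange(100) + rng.normal(0, 0.05, 100)
        X.append(time_domain_features(accel, gyro, baro))
        y.append(label)
X, y = np.array(X), np.array(y)

# KDA is approximated here by an explicit kernel map (Nystroem) followed by
# LDA; the discriminant projections are then classified with an SVM.
pipe = make_pipeline(StandardScaler(),
                     Nystroem(kernel="rbf", n_components=50, random_state=0),
                     LinearDiscriminantAnalysis(n_components=2),
                     SVC())
print(f"cross-validated accuracy: {cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```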