• Title/Summary/Keyword: human activities recognition

Real-time Recognition of Daily Human Activities Using A Single Tri-axial Accelerometer

  • Rubaiyeat, Husne Ara; Khan, Adil Mehmood; Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2010.04a / pp.289-292 / 2010
  • Recently, human activity recognition using accelerometers has become a prominent research area in proactive computing. In this paper, we present a real-time activity recognition system using a single tri-axial accelerometer. Our system recognizes four primary daily human activities: walking, going upstairs, going downstairs, and sitting. The system also computes additional information from the recognized activities, such as the number of steps, energy expenditure, and activity duration. Finally, all generated information is stored in a database as a daily log.
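
As an illustration of the pipeline this abstract outlines, the sketch below windows a tri-axial stream, derives simple per-window statistics, and counts steps by thresholding the acceleration magnitude. The window size, threshold, and stand-in data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def windows(signal, size=128, step=64):
    """Yield overlapping windows over an (n_samples, 3) acceleration array."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def window_features(w):
    """Per-axis mean and standard deviation plus magnitude statistics."""
    mag = np.linalg.norm(w, axis=1)
    return np.concatenate([w.mean(axis=0), w.std(axis=0), [mag.mean(), mag.std()]])

def count_steps(signal, threshold=11.0):
    """Naive step count: upward crossings of a magnitude threshold (m/s^2)."""
    mag = np.linalg.norm(signal, axis=1)
    above = mag > threshold
    return int(np.sum(~above[:-1] & above[1:]))

acc = np.random.randn(1024, 3) + np.array([0.0, 0.0, 9.81])  # stand-in data
features = np.array([window_features(w) for w in windows(acc)])
print(features.shape, count_steps(acc))  # (15, 8) plus a step count
```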

Development of a Machine-Learning based Human Activity Recognition System including Eastern-Asian Specific Activities

  • Jeong, Seungmin; Choi, Cheolwoo; Oh, Dongik
    • Journal of Internet Computing and Services / v.21 no.4 / pp.127-135 / 2020
  • The purpose of this study is to develop a human activity recognition (HAR) system that distinguishes 13 activities: five activities commonly addressed in conventional HAR research and eight activities from Eastern-Asian culture. The eight special activities include floor-sitting/standing, chair-sitting/standing, floor-lying/up, and bed-lying/up. We used a wrist-worn 3-axis accelerometer for data collection and designed a machine-learning model for activity classification. Data clustering is performed through preprocessing and feature extraction/reduction. We then tested six machine-learning algorithms to compare recognition accuracy. As a result, we achieved an average accuracy of 99.7% for the 13 activities, far better than the average accuracy of current smartwatch-based HAR research (89.4%). The superiority of the developed HAR system is further supported by the 98.7% accuracy we achieved on the publicly available 'pamap2' dataset of 12 activities, for which the best previously reported accuracy is 96.6%.
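
The classifier-comparison step can be pictured with a short scikit-learn sketch: several algorithms scored by cross-validation on placeholder feature vectors with 13 activity labels. The three models shown are stand-ins; the paper's six algorithms and real features are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1300, 20))         # placeholder feature vectors
y = rng.integers(0, 13, size=1300)      # 13 activity labels

for name, clf in [("RandomForest", RandomForestClassifier()),
                  ("SVM", SVC()),
                  ("kNN", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f}")
```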

A Robust and Device-Free Daily Activities Recognition System using Wi-Fi Signals

  • Ding, Enjie; Zhang, Yue; Xin, Yun; Zhang, Lei; Huo, Yu; Liu, Yafeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2377-2397 / 2020
  • Human activity recognition is widely used in smart homes, health care, and indoor monitoring. Traditional approaches all require hardware installation or wearable sensors, which incurs additional cost and imposes many restrictions on usage. This paper therefore presents a novel device-free activity recognition system based on advanced wireless technologies. Fine-grained channel state information (CSI) of the wireless channel is employed as the indicator of human activities. To improve accuracy, both the amplitude and phase information of CSI are extracted and shaped into feature vectors for activity recognition. In addition, we discuss the classification accuracy of different features and select the most stable ones for the feature matrix. Our experimental evaluation in two laboratories of different sizes demonstrates that the proposed scheme achieves average accuracies above 95% and 90% in the two scenarios.
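
A minimal sketch of the CSI feature construction the abstract describes: stack per-subcarrier amplitude with unwrapped, detrended phase into one feature row per packet. The linear-trend phase sanitization is a common CSI preprocessing step and an assumption here, not necessarily the paper's exact method.

```python
import numpy as np

def csi_features(csi):
    """csi: (n_packets, n_subcarriers) complex channel state information."""
    amplitude = np.abs(csi)
    phase = np.unwrap(np.angle(csi), axis=1)
    # Remove the linear trend across subcarriers caused by timing offsets.
    idx = np.arange(csi.shape[1])
    slope = (phase[:, -1] - phase[:, 0]) / (idx[-1] - idx[0])
    phase = phase - slope[:, None] * idx[None, :] - phase.mean(axis=1, keepdims=True)
    return np.hstack([amplitude, phase])  # one feature row per packet

csi = np.exp(1j * np.random.rand(100, 30))  # stand-in CSI, 30 subcarriers
print(csi_features(csi).shape)              # (100, 60)
```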

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan; Jalal, Ahmad; Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.5 / pp.1856-1869 / 2015
  • This paper addresses 3D human activity detection, tracking, and recognition from RGB-D video sequences using a structured feature framework. Initially, dense depth images are captured with a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion, and compute the centroid of each activity using a chain-coding mechanism and centroid-point extraction. For the body skin-joint features, we estimate human skin color to identify body parts (i.e., head, hands, and feet) and extract joint-point information from them. These joint points are further processed to extract features, including distance-position features and centroid-distance features. Lastly, self-organizing maps are used to recognize the different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system should be applicable to consumer applications such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
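
The skin-color step that localizes head, hands, and feet might look like the following rule-based sketch (a classic RGB skin test plus a centroid computation); the thresholds are textbook values and an assumption, since the paper's color model is not given in the abstract.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of likely skin pixels for an (H, W, 3) uint8 RGB image."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (rgb.max(-1).astype(int) - rgb.min(-1).astype(int) > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))

def centroid(mask):
    """Centroid (row, col) of the mask, or None when no skin is found."""
    ys, xs = np.nonzero(mask)
    return (ys.mean(), xs.mean()) if len(ys) else None

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # stand-in frame
print(centroid(skin_mask(frame)))
```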

Human Activities Recognition Based on Skeleton Information via Sparse Representation

  • Liu, Suolan; Kong, Lizhi; Wang, Hongyuan
    • Journal of Computing Science and Engineering / v.12 no.1 / pp.1-11 / 2018
  • Human activity recognition is a challenging task due to the complexity of human movements and the variation among subjects performing the same action. This paper presents a recognition algorithm that uses skeleton information generated from depth maps. The feature vector is produced by concatenating motion features with a temporal-constraint feature. An improved fast classifier based on sparse representation is then proposed by reducing the dictionary scale. The developed method is shown to be effective at recognizing different activities on the UTD-MHAD dataset, and comparison results indicate the superior performance of our method over several existing methods.
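
Sparse-representation classification of the kind the abstract describes can be sketched compactly: code a test sample over a dictionary of training features and pick the class with the smallest reconstruction residual. Orthogonal matching pursuit stands in for the paper's solver, and all data are placeholders.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n_class, per_class, dim = 5, 20, 60
D = rng.normal(size=(dim, n_class * per_class))    # dictionary, one column per training sample
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(n_class), per_class)
y = D[:, 3] + 0.05 * rng.normal(size=dim)          # test sample near a class-0 atom

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False).fit(D, y)
x = omp.coef_
# Classify by the class whose atoms best reconstruct the sample.
residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
             for c in range(n_class)]
print("predicted class:", int(np.argmin(residuals)))  # expect 0
```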

A Human Activity Recognition System Using ICA and HMM

  • Uddin, Zia; Lee, J.J.; Kim, T.S.
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.499-503 / 2008
  • In this paper, a novel human activity recognition method is proposed that utilizes independent components of activity shape information from image sequences together with Hidden Markov Models (HMMs) for recognition. Activities are represented by feature vectors obtained from Independent Component Analysis (ICA) of video images, and based on these features, recognition is achieved by HMMs trained per activity. Our recognition performance is compared to the conventional approach, in which Principal Component Analysis (PCA) is typically used to derive activity shape features. The results show that superior recognition is achieved with the proposed method, especially for activities (e.g., skipping) that cannot easily be recognized by the conventional method.
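
A rough sketch of the ICA+HMM pipeline: FastICA derives independent components from flattened silhouette frames, one HMM is trained per activity, and classification picks the model with the highest log-likelihood. This assumes the third-party hmmlearn package; dimensions and data are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.decomposition import FastICA
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
frames = rng.normal(size=(200, 32 * 24))   # 200 flattened silhouette frames
ica = FastICA(n_components=8, random_state=0)
feats = ica.fit_transform(frames)          # ICA feature vector per frame

# One HMM per activity, trained on that activity's feature sequences.
models = {}
for activity in ("walk", "skip"):
    seq = feats[rng.integers(0, 200, 120)]  # stand-in training sequence
    models[activity] = GaussianHMM(n_components=4, n_iter=20).fit(seq)

test = feats[:30]                           # unknown sequence
print(max(models, key=lambda a: models[a].score(test)))
```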

Human Posture Recognition: Methodology and Implementation

  • Htike, Kyaw Kyaw; Khalifa, Othman O.
    • Journal of Electrical Engineering and Technology / v.10 no.4 / pp.1910-1914 / 2015
  • Human posture recognition is an attractive and challenging topic in computer vision due to its promising applications in personal health care, environmental awareness, human-computer interaction, and surveillance systems. Human posture recognition in video sequences consists of two stages: training and evaluation, followed by deployment. In the first stage, the system is trained and evaluated on datasets of human postures to 'teach' it to classify postures in future inputs. Once training and evaluation are deemed satisfactory, as measured by recognition rates, the trained system is deployed to recognize human postures in any input video sequence. Different classifiers were used in training, such as Multilayer Perceptron feedforward neural networks, Self-Organizing Maps, Fuzzy C-Means, and K-Means. Results show that supervised learning classifiers tend to perform better than unsupervised classifiers for human posture recognition.
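
The supervised-versus-unsupervised comparison can be illustrated with scikit-learn: an MLP against K-Means (clusters mapped to labels by majority vote) on synthetic "posture" blobs. The data and the four-class setup are assumptions for the sketch only.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=600, centers=4, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(max_iter=500, random_state=0).fit(Xtr, ytr)
print("MLP accuracy:", mlp.score(Xte, yte))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xtr)
# Map each cluster to its majority training label, then score on the test set.
mapping = {c: np.bincount(ytr[km.labels_ == c]).argmax() for c in range(4)}
pred = np.array([mapping[c] for c in km.predict(Xte)])
print("KMeans accuracy:", (pred == yte).mean())
```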

Development of a Hybrid Deep-Learning Model for the Human Activity Recognition based on the Wristband Accelerometer Signals

  • Jeong, Seungmin; Oh, Dongik
    • Journal of Internet Computing and Services / v.22 no.3 / pp.9-16 / 2021
  • This study aims to develop a human activity recognition (HAR) system as a deep-learning (DL) classification model that distinguishes various human activities. For the user's convenience, we rely solely on the signals from a wristband accelerometer worn by the person. 3-axis sequential acceleration signals are gathered within a predefined time-window slice and used as input to the classification system. We are particularly interested in developing a deep-learning model that can outperform conventional machine-learning classification performance. A total of 13 activities from laboratory experiments are used for the initial performance comparison. We improved classification performance using a Convolutional Neural Network (CNN) combined with autoencoder feature reduction and parameter tuning. With various publicly available HAR datasets, we also achieved significant improvement in HAR classification. Our CNN model is further compared against a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) to demonstrate its superiority. Notably, our model could distinguish both general activities and near-identical activities, such as sitting down on a chair versus on the floor, with almost perfect classification accuracy.
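
A minimal Keras sketch of the hybrid idea: a convolutional autoencoder is pretrained to compress 128-sample 3-axis windows, and a 1D-CNN classifier with a 13-class head is stacked on the pretrained encoder. Layer sizes, window length, and training setup are assumptions, not the authors' model.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, channels, n_classes = 128, 3, 13
X = np.random.randn(256, window, channels).astype("float32")  # stand-in windows
y = np.random.randint(0, n_classes, size=256)                 # stand-in labels

# Autoencoder: the encoder halves the time axis, the decoder reconstructs it.
encoder = keras.Sequential([
    keras.Input(shape=(window, channels)),
    layers.Conv1D(16, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
])
decoder = keras.Sequential([
    keras.Input(shape=(window // 2, 16)),
    layers.UpSampling1D(2),
    layers.Conv1D(channels, 5, padding="same"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile("adam", "mse")
autoencoder.fit(X, X, epochs=1, verbose=0)         # unsupervised pretraining

# 1D-CNN classifier stacked on the pretrained encoder.
classifier = keras.Sequential([
    encoder,
    layers.Conv1D(32, 5, activation="relu", padding="same"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(n_classes, activation="softmax"),
])
classifier.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
classifier.fit(X, y, epochs=1, verbose=0)
print(classifier.predict(X[:1], verbose=0).shape)  # (1, 13)
```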

Human Activity Pattern Recognition Using Motion Information and Joints of Human Body

  • Kwak, Nae-Joung; Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.6 / pp.1179-1186 / 2012
  • In this paper, we propose an algorithm that recognizes human activity patterns using the joints of the human body and their motion information. The proposed method extracts the object from the input video, automatically locates joints using the proportions of the human body, and applies a block-matching algorithm to each joint to obtain its motion information. The method uses the moving joints, the directional vectors of joint motion, and the sign of the change in each joint's x and y coordinates as the basic parameters for activity recognition. The proposed method was tested on eight human activities in video input from a web camera and achieved a good recognition rate.
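
The per-joint block-matching step can be sketched directly: for a block around a joint in one frame, search a small neighborhood in the next frame for the minimum sum of absolute differences and return the motion vector, whose signs give the x/y increase or decrease the abstract mentions. Block size and search radius are assumptions.

```python
import numpy as np

def block_match(prev, curr, y, x, block=8, radius=4):
    """Return (dy, dx) minimizing the sum of absolute differences around (y, x)."""
    ref = prev[y:y + block, x:x + block]
    best, best_vec = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= curr.shape[0] - block and 0 <= xx <= curr.shape[1] - block:
                cand = curr[yy:yy + block, xx:xx + block]
                sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
                if sad < best:
                    best, best_vec = sad, (dy, dx)
    return best_vec

prev = np.random.randint(0, 256, (120, 160), dtype=np.uint8)  # stand-in frames
curr = np.roll(prev, (2, -1), axis=(0, 1))                    # content shifts by (2, -1)
print(block_match(prev, curr, 50, 60))                        # expect (2, -1)
```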

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache; Badra Nawal Benmoussat
    • Journal of Information Processing Systems / v.19 no.6 / pp.730-744 / 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, particularly video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also help assist elderly and frail persons, potentially revolutionizing and improving their lives. Human activity recognition remains problematic because of the large variations in how actions are executed, especially when recognition is realized through an external device, such as a robot serving as a personal assistant: the inferred information is used both online, to assist the person, and offline, to support the personal assistant. With the proposed method being robust against the various factors of variability in action execution, the main purpose of this paper is to perform efficient and simple recognition from egocentric camera data alone, using a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric and several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
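
In the spirit of the paper's CNN approach (the exact architecture is not given in the abstract), here is a hedged Keras sketch of a frame-level activity classifier: a MobileNetV2 backbone with a small softmax head. weights=None keeps the sketch self-contained (ImageNet weights would normally be used), and the class count is an assumption.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_activities = 10                                      # assumed class count
backbone = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None, pooling="avg")
model = keras.Sequential([
    backbone,
    layers.Dense(n_activities, activation="softmax"),  # activity head
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

frames = np.random.rand(8, 224, 224, 3).astype("float32")  # stand-in frames
labels = np.random.randint(0, n_activities, size=8)
model.fit(frames, labels, epochs=1, verbose=0)
print(model.predict(frames[:1], verbose=0).argmax())
```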