• Title/Summary/Keyword: Human-activity Recognition


Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Dong-Seong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1189-1204
    • /
    • 2018
  • Detecting and capturing 3D human structures from intensity-based image sequences is an inherently challenging problem that has attracted the attention of several researchers, especially in real-time activity recognition (Real-AR). Real-AR systems have been significantly enhanced by depth sensors, which provide richer information than the RGB video sensors used in conventional systems. This study proposes a depth-based routine-logging Real-AR system to identify daily human activity routines and to turn the monitored surroundings into an intelligent living space. The system comprises three stages: data collection with a depth camera, feature extraction based on joint information, and training/recognition of each activity. In addition, the recognition mechanism locates and pinpoints the learned activities and produces routine logs. Evaluation on depth datasets (a self-annotated dataset and MSRAction3D) demonstrated that the proposed system achieves better recognition rates and is more robust than state-of-the-art methods. The system is readily applicable to behavior monitoring, humanoid-robot systems, and e-medical therapy systems.
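The abstract mentions feature extraction based on joint information from a depth camera. A common way to build such features, sketched below with NumPy on a toy skeleton (the paper's exact feature set is not given in the abstract), is to flatten the pairwise distances between 3D joint positions:

```python
import numpy as np

def joint_distance_features(joints):
    """Flatten pairwise Euclidean distances between body joints.

    `joints` is an (N, 3) array of 3D joint positions from a depth
    sensor; the feature vector has N*(N-1)/2 entries."""
    n = len(joints)
    feats = []
    for i in range(n):
        for j in range(i + 1, n):
            feats.append(np.linalg.norm(joints[i] - joints[j]))
    return np.array(feats)

# Toy skeleton with 4 joints -> 6 pairwise distances
skeleton = np.array([[0.0, 0.0, 2.0],
                     [0.0, 0.5, 2.0],
                     [0.3, 0.5, 2.0],
                     [-0.3, 0.5, 2.0]])
features = joint_distance_features(skeleton)
```

Pairwise-distance features are invariant to the person's position relative to the camera, which suits a routine-logging system that observes activities anywhere in the room.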

LoS/NLoS Identification-based Human Activity Recognition System Using Channel State Information (채널 상태 정보를 활용한 LoS/NLoS 식별 기반 인간 행동 인식 시스템)

  • Hyeok-Don Kwon;Jung-Hyok Kwon;Sol-Bee Lee;Eui-Jik Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.3
    • /
    • pp.57-64
    • /
    • 2024
  • In this paper, we propose a Line-of-Sight (LoS)/Non-Line-of-Sight (NLoS) identification-based Human Activity Recognition (HAR) system using Channel State Information (CSI) to improve the accuracy of HAR, which varies with the reception environment. To account for the reception environment, the proposed system operates in three phases: preprocessing, classification, and activity recognition. In the preprocessing phase, amplitude is extracted from raw CSI data and noise in the extracted amplitude is removed. In the classification phase, the reception environment is categorized as LoS or NLoS, and the HAR model is selected according to the result. Finally, in the activity recognition phase, human actions are classified into sitting, walking, standing, and absent using the selected HAR model. To demonstrate the superiority of the proposed system, an experimental implementation was performed and its accuracy was compared with that of an existing HAR system; the proposed system achieved 16.25% higher accuracy.
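The preprocessing phase described above extracts amplitude from complex CSI samples and removes noise. A minimal sketch of that step, assuming a simple moving-average filter as the (unspecified) denoiser:

```python
import numpy as np

def preprocess_csi(csi_raw, window=5):
    """Extract amplitude from complex CSI samples and smooth it with a
    moving-average filter (a stand-in for the paper's unspecified
    denoising step)."""
    amplitude = np.abs(csi_raw)              # |h| per CSI sample
    kernel = np.ones(window) / window
    # mode="same" keeps the series length; edges are partially averaged
    return np.convolve(amplitude, kernel, mode="same")

# Toy CSI stream: constant channel plus complex noise
rng = np.random.default_rng(0)
csi = 1.0 + 0.1 * (rng.standard_normal(100) + 1j * rng.standard_normal(100))
smoothed = preprocess_csi(csi)
```

In practice the denoised amplitude series would then be fed to the LoS/NLoS classifier before activity recognition.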

Development of a Machine-Learning based Human Activity Recognition System including Eastern-Asian Specific Activities

  • Jeong, Seungmin;Choi, Cheolwoo;Oh, Dongik
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.127-135
    • /
    • 2020
  • The purpose of this study is to develop a human activity recognition (HAR) system that distinguishes 13 activities: five activities commonly addressed in conventional HAR research and eight activities specific to Eastern-Asian culture. The eight special activities are floor-sitting/standing, chair-sitting/standing, floor-lying/up, and bed-lying/up. We used a wrist-worn 3-axis accelerometer for data collection and designed a machine learning model for activity classification. Data clustering through preprocessing and feature extraction/reduction is performed, and six machine learning algorithms are then compared for recognition accuracy. As a result, we achieved an average accuracy of 99.7% over the 13 activities, far better than the average accuracy of current smartwatch-based HAR studies (89.4%). The system's strength is further supported by a 98.7% accuracy on the publicly available 'pamap2' dataset of 12 activities, for which the best previously reported accuracy is 96.6%.
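The pipeline above starts from feature extraction on windows of wrist accelerometer data. The abstract does not list the features used, but a typical minimal set for this kind of system (sketched here with NumPy) is per-axis statistics plus the signal magnitude area:

```python
import numpy as np

def window_features(acc):
    """Per-axis mean, standard deviation, and signal magnitude area for
    one window of 3-axis wrist accelerometer data (shape: samples x 3).
    These are generic HAR features, not the paper's exact feature set."""
    means = acc.mean(axis=0)
    stds = acc.std(axis=0)
    sma = np.abs(acc).sum() / len(acc)   # signal magnitude area
    return np.concatenate([means, stds, [sma]])

window = np.tile([0.0, 0.0, 9.8], (100, 1))   # device at rest, z = gravity
feats = window_features(window)
```

Feature vectors like this would then go through the reduction step and into the six classifiers compared in the study.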

Activity recognition of stroke-affected people using wearable sensor

  • Anusha David;Rajavel Ramadoss;Amutha Ramachandran;Shoba Sivapatham
    • ETRI Journal
    • /
    • v.45 no.6
    • /
    • pp.1079-1089
    • /
    • 2023
  • Stroke is one of the leading causes of long-term disability worldwide, placing huge burdens on individuals and society. Further, automatic human activity recognition is a challenging task that is vital to the future of healthcare and physical therapy. Using a baseline long short-term memory recurrent neural network, this study provides a novel dataset of stretching, upward stretching, flinging motions, hand-to-mouth movements, swiping gestures, and pouring motions for improved model training and testing of stroke-affected patients. A MATLAB application is used to output textual and audible prediction results. A wearable sensor with a triaxial accelerometer is used to collect preprocessed real-time data. The model is trained with features extracted from the actual patient to recognize new actions, and the recognition accuracy provided by multiple datasets is compared based on the same baseline model. When training and testing using the new dataset, the baseline model shows recognition accuracy that is 11% higher than the Activity Daily Living dataset, 22% higher than the Activity Recognition Single Chest-Mounted Accelerometer dataset, and 10% higher than another real-world dataset.
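The baseline model named above is a standard long short-term memory recurrent network. As a sketch of what that baseline computes per time step (illustrative NumPy weights, not the paper's trained parameters), one LSTM cell update over a short triaxial-accelerometer sequence looks like:

```python
import numpy as np

def lstm_cell(x, h, c, W, U, b):
    """One step of a standard LSTM cell: input, forget, cell, and
    output gates computed from stacked pre-activations."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    z = W @ x + U @ h + b                # stacked gate pre-activations
    i, f, g, o = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

hidden, inp = 4, 3                       # 3 accelerometer axes as input
rng = np.random.default_rng(1)
W = rng.standard_normal((4 * hidden, inp)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
h = c = np.zeros(hidden)
for x in rng.standard_normal((5, inp)):  # run a 5-step toy sequence
    h, c = lstm_cell(x, h, c, W, U, b)
```

In the paper's setting, the final hidden state (or the state sequence) would feed a classification layer over the six motion classes.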

Spatial-temporal texture features for 3D human activity recognition using laser-based RGB-D videos

  • Ming, Yue;Wang, Guangchao;Hong, Xiaopeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.3
    • /
    • pp.1595-1613
    • /
    • 2017
  • The IR camera and laser-based IR projector provide an effective solution for real-time capture of moving targets in RGB-D videos. Unlike traditional RGB videos, the captured depth videos are not affected by illumination variation. In this paper, we propose a novel feature extraction framework to describe human activities based on the above optical video capturing method, namely spatial-temporal texture features for 3D human activity recognition. The spatial-temporal texture feature with depth information is insensitive to illumination and occlusions, and efficient for fine-motion description. The proposed algorithm begins with video acquisition based on laser projection and video preprocessing with visual background extraction to obtain spatial-temporal key images. The texture features encoded from these key images are then used to generate discriminative features for human activity information. Experimental results on different databases and practical scenarios demonstrate the effectiveness of our proposed algorithm on large-scale data sets.
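The pipeline above collapses a depth clip into spatial-temporal key images and then encodes texture from them. A minimal reading of those two steps (the paper's exact operators are not given in the abstract) can be sketched as accumulated frame differences followed by a gradient-texture histogram:

```python
import numpy as np

def spatial_temporal_key_image(frames):
    """Collapse a depth-video clip (T x H x W) into one key image by
    accumulating absolute inter-frame differences."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.sum(axis=0)

def texture_histogram(key_image, bins=8):
    """Encode the key image as a coarse texture feature: a normalized
    histogram of local gradient magnitudes (an illustrative texture
    descriptor, not the paper's encoder)."""
    gy, gx = np.gradient(key_image)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins)
    return hist / max(hist.sum(), 1)

clip = np.zeros((4, 16, 16))
clip[1:, 4:8, 4:8] = 1.0          # a small blob appears after frame 0
feat = texture_histogram(spatial_temporal_key_image(clip))
```

Because the input is depth rather than RGB, the resulting descriptor inherits the illumination insensitivity the abstract emphasizes.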

Active Contours Level Set Based Still Human Body Segmentation from Depth Images For Video-based Activity Recognition

  • Siddiqi, Muhammad Hameed;Khan, Adil Mehmood;Lee, Seok-Won
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.11
    • /
    • pp.2839-2852
    • /
    • 2013
  • Context-awareness is an essential part of ubiquitous computing, and over the past decade video-based activity recognition (VAR) has emerged as an important component for identifying a user's context for automatic service delivery in context-aware applications. The accuracy of VAR depends significantly on the performance of the employed human body segmentation algorithm. Previous human body segmentation algorithms often involve modeling of the human body, which normally requires a large amount of training data and cannot competently handle changes over time. Recently, active contours have emerged as a successful segmentation technique for still images. In this paper, an active contour model integrating the Chan-Vese (CV) energy and Bhattacharya distance functions is adapted for automatic human body segmentation from depth cameras for VAR. The proposed technique not only outperforms existing segmentation methods in normal scenarios but is also more robust to noise. Moreover, it is unsupervised, i.e., no prior human body model is needed. Compared against the conventional CV Active Contour (AC) model on depth-camera data, the proposed technique achieved much better performance.
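The core of the Chan-Vese energy named above is a piecewise-constant model: alternately estimate the mean intensity inside and outside the contour and reassign pixels to the nearer mean. The sketch below shows only that core on a toy depth image; the curvature regularization and the paper's Bhattacharya term are omitted:

```python
import numpy as np

def chan_vese_segment(image, iters=20):
    """Minimal piecewise-constant Chan-Vese style segmentation:
    alternate between updating the region means (c1, c2) and
    reassigning pixels to the nearer mean."""
    mask = image > image.mean()          # initial contour: global mean
    for _ in range(iters):
        c1 = image[mask].mean() if mask.any() else 0.0
        c2 = image[~mask].mean() if (~mask).any() else 0.0
        new_mask = (image - c1) ** 2 < (image - c2) ** 2
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    # label the closer (smaller-depth) region, i.e. the person
    return mask if c1 < c2 else ~mask

depth = np.full((32, 32), 3.0)           # background at 3 m
depth[8:24, 10:22] = 1.2                 # person closer to the camera
body = chan_vese_segment(depth)
```

Because the update uses only the image itself, the method stays unsupervised, matching the abstract's claim that no prior human body model is needed.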

Face and Hand Activity Detection Based on Haar Wavelet and Background Updating Algorithm

  • Shang, Yiting;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.8
    • /
    • pp.992-999
    • /
    • 2011
  • This paper proposes a human body posture recognition method based on Haar-like features and hand activity detection. Its distinguishing feature is the combination of face detection and motion detection. First, the method uses Haar-like feature face detection to obtain the location of the human face. Haar-like features have the advantage of speed: with little computation they can exclude a large amount of interference, discriminate the human face accurately, and return the face position. The method then uses frame subtraction to obtain the position of human body motion, which performs well for motion detection. Finally, the method recognizes human body motion by relating the face position to the contour of the detected body motion. Tests show that the recognition rate of this algorithm exceeds 92%, and that it produces results quickly while maintaining their correctness.
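The second stage above, frame-subtraction motion detection, can be sketched in a few lines of NumPy; the Haar-cascade face detector from the first stage (available in OpenCV as `CascadeClassifier`) is not reproduced here:

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Frame subtraction: mark pixels whose grayscale difference
    between consecutive frames exceeds a threshold as moving."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh

def motion_centroid(mask):
    """Centroid of the moving region, which the paper relates to the
    detected face position to classify the posture."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

prev = np.zeros((48, 48), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 30:40] = 200        # a "hand" appears in the upper right
center = motion_centroid(motion_mask(prev, curr))
```

The posture decision then reduces to geometry: comparing this motion centroid against the face rectangle returned by the Haar detector.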

Activity Data Modeling and Visualization Method for Human Life Activity Recognition (인간의 일상동작 인식을 위한 동작 데이터 모델링과 가시화 기법)

  • Choi, Jung-In;Yong, Hwan-Seung
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.8
    • /
    • pp.1059-1066
    • /
    • 2012
  • With the development of smartphones, which contain diverse sensors that can describe a user's state, studies on activity recognition and life-pattern recognition using smartphone sensors have increased rapidly. This research suggests a model of activity data for classifying the data extracted in existing activity recognition studies. Activity data are divided into two parts: physical activities and logical activities. In this paper, the activity data modeling is analyzed theoretically. We classify the basic activities (walking, standing, sitting, lying) as physical activities, and activities involving an object, target, or place as logical activities. We then suggest a method of visualizing the modeled data for users. Our approach contributes to generalizing human life by modeling activity data and to visualizing users' activity data for existing activity recognition studies.
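The physical/logical split described above is essentially a two-level data model. One way to express it (field names here are illustrative, not the paper's schema) is a pair of Python dataclasses:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhysicalActivity:
    """Basic posture recognized from sensors: walking, standing,
    sitting, or lying, following the paper's two-part model."""
    posture: str
    timestamp: float

@dataclass
class LogicalActivity:
    """A physical activity enriched with object, target, and place
    context."""
    physical: PhysicalActivity
    obj: Optional[str] = None     # e.g. "cup"
    target: Optional[str] = None  # e.g. "drink"
    place: Optional[str] = None   # e.g. "kitchen"

act = LogicalActivity(PhysicalActivity("sitting", 0.0),
                      obj="cup", target="drink", place="kitchen")
```

A visualization layer can then render the physical level as a timeline and annotate it with the logical context fields.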

Human Activity Recognition using an Image Sensor and a 3-axis Accelerometer Sensor (이미지 센서와 3축 가속도 센서를 이용한 인간 행동 인식)

  • Nam, Yun-Young;Choi, Yoo-Joo;Cho, We-Duke
    • Journal of Internet Computing and Services
    • /
    • v.11 no.1
    • /
    • pp.129-141
    • /
    • 2010
  • In this paper, we present a wearable intelligent device based on multiple sensors for monitoring human activity. To recognize multiple activities, we developed activity recognition algorithms utilizing an image sensor and a 3-axis accelerometer. We proposed a grid-based optical flow method and used an SVM classifier to analyze the data acquired from the sensors. From the image sensor, we used the direction and magnitude of the extracted motion vectors; from the 3-axis accelerometer, we computed the correlation between axes and the magnitude of the FFT. In the experimental results, the accuracy of activity recognition using only the image sensor, only the 3-axis accelerometer, and the proposed multi-sensor method was 55.57%, 89.97%, and 89.97%, respectively.
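The accelerometer features named above, inter-axis correlation and FFT magnitude, can be sketched directly with NumPy (toy sinusoidal data in place of real sensor windows):

```python
import numpy as np

def accel_features(acc):
    """Features from a 3-axis accelerometer window (samples x 3):
    pairwise correlation between axes plus per-axis FFT magnitudes."""
    corr = np.corrcoef(acc.T)                    # 3x3 correlation matrix
    pair = [corr[0, 1], corr[0, 2], corr[1, 2]]  # unique axis pairs
    fft_mag = np.abs(np.fft.rfft(acc, axis=0))   # per-axis spectrum
    return np.array(pair), fft_mag

t = np.linspace(0, 1, 64, endpoint=False)
acc = np.stack([np.sin(2 * np.pi * 2 * t),       # 2 Hz motion on x
                np.sin(2 * np.pi * 2 * t),       # identical on y
                np.cos(2 * np.pi * 2 * t)], axis=1)
pairs, spectrum = accel_features(acc)
```

In the paper's pipeline these features, together with the grid-based optical flow vectors, form the input to the SVM classifier.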

Bio-signal Data Augumentation Technique for CNN based Human Activity Recognition (CNN 기반 인간 동작 인식을 위한 생체신호 데이터의 증강 기법)

  • Gerelbat BatGerel;Chun-Ki Kwon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.2
    • /
    • pp.90-96
    • /
    • 2023
  • Securing large amounts of training data is important for deep neural networks, including convolutional neural networks, both to avoid overfitting and to achieve good performance. In reality, however, labeled training data are very limited. To overcome this, several augmentation methods have been proposed in the literature that generate additional training data by transforming or manipulating already acquired data. Unlike for images and text, however, augmentation methods that generate additional bio-signal training data for convolutional-neural-network-based human activity recognition are hard to find in the literature. This study therefore proposes a simple but effective augmentation method for such bio-signal training data. The usefulness of the proposed method is validated by showing that a convolutional neural network trained with the augmented bio-signals recognizes human activity with high accuracy.
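The abstract does not specify the paper's transformation, but two standard time-series augmentations from the HAR literature, jittering and magnitude scaling, illustrate the idea of multiplying a small labeled set of bio-signal windows:

```python
import numpy as np

def augment(signal, rng, sigma=0.05, scale_range=(0.9, 1.1)):
    """Generate one augmented copy of a bio-signal window by adding
    Gaussian jitter and applying a random magnitude scale (common
    stand-ins, not the paper's own method)."""
    jitter = rng.normal(0.0, sigma, size=signal.shape)
    scale = rng.uniform(*scale_range)
    return scale * (signal + jitter)

rng = np.random.default_rng(42)
window = np.sin(np.linspace(0, 2 * np.pi, 128))        # toy EMG-like window
augmented = [augment(window, rng) for _ in range(10)]  # 10 new samples
```

Each augmented copy keeps the same activity label as its source window, so a convolutional network sees many plausible variants of each scarce labeled example.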