• Title/Summary/Keyword: Human Activity Recognition


Detecting user status from smartphone sensor data

  • Nguyen, Thu-Trang;Nguyen, Thi-Hau;Nguyen, Ha-Nam;Nguyen, Duc-Nhan;Choi, GyooSeok
    • International Journal of Advanced Culture Technology
    • /
    • v.4 no.1
    • /
    • pp.28-30
    • /
    • 2016
  • Owing to the rapid growth in smartphone usage and the advanced technology built into smartphones, human activity recognition based on smartphone sensor data has become an active research area. To reduce noise in the collected data, most previous studies assume that smartphones are fixed at certain positions. This strategy is impractical for real-life applications. To overcome this issue, we investigate a framework that detects a traveller's status as idle or moving regardless of the position and orientation of the smartphone. An application of our work is to estimate a traveller's total energy consumption during a trip. A number of experiments demonstrate the effectiveness of our framework when travellers are not only walking but also using simple vehicles such as motorbikes.
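The position-independence described above can be illustrated with the magnitude of the acceleration vector, which does not depend on how the phone is oriented. The sketch below is not the authors' actual framework; the window size and threshold are assumptions chosen for illustration:

```python
import numpy as np

def magnitude(samples):
    """Orientation-invariant magnitude of 3-axis accelerometer samples (N, 3)."""
    return np.linalg.norm(samples, axis=1)

def detect_status(samples, threshold=0.8):
    """Classify a window as 'moving' or 'idle' from the standard deviation
    of the acceleration magnitude; the magnitude is the same however the
    phone sits in a pocket or bag."""
    return "moving" if magnitude(samples).std() > threshold else "idle"

# Synthetic windows: a phone at rest reads ~1 g with small noise;
# a walking user produces large magnitude swings.
rng = np.random.default_rng(0)
idle_win = rng.normal([0, 0, 9.81], 0.05, size=(128, 3))
walk_win = rng.normal([0, 0, 9.81], 2.5, size=(128, 3))
```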

Activity Recognition based on Accelerometer using Self Organizing Maps and Hidden Markov Model (자기 구성 지도와 은닉 마르코프 모델을 이용한 가속도 센서 기반 행동 인식)

  • Hwang, Keum-Sung;Cho, Sung-Bae
    • Proceedings of the Korean HCI Society Conference
    • /
    • 2008.02a
    • /
    • pp.245-250
    • /
    • 2008
  • Research on motion and activity recognition has been active recently. In particular, as sensors become smaller and cheaper, interest in their applications is growing. Motion recognition methods based on static classification techniques, used in many previous activity recognition studies, can lack the flexibility and applicability of continuous-data classification techniques. This paper introduces a motion recognition technique that combines the Hidden Markov Model (HMM), a probabilistic inference method effective for classifying and recognizing patterns in continuous data, with the Self Organizing Map (SOM), which can learn automatically without prior knowledge, cluster meaningful trajectory patterns, and perform effective quantization. To demonstrate its usefulness, we collect data on various motions with a real accelerometer and analyze and evaluate the classification performance. The experiments report results on digit-drawing motions collected with a real accelerometer, and compare the performance of individual activity recognizers with that of the overall recognizer.
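The SOM-plus-HMM pipeline described above can be sketched as vector quantization of acceleration vectors into discrete symbols, followed by forward-algorithm likelihood scoring per activity model. This is a minimal illustration, not the authors' implementation; the codebook size and training schedule are assumptions:

```python
import numpy as np

def train_som(data, n_nodes=8, epochs=20, lr=0.5, seed=0):
    """Tiny 1-D SOM: learns a codebook that quantizes 3-axis
    acceleration vectors into discrete symbols for the HMM."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_nodes)].astype(float)
    for t in range(epochs):
        radius = max(1.0, n_nodes / 2 * (1 - t / epochs))
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))      # best-matching unit
            d = np.abs(np.arange(n_nodes) - bmu)                # grid distance
            h = np.exp(-(d ** 2) / (2 * radius ** 2))           # neighborhood
            w += lr * (1 - t / epochs) * h[:, None] * (x - w)
    return w

def quantize(data, w):
    """Map each sample to the index of its nearest codebook vector."""
    return np.argmin(np.linalg.norm(w[None] - data[:, None], axis=2), axis=1)

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a symbol sequence under a discrete HMM
    (forward algorithm with per-step scaling)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll
```

At recognition time, one HMM would be trained per activity and the symbol sequence assigned to the model with the highest log-likelihood.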


Event Cognition-based Daily Activity Prediction Using Wearable Sensors (웨어러블 센서를 이용한 사건인지 기반 일상 활동 예측)

  • Lee, Chung-Yeon;Kwak, Dong Hyun;Lee, Beom-Jin;Zhang, Byoung-Tak
    • Journal of KIISE
    • /
    • v.43 no.7
    • /
    • pp.781-785
    • /
    • 2016
  • Learning from human behaviors in the real world is essential for human-aware intelligent systems such as smart assistants and autonomous robots. Most research focuses on correlations between sensory patterns and a label for each activity. However, human activity is a combination of several event contexts and is a narrative story in and of itself. We propose a novel approach to human activity prediction based on event cognition. Egocentric multi-sensor data are collected from an individual's daily life using a wearable device and a smartphone. Event contexts about location, scene, and activities are then recognized, and finally the user's daily activities are predicted by a decision rule based on those event contexts. The proposed method was evaluated on wearable sensor data collected in the real world over two weeks by two people. Experimental results showed improved recognition accuracies for the proposed method compared to results obtained directly from sensory features.
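The final decision-rule step described above can be illustrated with a toy rule table mapping recognized event contexts to a daily activity. The contexts and rules below are hypothetical, not the authors' actual rules:

```python
def predict_activity(location, scene, motion):
    """Map a triple of recognized event contexts to a daily activity.
    The rule table is illustrative only."""
    rules = {
        ("kitchen", "food", "standing"): "cooking",
        ("office", "monitor", "sitting"): "working",
        ("street", "outdoor", "walking"): "commuting",
    }
    # Fall back to "unknown" when no rule matches the observed contexts.
    return rules.get((location, scene, motion), "unknown")
```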

Design of Prototype-Based Emotion Recognizer Using Physiological Signals

  • Park, Byoung-Jun;Jang, Eun-Hye;Chung, Myung-Ae;Kim, Sang-Hyeob
    • ETRI Journal
    • /
    • v.35 no.5
    • /
    • pp.869-879
    • /
    • 2013
  • This study addresses the acquisition of physiological signals associated with human emotions and the recognition of those emotions from the signals. To acquire physiological signals, seven emotions are evoked through stimuli. For the induced emotions, skin temperature, photoplethysmography, electrodermal activity, and an electrocardiogram are recorded and analyzed as physiological signals. The suitability and effectiveness of the stimuli are evaluated by the subjects themselves. To recognize the emotions from these signals, we introduce a methodology for a recognizer using prototype-based learning and particle swarm optimization (PSO). The design involves two main phases: i) PSO selects P% of the patterns to be treated as prototypes of the seven emotions; ii) PSO is instrumental in forming the core set of features. The experiments show that a suitable selection of prototypes and a substantial reduction of the feature space can be accomplished, and the recognizer formed in this manner achieves high recognition accuracy for the seven emotions using physiological signals.
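Phase i) above, selecting a subset of patterns to act as prototypes, can be sketched with a minimal binary PSO whose fitness is nearest-prototype accuracy. This is an illustrative stand-in, not the authors' implementation; the swarm size, coefficients, and fitness function are assumptions:

```python
import numpy as np

def nearest_prototype_acc(X, y, mask):
    """Accuracy of a 1-nearest-prototype classifier that uses only the
    rows where mask == 1 as prototypes (stand-in fitness function)."""
    P, labels = X[mask == 1], y[mask == 1]
    if len(P) == 0:
        return 0.0
    d = np.linalg.norm(X[:, None] - P[None], axis=2)
    return float(np.mean(labels[np.argmin(d, axis=1)] == y))

def binary_pso(X, y, n_particles=10, iters=30, seed=0):
    """Minimal binary PSO: each particle is a 0/1 mask over training
    patterns; velocities are squashed by a sigmoid into flip probabilities."""
    rng = np.random.default_rng(seed)
    n = len(X)
    pos = rng.integers(0, 2, size=(n_particles, n))
    vel = rng.normal(0, 0.1, size=(n_particles, n))
    pbest = pos.copy()
    pfit = np.array([nearest_prototype_acc(X, y, p) for p in pos])
    gbest = pbest[np.argmax(pfit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, n)) < 1 / (1 + np.exp(-vel))).astype(int)
        fit = np.array([nearest_prototype_acc(X, y, p) for p in pos])
        improved = fit > pfit
        pbest[improved], pfit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pfit)].copy()
    return gbest, pfit.max()
```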

A Method for Body Keypoint Localization based on Object Detection using the RGB-D information (RGB-D 정보를 이용한 객체 탐지 기반의 신체 키포인트 검출 방법)

  • Park, Seohee;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.18 no.6
    • /
    • pp.85-92
    • /
    • 2017
  • Recently, in the field of video surveillance, deep-learning-based methods have been applied to detecting a moving person in video and analyzing the detected person's behavior. Human activity recognition, one of the fields of this intelligent image analysis technology, detects an object and then detects its body keypoints in order to recognize its behavior. In this paper, we propose a method for body keypoint localization based on object detection using RGB-D information. First, the moving object is segmented and detected from the background using the color and depth information generated by the two cameras. The input image, generated by rescaling the detected object region using the RGB-D information, is passed to Convolutional Pose Machines (CPM) for single-person pose estimation. CPM is used to generate belief maps for 14 body parts per person and to detect body keypoints from those belief maps. This method provides an accurate region per object for keypoint detection and can be extended from single-person to multi-person body keypoint localization by integrating the individual localizations. In the future, the detected keypoints can be used to build a model for human pose estimation and contribute to the field of human activity recognition.
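The last step described above, detecting body keypoints from belief maps, amounts to taking a per-part argmax over each map. A minimal sketch (the map shape and confidence threshold are assumptions, not CPM's actual output format):

```python
import numpy as np

def keypoints_from_belief_maps(belief, threshold=0.1):
    """Extract one (row, col) keypoint per body part as the argmax of its
    belief map; parts whose peak response falls below the threshold are
    reported as missing (None)."""
    pts = []
    for m in belief:                      # belief: (14, H, W), one map per part
        idx = np.unravel_index(np.argmax(m), m.shape)
        pts.append(idx if m[idx] >= threshold else None)
    return pts
```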

Decoding Brain Patterns for Colored and Grayscale Images using Multivariate Pattern Analysis

  • Zafar, Raheel;Malik, Muhammad Noman;Hayat, Huma;Malik, Aamir Saeed
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.4
    • /
    • pp.1543-1561
    • /
    • 2020
  • Creating a taxonomy of human brain activity is a complicated and challenging procedure. Its multifaceted aspects, including experiment design, stimulus selection, and the presentation of images, as well as feature extraction and selection techniques, contribute to its challenging nature. Although researchers have explored various methods to create a taxonomy of human brain activity, the use of multivariate pattern analysis (MVPA) for image recognition to catalog human brain activities is scarce. Moreover, experiment design is a complex procedure, and the selection of image type, color, and order is also challenging. This research bridges that gap by using MVPA to create a taxonomy of human brain activity for different categories of images, both colored and grayscale. An EEG experiment, with feature extraction, selection, and classification approaches, was conducted to collect data from 25 prequalified graduates of Universiti Teknologi PETRONAS (UTP). These participants were shown both colored and grayscale images while accuracy and reaction time were recorded. The results, obtained using the wavelet transform, t-test, and support vector machine, showed that colored images produce better accuracy and response time. This research indicates that MVPA is a well-suited approach for the analysis of EEG data, since more useful information can be extracted from the brain using colored images. It provides a detailed account of human brain behavior for colored and grayscale images on a specific task, contributes to further improving the decoding of human brain activity with increased accuracy, and such experiment settings can be applied in other areas, including medicine, the military, business, and lie detection.
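The analysis pipeline described above (t-test feature selection followed by classification) can be sketched as below. A nearest-centroid classifier stands in for the SVM, the data are synthetic, and the feature count is an assumption; this is not the authors' code:

```python
import numpy as np
from scipy.stats import ttest_ind

def select_features(Xa, Xb, k=5):
    """Rank features by |t| between the two stimulus conditions and keep
    the top k (a stand-in for the paper's t-test feature selection)."""
    t, _ = ttest_ind(Xa, Xb, axis=0)
    return np.argsort(-np.abs(t))[:k]

def nearest_centroid_predict(Xtr_a, Xtr_b, Xte):
    """Toy classifier standing in for the SVM: label 0 for condition a
    ('colored'), 1 for condition b ('grayscale'), by centroid distance."""
    ca, cb = Xtr_a.mean(axis=0), Xtr_b.mean(axis=0)
    da = np.linalg.norm(Xte - ca, axis=1)
    db = np.linalg.norm(Xte - cb, axis=1)
    return (db < da).astype(int)
```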

Roles of Transcription Factor Binding Sites in the D-raf Promoter Region

  • Kwon, Eun-Jeong;Kim, Hyeong-In;Kim, In-Ju
    • Animal cells and systems
    • /
    • v.2 no.1
    • /
    • pp.117-122
    • /
    • 1998
  • D-raf, a Drosophila homolog of the human c-raf-1, is known as a signal transducer in cell proliferation and differentiation. A previous study found that D-raf gene expression is regulated by the DNA replication-related element (DRE)/DRE-binding factor (DREF) system. In this study, we found sequences homologous to the transcription factor C/EBP, MyoD, STAT, and Myc recognition sites in the D-raf promoter. We generated various base-substitution mutations in these recognition sites and examined their effects on D-raf promoter activity through transient CAT assays in Kc cells with reporter plasmids p5'-878DrafCAT carrying the mutations in these binding sites. Through gel mobility shift assays using nuclear extracts of Kc cells, we detected factors binding to these recognition sites. Our results show that the C/EBP, STAT, and Myc binding sites in the D-raf promoter region play a positive role in transcriptional regulation of the D-raf gene, whereas the MyoD binding site plays a negative role.


A Study of Object Recognition for the Efficient Management of Construction Equipment

  • Hyeok-Jun Ryu;Suk-Won Lee;Ju-Hyung Kim;Jae-Jun Kim
    • International conference on construction engineering and project management
    • /
    • 2013.01a
    • /
    • pp.587-591
    • /
    • 2013
  • Measuring the progress of construction operations for productivity improvement remains a difficult task for most construction companies because of the manual effort required by most activity measurement methods. There are many ways to measure the process, but past measurement methods were inefficient because they required substantial manpower and time. This article therefore focuses on vision-based object recognition and tracking methods for automated construction, which have the advantage of efficiency because human intervention is reduced. The article analyzes the performance of vision-based methods on construction sites and is expected to contribute to the selection of vision-based methods.
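A minimal example of the kind of vision-based motion detection the article discusses is frame differencing, which flags pixels that change between consecutive frames and boxes the moving region. This is a generic illustration, not any of the specific methods the paper analyzes; the threshold is an assumption:

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Simple frame differencing: pixels whose grayscale intensity changed
    by more than `thresh` between consecutive frames are marked moving."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh

def bounding_box(mask):
    """Bounding box (rmin, rmax, cmin, cmax) of the moving region, or None."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()
```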


Wrist joint analysis of Myoelectronic Hand using Accelerometer (가속도계를 이용한 전동의수의 손목관절 시스템 해석)

  • 장대진;김명회;양현석
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2003.05a
    • /
    • pp.876-881
    • /
    • 2003
  • This study focuses on the design and analysis of a myoelectronic hand. We considered low-frequency factors in daily human life and quantified the low frequencies to which the human body responds using 1-axis and 3-axis accelerometers. The dynamics of the myoelectronic hand are important for tasks such as continuous prosthetic control and EMG signal recognition, which have not been successfully mastered by most neural approaches. To control the myoelectronic hand, classifying myoelectric patterns is also important. FEM results are 110 MPa on the thumb, 200 MPa on the index finger, 220 MPa on the middle finger, 260 MPa on the ring finger, and 270 MPa on the little finger. Accelerometer results are 1.4-0.4 m/s² (5-20 Hz) for the feeding activity and 0.4-0 m/s² (0-10 Hz) for the lifting activity. Considering these facts, we suggest a new type of myoelectronic hand.
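The frequency bands reported above (5-20 Hz for feeding, 0-10 Hz for lifting) can be estimated from an acceleration trace with an FFT-based sketch. This is illustrative only; the sampling rate and test signal are assumptions:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) of an acceleration trace via the FFT,
    with the mean removed so the DC component is ignored."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```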


Deep Learning based violent protest detection system

  • Lee, Yeon-su;Kim, Hyun-chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.3
    • /
    • pp.87-93
    • /
    • 2019
  • In this paper, we propose a real-time drone-based violent protest detection system. Our proposed system uses drones to detect scenes of violent protest in real time. An important problem with existing evidence collection is that victims and violent actions must be searched for manually in recorded videos. First, we address the limitations of existing evidence-collecting devices by using a drone to collect evidence live and upload it to AWS (Amazon Web Services)[1]. Second, we built a deep-learning-based violence detection model from the videos using a YOLOv3 feature pyramid network for human activity recognition, in order to detect three types of violent action. The model classifies people possessing a gun, swinging a pipe, and engaging in violent activity with accuracies of 92%, 91%, and 80.5%, respectively. This system is expected to significantly reduce the time and human resources required by existing evidence collection.
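Post-processing a detector's output into the three violent-action classes can be sketched as per-class confidence thresholding. The class names and thresholds below are assumptions for illustration, not the authors' configuration:

```python
# Hypothetical per-class confidence thresholds for flagging a frame.
THRESHOLDS = {"gun": 0.5, "swinging_pipe": 0.5, "violent_activity": 0.6}

def flag_frame(detections):
    """Return the sorted list of violent classes present in one frame's
    detections, given (class_name, confidence) pairs from the detector."""
    return sorted({c for c, conf in detections
                   if c in THRESHOLDS and conf >= THRESHOLDS[c]})
```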