• Title/Summary/Keyword: sensor recognition

Search Results: 1,109

Behavior recognition system based fog cloud computing

  • Lee, Seok-Woo; Lee, Jong-Yong; Jung, Kye-Dong
    • International Journal of Advanced Smart Convergence, v.6 no.3, pp.29-37, 2017
  • Current behavior recognition systems do not reconcile the differing data formats of sensor data measured by a user's sensor modules or devices. To process large volumes of heterogeneously formatted sensor data, it is therefore necessary to support data processing, sharing, and collaboration services between users and the behavior recognition system, as well as real-time interaction between them. To address this, we propose a fog-cloud-based behavior recognition system for processing human body sensor data. The system standardizes data formats in a DBaaS (Database as a Service) cloud, using the fog cloud to resolve the heterogeneity of sensor data measured by the user's sensor modules or devices. In addition, placing the fog cloud between users and the cloud increases the proximity between users and servers, allowing real-time interaction. On this basis, we propose a behavior recognition system that recognizes the user's behavior and provides services to observers in a collaborative environment. The proposed system resolves server overload caused by large volumes of sensor data and the lack of real-time interaction caused by the distance between users and servers, and it demonstrates a process for delivering behavior recognition services that are consistent and capable of real-time interaction.
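
A minimal sketch of the kind of format normalization such a fog layer might perform before forwarding data to the cloud database is shown below; the field names and payload shapes are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch only: normalize heterogeneous sensor payloads at a fog node
# before forwarding them to a cloud database. All field names are hypothetical.
from datetime import datetime, timezone

STANDARD_FIELDS = ("user_id", "sensor_type", "timestamp", "values")

def _to_iso(ts):
    """Accept either epoch seconds or an ISO string from different devices."""
    if isinstance(ts, (int, float)):
        return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return str(ts)

def normalize(payload: dict) -> dict:
    """Map a device-specific payload onto a common record format."""
    record = {
        "user_id": payload.get("uid") or payload.get("user"),
        "sensor_type": payload.get("type", "unknown"),
        "timestamp": _to_iso(payload.get("ts") or payload.get("time")),
        "values": [float(v) for v in payload.get("data", [])],
    }
    missing = [f for f in STANDARD_FIELDS if record[f] in (None, [])]
    if missing:
        raise ValueError(f"payload missing fields: {missing}")
    return record

# Example: two devices reporting the same reading in different shapes.
print(normalize({"uid": "u1", "type": "accel", "ts": 1700000000, "data": [0.1, 9.8, 0.2]}))
print(normalize({"user": "u1", "type": "accel", "time": "2023-11-14T22:13:20+00:00", "data": ["0.1", "9.8", "0.2"]}))
```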

Hierarchical Deep Belief Network for Activity Recognition Using Smartphone Sensor (스마트폰 센서를 이용하여 행동을 인식하기 위한 계층적인 심층 신뢰 신경망)

  • Lee, Hyunjin
    • Journal of Korea Multimedia Society, v.20 no.8, pp.1421-1429, 2017
  • Human activity recognition has been studied using various sensors and algorithms, and its methods can be divided into sensor-based and vision-based approaches. In this paper, we propose a sensor-based activity recognition system that uses the acceleration and gyroscope sensors of a smartphone. To improve recognition accuracy, we use a Deep Belief Network (DBN), one of the most popular deep learning methods. A standard DBN takes the entire input set as a single common input; however, because each type of human activity is characterized by a different time window, the RBMs that compose the DBN are configured hierarchically by combining inputs from different time windows. When applied to real data, the proposed human activity recognition system showed stable precision.
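
As a rough illustration of the hierarchical idea described above, the sketch below trains first-level RBMs on windows of different lengths and feeds their combined hidden representations to a second-level classifier; the window sizes, layer sizes, and use of scikit-learn's BernoulliRBM are assumptions, not the paper's implementation.

```python
# Illustrative sketch: first-level RBMs per time-window length, second-level classifier
# over their concatenated hidden representations. Not the paper's architecture.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def windowed_features(signal, window):
    """Stack consecutive windows of a 1-D signal as feature rows."""
    n = len(signal) // window
    return signal[: n * window].reshape(n, window)

# Hypothetical accelerometer-magnitude stream and per-segment activity labels.
stream = rng.random(6000)
segments = windowed_features(stream, 100)        # one example per 100-sample segment
short = segments[:, -50:]                        # short time window (last 50 samples)
long_ = segments                                 # long time window (full segment)
labels = rng.integers(0, 3, size=len(segments))  # 3 hypothetical activity classes

# First level: one RBM per time-window length.
rbm_short = BernoulliRBM(n_components=16, random_state=0).fit(short)
rbm_long = BernoulliRBM(n_components=16, random_state=0).fit(long_)

# Second level: combine hidden representations from both time windows.
hidden = np.hstack([rbm_short.transform(short), rbm_long.transform(long_)])
clf = LogisticRegression(max_iter=1000).fit(hidden, labels)
print("training accuracy on random data:", clf.score(hidden, labels))
```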

Hand Gesture Recognition Suitable for Wearable Devices using Flexible Epidermal Tactile Sensor Array

  • Byun, Sung-Woo; Lee, Seok-Pil
    • Journal of Electrical Engineering and Technology, v.13 no.4, pp.1732-1739, 2018
  • With the explosion of digital devices, interaction technologies between humans and devices are needed more than ever. Hand gesture recognition is particularly advantageous because it is easy to use. Approaches fall into two groups: contact-sensor-based and non-contact-sensor-based. Compared with non-contact gesture recognition, contact gesture recognition can classify gestures that leave the sensor's field of view, and because the sensor is in direct contact with the user, relatively accurate information can be acquired. Electromyography (EMG) and force-sensitive resistors (FSRs) are typical methods for contact gesture recognition based on muscle activity; these sensors, however, are generally too sensitive to environmental disturbances such as electrical noise and electromagnetic signals. In this paper, we propose a novel contact gesture recognition method based on a Flexible Epidermal Tactile Sensor Array (FETSA), which measures electrical signals according to movements of the wrist. To recognize gestures using FETSA, we extract feature sets and classify the gestures with a support vector machine. The performance of the proposed gesture recognition method is very promising compared with two previous non-contact and contact gesture recognition studies.
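
The sketch below illustrates the general feature-extraction-plus-SVM pipeline the abstract describes; the specific features (per-channel mean absolute value and RMS), channel count, and gesture count are assumptions, not necessarily those used with FETSA.

```python
# Illustrative sketch only: window features from a multi-channel tactile array,
# classified by an SVM. Feature choices and dataset shapes are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def extract_features(window):
    """window: (samples, channels) tactile signals -> 1-D feature vector."""
    mav = np.mean(np.abs(window), axis=0)         # mean absolute value per channel
    rms = np.sqrt(np.mean(window ** 2, axis=0))   # RMS per channel
    return np.concatenate([mav, rms])

# Hypothetical dataset: 200 gesture windows, 128 samples x 8 channels, 4 gestures.
windows = rng.normal(size=(200, 128, 8))
labels = rng.integers(0, 4, size=200)
X = np.array([extract_features(w) for w in windows])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("test accuracy on random data:", clf.score(X_te, y_te))
```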

Motion Recognition of Smartphone using Sensor Data (센서 정보를 활용한 스마트폰 모션 인식)

  • Lee, Yong Cheol; Lee, Chil Woo
    • Journal of Korea Multimedia Society, v.17 no.12, pp.1437-1445, 2014
  • A smartphone has very limited input methods despite its many functions. In this respect, sensor-based motion recognition is one alternative for building an intuitive and versatile user interface. In this paper, we recognize the user's motion using a smartphone's acceleration, magnetic field, and gyro sensors. Because accurate data is hard to obtain from a single sensor, we reduce sensing error with a gradient descent algorithm. To raise the recognition rate and capture small motions, we convert rotational displacement to a spherical coordinate system and apply vector quantization. After the vector quantization step, motion is recognized using an HMM (Hidden Markov Model).
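
The sketch below illustrates the spherical-coordinate conversion and vector quantization steps described above, producing the discrete observation symbols an HMM decoder would consume; the codebook size and use of k-means are assumptions.

```python
# Illustrative sketch only: convert 3-D rotational displacements to spherical
# coordinates, then vector-quantize them into discrete symbols for an HMM (not shown).
import numpy as np
from sklearn.cluster import KMeans

def to_spherical(v):
    """(N, 3) Cartesian displacements -> (N, 3) of (r, theta, phi)."""
    x, y, z = v[:, 0], v[:, 1], v[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))  # polar angle
    phi = np.arctan2(y, x)                                          # azimuth
    return np.stack([r, theta, phi], axis=1)

rng = np.random.default_rng(2)
displacements = rng.normal(size=(500, 3))   # hypothetical per-frame rotation deltas
spherical = to_spherical(displacements)

# Vector quantization: cluster (theta, phi) directions into an 8-symbol codebook.
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(spherical[:, 1:])
symbols = codebook.predict(spherical[:, 1:])
print("first 20 observation symbols for the HMM:", symbols[:20])
```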

Hybrid Model-Based Motion Recognition for Smartphone Users

  • Shin, Beomju; Kim, Chulki; Kim, Jae Hun; Lee, Seok; Kee, Changdon; Lee, Taikjin
    • ETRI Journal, v.36 no.6, pp.1016-1022, 2014
  • This paper presents a hybrid model for user motion recognition. Using a single classifier in a motion recognition model does not guarantee a high recognition rate; to enhance it, we propose a hybrid model consisting of decision trees and artificial neural networks. We define six user motions commonly performed in an indoor environment. To demonstrate the performance of the proposed model, we conducted a real field test with ten subjects (five males and five females). Experimental results show that the proposed model provides a more accurate recognition rate than the individual single classifiers.
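
One simple way to combine a decision tree and an artificial neural network is a soft-voting ensemble, sketched below; this is an illustrative stand-in, not necessarily the paper's hybrid scheme, and the feature dimensions are assumed.

```python
# Illustrative sketch only: a soft-voting hybrid of a decision tree and an MLP
# for 6 hypothetical indoor motions. The paper's actual combination may differ.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 12))          # hypothetical per-window sensor statistics
y = rng.integers(0, 6, size=600)        # 6 motion classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

hybrid = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=8, random_state=0)),
        ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    ],
    voting="soft",  # average the class probabilities of both models
)
hybrid.fit(X_tr, y_tr)
print("test accuracy on random data:", hybrid.score(X_te, y_te))
```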

Human Activity Recognition using an Image Sensor and a 3-axis Accelerometer Sensor (이미지 센서와 3축 가속도 센서를 이용한 인간 행동 인식)

  • Nam, Yun-Young; Choi, Yoo-Joo; Cho, We-Duke
    • Journal of Internet Computing and Services, v.11 no.1, pp.129-141, 2010
  • In this paper, we present a wearable intelligent multi-sensor device for monitoring human activity. To recognize multiple activities, we developed activity recognition algorithms that utilize an image sensor and a 3-axis accelerometer. We propose a grid-based optical flow method and use an SVM classifier to analyze the multi-sensor data. From the image sensor we use the direction and magnitude of the extracted motion vectors; from the 3-axis accelerometer we compute the correlation between axes and the magnitude of the FFT. Experimental results show that the accuracy of activity recognition was 55.57% with the image sensor alone, 89.97% with the 3-axis accelerometer alone, and 89.97% with the proposed multi-sensor method.
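
The sketch below computes the accelerometer features named in the abstract (pairwise correlation between axes and FFT magnitudes); the window length and number of retained frequency bins are assumptions.

```python
# Illustrative sketch only: axis-correlation and FFT-magnitude features from a
# 3-axis accelerometer window. Window length and bin count are hypothetical.
import numpy as np

def accel_features(window):
    """window: (samples, 3) accelerometer data -> 1-D feature vector."""
    # Pairwise correlation between the x, y, z axes.
    corr = np.corrcoef(window.T)
    corr_feats = corr[np.triu_indices(3, k=1)]           # xy, xz, yz correlations
    # Magnitude of the FFT of each axis (keep a few low-frequency bins).
    fft_mag = np.abs(np.fft.rfft(window, axis=0))[:8].ravel()
    return np.concatenate([corr_feats, fft_mag])

rng = np.random.default_rng(4)
window = rng.normal(size=(128, 3))    # one hypothetical 128-sample window
print("feature vector length:", accel_features(window).shape[0])
```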

Sensor Fusion System for Improving the Recognition Performance of 3D Object (3차원 물체의 인식 성능 향상을 위한 감각 융합 시스템)

  • Kim, Ji-Kyoung; Oh, Yeong-Jae; Chong, Kab-Sung; Wee, Jae-Woo; Lee, Chong-Ho
    • Proceedings of the KIEE Conference, 2004.11c, pp.107-109, 2004
  • In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information. The proposed system focuses on improving 3D object recognition performance. Unlike conventional object recognition systems that use an image sensor alone, the proposed method uses tactile sensors in addition to the visual sensor, and a neural network is used to fuse the two kinds of information. Tactile signals are obtained from the reaction force at the pressure sensors in the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiment evaluates the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its high learning performance but also its reliability, since the tactile information allows various objects to be recognized even when the visual information is defective. The experimental results show that the proposed system improves the recognition rate and reduces learning time, verifying its effectiveness as a recognition scheme for 3D objects.
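
A minimal sketch of early fusion, where visual and tactile feature vectors are concatenated before a neural-network classifier, is shown below; the feature dimensions, object count, and classifier are assumptions rather than the authors' architecture.

```python
# Illustrative sketch only: concatenate visual and tactile feature vectors and
# classify objects with a small neural network. All shapes are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n_objects, n_samples = 5, 300
visual = rng.normal(size=(n_samples, 32))    # e.g., features from 2D projection images
tactile = rng.normal(size=(n_samples, 4))    # e.g., fingertip reaction forces
labels = rng.integers(0, n_objects, size=n_samples)

fused = np.hstack([visual, tactile])         # early fusion of the two modalities
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(fused, labels)
print("training accuracy on random data:", net.score(fused, labels))
```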


Neural Network Approach to Sensor Fusion System for Improving the Recognition Performance of 3D Objects (3차원 물체의 인식 성능 향상을 위한 감각 융합 신경망 시스템)

  • Dong, Sung Soo; Lee, Chong Ho; Kim, Ji Kyoung
    • The Transactions of the Korean Institute of Electrical Engineers D, v.54 no.3, pp.156-165, 2005
  • Human beings recognize the physical world by integrating a great variety of sensory inputs, the information acquired through their own actions, and their knowledge of the world, using a hierarchically parallel-distributed mechanism. In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information. The proposed system focuses on improving 3D object recognition performance. Unlike conventional object recognition systems that use an image sensor alone, the proposed method uses tactile sensors in addition to the visual sensor, and a neural network is used to fuse the two sensory signals. Tactile signals are obtained from the reaction force at the pressure sensors in the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiment evaluates the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its high learning performance but also its reliability, since the tactile information allows various objects to be recognized even when the visual sensory signals are defective. The experimental results show that the proposed system improves the recognition rate and reduces learning time, verifying its effectiveness as a recognition scheme for 3D objects.

A Study of an MEMS-based finger wearable computer input devices (MEMS 기반 손가락 착용형 컴퓨터 입력장치에 관한 연구)

  • Kim, Chang-su; Jung, Se-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2016.05a, pp.791-793, 2016
  • With the development of various sensor technologies, general users increasingly encounter motion recognition devices such as smartphones and console game machines (e.g., the Nintendo Wii), and demand for motion-recognition-based input devices tends to grow accordingly. Existing motion-recognition mice take the form of a modified mouse: the left button, right button, and wheel are mounted on the outside, while an internal acceleration sensor (or gyro sensor) moves the cursor. When such a device is made compact, the buttons become difficult to operate, so motion recognition ends up being applied only to cursor pointing, which limits its use. In this paper, therefore, we study a computer input device that uses a MEMS-based motion recognition sensor to recognize the movement of two points on the human body (the thumb and forefinger) and generate motion data, compares this data against a predetermined matching table (cursor movement and mouse button events) to generate a control signal, and wirelessly transmits the generated control signal to the computer.
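
The matching-table step described above can be sketched as a simple lookup from recognized finger motions to cursor and button events; the motion labels and event encodings below are hypothetical.

```python
# Illustrative sketch only: a matching table that maps recognized thumb/forefinger
# motions to cursor movement and mouse-button events. Labels are hypothetical.
MATCHING_TABLE = {
    ("forefinger", "tilt_left"): ("move_cursor", (-5, 0)),
    ("forefinger", "tilt_right"): ("move_cursor", (5, 0)),
    ("forefinger", "tilt_up"): ("move_cursor", (0, -5)),
    ("forefinger", "tilt_down"): ("move_cursor", (0, 5)),
    ("thumb", "tap"): ("button", "left_click"),
    ("thumb", "double_tap"): ("button", "right_click"),
}

def to_control_signal(finger: str, motion: str):
    """Look up the control signal to transmit wirelessly; None if unmatched."""
    return MATCHING_TABLE.get((finger, motion))

print(to_control_signal("forefinger", "tilt_left"))  # ('move_cursor', (-5, 0))
print(to_control_signal("thumb", "tap"))             # ('button', 'left_click')
```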


A Light-weight ANN-based Hand Motion Recognition Using a Wearable Sensor (웨어러블 센서를 활용한 경량 인공신경망 기반 손동작 인식기술)

  • Lee, Hyung Gyu
    • IEMEK Journal of Embedded Systems and Applications, v.17 no.4, pp.229-237, 2022
  • Motion recognition is very useful for implementing an intuitive HMI (Human-Machine Interface). In particular, the hands are the body parts that can move most precisely while using relatively little energy, so hand motion has been used as an efficient communication interface with other people or machines. In this paper, we design and implement a light-weight ANN (Artificial Neural Network)-based hand motion recognition system using a state-of-the-art flex sensor. The proposed design consists of data collection from a wearable flex sensor, preprocessing filters, and a light-weight NN (Neural Network) classifier. To verify the performance and functionality of the proposed design, we implement it on a low-end embedded device. Our experiments and prototype implementation demonstrate that the accuracy of the proposed hand motion recognition reaches up to 98.7%.
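
The sketch below mirrors the pipeline stages named in the abstract (flex-sensor windows, a preprocessing filter, a small neural-network classifier); the filter type, network size, and gesture count are assumptions, not the paper's design.

```python
# Illustrative sketch only: flex-sensor windows -> smoothing filter -> small NN classifier.
# Filter, network size, and dataset shapes are hypothetical stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

def moving_average(signal, k=5):
    """Simple per-channel smoothing filter applied before classification."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, signal)

rng = np.random.default_rng(6)
# Hypothetical dataset: 300 windows of 64 samples from 5 flex-sensor channels, 6 gestures.
raw = rng.normal(size=(300, 64, 5))
labels = rng.integers(0, 6, size=300)

filtered = np.stack([moving_average(w) for w in raw])
X = filtered.reshape(len(filtered), -1)   # flatten each window to a feature vector

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)  # small network
clf.fit(X, labels)
print("training accuracy on random data:", clf.score(X, labels))
```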