• Title/Summary/Keyword: real time motion tracking


Real-time pupil motion recognition and efficient character selection system using FPGA and OpenCV (FPGA와 OpenCV를 이용한 실시간 눈동자 모션인식과 효율적인 문자 선택 시스템)

  • Lee, Hee Bin;Heo, Seung Won;Lee, Seung Jun;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.393-394 / 2018
  • In this paper, a new system that improves the previously reported "Implementation to human-computer interface system with motion tracking using OpenCV and FPGA" is introduced, and a character selection system for physically impaired patients is proposed. Using OpenCV, the eye region is detected and the pupil position is determined; the results are then sent to the FPGA, where the character is finally selected. To output characters according to the user's intention, the method minimizes the pupil movement required of the patient. With OpenCV, various computer vision algorithms can be applied easily, and with a programmable FPGA, the pupil motion recognition and character selection system can be implemented at low cost.
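
The abstract describes the OpenCV stage only at a high level. As a rough sketch of one common way to implement it (Haar-cascade eye detection followed by dark-blob pupil localization), the snippet below is an illustration under assumed thresholds, not the authors' pipeline:

```python
# Hypothetical sketch: detect the eye region with a Haar cascade, then locate
# the dark pupil inside it by thresholding. The cascade file ships with
# OpenCV; the threshold value (40) is an assumed tuning parameter.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def pupil_position(frame_bgr):
    """Return (x, y) of the pupil centroid in image coordinates, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        return None
    x, y, w, h = eyes[0]                      # take the first detected eye
    roi = gray[y:y + h, x:x + w]
    # The pupil is (roughly) the darkest blob in the eye region.
    _, mask = cv2.threshold(roi, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (x + int(m["m10"] / m["m00"]), y + int(m["m01"] / m["m00"]))
```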


Real-time user motion generation and correction using RGBD sensor (RGBD 센서를 이용한 실시간 사용자 동작 생성 및 보정)

  • Gu, Tae Hong;Kim, Un Mi;Kim, Jong Min;Kwon, Tae Soo
    • Journal of the Korea Computer Graphics Society / v.23 no.5 / pp.67-73 / 2017
  • We propose several techniques that can be employed in a 3D fitness program for monitoring and correcting a user's posture. Implementing such a program requires improved reference-motion generation and visualization techniques. First, to capture the difference between the user's movement and a professional's reference movement, a retargeting method between two different body shapes is studied. Second, the self-occlusion problem that arises when a low-cost depth sensor captures complex motions is solved using a sample database and temporal consistency. The proposed system evaluates the user's posture in light of the user's physical characteristics and then provides feedback.
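
The retargeting step is only named in the abstract. One simple form of retargeting between differently proportioned skeletons keeps each bone's direction from the source pose and rescales it to the target's bone length; the sketch below assumes a parent-indexed skeleton and is illustrative, not the paper's method:

```python
import numpy as np

def retarget_pose(src_joints, parents, dst_lengths):
    """Map 3D joint positions onto a target skeleton by keeping each bone's
    direction from the source pose and rescaling to the target bone length.
    src_joints: (J, 3); parents[j]: parent joint index (-1 for the root);
    dst_lengths[j]: target length of the bone from parents[j] to joint j.
    Assumes parents precede children in index order."""
    dst = np.zeros_like(src_joints, dtype=float)
    for j, p in enumerate(parents):
        if p < 0:
            dst[j] = src_joints[j]              # keep the root in place
            continue
        bone = src_joints[j] - src_joints[p]
        direction = bone / (np.linalg.norm(bone) + 1e-8)
        dst[j] = dst[p] + direction * dst_lengths[j]
    return dst
```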

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.35 no.12 / pp.780-788 / 2008
  • Modeling hand poses and tracking their movement are among the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras used: capturing images from multiple cameras or a stereo camera, or capturing images from a single camera. The former is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method of reconstructing 3D hand poses from a 2D image sequence captured by a single camera, using Belief Propagation in a graphical model, and of recognizing a finger-clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing the joints of the hand and observable nodes holding features extracted from the 2D input image sequence. To track hand poses in 3D, we use the Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract each finger's motion information, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, achieving a recognition rate of 94.66% on 300 test samples.
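
For the recognition stage, a common pattern is one hidden Markov model per action class, with a new sequence labeled by the best-scoring model. A minimal sketch using hmmlearn, where the per-frame feature vectors, state count, and training layout are assumptions rather than the paper's configuration:

```python
# Hypothetical sketch of HMM-based action classification: train one Gaussian
# HMM per finger action and label a new sequence by highest log-likelihood.
# Per-frame feature vectors (e.g., finger joint angles) are assumed inputs.
import numpy as np
from hmmlearn import hmm

def train_action_models(sequences_by_action, n_states=3):
    """sequences_by_action: {action: [np.ndarray of shape (T_i, D), ...]}"""
    models = {}
    for action, seqs in sequences_by_action.items():
        X = np.concatenate(seqs)                  # stack all sequences
        lengths = [len(s) for s in seqs]          # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[action] = m
    return models

def classify(models, seq):
    """Label one observation sequence (T, D) by the best-scoring model."""
    return max(models, key=lambda a: models[a].score(seq))
```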

A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.125-132 / 2021
  • In this paper, we propose a method for constructing a collaborative environment in mixed reality and for working with wearable IoT devices. Mixed reality (MR) combines virtual reality and augmented reality: objects in the real and virtual worlds can be viewed at the same time, and unlike VR, an MR HMD does not cause motion sickness. MR headsets are wireless and are attracting attention as a technology for industrial fields. The Myo wearable device enables arm-rotation tracking and hand-gesture recognition using a triaxial sensor, an EMG sensor, and an acceleration sensor. Although various MR studies are in progress, research on environments in which multiple people participate in mixed reality and manipulate virtual objects with their own hands remains insufficient. We propose a method of constructing an environment in which collaboration is possible, together with an interaction method for smooth manipulation, so that mixed reality can be applied in real industrial fields. As a result, two people could participate in the mixed-reality environment at the same time and share a synchronized virtual object, and each could interact with it through the Myo wearable interface.
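
The abstract does not detail the gesture-to-manipulation mapping. Purely as an illustration, a dispatcher from recognized Myo gestures to object operations could look like the following; the gesture labels and operations are hypothetical, not the paper's interface:

```python
# Hypothetical mapping from recognized hand gestures to virtual-object
# operations; gesture names and operations are illustrative only.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    position: tuple = (0.0, 0.0, 0.0)
    grabbed: bool = False

def handle_gesture(obj: VirtualObject, gesture: str, arm_delta=(0.0, 0.0, 0.0)):
    """Apply one recognized gesture (plus arm motion) to a shared object."""
    if gesture == "fist":                 # close hand -> grab the object
        obj.grabbed = True
    elif gesture == "fingers_spread":     # open hand -> release it
        obj.grabbed = False
    elif obj.grabbed:                     # while held, follow the arm motion
        obj.position = tuple(p + d for p, d in zip(obj.position, arm_delta))
    return obj
```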

Performance Comparison for Exercise Motion Classification using Deep Learning-based OpenPose (OpenPose기반 딥러닝을 이용한 운동동작분류 성능 비교)

  • Son, Nam Rye;Jung, Min A
    • Smart Media Journal / v.12 no.7 / pp.59-67 / 2023
  • Recently, research on behavior analysis that tracks human posture and movement has been active. In particular, OpenPose, open-source software developed at CMU in 2017, is a representative method for estimating human pose and behavior. OpenPose can detect and estimate various parts of a person, such as the body, face, and hands, in real time, making it applicable to fields such as smart healthcare, exercise training, security systems, and medicine. In this paper, we propose a method for classifying four exercise movements commonly performed in the gym (Squat, Walk, Wave, and Fall-down) using OpenPose-based deep learning models, a DNN and a CNN. Training data are collected by capturing the user's movements from recorded videos and real-time camera capture, and the collected dataset is preprocessed with OpenPose. The preprocessed dataset is then used to train the proposed DNN and CNN models for exercise-movement classification. The models' errors are evaluated using MSE, RMSE, and MAE. The evaluation shows that the proposed DNN model outperforms the proposed CNN model.
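
A minimal sketch of what the DNN variant could look like, assuming flattened 2D OpenPose keypoints (25 body joints, 2 coordinates each) as input and the four classes above; the layer sizes are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

CLASSES = ["Squat", "Walk", "Wave", "Fall-down"]

class PoseDNN(nn.Module):
    """Small MLP over flattened OpenPose keypoints (25 joints x 2 coords)."""
    def __init__(self, n_keypoints=25, n_classes=len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_keypoints * 2, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                 # x: (batch, 50) normalized keypoints
        return self.net(x)

model = PoseDNN()
logits = model(torch.randn(8, 50))        # dummy batch of 8 pose vectors
pred = CLASSES[int(logits.argmax(dim=1)[0])]
```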

Analysis of Respiratory Motion Artifacts in PET Imaging Using Respiratory Gated PET Combined with 4D-CT (4D-CT와 결합한 호흡게이트 PET을 이용한 PET영상의 호흡 인공산물 분석)

  • Cho, Byung-Chul;Park, Sung-Ho;Park, Hee-Chul;Bae, Hoon-Sik;Hwang, Hee-Sung;Shin, Hee-Soon
    • The Korean Journal of Nuclear Medicine / v.39 no.3 / pp.174-181 / 2005
  • Purpose: The reduction of respiratory motion artifacts in PET images was studied using respiratory-gated PET (RGPET) with a moving phantom. In particular, a method of generating simulated helical CT images from 4D-CT datasets was developed and applied to phase-specific RGPET images for more accurate attenuation correction. Materials and Methods: Using a motion phantom with a period of 6 seconds and a linear motion amplitude of 26 mm, PET/CT (Discovery ST; GEMS) scans with and without respiratory gating were obtained for one syringe and two vials with volumes of 3, 10, and 30 ml, respectively. RPM (Real-Time Position Management, Varian) was used to track motion during PET/CT scanning. Ten datasets of RGPET and 4D-CT, one for each 10% phase interval, were acquired. From the positions, sizes, and uptake values of each object in the resulting phase-specific PET and CT datasets, the correlation between motion artifacts in PET and CT images and the size of the motion relative to the size of the object was analyzed. Results: The center positions of the three vials in RGPET and 4D-CT agreed with the actual positions within the estimated error. However, the volumes of objects in non-gated PET images increased in proportion to the relative motion size and were overestimated by as much as 250% when the motion amplitude was twice the size of the object; correspondingly, the maximum uptake value was reduced to about 50%. Conclusion: RGPET is shown to remove respiratory motion artifacts in PET imaging; moreover, combining it with 4D-CT enables more precise image fusion and more accurate attenuation correction.
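
The phase-specific datasets come from dividing each breathing cycle into ten 10% phase bins. A simplified sketch of such phase binning from a 1-D respiratory trace; the peak detector and the synthetic cosine trace are assumptions for illustration:

```python
import numpy as np

def phase_bins(resp_signal, times, n_bins=10):
    """Assign each sample a respiratory-phase bin (0..n_bins-1), with phase
    measured as the fraction of the current peak-to-peak breathing cycle."""
    # Peaks = local maxima of the trace, a deliberately simple detector.
    peaks = np.where((resp_signal[1:-1] > resp_signal[:-2]) &
                     (resp_signal[1:-1] > resp_signal[2:]))[0] + 1
    bins = np.full(len(times), -1)            # -1 = outside any full cycle
    for start, end in zip(peaks[:-1], peaks[1:]):
        t0, t1 = times[start], times[end]
        in_cycle = slice(start, end)
        phase = (times[in_cycle] - t0) / (t1 - t0)   # 0.0 .. <1.0
        bins[in_cycle] = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return bins

# Example: a 6-second breathing period, as in the phantom, sampled at 25 Hz.
t = np.arange(0, 30, 0.04)
b = phase_bins(np.cos(2 * np.pi * t / 6.0), t)
```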

A Ubiquitous Vision System based on the Identified Contract Net Protocol (Identified Contract Net 프로토콜 기반의 유비쿼터스 시각시스템)

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hagbae
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.10 / pp.620-629 / 2005
  • In this paper, a new protocol-based approach is proposed for the development of a ubiquitous vision system. The approach treats the ubiquitous vision system as a multi-agent system, so each vision sensor can be regarded as an agent (vision agent). Each vision agent independently performs exact segmentation of a target using color and motion information, visual tracking of multiple targets in real time, and location estimation by a simple perspective transform. The problem of matching a target's identity during handover between vision agents is solved by the Identified Contract Net (ICN) protocol, implemented for the protocol-based approach. The approach is independent of the number of vision agents and requires neither calibration nor overlapping regions between vision agents; the ICN protocol therefore improves the speed, scalability, and modularity of the system. The approach was successfully applied to our ubiquitous vision system and performed well in several experiments.
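
The abstract does not give the ICN message format. In a contract-net-style handover, the agent losing a target announces a task tagged with the target's identity, the other agents bid, and the best bidder wins the contract; the sketch below is schematic, with an assumed bid score and message shape:

```python
# Schematic sketch of contract-net-style handover carrying target identity;
# the bid score and message fields are illustrative, not the paper's spec.
from dataclasses import dataclass

@dataclass
class Announcement:
    target_id: str          # identity preserved across the handover
    last_position: tuple    # last estimate from the announcing agent

class VisionAgent:
    def __init__(self, name, field_of_view):
        self.name = name
        self.fov = field_of_view            # callable: position -> score
        self.tracked = {}

    def bid(self, ann: Announcement) -> float:
        # Bid with how well the target's last position fits this camera's view.
        return self.fov(ann.last_position)

def handover(announcer, others, target_id, last_pos):
    """Announce the task, collect bids, and award the tracking contract."""
    ann = Announcement(target_id, last_pos)
    score, winner = max(((a.bid(ann), a) for a in others), key=lambda b: b[0])
    if score > 0:                           # award the contract
        winner.tracked[target_id] = last_pos
        announcer.tracked.pop(target_id, None)
        return winner
    return None
```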

Effects of Motor Imagery Practice in Conjunction with Repetitive Transcranial Magnetic Stimulation on Stroke Patients

  • Ji, Sang-Goo;Cha, Hyun-Gyu;Kim, Ki-Jong;Kim, Myoung-Kwon
    • Journal of Magnetics / v.19 no.2 / pp.181-184 / 2014
  • The aim of the present study was to examine whether motor imagery (MI) practice in conjunction with repetitive transcranial magnetic stimulation (rTMS) could improve the gait ability of stroke patients. The study was conducted with 29 subjects diagnosed with hemiparesis due to stroke. The experimental group consisted of 15 subjects who performed MI practice in conjunction with rTMS, while the control group consisted of 14 subjects who performed MI practice with sham therapy. Both groups received traditional physical therapy for 30 minutes a day, 5 days a week, for 6 weeks, plus 15 minutes of mental practice; the experimental group then received rTMS, and the control group sham stimulation, for 15 minutes. Gait analysis was performed using a three-dimensional motion-capture system, a real-time tracking device that collects data from infrared reflective markers using six cameras. Velocity, step length, and cadence improved significantly in both groups after the practice (p<0.05), and significant between-group differences were found in velocity and cadence and in their rates of change (p<0.05). The results show that MI practice in conjunction with rTMS is more effective at improving gait ability than MI practice alone.
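
The three reported gait measures follow directly from successive heel-strike events in the marker trajectories. A small sketch of how they are typically derived; the event arrays are assumed inputs, not the study's software:

```python
import numpy as np

def gait_parameters(strike_times, strike_positions):
    """Derive the three reported gait measures from successive heel strikes.
    strike_times: (N,) seconds; strike_positions: (N,) forward position in m."""
    step_times = np.diff(strike_times)               # seconds per step
    step_lengths = np.diff(strike_positions)         # metres per step
    cadence = 60.0 / step_times.mean()               # steps per minute
    velocity = step_lengths.sum() / (strike_times[-1] - strike_times[0])  # m/s
    return velocity, step_lengths.mean(), cadence
```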

Development of electric vehicle maintenance education ability using digital twin technology and VR

  • Lee, Sang-Hyun;Jung, Byeong-Soo
    • International Journal of Advanced Culture Technology / v.8 no.2 / pp.58-67 / 2020
  • In this paper, a maintenance training manual for EV vehicles was produced using digital twin technology and sensors such as IR-based lighthouse tracking and a head tracker. Through digital twin technology and VR, which provide high immersion to users, sensory content creation techniques were secured through animations and effects suited to EV maintenance situations. The manual takes the form of a training simulation built with 3D engine programming: 3D objects are created in real time, on-screen obstructions are minimized, and specific menus can be selected in the virtual space. Content was also produced so that users can operate it easily, with automatic output to the head-mounted display (HMD) for EV maintenance and inspection tasks. This development can increase user immersion through detailed scenarios for EV maintenance and inspection, step-by-step 3D parts display, and animations and effects for maintenance situations. The study helped trainees become familiar with the correct maintenance process, improving the quality of education and reducing safety accidents, and helped users learn naturally how to use the equipment and how to maintain EV vehicles.

Object Detection using Multiple Color Normalization and Moving Color Information (다중색상정규화와 움직임 색상정보를 이용한 물체검출)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.12B no.7 s.103 / pp.721-728 / 2005
  • This paper proposes an effective detection system for moving objects with specified color and motion information. The proposed system includes an object extraction and definition process that uses MCN (Multiple Color Normalization) and MCWUPC (Moving Color Weighted Unmatched Pixel Count) computation to decide whether a moving object is present, together with an object segmentation technique using signature information to extract objects exactly with high probability. A real-time detection system was implemented to verify the effectiveness of the technique, and experiments show an object-tracking success rate of more than 89% over 120 image frames.
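
The abstract does not define the MCWUPC measure. As an assumption-laden illustration of its general shape, the sketch below counts inter-frame unmatched pixels and weights each by a target-color match; this is not the authors' formula:

```python
import cv2
import numpy as np

def color_weighted_upc(prev_bgr, curr_bgr, target_hue, hue_tol=10, diff_thresh=25):
    """Count pixels that changed between frames, weighting each unmatched
    pixel by whether its hue matches the target color (illustrative only;
    hue wraparound at 0/179 is ignored for brevity)."""
    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
    unmatched = diff > diff_thresh                    # inter-frame motion mask
    hue = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(int)
    color_match = np.abs(hue - target_hue) < hue_tol  # target-color mask
    weights = np.where(color_match, 1.0, 0.2)         # favor target-colored motion
    return float((unmatched * weights).sum())
```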