• Title/Summary/Keyword: real time motion tracking


Structure of Cyber Physical System in Dance Performances with Real Time Interactive Media System (실시간 인터렉티브 미디어 시스템을 활용한 무용공연의 사이버 물리 시스템 구조)

  • Kim, Eun Jung;Cho, Sunghee
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.279-280 / 2018
  • This study analyzes the real-time interactive media systems used in dance performances from the perspective of cyber-physical systems (CPS). To examine how the connectivity and automation between virtual and physical space, maximized in the era of the Fourth Industrial Revolution, are manifesting in dance, we studied dance performances that employ real-time interactive media systems. The results show that the CPS processes of computation, communication, and control operate through a motion tracking system: information from the dancers (the physical entities), such as their movements and internal muscle responses, is processed by software and output as imagery through projection mapping. Future case studies should further specify the elements of the Fourth Industrial Revolution in dance performance.


Marker Classification by Sensor Fusion for Hand Pose Tracking in HMD Environments using MLP (HMD 환경에서 사용자 손의 자세 추정을 위한 MLP 기반 마커 분류)

  • Vu, Luc Cong;Choi, Eun-Seok;You, Bum-Jae
    • Annual Conference of KIPS / 2018.10a / pp.920-922 / 2018
  • This paper describes a method to classify simple circular artificial markers on the surfaces of a box mounted on the back of the hand, in order to detect the pose of the user's hand for VR/AR applications, using a Leap Motion camera and two IMU sensors. One IMU sensor is located in the box and the other is fixed to the camera. A multi-layer perceptron (MLP) is adopted to classify the artificial markers on each surface tracked by the camera, using the IMU sensor data. The method runs successfully in real time, at 70 Hz, on a PC.
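As a rough illustration of the classification step, the sketch below runs an MLP forward pass that assigns a marker observation to one of the six box surfaces. The 8-D input feature (marker image coordinates plus relative IMU orientation) and the random weights are hypothetical stand-ins, not the paper's trained network.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    # One hidden layer with ReLU, softmax output over the 6 box surfaces
    h = np.maximum(0.0, x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# Hypothetical 8-D feature: marker image coordinates + relative IMU orientation
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 6)), np.zeros(6)
x = rng.normal(size=(1, 8))
p = mlp_forward(x, W1, b1, W2, b2)   # class probabilities, shape (1, 6)
surface = int(np.argmax(p))          # predicted surface index, 0..5
```

In the paper's setting the network would be trained on labeled marker observations; here the weights are random, so only the shape of the computation is meaningful.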

Motion-compensated Radial Representation-based Real-Time Object Tracking (움직임 보상한 방사상 표현 기반 실시간 객체 추적)

  • Ra, Jeong-Jung;Seo, Kyung-Seok;Choi, Heung-Moon
    • Annual Conference of KIPS / 2004.05a / pp.719-722 / 2004
  • We propose an algorithm that tracks the contour of a fast-moving object in real time by estimating and compensating motion at the object's center point. By applying a radial representation and estimating and compensating motion with a block matching algorithm only at the object center, the object contour is tracked in real time with little computation. During energy convergence, the gradient image and the difference image are used together as the energy function, making the method robust to background clutter. Experimental results confirm that fast-moving objects, and objects amid background clutter, are tracked robustly in real time.

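The center-point block matching step can be sketched as a brute-force SAD search over a small window, applied only at the object center as the abstract describes (block and search-window sizes here are illustrative):

```python
import numpy as np

def block_match(prev, curr, cy, cx, bsize=8, search=4):
    """Estimate the motion of the block centred at (cy, cx) by
    full-search SAD (sum of absolute differences) matching."""
    h = bsize // 2
    ref = prev[cy - h:cy + h, cx - h:cx + h].astype(np.int32)
    best, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[cy - h + dy:cy + h + dy,
                        cx - h + dx:cx + h + dx].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

# Synthetic check: shift a patch by (2, -1) and recover the motion
prev = np.zeros((32, 32), np.uint8); prev[12:20, 12:20] = 255
curr = np.roll(np.roll(prev, 2, axis=0), -1, axis=1)
dy, dx = block_match(prev, curr, 16, 16)
```

Running the search at a single point, rather than over every contour pixel, is what keeps the computation small enough for real-time tracking.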

Mobile Performance Evaluation of Mecanum Wheeled Omni-directional Mobile Robot (메카넘휠 기반의 전방향 이동로봇 주행성능 평가)

  • Chu, Baeksuk;Sung, Young Whee
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.23 no.4 / pp.374-379 / 2014
  • Mobile robots with omni-directional wheels can generate instant omni-directional motion without requiring extra space to change the direction of the body. Therefore, they are capable of moving in an arbitrary direction under any orientation even in narrow aisles or tight areas. In this research, an omni-directional mobile robot based on Mecanum wheels was developed to achieve omni-directionality. A CompactRIO embedded real-time controller and C series motion and I/O modules were employed in the control system design. Ultrasonic sensors installed on the front and lateral sides were utilized to measure the distance between the mobile robot and the side wall of a workspace. Through intensive experiments, a performance evaluation of the mobile robot was conducted to confirm its feasibility for industrial purposes. Mobility, omni-directionality, climbing capacity, and tracking performance of a squared trajectory were selected as performance indices to assess the omni-directional mobile robot.
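For context, the omni-directionality of a Mecanum base comes from its inverse kinematics: each wheel speed is a signed combination of the commanded body velocities. One common sign convention is sketched below; the geometry values `lx`, `ly`, `r` are illustrative and the convention may differ from the robot in the paper.

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, lx=0.25, ly=0.20, r=0.05):
    """Inverse kinematics of a 4-wheel Mecanum base (45-degree rollers).
    Wheel order: front-left, front-right, rear-left, rear-right.
    lx, ly: half wheelbase / half track [m]; r: wheel radius [m]."""
    L = lx + ly
    J = np.array([[1, -1, -L],
                  [1,  1,  L],
                  [1,  1, -L],
                  [1, -1,  L]]) / r
    return J @ np.array([vx, vy, wz])

# Pure sideways motion: diagonal wheel pairs spin in opposite directions
w = mecanum_wheel_speeds(0.0, 0.3, 0.0)
```

This is why the robot can translate in an arbitrary direction without reorienting its body: any (vx, vy, wz) maps to a unique set of four wheel speeds.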

Real-time Body Surface Motion Tracking using the Couch Based Computer-controlled Motion Phantom (CBMP) and Ultrasonic Sensor: A Feasibility Study (CBMP (Couch Based Computer-Controlled Motion Phantom)와 초음파센서에 기반한 실시간 체표면 추적 시스템 개발: 타당성 연구)

  • Lee, Suk;Yang, Dae-Sik;Park, Young-Je;Shin, Dong-Ho;Huh, Hyun-Do;Lee, Sang-Hoon;Cho, Sam-Ju;Lim, Sang-Wook;Jang, Ji-Sun;Cho, Kwang-Hwan;Shin, Hun-Joo;Kim, Chul-Yong
    • Progress in Medical Physics / v.18 no.1 / pp.27-34 / 2007
  • Respiration gating radiotherapy, developed in consideration of the movement of the body surface and internal organs during respiration, is categorized into methods that analyze the respiratory volume and methods that radiographically track fiducial landmarks or dermatologic markers. However, these methods require expensive equipment and are limited to specific radiotherapy setups, so a new method that avoids these problems is needed. This study aims to measure body surface motion using the couch based computer-controlled motion phantom (CBMP) and an ultrasonic sensor, and to develop a respiration gating technique that adjusts the patient's couch using the opposite values of the acquired data. The CBMP built to measure body surface motion consists of a BS II microprocessor, the sensor, a host computer, and a stepping motor, and a program to control and operate it was developed. After the CBMP was driven with arbitrary motion data and the phantom movements were acquired with the sensor, the two data sets were compared and analyzed. Respiratory motion was then acquired using a rabbit, and real-time respiration gating was achieved by driving the phantom with the opposite values of the data. Analysis of the acquisition-correction delay shows that the data agreed within 1% and that the delay was achieved in real time ($2.34{\times}10^{-4}$ sec). The maximum movement was 6 mm in the Z direction, with a respiratory cycle of 2.9 seconds. This study confirms the clinical feasibility of respiration gating using the CBMP and an ultrasonic sensor.

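The "opposite value" idea reduces, in essence, to driving the couch with the negated surface displacement so the target stays fixed in the beam frame. A toy sketch (function name and units are illustrative, not from the paper):

```python
def couch_compensation(surface_positions_mm):
    """Return couch set-points that negate the measured body-surface
    motion relative to the first sample, so the target stays fixed in
    the beam frame (toy version of the opposite-value gating idea)."""
    baseline = surface_positions_mm[0]
    return [-(p - baseline) for p in surface_positions_mm]

# Surface rises 2, 6, then 3 mm; the couch moves by the opposite amounts
offsets = couch_compensation([10.0, 12.0, 16.0, 13.0])
```

In the actual system these set-points would be streamed to the stepping motor in real time, which is why the sub-millisecond acquisition-correction delay matters.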

A frequency tracking semi-active algorithm for control of edgewise vibrations in wind turbine blades

  • Arrigan, John;Huang, Chaojun;Staino, Andrea;Basu, Biswajit;Nagarajaiah, Satish
    • Smart Structures and Systems / v.13 no.2 / pp.177-201 / 2014
  • With the increased size and flexibility of the tower and blades, structural vibrations are becoming a limiting factor in the design of even larger and more powerful wind turbines. Research into vibration mitigation devices in the turbine tower has been carried out, but the use of dampers in the blades has yet to be investigated in detail. Mitigating vibrations will increase the design life, and hence the economic viability, of the turbine blades and allow continual operation with decreased downtime. The aim of this paper is to investigate the effectiveness of semi-active tuned mass dampers (STMDs) in reducing edgewise vibrations in the turbine blades. A frequency tracking algorithm based on the short-time Fourier transform (STFT) is used to tune the damper. A theoretical model has been developed to capture the dynamic behaviour of the blades, including the coupling with the tower, so as to accurately model the dynamics of the entire turbine structure. The resulting model consists of time-dependent equations of motion with negative damping terms due to the coupling present in the system. The performance of the STMD-based vibration controller has been tested under different loading and operating conditions. Numerical analysis has shown that variation in certain parameters of the system, together with the time-varying nature of the system matrices, leads to the need for STMDs that allow real-time tuning to the resonant frequencies of the system.
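The STFT-based tracking step amounts to windowing the latest vibration samples, taking an FFT, and retuning the damper to the dominant spectral peak. A minimal sketch (sampling rate, window length, and the 3.5 Hz test signal are illustrative):

```python
import numpy as np

def dominant_frequency(signal, fs, win_size):
    """Track the dominant frequency of the most recent window
    via a short-time FFT with a Hann window."""
    window = signal[-win_size:] * np.hanning(win_size)
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(win_size, d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

fs = 200.0
t = np.arange(0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * 3.5 * t)                 # edgewise vibration at 3.5 Hz
f_est = dominant_frequency(x, fs, win_size=512)
```

An STMD controller would call this periodically and adjust the damper's stiffness so its tuned frequency follows `f_est` as the system's resonant frequencies drift.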

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services / v.10 no.4 / pp.55-70 / 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In that sense, inferring a person's emotional state through facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (hidden Markov models) and facial motion tracking based on optical flow. In a conventional HMM over the basic emotional states, transitions between emotions are assumed to pass through the neutral state. In this work, however, we propose an enhanced transition framework that allows transitions between emotional states without passing through the neutral state, in addition to the traditional transition model. For localization of facial features in the video sequence, we exploit template matching and optical flow. The facial feature displacements traced by optical flow are used as input parameters to the HMM for facial expression recognition. Experiments show that the proposed framework can effectively recognize facial expressions in real time.

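The recognition step, scoring a sequence of quantized displacement features against per-emotion HMMs and picking the best, can be sketched with a scaled forward algorithm. The two toy emotion models below are invented for illustration, not the paper's trained parameters.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model)
    for a discrete-emission HMM."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_p

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
# Toy emissions: symbol 1 = "large upward mouth-corner displacement"
B_happy   = np.array([[0.2, 0.8], [0.2, 0.8]])
B_neutral = np.array([[0.8, 0.2], [0.8, 0.2]])

obs = [1, 1, 1, 1]                      # quantized optical-flow features
models = {"happy": B_happy, "neutral": B_neutral}
best = max(models, key=lambda m: hmm_log_likelihood(obs, pi, A, models[m]))
```

In the paper's pipeline, `obs` would come from quantizing the optical-flow displacements of tracked facial features, and one HMM per emotion would be trained on labeled sequences.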

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.187-194 / 2015
  • In this paper, we present an effective visual odometry estimation system that tracks the real-time pose of a camera moving in 3D space. To meet the real-time requirement while making full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it alternates additional inlier-set refinement with motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is insufficient, our system computes the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with TUM benchmark datasets and implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.
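A drastically simplified analogue of the inlier-refinement loop, solving for a pure 2-D translation instead of full 6-DoF camera motion; the threshold, iteration count, and synthetic matches are illustrative only.

```python
import numpy as np

def estimate_translation(p_prev, p_curr, thresh=0.5, iters=3):
    """Estimate a 2-D translation from feature matches, alternating
    inlier selection with motion refinement (toy stand-in for the
    paper's inlier-set refinement + motion refinement loop)."""
    d = p_curr - p_prev
    t = np.median(d, axis=0)              # robust initial estimate
    inliers = np.ones(len(d), bool)
    for _ in range(iters):
        resid = np.linalg.norm(d - t, axis=1)
        inliers = resid < thresh          # refine the inlier set
        t = d[inliers].mean(axis=0)       # refine the motion estimate
    return t, inliers

rng = np.random.default_rng(1)
p_prev = rng.uniform(0, 100, (30, 2))
p_curr = p_prev + np.array([2.0, -1.0])   # true camera-induced shift
p_curr[:5] += rng.uniform(5, 10, (5, 2))  # five bad matches (outliers)
t, inl = estimate_translation(p_prev, p_curr)
```

The real system solves for a rigid-body transform from 3-D feature correspondences, but the alternation between inlier selection and motion refinement follows the same pattern.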

Aerial Video Summarization Approach based on Sensor Operation Mode for Real-time Context Recognition (실시간 상황 인식을 위한 센서 운용 모드 기반 항공 영상 요약 기법)

  • Lee, Jun-Pyo
    • Journal of the Korea Society of Computer and Information / v.20 no.6 / pp.87-97 / 2015
  • Aerial video summarization is not only the key to effective browsing of video within a limited time, but also a cue for efficiently aggregating the situational awareness acquired by an unmanned aerial vehicle. Unlike previous works, we utilize the sensor operation modes of the unmanned aerial vehicle (global, local, and focused surveillance) in order to accurately summarize the aerial video while considering the flight and surveillance/reconnaissance environment. In focused mode, we propose a moving-react tracking method that uses partitioned motion vectors and a spatiotemporal saliency map to detect and track moving objects of interest continuously. In our simulation results, key frames are correctly detected for aerial video summarization according to the sensor operation mode of the aerial vehicle, verifying the efficiency of the proposed video summarization method.

AdaBoost-Based Gesture Recognition Using Time Interval Trajectory Features (시간 간격 특징 벡터를 이용한 AdaBoost 기반 제스처 인식)

  • Hwang, Seung-Jun;Ahn, Gwang-Pyo;Park, Seung-Je;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.17 no.2 / pp.247-254 / 2013
  • The task of 3D gesture recognition for controlling equipment has become important with the recent spread of 3D smart TVs. In this paper, the AdaBoost algorithm is applied to 3D gesture recognition using a Kinect sensor. Time-interval trajectories of the hand, wrist, and arm tracked by Kinect are used to train and classify 3D gestures with AdaBoost. Experimental results demonstrate that the proposed method can successfully extract trained gestures from continuous hand, wrist, and arm motion in real time.
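For illustration, here is a bare-bones AdaBoost with one-feature threshold stumps, the weak-learner setup commonly paired with AdaBoost. The single trajectory feature and the two-gesture toy labels are invented, not the paper's time-interval trajectory features.

```python
import numpy as np

def train_adaboost(X, y, rounds=5):
    """AdaBoost over threshold stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)               # per-sample weights
    ensemble = []
    for _ in range(rounds):
        best = None                       # (err, feature, threshold, sign)
        for f in range(d):
            for th in np.unique(X[:, f]):
                for s in (1, -1):
                    pred = s * np.where(X[:, f] < th, -1, 1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, th, s)
        err, f, th, s = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = s * np.where(X[:, f] < th, -1, 1)
        w *= np.exp(-alpha * y * pred)    # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, f, th, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, f] < th, -1, 1)
                for a, f, th, s in ensemble)
    return np.sign(score)

# Toy data: one hypothetical trajectory feature separating two gestures
X = np.array([[0.12], [0.18], [0.85], [0.95]])
y = np.array([-1, -1, 1, 1])
model = train_adaboost(X, y)
```

In the paper's setting, each sample would instead be a vector of time-interval trajectory features for the hand, wrist, and arm, and a multi-class extension of AdaBoost would distinguish several gestures.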