• Title/Summary/Keyword: motion-tracking

Search Results: 1,225

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning, we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed by means of Radial Basis Functions (RBF).
From the experiments, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
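The RBF deformation step described above can be sketched as follows; this is a minimal illustration under assumed parameters (a Gaussian kernel with width `sigma`), not the authors' implementation:

```python
import numpy as np

def rbf_deform(control_pts, control_disp, vertices, sigma=0.5):
    """Propagate control-point displacements to nearby vertices with
    Gaussian radial basis functions (sigma is an assumed parameter)."""
    # Solve for one RBF weight vector per coordinate axis
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    weights = np.linalg.solve(np.exp(-(d / sigma) ** 2), control_disp)
    # Evaluate the interpolant at every mesh vertex
    dv = np.linalg.norm(vertices[:, None, :] - control_pts[None, :, :], axis=-1)
    return vertices + np.exp(-(dv / sigma) ** 2) @ weights

ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])    # tracked feature points
disp = np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0]])   # feature displacements
verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])   # non-feature vertices
moved = rbf_deform(ctrl, disp, verts)
```

Because the weights solve an exact interpolation system, vertices coinciding with control points reproduce the control displacements exactly, while vertices in between are blended smoothly.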

Evaluation of Real-time Measurement of Liver Tumor Movement and of the Synchrony™ System's Accuracy in Radiosurgery using a Robot CyberKnife (로봇사이버나이프를 이용한 간 종양의 실시간 움직임 측정과 방사선수술 시 호흡추적장치의 정확성 평가)

  • Kim, Gha-Jung;Shim, Su-Jung;Kim, Jeong-Ho;Min, Chul-Kee;Chung, Weon-Kuu
    • Radiation Oncology Journal
    • /
    • v.26 no.4
    • /
    • pp.263-270
    • /
    • 2008
  • Purpose: This study aimed to quantitatively measure tumor movement in real time and to evaluate treatment accuracy in liver tumor patients who underwent radiosurgery with the Synchrony™ respiratory motion tracking system of the robotic CyberKnife. Materials and Methods: The study subjects included 24 liver tumor patients who underwent CyberKnife treatment, comprising 64 treatment sessions with the Synchrony™ respiratory motion tracking system. For all patients, 4 to 6 acupuncture needles were inserted into the vicinity of the liver tumor under ultrasonographic guidance. A treatment plan was set up using CT images acquired for treatment planning. The position of each acupuncture needle was identified at every treatment session by comparing Digitally Reconstructed Radiographs (DRR), prepared at the time of treatment planning, with X-ray images photographed in real time. The results were stored through the Motion Tracking System (MTS) in the Mtsmain.log treatment file, and tumor movement was measured from these records. In addition, the accuracy of radiosurgery using the CyberKnife was evaluated by the correlation errors between the real-time positions of the acupuncture needles and the predicted coordinates. Results: The maximum and average translational movements of the liver tumor were 23.5 mm and 13.9±5.5 mm in the superior-inferior direction, 3.9 mm and 1.9±0.9 mm in the left-right direction, and 8.3 mm and 4.9±1.9 mm in the anterior-posterior direction, respectively. The maximum and average rotational movements of the liver tumor were 3.3° and 2.6±1.3° for X (left-right) axis rotation, 4.8° and 2.3±1.0° for Y (cranio-caudal) axis rotation, and 3.9° and 2.8±1.1° for Z (anterior-posterior) axis rotation, respectively.
In addition, the average correlation error, which represents the treatment's accuracy, was 1.1±0.7 mm. Conclusion: In this study, the real-time movement of liver tumors during radiosurgery was verified quantitatively, and the accuracy of radiosurgery with the Synchrony™ respiratory motion tracking system of the robotic CyberKnife was evaluated. These results are expected to inform the determination of treatment volume in radiosurgery or conventional radiotherapy and to provide useful information on liver tumor movement.
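The correlation error reported above is, in essence, the distance between the model-predicted and the imaged fiducial positions. A minimal sketch of that bookkeeping, with made-up coordinates:

```python
import numpy as np

def correlation_errors(predicted, measured):
    """Per-image 3D distance (mm) between predicted and measured fiducial
    positions, summarized as mean and standard deviation."""
    err = np.linalg.norm(predicted - measured, axis=1)  # Euclidean error per image
    return err.mean(), err.std()

# Hypothetical predicted vs. X-ray-measured fiducial positions (mm)
pred = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0]])
meas = np.array([[0.6, 0.8, 0.0], [1.0, 2.0, 0.5]])
mean_err, std_err = correlation_errors(pred, meas)
```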

Human Tracking and Body Silhouette Extraction System for Humanoid Robot (휴머노이드 로봇을 위한 사람 검출, 추적 및 실루엣 추출 시스템)

  • Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.6C
    • /
    • pp.593-603
    • /
    • 2009
  • In this paper, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouettes with an active stereo camera. The proposed system consists of three modules: detection, tracking, and silhouette extraction. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean shift based tracking method in which the tracked objects are characterized as disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then effectively incorporated into the graph cut framework for fine segmentation. The proposed system was evaluated against ground truth data and shown to detect and track multiple people very well and to produce high-quality silhouettes. The proposed system can assist gesture and gait recognition in the field of Human-Robot Interaction (HRI).
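The disparity-weighted color histogram idea can be sketched as follows; this is our reading of the abstract under assumed parameters (a Gaussian disparity weight and a hue-only histogram), not the authors' code:

```python
import numpy as np

def disparity_weighted_histogram(hue, disparity, target_disp, n_bins=16, sigma=2.0):
    """Hue histogram in which each pixel is weighted by how close its
    disparity is to the tracked target's disparity (assumed Gaussian weight),
    so pixels at the target's depth dominate the color model."""
    w = np.exp(-((disparity - target_disp) / sigma) ** 2)
    bins = (hue * n_bins // 180).clip(0, n_bins - 1).astype(int)  # hue in [0, 180)
    hist = np.bincount(bins.ravel(), weights=w.ravel(), minlength=n_bins)
    return hist / hist.sum()

# Toy 2x2 frame: three pixels at the target depth, one in the background
hue = np.array([[10, 10], [90, 170]])
disp = np.array([[5.0, 5.0], [0.0, 5.0]])
h = disparity_weighted_histogram(hue, disp, target_disp=5.0)
```

The normalized histogram can then serve as the target model for mean shift iterations over the backprojected frame.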

Fuzzy Nonlinear Adaptive Control of Overhead Cranes for Anti-Sway Trajectory Tracking and High-Speed Hoisting Motion (고속 권상운동과 흔들림억제 궤적추종을 위한 천정주행 크레인의 퍼지 비선형 적응제어)

  • Park, Mun-Soo;Chwa, Dong-Kyoung;Hong, Suk-Kyo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.5
    • /
    • pp.582-590
    • /
    • 2007
  • Nonlinear adaptive control of overhead cranes is investigated for anti-sway trajectory tracking with high-speed hoisting motion. The sway dynamics of two-dimensional underactuated overhead cranes is heavily coupled with the trolley acceleration, the hoisting rope length, and the hoisting velocity, which is an obstacle in the design of a decoupling-control-based anti-sway trajectory tracking control law. To cope with this obstacle, we propose a fuzzy nonlinear adaptive anti-sway trajectory tracking control law that guarantees the uniform ultimate boundedness of the sway dynamics even in the presence of uncertainties, in such a way that it cancels the effect of the trolley acceleration and hoisting velocity on the sway dynamics. In particular, system uncertainties, including system parameter uncertainty, unmodelled dynamics, and external disturbances, are compensated in an adaptive manner by utilizing fuzzy uncertainty observers. Accordingly, the ultimate bound of the tracking errors and the sway angle decreases to zero as the fuzzy approximation errors decrease to zero. Finally, numerical simulations are performed to confirm the effectiveness of the proposed scheme.
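To make the coupling concrete, the sketch below integrates a standard linearized pendulum-on-trolley sway model, l*theta'' + 2*l'*theta' + g*theta = -x''. It is a generic illustration of how trolley acceleration and hoisting rate excite the sway angle, not the paper's controller:

```python
def simulate_sway(accel, hoist_rate, l0=1.0, g=9.81, dt=0.001, steps=2000):
    """Euler integration of the linearized sway model
    l*theta'' + 2*l'*theta' + g*theta = -x''  (small-angle assumption).
    Returns the peak sway angle (rad) over the simulated horizon."""
    theta, omega, l = 0.0, 0.0, l0
    peak = 0.0
    for k in range(steps):
        a = accel(k * dt)                      # trolley acceleration x''(t)
        alpha = -(g * theta + a + 2.0 * hoist_rate * omega) / l
        theta += omega * dt
        omega += alpha * dt
        l = max(0.1, l + hoist_rate * dt)      # rope length changes while hoisting
        peak = max(peak, abs(theta))
    return peak

# Constant trolley acceleration with upward hoisting (negative rope-length rate)
peak_sway = simulate_sway(accel=lambda t: 0.5, hoist_rate=-0.2)
```

Note that upward hoisting (l' < 0) makes the velocity term destabilizing, which is exactly why the control law above must cancel the hoisting-velocity coupling.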

Mixed-reality simulation for orthognathic surgery

  • Fushima, Kenji;Kobayashi, Masaru
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.38
    • /
    • pp.13.1-13.12
    • /
    • 2016
  • Background: A mandibular motion tracking system (ManMoS) has been developed for orthognathic surgery. This article aims to introduce the ManMoS and to examine the accuracy of this system. Methods: Skeletal and dental models are reconstructed in a virtual space from the DICOM data of three-dimensional computed tomography (3D-CT) recording and the STL data of 3D scanning, respectively. The ManMoS uniquely integrates the virtual dento-skeletal model with the real motion of the dental cast mounted on the simulator, using a reference splint. The positional change of the dental cast is tracked by the 3D motion tracking equipment and reflected in the jaw position of the virtual model in real time, generating a mixed-reality surgical simulation. ManMoS was applied to two clinical cases with facial asymmetry. In order to assess the accuracy of the ManMoS, the positional change of the lower dental arch was compared between the virtual and real models. Results: With the measurement data of the real lower dental cast as a reference, the measurement error for the whole simulation system was less than 0.32 mm. In ManMoS, the skeletal and dental asymmetries were adequately diagnosed in three dimensions. Jaw repositioning was simulated with priority given to the skeletal correction rather than the occlusal correction. In both cases, facial asymmetry was successfully improved while a normal occlusal relationship was reconstructed. The positional change measured in the virtual model did not differ significantly from that in the real model. Conclusions: The accuracy of the ManMoS was suggested to be good enough for clinical use. This surgical simulation system appears to meet clinical demands well and is an important facilitator of communication between orthodontists and surgeons.
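The per-frame update at the heart of such a system, applying the tracked pose of the real dental cast to the virtual model, amounts to a rigid-body transform. A minimal sketch with a made-up pose (not the ManMoS API):

```python
import numpy as np

def apply_tracked_pose(vertices, rotation, translation):
    """Apply a tracked rigid-body pose (3x3 rotation matrix plus a
    translation vector, mm) to the virtual model's vertices."""
    return vertices @ rotation.T + translation

# Hypothetical tracked pose: 90-degree rotation about z plus 1 mm shift along x
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
verts = np.array([[1.0, 0.0, 0.0]])    # one vertex of the virtual dental model
moved = apply_tracked_pose(verts, Rz, t)
```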

Multiple Pedestrians Detection and Tracking using Color Information from a Moving Camera (이동 카메라 영상에서 컬러 정보를 이용한 다수 보행자 검출 및 추적)

  • Lim, Jong-Seok;Kim, Wook-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.11B no.3
    • /
    • pp.317-326
    • /
    • 2004
  • This paper presents a new method for detecting multiple pedestrians and tracking a specific pedestrian using color information from a moving camera. We first extract motion vectors from the input image using the block matching algorithm (BMA). Next, a difference image is calculated on the basis of the motion vectors and converted to a binary image. The binary image contains unnecessary noise, which is removed by means of the proposed noise deletion method. Then, we detect pedestrians through the projection algorithm. If pedestrians are very close to each other, we separate them using RGB color information, and we track a specific pedestrian using the RGB color information in its center region. The experimental results on our test sequences demonstrated the high efficiency of our approach, with a detection success ratio of 97%, a detection failure ratio of 3%, and excellent tracking.
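The projection step can be sketched as follows (a simplified illustration, not the paper's code): column sums of the binary foreground image form a profile whose empty columns separate individual pedestrians.

```python
import numpy as np

def segment_by_projection(binary):
    """Find pedestrian column ranges from the vertical projection
    of a binary foreground image (0/1 values)."""
    profile = binary.sum(axis=0)              # column-wise foreground count
    regions, start = [], None
    for x, v in enumerate(profile):
        if v > 0 and start is None:
            start = x                         # a foreground region opens
        elif v == 0 and start is not None:
            regions.append((start, x - 1))    # region closes at the last column
            start = None
    if start is not None:
        regions.append((start, len(profile) - 1))
    return regions

# Toy binary image: two separated foreground blobs
img = np.array([[0, 1, 1, 0, 0, 1, 0],
                [0, 1, 1, 0, 0, 1, 0]])
regions = segment_by_projection(img)
```

When two pedestrians touch, the profile has no zero valley between them, which is where the color-based separation described above takes over.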

Occluded Object Motion Tracking Method based on Combination of 3D Reconstruction and Optical Flow Estimation (3차원 재구성과 추정된 옵티컬 플로우 기반 가려진 객체 움직임 추적방법)

  • Park, Jun-Heong;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.5
    • /
    • pp.537-542
    • /
    • 2011
  • A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another. We propose a 3D reconstruction method for occluded object motion tracking that, like the mirror neuron system, responds even when the target is hidden. To model a system that recognizes intention through such a firing effect, we calculate depth information from the images of a stereo camera and reconstruct three-dimensional data. The movement direction of the object is estimated by optical flow on the three-dimensional image data created by the 3D reconstruction. The optical flow result is made robust to noise by a Kalman filter estimation algorithm. The reconstructed three-dimensional image data obtained through motion tracking of the object are saved as a history. When the whole or part of the object is hidden from the stereo camera by other objects, it is restored by retrieving image data from the history of saved past images, and the motion of the object continues to be tracked.
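The history mechanism can be sketched in simplified 1-D form (our interpretation, not the authors' code): positions are stored each frame, and during occlusion the track is continued from the stored history at the last observed velocity.

```python
def track_with_history(observations):
    """Track a 1-D position; when the observation is None (occluded),
    extrapolate from the last two entries stored in the history."""
    history = []
    for obs in observations:
        if obs is not None:
            history.append(obs)                    # visible: store as-is
        elif len(history) >= 2:
            velocity = history[-1] - history[-2]   # last observed motion
            history.append(history[-1] + velocity) # constant-velocity fill-in
        elif history:
            history.append(history[-1])            # no velocity yet: hold position
    return history

# Object visible for 3 frames, occluded for 2, then visible again
track = track_with_history([0.0, 1.0, 2.0, None, None, 5.0])
```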

Trace of Moving Object using Structured Kalman Filter (구조적 칼만 필터를 이용한 이동 물체의 추적)

  • Jang, Dae-Sik;Jang, Seok-Woo;Kim, Gye-young;Choi, Hyung-Il
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.5
    • /
    • pp.319-325
    • /
    • 2002
  • Tracking moving objects is one of the most important techniques in motion analysis and understanding, and it poses many difficult problems. Estimating and identifying moving objects is especially difficult when the background and the moving objects vary dynamically. Under such a complex environment, targets may disappear totally or partially due to occlusion by other objects. The Kalman filter has been used to estimate motion information and to predict the appearance of targets in succeeding frames from that information. In this paper, we propose another version of the Kalman filter, called the structured Kalman filter, which can successfully estimate motion information under deteriorating conditions such as occlusion. Experimental results show that the suggested approach is very effective in estimating and tracking non-rigid moving objects reliably.
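For context, a textbook constant-velocity Kalman filter (the baseline that the structured variant builds on, not the paper's proposed filter) predicts each target position and corrects it with the incoming measurement:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """1-D constant-velocity Kalman filter over state [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (constant velocity)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)         # update with measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# Target moving at constant velocity: estimates converge onto the ramp
est = kalman_track([0.0, 1.0, 2.0, 3.0, 4.0])
```

During occlusion, the update step is simply skipped and the predict step carries the state forward, which is the behavior the structured Kalman filter is designed to improve upon.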

The MPI CyberMotion Simulator: A Novel Research Platform to Investigate Human Control Behavior

  • Nieuwenhuizen, Frank M.;Bulthoff, Heinrich H.
    • Journal of Computing Science and Engineering
    • /
    • v.7 no.2
    • /
    • pp.122-131
    • /
    • 2013
  • The MPI CyberMotion Simulator provides a unique motion platform, as it features an anthropomorphic robot with a large workspace, combined with an actuated cabin and a linear track for lateral movement. This paper introduces the simulator as a tool for studying human perception, and compares its characteristics to conventional Stewart platforms. Furthermore, an experimental evaluation is presented in which multimodal human control behavior is studied by identifying the visual and vestibular responses of participants in a roll-lateral helicopter hover task. The results show that the simulator motion allows participants to increase tracking performance by changing their control strategy, shifting from reliance on visual error perception to reliance on simulator motion cues. The MPI CyberMotion Simulator has proven to be a state-of-the-art motion simulator for psychophysical research to study humans with various experimental paradigms, ranging from passive perception experiments to active control tasks, such as driving a car or flying a helicopter.