• Title/Summary/Keyword: 3D Tracking

768 search results, processing time 0.04 seconds

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.120-133
    • /
    • 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. The exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the phase of synthesizing the facial expression, the variations of the major facial feature points of the face images are tracked by using optical flow and the variations are retargeted to the 3D face model. At the same time, we exploit the RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. From the experiments, we can prove that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
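The RBF-based local deformation described above can be sketched with a Gaussian radial basis function interpolator: tracked displacements at the major feature points are propagated smoothly to the surrounding mesh vertices. This is a minimal NumPy sketch; the kernel width and the toy feature-point layout are illustrative, not the paper's values.

```python
import numpy as np

def rbf_deform(feature_pts, displacements, mesh_pts, sigma=1.0):
    """Propagate tracked feature-point displacements to nearby mesh
    vertices with a Gaussian radial basis function."""
    # Pairwise distances between feature points, to solve for RBF weights
    d_ff = np.linalg.norm(feature_pts[:, None] - feature_pts[None, :], axis=-1)
    phi_ff = np.exp(-(d_ff / sigma) ** 2)
    # One weight vector per displacement component
    weights = np.linalg.solve(phi_ff, displacements)
    # Evaluate the interpolant at every mesh vertex
    d_mf = np.linalg.norm(mesh_pts[:, None] - feature_pts[None, :], axis=-1)
    phi_mf = np.exp(-(d_mf / sigma) ** 2)
    return mesh_pts + phi_mf @ weights

# Toy example: two tracked feature points pull the surrounding vertices along.
feats = np.array([[0.0, 0.0], [2.0, 0.0]])
disp = np.array([[0.0, 0.5], [0.0, -0.5]])
mesh = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
deformed = rbf_deform(feats, disp, mesh)
```

At the feature points themselves the interpolation is exact, while vertices in between receive a blended displacement, which is the "indirect estimation of the regional feature points" the abstract refers to.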

Development of 3D Laser Welding System (3차원 레이저 용접시스템 개발)

  • Kang H.S.;Suh J.;Lee J.H.;LEE M.Y.;Jung B.H.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2005.06a
    • /
    • pp.932-935
    • /
    • 2005
  • Three-dimensional laser welding technology for lightweight car bodies is studied. A robot, a seam tracking system, and a 4 kW CW Nd:YAG laser make up the three-dimensional robot laser welding system: the laser is a Trumpf 4 kW Nd:YAG (HL4006D), the robot is an ABB IRB6400R, and the seam tracking system is a ServoRobot SMRT-20LS. The welding joints on steel plate are butt and lap joints. The 3D welding of a non-linear tailored blank is performed after bead-on-plate observation experiments. Finally, the welding process for the non-linear tailored blank and the front side member is developed.


Stabilization Loop Design Method on Dynamic Platform

  • Kwon, Young-Shin;Kim, Doh-Hyun;Kim, Lee-Han;Hwang, Hong-Yeon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.156.5-156
    • /
    • 2001
  • A stabilized tracking platform in a missile, consisting of a flat planar antenna, pitch/yaw gimbals, gear trains, and current-controlled DC drive motors for the pitch and yaw gimbals, must be able to track a target, acting as an inertial sensor, in the presence of missile body motion such as maneuvering and vibration. For this reason, tracking a target from a dynamic platform requires a servo architecture that includes an outer tracking loop (position loop) and an inner rate loop that stabilizes the line of sight (LOS). This paper presents a gimbaled platform model including nonlinear phenomena due to viscous and Coulomb friction, based on experimental data and the torque equilibrium equation, and the design concept for the inner tacholoop with a P-controller structure ...
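The outer position loop / inner rate loop architecture described above can be sketched as a cascade of proportional controllers driving a first-order gimbal rate model. All gains, time constants, and the constant body-rate disturbance below are illustrative, not the paper's identified values.

```python
# Minimal cascade: the outer position (tracking) loop commands a rate,
# and the inner rate loop stabilizes the line of sight against body motion.
KP_POS = 4.0    # outer-loop proportional gain (illustrative)
KP_RATE = 20.0  # inner-loop proportional gain (illustrative)
TAU = 0.05      # gimbal rate-response time constant [s]
DT = 0.001      # Euler integration step [s]

def simulate(target=0.5, steps=5000, body_rate=0.2):
    """Simulate the cascaded loops under a constant body-rate disturbance."""
    angle, rate = 0.0, 0.0
    for _ in range(steps):
        rate_cmd = KP_POS * (target - angle)   # outer position loop
        torque = KP_RATE * (rate_cmd - rate)   # inner rate (tacho) loop
        rate += DT * (torque - rate) / TAU     # first-order gimbal model
        angle += DT * (rate - body_rate)       # LOS angle disturbed by body motion
    return angle

final = simulate()
```

With pure P control in both loops, the constant disturbance leaves a small steady-state pointing error (here the angle settles near 0.4475 rad instead of 0.5), which is why practical designs add integral action or disturbance feedforward.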


Prospect of Film Industry by Digital Technology -Focusing on 'UD(Ultra-Definition)' and 'Head Tracking (3D)'- (디지털 기술에 의한 영화산업의 전망 -'UD(Ultra-Definition)'와 'Head Tracking (3D)'를 중심으로-)

  • Kim, jin-wook
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2013.05a
    • /
    • pp.329-330
    • /
    • 2013
  • Early film was, at the time, a tool that showed reality as it was, and it was unique among the many tools with which people sought to represent reality. Over time, however, the development of television, then cable TV, and today the Internet and personal smart mobile displays eroded that function, and in search of a survival strategy film has continuously pursued, from the past to the present, a 'new screen' in the spatial dimension and 'new technologies' of image processing in the dimension of representing reality. With many media now performing functions similar to film, the number of media seeking to share these new technologies has also grown, and the key question has become who demonstrates and preempts a new technology first. Under these conditions, the technologies that 'film' can choose for the next ten years will likely be 'UD (Ultra-Definition)' on the side of evolving displays and 'Head Tracking 3D' on the side of imaging technology. This study forecasts the future of the film industry in functional and economic terms with respect to these two technologies.


Quantification of Fibers through Automatic Fiber Reconstruction from 3D Fluorescence Confocal Images

  • Park, Doyoung
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.1
    • /
    • pp.25-36
    • /
    • 2020
  • Motivation: Fibers, as extracellular filamentous structures, determine the shape of cytoskeletal structures. Their characterization and reconstruction from a 3D cellular image provide very useful quantitative information at the cellular level. In this paper, we present a novel automatic method to extract the fiber diameter distribution through a pipeline that reconstructs fibers from 3D fluorescence confocal images. The pipeline is composed of four steps: segmentation, skeletonization, template fitting, and fiber tracking. Segmentation of fibers is achieved by defining an energy based on a tensor voting framework. After skeletonizing the segmented fibers, we fit a template to each seed point. The fiber tracking step then reconstructs fibers by finding the best match for the next fiber segment from the previous template. Thus, we define a fiber as a set of templates, from which we calculate the diameter distribution of the fibers.
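The fiber tracking step of such a pipeline can be sketched as a greedy chain: starting from a seed skeleton point, repeatedly pick the nearest unvisited point within a gap threshold as the next segment. This is an illustrative stand-in for the paper's template-matching step, with toy 2D coordinates.

```python
import numpy as np

def track_fiber(points, seed_idx, max_gap=1.5):
    """Greedy fiber tracking: from a seed skeleton point, repeatedly
    take the nearest unvisited point within `max_gap` as the next
    fiber segment; stop when no continuation is close enough."""
    remaining = set(range(len(points)))
    remaining.remove(seed_idx)
    chain = [seed_idx]
    while remaining:
        cur = points[chain[-1]]
        dists = {j: np.linalg.norm(points[j] - cur) for j in remaining}
        j_best = min(dists, key=dists.get)
        if dists[j_best] > max_gap:   # no close continuation: fiber ends
            break
        chain.append(j_best)
        remaining.remove(j_best)
    return chain

# Toy skeleton: five roughly collinear points plus one far-away outlier,
# which the gap threshold correctly excludes from the fiber.
pts = np.array([[0, 0], [1, 0.1], [2, 0], [3, -0.1], [4, 0], [10, 10]], float)
fiber = track_fiber(pts, seed_idx=0)
```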

A Study on Performance Improvement of Target Motion Analysis using Target Elevation Tracking and Fusion in Conformal Array Sonar (컨포멀 소나에서의 표적고각 추적 및 융합을 이용한 표적기동분석 성능향상 연구)

  • Lee, HaeHo;Park, GyuTae;Shin, KeeCheol;Cho, SungIl
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.22 no.3
    • /
    • pp.320-331
    • /
    • 2019
  • In this paper, we propose a method of improving TMA (Target Motion Analysis) performance using target elevation tracking and fusion in a conformal array sonar. One of the most important characteristics of a conformal array sonar is that it can detect target elevation with a vertical beam. This characteristic makes it possible to obtain the target range and so maximize the advantages of the proposed TMA technology. The proposed techniques include target tracking, target fusion, and calculation of target range from multipath, as well as TMA. A simulation study demonstrates the outstanding performance of the proposed techniques.
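The idea of recovering target range from vertically resolved arrivals can be illustrated with flat-surface multipath geometry: the direct path and the surface-reflected path arrive at different elevation angles, and the two angles together fix the horizontal range without knowing the target depth in advance. This is a simplified isovelocity, straight-line sketch, not the paper's algorithm.

```python
import math

def range_from_elevations(sensor_depth, theta_direct, theta_surface):
    """Horizontal range from the elevation angles (radians, downward
    positive) of the direct and surface-reflected arrivals, assuming a
    flat sea surface and straight-line (isovelocity) propagation."""
    # Direct path:            tan(theta_direct)  = (d_target - d_sensor) / r
    # Surface bounce (image): tan(theta_surface) = (d_target + d_sensor) / r
    # Subtracting the two relations eliminates the unknown target depth.
    r = 2.0 * sensor_depth / (math.tan(theta_surface) - math.tan(theta_direct))
    d_target = sensor_depth + r * math.tan(theta_direct)
    return r, d_target

# Consistency check: synthesize angles for a known geometry, then invert.
ds, dt, r_true = 100.0, 300.0, 2000.0
th_d = math.atan2(dt - ds, r_true)
th_s = math.atan2(dt + ds, r_true)
r_est, dt_est = range_from_elevations(ds, th_d, th_s)
```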

A object tracking based robot manipulator built on fast stereo vision

  • Huang, Hua;Won, Sangchul
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.99.5-99
    • /
    • 2002
  • A 3-D object tracking framework; a fast stereo vision system is used for range images; the CONDENSATION algorithm is used to track the object; a superquadrics model is used to recognize the object; the target objects resemble coils in steel works.
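The CONDENSATION algorithm named above is a particle filter: propagate a weighted sample set through a motion model, reweight the samples by an observation likelihood, and resample. The following is a minimal 1-D sketch with an illustrative random-walk motion model and Gaussian likelihood, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, observation,
                      motion_std=0.1, obs_std=0.2):
    """One CONDENSATION (particle filter) cycle: resample, predict, measure."""
    n = len(particles)
    # 1. Resample proportionally to the current weights
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. Predict: diffuse through a simple random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, n)
    # 3. Measure: reweight by a Gaussian observation likelihood
    weights = np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    weights /= weights.sum()
    return particles, weights

# Track a stationary "object" at x = 1.0 starting from a broad prior.
particles = rng.uniform(-5.0, 5.0, 500)
weights = np.full(500, 1.0 / 500)
for _ in range(30):
    particles, weights = condensation_step(particles, weights, observation=1.0)
estimate = float(np.sum(particles * weights))
```

After a few cycles the weighted sample set concentrates around the true position; the same three-step cycle generalizes directly to 3-D pose states.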


Vanishing point-based 3D object detection method for improving traffic object recognition accuracy

  • Jeong-In, Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.1
    • /
    • pp.93-101
    • /
    • 2023
  • In this paper, we propose a method of creating a 3D bounding box for an object using vanishing points, to increase the accuracy of object recognition when traffic objects are recognized in video from a traffic camera. This 3D bounding-box generation algorithm is applied when vehicles captured by a traffic video camera are to be detected using artificial intelligence. The vertical vanishing point (VP1) and horizontal vanishing point (VP2) are derived by analyzing the camera installation angle and the direction of the captured image, and on this basis the moving objects in the video under analysis are specified. With this algorithm it is easy to detect object information such as the location, type, and size of a detected object, and for moving objects such as cars, tracking yields the location, coordinates, movement speed, and direction of each object. In application to actual roads, tracking improved by 10%; in particular, the recognition rate and tracking of occluded objects (very small vehicle parts hidden by large cars) improved by 100%, and the accuracy of traffic data analysis was improved.
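A vanishing point such as VP1 or VP2 above is simply the image-plane intersection of scene-parallel lines. In homogeneous coordinates it is the cross product of two line vectors, where each line is itself the cross product of two image points. This is a minimal NumPy sketch; the pixel coordinates are illustrative.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line1, line2):
    """Intersection of two homogeneous lines, dehomogenized to pixels."""
    vp = np.cross(line1, line2)
    return vp[:2] / vp[2]

# Two lane markings that are parallel on the road converge in the image.
left = line_through((100.0, 600.0), (300.0, 300.0))
right = line_through((700.0, 600.0), (500.0, 300.0))
vp = vanishing_point(left, right)
```

The box construction then extends rays from the detected object's image footprint toward VP1 and VP2 to recover its 3D extent.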

Accuracy of simulation surgery of Le Fort I osteotomy using optoelectronic tracking navigation system (광학추적항법장치를 이용한 르포씨 제1형 골절단 가상 수술의 정확성에 대한 연구)

  • Bu, Yeon-Ji;Kim, Soung-Min;Kim, Ji-Youn;Park, Jung-Min;Myoung, Hoon;Lee, Jong-Ho;Kim, Myung-Jin
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.37 no.2
    • /
    • pp.114-121
    • /
    • 2011
  • Introduction: The aim of this study was to demonstrate that simulation surgery on a rapid prototype (RP) model, based on three-dimensional computed tomography (3D CT) data taken before surgery, has the same accuracy as traditional orthognathic surgery with an intermediate splint, using an optoelectronic tracking navigation system. Materials and Methods: Simulation surgery with the same treatment plan as the Le Fort I osteotomy performed on the patient was done on RP models based on the 3D CT data of 12 patients who had undergone a Le Fort I osteotomy in the Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital. The 12 distances between 4 points on the skull (both infraorbital foramina and both supraorbital foramina) and 3 points on the maxilla (the contact point of the maxillary central incisors and the mesiobuccal cusp tips of both maxillary first molars) were tracked using an optoelectronic tracking navigation system. The distances before surgery were compared to evaluate the accuracy of the RP model, and the changes in distance in the 3D CT images after surgery were compared with those of the RP model after simulation surgery. Results: A paired t-test revealed a significant difference between the distances in the 3D CT image and the RP model before surgery (P<0.0001). On the other hand, Pearson's correlation coefficient, 0.995, revealed a significant positive correlation between the distances (P<0.0001). There was a significant difference between the pre- to post-operative changes in distance in the 3D CT image and in the RP model (P<0.05), with a Pearson's correlation coefficient of 0.13844, indicating a positive correlation (P<0.1). Conclusion: These results suggest that simulation surgery of a Le Fort I osteotomy using an optoelectronic tracking navigation system is relatively accurate when the pre- and post-operative 3D CT data are compared. Furthermore, the application of an optoelectronic tracking navigation system may be a predictable and efficient method in Le Fort I orthognathic surgery.
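The two statistics used in the study, the paired t-test and Pearson's correlation coefficient, can be computed from matched measurement series as below. The distance values are invented toy data for illustration, not the study's measurements.

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched measurements (e.g. the same
    inter-landmark distance on the 3D CT image and on the RP model)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy paired distances: a small systematic offset between the two modalities
# gives a large |t| (significant difference) yet r near 1 (high correlation),
# the same pattern as the pre-operative comparison in the abstract.
ct = [10.0, 20.0, 30.0, 40.0]
rp = [11.0, 20.5, 31.0, 40.5]
t_stat = paired_t(ct, rp)
r = pearson_r(ct, rp)
```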

Luminance Change Independent 3D Snail Tracking

  • Dewi, Primastuti;Choi, Yoen-Seok;Chon, Tae-Soo;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.10a
    • /
    • pp.175-178
    • /
    • 2010
  • The slow movement of a snail can be a benefit, since less tracking speed is required to obtain an accurate movement track, but on the other hand it is difficult to extract the object because the snail is almost as static as the background. In this paper, we present a technique to track the snail using one of its common characteristics, the dark color of its shell. The technique needs to be robust to illumination change, since the experiment typically observes the snail's movement in both bright and dim conditions. The snail's position in 3D space is calculated using orthogonal stereo vision, which combines the information from two images taken by cameras at the top of and in front of the aquarium. Experimental results show that this technique needs no prior background-image extraction and is robust to gradual or sudden illumination change.
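The orthogonal stereo arrangement described here (one camera above, one in front of the aquarium) yields 3D coordinates by simple axis pairing: the top view supplies (x, y), the front view supplies (x, z), and the shared x axis lets the two detections be matched. This is a minimal sketch under those assumptions; camera calibration and scaling are omitted, and the tolerance value is illustrative.

```python
def fuse_orthogonal(top_xy, front_xz, x_tol=5.0):
    """Combine a top-view detection (x, y) and a front-view detection (x, z)
    into one 3D point, matching them by their shared x coordinate."""
    (x1, y), (x2, z) = top_xy, front_xz
    if abs(x1 - x2) > x_tol:      # views disagree: not the same object
        return None
    return ((x1 + x2) / 2.0, y, z)

# One snail seen by both cameras: x agrees to within the tolerance.
point = fuse_orthogonal(top_xy=(120.0, 80.0), front_xz=(122.0, 40.0))
```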
