• Title/Summary/Keyword: 3D tracker

Search results: 52

On 3-D measuring technique of large structure

  • Sawada, Hideyuki; Ichimura, Kazuo; Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1992.10b / pp.251-254 / 1992
  • We present a system to measure the 3-dimensional coordinates of large structures such as ships, buildings, and oil tanks. The system consists of two key units: a laser spot pointer and a laser spot tracker. By employing careful image processing, the system offers compact size, low cost, good accuracy, and robustness in hazardous environments. (A minimal triangulation sketch follows this entry.)

  • PDF
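
A minimal sketch of the triangulation that a laser-spot pointer/tracker pair of this kind typically relies on: each station reports pan/tilt angles toward the spot, and the 3-D position is taken as the midpoint of the common perpendicular of the two sighting rays. Station positions, the angle convention, and all numbers below are hypothetical, not taken from the papers.

```python
import numpy as np

def direction(pan, tilt):
    """Unit pointing vector from pan/tilt angles in radians (assumed convention)."""
    return np.array([np.cos(tilt) * np.cos(pan),
                     np.cos(tilt) * np.sin(pan),
                     np.sin(tilt)])

def triangulate(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between rays p1 + t1*d1 and p2 + t2*d2."""
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Hypothetical station placement and angle readings
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
d1 = direction(np.deg2rad(30.0), np.deg2rad(5.0))
d2 = direction(np.deg2rad(150.0), np.deg2rad(5.0))
print(triangulate(p1, d1, p2, d2))  # estimated 3-D coordinates of the laser spot
```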

3-D Measuring system of huge structures using laser spot-ray projection

  • Ishimatsu, T.; Suehiro, K.; Okazaki, C.; Ochiai, T.; Matsui, R.
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1990.10b / pp.1162-1166 / 1990
  • We present a system to measure the 3-dimensional coordinates of huge structures such as ships, buildings, and oil tanks. The two key units are a laser spot projector and a laser spot tracker. By employing careful image processing, the system offers compactness, low cost, good accuracy, and robustness in hazardous environments.

  • PDF

A Tool Box to Evaluate the Phased Array Coil Performance Using Retrospective 3D Coil Modeling (3차원 코일 모델링을 통해 위상배열코일 성능을 평가하기 위한 프로그램)

  • Perez, Marlon; Hernandez, Daniel; Michel, Eric; Cho, Min Hyoung; Lee, Soo Yeol
    • Investigative Magnetic Resonance Imaging / v.18 no.2 / pp.107-119 / 2014
  • Purpose: To efficiently evaluate phased array coil performance using a software tool box that allows visual comparison of the sensitivity of every coil element between real experiments and EM simulation. Materials and Methods: We developed a C++- and MATLAB-based software tool called Phased Array Coil Evaluator (PACE). PACE has the following functions: building 3D models of the coil elements, importing the FDTD simulation results, and visualizing the coil sensitivity of each coil element in ordinary Cartesian coordinates and relative coil position coordinates. To build a 3D model of the phased array coil, we used an electromagnetic 3D tracker in a stylus form. After making the 3D model, we imported it into the FDTD electromagnetic field simulation tool. Results: An accurate comparison between the coil sensitivity simulation and the real experiment was made on the tool box platform through fine matching of the simulation and experiment with the aid of the 3D tracker. In the simulation and experiment, we used a 36-channel helmet-style phased array coil. For the 3D MRI data acquisition using a spoiled gradient echo sequence, we used a uniform cylindrical phantom with the same geometry as the one in the FDTD simulation. In the tool box, we can conveniently choose the coil element of interest and compare the coil sensitivities of the phased array coil element by element. Conclusion: We expect the tool box to be of great use for developing phased array coils of new geometry and for periodic maintenance of phased array coils in a more accurate and consistent manner.
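
PACE itself is not available here, but the element-by-element comparison the abstract describes can be illustrated with a small sketch: given a simulated and a measured sensitivity volume for each coil element, normalize both by their peak and report a simple per-element agreement metric. The data layout (dicts of arrays) and the metrics are assumptions for illustration, not PACE's actual interface.

```python
import numpy as np

def normalized(sens):
    """Normalize a coil sensitivity volume by its peak magnitude."""
    mag = np.abs(sens)
    return mag / mag.max()

def compare_elements(simulated, measured):
    """Per-element agreement between simulated and measured sensitivity maps.

    simulated, measured: dicts mapping element index -> 3D numpy array
    (hypothetical layout; the abstract does not describe PACE's file format).
    """
    report = {}
    for idx in simulated:
        s, m = normalized(simulated[idx]), normalized(measured[idx])
        rmse = np.sqrt(np.mean((s - m) ** 2))
        corr = np.corrcoef(s.ravel(), m.ravel())[0, 1]
        report[idx] = {"rmse": rmse, "correlation": corr}
    return report

# Example with synthetic volumes standing in for 2 of the 36 channels
rng = np.random.default_rng(0)
sim = {k: rng.random((16, 16, 16)) for k in range(2)}
meas = {k: sim[k] + 0.05 * rng.standard_normal((16, 16, 16)) for k in range(2)}
print(compare_elements(sim, meas))
```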

Cross-covariance 3D Coordinate Estimation Method for Virtual Space Movement Platform (가상공간 이동플랫폼을 위한 교차 공분산 3D 좌표 추정 방법)

  • Jung, HaHyoung; Park, Jinha; Kim, Min Kyoung; Chang, Min Hyuk
    • Journal of Korea Society of Industrial Information Systems / v.25 no.5 / pp.41-48 / 2020
  • Recently, as demand in the mobile platform market for virtual/augmented/mixed reality grows, experiential content that gives users a real-world feel through a virtual environment is drawing attention. In this paper, to track a tracker for user location estimation on a virtual space movement platform for motion capture of trainees, we present a method of estimating 3D coordinates using cross covariance from the coordinates of markers projected onto the image. The validity of the proposed algorithm is verified through rigid-body tracking experiments.
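
The abstract does not spell out the cross-covariance formulation, so the sketch below only illustrates the step it builds on: recovering a marker's 3D coordinates from its projections in two calibrated cameras via linear (DLT) triangulation. The projection matrices and pixel coordinates are hypothetical.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two camera projections.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

# Hypothetical calibration: two cameras 1 m apart, both looking down the z axis
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, 0.1, 4.0, 1.0])
p1h, p2h = P1 @ X_true, P2 @ X_true
x1, x2 = p1h[:2] / p1h[2], p2h[:2] / p2h[2]
print(triangulate_dlt(P1, P2, x1, x2))  # ~ [0.3, 0.1, 4.0]
```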

Development of electric vehicle maintenance education ability using digital twin technology and VR

  • Lee, Sang-Hyun; Jung, Byeong-Soo
    • International Journal of Advanced Culture Technology / v.8 no.2 / pp.58-67 / 2020
  • In this paper, an EV maintenance training manual was produced using digital twin technology and various sensors such as IR-based lighthouse tracking and a head tracker. In addition, digital twin technology and VR were used to provide high immersion to users, and sensory content creation technology was secured through animation and effects suited to EV maintenance situations. The EV maintenance training manual takes the form of a training simulation built with 3D engine programming, featuring real-time creation of 3D objects, minimization of on-screen obstructions, and selection of specific menus in virtual space. Content was also produced so that users can operate it easily, with automatic output to the head-mounted display (HMD) during EV maintenance and inspection. This technology can enhance user immersion through detailed maintenance/inspection scenarios for EVs, step-by-step 3D parts display, and animations and effects for maintenance situations. Through this study, trainees improved the quality of their education, became familiar with safety precautions and correct maintenance procedures, and learned naturally how to use the equipment and how to maintain EVs.

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard; Choi, Jongmoo; Labeau, Matthieu; Leksut, Jatuporn Toy; Meng, Lingchao
    • Journal of Information and Communication Convergence Engineering / v.11 no.3 / pp.207-215 / 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments on facial expression recognition using both frame-based and sequence-based approaches. Our method achieves a 75.9% recognition rate on 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
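
A compact sketch of the tracking loop the abstract describes. All callables and model methods are placeholders standing in for the paper's modules (landmark detector, 3D head tracker, 2D tracker, expression classifier), not an actual API.

```python
def tracking_loop(frames, generic_face, detect_landmarks_2d, track_head_pose,
                  track_landmarks_2d, classify_expression):
    """One pass of the 3D/2D landmark tracking loop described in the abstract.

    `generic_face` is assumed to expose fit_to / project / update_from_2d;
    the four callables are hypothetical stand-ins for the paper's modules.
    """
    # 1. Initialize: detect landmarks on a neutral frontal face and deform the
    #    generic 3D face so its projected landmarks match them.
    landmarks_2d = detect_landmarks_2d(frames[0])
    face_3d = generic_face.fit_to(landmarks_2d)

    results = []
    for frame in frames[1:]:
        # 2. Track the head in 3D and predict 2D landmark positions by
        #    projecting the 3D face model with the updated pose.
        pose = track_head_pose(frame, face_3d)
        predicted_2d = face_3d.project(pose)

        # 3. Refine with the 2D tracker, then back-project the tracked 2D
        #    landmarks to update the 3D landmarks.
        landmarks_2d = track_landmarks_2d(frame, predicted_2d)
        face_3d = face_3d.update_from_2d(landmarks_2d, pose)

        # 4. Classify the expression from the current landmark configuration.
        results.append(classify_expression(landmarks_2d, face_3d))
    return results
```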

Autonomous Stereo Object Tracking using BMA and JTC

  • Lee, Jae-Soo; Ko, Jung-Hwan; Kim, Eun-Soo
    • Korean Information Display Society: Conference Proceedings / 2000.01a / pp.79-80 / 2000
  • A general stereo vision system displays scenes in 3D using two views, left and right. When the viewpoints of the left and right sides do not agree with each other, the display fatigues the eyes and prevents the viewer from perceiving depth. It is also difficult to track moving objects that are not in the middle of the screen. Therefore, the object tracking function of a stereo vision system should keep the tracked object in the middle of the screen while controlling the convergence angle toward moving objects in the input images of the left/right cameras. In this paper, an object tracker for a stereo vision system is presented that tracks moving objects using a block matching algorithm for preprocessing and a JTC (joint transform correlator). (A minimal block matching sketch follows this entry.)

  • PDF
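
The abstract names block matching as the preprocessing step; below is a minimal sum-of-absolute-differences (SAD) block matcher over a local search window. The window size, search radius, and the idea of steering pan/tilt from the match offset are assumptions for illustration; the JTC correlation stage is not reproduced.

```python
import numpy as np

def block_match(reference_block, frame, center, search_radius=16):
    """Locate `reference_block` in `frame` by minimizing SAD around `center`.

    reference_block: 2D array (template of the tracked object)
    frame: 2D grayscale image; center: (row, col) top-left of the previous match.
    """
    bh, bw = reference_block.shape
    ref = reference_block.astype(float)
    best, best_pos = np.inf, center
    r0, c0 = center
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + bh > frame.shape[0] or c + bw > frame.shape[1]:
                continue
            sad = np.abs(frame[r:r + bh, c:c + bw].astype(float) - ref).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    # The offset (best_pos - center) could drive the pan/tilt so the object
    # stays centered in the left/right views.
    return best_pos
```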

3-D Object Tracking using 3-D Information and Optical Correlator in the Stereo Vision System (스테레오 비젼 시스템에서 3차원정보와 광 상관기를 이용한 3차원 물체추적 방법)

  • 서춘원; 이승현; 김은수
    • Journal of Broadcast Engineering / v.7 no.3 / pp.248-261 / 2002
  • In this paper, we propose a new 3-dimensional (3-D) object-tracking algorithm that can control a stereo camera using a variable window mask, 3-D information, and an optical BPEJTC. The 3-D information inherent in a stereo vision system, namely the distance from the stereo camera to the tracked object, can be easily acquired from the elements of the stereo vision system, and with this information we can extract the area of the tracked object by varying the window mask. This extracted area of the tracked object is used as the next updated reference image. Furthermore, by carrying out an optical BPEJTC between the reference image and a stereo input image, the coordinates of the tracked object's location can be acquired, and with these values 3-D object tracking can be accomplished by manipulating the convergence angle and the pan/tilt of the stereo camera. The experimental results show that the proposed algorithm can execute 3-D object tracking by extracting the area of the target object from an input image independently of the background noise in the stereo input image. Moreover, a possible implementation of 3-D tele-working or an adaptive 3-D object tracker using the proposed algorithm is suggested.
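
The optical BPEJTC stage cannot be reproduced in a few lines, but the distance-dependent variable window mask the abstract relies on can be sketched digitally: the window shrinks as the object recedes, and the masked crop becomes the next reference image. The scaling constants below are hypothetical.

```python
import numpy as np

def window_size(distance_m, base_size=128, reference_distance_m=2.0, min_size=16):
    """Window mask side length that shrinks inversely with object distance."""
    size = int(base_size * reference_distance_m / max(distance_m, 1e-6))
    return max(min_size, min(base_size, size))

def extract_reference(frame, center, distance_m):
    """Crop the distance-sized window around the tracked object.

    frame: 2D image array; center: (row, col) of the tracked object.
    The returned crop would serve as the updated reference image for the
    next correlation step.
    """
    half = window_size(distance_m) // 2
    r, c = center
    r0, r1 = max(0, r - half), min(frame.shape[0], r + half)
    c0, c1 = max(0, c - half), min(frame.shape[1], c + half)
    return frame[r0:r1, c0:c1]
```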

Online Multi-view Range Image Registration using Geometric and Photometric Feature Tracking (3차원 기하정보 및 특징점 추적을 이용한 다시점 거리영상의 온라인 정합)

  • Baek, Jae-Won; Moon, Jae-Kyoung; Park, Soon-Yong
    • The KIPS Transactions: Part B / v.14B no.7 / pp.493-502 / 2007
  • An on-line registration technique is presented to register multi-view range images for the 3D reconstruction of real objects. Using a range camera, we first acquire range images and photometric images continuously. In the range images, we separate object and background regions using a predefined threshold value. For the coarse registration of the range images, the centroids of the images are used. After refining the registration of the range images using a projection-based technique, we use a modified KLT (Kanade-Lucas-Tomasi) tracker to match photometric features in the object images. Using the modified KLT tracker, we can track image features quickly and accurately. If a range image fails to register, we acquire new range images and try to register them continuously until the registration process resumes. After enough range images are registered, they are integrated into a 3D model in an offline step. Experimental results and error analysis show that the proposed method can reconstruct a 3D model quickly and accurately.
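
The coarse step the abstract mentions, aligning two range images by their centroids, is simple enough to sketch; the projection-based refinement and the modified KLT feature matching are not reproduced here.

```python
import numpy as np

def coarse_register_by_centroid(source_points, target_points):
    """Translate `source_points` so its centroid coincides with that of `target_points`.

    Both inputs are (N, 3) arrays of object-region points (background already
    removed by thresholding, as in the paper). Returns the shifted source and
    the applied translation; rotation is left to the later refinement step.
    """
    t = target_points.mean(axis=0) - source_points.mean(axis=0)
    return source_points + t, t

# Toy usage: a shifted copy of the same cloud registers exactly
rng = np.random.default_rng(1)
target = rng.random((500, 3))
source = target + np.array([0.2, -0.1, 0.05])
aligned, t = coarse_register_by_centroid(source, target)
print(t)  # ~ [-0.2, 0.1, -0.05]
```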

Evaluation of Robot Calibration Performance based on a Three Dimensional Small Displacement Measuring Sensor (3차원 미소변위센서 기반 로봇 캘리브레이션 성능 검토)

  • Nguyen, Hoai-Nhan; Kang, Hee-Jun
    • Journal of Institute of Control, Robotics and Systems / v.20 no.12 / pp.1267-1271 / 2014
  • There have been many autonomous robot calibration methods that form closed-loop structures through various attached sensors and mechanical fixtures. Among them, single point calibration has been used for on-site calibration because it is convenient to implement: the robot can reach a single point with infinitely many configurations, so the single point calibration algorithm can be set up and implemented more easily than the other methods. However, it is still not easy to drive the robot's sharp edge to the corresponding edge of the fixture, and this is an error-prone process. In this paper, we propose a three-dimensional small displacement measuring sensor and a robot calibration algorithm based on this sensor. This method relieves the difficulty of matching the two edges in single point calibration and improves the resulting robot accuracy. A simulation study is carried out on a Hyundai HA06 robot to show the effectiveness of the proposed method over single point calibration. The resulting robot accuracy is also compared with that from 3D laser tracker based calibration to show the dependence of robot accuracy on the range of the workspace where the measurement data are collected.
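
The paper's calibration model is not given in the abstract, but sensor-based kinematic calibration generally reduces to an iterative linearized least-squares identification: stack the measured-minus-predicted displacements into a residual vector and update the kinematic parameters through the identification Jacobian. A generic sketch, with the forward kinematics and Jacobian left as placeholders:

```python
import numpy as np

def calibrate(params0, configs, measured_positions, forward_kinematics,
              identification_jacobian, iterations=10):
    """Iterative least-squares identification of kinematic parameters.

    params0: initial kinematic parameter vector (e.g. nominal DH values)
    configs: list of joint configurations used at the measurement fixture
    measured_positions: (N, 3) end-effector positions from the displacement sensor
    forward_kinematics(params, q) -> predicted 3D position (placeholder)
    identification_jacobian(params, q) -> 3 x len(params) matrix (placeholder)
    """
    params = np.asarray(params0, dtype=float).copy()
    for _ in range(iterations):
        residuals, jacobians = [], []
        for q, meas in zip(configs, measured_positions):
            residuals.append(meas - forward_kinematics(params, q))
            jacobians.append(identification_jacobian(params, q))
        r = np.concatenate(residuals)   # stacked 3N residual vector
        J = np.vstack(jacobians)        # stacked 3N x P identification Jacobian
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        params = params + delta
        if np.linalg.norm(delta) < 1e-9:  # converged
            break
    return params
```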