• Title, Summary, Keyword: 3D position coordinate

Search Results: 115

The Position Estimation of a Body Using 2-D Slit Light Vision Sensors (2-D 슬리트광 비젼 센서를 이용한 물체의 자세측정)

  • Kim, Jung-Kwan;Han, Myung-Chul
    • Journal of the Korean Society for Precision Engineering, v.16 no.12, pp.133-142, 1999
  • We introduce algorithms for 2-D and 3-D position estimation using 2-D vision sensors. The sensors used in this research project a red laser slit light onto the body, which makes it convenient to obtain the coordinates of a corner point or edge in the sensor coordinate system. Since the measured points are generally not fixed in the body coordinate system, additional conditions (that corner lines or edges are straight and fixed in the body coordinate system) are used to determine the position and orientation of the body. For a body moving in a plane, the solution can be found analytically; for a body moving in 3-D, a linearization technique and the least-mean-squares method are used because of the strong nonlinearity.
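The planar case the abstract mentions admits an analytic solution. Below is a minimal sketch of how one slit-light measurement of a straight edge plus a corner point fixes the in-plane pose; the function name and measurement interface are assumptions, not the paper's actual formulation.

```python
import math

def planar_pose(edge_dir_sensor, edge_dir_body, corner_sensor, corner_body):
    """Recover the planar pose (theta, tx, ty) of a body from one slit-light
    measurement: the direction of a straight edge and one corner point in the
    sensor frame, plus their known body-frame counterparts. Illustrative
    sketch only, not the paper's formulation."""
    # Orientation: angle rotating the body-frame edge direction onto the
    # measured sensor-frame direction.
    theta = (math.atan2(edge_dir_sensor[1], edge_dir_sensor[0])
             - math.atan2(edge_dir_body[1], edge_dir_body[0]))
    c, s = math.cos(theta), math.sin(theta)
    # Translation from corner_sensor = R(theta) @ corner_body + t
    tx = corner_sensor[0] - (c * corner_body[0] - s * corner_body[1])
    ty = corner_sensor[1] - (s * corner_body[0] + c * corner_body[1])
    return theta, tx, ty
```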


Position Detection Algorithm for Auto-Landing Containers by Laser-Sensor, Part I: 3-D Measurement (컨테이너의 자동랜딩을 위한 레이저센서 기반의 절대위치 검출 알고리즘: 3차원 측정 (Part I))

  • Hong, Keum-Shik;Lim, Sung-Jin;Hong, Kyung-Tae
    • Journal of Ocean Engineering and Technology, v.21 no.4, pp.45-54, 2007
  • In the context of auto-landing containers from a container ship to a truck or automatic guided vehicle and vice versa, this research investigates three schemes, one in Part I and two in Part II, for measuring the absolute position of a container. Coordinate transformations between the reference-coordinate, sensor-coordinate, and body-coordinate systems are briefly discussed. The scheme explored in Part I employs three laser-slit sensors, which are relatively inexpensive. In this case, nine nonlinear equations are formulated for six unknown variables (three for orientation and three for position), so a closed-form solution is not available; instead, an approximate solution is derived through linearization. An advantage of the method in Part I is its ability to measure an absolute position in 3-D space, while a disadvantage is the computation time required to obtain pseudo-inverses and the approximate nature of the obtained solution. Numerical examples are provided.
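The linearized least-squares iteration the abstract describes (nine residual equations in six pose unknowns, solved with a pseudo-inverse) can be sketched generically. The actual laser-slit residuals are not reproduced in the abstract, so `residual` and `jacobian` below are stand-ins.

```python
import numpy as np

def gauss_newton_pose(residual, jacobian, x0, iters=20):
    """Linearized least-squares iteration of the kind the abstract describes:
    with 9 residual equations in 6 pose unknowns there is no closed form, so
    each step solves the linearized system via a pseudo-inverse. `residual`
    and `jacobian` are placeholders for the three-sensor model."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)                # shape (9,)
        J = jacobian(x)                # shape (9, 6)
        x = x - np.linalg.pinv(J) @ r  # pseudo-inverse update
    return x
```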

The Position Estimation of a Car Using 2D Vision Sensors (2D 비젼 센서를 이용한 차체의 3D 자세측정)

  • Han, Myung-Chul;Kim, Jung-Kwan
    • Proceedings of the Korean Society of Precision Engineering Conference, pp.296-300, 1996
  • This paper presents a 3-D position estimation algorithm using the images of 2-D vision sensors, which project a red laser slit light and receive the line images. Since the sensor usually measures the 2-D position of a corner (or edge) of a body, and the measured point is not fixed in the body, additional information about the corner (or edge) is used: the corner (or edge) line is straight and fixed in the body. For a body that moves in a plane, the transformation matrix between the body coordinate system and the reference coordinate system is found analytically. For a body in 3-D motion, a linearization technique and the least-mean-squares method are used.
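The analytic planar case rests on the homogeneous transformation between the body and reference coordinate systems; a small sketch follows (illustrative notation, not the paper's).

```python
import math

def planar_transform(theta, tx, ty):
    """Homogeneous transformation between body and reference coordinates
    for a body moving in a plane. Illustrative sketch only."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def apply(T, p):
    """Map a body-frame point p = (x, y) into the reference frame."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])
```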


Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society, v.10 no.6, pp.716-725, 2007
  • In this paper we describe a motion tracking algorithm for 3-D human animation using a stereo vision system. Motion data for the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When two hands or two feet cross at any position and separate again, an adaptive algorithm recognizes which is the left one and which is the right. Real motion is motion in 3-D coordinates, whereas a mono image provides only 2-D coordinates and no distance from the camera. With stereo vision, as with human vision, 3-D motion data can be acquired: left-right and up-down motion as well as the distance of objects from the camera. Transforming to 3-D coordinates requires a depth value in addition to the x- and y-axis coordinates of the mono image; this depth value (z axis) is calculated from the stereo disparity, using only the end effectors in the images. The positions of the inner joints are then calculated, and a 3-D character can be visualized using inverse kinematics.
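The depth computation the abstract describes, disparity to z-value, reduces to one formula for an ideal rectified stereo rig; a hedged sketch with assumed parameter names:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (z axis) of an end effector from stereo disparity, under the
    standard pinhole model for a rectified stereo pair: z = f * B / d.
    Parameter names are assumptions; the paper's rig is not specified."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or invalid match")
    return f_px * baseline_m / disparity_px
```

A point seen with 35 px of disparity by a 700 px focal-length pair 12 cm apart sits at 2.4 m, illustrating how depth falls off as disparity shrinks.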


Rotor Position and Speed Estimation of Interior Permanent Magnet Synchronous Motor using Unscented Kalman Filter

  • An, Lu;Hameyer, Kay
    • Journal of International Conference on Electrical Machines and Systems, v.3 no.4, pp.458-464, 2014
  • This paper proposes rotor position and rotor speed estimation for an interior permanent magnet synchronous machine (IPMSM) using an Unscented Kalman Filter (UKF) in the alpha-beta coordinate system. Conventional UKF algorithms are based on a simple observer model of the IPMSM in the d-q coordinate system, in which rotor acceleration is neglected within the sampling step. Expanding the observer model to the alpha-beta coordinate system while accounting for rotor speed variation improves the rotor position and speed estimation. The results show good stability for the expanded observer model of the IPMSM.
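The unscented transform at the heart of a UKF propagates a deterministic set of sigma points instead of linearizing the model. A generic sigma-point sketch (not the paper's IPMSM observer model):

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled sigma points and mean weights for the unscented transform:
    2n+1 points spread around the state x according to covariance P.
    Generic sketch; the IPMSM state and noise models are not included."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)   # matrix square root of scaled P
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wm[0] = lam / (n + lam)                 # center-point weight
    return np.array(pts), Wm
```

Passing the points through the nonlinear motor model and re-averaging with the weights recovers the predicted mean without computing Jacobians, which is the UKF's advantage over an extended Kalman filter here.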

Computerized General Hospital Management System-MEDIOS (종합병원관리 전산화 System-MEDIOS)

  • 이승훈
    • Journal of Biomedical Engineering Research, v.3 no.1, pp.55-58, 1982
  • In this paper, a method for camera position estimation in the gaster using an electroendoscopic image sequence is proposed. In order to obtain proper image sequences, the gaster is divided into three sections. Camera position modeling for 3-D information extraction is presented, and image distortion due to the endoscopic lenses is corrected. The feature points are represented with respect to the reference coordinate system with an error rate below 10 percent. A fast distortion correction algorithm is also proposed; it uses an error table, which is faster than the coordinate transform method using n-th order polynomials.
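The error-table idea, precomputing corrections so that undistortion becomes a lookup rather than an n-th-order polynomial evaluation per pixel, might be sketched as follows. The grid resolution and nearest-cell lookup are assumptions; the paper's table layout is not given.

```python
def build_error_table(distort_map, width, height, step=8):
    """Precompute a coarse grid of correction offsets. `distort_map` tells
    where an ideal pixel lands after lens distortion; the stored offset
    undoes it. Sketch only: resolution and lookup scheme are assumptions."""
    table = {}
    for y in range(0, height, step):
        for x in range(0, width, step):
            xd, yd = distort_map(x, y)                        # distorted position
            table[(x // step, y // step)] = (x - xd, y - yd)  # correction offset
    return table

def correct(table, x, y, step=8):
    """Undistort a pixel by nearest-cell table lookup (no polynomial)."""
    dx, dy = table[(x // step, y // step)]
    return x + dx, y + dy
```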


Robust Pelvic Coordinate System Determination for Pose Changes in Multidetector-row Computed Tomography Images

  • Kobashi, Syoji;Fujimoto, Satoshi;Nishiyama, Takayuki;Kanzaki, Noriyuki;Fujishiro, Takaaki;Shibanuma, Nao;Kuramoto, Kei;Kurosaka, Masahiro;Hata, Yutaka
    • International Journal of Fuzzy Logic and Intelligent Systems, v.10 no.1, pp.65-72, 2010
  • For developing navigation systems for total hip arthroplasty (THA) and evaluating hip joint kinematics, the 3-D pose of the femur and acetabulum has been quantified in the pelvic coordinate system. The pelvic coordinate system is conventionally determined by manually indicating pelvic landmarks in multidetector-row computed tomography (MDCT) images, which introduces intra- and inter-observer variability and may lead to variability in THA operation or diagnosis. To reduce this variability, this paper proposes an automated method for determining the pelvic coordinate system in MDCT images. The proposed method determines the coordinate system by detecting pelvic landmarks on the anterior pelvic plane (APP) from MDCT images, and calibrates the pelvic pose using silhouette images to suppress the effect of pelvic pose changes. Compared with manual determination, the proposed method determined the coordinate system with a mean displacement of 2.6 ± 1.6 mm and a mean angle error of 0.78 ± 0.34 deg on 5 THA subjects. For changes of pelvic pose within 10 deg, the standard deviation of displacement was 3.7 mm and that of pose was 1.28 deg. We confirmed that the proposed method is robust to pelvic pose changes.
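Determining a coordinate system from detected landmarks typically reduces to building an orthonormal frame from three points. A sketch in that spirit; the landmark names and axis conventions are assumptions, and the paper's exact APP definition may differ.

```python
import numpy as np

def pelvic_frame(asis_left, asis_right, pubis_mid):
    """Orthonormal frame from three anterior-pelvic-plane landmarks:
    left/right anterior superior iliac spines and the pubic midpoint
    (hypothetical names). Returns origin and a 3x3 rotation (rows = axes)."""
    l = np.asarray(asis_left, float)
    r = np.asarray(asis_right, float)
    p = np.asarray(pubis_mid, float)
    x = l - r
    x /= np.linalg.norm(x)          # left-right axis
    n = np.cross(x, p - r)
    n /= np.linalg.norm(n)          # normal of the APP
    y = np.cross(n, x)              # completes a right-handed frame
    origin = (l + r) / 2.0
    return origin, np.stack([x, y, n])
```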

A Study on Camera Calibration Using Artificial Neural Network (신경망을 이용한 카메라 보정에 관한 연구)

  • Jeon, Kyong-Pil;Woo, Dong-Min;Park, Dong-Chul
    • Proceedings of the KIEE Conference, pp.1248-1250, 1996
  • The objective of camera calibration is to obtain the correlation between camera image coordinates and 3-D real-world coordinates. Most calibration methods are based on a camera model consisting of physical parameters of the camera, such as position, orientation, and focal length; in this case, camera calibration means computing those parameters. In this research, we suggest a new approach that is efficient because the artificial neural network (ANN) model implicitly contains all the physical parameters, some of which are very difficult to estimate with existing calibration methods. Implicit camera calibration, that is, calibrating a camera without explicitly computing its physical parameters, can be used both for 3-D measurement and for generating image coordinates. By training on calibration points of different heights, we can find the perspective projection point, which can be used to reconstruct 3-D real-world coordinates at arbitrary heights and the image coordinates of arbitrary 3-D real-world points. An experimental comparison of our method with Tsai's well-known two-stage method verifies the effectiveness of the proposed method.
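The implicit-calibration idea, learning the image-to-world mapping directly so the physical parameters never appear, can be illustrated with a toy one-hidden-layer network trained on synthetic planar correspondences. The data, network size, and training schedule below are all assumptions, not the paper's setup.

```python
import numpy as np

# Toy implicit calibration: learn (u, v) -> (X, Y) directly from point
# correspondences, with no explicit camera parameters. Synthetic planar
# data stands in for real calibration points.
rng = np.random.default_rng(0)
uv = rng.uniform(-1, 1, (200, 2))                      # image coordinates
XY = uv @ np.array([[2.0, 0.3], [-0.4, 1.5]]) + 0.1    # unknown "camera" map

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)

def forward(inp):
    return np.tanh(inp @ W1 + b1) @ W2 + b2

mse0 = float(((forward(uv) - XY) ** 2).mean())         # error before training

lr = 0.05
for _ in range(2000):                                  # full-batch gradient descent
    h = np.tanh(uv @ W1 + b1)
    err = h @ W2 + b2 - XY                             # MSE gradient
    gW2 = h.T @ err / len(uv); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h ** 2)                   # backprop through tanh
    gW1 = uv.T @ gh / len(uv); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((forward(uv) - XY) ** 2).mean())          # error after training
```

The trained network now maps image coordinates to world coordinates without any explicit focal length or pose ever being computed, which is the sense in which the calibration is "implicit".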


Calculation of Dumping Vehicle Trajectory and Camera Coordinate Transform for Detection of Waste Dumping Position (폐기물 매립위치의 검출을 위한 매립차량 궤적 추적 계산 및 카메라 좌표변환)

  • Lee, Dong-Gyu;Lee, Young-Dae;Cho, Sung-Yun
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.13 no.1, pp.243-249, 2013
  • In a waste repository environment, the waste history can be processed efficiently for reuse by recording the trajectory of the vehicle loaded with waste and the dumping position of the waste vehicle. By mapping the unloaded waste in 3-D and extracting the dumping point, a new method was implemented, and verified under various experiments, to record the final dumping position and the waste content. In this paper, we developed an algorithm that tracks the vehicle and decides the moment of dumping in a landfill. We first trace the position of the vehicle using the difference image between the current image and a background image, then decide the stop point from the shape of the vehicle's route, and detect the dumping point by comparing the dumping image with the image taken while the vehicle is stopped. A transform between the screen coordinates and the real coordinates of the landfill, based on the camera parameters, is also proposed.
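One common way to realize the screen-to-real coordinate transform mentioned at the end of the abstract is a planar homography from the image plane to the landfill ground plane. The paper's camera model is not given, so the 3x3 matrix here is an assumption.

```python
def screen_to_ground(H, u, v):
    """Map a screen pixel (u, v) to ground-plane coordinates with a planar
    homography H (3x3, nested lists). Sketch of one possible realization
    of the screen/real coordinate transform; H must be estimated from
    the camera parameters or reference points."""
    w = H[2][0] * u + H[2][1] * v + H[2][2]    # projective scale factor
    X = (H[0][0] * u + H[0][1] * v + H[0][2]) / w
    Y = (H[1][0] * u + H[1][1] * v + H[1][2]) / w
    return X, Y
```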

A Measurement Error Correction Algorithm of Road Image for Traveling Vehicle's Fluctuation Using V.F. Modeling (V.F. 모델링을 이용한 주행차량의 진동에 대한 도로영상의 계측오차 보정 알고리듬)

  • Kim Tae-Hyo;Seo Kyung-Ho
    • Journal of Institute of Control, Robotics and Systems, v.12 no.8, pp.824-833, 2006
  • In this paper, image modeling of road lane markings is established using a view frustum (VF) model. From this model, a measurement system for lane markings and obstacles is proposed. The system also involves real-time processing of 3-D position coordinates and of the distance from the camera to points in the 3-D world coordinate system, by virtue of camera calibration. To reduce the measurement error, a useful algorithm is proposed that analyzes, using the VF model, the geometric variations due to the traveling vehicle's fluctuation. In experiments, without correction, a pitching rotation of 0.4° gives an error of 0.4~0.6 m at a distance of 10 m, and the error grows rapidly at greater distances. We confirmed that this algorithm can reduce the error to less than 0.1 m under the same conditions.
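The amplification the abstract reports (a 0.4° pitch producing roughly 0.4~0.6 m of error at 10 m) follows from simple ground-projection geometry: a small tilt of the viewing ray shifts where it meets the road. A sketch with an assumed camera height:

```python
import math

def ground_distance_error(cam_height_m, distance_m, pitch_error_rad):
    """Longitudinal error caused by a small pitching rotation of the vehicle
    for a camera looking down at the road: the ray to a point at distance_m
    tilts by pitch_error_rad, moving its ground intersection. The camera
    height is an assumed value, not taken from the paper."""
    angle = math.atan2(cam_height_m, distance_m)              # depression angle
    shifted = cam_height_m / math.tan(angle - pitch_error_rad)
    return shifted - distance_m

# With a camera 1.5 m above the road, a 0.4 degree pitch at 10 m shifts the
# measured point by about half a meter, consistent with the reported range.
err = ground_distance_error(1.5, 10.0, math.radians(0.4))
```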