• Title/Summary/Keyword: 3D position coordinate

Search Results: 124

Robot Target Tracking Method using a Structured Laser Beam (레이저 구조광을 이용한 로봇 목표 추적 방법)

  • Kim, Jong Hyeong;Koh, Kyung-Chul
    • Journal of Institute of Control, Robotics and Systems / v.19 no.12 / pp.1067-1071 / 2013
  • A 3D visual sensing method using a structured laser beam is presented for robotic tracking applications in a simple and reliable manner. A cylindrically shaped structured laser beam is proposed to measure the pose and position of the target surface. When the proposed beam intersects the surface along the target trajectory, an elliptic pattern is generated. The ellipse parameters can be derived mathematically from the geometric relationship between the sensor and target coordinate frames. The depth and orientation of the target surface are determined directly by the ellipse parameters. In particular, two discontinuous points on the ellipse pattern, induced by the seam trajectory, mathematically indicate the 3D direction for robotic tracking. To investigate the performance of this method, experiments with a 6-axis robot system are conducted on two different types of seam trajectories. The results show that this method is well suited to robot seam tracking applications due to its accuracy and efficiency.
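The depth/orientation recovery from the ellipse can be illustrated with a minimal sketch, assuming an idealized circular beam cross-section; the paper's full derivation also recovers position and the seam direction from the discontinuous points:

```python
import math

def surface_tilt_from_ellipse(semi_major, semi_minor):
    """Recover the tilt angle (radians) between the beam axis and the
    surface normal from the intersection ellipse.

    A circular cylinder of radius r meeting a plane tilted by angle
    theta yields an ellipse with semi-minor axis b = r and semi-major
    axis a = r / cos(theta), so theta = arccos(b / a)."""
    if semi_minor > semi_major:
        raise ValueError("semi-major axis must be the larger one")
    return math.acos(semi_minor / semi_major)

# A beam of radius 5 mm hitting a surface tilted 30 degrees:
r = 5.0
theta = math.radians(30.0)
a = r / math.cos(theta)  # semi-major axis of the observed ellipse
b = r                    # semi-minor axis equals the beam radius
recovered = math.degrees(surface_tilt_from_ellipse(a, b))
print(round(recovered, 3))  # 30.0
```

Because the semi-minor axis always equals the beam radius, the fitted major axis alone fixes the surface orientation.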

Development of Rotational Motion Estimation System for a UUV/USV based on TMS320F28335 microprocessor

  • Tran, Ngoc-Huy;Choi, Hyeung-Sik;Kim, Joon-Young;Lee, Min-Ho
    • International Journal of Ocean System Engineering / v.2 no.4 / pp.223-232 / 2012
  • For accurate estimation of the position and orientation of a UUV (unmanned underwater vehicle), a low-cost AHRS (attitude heading reference system) was developed using a low-cost IMU (inertial measurement unit) sensor that provides 3D acceleration, 3D turning rate, and 3D earth-magnetic field data in the object coordinate system. The main hardware consists of an IMU sensor (ADIS16405) and a TMS320F28335 microprocessor, which runs an extended Kalman filter algorithm at a 50-Hz sampling frequency. Using an experimental gimbal device, good estimation performance for the pitch, roll, and yaw angles of the developed AHRS was verified by comparison with a commercial AHRS, the MTi system. The experimental results are presented and analyzed.
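The predict/update structure of such a filter can be sketched with a one-dimensional stand-in (illustrative noise values, not the paper's tuning); the actual system runs a full extended Kalman filter over all three axes plus magnetometer heading:

```python
DT = 1.0 / 50.0  # 50 Hz sampling, as in the paper

class ScalarAttitudeKF:
    """Minimal one-dimensional Kalman filter for a single attitude angle.

    Prediction integrates the gyro rate; the update corrects with the
    accelerometer-derived tilt angle."""

    def __init__(self, q=1e-4, r=1e-2):
        self.angle = 0.0  # estimated angle (rad)
        self.p = 1.0      # estimate variance
        self.q = q        # process noise (gyro drift)
        self.r = r        # measurement noise (accelerometer)

    def step(self, gyro_rate, accel_angle):
        # Predict: integrate the turning rate over one sample period.
        self.angle += gyro_rate * DT
        self.p += self.q
        # Update: blend in the accelerometer tilt measurement.
        k = self.p / (self.p + self.r)  # Kalman gain
        self.angle += k * (accel_angle - self.angle)
        self.p *= 1.0 - k
        return self.angle

kf = ScalarAttitudeKF()
# Stationary gyro, accelerometer reading a constant 0.1 rad tilt:
for _ in range(200):
    est = kf.step(gyro_rate=0.0, accel_angle=0.1)
print(round(est, 4))  # 0.1
```

The gyro keeps the estimate responsive between accelerometer corrections; the gain automatically shrinks as the variance settles.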

Registration of the 3D Range Data Using the Curvature Value (곡률 정보를 이용한 3차원 거리 데이터 정합)

  • Kim, Sang-Hoon;Kim, Tae-Eun
    • Convergence Security Journal / v.8 no.4 / pp.161-166 / 2008
  • This paper proposes a new approach to aligning 3D data sets by using the curvature of feature surfaces. We use Gaussian curvatures and the covariance matrix, which capture the physical characteristics of the model, to achieve registration of unaligned 3D data sets. First, the physical characteristics of each local area are obtained from the Gaussian curvature, and the camera position of the 3D range finder system is calculated using the projection matrix between the 3D data set and the 2D image. Then, the physical characteristics of the whole area are obtained from the covariance matrix of the model. Corresponding points are found in the overlapping region with the cross-projection method and refined by removing self-occluded points. By repeating this process, we finally obtain corrected corresponding points in the overlapping region and an optimized registration result.
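For reference, the Gaussian curvature used as the local descriptor can be computed for a height-field patch by finite differences; this is a generic sketch of the quantity, not the paper's implementation on range data:

```python
import math

def gaussian_curvature(f, x, y, h=1e-3):
    """Gaussian curvature of a height field z = f(x, y) by central
    finite differences:
        K = (f_xx * f_yy - f_xy**2) / (1 + f_x**2 + f_y**2)**2
    """
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return (fxx * fyy - fxy**2) / (1 + fx**2 + fy**2)**2

# A sphere of radius R has constant Gaussian curvature 1 / R**2:
R = 2.0
cap = lambda x, y: math.sqrt(R**2 - x**2 - y**2)
print(round(gaussian_curvature(cap, 0.3, 0.2), 3))  # 0.25
```

Because Gaussian curvature is intrinsic, it is unchanged by the rigid motions a registration must recover, which is what makes it usable as a matching feature.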


Construction of Static 3D Ultrasonography Image by Radiation Beam Tracking Method from 1D Array Probe (1차원 배열 탐촉자의 방사빔추적기법을 이용한 정적 3차원 초음파진단영상 구성)

  • Kim, Yong Tae;Doh, Il;Ahn, Bongyoung;Kim, Kwang-Youn
    • Journal of the Korean Society for Nondestructive Testing / v.35 no.2 / pp.128-133 / 2015
  • This paper describes the construction of a static 3D ultrasonography image by tracking the radiation beam position during hand-held operation of a 1D array probe, to enable point-of-care use. A theoretical model is given for the transformation from the translational and rotational information of the sensor mounted on the probe to the reference Cartesian coordinate system. A signal amplification and serial communication interface module was built using a commercially available sensor, and a donut-shaped test phantom was made from silicone putty. During movement of the hand-held probe, a B-mode movie and sensor signals were recorded. B-mode images were periodically selected from the movie, and the gray levels of the pixels in each image were converted to gray levels of 3D voxels. 3D images and 2D images of arbitrary B-mode cross-sections were then constructed from the voxel data and agreed well with the shape of the test phantom.
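The pixel-to-reference-frame transformation can be sketched in reduced form, assuming a single rotation about the z axis and an illustrative mm-per-pixel scale; the paper's model uses the sensor's full translational and rotational output:

```python
import math

def probe_to_world(pixel_xy, scale_mm, pose):
    """Map a B-mode pixel to the reference Cartesian frame.

    pixel_xy : (col, row) pixel indices in the B-mode image plane
    scale_mm : mm per pixel
    pose     : (tx, ty, tz, yaw) -- probe-origin translation plus a
               single rotation about z, a reduced version of the full
               translation + 3-axis rotation tracked by the sensor."""
    tx, ty, tz, yaw = pose
    # Pixel position in the probe frame: the image plane spans x
    # (lateral) and z (depth); the probe is thin in y.
    px = pixel_xy[0] * scale_mm
    pz = pixel_xy[1] * scale_mm
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate about z, then translate into the world frame.
    wx = c * px + tx
    wy = s * px + ty
    wz = pz + tz
    return (wx, wy, wz)

# Probe rotated 90 degrees and shifted 10 mm along x:
print(probe_to_world((50, 100), 0.2, (10.0, 0.0, 0.0, math.pi / 2)))
# ~ (10.0, 10.0, 20.0)
```

Accumulating these transformed pixels over many frames, with each frame's own pose, is what fills the 3D voxel volume.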

Wearable Robot System Enabling Gaze Tracking and 3D Position Acquisition for Assisting a Disabled Person with Disabled Limbs (시선위치 추적기법 및 3차원 위치정보 획득이 가능한 사지장애인 보조용 웨어러블 로봇 시스템)

  • Seo, Hyoung Kyu;Kim, Jun Cheol;Jung, Jin Hyung;Kim, Dong Hwan
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.10 / pp.1219-1227 / 2013
  • A new type of wearable robot is developed for a person with disabled limbs, that is, a person who cannot intentionally move his or her legs and arms. The robot enables the disabled person to grip an object using eye movements. A gaze tracking algorithm detects the pupil movements by which the person looks at the object to be gripped. Using this 2D gaze information, the object is identified and the distance to it is measured with a Kinect device installed on the robot shoulder. Through several coordinate transformations and a matching scheme, the 3D position of the object in the base frame is clearly identified, and the final position data are transmitted to the DSP-based robot controller, which grips the target object successfully.
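The back-projection and frame change can be sketched as follows, with assumed pinhole intrinsics (FX, FY, CX, CY are illustrative values, not the depth-camera calibration used in the paper):

```python
# Assumed pinhole intrinsics for the depth camera (illustrative only).
FX = FY = 525.0          # focal lengths in pixels
CX, CY = 319.5, 239.5    # principal point

def pixel_depth_to_camera(u, v, depth_m):
    """Back-project a gaze pixel plus its measured depth into the
    camera frame using the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

def camera_to_base(p, r, t):
    """Apply rotation matrix r (3x3, as row tuples) and translation t
    to express the point in the robot base frame."""
    return tuple(sum(r[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

# Identity rotation, camera mounted 0.3 m above the base frame:
p_cam = pixel_depth_to_camera(319.5, 239.5, 1.0)  # gaze at image center
p_base = camera_to_base(p_cam, ((1, 0, 0), (0, 1, 0), (0, 0, 1)),
                        (0.0, 0.0, 0.3))
print(p_base)  # (0.0, 0.0, 1.3)
```

Chaining one such rotation/translation per joint from camera to base is the "several coordinate transformations" the abstract refers to.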

3D Particle Image Detection by Using Color Encoded Illumination System

  • Kawahashi M.;Hirahara H.
    • Korean Society of Visualization: Conference Proceedings / 2001.12a / pp.100-107 / 2001
  • A simple new technique for measuring particle depth position, applicable to three-dimensional velocity measurement of fluid flows, is proposed. A two-color illumination system whose intensity is encoded as a function of the z-coordinate is introduced. A calibration procedure is described, and as a preliminary test the profile of a small sphere is detected using the present method. The method is then applied to three-dimensional velocity field measurement of simple flow fields seeded with tracer particles. The motion of the particles is recorded by a color 3CCD camera. The particle position in the image plane is read directly from the recorded image, and the depth of each particle is measured by calculating the intensity ratio of the two encoded color illuminations. Three-dimensional velocity components are thereby reconstructed. Although the results include some error, the feasibility of the present technique for three-dimensional velocity measurement was confirmed.
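The ratio-based depth recovery can be sketched under the assumption of linear, complementary intensity ramps for the two colors (an assumed encoding, not necessarily the paper's exact profile); normalizing by the sum cancels the particle's overall brightness:

```python
def depth_from_ratio(i_red, i_green, z_max):
    """Recover particle depth from the two color intensities.

    Assumes the red channel ramps linearly from 0 at z = 0 to full
    intensity at z = z_max while green ramps the opposite way, so
    i_red / (i_red + i_green) is independent of particle brightness
    and scattering efficiency."""
    return z_max * i_red / (i_red + i_green)

# A particle at 30% of the illuminated depth, with arbitrary overall
# brightness 0.8:
brightness = 0.8
z_true = 0.3
i_r = brightness * z_true
i_g = brightness * (1.0 - z_true)
print(round(depth_from_ratio(i_r, i_g, z_max=1.0), 6))  # 0.3
```

In-plane position comes straight from the image, so this single ratio per particle completes the 3D localization.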


Patient Position Verification and Corrective Evaluation Using Cone Beam Computed Tomography (CBCT) in Intensity-Modulated Radiation Therapy (세기조절방사선치료 시 콘빔CT (CBCT)를 이용한 환자자세 검증 및 보정평가)

  • Do, Gyeong-Min;Jeong, Deok-Yang;Kim, Young-Bum
    • The Journal of Korean Society for Radiation Therapy / v.21 no.2 / pp.83-88 / 2009
  • Purpose: Cone beam computed tomography (CBCT) using an on-board imager (OBI) can check movement and setup error in the patient position and target volume by comparison with the simulation image in real time during treatment. This study therefore aimed to check changes and movement of the patient position and target volume using CBCT in IMRT, calculate the difference from the treatment plan, correct the position using an automated match system, test the accuracy of the position correction using an electronic portal imaging device (EPID), and examine the usefulness of CBCT in IMRT and the accuracy of the automatic match system. Materials and Methods: The subjects were 3 head-and-neck patients and 1 pelvis patient sampled from the IMRT patients treated at our hospital. To investigate movement of the treatment position and the resulting displacement of the irradiated volume, CBCT was acquired using the OBI mounted on the linear accelerator. Before each IMRT treatment, CBCT was acquired and the difference from the treatment plan was checked coordinate by coordinate against the CT simulation image. Correction was then made through the automatic 3D/3D match system to match the treatment plan, and was verified and evaluated using the electronic portal imaging device. Results: When CBCT was compared with the CT simulation image before treatment, the average difference by coordinate in the head and neck was 0.99 mm vertically, 1.14 mm longitudinally, 4.91 mm laterally, and 1.07° in the rotational direction, showing somewhat insignificant differences by region. In testing after correction, when the electronic portal image was compared with the DRR image, the correction was found to be accurate, with error less than 0.5 mm.
Conclusion: By comparing a pre-treatment CBCT image, reconstructed as a 3D volume rather than a 2D image, for the patient's setup error and changes in the position of the organs and target, we could measure and correct changes of position and target volume, treat more accurately, and calculate and compare the errors. The results show that CBCT was useful for delivering treatment accurately according to the plan and for increasing the reproducibility of repeated treatments, and satisfactory results were obtained. The accuracy afforded by CBCT is highly desirable in IMRT, in which the target volume shape is complex and the dose distribution changes sharply. Further research is required on match criteria by treatment site and treatment purpose.
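As a small worked example, the per-axis differences reported for the head-and-neck cases combine into a single 3D displacement magnitude via the Euclidean norm (a reading aid for the numbers above, not an analysis performed in the paper):

```python
import math

def setup_displacement(vrt, lng, lat):
    """Combine per-axis setup differences (mm) into a single 3D
    displacement magnitude."""
    return math.sqrt(vrt**2 + lng**2 + lat**2)

# Mean head-and-neck differences quoted in the abstract (mm):
print(round(setup_displacement(0.99, 1.14, 4.91), 2))  # 5.14
```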


Comparison of landmark position between conventional cephalometric radiography and CT scans projected to midsagittal plane (3차원 CT자료에서 선정된 계측점을 정중시상면으로 투사한 영상과 두부계측방사선사진상의 계측정의 위치 비교)

  • Park, Jae-Woo;Kim, Nam-Kug;Chang, Young-Il
    • The Korean Journal of Orthodontics / v.38 no.6 / pp.427-436 / 2008
  • Objective: The purpose of this study was to compare landmark positions between cephalometric radiographs and midsagittal-plane projected images from 3-dimensional (3D) CT. Methods: Cephalometric radiographs and CT scans were taken from 20 patients treated for mandibular prognathism. After selection of landmarks, CT images were projected to the midsagittal plane and magnified to 110% to match the magnification of the radiographs. The two images were superimposed on the frontal and occipital bone, and a common coordinate system was established on the basis of the FH plane. The coordinate value of each landmark was compared by paired t-test, and the mean and standard deviation of the differences were calculated. Results: The differences ranged from −0.14±0.65 to −2.12±2.89 mm in the X axis and from 0.34±0.78 to −2.36±2.55 mm (6.79±3.04 mm) in the Y axis. There was no significant difference for only 9 of the 20 landmarks in the X axis and 7 in the Y axis. This may be caused by errors from differences in head positioning, masking of subtle end structures, identification error from the superimposition, and differing landmark definitions.
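The paired comparison of coordinate values can be sketched from scratch as follows; the landmark coordinates below are hypothetical illustration values, not the study's data:

```python
import math

def paired_t(xs, ys):
    """Paired t statistic for coordinate differences between two
    measurements of the same landmarks (e.g. cephalogram vs.
    projected CT). Returns (t, degrees of freedom)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical X coordinates (mm) of the same landmarks in both images:
ceph = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]
ct   = [1.0, 0.9, 1.2, 1.0, 1.1, 1.1]
t, dof = paired_t(ceph, ct)
print(dof)  # 5
```

Comparing t against the t distribution with n − 1 degrees of freedom gives the per-landmark significance the Results section reports.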

Theoretical Studies on the Photoreaction Paths of the Monocyanopentaamminechromium(III) Ion ([Cr(NH₃)₅CN]²⁺ 이온의 광반응 경로에 대한 이론적 고찰)

  • Jong Jae Chung;Jong Ha Choi
    • Journal of the Korean Chemical Society / v.29 no.1 / pp.38-44 / 1985
  • The photoreaction path of the monocyanochromium(III) ion was inferred from the experimentally observed product ratio and theoretical analysis. The angular overlap model was used to analyze the d-orbitals of various intermediates along a selected reaction coordinate and to determine the quartet-state energy levels. Loss of an equatorial ammine leads to a pentacoordinated square pyramid with the CN⁻ ligand in an equatorial position. The SP(CNeq) intermediate undergoes a rearrangement by N-Cr-CN bending. This process leads to a trigonal bipyramidal intermediate in which the CN⁻ ligand is located in an equatorial position. The subsequent association with a solvent molecule probably proceeds by lateral attack at one edge of the equatorial triangle. The assumptions adopted above were consistent with the experimental results.


Point Cloud Generation Method Based on Lidar and Stereo Camera for Creating Virtual Space (가상공간 생성을 위한 라이다와 스테레오 카메라 기반 포인트 클라우드 생성 방안)

  • Lim, Yo Han;Jeong, In Hyeok;Lee, San Sung;Hwang, Sung Soo
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1518-1525 / 2021
  • Owing to the growth of the VR industry and the rise of the digital twin industry, the importance of creating 3D data that matches real space is increasing. However, the need for expert personnel and a large amount of time is a problem. In this paper, we propose a system that generates point cloud data with the same shape and color as a real space simply by scanning the space. The proposed system integrates 3D geometric information from lidar and color information from a stereo camera into one point cloud. Since the number of 3D points generated by the lidar is not enough to express a real space with good quality, some pixels of the 2D camera image are mapped to the correct 3D coordinates to increase the number of points. Additionally, to minimize storage, overlapping points are filtered out so that only one point exists at the same 3D coordinates. Finally, the 6DoF pose information generated from the lidar point cloud is replaced with that generated from the camera images to position the points more accurately. Experimental results show that the proposed system easily and quickly generates point clouds very similar to the scanned space.
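The overlap-filtering step (one point per 3D location) can be sketched with a voxel hash; the 0.01 m voxel size and first-point-wins policy are illustrative assumptions, not details from the paper:

```python
def deduplicate_points(points, voxel=0.01):
    """Keep at most one point per voxel so overlapping lidar/camera
    points do not inflate the cloud (voxel size in meters)."""
    seen = {}
    for x, y, z, color in points:
        key = (round(x / voxel), round(y / voxel), round(z / voxel))
        if key not in seen:  # first point in this voxel wins
            seen[key] = (x, y, z, color)
    return list(seen.values())

cloud = [
    (0.100, 0.200, 0.300, "red"),
    (0.102, 0.201, 0.299, "red"),   # falls in the same voxel as above
    (0.500, 0.200, 0.300, "blue"),
]
print(len(deduplicate_points(cloud)))  # 2
```

Hashing on quantized coordinates makes the filter linear in the number of points, which matters when camera pixels multiply the lidar cloud.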