• Title/Summary/Keyword: 3D Image Reconstruction


A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems / v.14 no.6 / pp.1445-1456 / 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks provide unmanned ground vehicles (UGVs) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to a redundancy-removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects using a ground segmentation method and a connected-component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and colored particle models are used to reconstruct the ground surface and the objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the computer graphics and image processing algorithms in parallel.
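
The perception stage described above (ground segmentation followed by connected-component labeling of the remaining points) can be sketched as follows. This is an illustrative CPU-only toy, not the paper's CPU-GPU implementation; the height threshold, grid cell size, and function names are assumptions of this sketch.

```python
import numpy as np
from collections import deque

def segment(points, ground_z=0.2, cell=0.5):
    """Split an (N, 3) point cloud into ground points and labeled obstacle clusters."""
    ground_mask = points[:, 2] < ground_z      # naive height-threshold ground model
    obstacles = points[~ground_mask]
    # Rasterize obstacle points into a 2D occupancy grid.
    ij = np.floor(obstacles[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    h, w = ij.max(axis=0) + 1
    grid = np.zeros((h, w), dtype=int)
    grid[ij[:, 0], ij[:, 1]] = 1
    # 4-connected component labeling via breadth-first search.
    labels = np.zeros_like(grid)
    current = 0
    for si in range(h):
        for sj in range(w):
            if grid[si, sj] and not labels[si, sj]:
                current += 1
                labels[si, sj] = current
                q = deque([(si, sj)])
                while q:
                    i, j = q.popleft()
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and grid[ni, nj] and not labels[ni, nj]:
                            labels[ni, nj] = current
                            q.append((ni, nj))
    # Return ground mask, per-obstacle-point cluster label, and cluster count.
    return ground_mask, labels[ij[:, 0], ij[:, 1]], current
```

Each surviving cluster is a candidate obstacle; the ground mask marks the traversable surface. The paper parallelizes this kind of per-point work on the GPU.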

3D Reconstruction of Internal Zonation in Zircon (저어콘의 내부 누대구조의 3차원적 복원기법)

  • Kim, Sook Ju;Yi, Keewook
    • The Journal of the Petrological Society of Korea / v.23 no.2 / pp.139-144 / 2014
  • A series of planar cathodoluminescence (CL) and backscattered-electron (BSE) images of magmatic zircon from the Paleozoic Yeongdeok pluton in the southeastern Korean Peninsula was taken with a scanning electron microscope for 3D reconstruction of the internal zonation of zircon. Seven zircon crystals mounted in epoxy were serially polished at an average thickness of 3 μm until they disappeared. The 3D reconstruction of their zonation was performed using the Volume Viewer function of the ImageJ software. The 3D oscillatory zoning pattern of zircon was clearly revealed in all the analyzed crystals. This method can further be applied to zircon crystals with multiple growth histories as well as to other geological materials.
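
The reconstruction step amounts to stacking the serial-section images into a voxel volume, which is the representation ImageJ's Volume Viewer renders. A minimal sketch with synthetic slice data, using the 3 μm slice thickness from the abstract (the function name and array layout are mine, not from the paper):

```python
import numpy as np

def build_volume(slices, thickness_um=3.0):
    """Stack equal-shape 2D slice images into a (Z, Y, X) volume array,
    returning the physical depth of each slice for correct z-scaling."""
    volume = np.stack(slices, axis=0)                   # (num_slices, H, W)
    z_coords = np.arange(len(slices)) * thickness_um    # depth of each slice in um
    return volume, z_coords

# Synthetic stand-ins for the polished CL images.
slices = [np.full((64, 64), i, dtype=np.uint8) for i in range(10)]
vol, z = build_volume(slices)
```

With the real CL/BSE images loaded in slice order, `vol` is exactly the kind of anisotropic voxel stack (fine in-plane resolution, 3 μm between slices) that a volume renderer visualizes.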

Analytical Study of the Image Reconstruction of Fourier Holograms Using Varifocal Electric-Field-Driven Liquid Crystal Fresnel Lenses

  • Kim, Taehyeon;Lee, Seung-Chul;Park, Woo-Sang
    • Current Optics and Photonics / v.4 no.2 / pp.115-120 / 2020
  • A novel method is proposed for controlling the distance of an image plane in Fourier holograms using varifocal electric-field-driven liquid-crystal (ELC) lenses. Phase Fresnel lenses are employed to reduce the thickness and response time of the ELC lenses. The voltages applied to the electrodes of the ELC Fresnel lens are adjusted so that the lens has the same retardation distribution as an ideal lens. The focal length can be controlled by changing the retardation distribution with the applied voltages. Simulations were conducted for the image reconstruction of Fourier holograms with various focal lengths of the ELC Fresnel lenses. The simulation results indicate that the distance of the image plane can be properly controlled with the varifocal ELC Fresnel lens.
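
The retardation distribution that the electrode voltages approximate is the quadratic phase of an ideal thin lens, wrapped modulo 2π so the required liquid-crystal retardation never exceeds one wavelength; changing the focal length in the formula changes the target distribution, which is what makes the lens varifocal. A hedged sketch (the sign convention, units, and parameter names are assumptions of this illustration, not taken from the paper):

```python
import numpy as np

def fresnel_phase(r_mm, focal_mm, wavelength_nm=550.0):
    """Target phase retardation (wrapped mod 2*pi) of an ideal thin lens
    at radial position r_mm, i.e. the Fresnel-lens profile the ELC cell
    must reproduce with its electrode voltages."""
    lam_mm = wavelength_nm * 1e-6                    # wavelength in mm
    phase = np.pi * r_mm**2 / (lam_mm * focal_mm)    # ideal quadratic lens phase
    return np.mod(phase, 2 * np.pi)                  # Fresnel wrapping keeps the cell thin
```

The modulo-2π wrapping is the reason a Fresnel design reduces cell thickness and response time, as the abstract notes: only one wave of retardation ever has to be produced.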

Center Determination for Cone-Beam X-ray Tomography

  • Narkbuakaew, W.;Ngamanekrat, S.;Withayachumnankul, W.;Pintavirooj, C.;Sangworasil, M.
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2004.08a / pp.1885-1888 / 2004
  • In order to render a 3D model of the bone, a stack of cross-sectional images must be reconstructed from a series of X-ray radiographs, which serve as the projections. When the distance between the X-ray source and the detector is not infinite, image reconstruction from projections based on parallel-beam geometry produces errors in the cross-sectional image; in such cases, reconstruction based on cone-beam geometry must be used instead. This paper is devoted to determining the detector center for the SART cone-beam technique, which critically affects the quality of the resulting 3D model.
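
One common way to estimate the detector center, on which reconstruction quality depends, exploits the fact that for the central detector row a projection taken at 180° is approximately the mirror of the projection at 0°: the shift that best aligns the mirrored pair reveals the offset of the rotation axis from the middle of the detector array. This is a generic illustration under that symmetry assumption, not the authors' algorithm:

```python
import numpy as np

def estimate_center(p0, p180):
    """Estimate the detector center column from opposing 1D projections.
    If the true center is c, then p180 is p0 mirrored about c, so the
    best alignment shift s between p0 and reversed p180 gives
    c = (n - 1) / 2 + s / 2."""
    n = len(p0)
    flipped = p180[::-1]
    shifts = range(-n // 4, n // 4 + 1)          # search a window of offsets
    best = min(shifts,
               key=lambda s: np.sum((p0 - np.roll(flipped, s)) ** 2))
    return (n - 1) / 2 + best / 2
```

Sub-pixel refinement (e.g. interpolating the matching cost around the best integer shift) is usually added in practice; the integer search keeps the sketch short.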


Free-view Pixels of Elemental Image Rearrangement Technique (FPERT)

  • Lee, Jaehoon;Cho, Myungjin;Inoue, Kotaro;Tashiro, Masaharu;Lee, Min-Chul
    • Journal of information and communication convergence engineering / v.17 no.1 / pp.60-66 / 2019
  • In this paper, we propose a new free-view three-dimensional (3D) computational reconstruction method for integral imaging that improves the visual quality of reconstructed 3D images when low-resolution elemental images are used. In conventional free-view reconstruction, the visual quality of the reconstructed 3D images is insufficient for providing 3D information to applications because of the shift-and-sum process, and the processing speed is slow. To solve these problems, our proposed method uses a pixel rearrangement technique (PERT) with locally selected elemental images. In general, PERT can reconstruct 3D images with high visual quality at a fast processing speed; however, PERT cannot provide free-view reconstruction. With our proposed method, free-view reconstructed 3D images with high visual quality can therefore be generated even when low-resolution elemental images are used. To show the feasibility of the proposed method, we applied it in optical experiments.
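
The conventional shift-and-sum process the abstract contrasts against can be sketched as follows: each elemental image is shifted in proportion to its lens index divided by the reconstruction depth, and the shifted copies are averaged so that pixels from the chosen depth plane reinforce each other. The array layout, the wrap-around `np.roll` shift, and the parameter names are simplifications of mine:

```python
import numpy as np

def shift_and_sum(elemental, pitch_px, depth):
    """Naive computational integral-imaging reconstruction.
    elemental: (K, K, H, W) grid of grayscale elemental images.
    Each image is shifted by (index * pitch_px / depth) pixels and the
    shifted images are averaged; objects at the chosen depth come into focus."""
    K, _, H, W = elemental.shape
    out = np.zeros((H, W))
    for i in range(K):
        for j in range(K):
            sy = int(round(i * pitch_px / depth))
            sx = int(round(j * pitch_px / depth))
            # np.roll wraps at the borders -- a simplification; real
            # implementations pad the canvas instead.
            out += np.roll(elemental[i, j], (sy, sx), axis=(0, 1))
    return out / (K * K)
```

The averaging over K² shifted copies is both the source of the blur at low elemental-image resolution and the cost the abstract's PERT-based method avoids.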

3D Printed Titanium Implant for the Skull Reconstruction: A Preliminary Case Study

  • Choi, Jong-Woo;Ahn, Jae-Sung
    • Journal of International Society for Simulation Surgery / v.1 no.2 / pp.99-102 / 2014
  • A skull defect can result from trauma, oncologic disease, or neurosurgery. Skull reconstruction has long been a challenging issue in the craniofacial field, and reconstruction with autogenous bone remains the standard. Although autogenous bone is the ideal material for skull reconstruction, donor-site morbidity is an inevitable problem in many cases. Various types of allogenic and alloplastic materials have also been used; however, skull reconstruction with many alloplastic materials has produced numerous complications, including infection, exposure, and delayed wound healing. 3D printing has evolved rapidly enough that 3D printed titanium implants have recently become feasible. The aim of this trial was to restore the original skull anatomy as closely as possible using a 3D printed titanium implant designed from mirrored three-dimensional CT images in a computer simulation. Preoperative computed tomography (CT) data were processed for the patient and a rapid prototyping (RP) model was produced. At the same time, the uninjured side was mirrored and superimposed onto the traumatized side to create a mirror-image RP model. A titanium implant reconstructing the three-dimensional orbital structure was then fabricated in advance using the 3D printer. This prefabricated titanium implant was inserted onto the skull defect and fixed. Three-dimensional printing of titanium based on computer simulation proved very successful in this patient. An individualized approach for each patient could be an ideal way to manage trauma patients in the near future.
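
The mirroring step, in which the uninjured side is flipped across the midline and superimposed on the defect, reduces in the simplest case to flipping the CT volume along its left-right axis. Real planning pipelines first align the volume to the midsagittal plane by rigid registration; this toy sketch omits that step and assumes the axis labeling:

```python
import numpy as np

def mirror_side(ct_volume, lr_axis=2):
    """Mirror a CT volume (Z, Y, X voxel array) across its left-right axis
    so the intact side can be superimposed on the defect side."""
    return np.flip(ct_volume, axis=lr_axis)
```

The flipped volume serves as the anatomical template from which the implant surface is designed.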

3D Reconstruction using the Key-frame Selection from Reprojection Error (카메라 재투영 오차로부터 중요영상 선택을 이용한 3차원 재구성)

  • Seo, Yung-Ho;Kim, Sang-Hoon;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.1 / pp.38-46 / 2008
  • A key-frame selection algorithm selects the images necessary for 3D reconstruction from an uncalibrated image sequence. Camera calibration of the images is also necessary for 3D reconstruction. In this paper, we propose a new key-frame selection method that minimizes the calibration error. Using full auto-calibration, we estimate the camera parameters for all selected key-frames. We remove false matches using the fundamental matrix, computed by algebraic derivation from the estimated camera parameters, and finally obtain the 3D reconstructed data. Our experimental results show that the proposed approach requires lower time costs than other methods and yields the smallest reconstruction error. The elapsed time for estimating the fundamental matrix is very short, and the error of the estimated fundamental matrix is similar to that of other methods.
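
Computing the fundamental matrix algebraically from two estimated camera matrices, as the abstract describes, follows the standard relation F = [e₂]ₓ P₂ P₁⁺, where e₂ = P₂C₁ is the epipole (the image of the first camera's center). A minimal sketch of that relation only, not the authors' full key-frame pipeline:

```python
import numpy as np

def fundamental_from_cameras(P1, P2):
    """F = [e2]_x P2 pinv(P1) for 3x4 camera matrices P1, P2."""
    # Camera 1 center C1: the right null vector of P1.
    _, _, Vt = np.linalg.svd(P1)
    C1 = Vt[-1]
    e2 = P2 @ C1                      # epipole in image 2
    e2_cross = np.array([[0, -e2[2], e2[1]],
                         [e2[2], 0, -e2[0]],
                         [-e2[1], e2[0], 0]])
    return e2_cross @ P2 @ np.linalg.pinv(P1)
```

Given F, a correspondence (x₁, x₂) is accepted only if the epipolar residual x₂ᵀFx₁ is small, which is how false matches are rejected.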

Volume measurement of limb edema using three dimensional registration method of depth images based on plane detection (깊이 영상의 평면 검출 기반 3차원 정합 기법을 이용한 상지 부종의 부피 측정 기술)

  • Lee, Wonhee;Kim, Kwang Gi;Chung, Seung Hyun
    • Journal of Korea Multimedia Society / v.17 no.7 / pp.818-828 / 2014
  • After the release of the Microsoft Kinect, interest in three-dimensional (3D) depth imaging increased significantly. Depth image data of an object can be converted to 3D coordinates by simple arithmetic and then reconstructed as a 3D model on a computer. However, because surface coordinates can be acquired only from the front area facing the Kinect, a complete solid with a closed surface cannot be reconstructed from a single sensor. In this paper, a 3D registration method for multiple Kinects is suggested, in which the surface information from each Kinect is collected and registered simultaneously in real time to build a complete 3D solid. To unify the relative coordinate systems used by the individual Kinects, a 3D perspective transform is adopted, and to detect the control points needed to generate the transformation matrix, a 3D randomized Hough transform is used. Once the transformation matrices are generated, real-time 3D reconstruction of various objects is possible. To verify the usefulness of the suggested method, human arms were reconstructed in 3D and their volumes measured using four Kinects. This volume measuring system was developed to monitor the level of lymphedema in patients after cancer treatment, and its measurement difference from medical CT was below 5%, the expected CT reconstruction error.
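
The "simple arithmetic" that converts a depth image to 3D coordinates is pinhole back-projection: X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy, with Z read directly from the depth map. A sketch with generic intrinsic parameters (the actual Kinect calibration values are device-specific and are not given in the abstract):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to camera-frame 3D points.
    Returns an (H, W, 3) array of (X, Y, Z) coordinates per pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column/row grids
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.dstack((X, Y, depth))
```

Each Kinect yields such a camera-frame point set; the perspective transform described in the abstract then maps all of them into one shared coordinate system.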

Coupled Line Cameras as a New Geometric Tool for Quadrilateral Reconstruction (사각형 복원을 위한 새로운 기하학적 도구로서의 선분 카메라 쌍)

  • Lee, Joo-Haeng
    • Korean Journal of Computational Design and Engineering / v.20 no.4 / pp.357-366 / 2015
  • We review recent research results on coupled line cameras (CLC), a new geometric tool for reconstructing a scene quadrilateral from image quadrilaterals. Coupled line cameras were first developed as a camera calibration tool based on geometric insight into the perspective projection of a scene rectangle onto an image plane. Since CLC comprehensively describes the relevant projective structure in a single image with a set of simple algebraic equations, it is also useful as a geometric reconstruction tool, an important topic in 3D computer vision. In this paper we first introduce the fundamentals of CLC with real examples. We then cover related work on optimizing the initial solution, extending the method to general quadrilaterals, and applying it to cuboid reconstruction.
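
For context, the forward problem that CLC inverts is the perspective projection of a scene rectangle to an image quadrilateral. The sketch below shows only that forward map through a generic 3×4 camera matrix, not CLC's own algebraic equations:

```python
import numpy as np

def project(P, corners_h):
    """Project homogeneous 3D corner points (N x 4) through a 3x4 camera
    matrix P and dehomogenize, yielding the N x 2 image quadrilateral."""
    x = (P @ corners_h.T).T
    return x[:, :2] / x[:, 2:3]
```

Reconstruction reverses this: given the four image points of a quadrilateral known to be a scene rectangle, CLC solves for the projective structure in closed form.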

High-resolution 3D Object Reconstruction using Multiple Cameras (다수의 카메라를 활용한 고해상도 3차원 객체 복원 시스템)

  • Hwang, Sung Soo;Yoo, Jisung;Kim, Hee-Dong;Kim, Sujung;Paeng, Kyunghyun;Kim, Seong Dae
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.10 / pp.150-161 / 2013
  • This paper presents a new system that produces high-resolution 3D contents by capturing multiview images of an object with multiple cameras and estimating the geometric and texture information of the object from the captured images. Although a variety of multiview image-based 3D reconstruction systems have been proposed, generating high-resolution 3D contents has been difficult because multiview image-based 3D reconstruction requires a large amount of memory and computation. To reduce the computational complexity and memory size of 3D reconstruction, the proposed system predetermines the regions in the input images where the object can exist, in order to extract object boundaries quickly. For fast computation of the visual hull, the system represents silhouettes and the 3D-2D projection/back-projection relations by chain codes and 1D homographies, respectively. The geometric data of the reconstructed object are compactly represented in a 3D segment-based data format called DoCube, and the 3D object is finally reconstructed after 3D mesh generation and texture mapping. Experimental results show that the proposed system produces 3D object contents at 800×800×800 resolution at a rate of 2.2 seconds per frame.
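
A visual hull, which the system above accelerates with chain codes and 1D homographies, is conventionally defined by voxel carving: a voxel survives only if it projects inside every camera's silhouette. The naive dense version below illustrates that definition only; the paper's contribution is precisely to avoid this brute-force form:

```python
import numpy as np

def visual_hull(silhouettes, cameras, voxels):
    """Keep the voxels (N x 3 centers) whose projection through every 3x4
    camera matrix lands on a silhouette pixel (> 0) in every view."""
    pts_h = np.c_[voxels, np.ones(len(voxels))]    # homogeneous voxel centers
    keep = np.ones(len(voxels), dtype=bool)
    for sil, P in zip(silhouettes, cameras):
        x = (P @ pts_h.T).T
        uv = np.round(x[:, :2] / x[:, 2:3]).astype(int)
        h, w = sil.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        ok = np.zeros(len(voxels), dtype=bool)
        ok[inside] = sil[uv[inside, 1], uv[inside, 0]] > 0   # silhouette test
        keep &= ok
    return voxels[keep]
```

At 800×800×800 resolution this dense test touches half a billion voxels per view, which is why silhouette chain codes and 1D homographies are needed to reach 2.2 seconds per frame.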