• Title/Summary/Keyword: 3D Data Reconstruction

Search Results: 351

Analysis of the Increase of Matching Points for Accuracy Improvement in 3D Reconstruction Using Stereo CCTV Image Data

  • Moon, Kwang-il;Pyeon, MuWook;Eo, YangDam;Kim, JongHwa;Moon, Sujung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.2
    • /
    • pp.75-80
    • /
    • 2017
  • Recently, there has been growing interest in spatial data that combines information and communication technology with smart cities. High-precision LiDAR (Light Detection and Ranging) equipment is mainly used to collect three-dimensional spatial data, and the acquired data are also used to model geographic features and to manage plant construction and cultural heritage sites that require precision. LiDAR equipment can collect precise data, but it is expensive and takes a long time to collect data. On the other hand, in the field of computer vision, research is being conducted on methods of acquiring image data and performing 3D reconstruction from that data without expensive equipment. Thus, precise 3D spatial data can be constructed efficiently by collecting and processing image data from CCTVs installed as infrastructure in smart cities. However, this method can suffer from lower accuracy than the existing equipment. In this study, experiments were conducted and the results analyzed to increase the number of extracted matching points by applying feature-based and area-based methods, in order to improve the precision of 3D spatial data built from image data acquired with stereo CCTVs. The SIFT algorithm and the PATCH algorithm were used to extract matching points. If precise 3D reconstruction is possible using image data from stereo CCTVs, it will be possible to collect 3D spatial data with low-cost equipment, and to collect and build data in real time, because image data can easily be acquired over the Web from smartphones and drones.
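Once matching points have been extracted (by SIFT, PATCH, or any other matcher), each correspondence is turned into a 3D point by triangulating the two camera rays. A minimal NumPy sketch of linear (DLT) triangulation for a stereo pair follows; the camera matrices, focal length, and baseline are illustrative assumptions, not values from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: stack the cross-product constraints
    # from both views into A and solve A X = 0 for the homogeneous 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector of A, up to scale
    return X[:3] / X[3]

# Hypothetical rectified stereo CCTV pair: shared intrinsics K, baseline b along x
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
b = 0.5
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-b], [0.0], [0.0]])])

# Synthetic matched point pair generated from a known 3D point
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

With noise-free correspondences the DLT recovers the 3D point exactly; with real matches, more matching points simply mean more triangulated points and a denser, more reliable reconstruction, which is the motivation of the study above.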

Design of a Mapping Framework on Image Correction and Point Cloud Data for Spatial Reconstruction of Digital Twin with an Autonomous Surface Vehicle (무인수상선의 디지털 트윈 공간 재구성을 위한 이미지 보정 및 점군데이터 간의 매핑 프레임워크 설계)

  • Suhyeon Heo;Minju Kang;Jinwoo Choi;Jeonghong Park
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.61 no.3
    • /
    • pp.143-151
    • /
    • 2024
  • In this study, we present a mapping framework for 3D spatial reconstruction of a digital twin model using navigation and perception sensors mounted on an Autonomous Surface Vehicle (ASV). To improve the realism of digital twin models, 3D spatial information should be reconstructed as a digitized spatial model and integrated with the components and system models of the ASV. In particular, for 3D spatial reconstruction, color and 3D point cloud data acquired from a camera and a LiDAR sensor, matched to the navigation information at each specific time, must be mapped while minimizing noise. To ensure clear and accurate reconstruction of the acquired data, the proposed mapping framework includes an image preprocessing step that enhances the brightness of low-light images and a preprocessing step for the 3D point cloud data that filters out unnecessary points. Subsequently, consecutive 3D point clouds were matched using the Generalized Iterative Closest Point (G-ICP) approach, and the color information was mapped onto the matched 3D point cloud data. The feasibility of the proposed mapping framework was validated on a data set acquired from field experiments in an inland water environment, and the results are described.
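The core step of any ICP-family method, including the G-ICP used above, is estimating the rigid transform that best aligns corresponded point sets. The sketch below shows only that inner least-squares step (the Kabsch/SVD solution) on synthetic data; full G-ICP additionally models local surface covariances and iterates correspondence search, which is omitted here:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Least-squares rigid alignment between corresponded point sets:
    # center both sets, take the SVD of the cross-covariance, and
    # compose rotation R and translation t (Kabsch algorithm).
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic "consecutive scans": a known rotation about z plus a translation
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In a full pipeline this solver runs inside a loop that re-estimates correspondences (e.g. nearest neighbors) until convergence; the color mapping step then assigns each aligned point the pixel value it projects to.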

Realistic 3D Scene Reconstruction from an Image Sequence (연속적인 이미지를 이용한 3차원 장면의 사실적인 복원)

  • Jun, Hee-Sung
    • The KIPS Transactions:PartB
    • /
    • v.17B no.3
    • /
    • pp.183-188
    • /
    • 2010
  • A factorization-based 3D reconstruction system is realized to recover a 3D scene from an image sequence. The image sequence is captured with an uncalibrated perspective camera from several views. Many matched feature points across all images are obtained by a feature tracking method. These data are then supplied to the 3D reconstruction module to obtain a projective reconstruction, which is converted to a Euclidean reconstruction by enforcing several metric constraints. After the triangular meshes are obtained, realistic reconstruction of the 3D models is completed by texture mapping. The developed system is implemented in C++, with the Qt library used for the system user interface and the OpenGL graphics library used for the texture mapping routine and the model visualization program. Experimental results using synthetic and real image data are included to demonstrate the effectiveness of the developed system.
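The idea behind factorization methods is that a measurement matrix built from tracked feature points has low rank. The sketch below shows the classic affine (Tomasi-Kanade) variant on synthetic data, where the centered 2F x P measurement matrix factors exactly with rank 3; the projective variant used for perspective cameras, as in the paper above, additionally estimates projective depths before factorizing:

```python
import numpy as np

# Synthetic setup: P feature points tracked over F frames under an affine camera.
rng = np.random.default_rng(1)
F, P = 6, 20
S = rng.normal(size=(3, P))           # unknown 3D shape
M = rng.normal(size=(2 * F, 3))       # stacked 2x3 affine motion matrices
W = M @ S                             # measurement matrix of image coordinates
W = W - W.mean(axis=1, keepdims=True) # "register" W: subtract per-row centroids

# Rank-3 factorization via SVD recovers motion and shape up to an
# affine ambiguity, later removed by enforcing metric constraints.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]
print(np.allclose(M_hat @ S_hat, W))  # True: the rank-3 factors reproduce W
```

The metric upgrade step then finds a 3x3 mixing matrix Q such that each row pair of M_hat @ Q satisfies the orthonormality constraints of a camera, yielding the Euclidean reconstruction mentioned in the abstract.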

3D Precision Building Modeling Based on Fusion of Terrestrial LiDAR and Digital Close-Range Photogrammetry (지상라이다와 디지털지상사진측량을 융합한 건축물의 3차원 정밀모델링)

  • 사석재;이임평;최윤수;오의종
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2004.11a
    • /
    • pp.529-534
    • /
    • 2004
  • The increasing need for and use of 3D GIS, particularly in urban areas, has drawn growing attention to building reconstruction. Nowadays, the use of close-range data for building reconstruction is heavily emphasized, since such data provide higher resolution and more complete coverage than airborne sensory data. We developed a fusion approach for building reconstruction from both points and images. The proposed approach was then applied to reconstructing a building model from real data sets acquired from a large existing building. Based on the experimental results, we confirmed that the proposed approach can achieve high resolution and accuracy in building reconstruction, and can effectively contribute to developing an operational system for producing large urban models for 3D GIS.

Custom-Made T-Tube Designed by 3-D Reconstruction Technique, a Preliminary Study (삼차원 재건 기술을 이용한 맞춤형 몽고메리 T-Tube의 제작에 관한 예비 연구)

  • Yoo, Young-Sam
    • Korean Journal of Bronchoesophagology
    • /
    • v.16 no.2
    • /
    • pp.131-137
    • /
    • 2010
  • Background: The Montgomery T-tube is widely used to maintain the airway in many cases. Market-available tubes do not always fit the trachea of each patient and need some modification, such as trimming. As with tracheostomy tubes, complications occur in prolonged use. To overcome these limitations, we designed a custom-made T-tube from CT data with the aid of 3D reconstruction software. Material and Method: Boundaries were extracted from the neck CT data of a normal person and processed by surface rendering methods. A real laryngotracheal model and a tube model mimicking the tracheal inner surface were made from plaster and rubber. The main tube was designed by stacking circles or simple closed curves derived from the boundaries; the stomal tube was built by stacking squares, owing to a limitation of the software. Measurement data of the tracheal lumen were used to design the custom-made T-tubes: the portion residing in the tracheal lumen (vertical limb) was shaped as a circular or simple-closed-curve cylinder, and the stomal portion (horizontal limb) as a square cylinder. Results: A custom-made T-tube with a cylindrical vertical limb and a square-cylinder horizontal limb was designed. Conclusion: CT data were helpful in making a custom-made T-tube with the 3D reconstruction technique. If suitable materials become available, a commercial-grade T-tube could be printed on a 3D printer.

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (${\Phi}$, ${\Delta}$) and the camera calibration matrix (K). An LRF disparity map is generated by interpolating the projected LRF points. In stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation is the multi-sensor fusion disparity map, which is then used to refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
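The projection step described above can be sketched in a few lines: transform each LRF point into the camera frame with the extrinsics, project it with K, and convert depth to disparity via d = f*b/Z for a rectified pair. All numeric values below (intrinsics, extrinsics, baseline) are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def project_points(pts_lrf, R, t, K):
    # Transform LRF-frame 3D points into the camera frame with extrinsics
    # (R, t), then project with the camera calibration matrix K.
    pts_cam = pts_lrf @ R.T + t
    uvw = pts_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3], pts_cam[:, 2]

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)          # assumed extrinsics for illustration
pts = np.array([[0.5, 0.2, 5.0], [-1.0, 0.1, 8.0]])  # LRF points (meters)

uv, depth = project_points(pts, R, t, K)
baseline = 0.12                         # assumed stereo baseline (m)
disparity = K[0, 0] * baseline / depth  # d = f*b/Z for a rectified pair
print(uv[0], disparity[0])              # [390. 268.] 16.8
```

Interpolating these sparse projected disparities over the image grid yields the LRF disparity map, which then fills in stereo disparities that failed on repeated patterns or textureless regions.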

Surface Reconstruction for Cutting Path Generation on VLM-Slicer (VLM-Slicer에서 절단 경로 생성을 위한 측면 형상 복원)

  • Lee, Sang-Ho;An, Dong-Gyu;Yang, Dong-Yeol
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.19 no.7
    • /
    • pp.71-79
    • /
    • 2002
  • A new rapid prototyping process, Variable Lamination Manufacturing using a 4-axis-controlled hot-wire cutter and expandable polystyrene foam sheet as the laminating material (VLM-S), has been developed to reduce build time and to improve the surface finish of parts. The objective of this study is to reconstruct the surface of the original 3D CAD model in order to generate mid-slice data using the advancing front technique. Generating 3D layers with a 4-axis-controlled hot-wire cutter requires a completely different toolpath generation procedure from conventional RP CAD systems. The cutting path data for VLM-S are created by VLM-Slicer, a special CAD/CAM software package with automatic generation of 3D toolpaths. A conventional sheet-type system such as LOM slices the STL file into 2D data only. However, because VLM-S uses thick layers and a sloping edge with a first-order approximation between the top and bottom layers, VLM-Slicer requires surface reconstruction, mid-slicing, and toolpath data generation in addition to 2D slicing. Surface reconstruction connects two neighboring cross-sectional contours with triangular facets. Because VLM-S employs layers of finite thickness, surface reconstruction is necessary to obtain the sloping angle of a side surface and the point data at half the sheet thickness. In the toolpath generation process, the surface reconstruction algorithm is expected to minimize the error between the ruled surface and the original part.
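Connecting two neighboring cross-sectional contours with triangular facets can be sketched simply when both contours have the same vertex count; each quad between corresponding edges is split into two triangles. This is a simplification for illustration only: the advancing front technique used in the paper also handles contours with differing vertex counts and shapes:

```python
def stitch_contours(bottom, top):
    # Build a closed band of triangular facets between two cross-sectional
    # contours, assumed here to have equal vertex counts and matched ordering.
    n = len(bottom)
    tris = []
    for i in range(n):
        j = (i + 1) % n                               # wrap around the contour
        tris.append((bottom[i], bottom[j], top[i]))   # lower triangle of quad
        tris.append((top[i], bottom[j], top[j]))      # upper triangle of quad
    return tris

# Two square cross-sections, one sheet thickness apart
lower = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
upper = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
facets = stitch_contours(lower, upper)
print(len(facets))  # 8: two triangles per side of the square band
```

From such a facet band, the side-surface slope angle at each edge and the contour at half the sheet thickness (the mid-slice) can be interpolated for the hot-wire toolpath.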

A Study on Three-Dimensional Model Reconstruction Based on Laser-Vision Technology (레이저 비전 기술을 이용한 물체의 3D 모델 재구성 방법에 관한 연구)

  • Nguyen, Huu Cuong;Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.7
    • /
    • pp.633-641
    • /
    • 2015
  • In this study, we propose a three-dimensional (3D) scanning system based on a laser-vision technique and a rotary mechanism for automatic 3D model reconstruction. The proposed scanning system consists of a laser projector, a camera, and a turntable. A new, simple method was proposed for laser-camera calibration. 3D point cloud data of the scanned object's surface were collected by integrating laser profiles, extracted from laser stripe images, corresponding to the rotary angles of the rotary mechanism. The problem of obscured laser profiles was solved by adding an additional camera at another viewpoint. From the collected 3D point cloud data, the 3D model of the scanned object was reconstructed based on a facet representation. The reconstructed 3D models demonstrate the effectiveness and applicability of the proposed 3D scanning system to 3D model-based applications.
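Integrating laser profiles by rotary angle amounts to rotating each profile, measured in the fixed laser plane, into a common frame about the turntable axis. A minimal sketch, with a hypothetical profile and angle set (the real system extracts profiles from laser stripe images after calibration):

```python
import numpy as np

def profile_to_cloud(profile_xz, angle_deg):
    # A profile measured in the laser plane as (radial offset x, height z)
    # is swept into the common frame by rotating about the turntable's
    # vertical z axis by the turntable angle.
    a = np.radians(angle_deg)
    x, z = profile_xz[:, 0], profile_xz[:, 1]
    return np.column_stack([x * np.cos(a), x * np.sin(a), z])

# Hypothetical profile of 3 points, captured at four turntable angles
profile = np.array([[1.0, 0.0], [1.0, 0.5], [0.8, 1.0]])
cloud = np.vstack([profile_to_cloud(profile, a) for a in (0, 90, 180, 270)])
print(cloud.shape)  # (12, 3): all profiles merged into one point cloud
```

With finer angular steps the merged cloud densely covers the object's surface, after which facet-based surface reconstruction can be applied as in the paper.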

Generation of 3 Dimensional Image Model from Multiple Digital Photographs (다중 디지털 사진을 이용한 3차원 이미지 모델 생성)

  • 정태은;석정민;신효철;류재평
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.1634-1637
    • /
    • 2003
  • A given object on a motor-driven turntable is photographed from 8 to 72 different views with a digital camera. 3D shape reconstruction is performed from these multiple digital photographs with the integrated software Scanware. The reconstruction process comprises several steps: configuration, calibration, capturing, segmentation, shape creation, texturing, and merging. The 3D geometry can be exported to CAD formats such as AutoCAD input files. A 3D image model is also generated from the 3D geometry and texture data and used to present the model in the Internet environment: consumers can view the object realistically from any desired viewpoint by rotating or zooming in a browser with the Scanbull spx plug-in. The spx format allows compact storage of 3D objects for handling and download. Many types of scanning equipment exist, such as laser scanners and photogrammetric scanners. Line or point scanning with a laser can produce precise 3D geometry but in general cannot capture color textures. Conversely, 3D image modeling with photogrammetry generates not only geometry but also textures for the associated polygons. We obtained various 3D image models and introduce the process of building the 3D image model of an Internet-connected watchdog robot.

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1445-1456
    • /
    • 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks are used to provide an unmanned ground vehicle (UGV) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects by using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and color particle models are used to reconstruct the ground surface and objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to run the applied computer graphics and image processing algorithms in parallel.
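The perception pipeline above (ground segmentation followed by connected component labeling of non-ground points) can be sketched on a toy point cloud. This is a deliberately simplified CPU version: ground is separated by a height threshold rather than the paper's full segmentation method, and labeling runs as a BFS over an occupancy grid; the threshold and cell size are assumed values:

```python
import numpy as np
from collections import deque

def segment(points, ground_z=0.2, cell=0.5):
    # Split points into ground vs. non-ground by a height threshold, then
    # group non-ground points into objects via 4-connected component
    # labeling on a 2D occupancy grid of cell size `cell`.
    ground = points[points[:, 2] <= ground_z]
    obstacles = points[points[:, 2] > ground_z]
    cells = {(int(x // cell), int(y // cell)) for x, y, _ in obstacles}
    labels, next_label = {}, 0
    for c in cells:
        if c in labels:
            continue
        labels[c] = next_label          # start a new component, flood-fill it
        q = deque([c])
        while q:
            cx, cy = q.popleft()
            for n in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if n in cells and n not in labels:
                    labels[n] = next_label
                    q.append(n)
        next_label += 1
    return ground, obstacles, next_label

pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.1],   # ground
                [2.0, 2.0, 1.5], [2.3, 2.2, 1.2],   # object A (same cell)
                [8.0, 8.0, 2.0]])                   # object B (far away)
g, o, n = segment(pts)
print(len(g), len(o), n)  # 2 3 2: two ground points, three obstacle points, two objects
```

In the full system this labeling, together with the point registration and projection steps, is the part offloaded to the GPU, since each cell or point can be processed largely independently.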