• Title/Summary/Keyword: 3D point set


Hard calibration of a structured light for Euclidean reconstruction (A structured light calibration method for 3D reconstruction)

  • 신동조;양성우;김재희
    • Proceedings of the IEEK Conference / 2003.11a / pp.183-186 / 2003
  • A vision sensor must be calibrated before a Euclidean shape can be reconstructed. Point-to-point calibration, also referred to as hard calibration, estimates the calibration parameters from a set of 3D-to-2D point pairs. We propose a new method for determining such 3D-to-2D pairs for structured-light hard calibration. The pairs are determined from the epipolar geometry between the camera image plane and the projector plane, together with a projector calibration grid pattern. Projector calibration is divided into two stages: a world 3D data acquisition stage and a corresponding 2D data acquisition stage. After the 3D data points are derived using the cross ratio, the corresponding 2D points in the projector plane are determined from the fundamental matrix and the horizontal grid ID of the projector calibration pattern. Euclidean reconstruction is then achieved by linear triangulation (a minimal sketch of this step follows below), and experimental results from simulation are presented.

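The linear triangulation step at the end of this abstract is the standard DLT construction. A minimal Python sketch, assuming known 3x4 camera and projector projection matrices (the names P_cam and P_proj and the function itself are illustrative, not from the paper):

```python
import numpy as np

def triangulate_dlt(P_cam, P_proj, x_cam, x_proj):
    """Linear (DLT) triangulation of one 3D point from a camera/projector
    correspondence. P_cam and P_proj are 3x4 projection matrices; x_cam and
    x_proj are the matching 2D points (u, v) in the two image planes."""
    A = np.vstack([
        x_cam[0]  * P_cam[2]  - P_cam[0],
        x_cam[1]  * P_cam[2]  - P_cam[1],
        x_proj[0] * P_proj[2] - P_proj[0],
        x_proj[1] * P_proj[2] - P_proj[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to Euclidean coordinates
```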

Automation technology for analyzing 3D point cloud data of construction sites

  • Park, Suyeul;Kim, Younggun;Choi, Yungjun;Kim, Seok
    • International conference on construction engineering and project management / 2022.06a / pp.1100-1105 / 2022
  • Denoising, registering, and detecting changes in a 3D digital map are generally conducted by skilled technicians, which leads to inefficiency and the intervention of individual judgment. Manual post-processing of 3D point cloud data from construction sites requires considerable time and resources. This study develops automation technology for analyzing 3D point cloud data of construction sites (a generic sketch of the denoising and homogenization steps follows below). Scanned data are automatically denoised, and the denoised data are stored in a designated repository. The stored data sets are automatically registered once the data set to be registered is prepared. In addition, regions with non-homogeneous point densities are converted into homogeneous data. A change detection function is developed to automatically analyze the degree of terrain change that occurs between time-series data sets.

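The abstract describes denoising and density homogenization only at a high level; a common way to approximate these two steps is statistical outlier removal plus voxel downsampling. A minimal NumPy/SciPy sketch under that assumption (function names and parameters are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def denoise_statistical(points, k=16, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours exceeds the global mean by std_ratio standard
    deviations."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # nearest neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

def voxel_downsample(points, voxel=0.05):
    """Homogenize density by keeping one representative point per voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```

Registration and change detection would then run on the cleaned clouds, e.g. with ICP and per-cell elevation differencing, but those steps are not sketched here.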

Machining Tool Path Generation for Point Set

  • Park, Se-Youn;Shin, Ha-Yong
    • International Journal of CAD/CAM / v.8 no.1 / pp.45-53 / 2009
  • As point sampling technology evolves rapidly, there is an increasing need to generate tool paths from dense point sets without creating intermediate models such as triangular meshes or surfaces. In this paper, we present a new tool path generation method for point sets using Euclidean distance fields based on Algebraic Point Set Surfaces (APSS). Once a Euclidean distance field of the target shape is obtained, it is fairly easy to generate tool paths (a simplified distance-field sketch follows below). To compute the distance from a point in 3D space to the point set, we locally fit an algebraic sphere using the moving least squares (MLS) method for accurate and simple calculation; this process is repeated until it converges. The main advantages of our approach are: (1) tool paths are computed directly from the point set without constructing triangular meshes or surfaces and their offsets, and (2) unlike other methods that use a triangular mesh or surface model, we do not have to worry about local interference in concave regions. Experimental results show that our approach can generate sufficiently accurate tool paths from a point set robustly and efficiently.
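
As a rough illustration only, the sketch below samples an unsigned Euclidean distance field with a plain nearest-neighbour query, a much simpler stand-in for the paper's APSS/MLS algebraic-sphere fitting; a constant-offset tool path could then be traced along an iso-contour of the sampled field.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_field(points, grid_min, grid_max, n=64):
    """Sample the unsigned distance from each node of an n x n x n grid
    (spanning [grid_min, grid_max]^3) to the nearest point of the set."""
    tree = cKDTree(points)
    axis = np.linspace(grid_min, grid_max, n)
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    queries = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
    d, _ = tree.query(queries)
    return d.reshape(n, n, n)

# A path at constant offset r from the shape corresponds to the iso-surface
# d == r of this field and could be extracted with a marching-cubes routine.
```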

Point-Based Simplification Using Moving Least Squares (Point-based simplification using an approximation function)

  • 조현철;배진석;김창헌
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.1312-1314 / 2004
  • This paper proposes a new simplification algorithm that simplifies a polygonal mesh reconstructed from a 3D point set while taking the original point set into account. Previous methods compute the error using only the mesh information, so the difference between the original and the simplified model grows as the simplification proceeds. Because the proposed method simplifies the reconstructed model using the original point data, the simplified model remains similar to the original. We show several simplification results to demonstrate the usability of our method.


3D Mesh Creation using 2D Delaunay Triangulation of 3D Point Clouds (3D mesh generation using 2D Delaunay triangulation)

  • Choi, Ji-Hoon;Yoon, Jong-Hyun;Park, Jong-Seung
    • Journal of the Korea Computer Graphics Society / v.13 no.4 / pp.21-27 / 2007
  • The 3D Delaunay triangulation is the most widely used method for creating a mesh via triangulation of a 3D point cloud. However, the method involves a heavy computational cost and is therefore not appropriate for surface triangulation in many interactive applications. In this paper, we propose an efficient triangulation method to create a surface mesh from a 3D point cloud. We divide the set of object points into multiple subsets and apply the 2D Delaunay triangulation to each subset: the given 3D point cloud is cut into slices with respect to the OBB (oriented bounding box) of the point set, and the 2D Delaunay triangulation applied to each subset produces a partial triangulation (a minimal sketch of this slicing step follows below). The union of the partial triangulations constitutes the global mesh. As a post-processing step, we eliminate false edges introduced during the splitting and improve the results. The proposed method can be effectively applied to various image-based modeling applications.

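A minimal sketch of the slicing idea, using the PCA axes as a cheap stand-in for the OBB and SciPy's 2D Delaunay triangulation per slab (this is not the authors' implementation, and the false-edge cleanup they describe is omitted):

```python
import numpy as np
from scipy.spatial import Delaunay

def slice_and_triangulate(points, n_slices=8):
    """Split a point cloud into slabs along its dominant principal axis and
    run a 2D Delaunay triangulation on each slab projected onto the two
    remaining axes. Returns one triangle index array (into the original
    point array) per slab."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)  # principal axes
    local = centered @ Vt.T                  # coordinates in the PCA frame
    edges = np.linspace(local[:, 0].min(), local[:, 0].max(), n_slices + 1)
    meshes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((local[:, 0] >= lo) & (local[:, 0] <= hi))[0]
        if len(idx) < 3:
            continue
        tri = Delaunay(local[idx, 1:])       # 2D triangulation of the slab
        meshes.append(idx[tri.simplices])    # triangles in global point indices
    return meshes
```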

Comparisons of Object Recognition Performance with 3D Photon Counting & Gray Scale Images

  • Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea / v.14 no.4 / pp.388-394 / 2010
  • In this paper, the object recognition performance of a photon counting integral imaging system is quantitatively compared with that of a conventional gray scale imaging system. For 3D imaging of objects with a small number of photons, the elemental image set of a 3D scene is obtained using an integral imaging setup. We assume that the elemental image detection follows a Poisson distribution. A computational geometrical ray back-propagation algorithm and a parametric maximum-likelihood estimator are applied to the photon counting elemental image set in order to reconstruct the original 3D scene. To evaluate the photon counting object recognition performance, the normalized correlation peaks between the reconstructed 3D scenes are calculated for both varied and fixed total numbers of photons in the reconstructed sectional image while changing the total number of image channels in the integral imaging system (a toy sketch of the photon-counting model and the correlation score follows below). It is quantitatively illustrated that the recognition performance of the photon counting integral imaging system can be comparable to that of a conventional gray scale imaging system once the number of image viewing channels in the photon counting integral imaging (PCII) system is increased up to a threshold point. We also present experiments to find the threshold number of image channels in the PCII system that guarantees recognition performance comparable to a gray scale imaging system. To the best of our knowledge, this is the first report comparing object recognition performance with 3D photon counting and gray scale images.
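
A toy illustration of two ingredients this abstract relies on, a Poisson photon-counting observation model and a normalized correlation score (this is not the paper's back-propagation reconstruction or maximum-likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def photon_limited(image, n_photons):
    """Simulate a photon-counting observation: the count in each pixel is
    Poisson-distributed with mean proportional to the pixel irradiance."""
    p = image / image.sum()
    return rng.poisson(n_photons * p)

def normalized_correlation(a, b):
    """Normalized correlation between two reconstructed scenes."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())
```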

NOTE ON THE PINNED DISTANCE PROBLEM OVER FINITE FIELDS

  • Koh, Doowon
    • Journal of the Chungcheong Mathematical Society / v.35 no.3 / pp.227-234 / 2022
  • Let 𝔽_q be a finite field with q elements, where q is odd. In this article, we prove that if E ⊆ 𝔽_q^d, d ≥ 2, and |E| ≥ q, then there exists a set Y ⊆ 𝔽_q^d with |Y| ~ q^d such that for all y ∈ Y, the number of distances between the point y and the set E is ~ q. As a corollary, we obtain that for each set E ⊆ 𝔽_q^d with |E| ≥ q, there exists a set Y ⊆ 𝔽_q^d with |Y| ~ q^d so that any set E ∪ {y} with y ∈ Y determines a positive proportion of all possible distances. An averaging argument and the pigeonhole principle play a crucial role in proving our results.

Feature Template-Based Sweeping Shape Reverse Engineering Algorithm using a 3D Point Cloud

  • Kang, Tae Wook;Kim, Ji Eun;Hong, Chang Hee;Hwa, Cho Gun
    • International conference on construction engineering and project management / 2015.10a / pp.680-681 / 2015
  • This study develops an algorithm that automatically performs reverse engineering on three-dimensional (3D) sweeping shapes using a user's pre-defined feature templates and 3D point cloud data (PCD) of the sweeping shapes. Existing methods extract 3D sweeping shapes by extracting the points on a PCD cross-section together with its center point, fitting a curve, and connecting the center points. However, a drawback of existing methods is the difficulty of creating a 3D sweeping shape in which the user's preferred feature center points and parameters are applied. This study extracts shape features from cross-sectional points obtained automatically from the PCD and compares them with pre-defined feature templates for similarity, thereby acquiring the most similar template cross-section (a simple similarity sketch follows below). Fitting the most similar template cross-section during sweeping shape modeling makes the reverse engineering process automatic.

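A hedged sketch of the cross-section-to-template comparison, using the mean nearest-neighbour residual after centering and scale normalization as a simple similarity measure (the paper's actual feature comparison may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_template(section_pts, templates):
    """Return the index of the pre-defined 2D template cross-section most
    similar to an extracted cross-section."""
    def normalize(p):
        p = p - p.mean(axis=0)
        return p / (np.linalg.norm(p, axis=1).max() + 1e-12)
    section = normalize(section_pts)
    scores = []
    for template in templates:
        d, _ = cKDTree(normalize(template)).query(section)
        scores.append(d.mean())              # mean residual to this template
    return int(np.argmin(scores))
```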

A Comparison of 3D Reconstruction through the Passive and Pseudo-Active Acquisition of Images (A comparison of 3D space reconstruction via manual and semi-automatic image acquisition)

  • Jeona, MiJeong;Kim, DuBeom;Chai, YoungHo
    • Journal of Broadcast Engineering / v.21 no.1 / pp.3-10 / 2016
  • In this paper, two reconstructed point cloud sets containing 3D feature information are analyzed. For the 3D reconstruction of a building interior, the first image set is taken by sequential passive camera movement along a regular grid path, and the second set comes from a laser scanning process. Key points matched across all images are obtained by the SIFT (Scale-Invariant Feature Transform) algorithm and are used for the registration of the point cloud data (a minimal matching sketch follows below). The reported results are the number of points, the average density of the point clouds, and the point cloud generation time. Experimental results show that images from additional sensors, as well as camera images, are necessary for more accurate 3D reconstruction of a building interior.
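
The SIFT matching step can be illustrated with OpenCV (assuming OpenCV >= 4.4, where SIFT is available as cv2.SIFT_create); the point cloud registration itself is omitted:

```python
import cv2

def sift_matches(img1_path, img2_path, ratio=0.75):
    """Detect SIFT keypoints in two images and keep the matches that pass
    Lowe's ratio test; such matches can drive the registration step."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp1, kp2, good
```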

Comparison of Size between Direct Measurement and 3D Body Scanning (A comparative study of direct measurement and 3D body scanning sizes of Chinese adult women)

  • Cha, Su-Joung
    • Journal of Fashion Business / v.16 no.1 / pp.150-159 / 2012
  • This study analyzes the differences between 3D body scanning measurements and direct measurements of the same subjects. The subjects are female university students in China, and the 3D data were analyzed with a 3D body measurement software system. The conclusions are as follows. For circumferences, the error between direct measurement and 3D body scanning ranges from 4.9 mm to 62.2 mm; the direct-measurement neck circumference is larger than the 3D body scanning value. The height errors range from 0.6 mm to 51 mm; the underbust, waist, and hip heights measured directly are higher than the 3D body scanning values. The width differences range from 3.8 mm to 21.9 mm, a relatively narrow range compared with the other categories; only the direct-measurement neck width is larger than the 3D body scanning value. The length errors range from 0.3 mm to 41.8 mm; the 3D body scanning values for lateral neck point to waistline, upper-arm length, arm length, neck shoulder point to breast point, shoulder center point to breast point, and lateral shoulder point to breast point are longer than the direct measurements, so their errors are negative. We intended to use the same measurement points for direct measurement and 3D body scanning, but errors remain because direct-measurement points are located by a person while 3D body scanning points are set by an automatic system, so the two sets of measurement points do not coincide. A standard for setting up measurement points is therefore needed.