• Title/Summary/Keyword: 2D projection (2차원 투영)


Non-rigid Point-Cloud Contents Registration Method used Local Similarity Measurement (부분 유사도 측정을 사용한 비 강체 포인트 클라우드 콘텐츠 정합 방법)

  • Lee, Heejea; Yun, Junyoung; Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.829-831 / 2022
  • Point-cloud content records moving content as consecutive frames of 3D position information with corresponding colors. Rigid point-cloud data are registered with the classical yet still powerful ICP algorithm, but non-rigid point-cloud content with local motion vectors cannot be registered frame to frame with conventional ICP. This paper proposes a method that pairs points across frames using a local probabilistic model, computes a motion vector for each individual point, and uses these vectors for registration. The reference data are structured by a 2D projection, the data to be registered are projected onto the same plane to select candidate points, and suitable pairs are chosen among the candidates by comparing depth values and measuring coordinate and color similarity. Once the pairs are found, registration is performed by adding each pair's motion vector, which makes registration possible even for non-rigid point-cloud content.

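The pairing step described in this abstract can be illustrated with a minimal sketch: project both frames onto a 2D grid, collect candidate points that fall into the same cell, score them by depth and color similarity, and add the resulting per-point motion vectors. The grid cell size, weights, and array layout below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def register_non_rigid(src, dst, cell=0.05, w_depth=1.0, w_color=0.5):
    """Pair points of two frames via a 2D (x, y) grid projection and
    move each source point by its estimated motion vector.

    src, dst: (N, 6) arrays of [x, y, z, r, g, b] (illustrative layout).
    """
    # Structure the destination frame: bucket points by projected 2D cell.
    buckets = {}
    for j, p in enumerate(dst):
        key = (int(p[0] // cell), int(p[1] // cell))
        buckets.setdefault(key, []).append(j)

    out = src.copy()
    for i, p in enumerate(src):
        key = (int(p[0] // cell), int(p[1] // cell))
        cand = buckets.get(key, [])
        if not cand:
            continue  # no candidate in this cell; leave the point as-is
        # Score candidates by depth difference plus color distance.
        cands = dst[cand]
        score = (w_depth * np.abs(cands[:, 2] - p[2])
                 + w_color * np.linalg.norm(cands[:, 3:6] - p[3:6], axis=1))
        best = cands[int(np.argmin(score))]
        # Motion vector of the matched pair, added to the source point.
        out[i, :3] += best[:3] - p[:3]
    return out
```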

3D Augmented pose estimation through GAN based image synthesis (GAN 기반 이미지 합성을 통한 3차원 증강 자세 추정)

  • Park, Chan; Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.667-669 / 2022
  • Pose estimation from 2D images suffers from reduced accuracy when recognition is hindered by overlapping or occluded joints. This paper proposes a technique that first augments a 2D image into 3D with a GAN and then estimates the pose. From the planar coordinate values of the 2D image, the GAN produces a 3D image that reflects a noise-vector z-axis value and the direction of the light falling on the subject. After this synthesis step, DeepLabCut is used to extract joint coordinates, and pose estimation and classification are performed. This is expected to improve pose-estimation accuracy over purely 2D estimation and can later be applied to abnormal-behavior detection built on top of it.
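
As a rough illustration of the conditioning scheme described in this abstract, the toy generator below maps a 2D planar coordinate plus a noise z value and a light-direction vector to a 3D coordinate. The network shape, input layout, and usage are assumptions made for illustration only; the paper's actual GAN architecture and the DeepLabCut stage are not reproduced here.

```python
import torch
import torch.nn as nn

class DepthGenerator(nn.Module):
    """Toy generator: (x, y) plane coordinate + noise z + light direction -> (x, y, z)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + 1 + 3, hidden),  # (x, y) + noise z + light direction (3D)
            nn.ReLU(),
            nn.Linear(hidden, 3),          # augmented 3D coordinate
        )

    def forward(self, xy, noise_z, light_dir):
        return self.net(torch.cat([xy, noise_z, light_dir], dim=-1))

# Usage: augment a batch of 2D keypoints before pose estimation/classification.
gen = DepthGenerator()
xy = torch.rand(8, 2)       # planar coordinates taken from the 2D image
z = torch.randn(8, 1)       # noise-vector z value
light = torch.randn(8, 3)   # direction of light projected onto the subject
xyz = gen(xy, z, light)     # (8, 3) augmented coordinates
```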

Simple Method of Integrating 3D Data for Face Modeling (얼굴 모델링을 위한 간단한 3차원 데이터 통합 방법)

  • Yoon, Jin-Sung; Kim, Gye-Young; Choi, Hyung-Ill
    • The Journal of the Korea Contents Association / v.9 no.4 / pp.34-44 / 2009
  • Integrating 3D data acquired from multiple views is one of the most important techniques in 3D modeling. However, because of surface-scanning noise and the modification of the vertices that make up a surface, existing integration methods are inadequate for some applications. In this paper, we propose a method of integrating surfaces using local surface topology. We first find all boundary vertex pairs that satisfy a prescribed geometric condition on adjacent surfaces and then compute a 2D plane suited to each vertex pair. Using each vertex pair and the neighboring boundary vertices projected onto its 2D plane, we produce polygons and divide them into triangles that are inserted into the empty space between the adjacent surfaces. Because the proposed method uses local surface topology and does not modify the vertices of the input surfaces when merging them into one surface, it is robust and simple. To composite a textured 3D model, we also integrate the transformed textures onto a 2D image plane computed with a cylindrical projection; the textures are merged along partition lines chosen to respect the attributes of the face object. Experimental results on real object data show that the suggested method is simple and robust.
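
The texture-integration step above relies on a cylindrical projection of the face surface onto a 2D image plane. A minimal sketch of such a projection is shown below; the choice of the y axis as the cylinder axis and the image resolution are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def cylindrical_projection(vertices, width=512, height=512):
    """Map 3D vertices (N, 3) to 2D texture coordinates on a cylinder
    whose axis is the y axis (an illustrative convention)."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    theta = np.arctan2(z, x)                            # angle around the cylinder axis
    u = (theta + np.pi) / (2 * np.pi) * (width - 1)     # unwrap the angle to image columns
    v = (y - y.min()) / (y.max() - y.min() + 1e-9) * (height - 1)  # height to image rows
    return np.stack([u, v], axis=1)                     # per-vertex (u, v) image coordinates
```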

3D Object Recognition Using Appearance Model Space of Feature Point (특징점 Appearance Model Space를 이용한 3차원 물체 인식)

  • Joo, Seong Moon; Lee, Chil Woo
    • KIPS Transactions on Software and Data Engineering / v.3 no.2 / pp.93-100 / 2014
  • 3D object recognition using only 2D images is difficult because each image differs according to the viewing direction of the camera. Because the SIFT algorithm defines local features of the projected images, recognition is particularly limited for input images with strong perspective distortion. In this paper, we propose an object recognition method that improves on SIFT by using several sequential images captured while rotating a 3D object around a rotation axis. During recognition, we exploit the geometric relationship between adjacent images and merge the images into a generated feature space. To isolate the effectiveness of the proposed algorithm, we keep the camera position and illumination conditions constant. The method can recognize appearances of 3D objects that the conventional SIFT-only approach cannot.
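
A minimal sketch of building a merged feature space from sequential views with OpenCV's SIFT, assuming the images have already been captured around a fixed rotation axis. Simply stacking descriptors and using Lowe's ratio test is an illustrative baseline; the paper's geometric merging between adjacent views is not reproduced here.

```python
import cv2
import numpy as np

def build_feature_space(view_images):
    """Extract SIFT descriptors from each view and stack them into one model."""
    sift = cv2.SIFT_create()
    descs = []
    for img in view_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, d = sift.detectAndCompute(gray, None)
        if d is not None:
            descs.append(d)
    return np.vstack(descs)

def recognize(query_img, model_descs, ratio=0.75):
    """Match a query image against the merged feature space (Lowe's ratio test)."""
    sift = cv2.SIFT_create()
    gray = cv2.cvtColor(query_img, cv2.COLOR_BGR2GRAY)
    _, qd = sift.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(qd, model_descs, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)   # a simple match score
```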

The study of depth information acquisition in 2D pattern image (2차원 패턴 영상에서의 3차원 정보취득에 관한 연구)

  • Kim, Tae-Eun
    • Journal of Digital Contents Society / v.6 no.1 / pp.35-39 / 2005
  • Estimating 3D information from 2D images is an important problem in computer vision. However, most related work has focused on analyzing changes in 2D images, which requires considerable time to solve complex equations and expensive devices. In this paper, we actively project a sinusoidal pattern onto the object, measure the phase change caused by the distortion that follows the object's shape, and use that phase change to estimate depth information.

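The phase-measurement idea can be sketched with standard N-step phase shifting: capture N images of phase-shifted sinusoidal patterns, recover the wrapped phase with an arctangent, and convert the phase distortion relative to a flat reference into relative depth. The shift convention and depth scale factor below are assumptions; the paper's exact optical setup may differ.

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting, assuming patterns I_k = A + B*cos(phi - 2*pi*k/N).

    images: list of N captured intensity images (2D arrays)."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(im * np.sin(s) for im, s in zip(images, shifts))
    den = sum(im * np.cos(s) for im, s in zip(images, shifts))
    return np.arctan2(num, den)        # wrapped phase in [-pi, pi]

def relative_depth(object_phase, reference_phase, scale=1.0):
    """Depth is proportional to the phase distortion relative to a flat reference;
    the scale factor depends on the projector-camera geometry (illustrative here)."""
    dphi = np.angle(np.exp(1j * (object_phase - reference_phase)))  # re-wrap the difference
    return scale * dphi
```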

Concepts of System Function and Modulation-Demodulation based Reconstruction of a 3D Object Coordinates using Active Method (시스템 함수 및 변복조 개념 적용 능동 방식 3차원 물체 좌표 복원)

  • Lee, Deokwoo; Kim, Jisu; Park, Cheolhyeong
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.5 / pp.530-537 / 2019
  • In this paper we propose a novel formulation of the 3D reconstruction problem that employs the concept of a system function, defined as the ratio of the output signal to the input signal. As in classical determination of a system function (or system response), the system function is determined by choosing (or defining) appropriate input and output signals; that is, 3D reconstruction with structured circular light patterns is reformulated as determining a system function from input and output signals. Two reconstruction algorithms are introduced. The first defines the input and output signals as the projected circular light patterns and the camera images overlaid with those patterns, respectively, and leads to a system-identification problem. The second defines the input and output as the 3D coordinates of the object surface and the captured camera image, and leads to estimating the input signal using concepts from modulation-demodulation theory. The proposed approach is substantiated with experimental results.
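
The "ratio of output to input" idea can be illustrated in one dimension: treat the projected pattern as the input, the captured intensity profile as the output, and take their ratio in the frequency domain with a small regularizer. This is only a generic signal-processing sketch of a system-function estimate, not the paper's reconstruction algorithm.

```python
import numpy as np

def system_function(input_signal, output_signal, eps=1e-6):
    """Estimate H(f) = Y(f) / X(f) from 1D input/output samples.

    eps regularizes frequencies where the input has little energy."""
    X = np.fft.rfft(input_signal)
    Y = np.fft.rfft(output_signal)
    return Y * np.conj(X) / (np.abs(X) ** 2 + eps)

def apply_system(input_signal, H):
    """Predict the output of the identified system for a new input."""
    X = np.fft.rfft(input_signal)
    return np.fft.irfft(X * H, n=len(input_signal))
```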

Construction of 3 Dimensional Object from Orthographic Views (2차원 평면투영도로부터 3차원 물체의 구성)

  • Kim, Eung-Kon
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.12 / pp.1825-1833 / 1990
  • This paper proposes an efficient algorithm that constructs a 3D solid object from three orthographic views. The algorithm takes the vertex and edge information of the three views as input, generates 2D faces and then 3D vertices, edges, and faces, and finally compares the 2D projections of the 3D faces with the faces obtained from the orthographic views. The algorithm is useful for CAD systems, 3D scene analysis, and object modeling for real-time animation, and has been implemented in C on an IRIS workstation. Its effectiveness is demonstrated with examples of aircraft models.

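A classical first step in this kind of reconstruction is generating candidate 3D vertices whose projections appear in all three views, before edges and faces are verified against the views. The sketch below shows only that vertex-candidate step, assuming the front view carries (x, z), the top view (x, y), and the side view (y, z) coordinates; the coordinate convention and tolerance are assumptions, not the paper's exact procedure.

```python
def candidate_vertices(front_xz, top_xy, side_yz, tol=1e-6):
    """Generate 3D vertex candidates consistent with all three orthographic views.

    front_xz: iterable of (x, z) vertices seen in the front view
    top_xy:   iterable of (x, y) vertices seen in the top view
    side_yz:  iterable of (y, z) vertices seen in the side view
    """
    close = lambda a, b: abs(a - b) <= tol
    candidates = []
    for (x, z) in front_xz:
        for (tx, y) in top_xy:
            if not close(x, tx):
                continue
            # A 3D vertex (x, y, z) must also project into the side view.
            if any(close(y, sy) and close(z, sz) for (sy, sz) in side_yz):
                candidates.append((x, y, z))
    return candidates
```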

A Study on Image Recognition using 2D Auto-tuning Template (2 차원 자동 변형 템플릿을 사용하는 영상인식에 대한 연구)

  • Han, Youngmo
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.587-589 / 2019
  • Image recognition based on template matching is convenient to use, but good results are hard to obtain when the pose of the template does not match that of the image being matched. To compensate for this drawback, this paper presents a scheme that maintains performance even when the template and the matched image differ in 2D orientation and scale. For ease of use, the algorithm is designed so that it can work without additional information beyond the template itself, such as distance information from an orthographic projection.
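
The paper's auto-tuning template is not reproduced here; as a point of reference, the sketch below shows the brute-force baseline such a method aims to avoid: searching over a grid of 2D rotations and scales of the template with OpenCV's matchTemplate. The rotation and scale grids are illustrative, and inputs are assumed to be grayscale uint8 images.

```python
import cv2

def match_over_pose(image, template, angles=range(0, 360, 15),
                    scales=(0.8, 1.0, 1.2)):
    """Brute-force search over 2D rotation and scale of the template.

    image, template: grayscale uint8 arrays (illustrative assumption)."""
    best = (-1.0, None)   # (score, (x, y, angle, scale))
    for s in scales:
        scaled = cv2.resize(template, None, fx=s, fy=s)
        sh, sw = scaled.shape[:2]
        for a in angles:
            # Rotate in place; corner clipping is ignored in this sketch.
            M = cv2.getRotationMatrix2D((sw / 2, sh / 2), a, 1.0)
            rotated = cv2.warpAffine(scaled, M, (sw, sh))
            if sh > image.shape[0] or sw > image.shape[1]:
                continue
            res = cv2.matchTemplate(image, rotated, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            if score > best[0]:
                best = (score, (loc[0], loc[1], a, s))
    return best
```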

Epipolar Resampling for High Resolution Satellite Imagery Based on Parallel Projection (평행투영 기반의 고해상도 위성영상 에피폴라 재배열)

  • Noh, Myoung-Jong; Cho, Woo-Sug; Chang, Hwi-Jeong; Jeong, Ji-Yeon
    • Journal of Korean Society for Geospatial Information Science / v.15 no.4 / pp.81-88 / 2007
  • The geometry of a satellite image captured by a linear CCD sensor differs from that of a frame-camera image: because the exterior orientation parameters of a linear CCD image vary from scan line to scan line, its epipolar geometry also differs from that of a frame camera. In this paper, we propose a method for resampling a linear CCD satellite image into epipolar geometry under the assumption that the image is formed by parallel rather than perspective projection, with a 2D affine sensor model based on parallel projection. IKONOS stereo images, which are high-resolution linear CCD images, were used for the experiments. The spatial accuracy of the 2D affine sensor model is investigated, and the accuracy of the epipolar-resampled image with the RFM is presented.

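A 2D affine sensor model based on parallel projection expresses image coordinates as affine functions of the 3D ground coordinates. A minimal least-squares fit of its eight parameters from ground control points is sketched below; the (row, col) layout and units are assumptions for illustration.

```python
import numpy as np

def fit_affine_sensor_model(ground_xyz, image_rc):
    """Fit row = a1*X + a2*Y + a3*Z + a4 and col = a5*X + a6*Y + a7*Z + a8.

    ground_xyz: (N, 3) ground coordinates of control points
    image_rc:   (N, 2) corresponding image (row, col) coordinates
    Returns the two 4-vectors of affine parameters."""
    A = np.hstack([ground_xyz, np.ones((len(ground_xyz), 1))])   # (N, 4) design matrix
    row_params, *_ = np.linalg.lstsq(A, image_rc[:, 0], rcond=None)
    col_params, *_ = np.linalg.lstsq(A, image_rc[:, 1], rcond=None)
    return row_params, col_params

def project(ground_xyz, row_params, col_params):
    """Project ground points into the image with the fitted parallel-projection model."""
    A = np.hstack([ground_xyz, np.ones((len(ground_xyz), 1))])
    return np.stack([A @ row_params, A @ col_params], axis=1)
```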

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae; Kim, Junsik; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.26 no.6 / pp.692-703 / 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and point clouds require huge storage and high-performance computing devices to provide various services. Video-based Point Cloud Compression (V-PCC), currently being standardized by MPEG, is a projection-based method that projects the point cloud onto 2D planes and then compresses the result with 2D video codecs. V-PCC compresses point-cloud objects using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information that describes the relationship between the 2D planes and 3D space. Increasing the density of a point cloud or enlarging an object is generally done with 3D calculations, but these are complicated, time-consuming, and make it difficult to determine the correct location of a new point. This paper proposes a method, within V-PCC, that generates additional points at more accurate locations with less computation by applying 2D interpolation to the images onto which the point cloud is projected.
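
The core idea, creating new points by interpolating the 2D images that V-PCC already produces rather than computing in 3D, can be sketched as upsampling a patch's geometry (depth) image and occupancy map and re-projecting the occupied pixels back to 3D. The orthographic back-projection below is a simplified stand-in for V-PCC's actual patch metadata, and the upsampling factor is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import zoom

def densify_patch(geometry, occupancy, factor=2):
    """Bilinearly upsample a projected geometry (depth) image and its occupancy
    map, creating additional samples in 2D rather than in 3D space."""
    up_geo = zoom(geometry.astype(float), factor, order=1)          # bilinear depth interpolation
    up_occ = zoom(occupancy.astype(float), factor, order=0) > 0.5   # nearest-neighbor mask
    return up_geo, up_occ

def back_project(geometry, occupancy):
    """Re-project occupied pixels (u, v, depth) to 3D; a simplified orthographic
    stand-in for V-PCC's patch-to-3D mapping."""
    v, u = np.nonzero(occupancy)
    return np.stack([u, v, geometry[v, u]], axis=1)

# Usage: densify one patch, then rebuild its 3D points.
# up_geo, up_occ = densify_patch(geometry_img, occupancy_map, factor=2)
# points = back_project(up_geo, up_occ)
```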