• Title/Summary/Keyword: 2차원 투영 (2D projection)

Search results: 223

A Study on 3D-Transformation of the Krassovsky Coordinate System (Krassovsky 타원체 좌표의 3차원 변환에 대한 연구)

  • 김감래;전호원;현민호
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.19 no.2
    • /
    • pp.117-123
    • /
    • 2001
  • Because Russian maps are being held and their topographic information is needed, an accuracy analysis of the Russian maps and of their relationship to South Korean maps is required. To obtain exact location information from a map based on a different reference datum, coordinates must be transformed between maps that use different ellipsoids. In this paper, in order to evaluate the accuracy between two maps based on different ellipsoids, the theory of map projection and coordinate transformation is studied. Points that can be recognized on both maps are then selected for accuracy evaluation. After obtaining coordinate values for these points in the same area, accuracy is evaluated for both the geodetic coordinates and the TM coordinates. The study concludes that maps with different reference datums can be used together if the exact origin shift can be obtained and applied.

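As a rough illustration of the datum-shift idea discussed in the entry above (not the authors' implementation), the sketch below converts a geodetic coordinate on the Krassovsky ellipsoid to geocentric XYZ and applies an assumed three-parameter origin shift toward WGS84. The shift values dx, dy, dz and the sample point are placeholders; a real transformation would use parameters estimated from common control points, and the inverse conversion back to geodetic coordinates (on the WGS84 ellipsoid) would follow.

```python
import numpy as np

def geodetic_to_ecef(lat_deg, lon_deg, h, a, f):
    """Convert geodetic latitude/longitude/height to geocentric XYZ (meters)."""
    e2 = f * (2 - f)                                  # first eccentricity squared
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    N = a / np.sqrt(1 - e2 * np.sin(lat) ** 2)        # prime-vertical radius
    x = (N + h) * np.cos(lat) * np.cos(lon)
    y = (N + h) * np.cos(lat) * np.sin(lon)
    z = (N * (1 - e2) + h) * np.sin(lat)
    return np.array([x, y, z])

# Krassovsky 1940 ellipsoid constants (a, f).
KRASS_A, KRASS_F = 6378245.0, 1 / 298.3

# Placeholder origin shift in meters; real values come from common control points.
dx, dy, dz = 28.0, -130.0, -95.0

xyz_krass = geodetic_to_ecef(43.1, 131.9, 50.0, KRASS_A, KRASS_F)  # sample point
xyz_shifted = xyz_krass + np.array([dx, dy, dz])                   # three-parameter shift
print(xyz_krass, xyz_shifted)
```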

Selecting Representative Views of 3D Objects By Affinity Propagation for Retrieval and Classification (검색과 분류를 위한 친근도 전파 기반 3차원 모델의 특징적 시점 추출 기법)

  • Lee, Soo-Chahn;Park, Sang-Hyun;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of Broadcast Engineering
    • /
    • v.13 no.6
    • /
    • pp.828-837
    • /
    • 2008
  • We propose a method to select representative views of single objects and of classes of objects for 3D object retrieval and classification. Our method is based on projected 2D shapes, or views, of the 3D objects, where the representative views are selected by applying affinity propagation to cluster uniformly sampled views. Affinity propagation assigns prototypes to each cluster during the clustering process, thereby providing a natural criterion for selecting views. We recursively apply affinity propagation to the selected views of the objects in each class to obtain representative views of classes of objects. Supporting classification as well as retrieval allows more effective management of large-scale retrieval databases, since exhaustive search over all objects can be avoided by first classifying the query object. We demonstrate the effectiveness of the proposed method for both retrieval and classification with experimental results on the Princeton benchmark database [16].
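
A minimal sketch of the clustering step described in the entry above, using scikit-learn's AffinityPropagation on stand-in view descriptors (random vectors here; the paper uses 2D shape descriptors of projected views). The exemplar indices returned by the algorithm play the role of the representative views.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Stand-in for descriptors of uniformly sampled views of one 3D object
# (e.g. 100 views, each described by a 64-dimensional shape descriptor).
view_descriptors = rng.normal(size=(100, 64))

ap = AffinityPropagation(damping=0.7, random_state=0).fit(view_descriptors)
representative_idx = ap.cluster_centers_indices_   # exemplar views chosen by AP
labels = ap.labels_                                # cluster assignment per view

print("representative views:", representative_idx)
print("number of clusters:", len(representative_idx))
```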

Building Dataset of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Junhyuk Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.21-30
    • /
    • 2024
  • In this paper, we propose a method to build a sample dataset of the features of eight sensor-only facilities constructed as infrastructure for autonomous cooperative driving. Features are extracted from point cloud data acquired by LiDAR and assembled into a sample dataset for recognizing the facilities. To build the dataset, eight sensor-only facilities with high-brightness reflector sheets and a sensor acquisition system were developed. To extract the features of facilities located within a certain measurement distance from the acquired point cloud data, the DBSCAN method is first applied to the points and a modified Otsu method to the reflected intensity; a cylindrical projection is then applied to the extracted points. The 3D point coordinates, the projected 2D coordinates, and the reflection intensity are set as the features of a facility, and the dataset is built together with labels. To check the effectiveness of the facility dataset built from the LiDAR data, a common CNN model was selected, trained, and tested, showing an accuracy of about 90% or more and confirming the feasibility of facility recognition. Through continued experiments, we will improve the feature extraction algorithm for building the proposed dataset, improve its performance, and develop a dedicated model for recognizing sensor-only facilities for autonomous cooperative driving.
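
A rough sketch of the point-processing steps named in the abstract above: DBSCAN to isolate a candidate facility cluster, followed by a cylindrical projection that unrolls the clustered 3D points into 2D coordinates. The synthetic points and parameter values are placeholders, and the intensity-based (modified Otsu) step is omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Synthetic LiDAR points: a small cylindrical facility plus sparse background noise.
theta = rng.uniform(0, 2 * np.pi, 500)
facility = np.c_[0.3 * np.cos(theta), 0.3 * np.sin(theta), rng.uniform(0, 1.5, 500)]
noise = rng.uniform(-5, 5, size=(200, 3))
points = np.vstack([facility, noise])

labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(points)
cluster = points[labels == 0]            # take the dense cluster as the facility

# Cylindrical projection around the cluster's vertical axis:
# horizontal azimuth angle -> u, height -> v.
center = cluster[:, :2].mean(axis=0)
dx, dy = cluster[:, 0] - center[0], cluster[:, 1] - center[1]
u = np.arctan2(dy, dx)                   # azimuth angle (radians)
v = cluster[:, 2]                        # height
projected_2d = np.c_[u, v]
print(projected_2d.shape)
```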

An Efficient Walkthrough from Two Images using Spidery Mesh Interface and View Morphing (Spidery 매쉬 인터페이스와 뷰 모핑을 이용한 두 이미지로부터의 효율적인 3차원 애니메이션)

  • Cho, Hang-Shin;Kim, Chang-Hun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.7 no.2
    • /
    • pp.132-140
    • /
    • 2001
  • This paper proposes an efficient walkthrough animation from two images of the same scene. Tour Into the Picture (TIP) makes animation easy and fast, enabling a walkthrough from a single image, but its foreground objects lose realism when the viewpoint moves from side to side; view morphing uses only 2D transitions between two images but restricts the camera path to the line between the two views. By combining the advantages of these two image-based techniques, this paper suggests a new virtual navigation technique that enables natural scene transformation when the viewpoint changes in the side-to-side direction as well as in the depth direction. In our method, view morphing is employed only for foreground objects, and the background scene, which is perceived less attentively, is mapped onto a cube-like 3D model as in TIP, so as to save laborious 3D reconstruction costs and improve visual realism at the same time. To do this, we newly define a camera transformation between the two images from the relationship between the spidery mesh transformation and its corresponding 3D view change. The resulting animation shows that our method creates a realistic 3D virtual navigation using a simple interface.

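To illustrate the in-between view idea behind view morphing (and not the paper's full pipeline with the spidery mesh and background cube model), the sketch below linearly interpolates corresponding feature points between two views. In actual view morphing the two images are first prewarped to a parallel configuration (e.g. via homographies derived from the fundamental matrix) so that this linear interpolation is geometrically valid; the point sets here are placeholders.

```python
import numpy as np

# Corresponding 2D feature points in the two source views (placeholder values).
pts_view0 = np.array([[120.0, 80.0], [300.0, 90.0], [210.0, 240.0]])
pts_view1 = np.array([[140.0, 85.0], [330.0, 95.0], [235.0, 250.0]])

def interpolate_views(p0, p1, t):
    """Positions of corresponding points in the in-between view at t in [0, 1].

    Valid as a view morph only after both images have been prewarped so that
    their image planes are parallel (Seitz & Dyer-style view morphing).
    """
    return (1.0 - t) * p0 + t * p1

for t in (0.0, 0.25, 0.5, 1.0):
    print(t, interpolate_views(pts_view0, pts_view1, t))
```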

TIN based Matching using Stereo Airphoto and Airborne LiDAR (입체항공사진과 항공 LiDAR를 이용한 TIN 기반 정합)

  • Kim, Hyung-Tae;Han, Dong-Yeob
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.26 no.4
    • /
    • pp.443-452
    • /
    • 2008
  • To derive 3D linear information expressing building shapes from aerial photos by fusing airphoto and LiDAR data, this research involved two processes. First, the LiDAR data were converted into 2D projected data based on the airphoto. For this, virtual points were added to resolve the visibility problem of the building boundary area, which carries poor information because of the attributes of the LiDAR data. Irregular triangular nets were constructed from the modified LiDAR data, and the triangles visible in the image were identified, so that the triangular-net information can be referenced for each image pixel. Second, 3D information was extracted from stereo image segments by combining the extracted visible-region information with the 2D irregular triangular nets. A TIN-based matching method was used for the segments from the stereo images. The TIN-based matching condition improves the edge matching accuracy by about 20% compared to the existing quadrilateral condition of epipolar geometry.
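
A small sketch of the TIN-construction step mentioned above: scipy's Delaunay triangulation builds an irregular triangular net over LiDAR points projected to 2D, after which the covering triangle and its vertices can be looked up for any image location. The random points are placeholders for projected LiDAR data.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
# Placeholder for LiDAR points projected into the image plane (x, y) with height z.
xy = rng.uniform(0, 100, size=(200, 2))
z = rng.uniform(20, 60, size=200)

tin = Delaunay(xy)                    # 2D irregular triangular net
print("triangles:", len(tin.simplices))

# Look up which triangle covers a given image location, then read its vertices.
query = np.array([[50.0, 50.0]])
tri_index = tin.find_simplex(query)[0]
if tri_index >= 0:
    vertex_ids = tin.simplices[tri_index]
    print("triangle vertices (x, y, z):", np.c_[xy[vertex_ids], z[vertex_ids]])
```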

An Effective Algorithm for Subdimensional Clustering of High Dimensional Data (고차원 데이터를 부분차원 클러스터링하는 효과적인 알고리즘)

  • Park, Jong-Soo;Kim, Do-Hyung
    • The KIPS Transactions:PartD
    • /
    • v.10D no.3
    • /
    • pp.417-426
    • /
    • 2003
  • The problem of finding clusters in high dimensional data is well known in the field of data mining for its importance, because cluster analysis has been widely used in numerous applications, including pattern recognition, data analysis, and market analysis. Recently, a new framework, projected clustering, was suggested to solve this problem: it first selects the subdimensions of each candidate cluster, and then assigns each input point to the nearest cluster according to a distance function based on the chosen subdimensions of the clusters. We propose a new algorithm for subdimensional clustering of high dimensional data whose three major steps, respectively, partition the input points into several candidate clusters with proper numbers of points, filter out the clusters that cannot be useful in the following steps, and merge the remaining clusters into the predefined number of clusters using a closeness function. The results of extensive experiments show that the proposed algorithm exhibits better performance than other existing clustering algorithms.
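
A compact sketch of the assignment rule used in projected clustering as described above: each candidate cluster keeps its own subset of dimensions, and points are assigned to the nearest cluster using a distance computed only over that cluster's chosen subdimensions. The centers, dimension subsets, and data are placeholders; the paper's partitioning, filtering, and merging steps are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))                 # high-dimensional input points

# Candidate clusters: a center plus the subdimensions it is defined on (placeholders).
centers = rng.normal(size=(4, 20))
subdims = [np.array([0, 3, 7]), np.array([1, 2]),
           np.array([5, 8, 9, 10]), np.array([4, 6])]

def assign(X, centers, subdims):
    """Assign each point to the cluster with the smallest distance over that
    cluster's own subdimensions (normalized by the number of dimensions used)."""
    dists = np.empty((X.shape[0], len(centers)))
    for k, (c, dims) in enumerate(zip(centers, subdims)):
        diff = X[:, dims] - c[dims]
        dists[:, k] = np.linalg.norm(diff, axis=1) / np.sqrt(len(dims))
    return dists.argmin(axis=1)

labels = assign(X, centers, subdims)
print(np.bincount(labels))
```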

3D Reconstruction and Self-calibration based on Binocular Stereo Vision (스테레오 영상을 이용한 자기보정 및 3차원 형상 구현)

  • Hou, Rongrong;Jeong, Kyung-Seok
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.9
    • /
    • pp.3856-3863
    • /
    • 2012
  • A 3D reconstruction technique from stereo images that requires minimal intervention from the user has been developed. The reconstruction problem consists of three steps, each estimating a specific stratum of geometry. The first step is estimating the epipolar geometry that exists between the stereo image pair, which includes feature matching in both images. The second is estimating the affine geometry, a process of finding a special plane in projective space by means of vanishing points. The third step, which includes camera self-calibration, is obtaining a metric geometry from which a 3D model of the scene can be obtained. The major advantage of this method is that the stereo images do not need to be calibrated in advance for reconstruction. The results of camera calibration and reconstruction have shown the possibility of obtaining a 3D model directly from features in the images.
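
A minimal sketch of the first step described above (epipolar geometry estimation from matched points), using OpenCV's RANSAC-based fundamental matrix fit on synthetic correspondences generated by projecting random 3D points into two cameras. The camera matrices are placeholders; in the paper the matches come from features detected in the real stereo pair.

```python
import numpy as np
import cv2

rng = np.random.default_rng(4)

def project(P, X):
    """Project homogeneous 3D points X (N, 4) with a 3x4 camera matrix P."""
    x = (P @ X.T).T
    return x[:, :2] / x[:, 2:3]

# Two placeholder cameras: identity pose and a small translation along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

# Random 3D points in front of both cameras, projected into each view.
X = np.hstack([rng.uniform(-1, 1, (60, 2)), rng.uniform(4, 8, (60, 1)), np.ones((60, 1))])
pts1 = project(P1, X).astype(np.float32)
pts2 = project(P2, X).astype(np.float32)

F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print("fundamental matrix:\n", F)
print("inliers:", int(inlier_mask.sum()))
```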

Real-time Hand Pose Recognition Using HLF (HLF(Haar-like Feature)를 이용한 실시간 손 포즈 인식)

  • Kim, Jang-Woon;Kim, Song-Gook;Hong, Seok-Ju;Jang, Han-Byul;Lee, Chil-Woo
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2007.02a
    • /
    • pp.897-902
    • /
    • 2007
  • The traditional interface between humans and computers has gradually become inconvenient in that it cannot provide the diverse interaction humans require, which has led to demand for new forms of interface. In line with this trend, this paper studies a new interface that recognizes human hand gestures through a camera. The hand has many degrees of freedom, and its shape changes drastically with the 3D view direction. There are therefore limits to recognizing hand gestures from contour or edge information in images projected into 2D, as contour-based methods do. Model-based methods, which use 3D information, are well suited to recognizing hand gestures, but their heavy computation makes real-time processing difficult. To resolve this, we build a large-scale database of hand shapes, analyze the relationships among features in a normalized space, and construct a training data model; comparing against this model allows hand poses to be distinguished in real time. For such statistical learning-based algorithms, optimal performance depends on diverse data and the detection of good features. Accordingly, to reduce background noise as much as possible, a hand candidate region is first detected using skin color information, and the hand region is then detected from that candidate region using HLF (Haar-like Features). Hand poses are recognized from the detected hand region through a pattern classification step, which uses HLF trained in advance for each pose. HLF was applied by Viola to face detection, where it showed good results, using the AdaBoost learning algorithm with HLF extracted from integral images. In this paper, skin color information is used to separate the hand from the background as much as possible, so that most of the background is rejected in the first stage of the AdaBoost-Haar classifier, further improving performance when applied to hand shape recognition.

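A rough sketch of the pipeline described in the entry above: a skin-color mask in YCrCb narrows the search to hand candidate regions, and a Haar-cascade classifier (Viola-Jones style) is then run on those regions. The cascade file name 'hand_pose_cascade.xml' is hypothetical (OpenCV does not ship a hand-pose cascade, so one would have to be trained per pose as in the paper), and the Cr/Cb thresholds are commonly used approximations, not the authors' values.

```python
import os
import numpy as np
import cv2

def skin_candidate_mask(bgr):
    """Rough skin mask in YCrCb; thresholds are typical values, not tuned ones."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Placeholder frame; in practice this would come from a camera.
frame = np.full((240, 320, 3), 200, dtype=np.uint8)
mask = skin_candidate_mask(frame)
candidates = cv2.bitwise_and(frame, frame, mask=mask)   # background mostly removed

# Hypothetical pre-trained cascade, one per hand pose (as in the paper's setup).
cascade_path = "hand_pose_cascade.xml"
if os.path.exists(cascade_path):
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(candidates, cv2.COLOR_BGR2GRAY)
    hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    print("detected hand regions:", hands)
else:
    print("cascade file not found; skipping detection step")
```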

Automatic Generation of 3D Building Models using a Draft Map (도화원도를 이용한 3차원 건물모델의 자동생성)

  • Kim, Seong-Joon;Min, Seong-Hong;Lee, Dong-Cheon;Park, Jin-Ho;Lee, Im-Pyeong
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.15 no.2 s.40
    • /
    • pp.3-14
    • /
    • 2007
  • This study proposes an automatic method to generate 3D building models using a draft map, an intermediate product created during the map production process based on aerial photos. The proposed method generates a terrain model, roof models, and wall models sequentially from the limited 3D information extracted from an existing draft map. Based on the planar fitting error of the roof corner points, a roof is modeled either as a single planar facet or as a multiple-plane structure. The first type is derived using a robust estimation method, while the second type is constructed through segmentation and merging based on a triangular irregular network. Each edge of the roof model is then projected onto the terrain model to create a wall facet. The experimental results from its application to real data indicate that building models of various shapes over wide areas are successfully generated. The proposed method is evaluated to be a cost- and time-effective method since it utilizes existing data.

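A small sketch of the planar-fit test mentioned above: a plane z = ax + by + c is fitted to roof corner points by least squares, and the RMS residual decides whether the roof is modeled as a single planar facet or handed to a multi-plane treatment. The corner points and the threshold are placeholders, and the paper's robust estimation and TIN-based segmentation/merging are not reproduced.

```python
import numpy as np

# Placeholder roof corner points (x, y, z) extracted from a draft map.
corners = np.array([
    [0.0, 0.0, 30.0],
    [10.0, 0.0, 30.1],
    [10.0, 8.0, 33.0],
    [0.0, 8.0, 32.9],
])

# Least-squares fit of z = a*x + b*y + c.
A = np.c_[corners[:, 0], corners[:, 1], np.ones(len(corners))]
coeffs, *_ = np.linalg.lstsq(A, corners[:, 2], rcond=None)
residuals = corners[:, 2] - A @ coeffs
rms = np.sqrt(np.mean(residuals ** 2))

THRESHOLD = 0.2   # meters; placeholder tolerance
if rms <= THRESHOLD:
    print(f"single planar roof facet (RMS {rms:.3f} m), coeffs {coeffs}")
else:
    print(f"RMS {rms:.3f} m too large: treat as multi-plane roof")
```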

Extraction of 3D Building Information using Shadow Analysis from Single High Resolution Satellite Images (단일 고해상도 위성영상으로부터 그림자를 이용한 3차원 건물정보 추출)

  • Lee, Tae-Yoon;Lim, Young-Jae;Kim, Tae-Jung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.14 no.2 s.36
    • /
    • pp.3-13
    • /
    • 2006
  • Extraction of man-made objects from high resolution satellite images has been studied by many researchers. In order to reconstruct accurate 3D building structures, most previous approaches assume 3D information obtained by stereo analysis, which requires sensor modeling and related processing. We argue that a single image itself contains many clues to 3D information. The algorithm we propose projects a virtual shadow onto the image. When this shadow matches the actual shadow, the height of the building can be determined. Once the height of a building is determined, the algorithm draws the vertical lines of the building's sides onto the building in the image. The roof boundary is then moved along these vertical lines, and the footprint of the building is extracted. The proposed algorithm can use shadows cast onto the ground surface and onto the facades of neighboring buildings. This study compared the building heights determined by the proposed algorithm with those calculated by stereo analysis. As a result of the verification, the root mean square error of the building heights was about 1.5 m.

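As a simplified illustration of the idea in the entry above (not the authors' virtual-shadow projection onto the full image, which also handles shadows falling on facades), the sketch below searches for the building height whose predicted ground-shadow length best matches the measured one, given the sun elevation angle. On flat ground the shadow length of a building of height H is H / tan(elevation); the measured length and sun elevation are placeholder values.

```python
import numpy as np

sun_elevation_deg = 35.0          # from the satellite image metadata (placeholder)
measured_shadow_m = 42.0          # shadow length measured on the image, in meters

def shadow_length(height_m, elevation_deg):
    """Ground-shadow length cast by a vertical edge of the given height (flat terrain)."""
    return height_m / np.tan(np.radians(elevation_deg))

# Search candidate heights and keep the one whose virtual shadow matches best.
candidates = np.arange(1.0, 120.0, 0.5)
errors = np.abs(shadow_length(candidates, sun_elevation_deg) - measured_shadow_m)
best_height = candidates[errors.argmin()]
print(f"estimated building height: {best_height:.1f} m")
```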