• Title/Summary/Keyword: 3D urban feature

Development of Mobile 3D Urban Landscape Authoring and Rendering System

  • Lee Ki-Won;Kim Seung-Yub
    • Korean Journal of Remote Sensing, v.22 no.3, pp.221-228, 2006
  • In this study, an integrated 3D modeling and rendering system for 3D urban landscape features such as terrain, buildings, roads, and user-defined geometric objects was designed and implemented using the OpenGL ES (OpenGL for Embedded Systems) API for PDA-class mobile devices. The authoring functions cover several aspects of urban landscape features: vertex-based geometry modeling, editing and manipulation of 3D landscape objects, generation of geometrically complex feature types with attributes, and texture mapping of complex types using an image library. The system is feature-based and is linked with the attributes of 3D geo-based spatial features. For the rendering process, functions are provided for optimizing multiple integrated 3D landscape objects and for rendering texture-mapped 3D landscape objects. Through an actively synchronized process among the desktop system, an OpenGL-based 3D visualization system, and the mobile system, 3D feature models can be transferred and disseminated across these systems. The main graphical user interface and core components of this mobile 3D urban processing system are implemented with EVC 4.0 MFC and tested on PDAs running Windows Mobile and Pocket PC. It is expected that mobile 3D geo-spatial information systems supporting registration, modeling, and rendering functions can be effectively utilized for real-time 3D urban planning and 3D mobile mapping on site.
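
The authoring step above centers on vertex-based geometry modeling of urban features such as buildings. As a rough illustration of that idea only, and not code from the paper, the sketch below extrudes a hypothetical 2D building footprint into a 3D prism mesh (vertices plus wall faces) of the kind a feature-based renderer would consume; the footprint coordinates and height are made up.

```python
import numpy as np

def extrude_footprint(footprint_xy, height):
    """Extrude a closed 2D footprint (N x 2, counter-clockwise) into a 3D
    prism: returns (vertices, wall quads as vertex-index tuples)."""
    n = len(footprint_xy)
    base = np.hstack([footprint_xy, np.zeros((n, 1))])          # ground ring, z = 0
    top = np.hstack([footprint_xy, np.full((n, 1), height)])    # roof ring, z = height
    vertices = np.vstack([base, top])                           # 2n x 3
    # One quad per wall: base_i, base_{i+1}, top_{i+1}, top_i
    walls = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, walls

# Hypothetical rectangular footprint (metres) and building height.
footprint = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 6.0], [0.0, 6.0]])
verts, quads = extrude_footprint(footprint, height=15.0)
print(verts.shape, len(quads))   # (8, 3) and 4 wall quads
```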

A Prototype Implementation for 3D Animated Anaglyph Rendering of Multi-typed Urban Features using Standard OpenGL API

  • Lee, Ki-Won
    • Korean Journal of Remote Sensing, v.23 no.5, pp.401-408, 2007
  • Animated anaglyph is the most cost-effective method for 3D stereo visualization of virtual or actual 3D geo-based data models. Unlike 3D anaglyph scene generation from paired epipolar images, the main data set of this study is a multi-typed 3D feature model containing 3D shaped objects, a DEM, and satellite imagery. For this purpose, a prototype implementation of 3D animated anaglyph rendering using the OpenGL API is carried out, and virtual 3D feature modeling is performed to demonstrate the applicability of the anaglyph approach. Although the 3D features at this stage are not real objects, they can be substituted with actual 3D feature models carrying full texture images on all facades. Currently this is regarded as a special viewing effect within 3D GIS application domains, because stereo 3D viewing is only one among many GIS functions and remote sensing image processing modules. The animated anaglyph process can be linked with real-time manipulation of 3D feature models and their database attributes in real-world problems. Furthermore, this feature-based 3D animated anaglyph scheme is a bridging technology toward image-based 3D animated anaglyph rendering systems, portable mobile 3D stereo viewing systems, and glasses-free auto-stereo viewing systems for multiple viewers.
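
The anaglyph principle relied on above is simple: the left-eye view contributes the red channel and the right-eye view the green and blue channels, and red-cyan glasses restore the stereo impression. The numpy sketch below shows only this channel composition for two already-rendered views; it is a generic illustration, not the paper's OpenGL implementation, and the two input images are random placeholders.

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Compose a red-cyan anaglyph from two RGB uint8 images of equal size:
    red channel from the left eye, green and blue from the right eye."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]   # replace red with the left-eye red
    return anaglyph

# Hypothetical stand-ins for two rendered eye views (H x W x 3, uint8).
left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
frame = red_cyan_anaglyph(left, right)    # one animation frame
```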

Localization of a Monocular Camera using a Feature-based Probabilistic Map (특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법)

  • Kim, Hyungjin;Lee, Donghwa;Oh, Taekjun;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems, v.21 no.4, pp.367-371, 2015
  • In this paper, a novel localization method for a monocular camera using a feature-based probabilistic map is proposed. The pose of a camera is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, the camera pose is estimated by probabilistic approaches even when features are scarce, but an extra sensor system is required because a camera alone cannot estimate the full state of the robot pose. We therefore propose an accurate localization method for a monocular camera that uses a probabilistic approach when the image dataset is insufficient, without any extra system. In our system, features from a probabilistic map are projected into the image plane using a linear approximation. By minimizing the Mahalanobis distance between the features projected from the probabilistic map and the features extracted from a query image, an accurate pose of the monocular camera is estimated, starting from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in 3D space.
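
As a generic illustration of the two-stage idea described above (PnP for an initial pose, then Mahalanobis-distance minimization against per-feature covariances), the sketch below uses OpenCV's solvePnP and a simple Nelder-Mead refinement. All data (map points, covariances, intrinsics) are synthetic placeholders, and the optimizer choice is an assumption, not the paper's.

```python
import numpy as np
import cv2
from scipy.optimize import minimize

def mahalanobis_cost(pose, map_pts, map_covs, img_pts, K):
    """Sum of squared Mahalanobis distances between map features projected
    with the candidate pose and the features extracted from the image."""
    rvec, tvec = pose[:3].reshape(3, 1), pose[3:].reshape(3, 1)
    proj, _ = cv2.projectPoints(map_pts, rvec, tvec, K, None)
    residuals = proj.reshape(-1, 2) - img_pts
    cost = 0.0
    for r, cov in zip(residuals, map_covs):
        cost += r @ np.linalg.solve(cov, r)        # r^T * cov^{-1} * r
    return cost

# Hypothetical data: 3D map features, their 2x2 image-space covariances,
# matched 2D detections, and a pinhole intrinsic matrix.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
map_pts = np.random.uniform(-5, 5, (10, 3))
map_pts[:, 2] += 10.0                              # keep points in front of the camera
true_r, true_t = np.zeros((3, 1)), np.array([[0.1], [0.0], [0.5]])
img_pts, _ = cv2.projectPoints(map_pts, true_r, true_t, K, None)
img_pts = img_pts.reshape(-1, 2)
map_covs = [np.eye(2) * 2.0 for _ in range(len(map_pts))]

# 1) Initial pose from the PnP algorithm.
ok, rvec0, tvec0 = cv2.solvePnP(map_pts, img_pts, K, None)
# 2) Refine by minimizing the Mahalanobis distance.
x0 = np.hstack([rvec0.ravel(), tvec0.ravel()])
result = minimize(mahalanobis_cost, x0, args=(map_pts, map_covs, img_pts, K),
                  method="Nelder-Mead")
print("refined pose (rvec | tvec):", result.x)
```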

Accurate Parked Vehicle Detection using GMM-based 3D Vehicle Model in Complex Urban Environments (가우시안 혼합모델 기반 3차원 차량 모델을 이용한 복잡한 도시환경에서의 정확한 주차 차량 검출 방법)

  • Cho, Younggun;Roh, Hyun Chul;Chung, Myung Jin
    • The Journal of Korea Robotics Society, v.10 no.1, pp.33-41, 2015
  • Recent developments in robotics and intelligent vehicles have raised interest in autonomous driving and advanced driver assistance systems. Fully automatic parking in particular is one of the key capabilities of intelligent vehicles, and accurate detection of parked vehicles is essential for it. In previous research, many types of sensors have been used for detecting vehicles; 2D LiDAR is popular because it offers accurate range information without preprocessing. The L-shape feature is the most popular 2D feature for vehicle detection, but it is ambiguous for other objects such as buildings and bushes, which causes misdetections. We therefore propose an accurate vehicle detection method that uses a complete 3D vehicle model in 3D point clouds acquired from a forward-inclined 2D LiDAR. The proposed method is decomposed into two steps: vehicle candidate extraction and vehicle detection. By combining the L-shape feature with point cloud segmentation, we extract the objects that are most likely to be vehicles and then apply the 3D model to detect vehicles accurately. The method achieves high detection performance and provides rich information for autonomous parking. To evaluate the method, we use data covering various parking situations in complex urban scenes. Experimental results show the qualitative and quantitative performance of the method.
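
A minimal sketch of the vehicle-candidate-extraction step, assuming a simple clustering-plus-size-gate stand-in for the paper's combination of L-shape features and segmentation: points are clustered with DBSCAN and clusters whose PCA-aligned bounding box has car-like dimensions are kept. The thresholds and the synthetic scan are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_vehicle_candidates(points_xy, eps=0.5, min_samples=10,
                               length_range=(3.0, 6.0), width_range=(1.4, 2.5)):
    """Cluster 2D LiDAR points and keep clusters whose PCA-aligned bounding
    box has roughly car-like length and width (metres)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
    candidates = []
    for lbl in set(labels) - {-1}:                     # -1 marks noise points
        cluster = points_xy[labels == lbl]
        centred = cluster - cluster.mean(axis=0)
        # Principal axes give an oriented bounding box of the cluster.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        extents = centred @ vt.T
        size = extents.max(axis=0) - extents.min(axis=0)
        length, width = max(size), min(size)
        if length_range[0] <= length <= length_range[1] and \
           width_range[0] <= width <= width_range[1]:
            candidates.append(cluster)
    return candidates

# Hypothetical scan: a car-sized block of points plus scattered clutter.
car = np.random.uniform([0, 0], [4.5, 1.8], (300, 2))
clutter = np.random.uniform([-20, -20], [20, 20], (200, 2))
print(len(extract_vehicle_candidates(np.vstack([car, clutter]))))
```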

Spherical Signature Description of 3D Point Cloud and Environmental Feature Learning based on Deep Belief Nets for Urban Structure Classification (도시 구조물 분류를 위한 3차원 점 군의 구형 특징 표현과 심층 신뢰 신경망 기반의 환경 형상 학습)

  • Lee, Sejin;Kim, Donghyun
    • The Journal of Korea Robotics Society, v.11 no.3, pp.115-126, 2016
  • This paper proposes a spherical signature description of 3D point clouds acquired from a laser range scanner mounted on a ground vehicle. Based on the spherical signature description of each point, an extractor of significant environmental features is learned with Deep Belief Nets for urban structure classification. Any point in the 3D point cloud can represent its signature over the surrounding spherical surface using several neighboring points: the unit sphere centered on that point accumulates evidence in each cell of an angular tessellation. Depending on the type of region a point belongs to, such as wall, ground, tree, or car, the resulting spherical signature descriptions look quite different from one another. These data are fed to a Deep Belief Net, a type of deep neural network, to learn the environmental feature extractor. With the learned feature extractor, 3D points can be classified well according to their urban structure. Experimental results show that the proposed method, based on the spherical signature description and Deep Belief Nets, is suitable for mobile robots in terms of classification accuracy.
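
One plausible reading of such a spherical signature is a histogram of neighbour directions over an azimuth-elevation tessellation of the unit sphere centred at the query point. The sketch below implements that reading in numpy; the bin counts and the synthetic wall-like neighbourhood are assumptions, not the paper's exact parameterization.

```python
import numpy as np

def spherical_signature(center, neighbors, n_azimuth=16, n_elevation=8):
    """Histogram of neighbour directions around `center`, binned over an
    azimuth x elevation tessellation of the unit sphere."""
    d = neighbors - center
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    azimuth = np.arctan2(d[:, 1], d[:, 0])            # range [-pi, pi]
    elevation = np.arcsin(np.clip(d[:, 2], -1, 1))    # range [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azimuth, elevation,
        bins=[n_azimuth, n_elevation],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist / max(len(neighbors), 1)              # normalised signature

# Hypothetical neighbourhood: points spread above a query point, as on a wall.
center = np.zeros(3)
neighbors = np.random.uniform([-1, -0.1, 0], [1, 0.1, 2], (200, 3))
signature = spherical_signature(center, neighbors)
print(signature.shape)   # (16, 8) feature map fed to the classifier
```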

3D Line Segment Detection using a New Hybrid Stereo Matching Technique (새로운 하이브리드 스테레오 정합기법에 의한 3차원 선소추출)

  • 이동훈;우동민;정영기
    • The Transactions of the Korean Institute of Electrical Engineers D, v.53 no.4, pp.277-285, 2004
  • We present a new hybrid stereo matching technique based on the cooperation of area-based and feature-based stereo. The core of the technique is that feature matching is guided by the disparity estimated by area-based stereo. Since referencing this disparity significantly reduces the number of feature matching combinations, feature matching errors can be drastically reduced. One requirement is that the referenced disparity must be reliable enough to be used in feature matching; to measure its reliability, we employ the self-consistency of the disparity. The proposed technique is applied to the detection of 3D line segments through 2D line matching with our hybrid stereo matching, which can be efficiently utilized in the generation of rooftop models from urban imagery. We carry out experiments on the hybrid stereo matching scheme, generating synthetic images by photo-realistic simulation of the Avenches dataset of Ascona aerial images. Experimental results indicate that the extracted 3D line segments have an average error of 0.5 m, verifying the proposed scheme. To apply the method to 3D model generation from urban imagery, we carry out preliminary experiments on rooftop generation. Since occlusions occur around building outlines, we experimentally propose a multi-image hybrid stereo system based on the fusion of 3D line segments. Using a simple domain-specific 3D grouping scheme, we find that an accurate 3D rooftop model can be generated. In this context, we expect that an extended 3D grouping scheme using our hybrid technique can be efficiently applied to the construction of 3D models with more general types of building rooftops.
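
The core hybrid idea, dense area-based disparity used to narrow the feature-matching search, can be sketched with OpenCV's SGBM matcher: the dense disparity at a left-image feature predicts its right-image column, and only a small band around that prediction is searched. The matcher settings, patch comparison, and synthetic image pair below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import cv2

def disparity_guided_match(left, right, left_pt, search_margin=3, patch=7):
    """Match a left-image feature point into the right image, restricting the
    search to a small band around the disparity predicted by dense
    (area-based) stereo - the core idea of the hybrid scheme."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disp = sgbm.compute(left, right).astype(np.float32) / 16.0   # fixed-point to pixels
    x, y = left_pt
    d0 = disp[y, x]
    if d0 <= 0:                        # unreliable dense disparity: give up here
        return None
    half = patch // 2
    template = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
    best, best_score = None, np.inf
    for d in range(int(d0) - search_margin, int(d0) + search_margin + 1):
        xr = x - d                     # right-image column predicted by disparity d
        if xr - half < 0:
            continue
        window = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(np.float32)
        if window.shape != template.shape:
            continue
        score = np.sum((template - window) ** 2)   # sum of squared differences
        if score < best_score:
            best, best_score = (xr, y), score
    return best

# Hypothetical rectified grayscale pair; the feature point is also made up.
left = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
right = np.roll(left, -8, axis=1)                  # crude 8-pixel horizontal shift
print(disparity_guided_match(left, right, left_pt=(160, 120)))
```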

DEVELOPMENT OF AUGMENTED 3D STEREO URBAN CITY MODELLING SYSTEM BASED ON ANAGLYPH APPROACH

  • Kim, Hak-Hoon;Kim, Seung-Yub;Lee, Ki-Won
    • Proceedings of the KSRS Conference, v.1, pp.98-101, 2006
  • In general, stereo images are widely used in remote sensing and photogrammetric applications for image understanding and feature extraction or recognition. However, most of these stereo-based applications deal with 2D satellite images or airborne photos, so their main targets are the generation of small- or large-scale DEMs (Digital Elevation Models) or DSMs (Digital Surface Models) in 2.5D. In contrast to these previous approaches, the scope of this study is to investigate 3D stereo processing and visualization of true geo-referenced 3D features based on the anaglyph technique, and the aim is the development of a prototype stereo visualization system for complex-typed 3D GIS features. As complex-typed 3D features, various kinds of urban landscape components are taken into account with their geometric characteristics and attributes. The main functions of this prototype comprise 3D feature authoring and modeling along with a database schema, stereo matching, and volumetric visualization. Using these functions, several technical aspects of migration into actual 3D GIS applications are presented with experimental results. It is concluded that this result will contribute to more specialized and realistic applications by linking 3D graphics with geo-spatial information.
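
One standard ingredient of such a stereo scene generator is producing the left/right virtual cameras by offsetting a single camera along its right vector by half the eye separation, then rendering each view for the anaglyph composite. A minimal numpy sketch of that eye-offset step follows; the camera pose and separation are hypothetical, and this is not claimed to be the prototype's actual code.

```python
import numpy as np

def stereo_eye_positions(eye, target, up, eye_separation=0.065):
    """Offset a single virtual camera into a left/right pair along the
    camera's right vector, for rendering the two views of a stereo scene."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    half = 0.5 * eye_separation * right
    return eye - half, eye + half      # left eye, right eye

# Hypothetical fly-through camera over an urban scene (metres).
eye = np.array([100.0, 50.0, 30.0])
target = np.array([120.0, 80.0, 0.0])
up = np.array([0.0, 0.0, 1.0])
left_eye, right_eye = stereo_eye_positions(eye, target, up)
print(left_eye, right_eye)
```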

DESIGN AND IMPLEMENTATION OF FEATURE-BASED 3D GEO-SPATIAL RENDERING SYSTEM USING OPENGL API

  • Kim Seung-Yeb;Lee Kiwon
    • Proceedings of the KSRS Conference, 2005.10a, pp.321-324, 2005
  • Nowadays, the management and visualization of 3D geo-spatial information is regarded as an important issue in the GIS and remote sensing fields. 3D GIS is concerned with database issues such as handling and managing 3D geometry/topology attributes, whereas 3D visualization is basically concerned with 3D computer graphics. This study focuses on the design and implementation of an OpenGL API-based rendering system for complex types of 3D geo-spatial features. In this approach, 3D features can be processed separately with functions for authoring and manipulating terrain segments, building segments, road segments, and other geo-based objects, with texture mapping. Using this implementation, an integrated scene composed of these complex types of 3D features can be generated. This integrated rendering system, based on the feature-based 3D-GIS model, can be extended and effectively applied to urban environment analysis, 3D virtual simulation, and fly-by navigation in urban planning. Furthermore, we expect that 3D-GIS visualization applications based on the OpenGL API can easily be extended to a real-time mobile 3D-GIS system following the release of OpenGL ES (OpenGL for Embedded Systems), though this topic is beyond the scope of this implementation.
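
As a small illustration of the terrain-segment side of such a renderer (not the paper's implementation), the sketch below turns a hypothetical DEM height grid into the vertex and triangle arrays that an OpenGL-style pipeline would draw; the grid values and cell size are made up.

```python
import numpy as np

def dem_to_mesh(heights, cell_size=1.0):
    """Convert a DEM height grid (rows x cols) into a triangle mesh:
    one vertex per grid node and two triangles per grid cell."""
    rows, cols = heights.shape
    xs, ys = np.meshgrid(np.arange(cols) * cell_size, np.arange(rows) * cell_size)
    vertices = np.column_stack([xs.ravel(), ys.ravel(), heights.ravel()])
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            triangles.append((i, i + 1, i + cols))             # upper-left triangle
            triangles.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return vertices, np.array(triangles)

# Hypothetical 4 x 4 DEM patch (heights in metres, 10 m grid spacing).
dem = np.random.uniform(0.0, 5.0, (4, 4))
verts, tris = dem_to_mesh(dem, cell_size=10.0)
print(verts.shape, tris.shape)   # (16, 3) vertices, (18, 3) triangles
```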

3D image mosaicking technique using multiple planes for urban visualization (복수 투영면을 사용한 도심지 가시화용 3 차원 모자이크 기술)

  • CHON Jaechoon;KIM Hyongsuk
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.3 s.303, pp.41-50, 2005
  • A novel image mosaicking technique suitable for 3D urban visualization is proposed. It is not effective to apply 2D image mosaicking techniques to urban visualization when, for example, a sequence of images is filmed from a side-looking video camera moving along a road in an urban area. The proposed method represents the roadside scene captured by a side-looking video camera as a continuous set of textured planar faces, termed 'multiple planes' in this paper. The exterior orientation parameters of each frame are first calculated from automatically selected matching feature points. The matching feature points are also used to estimate a plane approximation of the scene geometry for each frame. These planes are concatenated to create an approximate model onto which the images are back-projected as textures. We demonstrate an algorithm that creates efficient image mosaics in 3D space from a sequence of real images.
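
The per-frame plane approximation mentioned above can be illustrated by a least-squares plane fit to the matched 3D feature points, e.g. via SVD of the centred points. The sketch below shows only this generic fitting step on a synthetic facade-like point set; it is not the paper's estimation pipeline.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3D points: returns a point on the
    plane (the centroid) and the unit normal (direction of least variance)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                    # smallest singular vector
    return centroid, normal

# Hypothetical triangulated feature points along a building facade, with noise.
facade = np.random.uniform([0, 0, 0], [10, 0, 8], (100, 3))
facade[:, 1] += np.random.normal(0, 0.05, 100)     # slight depth noise
centroid, normal = fit_plane(facade)
print(normal)    # close to the facade normal, roughly [0, 1, 0] up to sign
```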

Semantic Segmentation of Urban Scenes Using Location Prior Information (사전위치정보를 이용한 도심 영상의 의미론적 분할)

  • Wang, Jeonghyeon;Kim, Jinwhan
    • The Journal of Korea Robotics Society, v.12 no.3, pp.249-257, 2017
  • This paper proposes a method for semantically segmenting urban scenes based on location prior information. Since major scene elements in urban environments such as roads, buildings, and vehicles are often located at specific positions, using the location prior information of these elements can improve segmentation performance. The location priors are defined in special 2D coordinates, referred to as road-normal coordinates, which are perpendicular to the orientation of the road. With the help of depth information for each element, all candidate pixels in the image are projected into these coordinates and the learned prior information is applied to those pixels. The proposed location prior can be modeled by defining the unary potential of a conditional random field (CRF) as a sum of two sub-potentials: an appearance feature-based potential and a location potential. The proposed method was validated using the publicly available KITTI dataset, which provides urban images and corresponding 3D depth measurements.
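
A minimal sketch of the unary-potential construction described above, assuming both terms are negative log-probabilities combined with a hypothetical weight: the appearance term comes from a per-pixel classifier and the location term from the prior projected back onto the pixels. The class set, weight, and random inputs are placeholders, not values from the paper.

```python
import numpy as np

def unary_potential(appearance_prob, location_prior, weight=1.0, eps=1e-9):
    """Per-pixel unary energy for each class as a weighted sum of the
    appearance term and the location-prior term (negative log-probabilities)."""
    appearance_term = -np.log(appearance_prob + eps)
    location_term = -np.log(location_prior + eps)
    return appearance_term + weight * location_term

# Hypothetical per-pixel class probabilities for 3 classes (road, building,
# vehicle) over a 2 x 2 patch, plus a location prior already projected from
# road-normal coordinates back onto the same pixels.
appearance = np.random.dirichlet(np.ones(3), size=(2, 2))
location = np.random.dirichlet(np.ones(3), size=(2, 2))
energy = unary_potential(appearance, location, weight=0.5)
labels = energy.argmin(axis=-1)    # per-pixel class with the lowest unary energy
print(labels)
```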