• Title/Summary/Keyword: planar region extraction

Search results: 9

Planar Region Extraction for Visual Navigation using Stereo Cameras

  • Lee, Se-Na;You, Bum-Jae;Ko, Sung-Jea
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.681-686
    • /
    • 2003
  • In this paper, we propose an algorithm to extract valid planar regions from stereo images for the visual navigation of mobile robots. The algorithm is based on the difference image between the stereo images, obtained by applying the homography matrix between the stereo cameras. Invalid planar regions are filtered out by labeling the difference image and discarding blobs that are too small. In addition, invalid large planar regions such as walls are removed by applying a weighted low-pass filter to the difference image using past difference images. The algorithm was verified experimentally using a stereo camera system mounted on a mobile robot and a PC-based real-time vision system.
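
A minimal sketch (not the authors' code) of the homography-difference idea in this abstract, assuming OpenCV, rectified grayscale stereo frames, and a precomputed plane-induced homography H; the threshold, blob-size limit, and temporal weight are illustrative:

```python
import cv2
import numpy as np

def planar_region_mask(left_gray, right_gray, H, diff_thresh=20.0,
                       min_blob_area=200, prev_diff=None, alpha=0.7):
    """Ground-plane pixels align after warping with the plane homography,
    so small stereo differences indicate candidate planar regions."""
    h, w = left_gray.shape
    # Warp the right image into the left view using the plane-induced homography.
    warped = cv2.warpPerspective(right_gray, H, (w, h))

    # Absolute difference between the left image and the warped right image.
    diff = cv2.absdiff(left_gray, warped).astype(np.float32)

    # Weighted low-pass filtering with the previous difference image,
    # roughly in the spirit of suppressing large off-ground planes (walls).
    if prev_diff is not None:
        diff = alpha * diff + (1.0 - alpha) * prev_diff

    # Candidate planar pixels have small difference values.
    mask = (diff < diff_thresh).astype(np.uint8) * 255

    # Label the mask and keep only blobs large enough to be valid regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    valid = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_blob_area:
            valid[labels == i] = 255
    return valid, diff
```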


Extraction of Geometric Components of Buildings with Gradients-driven Properties

  • Seo, Su-Young;Kim, Byung-Guk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.1
    • /
    • pp.723-733
    • /
    • 2009
  • This study proposes a sequence of procedures to extract building boundaries and planar patches through segmentation of rasterized lidar data. Although previous approaches to building extraction have shown satisfactory results, there is still a need to increase the degree of automation. The methodologies proposed in this study are as follows. Firstly, lidar data are rasterized into grid form in order to exploit rapid access to neighboring elevations and image operations. Secondly, the propagation of errors in the raw data is taken into account in assessing the quality of gradients-driven properties and in choosing suitable parameters. Thirdly, extraction of planar patches is conducted through a sequence of processes: histogram analysis, least-squares fitting, and region merging. Experimental results show that the geometric components of building models could be extracted by the proposed approach in a streamlined way.
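
As an illustration of the least-squares fitting step mentioned above, the following sketch fits a plane to a window of a rasterized elevation grid and returns its gradient and residual; the window size, cell size, and function names are assumptions, not the paper's implementation:

```python
import numpy as np

def fit_plane_lstsq(zgrid, cell_size=1.0):
    """Fit z = a*x + b*y + c to a raster window by least squares and
    return the gradient (a, b), the offset c, and the RMS residual."""
    rows, cols = zgrid.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    A = np.column_stack([xs.ravel() * cell_size,
                         ys.ravel() * cell_size,
                         np.ones(zgrid.size)])
    coef, *_ = np.linalg.lstsq(A, zgrid.ravel(), rcond=None)
    a, b, c = coef
    rms = np.sqrt(np.mean((A @ coef - zgrid.ravel()) ** 2))
    return (a, b), c, rms

# Example: a synthetic tilted roof patch is recovered exactly (rms ~ 0).
patch = 0.5 * np.arange(5)[None, :] + 0.2 * np.arange(5)[:, None] + 10.0
print(fit_plane_lstsq(patch))
```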

A Region Based Approach to Surface Segmentation using LIDAR Data and Images

  • Moon, Ji-Young;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.6_1
    • /
    • pp.575-583
    • /
    • 2007
  • Surface segmentation aims to represent the terrain as a set of bounded and analytically defined surface patches. Many previous segmentation methods have been developed to extract planar patches from LIDAR data for building extraction. However, most of them are not fully satisfactory for more general applications in terms of the degree of automation and the quality of the segmentation results, mainly because of the limited information that can be derived from LIDAR data alone. The purpose of this study is thus to develop an automatic method that performs surface segmentation by combining LIDAR data with images. A region-based method is proposed to generate a set of planar patches by grouping LIDAR points. The grouping criteria are based on both the coordinates of the points and the corresponding intensity values computed from the images. The method has been applied to urban data, and the segmentation results are compared with reference data acquired by manual segmentation: 76% of the test area is correctly segmented. Under-segmentation is rarely found, but over-segmentation still exists. If the over-segmentation is mitigated by merging adjacent patches with similar properties as a post-process, the proposed segmentation method can be effectively utilized as a reliable intermediate step toward automatic extraction of 3D models of the real world.
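
A toy sketch of the kind of grouping criterion described above, combining a point-to-plane distance test with an intensity-similarity test; the plane representation, tolerances, and names are assumptions:

```python
import numpy as np

def compatible(patch_plane, patch_intensity, point, intensity,
               dist_tol=0.15, int_tol=20.0):
    """Grouping criterion sketch: a LIDAR point joins a growing planar patch
    if it lies near the patch's fitted plane AND its image-derived intensity
    is similar to the patch's intensity."""
    n, d = patch_plane                      # plane as (unit normal n, offset d): n.x + d = 0
    plane_dist = abs(np.dot(n, point) + d)  # orthogonal distance to the plane
    return plane_dist < dist_tol and abs(intensity - patch_intensity) < int_tol

# Example: a point 5 cm above a horizontal patch with similar intensity is accepted.
plane = (np.array([0.0, 0.0, 1.0]), -10.0)  # the plane z = 10
print(compatible(plane, 120.0, np.array([3.2, 7.8, 10.05]), 128.0))  # True
```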

A method of extracting edge line from range image using recognition features (거리 영상에서 인식 특징을 이용한 경계선 검출 기법)

  • Lee, Kang-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.6 no.2
    • /
    • pp.14-19
    • /
    • 2001
  • This paper presents a new method of 3-D surface feature extraction using a quadratic polynomial expression. From a range image, an edge map is obtained through a modified scan-line technique. Using this edge map, we label the 3-dimensional object to divide it into regions and extract center and corner points from each region. We then determine whether a segmented region is a planar surface or a curved surface from the quadric surface equation, and calculate the coefficients of the planar or curved surface to represent the region. In this article, we demonstrate the performance of the method on synthetic and real (Odetics) range images.
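
The planar-versus-curved decision from a quadric fit could look roughly like the following sketch; the basis functions, tolerance, and names are illustrative rather than taken from the paper:

```python
import numpy as np

def classify_quadric(points, curv_tol=1e-3):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to a segmented region
    and call it planar when the second-order coefficients are negligible."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    planar = np.all(np.abs(coef[:3]) < curv_tol)
    return ("planar" if planar else "curved"), coef

# Example: points sampled from the plane z = 2x + 3y + 1 are classified as planar.
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
pts = np.column_stack([gx.ravel(), gy.ravel(),
                       2 * gx.ravel() + 3 * gy.ravel() + 1])
print(classify_quadric(pts)[0])  # "planar"
```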

Image segmentation and line segment extraction for 3-d building reconstruction

  • Ye, Chul-Soo;Kim, Kyoung-Ok;Lee, Jong-Hun;Lee, Kwae-Hi
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.59-64
    • /
    • 2002
  • This paper presents a method for line segment extraction for 3-d building reconstruction. Building roofs are described as a set of planar polygonal patches, each of which is extracted by watershed-based image segmentation, line segment matching and coplanar grouping. Coplanar grouping and polygonal patch formation are performed per region by selecting 3-d line segments that are matched using epipolar geometry and flight information. The algorithm has been applied to high resolution aerial images and the results show accurate 3-d building reconstruction.
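
A small sketch of a coplanarity test of the kind used for grouping matched 3-D line segments, fitting a plane to the segment endpoints with an SVD; the tolerance and data layout are assumptions:

```python
import numpy as np

def coplanar(segments, tol=0.2):
    """Coplanar-grouping test sketch: fit a plane to all segment endpoints
    and accept the group if every endpoint lies within tol of that plane."""
    pts = np.asarray(segments, dtype=float).reshape(-1, 3)  # (segments, 2 endpoints, xyz)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                                          # direction of least variance
    dist = np.abs((pts - centroid) @ normal)
    return bool(np.all(dist < tol))

# Example: two roof-edge segments lying (nearly) in the plane z = 5 are coplanar.
segs = [[(0, 0, 5), (4, 0, 5)], [(0, 3, 5), (4, 3, 5.05)]]
print(coplanar(segs))  # True
```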


Development of a Lane Detect Algorithm from Road-Facing Cameras on a Vehicle (차량에 부착된 측하방 CCD카메라를 이용한 차선추출 알고리즘 개발)

  • Rhee, Soo-Ahm;Lee, Tae-Yoon;Kim, Tae-Jung;Sung, Jung-Gon
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.13 no.3 s.33
    • /
    • pp.87-94
    • /
    • 2005
  • 3D positional information of a lane can be calculated automatically by combining GPS and IMU data if the coordinates of the lane centers are given. The Road Safety Survey and Analysis Vehicle (RoSSAV) is currently under development to analyze the three-dimensional safety and stability of roads. RoSSAV has GPS and IMU sensors to obtain the positional information of the vehicle and two road-facing CCD cameras for the extraction of lane coordinates. In this paper, we develop technology that automatically detects the centers of lanes from the road-facing cameras of RoSSAV. The proposed algorithm defines line-support regions by grouping pixels with similar edge orientation and magnitude, and extracts a line from each line-support region by planar fitting. Then, if the extracted lines and the region in between satisfy brightness and width criteria, the region is decided to be a lane. The proposed algorithm was more precise and stable than a previously proposed algorithm based on a brightness threshold. Experiments with real road scenes confirmed that lanes were extracted effectively by the proposed algorithm.
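
The line-support-region idea can be sketched as below, keeping pixels whose gradient magnitude is strong and whose gradient orientation is close to a target direction; OpenCV is assumed and the thresholds are illustrative. Connected components of such a mask would give candidate line-support regions, with a line then fitted per region:

```python
import cv2
import numpy as np

def line_support_mask(gray, target_deg, mag_thresh=40.0, ang_tol_deg=15.0):
    """Keep pixels with a strong gradient whose orientation is close
    (modulo 180 degrees) to a target direction."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx))           # gradient orientation, -180..180

    # Angular distance to the target direction, wrapped so that opposite
    # gradient directions (both sides of a bright lane marking) are grouped.
    d = np.abs(((ang - target_deg) + 90.0) % 180.0 - 90.0)
    mask = ((mag > mag_thresh) & (d < ang_tol_deg)).astype(np.uint8) * 255
    return mask
```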


A Basic Study on the Extraction of Dangerous Region for Safe Landing of self-Driving UAMs (자율주행 UAM의 안전착륙을 위한 위험영역 추출에 관한 기초 연구)

  • Chang min Park
    • Journal of Platform Technology
    • /
    • v.11 no.3
    • /
    • pp.24-31
    • /
    • 2023
  • Recently, interest in Urban Air Mobility (UAM), which can take off and land vertically in the operation of urban air transportation systems, has been increasing, and various start-up companies are developing related technologies as eco-friendly future transportation. However, studies on ways to increase safety in the operation of UAM are still insufficient. In particular, efforts are urgently needed to reduce the risks that arise when a UAM vehicle equipped with autonomous driving attempts to land in a city center. Accordingly, this study proposes a method for landing safely by avoiding dangerous regions that interfere with landing when an autonomous UAM attempts to land in the city center. To this end, the latitude and longitude coordinates of dangerous objects observed by the sensors of the UAM are first calculated. Based on this, we propose converting the coordinates of the distorted planar image derived from the 3D image into latitude and longitude, and then comparing a pre-learned feature descriptor with the HOG (Histogram of Oriented Gradients) feature descriptor computed at the calculated location to extract the dangerous regions. Although the dangerous regions could not be completely extracted, generally satisfactory results were obtained. Accordingly, the proposed method can reduce the enormous cost of selecting take-off and landing sites for UAM vehicles equipped with autonomous driving technology, and can contribute to basic measures that reduce risk and increase safety when attempting to land in complex environments such as urban areas.
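
A minimal sketch of the HOG comparison step, assuming scikit-image and a pre-learned descriptor of matching length; the patch size, HOG parameters, and distance threshold are assumptions, not the paper's settings:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def is_dangerous(patch, learned_descriptor, dist_thresh=0.5):
    """Compute a HOG descriptor for a grayscale image patch and compare it
    to a pre-learned descriptor by Euclidean distance."""
    patch = resize(patch, (64, 64), anti_aliasing=True)   # fixed size so descriptors align
    desc = hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
    # learned_descriptor must have the same length as desc (1764 here).
    return np.linalg.norm(desc - learned_descriptor) < dist_thresh
```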


Multi License Plate Recognition System using High Resolution 360° Omnidirectional IP Camera (고해상도 360° 전방위 IP 카메라를 이용한 다중 번호판 인식 시스템)

  • Ra, Seung-Tak;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.4
    • /
    • pp.412-415
    • /
    • 2017
  • In this paper, we propose a multi license plate recognition system using a high-resolution 360° omnidirectional IP camera. The proposed system consists of a planar division part for the 360° circular image and a multi license plate recognition part. The planar division part converts the 360° circular image from the high-resolution omnidirectional IP camera into a planar image with enhanced quality through processes such as circular image acquisition, circular image segmentation, conversion to a planar image, pixel correction using color interpolation, color correction, and edge correction. The multi license plate recognition part recognizes multiple plates in the planar image through extraction of plate candidate regions, normalization and restoration of the candidate regions, and recognition of the plate numbers and characters using a neural network. To evaluate the proposed system, experiments were conducted with a specialist operator of intelligent parking control systems, and a high plate recognition rate of 97.8% was confirmed.
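
The circular-to-planar conversion can be sketched with OpenCV's polar remapping as below; the output size and the use of warpPolar in place of the paper's own mapping (and its color-interpolation and edge-correction steps) are assumptions:

```python
import cv2

def unwrap_circular(circ_img, center, radius, out_w=2048, out_h=512):
    """Unwrap a 360-degree circular image into a panoramic planar image.
    center: (x, y) of the circular image's optical center; radius: its extent in pixels."""
    # warpPolar puts radius along x and angle along y, so request (out_h, out_w)
    # and rotate afterwards to place the angle along the horizontal axis.
    polar = cv2.warpPolar(circ_img, (out_h, out_w), center, radius,
                          cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
    panorama = cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)
    return panorama
```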

Influences of direction for hexagonal-structure arrays of lens patterns on structural, optical, and electrical properties of InGaN/GaN MQW LEDs

  • Lee, Kwang-Jae;Kim, Hyun-June;Park, Dong-Woo;Jo, Byoung-Gu;Oh, Hye-Min;Hwang, Jeong-Woo;Kim, Jin-Soo;Lee, Jin-Hong;Leem, Jae-Young
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2010.08a
    • /
    • pp.153-153
    • /
    • 2010
  • Recently, to develop GaN-based light-emitting diodes (LEDs) with better performance, various approaches have been suggested by many research groups. In particular, the patterned sapphire substrate technique has been shown to improve both the internal quantum efficiency and the light extraction properties of GaN-based LEDs. In this paper, we discuss the influence of the direction of hexagonal-structure arrays of lens-shaped patterns (HSAPs) formed on sapphire substrates on the crystal, optical, and electrical properties of InGaN/GaN multi-quantum-well (MQW) LEDs. The basic direction of the HSAPs is normal (HSAPN) with respect to the primary flat zone of a c-plane sapphire substrate. Another HSAP tilted by 30° (HSAP30) from the HSAPN structure was used to investigate the effects of the pattern direction. The full widths at half maximum (FWHMs) of the double-crystal x-ray diffraction (DCXRD) spectra for the (0002) and (1-102) planes of the HSAPN are 320.4 and 381.6 arcsec, respectively, which are narrower than those of the HSAP30. The photoluminescence intensity for the HSAPN structure was ~1.2 times stronger than that for the HSAP30. From the electroluminescence (EL) measurements, the intensities for the two structures are almost the same. In addition, the effects of the area of the individual lens patterns constituting the hexagonal-structure arrays are discussed using the concept of the planar area fraction (PAF), defined as PAF = 1 - (pattern area / total unit area). For relatively small PAF values up to 0.494, the influence of the HSAP direction on the LED characteristics was significant. However, the direction effect of the HSAP became small with increasing PAF.
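
A worked example of the PAF definition above; the 50.6% coverage figure is hypothetical and is chosen only to reproduce the 0.494 value mentioned in the abstract:

```python
# Planar area fraction (PAF) as defined in the abstract:
# PAF = 1 - (pattern area / total unit area).
pattern_area = 50.6   # hypothetical pattern coverage, in % of the unit area
total_area = 100.0
paf = 1.0 - pattern_area / total_area
print(paf)            # 0.494, the largest PAF value discussed in the abstract
```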
