• Title/Summary/Keyword: Aerial image data

A Study on Automatic Extraction of Buildings Using LIDAR with Aerial Imagery (LIDAR 데이터와 항공사진을 이용한 건물의 자동추출에 관한 연구)

  • 이영진;조우석
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2003.04a / pp.471-477 / 2003
  • This paper presents an algorithm that automatically extracts buildings from among the many different features on the earth's surface by fusing LIDAR data with panchromatic aerial images. The proposed algorithm consists of three stages: a point-level process, a polygon-level process, and a parameter-space-level process. At the first stage, we eliminate gross errors and apply a local maxima filter to detect building candidate points from the raw laser scanning data. A grouping procedure is then performed to segment the raw LIDAR data, and the segmented LIDAR data is polygonized by the encasing-polygon algorithm developed in this research. At the second stage, we eliminate non-building polygons using several constraints such as area and circularity. At the last stage, all the polygons generated at the second stage are projected onto the aerial stereo images through the collinearity condition equations. Finally, we fuse the projected encasing polygons with edges detected by image processing to refine the building segments. The experimental results showed that the RMSEs of building corners in X, Y, and Z were ±8.1 cm, ±24.7 cm, and ±35.9 cm, respectively.

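The projection step described in the abstract above relies on the standard collinearity condition equations, which map a ground point (X, Y, Z) into photo coordinates given a camera's exterior orientation. The following is a minimal Python sketch of that general step, not the authors' implementation; the rotation convention, parameter names, and example values are assumptions for illustration.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    # Sequential rotations about X, Y, Z (omega-phi-kappa, radians); the exact
    # convention is an assumption, not taken from the paper.
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity_project(ground_xyz, cam_xyz, opk, f, x0=0.0, y0=0.0):
    """Map ground points (N, 3) to photo coordinates (N, 2) using the
    collinearity condition x = x0 - f*U/W, y = y0 - f*V/W."""
    M = rotation_matrix(*opk).T                      # ground-to-image rotation
    d = np.asarray(ground_xyz) - np.asarray(cam_xyz) # vectors from perspective center
    uvw = d @ M.T                                    # (U, V, W) for each point
    x = x0 - f * uvw[:, 0] / uvw[:, 2]
    y = y0 - f * uvw[:, 1] / uvw[:, 2]
    return np.column_stack([x, y])

# Example: project two encasing-polygon corners with made-up orientation values.
corners = np.array([[1000.0, 2000.0, 55.0], [1010.0, 2000.0, 55.0]])
photo_xy = collinearity_project(corners, cam_xyz=[1005.0, 1995.0, 800.0],
                                opk=(0.01, -0.02, 0.5), f=0.153)
```

In practice, the projected polygon would then be compared against edges detected in the image, as the abstract describes.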

Comparison and Performance Validation of On-line Aerial Triangulation Algorithms for Real-time Image Georeferencing (실시간 영상 지오레퍼런싱을 위한 온라인 항공삼각측량 알고리즘의 비교 및 성능 검증)

  • Choi, Kyoung-Ah;Lee, Im-Pyeong
    • Korean Journal of Remote Sensing / v.28 no.1 / pp.55-67 / 2012
  • Real-time image georeferencing is required to generate spatial information rapidly from image sequences acquired by multi-sensor systems. To complement the performance of position/attitude sensors and to process the data in real time, on-line aerial triangulation based on a sequential estimation algorithm should be employed. In this study, we therefore attempt to derive an efficient on-line aerial triangulation algorithm for real-time georeferencing of image sequences. We implemented on-line aerial triangulation using the existing Givens transformation update algorithm and a new inverse normal matrix update algorithm based on observation classification. To compare the performance of the two algorithms in terms of accuracy and processing time, we applied them to simulated airborne multi-sensor data. The experimental results indicate that the inverse normal matrix update algorithm shows 40% higher accuracy in the estimated ground point coordinates and eight times faster processing speed compared to the Givens transformation update algorithm. Therefore, the inverse normal matrix update algorithm is more appropriate for real-time image georeferencing.
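
On-line aerial triangulation of the kind compared above rests on sequential least-squares estimation, where new image observations update the current solution without re-solving the whole block. The sketch below shows a generic inverse-normal-matrix (recursive least-squares) update; it illustrates the general idea only, and the function and variable names are hypothetical rather than taken from the paper.

```python
import numpy as np

def sequential_update(x_hat, N_inv, A_new, y_new, sigma2=1.0):
    """One recursive least-squares step: fold new observations (A_new, y_new)
    into the current estimate x_hat and inverse normal matrix N_inv using the
    Sherman-Morrison-Woodbury identity, avoiding a full re-inversion."""
    P = N_inv                                    # current (A^T A)^-1
    S = sigma2 * np.eye(len(y_new)) + A_new @ P @ A_new.T
    K = P @ A_new.T @ np.linalg.inv(S)           # gain matrix
    x_hat = x_hat + K @ (y_new - A_new @ x_hat)  # corrected parameter estimate
    N_inv = P - K @ A_new @ P                    # updated inverse normal matrix
    return x_hat, N_inv
```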

Automatic Change Detection Using Unsupervised Saliency Guided Method with UAV and Aerial Images

  • Farkoushi, Mohammad Gholami;Choi, Yoonjo;Hong, Seunghwan;Bae, Junsu;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1067-1076 / 2020
  • In this paper, an unsupervised saliency-guided change detection method using UAV and aerial imagery is proposed. Regions that differ strongly from other areas are salient, which makes them more distinct, and the existence of substantial differences between two images makes saliency suitable for guiding the change detection process. Change Vector Analysis (CVA), which can extract the overall magnitude and direction of change from multi-spectral and multi-temporal remote sensing data, is used to generate an initial difference image. Combining the unsupervised CVA and the saliency with Principal Component Analysis (PCA), which can be implemented as a guide for change detection, a method for UAV and aerial images is proposed. By applying saliency generation to the difference map extracted via the CVA, potentially changed areas are obtained, and by thresholding the saliency map, most areas of interest are correctly extracted. Finally, the PCA method is implemented to extract features, and K-means clustering is applied to produce a map of changed and unchanged areas within the extracted regions. The proposed method was applied to image sets over flooded and typhoon-damaged areas and performed about 95 percent better than the PCA approach when compared with manually extracted ground truth for all data sets. Finally, we compared our approach with the PCA K-means method to show the effectiveness of the method.
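
The pipeline in the abstract above combines CVA, a saliency-guided mask, and PCA with K-means. A simplified sketch of two of those pieces is given below: the CVA difference magnitude and a 2-class K-means split inside a salient-area mask. It omits the saliency generation and the PCA feature step, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def change_vector_analysis(img_t1, img_t2):
    """CVA on co-registered multi-band images shaped (H, W, B): the change
    magnitude is the Euclidean norm of the per-pixel spectral difference."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.linalg.norm(diff, axis=2)
    return diff, magnitude

def cluster_changes(magnitude, mask):
    """Split salient pixels into two clusters ('changed' / 'unchanged') with
    K-means on the CVA magnitude, a simplification of the PCA + K-means step."""
    values = magnitude[mask].reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(values)
    change_map = np.zeros(magnitude.shape, dtype=int)
    change_map[mask] = labels + 1      # 0 = outside mask, 1/2 = cluster labels
    return change_map
```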

Evaluation of Geospatial Information Construction Characteristics and Usability According to Type and Sensor of Unmanned Aerial Vehicle (무인항공기 종류 및 센서에 따른 공간정보 구축의 활용성 평가)

  • Chang, Si Hoon;Yun, Hee Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.555-562 / 2021
  • Recently, in the field of geospatial information construction, unmanned aerial vehicles have been increasingly used because they enable rapid data acquisition and utilization. In this study, photogrammetry was performed using fixed-wing, rotary-wing, and VTOL (Vertical Take-Off and Landing) unmanned aerial vehicles, and geospatial information was also constructed using two types of unmanned aerial LiDAR (Light Detection And Ranging) sensors. In addition, the accuracy was evaluated to demonstrate the utility of the spatial information constructed through unmanned aerial photogrammetry and LiDAR. As a result of the accuracy evaluation, the orthoimage constructed through unmanned aerial photogrammetry showed accuracy within 2 cm. Considering that the GSD (Ground Sample Distance) of the constructed orthoimage is about 2 cm, the accuracy of the unmanned aerial photogrammetry results is judged to be within the GSD. The spatial information constructed with the unmanned aerial LiDAR showed accuracy within 6 cm in the height direction, and data on the ground surface were obtained even in the vegetated area. A DEM (Digital Elevation Model) built from the LiDAR data can be used in various applications, such as construction work, urban planning, disaster prevention, and topographic analysis.
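
Accuracy figures like the 2 cm and 6 cm values above are typically reported as per-axis RMSE of checkpoints measured in the product against surveyed reference coordinates. A small sketch of that computation, with hypothetical checkpoint values, is shown below.

```python
import numpy as np

def rmse_per_axis(measured, reference):
    """RMSE of checkpoint coordinates per axis (columns = X, Y, Z), the usual
    way the accuracy of UAV photogrammetry / LiDAR products is reported."""
    diff = np.asarray(measured) - np.asarray(reference)
    return np.sqrt(np.mean(diff ** 2, axis=0))

# Hypothetical checkpoints (meters); accuracy is then judged against the ~2 cm GSD.
measured  = np.array([[10.012, 20.008, 5.031], [30.021, 40.015, 6.048]])
reference = np.array([[10.000, 20.000, 5.000], [30.000, 40.000, 6.000]])
print(rmse_per_axis(measured, reference))   # approx. [0.017, 0.012, 0.040]
```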

Sectional corner matching for automatic relative orientation

  • Seo, Ji-Hun;Bang, Ki-In;Kim, Kyung-Ok
    • Proceedings of the KSRS Conference / 2002.10a / pp.74-74 / 2002
  • This paper describes a corner matching technique for automatic relative orientation. Automatically matched corner points from stereo aerial images are used as a data set and help to improve the automation of the relative orientation process. A general corner matching process over the whole of both images involves heavy and repetitive computation and is therefore very time-consuming. However, aerial stereo images typically overlap by about seventy percent and are only slightly rotated. Based on this assumption, we designed a sectional corner matching technique that calculates correlation section by section between the stereo images. Even if the overlap information is not accurate, knowing it approximately makes the matching process lighter. Since aerial images are very large, corner extraction is performed hierarchically by creating an image pyramid, and the extracted corners are refined at the higher-level image. The corners extracted at the final step are matched section by section. Matched corners are then filtered using positional information and their mutual relation and distribution. The filtering process is applied over several steps because what determines a good result, namely good relative orientation parameters, is not the number of matched corner points but their accuracy. The filtered data is filtered once more during the calculation of the relative orientation parameters. When the process is finished, we obtain well-matched corner points and refined von Gruber areas in addition to the relative orientation parameters. This sectional corner matching technique is efficient because it reduces unnecessary repetitive operations, and it contributes to improved automation of relative orientation.

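A rough sketch of the section-by-section matching idea described above is given below: candidate corners are compared only when they fall in corresponding sections under an approximate overlap shift, and matches are scored by normalized cross-correlation. The section size, window size, threshold, and shift handling are assumptions; the paper's hierarchical pyramid extraction and multi-step filtering are not reproduced.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_corners_by_section(corners_l, corners_r, img_l, img_r,
                             shift, section=256, win=7, min_score=0.8):
    """For each left-image corner, search only right-image corners whose
    coordinates (offset by the approximate overlap shift) fall in the same
    section, and keep the best NCC match above a threshold."""
    matches = []
    for (xl, yl) in corners_l:
        best, best_score = None, min_score
        for (xr, yr) in corners_r:
            # restrict the search to the corresponding section
            if (xl // section, yl // section) != ((xr - shift[0]) // section,
                                                  (yr - shift[1]) // section):
                continue
            pa = img_l[yl - win:yl + win + 1, xl - win:xl + win + 1]
            pb = img_r[yr - win:yr + win + 1, xr - win:xr + win + 1]
            if pa.shape != pb.shape:       # skip windows clipped at the border
                continue
            score = ncc(pa, pb)
            if score > best_score:
                best, best_score = (xr, yr), score
        if best is not None:
            matches.append(((xl, yl), best, best_score))
    return matches
```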

Image-Based Modeling of Urban Buildings Using Aerial Photographs and Digital Maps (항공사진과 수치지도를 이용한 도시 건물의 이미지 기반 모델링)

  • Yoo, Byounghyun;Han, Soonhung
    • Journal of the Korean Association of Geographic Information Studies / v.8 no.1 / pp.49-62 / 2005
  • A VR (virtual reality) simulator, such as a helicopter simulator, needs a virtual environment of an existing urban area, but the real urban environment keeps changing. We therefore need a modeling method that makes use of GIS data that are updated periodically. Flight simulation needs to visualize not only buildings at a near distance but also a large number of buildings in the far distance. We propose a method for modeling the urban environment from aerial images and digital maps with comparatively little manual work. Image-based modeling is applied to an urban model that considers the characteristics of Korean cities, and buildings in the distance can be presented without creating a large number of polygons. The proposed method consists of a pre-processing stage, which prepares the model from the GIS data, and a modeling stage, which builds the virtual urban environment. The virtual urban environment can be modeled with a simple process that utilizes the building height map.

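The modeling stage described above builds simple building geometry from footprints in the digital map and a building height map. The toy sketch below extrudes a 2D footprint polygon into a prism from a single height value; it only illustrates the idea, is not the authors' pipeline, and ignores texturing from the aerial image.

```python
def extrude_footprint(footprint, height):
    """Turn a 2D building footprint (list of (x, y) vertices, counter-clockwise)
    and a height value into a simple prism: vertices plus wall and roof faces."""
    n = len(footprint)
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, float(height)) for x, y in footprint]
    vertices = bottom + top
    walls = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    roof = tuple(range(n, 2 * n))
    return vertices, walls, roof

# Example: a 10 m x 20 m rectangular footprint extruded to 15 m.
verts, walls, roof = extrude_footprint([(0, 0), (10, 0), (10, 20), (0, 20)], 15)
```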

Classification of Fall Crops Using Unmanned Aerial Vehicle Based Image and Support Vector Machine Model - Focusing on Idam-ri, Goesan-gun, Chungcheongbuk-do - (무인기 기반 영상과 SVM 모델을 이용한 가을수확 작물 분류 - 충북 괴산군 이담리 지역을 중심으로 -)

  • Jeong, Chan-Hee;Go, Seung-Hwan;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning / v.28 no.1 / pp.57-69 / 2022
  • Crop classification is very important for estimating crop yield and determining accurate cultivation areas. The purpose of this study is to classify crops harvested in fall in Idam-ri, Goesan-gun, Chungcheongbuk-do using unmanned aerial vehicle (UAV) images and a support vector machine (SVM) model. The study proceeded in the order of image acquisition, variable extraction, model building, and evaluation. First, RGB and multispectral images were acquired on September 13, 2021. The independent variables, applied to the Farm-Map, consisted of gray-level co-occurrence matrix (GLCM)-based texture characteristics derived from the RGB images and multispectral reflectance data. The crop classification model was built using the texture characteristics and reflectance data, and finally, accuracy evaluation was performed using an error matrix. Four model variants were built to compare classification accuracy according to the combination of independent variables. Among them, the recursive feature elimination (RFE) model showed the highest accuracy, with an overall accuracy (OA) of 88.64% and a Kappa coefficient of 0.84. UAV-based RGB and multispectral images effectively classified cabbage, rice, and soybean when the SVM model was applied. The results of this study demonstrate the usefulness of classifying crops using single-period images. These techniques are expected to improve the accuracy and efficiency of crop cultivation area surveys when supplemented with additional training data, and to provide basic data for estimating crop yields.
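
A minimal sketch of the classification step is shown below: an RBF-kernel SVM trained on a per-parcel feature table of GLCM texture statistics and multispectral reflectances, evaluated with overall accuracy and the Kappa coefficient. The features and labels here are random stand-ins and the hyperparameters are assumptions, so the printed numbers are meaningless; the sketch only illustrates the workflow.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-parcel feature table: GLCM texture statistics from the RGB
# image plus multispectral band reflectances, with a crop label per parcel.
X = np.random.rand(300, 10)           # stand-in features (contrast, homogeneity, bands, ...)
y = np.random.randint(0, 3, 300)      # stand-in labels: 0 = cabbage, 1 = rice, 2 = soybean

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_train, y_train)
pred = clf.predict(X_test)
print("OA:", accuracy_score(y_test, pred), "Kappa:", cohen_kappa_score(y_test, pred))
```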

3D Line Segment Detection from Aerial Images using DEM and Ortho-Image (DEM과 정사영상을 이용한 항공 영상에서의 3차원 선소추출)

  • Woo Dong-Min;Jung Young-Kee;Lee Jeong-Yong
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.3 / pp.174-179 / 2005
  • This paper presents a 3D line segment extraction method that can be used in generating 3D rooftop models. The core of our method is that a 3D line segment is extracted by line fitting of elevation data along the 2D line coordinates of the ortho-image. In order to use elevations in line fitting, the elevations should be reliable; to measure the reliability of an elevation, we employ the concept of self-consistency. We test the effectiveness of the proposed method with a quantitative accuracy analysis using synthetic images generated from the Avenches data set of the Ascona aerial images. Experimental results indicate that the proposed method shows average 3D line errors of 0.16 to 0.30 m, which is about 10% of that of the conventional area-based method.
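
The core operation described above, lifting a 2D ortho-image line to 3D by fitting elevations sampled along it, can be sketched as follows. The `ortho_to_dem` mapping, the sampling density, and the plain least-squares fit (used here in place of the paper's self-consistency weighting) are assumptions for illustration.

```python
import numpy as np

def line_to_3d(p0, p1, dem, ortho_to_dem, n_samples=50):
    """Lift a 2D line segment (ortho-image pixels p0 -> p1) to 3D by sampling
    the DEM along the segment and fitting elevation linearly against the
    distance along the line; ortho_to_dem maps ortho pixels to DEM indices."""
    t = np.linspace(0.0, 1.0, n_samples)
    xs = p0[0] + t * (p1[0] - p0[0])
    ys = p0[1] + t * (p1[1] - p0[1])
    rows, cols = ortho_to_dem(xs, ys)
    z = dem[rows.astype(int), cols.astype(int)]
    dist = t * np.hypot(p1[0] - p0[0], p1[1] - p0[1])
    slope, intercept = np.polyfit(dist, z, 1)   # linear elevation model along the line
    z0, z1 = intercept, intercept + slope * dist[-1]
    return (xs[0], ys[0], z0), (xs[-1], ys[-1], z1)
```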

A study on aerial triangulation from multi-sensor imagery

  • Lee, Young-ran;Habib, Ayman;Kim, Kyung-Ok
    • Proceedings of the KSRS Conference / 2002.10a / pp.400-406 / 2002
  • Recently, an enormous and increasing volume of remotely sensed data has been acquired by an ever-growing number of earth observation satellites. Combining imagery from diverse sources is an important requirement in many applications such as data fusion, city modeling, and object recognition. Aerial triangulation is a procedure to reconstruct object space from imagery. However, since each kind of imagery has its own sensor model, characteristics, and resolution, previous approaches to aerial triangulation (or georeferencing) were performed on each sensor model separately. This study evaluated the advantages of triangulating a large number of images from multiple sensors simultaneously. The incorporated sensors are frame, pushbroom, and whiskbroom cameras. The limits and problems of the pushbroom or whiskbroom sensor models can be compensated by combined triangulation with frame imagery, and vice versa. The object space reconstructed from multi-sensor triangulation is more accurate than that from a single sensor model, and the experiments conducted in this study demonstrate this more accurate reconstruction.

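The benefit described above comes from adjusting observations of all sensor types in a single least-squares system with shared ground-point (and orientation) parameters. The sketch below shows only the skeleton of such a combined adjustment, stacking pre-linearized observation equations from each sensor block; the block structure and weighting are assumptions, and a real multi-sensor triangulation involves iterative linearization of each sensor model.

```python
import numpy as np

def combined_adjustment(blocks):
    """Stack linearized observation equations from different sensor models
    (e.g. frame, pushbroom, whiskbroom) that share the same parameter vector
    and solve them in one least-squares step: x = (A^T W A)^-1 A^T W l."""
    A = np.vstack([b["A"] for b in blocks])        # design matrices, shared parameter columns
    l = np.concatenate([b["l"] for b in blocks])   # misclosure vectors
    w = np.concatenate([np.full(len(b["l"]), b["weight"]) for b in blocks])
    N = A.T @ (A * w[:, None])                     # normal matrix A^T W A
    x = np.linalg.solve(N, A.T @ (w * l))          # parameter corrections
    return x, np.linalg.inv(N)                     # corrections and cofactor matrix
```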

A Study on Aerial Triangulation from Multi-Sensor Imagery

  • Lee, Young-Ran;Habib, Ayman;Kim, Kyung-Ok
    • Korean Journal of Remote Sensing / v.19 no.3 / pp.255-261 / 2003
  • Recently, an enormous and increasing volume of remotely sensed data has been acquired by an ever-growing number of earth observation satellites. Combining imagery from diverse sources is an important requirement in many applications such as data fusion, city modeling, and object recognition. Aerial triangulation is a procedure to reconstruct object space from imagery. However, since each kind of imagery has its own sensor model, characteristics, and resolution, previous approaches to aerial triangulation (or georeferencing) were performed on each sensor model separately. This study evaluated the advantages of triangulating a large number of images from multiple sensors simultaneously. The incorporated sensors are frame, pushbroom, and whiskbroom cameras. The limits and problems of the pushbroom or whiskbroom sensor models can be compensated by combined triangulation with the other sensors, and vice versa. The object space reconstructed from multi-sensor triangulation is more accurate than that from a single sensor model, and the experiments conducted in this study demonstrate this more accurate reconstruction.