• Title/Summary/Keyword: RapidEye 영상 (RapidEye imagery)


Automated Image Matching for Satellite Images with Different GSDs through Improved Feature Matching and Robust Estimation (특징점 매칭 개선 및 강인추정을 통한 이종해상도 위성영상 자동영상정합)

  • Ban, Seunghwan;Kim, Taejung
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1257-1271 / 2022
  • Recently, many Earth observation optical satellites have been developed as demand for them increases. Rapid preprocessing of satellite data has therefore become one of the most important problems for the active utilization of satellite images. Satellite image matching is a technique in which two images are transformed and represented in one common coordinate system. It is used to align different bands or to correct relative position errors between two satellite images. In this paper, we propose an automatic image matching method for satellite images with different ground sampling distances (GSDs). Our method is based on improved feature matching and robust estimation of the transformation between satellite images. The proposed method consists of five processes: calculation of the overlapping area, improved feature detection, feature matching, robust estimation of the transformation, and image resampling. For feature detection, we extract the overlapping areas and resample them to equalize their GSDs. For feature matching, we use Oriented FAST and Rotated BRIEF (ORB) to improve matching performance. We performed image registration experiments with KOMPSAT-3A and RapidEye images. The performance of the proposed method was verified both qualitatively and quantitatively. The reprojection errors of image matching ranged from 1.277 to 1.608 pixels with respect to the GSD of the RapidEye images. Finally, we confirmed the feasibility of matching satellite images with heterogeneous GSDs through the proposed method.
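The robust-estimation step described above can be illustrated with a minimal RANSAC sketch (pure NumPy, not the authors' implementation): an affine transform is fitted repeatedly to random minimal samples of putative matches, the largest consensus set is kept, and the reprojection error is computed after a refit. All point data here are synthetic.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform: dst ~ [src, 1] @ params (params is 3x2)."""
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params

def apply_affine(params, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    """Robustly estimate an affine transform from putative matches with outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(src), size=3, replace=False)   # minimal sample
        params = fit_affine(src[sample], dst[sample])
        err = np.linalg.norm(apply_affine(params, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    params = fit_affine(src[best_inliers], dst[best_inliers])  # refit on consensus set
    err = np.linalg.norm(apply_affine(params, src[best_inliers]) - dst[best_inliers], axis=1)
    return params, best_inliers, err.mean()                    # mean reprojection error (px)

# synthetic matches: a pure (5, -3) pixel shift plus 3 gross outliers
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (30, 2))
dst = src + np.array([5.0, -3.0])
dst[:3] += 50.0
params, inliers, mean_err = ransac_affine(src, dst)
```

In a real pipeline the putative matches would come from ORB descriptor matching between the resampled overlap areas; RANSAC then rejects the mismatches that descriptor matching inevitably produces.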

Comparative Performance Evaluations of Eye Detection algorithm (눈 검출 알고리즘에 대한 성능 비교 연구)

  • Gwon, Su-Yeong;Cho, Chul-Woo;Lee, Won-Oh;Lee, Hyeon-Chang;Park, Kang-Ryoung;Lee, Hee-Kyung;Cha, Ji-Hun
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.722-730 / 2012
  • Recently, eye image information has been widely used for iris recognition and gaze detection in biometrics and human-computer interaction. As long-distance camera-based systems become more common for user convenience, noise such as eyebrow, forehead, and skin regions, which can degrade eye-detection accuracy, appears in the captured images. Fast processing speed is also required in such systems, in addition to high detection accuracy. We therefore compared the most widely used eye detection algorithms: the AdaBoost eye detector, adaptive template matching combined with AdaBoost, CAMShift combined with AdaBoost, and a rapid eye detection method. These methods were compared in terms of accuracy and processing speed on images including illumination changes, naked eyes, and cases with contact lenses or eyeglasses.
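The study's two comparison criteria, detection accuracy and per-image processing speed, can be sketched with a small benchmarking harness; the two "detectors" below are hypothetical stand-ins, not the AdaBoost or template-matching detectors actually evaluated.

```python
import time

def evaluate_detector(detector, samples, tol=5):
    """Report (accuracy, mean seconds per image) for an eye detector.
    A detection counts as a hit if it lands within `tol` pixels of the truth."""
    hits, elapsed = 0, 0.0
    for image, (tx, ty) in samples:
        t0 = time.perf_counter()
        cx, cy = detector(image)
        elapsed += time.perf_counter() - t0
        if abs(cx - tx) <= tol and abs(cy - ty) <= tol:
            hits += 1
    return hits / len(samples), elapsed / len(samples)

# hypothetical stand-in detectors (real ones: AdaBoost, template matching, ...)
def perfect_detector(image):
    return image["eye"]                  # always returns the true eye centre

def offset_detector(image):
    x, y = image["eye"]
    return x + 10, y                     # systematically 10 px off

samples = [({"eye": (40 + i, 30)}, (40 + i, 30)) for i in range(20)]
acc_a, t_a = evaluate_detector(perfect_detector, samples)
acc_b, t_b = evaluate_detector(offset_detector, samples)
```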

Vicarious Radiometric Calibration of RapidEye Satellite Image Using CASI Hyperspectral Data (CASI 초분광 영상을 이용한 RapidEye 위성영상의 대리복사보정)

  • Chang, An Jin;Choi, Jae Wan;Song, Ah Ram;Kim, Ye Ji;Jung, Jin Ha
    • Journal of Korean Society for Geospatial Information Science / v.23 no.3 / pp.3-10 / 2015
  • All objects on the ground have inherent spectral reflectance curves, which can be used to classify ground objects and to detect targets. Remotely sensed data must be converted to spectral reflectance for accurate analysis. Available approaches include formula-based methods provided by the operating institution, mathematical model methods, and ground-data-based methods. In this study, a RapidEye satellite image was converted to reflectance using the spectral reflectance of a CASI hyperspectral image through vicarious radiometric calibration. The results were compared with those of other calibration methods and with ground data. The proposed method was closer to the ground data than the ATCOR and New Kurucz 2005 methods and on par with the empirical line method (ELM).
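The ground-data-based calibration idea (as in the empirical line method) amounts to fitting a linear gain/offset per band between sensor digital numbers and co-located reference reflectance. A minimal single-band sketch with synthetic numbers (the assumed gain and offset are illustrative, not values from the paper):

```python
import numpy as np

def vicarious_gain_offset(dn, ref_reflectance):
    """Fit reflectance = gain * DN + offset for one band against
    co-located reference reflectance (e.g. from a hyperspectral sensor)."""
    gain, offset = np.polyfit(dn, ref_reflectance, 1)
    return gain, offset

# synthetic band: assume the true relation is rho = 1e-5 * DN + 0.01
dn = np.array([1000.0, 2000.0, 4000.0, 8000.0, 12000.0])
rho_ref = 1e-5 * dn + 0.01
gain, offset = vicarious_gain_offset(dn, rho_ref)
reflectance = gain * 5000.0 + offset     # calibrate a new DN value
```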

Determination of Spatial Resolution to Improve GCP Chip Matching Performance for CAS-4 (농림위성용 GCP 칩 매칭 성능 향상을 위한 위성영상 공간해상도 결정)

  • Lee, YooJin;Kim, Taejung
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1517-1526 / 2021
  • With the recent global and domestic development of Earth observation satellites, the applications of satellite images have widened, and research on improving their geometric accuracy is being actively carried out. This paper studies the possibility of automated ground control point (GCP) generation for the CAS-4 satellite, to be launched in 2025 with the capability of image acquisition at 5 m ground sampling distance (GSD). In particular, it checks whether GCP chips with 25 cm GSD, established for CAS-1 satellite images, can be used for CAS-4, and whether an optimal spatial resolution for matching CAS-4 images against GCP chips can be determined to improve matching performance. Experiments were carried out using RapidEye images, which have a GSD similar to CAS-4. The original satellite images were upsampled to produce images with smaller GSDs. At each GSD level, the upsampled satellite images were matched against the GCP chips and precision sensor models were estimated. The results show that sensor model accuracy improved with images at smaller GSDs compared to the accuracy established with the original images. At 1.25-1.67 m GSD, an accuracy of about 2.4 m was achieved. These findings suggest the possibility of automated GCP extraction and precision ortho-image generation for CAS-4 with improved accuracy.
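Two ingredients of this experiment, upsampling to a smaller GSD and matching a chip against the image, can be sketched as follows; the exhaustive normalized cross-correlation matcher below is a generic illustration, not the study's actual matcher, and all image data are synthetic.

```python
import numpy as np

def upsample_factor(image_gsd_m, target_gsd_m):
    """Enlargement factor needed to bring an image to a smaller target GSD."""
    return image_gsd_m / target_gsd_m

def ncc_match(image, chip):
    """Exhaustive normalized cross-correlation; returns the best (row, col)."""
    ih, iw = image.shape
    ch, cw = chip.shape
    c = (chip - chip.mean()) / chip.std()
    best_score, best_pos = -2.0, (0, 0)
    for r in range(ih - ch + 1):
        for col in range(iw - cw + 1):
            w = image[r:r + ch, col:col + cw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = (wn * c).mean()
            if score > best_score:
                best_score, best_pos = score, (r, col)
    return best_pos, best_score

# e.g. a 5 m RapidEye image upsampled to 1.25 m GSD is a 4x enlargement
factor = upsample_factor(5.0, 1.25)

# a chip cut from a known location of a synthetic image must match there
rng = np.random.default_rng(0)
image = rng.uniform(0, 1, (30, 30))
chip = image[12:20, 7:15].copy()
pos, score = ncc_match(image, chip)
```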

An Analysis of Agricultural Infrastructure Status of North Korea Using Satellite Imagery (인공위성영상을 활용한 북한의 농업생산기반 실태분석)

  • Kim, Kwanho;Lee, Sunghack;Choi, Jinyong
    • KCID journal / v.21 no.1 / pp.45-54 / 2014
  • In this study, the agricultural infrastructure of Shincheon-gun in North Korea was investigated using KOMPSAT-2 and RapidEye satellite imagery. The target infrastructure comprised agricultural land use, irrigation and drainage canals, dammed pools for irrigation, and pumping stations. KOMPSAT-2 imagery was used to investigate agricultural hydraulic structures, and agricultural land use was investigated with RapidEye imagery. Geometric correction was performed using 28 GCPs, and the QUAC method was used for atmospheric correction of all imagery. ISODATA clustering and visual classification were used to extract agricultural hydraulic structures, and object-based analysis was applied to classify agricultural land use. The extraction results for the agricultural hydraulic structures and land use are presented, and we suggest the applicability of satellite imagery for investigating agricultural infrastructure in North Korea.

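The ISODATA clustering used above is, at its core, iterative class assignment and centroid update (as in k-means); full ISODATA additionally splits and merges clusters, which is omitted in this minimal 1-D sketch on synthetic pixel values.

```python
import numpy as np

def two_class_kmeans(values, iters=10):
    """Iterative assign/update clustering — the core of ISODATA-style
    unsupervised classification. Centers start at the value range ends."""
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute the centers
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# two synthetic pixel populations (e.g. water vs. vegetation DN)
rng = np.random.default_rng(0)
vals = np.concatenate([10.0 + rng.normal(0, 1, 50), 200.0 + rng.normal(0, 5, 50)])
labels, centers = two_class_kmeans(vals)
```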

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.807-821 / 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, to constructing a time series of high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. A fusion experiment with multi-temporal Sentinel-2 and RapidEye images of agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied in this comparative experiment: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three models exhibited different prediction performance in terms of prediction errors and spatial similarity. Regardless of model type, however, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date mattered more for prediction performance than the time difference between the pair dates and the prediction date. In addition, using a vegetation index as the fusion input showed better prediction performance than computing the vegetation index from fused reflectance values, as it alleviates error propagation. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing advanced spatio-temporal fusion models for crop monitoring.
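The principle of spatio-temporal fusion referred to above can be sketched in its simplest form: the coarse-resolution change between the pair date and the prediction date is carried over to the fine-resolution pair image. Real STARFM additionally weights spectrally similar neighbouring pixels; the reflectance values below are synthetic.

```python
import numpy as np

def temporal_delta_fusion(fine_t1, coarse_t1, coarse_t2):
    """Carry the coarse-resolution temporal change over to the
    fine-resolution pair image (simplest STARFM-style prediction)."""
    return fine_t1 + (coarse_t2 - coarse_t1)

fine_t1   = np.array([[0.20, 0.30], [0.40, 0.50]])  # fine image, pair date
coarse_t1 = np.full((2, 2), 0.35)                   # coarse image, pair date
coarse_t2 = np.full((2, 2), 0.45)                   # coarse image, prediction date
fine_t2_pred = temporal_delta_fusion(fine_t1, coarse_t1, coarse_t2)
```

This also shows why the correlation between the coarse images on the pair and prediction dates matters: the entire temporal signal in the prediction comes from their difference.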

Comparative Analysis of Classification Accuracy for Calculating Cropland Areas by using Satellite Images (위성영상별 경지면적 분류 정확도 비교 분석)

  • Jo, Myung-Hee;Kim, Sung-Jae;Kim, Dong-Young;Choi, Kyung-Sook
    • Journal of The Korean Society of Agricultural Engineers / v.54 no.2 / pp.47-53 / 2012
  • Recently, many developed countries have used satellite images for classifying cropland areas to reduce the time and effort spent on field surveys. Korea has also used satellite images for this purpose since KOMPSAT-2 was successfully launched in 2006, but it still has far to go to achieve the required accuracy. This study evaluated the accuracy of cropland areas calculated using object-based classification with various satellite images, including ASTER, SPOT-5, RapidEye, QuickBird-2, and GeoEye-1. Their usability and effectiveness for cropland surveys were also verified by comparison with field survey data. As a result, GeoEye-1 and RapidEye showed higher accuracy for calculating paddy field areas, while GeoEye-1 and QuickBird-2 showed higher accuracy for upland field areas.
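The accuracy comparison against field survey data reduces to agreement counting between each classified map and the reference; a toy sketch with hypothetical class maps:

```python
import numpy as np

def overall_accuracy(predicted, reference):
    """Share of pixels whose mapped class agrees with the field-survey class."""
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    return float((predicted == reference).mean())

# hypothetical class maps: 0 = paddy, 1 = upland, 2 = other
reference = np.array([0, 0, 0, 1, 1, 1, 2, 2])
sensor_a  = np.array([0, 0, 0, 1, 1, 2, 2, 2])  # one upland pixel missed
sensor_b  = np.array([0, 2, 0, 1, 2, 2, 2, 2])  # three errors
acc_a = overall_accuracy(sensor_a, reference)
acc_b = overall_accuracy(sensor_b, reference)
```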

Estimating Chlorophyll-a Concentration using Spectral Mixture Analysis from RapidEye Imagery in Nak-dong River Basin (RapidEye영상과 선형분광혼합화소분석 기법을 이용한 낙동강 유역의 클로로필-a 농도 추정)

  • Lee, Hyuk;Nam, Gibeom;Kang, Taegu;Yoon, Seungjoon
    • Journal of Korean Society on Water Environment / v.30 no.3 / pp.329-339 / 2014
  • This study aims to estimate chlorophyll-a concentration in rivers using multi-spectral RapidEye imagery and spectral mixture analysis (SMA), and to assess the applicability of SMA to multi-temporal imagery. Comparison between predicted and ground-reference chlorophyll-a concentrations for images acquired in October and November 2013 showed statistically significant performance, with coefficients of determination ($R^2$) of 0.49 and 0.51, respectively. A two-band (Red-RE) model for the October and November 2013 RapidEye images showed low performance, with $R^2$ of 0.26 and 0.16, respectively, and a three-band (Red-RE-NIR) model showed inconsistent performance, with $R^2$ of 0.016 and 0.304. SMA-derived chlorophyll-a concentrations were relatively more accurate than values based on the band-ratio models. SMA was the most appropriate method for calculating chlorophyll-a concentration from images acquired during periods of low chlorophyll-a concentration. Applying SMA across multi-temporal imagery showed low performance because of the spatio-temporal variation of the endmembers. This approach offers a potentially cost-effective method for monitoring and managing river water quality using multi-spectral imagery. In addition, the chlorophyll-a concentrations calculated from multi-spectral RapidEye imagery can be applied to water quality modeling, enhancing prediction accuracy.
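A two-band (Red-RE) model of the kind evaluated here is typically a linear fit of chlorophyll-a against the red-edge/red band ratio, with $R^2$ as the performance measure. A sketch with synthetic samples; the exact model form and coefficients in the paper may differ:

```python
import numpy as np

def fit_two_band_model(red, red_edge, chl):
    """Fit chl-a against the red-edge/red band ratio; return coefficients and R^2."""
    ratio = red_edge / red
    a, b = np.polyfit(ratio, chl, 1)
    pred = a * ratio + b
    ss_res = np.sum((chl - pred) ** 2)
    ss_tot = np.sum((chl - chl.mean()) ** 2)
    return (a, b), 1.0 - ss_res / ss_tot

# synthetic samples that follow chl = 30 * ratio - 10 exactly
red      = np.array([0.10, 0.10, 0.12, 0.15, 0.20])
red_edge = np.array([0.05, 0.08, 0.10, 0.15, 0.24])
chl      = 30.0 * (red_edge / red) - 10.0
coef, r2 = fit_two_band_model(red, red_edge, chl)
```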

The Analysis of Changes in Forest Status and Deforestation of North Korea's DMZ Using RapidEye Satellite Imagery and Google Earth (RapidEye 위성영상과 구글 어스를 활용한 북한 DMZ의 산림현황 및 산림황폐지 변화 분석)

  • KWON, Sookyung;KIM, Eunhee;LIM, Joongbin;YANG, A-Ram
    • Journal of the Korean Association of Geographic Information Studies / v.24 no.4 / pp.113-126 / 2021
  • This study analyzed changes in forest status and deforested areas in the North Korean side of the DMZ based on satellite images. Using RapidEye satellite images from the growing and non-growing seasons, the land cover of the North Korean DMZ was classified into stocked forest (conifer, deciduous, mixed), deforested land (unstocked mountain, cultivated mountain, bare mountain), and non-forest areas. Deforestation rates in the Yeonan-Baecheon, Beopdong-Pyeonggang, Heoyang-Geumgang, and Tongcheon-Goseong districts were calculated as 14.24%, 16.75%, 5.98%, and 16.63%, respectively. Forest fires and forest land-use change were considered the main causes of deforestation in the DMZ. Changes in the deforested area were analyzed through Google Earth images; the results showed that the deforested area was on a decreasing trend. This study can serve as basic data for establishing forest cooperation strategies in the inter-Korean border region by providing forest spatial information on the North Korean DMZ.
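A deforestation rate like those reported above is the deforested share of forest land. The sketch below assumes a stocked-plus-deforested denominator, which may differ from the study's exact definition, and uses made-up pixel counts chosen to reproduce the 14.24% figure:

```python
def deforestation_rate(deforested_px, stocked_px):
    """Deforested share of forest land (stocked + deforested), in percent.
    Denominator definition is an assumption, not taken from the paper."""
    return 100.0 * deforested_px / (deforested_px + stocked_px)

# hypothetical pixel counts (1,424 deforested of 10,000 forest pixels)
rate = deforestation_rate(deforested_px=1424, stocked_px=8576)
```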

A Study on Land Cover Map of UAV Imagery using an Object-based Classification Method (객체기반 분류기법을 이용한 UAV 영상의 토지피복도 제작 연구)

  • Shin, Ji Sun;Lee, Tae Ho;Jung, Pil Mo;Kwon, Hyuk Soo
    • Journal of Korean Society for Geospatial Information Science / v.23 no.4 / pp.25-33 / 2015
  • Ecosystem service (ES) assessment is based on land cover information and is primarily performed at the global scale. However, such results have limitations in range and scale as data for decision-making on regional issues. Although the Ministry of Environment provides land cover data at the regional scale, its use is restricted by the intrinsic limitations of on-screen digitizing and by temporal and spatial differences. The objective of this study is to generate a land cover map from UAV imagery. To classify the imagery, we resampled the UAV imagery to 5 m resolution. The object-based image segmentation results showed that a scale of 20 and a merge level of 34 were the optimum weight values for the UAV imagery; for the RapidEye imagery, a scale of 30 and a merge level of 30 were the most appropriate at the sub-category level of land cover classes. We generated land cover maps using an example-based classification method and analyzed their accuracy using stratified random sampling. The results show overall accuracies of 90% for the RapidEye and 91% for the UAV classification imagery.
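Stratified random sampling for accuracy assessment, as used above, draws check pixels per mapped class and scores their agreement with the reference. A minimal sketch on toy 1-D maps (equal samples per class; the study's sampling design may differ):

```python
import numpy as np

def stratified_sample_accuracy(class_map, reference_map, per_class=10, seed=0):
    """Draw an equal number of check pixels from each mapped class and
    report the share that agrees with the reference map."""
    rng = np.random.default_rng(seed)
    picks = []
    for c in np.unique(class_map):
        idx = np.flatnonzero(class_map == c)
        picks.append(rng.choice(idx, size=min(per_class, idx.size), replace=False))
    picks = np.concatenate(picks)
    return float((class_map[picks] == reference_map[picks]).mean())

# toy 1-D "maps": the classification agrees with the reference everywhere
reference = np.repeat([0, 1, 2], 50)
classified = reference.copy()
acc = stratified_sample_accuracy(classified, reference, per_class=5)
```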