• Title/Summary/Keyword: spatial partitioning techniques (공간 분할 기법)


Research Trend of the Remote Sensing Image Analysis Using Deep Learning (딥러닝을 이용한 원격탐사 영상분석 연구동향)

  • Kim, Hyungwoo;Kim, Minho;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.5_3 / pp.819-834 / 2022
  • Artificial Intelligence (AI) techniques have been effectively used for image classification, object detection, and image segmentation. Along with the recent advancement of computing power, deep learning models can build deeper and thicker networks and achieve better performance by creating more appropriate feature maps based on effective activation functions and optimizer algorithms. This review paper examined technical and academic trends of Convolutional Neural Network (CNN) and Transformer models that are emerging techniques in remote sensing and suggested their utilization strategies and development directions. A timely supply of satellite images and real-time processing for deep learning to cope with disaster monitoring will be required for future work. In addition, a big data platform dedicated to satellite images should be developed and integrated with drone and Closed-circuit Television (CCTV) images.

Bias-correction of near-real-time multi-satellite precipitation products using machine learning (머신러닝 기반 준실시간 다중 위성 강수 자료 보정)

  • Sungho Jung;Xuan-Hien Le;Van-Giang Nguyen;Giha Lee
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.280-280 / 2023
  • Accurate spatio-temporal estimation of precipitation is a core technology for hydrological modeling, including flood response, drought management, and water resources planning. With the start of the Global Precipitation Measurement (GPM) mission enabled by advances in space technology, various high-resolution precipitation products are being produced from multiple satellite sensors, and as climate change increases the frequency of water-related disasters, the utility and importance of Near-Real-Time (NRT) satellite precipitation data are growing. However, to keep latency short, NRT products undergo only minimal correction after observation, so their precipitation estimates carry relatively high uncertainty. Accordingly, this study merges the collected satellite precipitation products with ground observations using ensemble machine learning to generate bias-corrected near-real-time precipitation data. The model inputs were three hourly NRT satellite precipitation products (GSMaP_NRT, IMERG_Early, PERSIANN_CCS) and temperature, humidity, and precipitation records from Automatic Weather Stations (AWS). Considering missing values, 475 stations were selected, and spatially aware random sampling assigned 375 stations (about 80%) to training and the remaining 100 stations (about 20%) to validation. KGE, MAE, and RMSE were used as quantitative evaluation metrics, and POD, SR, BS, and CSI, derived from the precipitation contingency table, were used as categorical metrics. The machine learning model estimated precipitation more accurately than the individual raw satellite products and the IDW method and produced spatially stable results. However, peak precipitation was somewhat underestimated, which we expect can be addressed by adding more precipitation-related input variables. Therefore, a machine learning technique that merges highly uncertain individual NRT satellite products with ground observations to generate corrected optimal precipitation data enables real-time response to flash water-related disasters and can provide reliable quantitative precipitation estimates for flood forecasting.
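The abstract above merges several near-real-time satellite products with gauge observations via ensemble machine learning, but does not specify the model. As a minimal stand-in, the sketch below blends three synthetic product series with least-squares weights fitted against "gauge" values; the product names in the comments come from the abstract, while all data, weights, and the blending scheme itself are illustrative assumptions, not the paper's method.

```python
import random

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_merge_weights(satellite, gauge):
    """Least-squares weights w so that sum_j w_j * satellite_j ~= gauge
    (normal equations of the linear blend)."""
    m = len(satellite)
    T = len(gauge)
    A = [[sum(satellite[i][t] * satellite[j][t] for t in range(T))
          for j in range(m)] for i in range(m)]
    b = [sum(satellite[i][t] * gauge[t] for t in range(T)) for i in range(m)]
    return solve3(A, b)

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# Synthetic hourly series standing in for GSMaP_NRT, IMERG_Early, PERSIANN_CCS.
random.seed(0)
truth = [max(0.0, random.gauss(2.0, 1.5)) for _ in range(500)]
sats = [[1.3 * v + random.gauss(0, 0.4) for v in truth],   # over-estimating product
        [0.8 * v + random.gauss(0, 0.5) for v in truth],   # under-estimating product
        [v + random.gauss(0, 0.8) for v in truth]]         # noisy, unbiased product

w = fit_merge_weights(sats, truth)
merged = [sum(w[j] * sats[j][t] for j in range(3)) for t in range(len(truth))]
print("RMSE per product:", [round(rmse(s, truth), 3) for s in sats])
print("RMSE merged     :", round(rmse(merged, truth), 3))
```

Because each raw product corresponds to a feasible weight vector, the fitted blend can never do worse than the best single product on the training data, which mirrors the paper's finding that the merged estimate beats the individual raw inputs.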


Flood Forecasting by using Distributed Models with Ensemble Kalman Filter (앙상블 칼만필터 이론을 이용한 분포형모델의 홍수유출예측)

  • Park, Hyo-Gil;Choi, Hyun-Il;Jee, Hong-Kee
    • Proceedings of the Korea Water Resources Association Conference / 2009.05a / pp.27-31 / 2009
  • Among the measures available for preventing flood damage, flood forecasting and warning is the representative non-structural approach. For rational design flood estimation, accurate analysis of the rainfall-runoff process and runoff prediction in a river basin are very important for the efficient use of water resources and for the hydrological analysis underlying river water use and flood control, and this requires highly accurate runoff prediction from rainfall. In addition, a flood forecasting and warning system is needed to protect lives and property from disasters such as river inundation, and its efficient operation requires the development of real-time flood prediction techniques. The objective of this study is to predict flood runoff by applying the Ensemble Kalman Filter (EnKF) to a distributed-parameter model that can reflect the spatial variability of basin characteristics and the spatial distribution of mean rainfall, processing error-contaminated responses of the nonlinear system in real time and quantitatively reducing uncertainty. The distributed model with the EnKF applied in a nonlinear system is expected to yield more precise flood runoff predictions from basin characteristics and to provide a basis for determining an appropriate basin-partitioning scale for future flood forecasting and warning models.
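The EnKF analysis step that the abstract builds on can be sketched for a scalar state. This is a generic perturbed-observation EnKF update, not the paper's distributed rainfall-runoff model; the state, observation noise, and ensemble size are all synthetic.

```python
import random

def enkf_update(ensemble, obs, obs_err_std, h=lambda x: x):
    """One EnKF analysis step for a scalar state with observation operator h.
    Each member is nudged toward a perturbed observation using a Kalman gain
    estimated from the ensemble's own sample statistics."""
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    x_mean = sum(ensemble) / n
    hx_mean = sum(hx) / n
    # Sample covariances P_xh and P_hh (unbiased, divide by n-1).
    p_xh = sum((x - x_mean) * (y - hx_mean) for x, y in zip(ensemble, hx)) / (n - 1)
    p_hh = sum((y - hx_mean) ** 2 for y in hx) / (n - 1)
    gain = p_xh / (p_hh + obs_err_std ** 2)
    # Perturbed-observation update: each member sees its own noisy copy of obs.
    return [x + gain * (obs + random.gauss(0, obs_err_std) - y)
            for x, y in zip(ensemble, hx)]

random.seed(1)
true_state = 10.0                                  # e.g. a discharge value
ensemble = [random.gauss(4.0, 3.0) for _ in range(200)]  # biased, spread-out prior
for _ in range(5):                                 # assimilate five noisy observations
    obs = true_state + random.gauss(0, 0.5)
    ensemble = enkf_update(ensemble, obs, 0.5)
print("analysis mean:", round(sum(ensemble) / len(ensemble), 2))
```

The point the abstract makes is visible here: repeated assimilation pulls a biased, uncertain ensemble toward the truth while the ensemble spread quantifies the remaining uncertainty.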


Development of Deep Learning Based Ensemble Land Cover Segmentation Algorithm Using Drone Aerial Images (드론 항공영상을 이용한 딥러닝 기반 앙상블 토지 피복 분할 알고리즘 개발)

  • Hae-Gwang Park;Seung-Ki Baek;Seung Hyun Jeong
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.71-80 / 2024
  • In this study, an ensemble learning technique is proposed to enhance the semantic segmentation performance on images captured by Unmanned Aerial Vehicles (UAVs). With the increasing use of UAVs in fields such as urban planning, techniques that apply deep learning segmentation methods to land cover segmentation have been actively developed. The study suggests a method that utilizes prominent segmentation models, namely U-Net, DeepLabV3, and Fully Convolutional Network (FCN), to improve segmentation prediction performance. The proposed approach integrates the training loss, validation accuracy, and class scores of the three segmentation models to enhance overall prediction performance. The method was applied and evaluated on a land cover segmentation problem involving seven classes: buildings, roads, parking lots, fields, trees, empty spaces, and areas with unspecified labels, using images captured by UAVs. The performance of the ensemble model was evaluated by mean Intersection over Union (mIoU), and comparison with the three individual segmentation methods showed improved mIoU performance. Consequently, the study confirms that the proposed technique can enhance the performance of semantic segmentation models.
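The class-score integration described above can be illustrated by averaging per-pixel class-score maps from several models and taking the argmax. The toy "models" and the 12-pixel map below are fabricated, and the paper's full scheme also weighs training loss and validation accuracy, which this sketch omits.

```python
def argmax(v):
    return max(range(len(v)), key=lambda c: v[c])

def miou(pred, truth, n_classes):
    """Mean Intersection over Union over classes that occur in truth or pred."""
    ious = []
    for c in range(n_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

def ensemble_scores(score_maps):
    """Average per-pixel class-score vectors from several models."""
    n_models = len(score_maps)
    return [[sum(m[i][c] for m in score_maps) / n_models
             for c in range(len(score_maps[0][i]))]
            for i in range(len(score_maps[0]))]

# Toy 12-pixel map with 3 classes; each "model" errs on a disjoint third.
truth = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]

def model_scores(err_pixels):
    scores = []
    for i, t in enumerate(truth):
        v = [0.1, 0.1, 0.1]
        if i in err_pixels:
            v[(t + 1) % 3], v[t] = 0.6, 0.3   # confidently wrong on these pixels
        else:
            v[t] = 0.8                         # confidently right elsewhere
        scores.append(v)
    return scores

models = [model_scores(range(0, 4)), model_scores(range(4, 8)),
          model_scores(range(8, 12))]
preds = [[argmax(v) for v in m] for m in models]
ens_pred = [argmax(v) for v in ensemble_scores(models)]
print("individual mIoU:", [round(miou(p, truth, 3), 3) for p in preds])
print("ensemble   mIoU:", round(miou(ens_pred, truth, 3), 3))
```

Because the models' errors are disjoint, the averaged scores recover the correct label on every pixel, which is the mechanism by which score-level ensembling can lift mIoU above each member model.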

Fast Triangular Mesh Approximation for Terrain Data Using Wavelet Coefficients (Wavelet 변환 계수를 이용한 대용량 지형정보 데이터의 삼각형 메쉬근사에 관한 연구)

  • 유한주;이상지;나종범
    • Journal of Broadcast Engineering / v.2 no.1 / pp.65-73 / 1997
  • This paper proposes a new triangular mesh approximation method using wavelet coefficients for large terrain data. Using the spatio-frequency localization characteristics of wavelet coefficients, we determine the complexity of the terrain data and approximate the data according to that complexity. The proposed algorithm is simple and requires low computational cost due to its top-down approach. Because of the similarity between the mesh approximation and data compression procedures based on the wavelet transform, we combine the mesh approximation scheme with the Embedded Zerotree Wavelet (EZW) coding scheme for the effective management of large terrain data. Computer simulation results demonstrate that the proposed algorithm is very promising for the 3-D visualization of terrain data.
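The core idea, using wavelet detail coefficients as a local complexity measure that drives mesh density, can be shown in miniature with one 2x2 Haar step per block. The triangle budgets (2 vs. 8) and the single decomposition level are assumptions for illustration; the paper's actual top-down, EZW-coupled algorithm is more elaborate.

```python
def haar2x2(a, b, c, d):
    """One 2x2 Haar step: returns (average, horizontal, vertical, diagonal detail)."""
    return ((a + b + c + d) / 4, (a - b + c - d) / 4,
            (a + b - c - d) / 4, (a - b - c + d) / 4)

def triangle_budget(height, threshold):
    """Spend 2 triangles on smooth 2x2 blocks and 8 on detailed ones, using
    the magnitude of the Haar detail coefficients as the complexity measure."""
    n = len(height)
    total = 0
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            _, h, v, d = haar2x2(height[i][j], height[i][j + 1],
                                 height[i + 1][j], height[i + 1][j + 1])
            detail = abs(h) + abs(v) + abs(d)
            total += 8 if detail > threshold else 2
    return total

flat = [[5.0] * 8 for _ in range(8)]                                  # featureless plain
ridge = [[float(abs(j - 4)) * 3 for j in range(8)] for _ in range(8)]  # sharp valley
print("triangles(flat) :", triangle_budget(flat, 0.5))
print("triangles(ridge):", triangle_budget(ridge, 0.5))
```

Flat terrain produces zero detail coefficients and gets the minimum triangle count, while the ridge's large details trigger subdivision everywhere it crosses, which is exactly the complexity-adaptive behavior the abstract describes.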


Projective Reconstruction from Multiple Images using Matrix Decomposition Constraints (행렬 분해 제약을 사용한 다중 영상에서의 투영 복원)

  • Ahn, Ho-Young;Park, Jong-Seung
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.770-783 / 2012
  • In this paper, we propose a novel structure recovery algorithm in projective space using image feature points. We use normalized image feature coordinates for numerical stability. To acquire an initial value of the structure and motion, we decompose the scaled measurement matrix using the singular value decomposition. When recovering structure and motion in projective space, we introduce matrix decomposition constraints. In the reconstruction procedure, a nonlinear iterative optimization technique is used. Experimental results showed that the proposed method provides adequate accuracy with small error deviation.
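The coordinate normalization mentioned in the abstract is commonly done Hartley-style: translate points so their centroid is at the origin and scale so the mean distance from the origin is sqrt(2). Whether the paper uses exactly this variant is an assumption; the sketch below shows the standard form.

```python
import math

def normalize_points(points):
    """Hartley normalization of 2-D feature points: centroid to origin,
    mean distance from origin scaled to sqrt(2). Returns the normalized
    points and the 3x3 similarity transform T (homogeneous coordinates)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mean_dist = sum(math.hypot(x - cx, y - cy) for x, y in points) / n
    s = math.sqrt(2) / mean_dist
    T = [[s, 0.0, -s * cx],
         [0.0, s, -s * cy],
         [0.0, 0.0, 1.0]]
    normed = [((x - cx) * s, (y - cy) * s) for x, y in points]
    return normed, T

pts = [(120.0, 80.0), (640.0, 75.0), (400.0, 300.0), (90.0, 420.0)]
normed, T = normalize_points(pts)
ncx = sum(x for x, _ in normed) / len(normed)
ncy = sum(y for _, y in normed) / len(normed)
md = sum(math.hypot(x, y) for x, y in normed) / len(normed)
print("centroid:", (round(ncx, 6), round(ncy, 6)), "mean dist:", round(md, 6))
```

Conditioning the measurement matrix this way keeps pixel coordinates (often in the hundreds) from dominating the homogeneous 1s, which is why the decomposition step behaves numerically.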

An efficient VLSI Implementation of the 2-D DCT with the Algorithm Decomposition (알고리즘 분해를 이용한 2-D DCT)

  • Jeong, Jae-Gil
    • The Journal of Natural Sciences / v.7 / pp.27-35 / 1995
  • This paper introduces a VLSI (Very Large Scale Integrated circuit) implementation of the 2-D Discrete Cosine Transform (DCT) with an application to image and video coding. This implementation, which is based upon a state-space model, uses both algorithm and data partitioning to achieve high efficiency. With this implementation, the amount of data transferred between the processing elements (PEs) is reduced and all data transfers are limited to be local. The system accepts its input as a progressively scanned data stream, which reduces the hardware required for the input data control module. With proper ordering of computations, the matrix transposition between the two matrix-by-matrix multiplications, which is required in many 2-D DCT systems based upon a row-column decomposition, can also be removed. The new implementation scheme makes it feasible to implement a single 2-D DCT VLSI chip that can be easily expanded to a larger 2-D DCT by cascading these chips.
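The row-column decomposition referenced above, including the intermediate transpose that hardware designs like this one try to eliminate, can be sketched in plain Python (orthonormal DCT-II scaling assumed; the paper's state-space formulation is not reproduced here):

```python
import math

def dct1d(x):
    """DCT-II of a length-N sequence with orthonormal scaling."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def dct2d_row_column(block):
    """2-D DCT as 1-D DCTs over the rows, then over the columns of the result.
    The transpose between the two passes is what the hardware scheme removes."""
    rows = [dct1d(r) for r in block]
    cols = [dct1d(list(c)) for c in zip(*rows)]   # transpose, then DCT columns
    return [list(r) for r in zip(*cols)]          # transpose back

block = [[float((i * 8 + j) % 16) for j in range(8)] for i in range(8)]
coeffs = dct2d_row_column(block)
# For an 8x8 orthonormal DCT the DC coefficient equals the block sum / 8.
print("DC:", round(coeffs[0][0], 3), "sum/8:", round(sum(map(sum, block)) / 8, 3))
```

The separability shown here (N two-pass 1-D transforms instead of one N^2 2-D transform) is what makes the row-column structure attractive for partitioned PE arrays in the first place.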


Study on Regional Spatial Autocorrelation of Forest Fire Occurrence in Korea (우리나라 산불 발생의 지역별 공간자기상관성에 관한 연구)

  • Kim, Moon-Il;Kwak, Han-Bin;Lee, Woo-Kyun;Won, Myoung-Soo;Koo, Kyo-Sang
    • Journal of Korean Society for Geospatial Information Science / v.19 no.2 / pp.29-37 / 2011
  • Forest fires in Korea have been managed by local governments, so understanding the characteristics of regional forest fire occurrences is required for effective management. In this study, to analyze the patterns of regional forest fire occurrences, we divided South Korea into nine zones based on administrative boundaries and performed spatial statistical analysis using the locations of forest fire occurrences for 1991-2008. The spatial distributions of forest fires were analyzed with the variogram, and forest fire risk was predicted by kriging. As a result, forest fires in metropolitan areas showed strong spatial correlations, while spatial correlations were hard to find in provincial areas without large cities, such as Gangwon-do, Chungcheongbuk-do, and Jeju Island.
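A variogram analysis like the one above starts from the empirical semivariogram, gamma(h) = E[(z_i - z_j)^2] / 2 over point pairs separated by roughly lag h. The sketch below estimates it from synthetic point data; the actual fire-location data, lag bins, and variogram model fitted in the paper are not reproduced here.

```python
import math
import random

def semivariogram(points, values, lags, tol):
    """Empirical semivariance gamma(h): mean of (z_i - z_j)^2 / 2 over point
    pairs whose separation distance falls within tol of each lag h."""
    gamma = []
    for h in lags:
        diffs = []
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                d = math.dist(points[i], points[j])
                if abs(d - h) <= tol:
                    diffs.append((values[i] - values[j]) ** 2)
        gamma.append(sum(diffs) / (2 * len(diffs)) if diffs else None)
    return gamma

# Synthetic occurrence intensity with a smooth east-west trend: nearby sites
# are similar, so semivariance should grow with lag (spatial autocorrelation).
random.seed(2)
pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]
vals = [0.05 * x + random.gauss(0, 0.5) for x, _ in pts]
g = semivariogram(pts, vals, lags=[5, 20, 60], tol=5)
print("gamma at lags 5/20/60:", [round(v, 3) for v in g])
```

A rising gamma(h) curve signals spatial autocorrelation (the metropolitan case in the paper), whereas a flat, noise-dominated curve corresponds to the weakly correlated provincial zones.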

Trajectory Indexing for Efficient Processing of Range Queries (영역 질의의 효과적인 처리를 위한 궤적 인덱싱)

  • Cha, Chang-Il;Kim, Sang-Wook;Won, Jung-Im
    • The KIPS Transactions:PartD / v.16D no.4 / pp.487-496 / 2009
  • This paper addresses an indexing scheme capable of efficiently processing range queries in a large-scale trajectory database. After discussing the drawbacks of previous indexing schemes, we propose a new scheme that divides the temporal dimension into multiple time intervals and builds an index on the line segments of each interval. Additionally, a supplementary index is built for the line segments within each time interval. In contrast to previous schemes that store the index entirely on disk, this scheme can dramatically improve the performance of insert and search operations by keeping a main-memory index for the time interval whose segments belong to objects that are currently moving or have just completed their movements. Each time-interval index is built as follows: first, the extent of the spatial dimension is divided into multiple spatial cells, to which the line segments are assigned evenly. We use a 2D-tree to maintain information on those cells. Then, for each cell, an additional 3D R*-tree is created on the spatio-temporal space (x, y, t). Such a multi-level indexing strategy cures the shortcomings of the legacy schemes. Performance results obtained from intensive experiments show that our scheme enhances the performance of retrieval operations by 3 to 10 times, with much less storage space.
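The two-level idea, bucketing segments first by time interval and then by spatial cell, can be sketched with plain dictionaries. The paper uses a 2D-tree over the cells and a 3D R*-tree per cell; a hash of (time slot, cell) keys below is a simplified stand-in, and the segment format and parameters are invented for illustration.

```python
from collections import defaultdict

class TrajectoryIndex:
    """Toy multi-level trajectory index: segments are bucketed by time
    interval, then by spatial grid cell within each interval."""

    def __init__(self, t_interval, cell_size):
        self.t_interval = t_interval
        self.cell_size = cell_size
        self.buckets = defaultdict(list)   # (t_slot, cx, cy) -> [segments]

    def _keys(self, seg):
        """All (time slot, cell) buckets a segment's bounding box touches."""
        (x1, y1, t1), (x2, y2, t2) = seg
        for ts in range(int(t1 // self.t_interval), int(t2 // self.t_interval) + 1):
            for cx in range(int(min(x1, x2) // self.cell_size),
                            int(max(x1, x2) // self.cell_size) + 1):
                for cy in range(int(min(y1, y2) // self.cell_size),
                                int(max(y1, y2) // self.cell_size) + 1):
                    yield (ts, cx, cy)

    def insert(self, seg):
        for key in self._keys(seg):
            self.buckets[key].append(seg)

    def range_query(self, x_lo, x_hi, y_lo, y_hi, t_lo, t_hi):
        """Collect candidate segments overlapping the spatio-temporal box."""
        hits = set()
        for ts in range(int(t_lo // self.t_interval), int(t_hi // self.t_interval) + 1):
            for cx in range(int(x_lo // self.cell_size), int(x_hi // self.cell_size) + 1):
                for cy in range(int(y_lo // self.cell_size), int(y_hi // self.cell_size) + 1):
                    hits.update(self.buckets.get((ts, cx, cy), ()))
        return hits

idx = TrajectoryIndex(t_interval=10, cell_size=5)
a = ((1.0, 1.0, 0.0), (2.0, 2.0, 3.0))        # early segment near the origin
b = ((20.0, 20.0, 50.0), (21.0, 22.0, 55.0))  # late segment, far away
idx.insert(a)
idx.insert(b)
print(idx.range_query(0, 5, 0, 5, 0, 5))      # should contain only segment a
```

Because a query only touches the buckets its box overlaps, most segments are never examined, which is the pruning effect the interval-plus-cell decomposition is designed to deliver.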

Effects of LDPCA Frame Size for Parity Bit Estimation Methods in Fast Distributed Video Decoding Scheme (고속 분산 비디오 복호화 기법에서 패리티 비트 예측방식에 대한 LDPCA 프레임 크기 효과)

  • Kim, Man-Jae;Kim, Jin-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.8 / pp.1675-1685 / 2012
  • The DVC (Distributed Video Coding) technique plays an essential role in providing a low-complexity video encoder. However, to achieve better rate-distortion performance, most DVC systems need a feedback channel for parity bit control. This gives DVC-based systems high decoding latency and is one of the most critical problems to overcome for a real implementation. To overcome this problem and to accelerate the commercialization of DVC applications, this paper analyzes the effect of LDPCA frame size on adaptive LDPCA frame-based parity bit request estimation. First, this paper presents the LDPCA segmentation method in the pixel domain and explains the temporal-based and spatial-based bit request estimation methods, which exploit the statistical characteristics of adjacent LDPCA frames. Computer simulations show that better performance and faster decoding are observed, especially when the LDPCA frame size is 3168 at QCIF resolution.
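The motivation for temporal bit-request estimation can be illustrated with a toy decoder loop: instead of requesting parity in small increments over the feedback channel, the decoder predicts its need from recent LDPCA frames and asks for most of it up front. The 3-frame average, 1.05 margin, step size, and bit counts below are all invented for illustration and are not the paper's estimator.

```python
def temporal_estimate(history, margin=1.05):
    """Temporal parity-request estimate: scale the recent average upward so
    the first request usually already carries enough parity bits, trading a
    few extra bits for fewer feedback rounds. The margin is illustrative."""
    recent = history[-3:]
    return int(margin * sum(recent) / len(recent))

def decode_rounds(needed, first_request, step):
    """Number of request/decode rounds when parity is granted in 'step'-bit
    increments after an initial 'first_request'-bit grant."""
    rounds, granted = 1, first_request
    while granted < needed:
        granted += step
        rounds += 1
    return rounds

history = [900, 950, 920]   # parity bits used by the previous LDPCA frames
needed = 940                # parity bits this frame actually needs
naive_rounds = decode_rounds(needed, first_request=100, step=100)
est_rounds = decode_rounds(needed, temporal_estimate(history), step=100)
print("rounds without estimation:", naive_rounds)
print("rounds with temporal estimate:", est_rounds)
```

Each avoided round is one fewer feedback-channel round trip, which is where the decoding-latency reduction reported in the paper comes from.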