• Title/Summary/Keyword: Satellite Segmentation

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1413-1425
    • /
    • 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Wildfires destroy vegetation and alter ecosystems depending on their intensity and occurrence, and these ecosystem changes in turn influence subsequent wildfire occurrence, causing secondary damage. Accurate estimation of the areas affected by wildfires is therefore fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after a fire. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. Deep learning models for detecting wildfire-damaged areas were examined using Landsat 8 satellite images of California, and the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR), was compared and analyzed. Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and the normalized burn ratio (NBR) were used as additional input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. Including the spectral indices alongside the base wavelength bands increased the metric values for every combination, confirming that augmenting the input data with spectral indices refines the pixel-level classification. This study can be applied to other satellite images to build a recovery strategy for fire-burnt areas.
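
The NDVI and NBR channels mentioned in this abstract can be derived directly from Landsat 8 reflectance bands. Below is a minimal NumPy sketch, assuming band 4 (red), band 5 (NIR), and band 7 (SWIR2) reflectance arrays; the exact band selection and channel order used by the authors are not stated in the abstract.

```python
import numpy as np

def spectral_index(a: np.ndarray, b: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Generic normalized-difference index (a - b) / (a + b)."""
    return (a - b) / (a + b + eps)

def build_input_stack(red, nir, swir2, base_bands):
    """Append NDVI and NBR channels to the base reflectance bands.

    red, nir, swir2 : 2D reflectance arrays (e.g., Landsat 8 B4, B5, B7)
    base_bands      : list of 2D arrays used as the base input channels
    """
    ndvi = spectral_index(nir, red)    # vegetation cover
    nbr = spectral_index(nir, swir2)   # burn severity / surface moisture
    return np.stack(list(base_bands) + [ndvi, nbr], axis=0)  # (C, H, W) for a CNN
```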

Mapping Man-Made Levee Line Using LiDAR Data and Aerial Orthoimage (라이다 데이터와 항공 정사영상을 활용한 인공 제방선 지도화)

  • Choung, Yun-Jae;Park, Hyen-Cheol;Chung, Youn-In;Jo, Myung-Hee
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.14 no.1
    • /
    • pp.84-93
    • /
    • 2011
  • Levee line mapping is critical to protecting river-zone environments, preventing river floods, and developing river zones. Remote sensing data such as LiDAR and aerial orthoimages are efficient for river mapping because of their accessibility and high horizontal and vertical accuracy. Airborne laser scanning (LiDAR) has been used for river-zone mapping owing to its ability to penetrate shallow water and its high vertical accuracy, while aerial orthoimages have been used because their image content supports feature extraction and they offer high horizontal accuracy. Taking advantage of these characteristics, this paper implements three-dimensional levee line mapping using LiDAR data and an aerial orthoimage separately. The accuracy of the levee lines extracted from each data source is measured against ground truth, and the two sets of measurements are compared statistically. The statistical results show that the 3D levee line generated from LiDAR data is more accurate, in both the horizontal and vertical directions, than the line generated from the aerial orthoimage.
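
The accuracy comparison described here reduces to measuring horizontal and vertical offsets between extracted levee-line vertices and surveyed ground-truth points. A minimal sketch of such a comparison follows; nearest-neighbor point matching and RMSE as the statistic are assumptions, since the abstract does not state the exact procedure.

```python
import numpy as np

def rmse_horizontal_vertical(extracted: np.ndarray, truth: np.ndarray):
    """Compare extracted 3D levee-line points against ground-truth points.

    extracted, truth : (N, 3) and (M, 3) arrays of (x, y, z) coordinates
    Returns (horizontal RMSE, vertical RMSE), matching each extracted
    point to its nearest ground-truth point in the horizontal plane.
    """
    # nearest ground-truth point for each extracted point (horizontal distance)
    d_xy = np.linalg.norm(extracted[:, None, :2] - truth[None, :, :2], axis=2)
    nearest = d_xy.argmin(axis=1)
    dh = d_xy[np.arange(len(extracted)), nearest]
    dz = extracted[:, 2] - truth[nearest, 2]
    return np.sqrt(np.mean(dh ** 2)), np.sqrt(np.mean(dz ** 2))
```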

Assessment of the Inundation Area and Volume of Tonle Sap Lake using Remote Sensing and GIS (원격탐사와 GIS를 이용한 Tonle Sap호의 홍수량 평가)

  • Chae, Hyosok
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.8 no.3
    • /
    • pp.96-106
    • /
    • 2005
  • Remote sensing and GIS techniques, which provide valuable information in both time and space, are well suited to producing permanent records by mapping and monitoring flooded areas. In 2000, floods in Tonle Sap Lake, Mekong River Basin, reached their worst stage of devastation, for only the second time on record, during July and October. In this study, Landsat ETM+ and RADARSAT imagery were used to obtain the basic information for computing the inundation area and volume with an ISODATA classifier and a segmentation technique. However, the extracted inundation area covered only a fraction of the actually inundated area because of clouds in the imagery and complex ground conditions. To overcome these limitations, the GIS cost-distance method was used to estimate the inundated area at the peak water level by integrating the inundated area from the satellite imagery with a digital elevation model (DEM). The estimated inundation area was then converted to an inundation volume using GIS, and this volume was compared with the volume based on hydraulic modeling with MIKE 11, one of the most popular dynamic river modeling systems. The method is suitable for estimating inundation volume even when the Landsat ETM+ imagery contains many clouds.
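
Converting an inundation extent at a peak water level into a volume with a DEM, as described above, amounts to integrating water depth over the inundated cells. A minimal sketch, assuming a uniform peak water-surface elevation and a regular DEM grid (the authors' actual GIS workflow is more involved), is:

```python
import numpy as np

def inundation_area_and_volume(dem: np.ndarray, peak_level: float, cell_size: float):
    """Estimate inundation area (m^2) and volume (m^3) from a DEM.

    dem        : 2D array of ground elevations (m)
    peak_level : peak water-surface elevation (m), assumed uniform
    cell_size  : DEM cell size (m)
    """
    depth = peak_level - dem
    wet = depth > 0.0                        # cells below the peak water level
    area = wet.sum() * cell_size ** 2
    volume = depth[wet].sum() * cell_size ** 2
    return area, volume
```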

A Study on the Deep Neural Network based Recognition Model for Space Debris Vision Tracking System (심층신경망 기반 우주파편 영상 추적시스템 인식모델에 대한 연구)

  • Lim, Seongmin;Kim, Jin-Hyung;Choi, Won-Sub;Kim, Hae-Dong
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.9
    • /
    • pp.794-806
    • /
    • 2017
  • As a space-faring nation, it is essential to protect national space assets and the space environment from the continuously increasing amount of space debris, and Active Debris Removal (ADR) is the most direct way to address this problem. In this paper, we studied an artificial neural network (ANN) as a stable recognition model for a vision-based space debris tracking system. Simulated images of the space environment were obtained with KARICAT, the ground-based space debris removal satellite testbed developed by the Korea Aerospace Research Institute. After segmenting each image by depth discontinuity, we created a feature vector encoding the structure- and color-based features of each object, consisting of 3D surface area, the principal vector of the point cloud, 2D shape, and color information. An artificial neural network model was designed on top of these separated feature vectors; to improve performance, the model was divided according to the categories of the input feature vectors and an ensemble technique was applied to each model. As a result, we confirmed that the ensemble technique improves the performance of the recognition model.
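
The ensemble over category-specific feature vectors described above can be illustrated with a simple soft-voting scheme: one small classifier per feature group, with averaged class probabilities. The sketch below uses scikit-learn MLPs purely as a stand-in; the paper's actual network architecture and feature grouping are not specified in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

class FeatureGroupEnsemble:
    """Soft-voting ensemble: one MLP per feature group (e.g., 3D shape, 2D shape, color)."""

    def __init__(self, n_groups: int):
        self.models = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
                       for _ in range(n_groups)]

    def fit(self, feature_groups, y):
        # feature_groups: list of (N, d_i) arrays, one per feature category
        for model, X in zip(self.models, feature_groups):
            model.fit(X, y)
        return self

    def predict(self, feature_groups):
        # average the per-group class probabilities, then take the argmax
        probs = np.mean([m.predict_proba(X)
                         for m, X in zip(self.models, feature_groups)], axis=0)
        return self.models[0].classes_[probs.argmax(axis=1)]
```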

Stereo Matching For Satellite Images using The Classified Terrain Information (지형식별정보를 이용한 입체위성영상매칭)

  • Bang, Soo-Nam;Cho, Bong-Whan
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.4 no.1 s.6
    • /
    • pp.93-102
    • /
    • 1996
  • For automatic generation of a DEM (Digital Elevation Model) by computer, determining adequate matches from stereo images is time-consuming. Correlation over evenly distributed, area-based windows is generally used for the matching operation. In this paper, we propose a new approach that computes matches efficiently by changing the size of the mask window and search area according to the given terrain information. For image segmentation, an edge-preserving smoothing filter is first applied as preprocessing, and then a region-growing algorithm is applied to the filtered images. The segmented regions are classified into mountain, plain, and water areas using an MRF (Markov Random Field) model. Matching consists of parallax prediction and fine matching: the predicted parallax determines the location of the search area in the fine matching stage, and the sizes of the search area and mask window are determined from the terrain information for each pixel. The execution time of matching is reduced by shrinking the search area over plains and water. For the experiments, four images, each covering 10 km × 10 km (1024 × 1024 pixels) of the Taejeon-Kumsan area, were studied. The results show that the computing time of the proposed terrain-aware matching can be reduced by 25% to 35%.
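
The core idea, choosing the correlation window and search-area sizes per pixel from its terrain class, can be sketched as below. The class-to-size mapping and the normalized cross-correlation matcher are illustrative assumptions; the paper's actual parameters are not given in the abstract.

```python
import numpy as np

# assumed per-class (half window size, half search range) in pixels
TERRAIN_PARAMS = {"mountain": (7, 20), "plain": (5, 8), "water": (3, 4)}

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def match_pixel(left, right, row, col, terrain_class, predicted_dx=0):
    """Find the disparity of (row, col) in `left` along the same row of `right`."""
    w, s = TERRAIN_PARAMS[terrain_class]
    patch = left[row - w:row + w + 1, col - w:col + w + 1]
    best_dx, best_score = predicted_dx, -1.0
    # search only around the predicted parallax, within the class-dependent range
    for dx in range(predicted_dx - s, predicted_dx + s + 1):
        c = col + dx
        cand = right[row - w:row + w + 1, c - w:c + w + 1]
        if cand.shape != patch.shape:
            continue
        score = ncc(patch, cand)
        if score > best_score:
            best_dx, best_score = dx, score
    return best_dx, best_score
```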

Identification of shear layer at river confluence using (RGB) aerial imagery (RGB 항공 영상을 이용한 하천 합류부 전단층 추출법)

  • Noh, Hyoseob;Park, Yong Sung
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.8
    • /
    • pp.553-566
    • /
    • 2021
  • A river confluence is often characterized by a shear layer and the associated strong mixing. In natural rivers, the main channel and its tributary can often be distinguished across the shear layer by their contrasting colors, so the shear layer can be observed in aerial images from satellites or unmanned aerial vehicles. This study proposes a low-cost method for extracting geometric features of the shear layer from RGB aerial images. The method consists of three stages. First, to identify the shear layer, the image is segmented with a Gaussian mixture model and the water bodies of the main channel and tributary are extracted. Next, a self-organizing map simplifies the flow lines of the water bodies into a one-dimensional curvilinear grid. Finally, a curvilinear coordinate transformation is performed using the water-body pixels and the curve grid. The method was successfully applied to the confluence of the Nakdong River and the Nam River, extracting geometric shear layer features (confluence angle, upstream and downstream channel widths, shear layer length, and maximum shear layer thickness).
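
The first stage, pixel-wise segmentation of an RGB image with a Gaussian mixture model, can be sketched with scikit-learn as follows. The number of components, and which components correspond to the two water bodies, are assumptions, since the abstract does not give them.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_rgb(image: np.ndarray, n_components: int = 4, seed: int = 0) -> np.ndarray:
    """Cluster RGB pixels with a Gaussian mixture model.

    image : (H, W, 3) RGB array
    Returns an (H, W) label map; which labels correspond to the main channel
    and the tributary must then be chosen (e.g., by mean color or location).
    """
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(pixels)
    return gmm.predict(pixels).reshape(h, w)
```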

The Application Methods of FarmMap Reading in Agricultural Land Using Deep Learning (딥러닝을 이용한 농경지 팜맵 판독 적용 방안)

  • Wee Seong Seung;Jung Nam Su;Lee Won Suk;Shin Yong Tae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.2
    • /
    • pp.77-82
    • /
    • 2023
  • The Ministry of Agriculture, Food and Rural Affairs established the FarmMap, a digital map of agricultural land. In this study, we propose applying deep learning to FarmMap attribute reading for farmland classes such as paddy fields, fields, ginseng, fruit trees, facilities, and uncultivated land. The FarmMap digitizes real-world agricultural land from aerial and satellite images and is used as spatial information for planting status and drone operation. A reading manual is prepared and updated every year for demarcating the boundaries of agricultural land and reading its attributes. Human reading varies with the reader's ability and experience, and reading errors are difficult to verify in practice because of budget limitations. Because the FarmMap provides the location and class of each object in the image for the five farmland attribute types, an instance segmentation model based on ResNet50 was tested as the AI technique, and the attribute readings produced by deep learning were compared with those produced by humans. If the technology is further developed with a focus on the attributes where the two readings differ, it is expected to play a large role in reducing attribute errors and improving the accuracy of the digital map of agricultural land.
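
Comparing model-read attributes against human-read attributes, as described in this abstract, is essentially a per-class agreement analysis. A minimal sketch, assuming both readings are available as parcel-level label lists and using hypothetical class names (the abstract does not specify the evaluation protocol), is:

```python
from collections import Counter

# hypothetical attribute labels for illustration
CLASSES = ["paddy", "field", "ginseng", "fruit_tree", "facility", "uncultivated"]

def agreement_by_class(human: list[str], model: list[str]) -> dict[str, float]:
    """Per-class agreement rate between human and deep learning attribute readings."""
    totals, matches = Counter(human), Counter()
    for h, m in zip(human, model):
        if h == m:
            matches[h] += 1
    return {c: matches[c] / totals[c] for c in CLASSES if totals[c]}
```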

Detection of Plastic Greenhouses by Using Deep Learning Model for Aerial Orthoimages (딥러닝 모델을 이용한 항공정사영상의 비닐하우스 탐지)

  • Byunghyun Yoon;Seonkyeong Seong;Jaewan Choi
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.183-192
    • /
    • 2023
  • Remotely sensed data, such as satellite imagery and aerial photos, can be used to extract and detect objects through image interpretation and processing techniques. In particular, as the spatial resolution of remotely sensed data has improved and deep learning technologies have developed, the potential of automatic object detection for digital map updating and land monitoring has increased. In this paper, we extract plastic greenhouses from aerial orthophotos using the fully convolutional densely connected convolutional network (FC-DenseNet), one of the representative deep learning models for semantic segmentation, and then perform a quantitative analysis of the extraction results. Using the farm map of the Ministry of Agriculture, Food and Rural Affairs in Korea, training data were generated by labeling plastic greenhouses in the Damyang and Miryang areas, and FC-DenseNet was trained on this dataset. To apply the deep learning model to remotely sensed imagery, instance normalization, which can preserve the spectral characteristics of the bands, was used, and optimal weights for each band were determined by adding attention modules to the model. The experiments show that the deep learning model can extract plastic greenhouses, and the results can be applied to updating the digital Farm-map and land cover maps.
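
The band-level normalization and attention described here can be sketched as a small PyTorch module: instance normalization applied per channel, followed by a learned per-band weight (a squeeze-and-excitation-style gate). This structure is an illustrative assumption; the paper's actual attention module and where it sits inside FC-DenseNet are not given in the abstract.

```python
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    """Instance-normalize the input bands, then reweight each band with a learned gate."""

    def __init__(self, n_bands: int, reduction: int = 2):
        super().__init__()
        self.norm = nn.InstanceNorm2d(n_bands, affine=True)   # per-band normalization
        hidden = max(n_bands // reduction, 1)
        self.gate = nn.Sequential(                             # squeeze-and-excitation-style weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(n_bands, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, n_bands, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm(x)
        return x * self.gate(x)   # per-band weights broadcast over H and W

# usage: prepend to a segmentation backbone, e.g. BandAttention(n_bands=4) for R, G, B, NIR
```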