• Title/Summary/Keyword: Grid-based maps


The history of high intensity rainfall estimation methods in New Zealand and the latest High Intensity Rainfall Design System (HIRDS.V3)

  • Horrell, Graeme;Pearson, Charles
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2011.05a
    • /
    • pp.16-16
    • /
    • 2011
  • Statistics of extreme rainfall play a vital role in engineering practice from the perspective of mitigating and protecting infrastructure and human life from flooding. While flood frequency assessments based on river flood flow data are preferred, the analysis of rainfall data is often more convenient due to the finer spatial nature of rainfall recording networks, which often have longer records and are potentially more easily transferable from site to site. Rainfall frequency analysis as a design tool has developed over the years in New Zealand, from Seelye's daily rainfall frequency maps in 1947 to Thompson's web-based tool in 2010. This paper will present a history of the development of New Zealand rainfall frequency analysis methods, and the details of the latest method, so that comparisons may in future be made with the development of Korean methods. One of the main findings in the development of methods was new knowledge on the distribution of New Zealand rainfall extremes. The High Intensity Rainfall Design System (HIRDS.V3) method (Thompson, 2011) is based upon a regional rainfall frequency analysis with the following assumptions: • an "index flood" regional rainfall frequency method, using the median annual maximum rainfall as the indexing variable; • a regional dimensionless growth curve based on the Generalised Extreme Value (GEV) distribution, using goodness-of-fit tests for the GEV, Gumbel (EV1), and Generalised Logistic (GLO) distributions; • mapping of the median annual maximum rainfall and the parameters of the regional growth curves, using thin-plate smoothing splines, a 2 km × 2 km grid, L-moment statistics, 10 durations from 10 minutes to 72 hours, and a maximum Average Recurrence Interval of 100 years.
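The index-flood method summarised above can be sketched in a few lines: the design rainfall at a site is the site's median annual maximum scaled by a regional dimensionless growth factor, here derived from the GEV quantile function. This is an illustrative sketch, not the HIRDS.V3 implementation, and the parameter values shown are hypothetical.

```python
import math

def gev_quantile(T, mu, sigma, xi):
    """GEV quantile for average recurrence interval T (years):
    x(T) = mu + (sigma / xi) * (1 - (-ln(1 - 1/T))**xi),
    with the Gumbel (EV1) limit mu - sigma * ln(-ln(1 - 1/T)) as xi -> 0."""
    y = -math.log(1.0 - 1.0 / T)  # -ln(non-exceedance probability)
    if abs(xi) < 1e-9:            # EV1 (Gumbel) limit
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (1.0 - y ** xi)

def growth_factor(T, mu, sigma, xi):
    """Dimensionless growth curve, normalised so the factor equals 1
    at the median annual maximum (an ARI of 2 years)."""
    return gev_quantile(T, mu, sigma, xi) / gev_quantile(2.0, mu, sigma, xi)

def design_rainfall(median_annual_max_mm, T, mu, sigma, xi):
    """Index-flood estimate: site median annual maximum times the
    regional growth factor for the requested ARI."""
    return median_annual_max_mm * growth_factor(T, mu, sigma, xi)
```

With hypothetical regional parameters (mu=1.0, sigma=0.35, xi=-0.1), a 100-year design depth for a site whose median 1-hour annual maximum is 40 mm would be `design_rainfall(40.0, 100, 1.0, 0.35, -0.1)`.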


Memory Propagation-based Target-aware Segmentation Tracker with Adaptive Mask-attention Decision Network

  • Huanlong Zhang;Weiqiang Fu;Bin Zhou;Keyan Zhou;Xiangbo Yang;Shanfeng Liu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.9
    • /
    • pp.2605-2625
    • /
    • 2024
  • Siamese-based segmentation and tracking algorithms improve accuracy and stability for video object segmentation and tracking tasks simultaneously. Although effective, variability in target appearance and background clutter can still affect segmentation accuracy and further influence the performance of tracking. In this paper, we present a memory propagation-based target-aware and mask-attention decision network for robust object segmentation and tracking. Firstly, a mask propagation-based attention module (MPAM) is constructed to explore the inherent correlation among image frames, which can mine mask information from the historical frames. By retrieving a memory bank (MB) that stores features and binary masks of historical frames, target attention maps are generated to highlight the target region on backbone features, thus suppressing the adverse effects of background clutter. Secondly, an attention refinement pathway (ARP) is designed to further refine the segmentation profile in the process of mask generation. A lightweight attention mechanism is introduced to calculate the weights of low-level features, paying more attention to the low-level features sensitive to edge detail so as to obtain better segmentation results. Finally, a mask fusion mechanism (MFM) is proposed to enhance the accuracy of the mask. By utilizing a mask quality assessment decision network, the corresponding quality scores of the "initial mask" and the "previous mask" can be obtained adaptively, thus achieving the assignment of weights and the fusion of masks. Therefore, the final mask enjoys higher accuracy and stability. Experimental results on multiple benchmarks demonstrate that our algorithm achieves outstanding performance on a variety of challenging tracking tasks.

GIS-based Disaster Management System for a Private Insurance Company in Case of Typhoons(I) (지리정보기반의 재해 관리시스템 구축(I) -민간 보험사의 사례, 태풍의 경우-)

  • Chang Eun-Mi
    • Journal of the Korean Geographical Society
    • /
    • v.41 no.1 s.112
    • /
    • pp.106-120
    • /
    • 2006
  • Natural or man-made disaster has been expected to be one of the potential themes that can integrate human geography and physical geography. Typhoons like Rusa and Maemi caused great loss to insurance companies as well as the public sector. We have implemented a natural disaster management system for a private insurance company to produce better estimates of high-wind hazard and to calculate vulnerability to damage. Climatic gauge sites and the addresses of insured objects under contract were geo-coded, and the pressure values along all the typhoon tracks were vectorized into line objects. National GIS topographic maps at a scale of 1:5,000 were updated into base maps, and a digital elevation model with 30 m spacing and land cover maps were used to reflect land roughness in the wind velocity model. All the data were converted to a grid coverage of 1 km × 1 km. The vulnerability curve of Munich Re was adopted, and a preprocessor and postprocessor for the wind velocity model were implemented. Overlapping the locations of contracts on the grid value coverage shows the relative risk under a given scenario. The wind velocities calculated by the model were compared with observed values (average $R^2=0.68$). The wind speed models were calibrated by dropping two climatic gauges, which enhanced the $R^2$ values. A comparison of the calculated loss with the actual historical loss of the insurance company showed both underestimation and overestimation. This system enables the company to have quantitative data for optimizing its re-insurance ratio, to plan the allocation of enterprise resources, and to upgrade its international creditability. A flood model, storm surge model, and flash flood model are being added; finally, a combined disaster vulnerability will be calculated for a total disaster management system.

Application into Assessment of Liquefaction Hazard and Geotechnical Vulnerability During Earthquake with High-Precision Spatial-Ground Model for a City Development Area (도시개발 영역 고정밀 공간지반모델의 지진 시 액상화 재해 및 지반 취약성 평가 활용)

  • Kim, Han-Saem;Sun, Chang-Guk;Ha, Ik-Soo
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.27 no.5
    • /
    • pp.221-230
    • /
    • 2023
  • This study proposes a methodology for assessing seismic liquefaction hazard by implementing high-resolution three-dimensional (3D) ground models with high-density/high-precision site investigation data acquired in an area of interest, which would be linked to geotechnical numerical analysis tools. It is possible to estimate the vulnerability to earthquake-induced geotechnical phenomena (ground motion amplification, liquefaction, landslide, etc.) and the complex disasters they trigger across an urban development area with several stages of high-density datasets. In this study, the spatial-ground models for city development were built on a 3D high-precision grid of 5 m × 5 m × 1 m by applying geostatistical methods. After comparing the prediction errors, the geotechnical model from Gaussian sequential simulation was selected to assess earthquake-induced geotechnical hazards. In particular, with seven independent input earthquake motions, liquefaction analysis with finite element analyses and hazard mapping with LPI and LSN are performed reliably based on the spatial geotechnical models in the study area. Furthermore, various phenomena and parameters, including settlement in the city planning area, are assessed in terms of geotechnical vulnerability, also based on the high-resolution spatial-ground modeling. This case study of high-precision 3D ground model-based zonation in the area of interest verifies its usefulness for spatially assessing earthquake-induced hazards and geotechnical vulnerability and for supporting related decision-making.

Estimation of Forest Carbon Stock in South Korea Using Machine Learning with High-Resolution Remote Sensing Data (고해상도 원격탐사 자료와 기계학습을 이용한 한국 산림의 탄소 저장량 산정)

  • Jaewon Shin;Sujong Jeong;Dongyeong Chang
    • Atmosphere
    • /
    • v.33 no.1
    • /
    • pp.61-72
    • /
    • 2023
  • Accurate estimation of forest carbon stocks is important in establishing greenhouse gas reduction plans. In this study, we estimate the spatial distribution of forest carbon stocks using machine learning techniques based on high-resolution remote sensing data and detailed field survey data. The high-resolution remote sensing data used in this study are Landsat indices (EVI, NDVI, NDII) for monitoring vegetation vitality and Shuttle Radar Topography Mission (SRTM) data for describing topography. We also used the forest growing stock data from the National Forest Inventory (NFI) for estimating forest biomass. Based on these data, we built a model using machine learning methods, optimized for Korean forest types, to calculate the forest carbon stock per grid unit. With the newly developed estimation model, we created forest carbon stock maps and estimated the forest carbon stocks in South Korea. As a result, the forest carbon stock in South Korea was estimated to be 432,214,520 tC in 2020. Furthermore, we estimated the loss of forest carbon stocks due to the Donghae-Uljin forest fire in 2022 using the forest carbon stock map from this study. The forest area destroyed around the fire was estimated to be about 24,835 ha, and the loss of forest carbon stocks was estimated to be 1,396,457 tC. Our model serves as a tool to estimate spatially distributed local forest carbon stocks and facilitates accounting of real-time changes in the carbon balance as well as managing the LULUCF part of greenhouse gas inventories.

Visible Height Based Occlusion Area Detection in True Orthophoto Generation (엄밀 정사영상 제작을 위한 가시고도 기반의 폐색영역 탐지)

  • Youn, Junhee;Kim, Gi Hong
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.3D
    • /
    • pp.417-422
    • /
    • 2008
  • With standard orthorectification algorithms, one can produce unacceptable structure duplication in the orthophoto due to double projection. Because of abrupt height differences, such structure duplication is a frequently occurring phenomenon in dense urban areas with multi-story buildings. Therefore, occlusion area detection, especially in urban areas, is a critical issue in true orthophoto generation. This paper deals with occlusion area detection using a visible-height-based approach from aerial imagery and LiDAR. To accomplish this, a grid-format DSM is produced from the LiDAR point clouds. Next, a visible-height-based algorithm is proposed to detect the occlusion area for each camera exposure station with the DSM. Finally, generation of the true orthophoto is presented with the DSM and the previously produced occlusion maps. The proposed algorithms are applied to the Purdue University campus, Indiana, USA.
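The occlusion test behind such approaches can be illustrated with a one-dimensional line-of-sight check along a DSM profile: a target cell is occluded when some intermediate cell subtends a larger elevation angle from the exposure station than the target itself. This is a simplified sketch of the general principle, not the paper's exact visible-height algorithm; the profile heights below are hypothetical.

```python
def is_visible(heights, cam_h, cam_i, target_i):
    """Line-of-sight test along a 1-D DSM profile (unit cell spacing).

    heights: DSM cell heights along the ray from the camera column cam_i
    toward target_i; cam_h: camera height at cam_i. The target cell is
    visible when no intermediate cell subtends a larger elevation angle
    (compared here via slope) from the exposure station than the target.
    """
    step = 1 if target_i > cam_i else -1
    target_slope = (heights[target_i] - cam_h) / abs(target_i - cam_i)
    for j in range(cam_i + step, target_i, step):
        if (heights[j] - cam_h) / abs(j - cam_i) >= target_slope:
            return False  # an intermediate cell blocks the line of sight
    return True
```

In a 2-D DSM the same test would run along rays resampled from each exposure station, marking blocked cells in an occlusion map.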

Evaluation of Grid-Based ROI Extraction Method Using a Seamless Digital Map (연속수치지형도를 활용한 격자기준 관심 지역 추출기법의 평가)

  • Jeong, Jong-Chul
    • Journal of Cadastre & Land InformatiX
    • /
    • v.49 no.1
    • /
    • pp.103-112
    • /
    • 2019
  • Extraction of regions of interest for satellite image classification is one of the important techniques for efficient management of the national land space. However, recent studies on satellite image classification often depend on information from the selected image itself when choosing the region of interest. This study proposes an effective method of selecting the region of interest using a seamless digital topographic map constructed from high-resolution images. The spatial information used in this research is based on the digital topographic maps from 2013 to 2017 provided by the National Geographic Information Institute and the 2015 Sejong City land cover map provided by the Ministry of Environment. To verify the accuracy of the extracted regions of interest, KOMPSAT-3A satellite images taken on October 28, 2018 and July 7, 2018 were used. The baseline samples for 2015 were extracted using the unchanged areas of the 2013-2015 continuous digital topographic maps and the 2015 land cover map; the baseline samples for 2018 were likewise extracted using the unchanged areas of the 2015-2017 continuous digital topographic maps and the 2015 land cover map. Redundant areas that occurred when merging the continuous digital topographic maps and land cover maps were removed to prevent confusion in the data. Finally, checkpoints were generated within the regions of interest, and the error matrices for the regions of interest extracted from the KOMPSAT-3A satellite images give accuracies of approximately 93% for 2015 and 72% for 2018. Accurately classified regions can be used as regions of interest, and the misclassified regions can serve as a reference for change detection.
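The reported accuracies come from error (confusion) matrices over checkpoints; overall accuracy is simply the diagonal (correctly classified checkpoints) divided by the total count. A minimal sketch with hypothetical checkpoint counts:

```python
def overall_accuracy(error_matrix):
    """Overall accuracy from an error (confusion) matrix:
    correctly classified checkpoints (the diagonal) over all checkpoints."""
    correct = sum(error_matrix[i][i] for i in range(len(error_matrix)))
    total = sum(sum(row) for row in error_matrix)
    return correct / total

# Hypothetical checkpoint counts (rows: reference class, columns: mapped class)
matrix = [[45, 3],
          [4, 48]]
acc = overall_accuracy(matrix)  # 93 of 100 checkpoints correct -> 0.93
```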

An improved Bellman-Ford algorithm based on SPFA (SPFA를 기반으로 개선된 벨만-포드 알고리듬)

  • Chen, Hao;Suh, Hee-Jong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.7 no.4
    • /
    • pp.721-726
    • /
    • 2012
  • In this paper, we propose an efficient algorithm based on SPFA (shortest path faster algorithm), an improvement of the Bellman-Ford algorithm. Unlike Dijkstra's algorithm, the Bellman-Ford algorithm can be used on graphs with negative edge weights; however, it takes a long time to update the node table, so the SPFA algorithm uses a queue to store nodes and avoid redundant relaxations. In the improved algorithm, an adjacency list is used to store each vertex of the graph, applying a dynamic optimization approach, and a queue is used to store the data. The improved algorithm finds the optimal path by repeatedly relaxing the edges of newly updated nodes. Simulations were performed to compare the efficiency of Dijkstra's algorithm, the SPFA algorithm, and the improved Bellman-Ford algorithm. The results show that Dijkstra's algorithm and the SPFA algorithm have almost the same efficiency on random graphs, where the improved algorithm offers no advantage, but on grid maps the proposed algorithm is very efficient, reducing processing time by about two-thirds compared with the SPFA algorithm.
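The SPFA idea described above can be sketched directly: keep a queue of vertices whose distance estimates changed and re-relax only their outgoing edges, with an in-queue flag to avoid duplicate entries. A minimal sketch over an adjacency list (graph data hypothetical; assumes no negative cycle is reachable from the source):

```python
from collections import deque

def spfa(adj, source):
    """Single-source shortest paths via SPFA (queue-based Bellman-Ford).

    adj: {u: [(v, w), ...]} adjacency list; negative edge weights are
    allowed, but no negative cycle may be reachable from the source.
    Returns a dict of shortest distances from source.
    """
    INF = float("inf")
    dist = {u: INF for u in adj}
    dist[source] = 0
    in_queue = {u: False for u in adj}
    q = deque([source])
    in_queue[source] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:   # relax edge (u, v)
                dist[v] = dist[u] + w
                if not in_queue[v]:     # enqueue only vertices not already pending
                    q.append(v)
                    in_queue[v] = True
    return dist
```

On a grid map, `adj` would hold the 4- or 8-neighbourhood of each cell; the queue keeps the relaxation frontier small, which is where the speedup over plain Bellman-Ford comes from.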

Mapping the East African Ionosphere Using Ground-based GPS TEC Measurements

  • Mengist, Chalachew Kindie;Kim, Yong Ha;Yeshita, Baylie Damtie;Workayehu, Abyiot Bires
    • Journal of Astronomy and Space Sciences
    • /
    • v.33 no.1
    • /
    • pp.29-36
    • /
    • 2016
  • The East African ionosphere (3°S-18°N, 32°E-50°E) was mapped using Total Electron Content (TEC) measurements from ground-based GPS receivers situated at Asmara, Mekelle, Bahir Dar, Robe, Arbaminch, and Nairobi. Assuming a thin shell ionosphere at 350 km altitude, we project the Ionospheric Pierce Point (IPP) of a slant TEC measurement with an elevation angle of >10° to its corresponding location on the map. We then infer the estimated values at any point of interest from the vertical TEC values at the projected locations by means of interpolation. The total number of projected IPPs is in the range of 24-66 at any one time. Since the projected IPPs are irregularly spaced, we used an inverse distance weighted interpolation method to obtain a spatial grid resolution of 1° × 1° in latitude and longitude. The TEC maps were generated for the year 2008, with a 2 hr temporal resolution. We note that TEC varies diurnally, with a peak in the late afternoon (at 1700 LT), due to the equatorial ionospheric anomaly. We have observed higher TEC values at low latitudes in both hemispheres compared to the magnetic equatorial region, capturing the ionospheric distribution of the equatorial anomaly. We have also confirmed the equatorial seasonal variation in the ionosphere, characterized by minimum TEC values during the solstices and maximum values during the equinoxes. We evaluate the reliability of the map, demonstrating a mean error (difference between the measured and interpolated values) range of 0.04-0.2 TECU (Total Electron Content Units). As more measured TEC values become available in this region, the TEC map will become more reliable, thereby allowing us to study in detail the equatorial ionosphere of the African sector, where ionospheric measurements are currently sparse.
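The inverse distance weighting step can be sketched as follows: each grid node takes the distance-weighted average of the vertical TEC values at the projected pierce points, with weights proportional to an inverse power of distance. This is a simplified flat-coordinate sketch (a real implementation would account for spherical geometry), and the point values are hypothetical.

```python
import math

def idw(points, query, power=2):
    """Inverse distance weighted interpolation of scattered values.

    points: [(lon, lat, value), ...] at projected pierce points;
    query: (lon, lat) of a grid node. An exact hit returns the
    measured value directly; otherwise the weight of each point is
    distance**(-power).
    """
    num = den = 0.0
    for x, y, v in points:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return v            # grid node coincides with a measurement
        w = d ** -power
        num += w * v
        den += w
    return num / den
```

Running this for every node of a 1° × 1° grid over the mapped region would produce one TEC map per epoch.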

Strain demand prediction of buried steel pipeline at strike-slip fault crossings: A surrogate model approach

  • Xie, Junyao;Zhang, Lu;Zheng, Qian;Liu, Xiaoben;Dubljevic, Stevan;Zhang, Hong
    • Earthquakes and Structures
    • /
    • v.20 no.1
    • /
    • pp.109-122
    • /
    • 2021
  • Significant progress in the oil and gas industry has advanced the application of pipelines into an intelligent era, which poses rigorous requirements on pipeline safety, reliability, and maintainability, especially when crossing seismic zones. In general, strike-slip faults are prone to induce large deformations leading to local buckling and, eventually, global rupture. To evaluate the performance and safety of pipelines in this situation, numerical simulation has proved to be a relatively accurate and reliable technique based on built-in physical models and advanced grid technology. However, its computational cost is prohibitive, so one has to wait a long time to obtain results for complex large-scale pipelines. In this manuscript, an efficient and accurate surrogate model based on machine learning is proposed for strain demand prediction of buried X80 pipelines subjected to strike-slip faults. Specifically, a support vector regression model serves as the surrogate and learns the high-dimensional nonlinear relationship that maps multiple input variables, including pipe geometries, internal pressures, and strike-slip displacements, to the output variables (namely tensile strains and compressive strains). The effectiveness and efficiency of the proposed method are validated by numerical studies considering the effects of structural sizes, internal pressure, and strike-slip movements.