• Title/Summary/Keyword: interpolation error


Providing the combined models for groundwater changes using common indicators in GIS (GIS 공통 지표를 활용한 지하수 변화 통합 모델 제공)

  • Samaneh, Hamta;Seo, You Seok
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.3
    • /
    • pp.245-255
    • /
    • 2022
  • Evaluating the qualitative status of water resources using various indicators, one of the most prevalent methods for the optimal management of water bodies, is necessary for a coherent water-quality protection plan. In this study, yearly zoning maps were developed by collecting, reviewing, validating, and statistically testing the qualitative parameter data of Iranian aquifers from 1995 to 2020 in a Geographic Information System (GIS), based on the Inverse Distance Weighting (IDW), Radial Basis Function (RBF), and Global Polynomial Interpolation (GPI) methods and on Kriging and Co-Kriging in their simple, ordinary, and universal forms. The model with the minimum uncertainty and zoning error, and with the closest agreement between ASE and RMSE, was selected as the optimum model. The selected model was then zoned using Schoeller and Wilcox diagrams. A general evaluation of the groundwater situation of Iran revealed that 59.70% and 39.86% of the resources fall into the class unsuitable for agricultural and drinking purposes, respectively, indicating a groundwater quality crisis in Iran. Finally, to validate the extracted results, spatial changes in water quality were evaluated using the Groundwater Quality Index (GWQI), indicating a high sensitivity of the aquifers to small changes in water level as well as a severe shortage of groundwater reserves in Iran.
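
As a concrete illustration of one interpolation method named in this abstract, Inverse Distance Weighting can be sketched in a few lines. This is a minimal sketch, not the study's GIS workflow; the well coordinates and quality values below are hypothetical:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: estimate a value at each query point
    as a distance-weighted average of the known sample values."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    xy_query = np.asarray(xy_query, float)
    # pairwise distances between query points and known sample points
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # closer points get larger weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# hypothetical well locations and a measured quality parameter
wells = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
ec = [100.0, 200.0, 300.0]
print(idw(wells, ec, [(0.5, 0.5)]))
```

The `power` exponent controls how quickly a sample's influence decays with distance; GIS packages typically default to 2.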

A Study on Transport Robot for Autonomous Driving to a Destination Based on QR Code in an Indoor Environment (실내 환경에서 QR 코드 기반 목적지 자율주행을 위한 운반 로봇에 관한 연구)

  • Se-Jun Park
    • Journal of Platform Technology
    • /
    • v.11 no.2
    • /
    • pp.26-38
    • /
    • 2023
  • This paper studies a transport robot capable of autonomously driving to a destination using QR codes in an indoor environment. The transport robot was designed and manufactured with a camera for recognizing QR codes and a lidar sensor, so that the robot can maintain a constant distance from the left and right walls while moving. To obtain the robot's location information, the QR code image was enlarged with Lanczos resampling interpolation, binarized with the Otsu algorithm, and then detected and decoded using the ZBar library. The QR code recognition experiment varied the size of the QR code and the traveling speed of the robot while the camera position and the height of the QR code were fixed at 192 cm. When the QR code size was 9 cm × 9 cm, the recognition rate was 99.7%, and it was almost 100% when the traveling speed of the robot was below about 0.5 m/s. Based on the QR code recognition rate, autonomous driving to the destination was tested, in the absence of obstacles, both for a straight-only route and for a route combining straight driving and turning. When the route was straight only, the robot reached the destination quickly because little position correction was needed; when the route included a turn, arrival was relatively delayed by the need for position correction. The experiments showed that the transport robot arrived at the destination fairly accurately, despite slight positional errors while driving, confirming the applicability of a QR code-based self-driving transport robot.
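
The Otsu binarization step in the pipeline above can be sketched directly in NumPy (the decoding itself would then be handed to the ZBar library, as the paper does). This is a minimal sketch using a synthetic two-level image, not the paper's code:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the foreground/background split."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                   # gray-level probabilities
    omega = np.cumsum(p)                    # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))      # cumulative mean up to t
    mu_t = mu[-1]                           # global mean
    # between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # empty classes contribute 0
    return int(np.argmax(sigma_b))

# synthetic image with two well-separated intensity populations
img = np.array([[30] * 8 + [220] * 8] * 4, dtype=np.uint8)
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8) * 255
print(t)
```

In practice the same step is a one-liner in OpenCV (`cv2.threshold` with `THRESH_OTSU`); the explicit version above only shows what that flag computes.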


Commissioning of Dynamic Wedge Fields Using Conventional Dosimetric Tools (선량 중첩 방식을 이용한 동적 배기 조사면의 특성 연구)

  • Yi Byong Yong;Nha Sang Kyun;Choi Eun Kyung;Kim Jong Hoon;Chang Hyesook;Kim Mi Hwa
    • Radiation Oncology Journal
    • /
    • v.15 no.1
    • /
    • pp.71-78
    • /
    • 1997
  • Purpose: To collect beam data for dynamic wedge fields using conventional measurement tools, without a multi-detector system such as a linear diode detector or ionization chamber array. Materials and Methods: The CL 2100 C/D accelerator has two photon energies, 6 MV and 15 MV, with dynamic wedge angles of 15°, 30°, 45°, and 60°. Wedge transmission factors, percentage depth doses (PDDs), and dose profiles were measured. The wedge transmission factors were measured for field sizes from 4 × 4 cm² to 20 × 20 cm² in 1-2 cm steps. Various rectangular field sizes were also measured for each photon energy of 6 MV and 15 MV, in combination with each dynamic wedge angle of 15°, 30°, 45°, and 60°. These factors were compared to wedge factors calculated from STT (Segmented Treatment Table) values. PDDs were measured with film and with an ionization chamber in a water phantom for fixed square fields; converting parameters from film data to chamber data could be obtained from this procedure, so the PDDs of dynamically wedged fields could be obtained from film dosimetry without using the ionization chamber. Dose profiles were obtained by interpolation and STT-weighted superposition of data from selected asymmetric static field measurements with an ionization chamber. Results: The measured wedge transmission factors showed good agreement with the calculated values. The wedge factors of rectangular fields at constant V-field were equal to those of square fields. The differences between the PDDs of open fields and those of dynamic fields were insignificant. Dose profiles from the superposition method showed an acceptable range of accuracy (maximum 2% error) when compared to film dosimetry. Conclusion: These results show that commissioning of dynamic wedges can be done with conventional dosimetric tools, such as a point-detector system and film dosimetry, within a maximum 2% error range of accuracy.
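
The STT-weighted superposition of static field profiles described in this abstract can be sketched as a normalized weighted sum. The segment edges and monitor-unit fractions below are hypothetical, not an actual Segmented Treatment Table:

```python
import numpy as np

def superpose_profiles(static_profiles, stt_weights):
    """Approximate a dynamic wedge dose profile as the STT-weighted
    superposition of static (asymmetric) field profiles."""
    static_profiles = np.asarray(static_profiles, float)
    w = np.asarray(stt_weights, float)
    w = w / w.sum()                        # normalize monitor-unit weights
    return (w[:, None] * static_profiles).sum(axis=0)

# hypothetical: three static segments whose field edge steps across the axis
x = np.linspace(-10, 10, 21)               # off-axis position (cm)
segments = [np.where(x < edge, 1.0, 0.0) for edge in (2.0, 5.0, 8.0)]
stt = [0.5, 0.3, 0.2]                      # segment monitor-unit fractions
profile = superpose_profiles(segments, stt)
```

The result is a stepped, wedge-like profile; a real commissioning measurement would use many more segments, so the steps blend into the smooth wedged gradient.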


Application of K-means Clustering Model to XRD Experimental Data in the Korea Plateau (한국대지 XRD 실험자료 대상 k-평균 군집화 모델 적용성 분석)

  • Ju Young Park;Sun Young Park;Jiyoung Choi;Sungil Kim;Yuri Kim;Bo Yeon Yi;Kyungbook Lee
    • Economic and Environmental Geology
    • /
    • v.57 no.5
    • /
    • pp.529-537
    • /
    • 2024
  • The mineral composition used to identify a sedimentary environment can be obtained through X-ray diffraction (XRD) analysis. However, because of the time required to analyze a large number of samples, a machine learning-based mineral composition analysis model was developed. That model showed reasonable reliability for samples with usual compositions but performed poorly on unusual samples. Consequently, a clustering model was recently developed to separate out the unusual samples so that experts can handle them. The purpose of this study is to examine the applicability of the clustering model, developed in a previous study using XRD data from the Ulleung Basin, to samples from a different region. The research data consist of the intensity profiles from XRD experiments and the corresponding mineral composition analyses for 54 sediment samples from the Korea Plateau, located northwest of the Ulleung Basin. Because the intensity profile of a Korea Plateau sample comprises 7,420 values (3.005-64.996°), differing from the 3,100 values (3.01-64.99°) of an Ulleung Basin sample, linear interpolation was used to align the input features. A min-max scaler was then applied to the intensity profile of each sample to preserve the trend and peak ratios of the intensity. Applying the clustering model to the 54 preprocessed intensity profiles classified 35 samples into the expert group and 19 into the machine learning group. Among the 19 samples in the machine learning group, there were no false positives; because no unusual sample fell into the machine learning group, the clustering model can increase the reliability of mineral compositions predicted by the machine learning model. Of the 35 samples in the expert group, 31 were classified as false negatives (FN), meaning they were assigned to the expert group even though the machine learning model could analyze them properly. However, when these FN samples were analyzed with the machine learning-based composition analysis model, a high mean absolute error of 2.94% was observed; it is therefore reasonable that these samples were assigned to the expert group.
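
The preprocessing described in this abstract, linear interpolation onto a common 2θ grid followed by per-sample min-max scaling, can be sketched with NumPy. The single Gaussian "peak" below stands in for a real diffraction pattern and is purely illustrative:

```python
import numpy as np

def align_profile(two_theta, intensity, target_grid):
    """Resample an XRD intensity profile onto a common 2-theta grid by
    linear interpolation, then min-max scale it to [0, 1] so that only
    the shape (trend and peak ratios) of the pattern is compared."""
    resampled = np.interp(target_grid, two_theta, intensity)
    lo, hi = resampled.min(), resampled.max()
    return (resampled - lo) / (hi - lo)

# hypothetical: a 7,420-point profile resampled to a 3,100-point grid,
# matching the grids stated in the abstract
src_grid = np.linspace(3.005, 64.996, 7420)
tgt_grid = np.linspace(3.01, 64.99, 3100)
profile = np.exp(-((src_grid - 26.6) ** 2) / 0.02)  # fake quartz-like peak
aligned = align_profile(src_grid, profile, tgt_grid)
```

Scaling each sample independently (rather than across the dataset) is what preserves the relative peak heights within a pattern while discarding absolute count differences between runs.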

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and labor-intensive. For production seismic data processing, a good velocity analysis tool, as well as a high-performance computer, is required; the tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point; generally the plot consists of a semblance contour, a super gather, and a stack panel, and the interpreter chooses the velocity function by analyzing the plot. This technique is highly dependent on the interpreter's skill and requires much human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data; for accurate velocity analysis, this noise must be removed before the spectrum is computed. The analysis must also be carried out by carefully choosing the location of the analysis points and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must usually be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes give the final stack within a few mouse clicks, enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMO-corrected (NMOC) domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted waves, but with two improvements: no interpolation error and very fast computing time. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. It runs under the X-Window/Motif environment, with menus designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
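
The mute-function transform between the T-X and NMOC domains rests on the normal-moveout traveltime relation t(x) = sqrt(t0² + x²/v²): a mute time picked at zero-offset time t0 in the NMOC domain maps back to offset x in the T-X domain through it. A minimal sketch follows (not the xva code; the pick and velocity are hypothetical):

```python
import math

def nmo_time(t0, offset, velocity):
    """Map a zero-offset (NMO-corrected) time t0 back to the two-way
    traveltime at a given offset: t(x) = sqrt(t0^2 + (x / v)^2)."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

# hypothetical mute picked at t0 = 1.0 s in the NMOC domain,
# applied to a trace at 2000 m offset with v = 2000 m/s
t_mute = nmo_time(1.0, 2000.0, 2000.0)
print(round(t_mute, 4))   # sqrt(1 + 1)
```

Because the mapping is exact per trace, no resampling of the data is needed, which is consistent with the abstract's claim of no interpolation error and fast computation.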
