• Title/Summary/Keyword: Position Accuracy

Study on Development of Embedded Source Depth Assessment Method Using Gamma Spectrum Ratio (감마선 스펙트럼 비율을 이용한 매립 선원의 깊이 평가 방법론 개발 연구)

  • Kim, Jun-Ha;Cheong, Jea-Hak;Hong, Sang-Bum;Seo, Bum-Kyung;Lee, Byung Chae
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT) / v.18 no.1 / pp.51-62 / 2020
  • This study was conducted to develop a depth assessment method for embedded sources based on gamma-spectrum ratios and to evaluate its field applicability. To this end, changes in the peak-to-Compton (PTC) and peak-to-valley (PTV) ratios were evaluated as a function of 137Cs, 60Co, and 152Eu point-source depth using an HPGe detector and MCNP simulation. The effect of measurement distance on the PTV and PTC methods was also evaluated. From these results, source depth assessment equations for the PTC and PTV methods were derived at a reference detection distance of 50 cm. The sensitivity to changes in detection distance was then assessed: the error increased by 3 to 4 cm when the detection distance was reduced by 20 cm from the 50 cm reference, whereas the effect of detection distance was small when it was increased to 100 cm. The PTV and PTC methods were also compared with the two-distance measurement method, which evaluates source depth from the change in net peak count rate with detection distance. In the depth assessment, the PTV and PTC methods showed a maximum error of 1.87 cm, while the two-distance measurement method showed a maximum error of 2.69 cm, confirming the higher accuracy of the PTV and PTC methods. In the sensitivity evaluation for horizontal source position error, the two-distance measurement method showed a maximum error of up to 25.59 cm, whereas the PTV and PTC methods showed high accuracy with a maximum error below 8.04 cm. In addition, the PTC method had the lowest standard deviation for the same measurement time, which is expected to enable rapid measurement.
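
The abstract reports depth-calibration equations for the PTC and PTV ratios but not their form. As a rough sketch of the quantities involved (the channel windows and the exponential depth calibration below are assumptions for illustration, not values from the paper), the ratios could be computed from an HPGe spectrum like this:

```python
import numpy as np

def peak_ratios(counts, peak_ch, peak_hw=3, valley_lo=480, valley_hi=520,
                compton_lo=300, compton_hi=420):
    """Peak-to-valley (PTV) and peak-to-Compton (PTC) ratios.

    counts  : 1-D array of channel counts from an HPGe spectrum
    peak_ch : channel of the full-energy peak (e.g. 661.7 keV for 137Cs)
    The valley and Compton window channels are illustrative; in practice
    they are chosen from the measured spectrum for each nuclide.
    """
    peak = counts[peak_ch - peak_hw:peak_ch + peak_hw + 1].sum()
    valley = counts[valley_lo:valley_hi].mean()
    compton = counts[compton_lo:compton_hi].mean()
    return peak / valley, peak / compton

def depth_from_ratio(ratio, a, b):
    """Invert an assumed exponential calibration ratio = a * exp(-b * depth),
    with a and b fitted at the 50 cm reference detection distance."""
    return np.log(a / ratio) / b
```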

Improvement of GPS positioning accuracy by static post-processing method (정적 후처리방식에 의한 GPS의 측위정도 개선)

  • 김민선;신현옥
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.39 no.4 / pp.251-261 / 2003
  • To measure GPS position accuracy and its distribution according to baseline length, observations of 30 minutes to 24 hours were conducted at fixed locations with two GPS receivers (L1, 12 channels) from May 29 to June 2, 2002. The GPS data received at the reference station, the rover station, and the permanent GPS observation station operated by the National Geography Institute in Korea were processed with kinematic and static post-processing methods using post-processing software. The results are summarized as follows: 1. The number of satellites that could be observed continuously for more than six hours was 16, and most of these satellites were positioned in the east-west direction on May 31, 2002. The average number of satellites observed and the geometric dilution of precision (GDOP), computed every 10 minutes over the day, were 8 and 3.89, respectively. 2. The average GPS positions both before and after post-processing were shifted to the south and west (standalone: 1.17 m, post-processing: 0.43 m). The twice-distance root mean square (2drms) measured standalone was 6.65 m. The 2drms could be reduced to 33.8% (σ = 17.2) and 5.3% (σ = 2.2) of the standalone value by the kinematic and static post-processing methods, respectively. 3. The relationship between the baseline length x (km) and the 2drms y (m) obtained by the static post-processing method was y = 0.0016x + 0.006 (R² = 0.87). For static post-processing with this GPS receiver, positioning within a 2drms of 20 cm was found to be possible when the baseline length was less than 100 km and the observation time was at least 30 minutes.
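
A minimal sketch of the two quantities in point 3, given arrays of east/north offsets in metres (the function names are mine, not the paper's):

```python
import numpy as np

def two_drms(east_m, north_m):
    """2drms: twice the RMS of the horizontal radial errors about the mean fix."""
    de = east_m - east_m.mean()
    dn = north_m - north_m.mean()
    return 2.0 * np.sqrt(np.mean(de**2 + dn**2))

def predicted_2drms_m(baseline_km):
    """Baseline relation reported above for static post-processing:
    2drms y (m) versus baseline length x (km), R^2 = 0.87."""
    return 0.0016 * baseline_km + 0.006

print(predicted_2drms_m(100.0))  # ~0.17 m, consistent with the 20 cm claim
```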

THREE DIMENSIONAL ANALYSIS OF MAXILLOFACIAL STRUCTURE BY FRONTAL AND LATERAL CEPHALOGRAM (두부 방사선 규격사진을 이용한 악안면 구조의 3차원적 분석법)

  • Kwon, Kui-Young;Lee, Sang-Han;Kwon, Tae-Geon
    • Maxillofacial Plastic and Reconstructive Surgery / v.21 no.2 / pp.174-188 / 1999
  • The purpose of this study was to evaluate the precision and accuracy of a three-dimensional cephalogram constructed from the frontal and lateral cephalograms of twelve human dry skulls. After completing the three-dimensional image reconstruction program, we applied it to two dentofacial deformity patients. 1. A conventional nasion relator in the cephalostat was used to reproduce the same head position for each dry skull. The mean difference of the three-dimensional cephalogram for the same dry skull was 0.34±0.33 mm; the closeness of repeated measures for each skull demonstrates the precision of this method. 2. Concerning accuracy, the mean difference between the three-dimensional reconstruction data and the actual linear measurements was 1.47±1.45 mm, and the mean magnification ratio was 100.24±4.68%. This difference is attributed mainly to ill-defined cephalometric landmarks, not to positional changes of the dry skull. 3. Cephalometric measurements of the lateral and frontal radiographs had no consistent magnification ratio because of the different focus-object distances. The mean differences between the frontal and lateral cephalograms and the actual linear measurements were 4.72±2.01 mm and -5.22±3.36 mm, respectively. Vertical measurements were slightly more accurate than horizontal measurements. 4. Applied to actual patient analysis, this program is recommended for analyzing asymmetry or spatial change after operation. Orthodontic brackets would be favorable cephalometric landmarks for constructing the three-dimensional images.
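
The reconstruction itself is not spelled out in the abstract; a minimal, hypothetical sketch of the underlying idea (the frontal film resolving (x, z), the lateral film (y, z), each view corrected for its own focus-object distance) might look like this:

```python
import numpy as np

def demagnify(u_mm, v_mm, source_film_mm, source_object_mm):
    """Remove projection magnification (central-ray approximation):
    M = source-to-film distance / source-to-object distance."""
    m = source_film_mm / source_object_mm
    return u_mm / m, v_mm / m

def landmark_3d(frontal_xz, lateral_yz):
    """Merge one landmark seen on both films into (x, y, z);
    the vertical coordinate shared by the two views is averaged."""
    x, z_f = frontal_xz
    y, z_l = lateral_yz
    return np.array([x, y, (z_f + z_l) / 2.0])
```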

Dosimetric Analysis of Respiratory-Gated RapidArc with Varying Gating Window Times (호흡연동 래피드아크 치료 시 빔 조사 구간 설정에 따른 선량 변화 분석)

  • Yoon, Mee Sun;Kim, Yong-Hyeob;Jeong, Jae-Uk;Nam, Taek-Keun;Ahn, Sung-Ja;Chung, Woong-Ki;Song, Ju-Young
    • Progress in Medical Physics / v.26 no.2 / pp.87-92 / 2015
  • Gated RapidArc may produce dosimetric errors due to the stop-and-go motion of the heavy gantry, which can misalign the gantry restart position and reduce the accuracy of key factors in RapidArc delivery such as MLC movement and gantry speed. In this study, the effect of stop-and-go motion in gated RapidArc was analyzed for varying gating window times, which determine the total number of stop-and-go motions. Ten RapidArc plans for the treatment of liver cancer were prepared. The RPM gating system and a moving phantom were used to set up accurate gating window times. Two delivery quality assurance (DQA) plans were created for each RapidArc plan: a portal dosimetry plan and a MapCHECK2 plan. The respiratory cycle was set to 4 s, and the DQA plans were delivered under three gating conditions: no gating, a 1-s gating window, and a 2-s gating window. The error between calculated and measured dose was evaluated by the pass rate of the gamma evaluation method with 3%/3 mm criteria. The average pass rates in the portal dosimetry plans were 98.72±0.82%, 94.91±1.64%, and 98.23±0.97% for no gating, 1-s gating, and 2-s gating, respectively. The average pass rates in the MapCHECK2 plans were 97.80±0.91%, 95.38±1.31%, and 97.50±0.96% for no gating, 1-s gating, and 2-s gating, respectively. We verified that the dosimetric accuracy of gated RapidArc increases as the gating window time increases, and that efforts should be made to increase the gating window time during the RapidArc treatment process.
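
For reference, the 3%/3 mm gamma evaluation used above compares each reference dose point against nearby evaluated doses. A minimal brute-force sketch (global normalization and a 10% low-dose threshold are assumed, as commercial tools typically allow several conventions):

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dd=0.03, dta_mm=3.0,
                    threshold=0.10):
    """Global 2-D gamma pass rate (3%/3 mm by default), brute force.
    dose_eval, dose_ref: 2-D dose arrays on the same grid."""
    dmax = dose_ref.max()
    r = int(np.ceil(3 * dta_mm / spacing_mm))       # search radius in pixels
    ny, nx = dose_ref.shape
    gammas = []
    for j in range(ny):
        for i in range(nx):
            if dose_ref[j, i] < threshold * dmax:   # skip low-dose region
                continue
            g2 = np.inf
            for dj in range(-r, r + 1):
                for di in range(-r, r + 1):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx:
                        dist2 = (dj**2 + di**2) * spacing_mm**2 / dta_mm**2
                        dose2 = ((dose_eval[jj, ii] - dose_ref[j, i])
                                 / (dd * dmax))**2
                        g2 = min(g2, dist2 + dose2)
            gammas.append(np.sqrt(g2))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)
```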

Automated patient set-up using intensity-based image registration in proton therapy (양성자 치료 시 Intensity 기반의 영상 정합을 이용한 환자 자동화 Set up 적용 방법)

  • Jang, Hoon;Kim, Ho Sik;Choe, Seung Oh;Kim, Eun Suk;Jeong, Jong Hyi;Ahn, Sang Hee
    • The Journal of Korean Society for Radiation Therapy / v.30 no.1_2 / pp.97-105 / 2018
  • Purpose: Because proton therapy using the Bragg peak has the distinct characteristic of delivering the maximum dose to the tumor and a minimal dose to normal tissue, a medical imaging system that can quantify changes in patient position or treatment area is of paramount importance. The purpose of this research was to produce a MATLAB-based in-house registration code, compare its image matching against the set-up obtained through the existing DIPS program, and evaluate the accuracy of the existing treatment by determining the error between the DIPS images and the DRR. Materials and Methods: Thirteen patients with brain tumors or head and neck cancer who received proton therapy were included in this study. The DIPS program (Version 2.4.3, IBA, Belgium) was used for image comparison and the Eclipse Proton Planning System (Version 13.7, Varian, USA) for patient treatment planning. To validate the registration method, a test image was artificially rotated and translated and then matched to the original image; in addition, the initial set-up image from the DIPS program was matched with the planning DRR, the error values were obtained, and the usefulness of the algorithm was evaluated. Results: When the test image was moved 0.5, 1, and 10 cm in the left and right directions, the average error was 0.018 cm. When the test image was rotated counterclockwise by 1° and 10°, the error was 0.0011°. When the initial images of four patients were registered, the mean errors were 0.056, 0.044, and 0.053 cm in x, y, and z, and 0.190° and 0.206° for rotation and pitch. When the final images of 13 patients were registered, the mean differences were 0.062, 0.085, and 0.074 cm in x, y, and z (0.120 cm as a vector value), and rotation and pitch were 0.171° and 0.174°, respectively. Conclusion: The MATLAB-based in-house registration code produced through this study showed accurate intensity-based image matching for anatomical structures as well as for simple images. The set-up error relative to the DIPS program of the existing treatment method showed only a very slight difference, confirming the accuracy of the proton therapy. Development of additional programs and further intensity-based in-house code research will be necessary for clinical application.
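
The paper's registration code is MATLAB; to keep the examples in one language, here is a rough Python analogue of intensity-based rigid registration (mean-squared-difference cost with a Powell search; the cost function and parameterization are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def mse_cost(params, fixed, moving):
    """Mean squared intensity difference after a rigid 2-D transform."""
    tx, ty, angle_deg = params
    moved = rotate(moving, angle_deg, reshape=False, order=1)
    moved = shift(moved, (ty, tx), order=1)
    return np.mean((fixed - moved) ** 2)

def register_rigid(fixed, moving, x0=(0.0, 0.0, 0.0)):
    """Recover (tx, ty, rotation) that best aligns moving onto fixed."""
    res = minimize(mse_cost, x0, args=(fixed, moving), method="Powell")
    return res.x   # translation in pixels, rotation in degrees
```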

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1135-1147 / 2021
  • To geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the image acquisition time is required. In general, high-resolution satellites provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct RPC errors, and the representative method of acquiring GCPs is a field survey to obtain accurate ground coordinates. However, it can be difficult to find GCPs in the satellite image because of image quality, land cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, the collection of GCPs can be automated through image matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected with matching points extracted using UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for the extraction of matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, representative feature-based and area-based matching methods respectively, were compared. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with the GCP-based RPC correction result. The GCP-based method improved the correction accuracy over the vendor-provided RPC by 2.14 pixels in the sample direction and 5.43 pixels in the line direction. In the proposed method, the SURF and phase correlation approaches improved the sample accuracy by 0.83 and 1.49 pixels and the line accuracy by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. The experimental results show that the proposed method using UAV imagery is a possible alternative to the GCP-based method for RPC correction.
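
Of the two matching methods compared, phase correlation is compact enough to sketch. A minimal translation-only version (subpixel refinement and the paper's pre-processing omitted):

```python
import numpy as np

def phase_correlation(ref, tgt):
    """Estimate the integer (dy, dx) such that shifting ref by (dy, dx)
    aligns it with tgt, via the normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(tgt)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep only phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak, dtype=float)
    for k, n in enumerate(corr.shape):    # wrap to signed offsets
        if shifts[k] > n // 2:
            shifts[k] -= n
    return shifts                          # (dy, dx) in pixels
```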

Analysis of Optimal Pathways for Terrestrial LiDAR Scanning for the Establishment of Digital Inventory of Forest Resources (디지털 산림자원정보 구축을 위한 최적의 지상LiDAR 스캔 경로 분석)

  • Ko, Chi-Ung;Yim, Jong-Su;Kim, Dong-Geun;Kang, Jin-Taek
    • Korean Journal of Remote Sensing / v.37 no.2 / pp.245-256 / 2021
  • This study was conducted to identify the applicability of a LiDAR sensor to forest resources inventories by comparing tree position, height, and DBH obtained by the sensor with those from existing forest inventory methods, for Cryptomeria japonica in the Jeolmul forest in Jeju, South Korea. To this end, a backpack personal LiDAR (Greenvalley International, Model D50) was employed. To facilitate data collection, seven scanning patterns were defined, considering the density of the sample plots and work efficiency, and the accuracy of the per-tree variable estimates was assessed for each pattern. The time spent acquiring and processing the data by each method was compared to evaluate efficiency. The findings showed that the rate of detecting standing trees with the LiDAR was 100%. High statistical accuracy was observed for both Pattern 5 (DBH: RMSE 1.07 cm, bias -0.79 cm; height: RMSE 0.95 m, bias -3.2 m) and Pattern 7 (DBH: RMSE 1.18 cm, bias -0.82 cm; height: RMSE 1.13 m, bias -2.62 m) compared to the results obtained in the typical inventory manner. Concerning time, 115 to 135 minutes per 1 ha were required to process the LiDAR data, while 375 to 1,115 minutes were spent using the existing method, demonstrating the higher efficiency of the device. It can thus be concluded that a backpack personal LiDAR helps increase the efficiency of a forest resources inventory in a planted coniferous forest with understory vegetation, implying a need for further research in a variety of forests.
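
The RMSE and bias figures quoted above are the standard per-tree accuracy metrics; a small illustration with made-up numbers (not data from the paper):

```python
import numpy as np

def rmse_bias(estimated, reference):
    """RMSE = sqrt(mean squared error);
    bias = mean(estimated - reference), negative => underestimation."""
    err = np.asarray(estimated, float) - np.asarray(reference, float)
    return np.sqrt(np.mean(err**2)), np.mean(err)

# Hypothetical example: LiDAR-derived vs. calliper DBH (cm) for five stems
lidar_dbh = [32.1, 28.4, 35.0, 30.2, 27.8]
field_dbh = [32.8, 29.6, 35.9, 31.0, 28.9]
print(rmse_bias(lidar_dbh, field_dbh))   # approx. (0.96 cm RMSE, -0.94 cm bias)
```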

Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

  • Gong, Sung-Hyun;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1723-1735 / 2022
  • Landslides are one of the most prevalent natural disasters, threatening both people and property, and can cause damage at the national level, so effective prediction and prevention are essential. Research on producing landslide susceptibility maps with high accuracy is steadily being conducted, and various models have been applied to landslide susceptibility analysis: pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been used. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data have a significant effect on the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide susceptibility using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides occur frequently and cause damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). These factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were produced using deep neural network (DNN) and CNN models. The models and susceptibility maps were verified through average precision (AP) and root mean square error (RMSE); the patch-based CNN model showed 3.4% better performance than the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use and landslide management policies.
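
The pixel-versus-patch distinction is the crux of the comparison. A minimal sketch of the two model shapes (layer sizes and the patch size are assumptions, not taken from the paper; the 15 input bands are the factors listed above):

```python
import tensorflow as tf

N_FACTORS = 15   # one band per landslide-related factor
PATCH = 17       # patch size is an assumption for illustration

# Pixel-based DNN: one vector of factor values per location.
dnn = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FACTORS,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # landslide probability
])

# Patch-based CNN: a neighbourhood of all factor bands per location, letting
# convolutions exploit the spatial context the pixel-based model cannot see.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(PATCH, PATCH, N_FACTORS)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

for model in (dnn, cnn):
    model.compile(optimizer="adam", loss="binary_crossentropy")
```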

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought that machines could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, has drawn much interest. Deep learning is already being applied to many problems and shows especially good performance in image recognition, as well as in high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled to perform well. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer opened an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, the traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, showing how well they classify the class of interest rather than the overall accuracy. The deep learning techniques were applied in the experiment as follows. The CNN algorithm reads adjacent values and recognizes local features, but in business data the proximity of fields rarely matters because each field is usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed with respect to the first layer in order to reduce the influence of field position (see the sketch after this abstract).
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models, which is interesting because the CNN performed well in binary classification problems to which it has rarely been applied, as well as in the fields where its effectiveness has been proven. Third, the LSTM algorithm appears unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
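
A minimal sketch of the two designs described above (field count, layer widths, and filter count are illustrative assumptions, not the paper's exact settings):

```python
import tensorflow as tf

N_FIELDS = 16   # illustrative; the Portuguese bank telemarketing set has ~16-20 inputs

# CNN as described: the filter width spans all fields, so one convolution
# reads the whole record at once; a dense layer then combines the features.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FIELDS, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=N_FIELDS, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),          # each neuron dropped with p = 0.5
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Two stacked LSTM layers, the second reading the fields in reverse order
# to reduce the influence of field position, as described above.
lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FIELDS, 1)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32, go_backwards=True),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

for model in (cnn, lstm):
    model.compile(optimizer="adam", loss="binary_crossentropy")
```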

Evaluation of the Usefulness of Respiratory-Gated Volumetric Modulated Arc Therapy (Gated RapidArc) and Patient Application Using the Amplitude Mode (호흡 동조 체적 세기조절 회전 방사선치료의 유용성 평가와 진폭모드를 이용한 환자적용)

  • Kim, Sung Ki;Lim, Hyun Sil;Kim, Wan Sun
    • The Journal of Korean Society for Radiation Therapy / v.26 no.1 / pp.29-35 / 2014
  • Purpose: Gated RapidArc equipment, which allows gated radiation therapy to be performed simultaneously with VMAT and was not previously available, has recently been commercialized. This study analyzes the accuracy of gated RapidArc to evaluate its usability, and applies the amplitude mode to patients. Materials and Methods: Dose distributions were measured with a solid water phantom and GafChromic film and analyzed with the Film QA program using the gamma criterion (3%, 3 mm). Matrixx dosimetry equipment and the Compass dose analysis program were used to verify the three-dimensional dose distribution. Periodic respiratory signals were generated for the solid phantom with a 4D motion phantom and the Varian RPM respiratory gating system, and the dose distributions for free breathing and breath-hold were analyzed. From February 2013 to August 2013, four liver cancer patients were included. 4DCT images were acquired while the patients, viewing their respiratory pattern through eye goggles, practiced following their breathing cycle exactly in phase mode. For gated RapidArc treatment in amplitude mode, each patient breathed three times and then held the breath for 5-6 s within the 40-60% gating interval; while within that interval, the beam was switched on semi-automatically by pressing the Beam On button. Results: The absolute doses calculated by the treatment planning system for non-gated and gated volumetric modulated arc therapy differed by less than 1%, and the difference between the treatment techniques was also less than 1%. Gamma analysis (3%, 3 mm) showed 99% agreement, and organ-specific dose differences generally showed better than 95% agreement. The respiratory cycle created in amplitude mode for gated RapidArc agreed well with the actual patient's breathing cycle. Conclusion: Non-gated and gated volumetric modulated arc therapy showed very good agreement in absolute dose and dose distribution. This respiratory-gated VMAT technique is therefore considered applicable to the treatment of tumors that move with thoracic or abdominal respiration. By creating the respiratory cycle in amplitude mode through patient goggles and using a 5-6 s breath-hold, gated RapidArc could be delivered even on equipment that does not apply gating automatically.
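
To illustrate why the 40-60% amplitude window forces either a short beam-on duty cycle or a deliberate breath-hold, here is a small simulation (the sinusoidal trace is a stand-in for the phantom's 4 s cycle, not patient data):

```python
import numpy as np

def beam_on_fraction(amplitude, lo=0.40, hi=0.60):
    """Fraction of time the beam is enabled when gating on normalized
    amplitude between lo and hi (the 40-60% window described above)."""
    a = (amplitude - amplitude.min()) / np.ptp(amplitude)
    return np.mean((a >= lo) & (a <= hi))

t = np.linspace(0.0, 60.0, 6000)            # 60 s of motion sampled at ~100 Hz
trace = np.cos(2.0 * np.pi * t / 4.0)       # 4 s periodic cycle, as in the phantom
print(beam_on_fraction(trace))              # ~0.13 for a pure sinusoid
```

Holding the breath inside the window, as done for the patients, raises this fraction far above what free breathing allows.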