• Title/Summary/Keyword: Smoothing Error

The Positional Accuracy Quality Assessment of Digital Map Generalization (수치지도 일반화 위치정확도 품질평가)

  • 박경식;임인섭;최석근
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.19 no.2 / pp.173-181 / 2001
  • It is very important to assess the spatial data quality of a digital map produced through map generalization. In this study, as an aspect of spatial data quality maintenance, we examined the tolerated range of the theoretically expected accuracy and established a quality assessment standard so that the transformed digital map data do not violate the digital map specifications or the accuracy requirements of the related scale. When a large-scale digital map is transformed to a small scale and its complexity is reduced through processes such as simplification, smoothing, and refinement, changes in spatial position always occur; because the spatial accuracy of the transformed positions is difficult to analyze directly, we used buffering as the assessment method for spatial accuracy in the generalization procedure. Although the tolerated range of positioning error for the 1/1,000 and 1/5,000 scales is fixed by the relevant regulations, the algorithms applied to the individual processing steps behave differently, so unless suitable parameters and tolerances are chosen, the generalized result will not stay within the tolerated positioning error. Testing the parameters of each algorithm against the tolerated range showed that the parameter of the simplification algorithm and the resulting positional accuracy were 0.2617 m and 0.4617 m, respectively; a buffer-based check of this kind is sketched below.

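As an illustration of the buffering assessment, here is a minimal sketch using the shapely library: a generalized line passes the check if it lies entirely within a fixed-width buffer of the original. The coordinates, the Douglas-Peucker simplify() call, and the reuse of the paper's 0.2617 m and 0.4617 m figures as parameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a buffer-based positional accuracy check.
from shapely.geometry import LineString

def within_tolerance(original: LineString, generalized: LineString,
                     tolerance_m: float) -> bool:
    """True when the generalized line lies entirely inside a buffer
    of radius tolerance_m around the original line."""
    return original.buffer(tolerance_m).contains(generalized)

# Illustrative coordinates; simplify() is Douglas-Peucker simplification,
# standing in for the paper's simplification step.
original = LineString([(0, 0), (10, 0.3), (20, 0.1), (30, 0.4)])
generalized = original.simplify(0.2617)                  # assumed parameter
print(within_tolerance(original, generalized, 0.4617))   # assumed tolerance
```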

A Compensation Method of Timing Signals for Communications Networks Synchronization by using Loran Signals (Loran 신호 이용 통신망 동기를 위한 타이밍 신호 보상 방안)

  • Lee, Young-Kyu;Lee, Chang-Bok;Yang, Sung-Hoon;Lee, Jong-Gu;Kong, Hyun-Dong
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.11A / pp.882-890 / 2009
  • In this paper, we describe a compensation method for the situation where a Loran receiver loses phase lock to the received Loran signals while those signals are being used to synchronize national infrastructure such as telecommunication networks and electric power distribution. When phase lock is lost, the receiver's internal oscillator starts free-running and the timing synchronization signals locked to its phase degrade severely, so the timing accuracy of under 1 us required of a Primary Reference Clock (PRC) by the International Telecommunication Union (ITU) G.811 recommendation can no longer be met. We therefore propose an algorithm that compensates the phase jump when a receiver loses phase lock, and we evaluate it with the Maximum Time Interval Error (MTIE) of the measured data; a sketch of the MTIE metric follows. The evaluation shows that the sub-1-us PRC requirement is easily met with the proposed algorithm, which reaches about 0.6 us with a mean smoothing interval of under 30 minutes over a 1-hour period when loss of phase lock occurs.
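
For reference, a minimal MTIE computation over a regularly sampled time-error series; the free-running-oscillator model in the usage lines is an assumption for illustration, not the paper's measurement data.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mtie(time_error: np.ndarray, window: int) -> float:
    """MTIE for one observation-window length (in samples): the largest
    peak-to-peak excursion of the time error within any such window."""
    w = sliding_window_view(time_error, window)
    return float((w.max(axis=1) - w.min(axis=1)).max())

# Illustrative use: a free-running oscillator modeled as a linear
# frequency offset plus noise, sampled once per second.
t = np.arange(3600)
x = 1e-4 * t + np.random.default_rng(0).normal(0.0, 0.01, t.size)
print(mtie(x, window=600))   # MTIE over a 10-minute window, in the units of x
```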

Radar rainfall prediction based on deep learning considering temporal consistency (시간 연속성을 고려한 딥러닝 기반 레이더 강우예측)

  • Shin, Hongjoon;Yoon, Seongsim;Choi, Jaemin
    • Journal of Korea Water Resources Association / v.54 no.5 / pp.301-309 / 2021
  • In this study, we tried to improve on an existing U-Net-based deep learning rainfall prediction model, whose structure can weaken the meaning of temporal order in the input sequence. To do so, we applied a ConvLSTM2D U-Net model that accounts for the temporal consistency of the data, and we evaluated its accuracy against the RainNet model and an extrapolation-based advection model. In addition, to reduce uncertainty in the training process, we trained not only a single model but also an ensemble of 10 models. The trained neural network was optimized to generate a 10-minute-ahead prediction from four consecutive fields covering the past 30 minutes. Visually, the outputs of the deep learning models are hard to tell apart, but the ConvLSTM2D U-Net yields the smallest prediction error and locates the rainfall relatively accurately. In particular, the ensemble ConvLSTM2D U-Net showed high CSI, low MAE, and a narrow error range, predicting rainfall more accurately and more stably than the other models. However, prediction performance at specific points was much lower than over the whole area, so the deep learning rainfall prediction models still have limitations. This study confirms that a ConvLSTM2D U-Net structure that accounts for temporal change can increase prediction accuracy, although convolutional deep neural networks remain limited by spatial smoothing in strong-rainfall regions and in detailed rainfall prediction; a minimal ConvLSTM2D example is shown below.
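
A minimal sketch of a ConvLSTM2D stack in Keras, assuming four input radar frames of 128x128 pixels with one channel; the layer widths and the plain encoder shape (not the authors' U-Net architecture) are illustrative.

```python
import tensorflow as tf

def build_model(frames=4, height=128, width=128):
    # Input: a short sequence of radar fields (time, H, W, channels).
    inputs = tf.keras.Input(shape=(frames, height, width, 1))
    x = tf.keras.layers.ConvLSTM2D(32, 3, padding="same",
                                   return_sequences=True)(inputs)
    x = tf.keras.layers.ConvLSTM2D(32, 3, padding="same",
                                   return_sequences=False)(x)
    # A 1x1 convolution maps the final hidden state to the predicted field.
    outputs = tf.keras.layers.Conv2D(1, 1, activation="relu")(x)
    return tf.keras.Model(inputs, outputs)

model = build_model()
model.compile(optimizer="adam", loss="mae")   # MAE is one of the reported scores
```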

Improvement of the Dose Calculation Accuracy Using MVCBCT Image Processing (Megavoltage Cone-Beam CT 영상의 변환을 이용한 선량 계산의 정확성 향상)

  • Kim, Min-Joo;Cho, Woong;Kang, Young-Nam;Suh, Tae-Suk
    • Progress in Medical Physics / v.23 no.1 / pp.62-69 / 2012
  • The dose re-calculation process using megavoltage cone-beam CT (MVCBCT) images is an unavoidable step in Adaptive Radiation Therapy (ART). The purpose of this study is to improve the accuracy of dose re-calculation on MVCBCT images by applying an intensity calibration method together with a three-dimensional rigid-body transform and filtering. The rigid-body transform and Gaussian smoothing were applied to MVCBCT images of a Rando phantom to reduce image-orientation error and image noise. To obtain the predefined mapping for the intensity calibration, cheese phantom images were acquired with both kilovoltage CT (kV CT) and MVCBCT, and a calibration table was derived from the relationship between the Hounsfield Units (HUs) of the two modalities at the same electron-density plugs; this idea is sketched below. The MVCBCT images of the Rando phantom were then calibrated with this table so that their intensities matched the range of the kV CT images, as if both had been acquired on the same modality. Finally, dose calculations based on kV CT and on MVCBCT with and without intensity calibration were performed in a radiation treatment planning system. The percentage difference in dose distributions between the kV CT-based calculation and the calibrated MVCBCT-based calculation was smaller than the difference between the kV CT-based and uncalibrated MVCBCT-based calculations; for head-and-neck and lung images, the percentage difference between kV CT and non-calibrated MVCBCT was 1.08% and 2.44%, respectively. In summary, our method quantitatively improved the accuracy of dose calculation and could be a useful way to enhance dose calculation accuracy with MVCBCT images.
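
A minimal sketch of the intensity-calibration idea: a piecewise-linear lookup that maps MVCBCT intensities onto the kV CT HU scale, built from paired readings at the same electron-density plugs. The plug values below are placeholders, not the paper's measurements.

```python
import numpy as np

# Paired mean intensities at the same electron-density plugs
# (placeholder values for illustration).
mvcbct_plugs = np.array([-950.0, -400.0, 0.0, 300.0, 900.0])
kvct_plugs = np.array([-980.0, -480.0, 0.0, 250.0, 1200.0])

def calibrate(mvcbct_image: np.ndarray) -> np.ndarray:
    """Piecewise-linear mapping of MVCBCT intensities onto the
    kV CT HU scale, interpolated between the paired plug readings."""
    return np.interp(mvcbct_image, mvcbct_plugs, kvct_plugs)

hu_like = calibrate(np.array([[-600.0, 150.0], [500.0, -100.0]]))
```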

Automatic Liver Segmentation of a Contrast Enhanced CT Image Using a Partial Histogram Threshold Algorithm (부분 히스토그램 문턱치 알고리즘을 사용한 조영증강 CT영상의 자동 간 분할)

  • Kyung-Sik Seo;Seung-Jin Park;Jong An Park
    • Journal of Biomedical Engineering Research / v.25 no.3 / pp.189-194 / 2004
  • Pixel values of contrast-enhanced computed tomography (CE-CT) images vary randomly, and the middle part of the liver is hard to separate because the pancreas has similar gray-level values in the abdomen. In this paper, an automatic liver segmentation method using a partial histogram threshold (PHT) algorithm is proposed to overcome the randomness of CE-CT images and to remove the pancreas. After histogram transformation, an adaptive multi-modal threshold is used to find the range of gray-level values of the liver structure, and the PHT algorithm is applied to remove the pancreas. Morphological filtering then removes unnecessary objects and smooths the boundary; this last stage is sketched below. Four CE-CT slices from each of eight patients were selected to evaluate the proposed method. The averages of the normalized average area for the automatic segmentation method II (ASM II) using the PHT and for the manual segmentation method (MSM) were 0.1671 and 0.1711, a very small difference, and the average area error rate between ASM II and MSM was 6.8339%. The experiments show that the proposed method performs about as well as manual segmentation by a medical doctor.
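
A minimal sketch of the final stages of such a pipeline: thresholding to a gray-level range, morphological cleanup, and keeping the largest connected component. The threshold bounds and iteration counts are assumed; the histogram transformation and the PHT step themselves are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def segment_liver(ct_slice: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Threshold to the liver gray-level range, clean up with morphology,
    and keep the largest connected component as the liver candidate."""
    mask = (ct_slice >= lo) & (ct_slice <= hi)
    mask = ndimage.binary_opening(mask, iterations=2)   # drop small objects
    mask = ndimage.binary_closing(mask, iterations=2)   # smooth the boundary
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```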

Analysis of the MODIS-Based Vegetation Phenology Using the HANTS Algorithm (HANTS 알고리즘을 이용한 MODIS 영상기반의 식물계절 분석)

  • Choi, Chul-Hyun;Jung, Sung-Gwan
    • Journal of the Korean Association of Geographic Information Studies / v.17 no.3 / pp.20-38 / 2014
  • Vegetation phenology is among the most important indicators of ecosystem response to climate change, so forest phenology must be monitored continuously. This paper analyzes the phenological characteristics of forests in South Korea using the MODIS vegetation index, with errors from clouds and other sources removed by the HANTS algorithm (its harmonic core is sketched below). After using HANTS to reduce the noise in the satellite-based vegetation index data, we confirmed that phenological transition dates vary strongly along altitudinal gradients. The start, end, and length of the growing season were estimated to vary by +0.71 day/100 m, -1.33 day/100 m, and -2.04 day/100 m in needleleaf forests; +1.50 day/100 m, -1.54 day/100 m, and -3.04 day/100 m in broadleaf forests; and +1.39 day/100 m, -2.04 day/100 m, and -3.43 day/100 m in mixed forests. The variation along altitudinal gradients followed a linear pattern related to air temperature, and broadleaf forests proved more sensitive to temperature changes than needleleaf forests.
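
A minimal sketch of the harmonic-fitting core of HANTS, assuming day-indexed samples of an annual-period series and that cloud-contaminated values are biased low; the number of harmonics, iteration count, and rejection threshold are illustrative.

```python
import numpy as np

def hants_fit(t, y, period=365.0, n_harmonics=2, n_iter=3, tol=0.05):
    """Fit a mean plus a few annual harmonics by least squares,
    iteratively dropping points that fall well below the fit."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t / period),
                 np.sin(2 * np.pi * k * t / period)]
    A = np.column_stack(cols)
    keep = np.ones_like(y, dtype=bool)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        fit = A @ coef
        keep &= (fit - y) < tol   # reject points far below the fit
    return fit
```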

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction, but they have not produced superior performance. In recent years, machine learning techniques, including artificial neural networks, SVM, and genetic algorithms, have been widely used for stock market prediction. In particular, a case-based reasoning method known as k-nearest neighbor (k-NN) is also widely used for stock price prediction. Case-based reasoning retrieves several cases similar to a new problem from previous cases and combines the class labels of the similar cases to classify the new problem. However, it has some weaknesses. First, it searches for a fixed number of neighbors in the observation space and always selects that same number rather than the neighbors genuinely most similar to the target case, so it may take more cases into account even when fewer are applicable to the subject. Second, it may select neighbors that are far from the target case. Case-based reasoning therefore does not guarantee an optimal pseudo-neighborhood for every target case, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability with k-NN and compares k-NN against the random walk model for different learning-data sizes and numbers of neighbors; a minimal version of this comparison is sketched below. Samsung Electronics stock prices were predicted using two learning datasets, with four variables for predicting the next day's closing price: opening price, daily high, daily low, and daily close. One experiment used learning data from January 1, 2000 to December 31, 2017; the other used learning data from January 1, 2015 to December 31, 2017; the test data for both ran from January 1, 2018 to August 31, 2018. With the smaller learning dataset, the mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN; with the larger learning dataset, the random walk MAPE was again 1.3497 while k-NN improved to 1.2928. These results show that predictive power is higher when more learning data are used: k-NN generally outperforms the random walk model on larger learning datasets but not on relatively small ones. Future studies should consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices, and k-NN may produce better results if nearest neighbors are found with a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
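
A minimal sketch of the comparison described above, using scikit-learn's KNeighborsRegressor; the feature layout (today's close in the last column) and the choice of k are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def compare(X_train, y_train, X_test, y_test, k=5):
    """MAPE of k-NN regression versus a random-walk baseline that
    predicts tomorrow's close as today's close (last feature column)."""
    knn = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    return mape(y_test, knn.predict(X_test)), mape(y_test, X_test[:, -1])
```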

Comparison of Forest Carbon Stocks Estimation Methods Using Forest Type Map and Landsat TM Satellite Imagery (임상도와 Landsat TM 위성영상을 이용한 산림탄소저장량 추정 방법 비교 연구)

  • Kim, Kyoung-Min;Lee, Jung-Bin;Jung, Jaehoon
    • Korean Journal of Remote Sensing / v.31 no.5 / pp.449-459 / 2015
  • The conventional National Forest Inventory (NFI)-based forest carbon stock estimation method is suitable for national-scale estimation but not for regional-scale estimation, because NFI plots are too sparse. In this study, for regional-scale carbon stock estimation, we created grid-based forest carbon stock maps using spatial ancillary data and two up-scaling methods, taking Chungnam province as the study area and using 5th NFI (2006~2009) data. The first method (method 1) uses the forest type map as ancillary data and a regression model for carbon stock estimation, whereas the second method (method 2) uses satellite imagery and the k-Nearest Neighbor (k-NN) algorithm. To account for uncertainty, the final AGB carbon stock maps were generated from 200 iterations of a Monte Carlo simulation; this step is sketched below. Compared to the NFI-based estimate (21,136,911 tonC), the total carbon stock was over-estimated by method 1 (22,948,151 tonC) and under-estimated by method 2 (19,750,315 tonC). In a paired t-test with 186 independent data points, the average carbon stock estimated by the NFI-based method differed statistically from method 2 (p<0.01) but not from method 1 (p>0.01). In particular, the Monte Carlo simulation showed that the smoothing effect of the k-NN algorithm and mis-registration between NFI plots and the satellite image can introduce large uncertainty into carbon stock estimation. Although method 1 was found suitable for the heterogeneous forest stands of Korea, a satellite-based method is still needed to provide periodic estimates over large, uninvestigated forest areas. Future work will therefore extend the spatial and temporal scope of the study and pursue robust carbon stock estimation with various satellite images and estimation methods.
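
A minimal sketch of the Monte Carlo step around a k-NN estimate: perturb plot-level carbon values with an assumed multiplicative error model, re-fit and re-predict each iteration, and summarize the spread. The error magnitude, k, and the error model itself are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def mc_carbon_stock(X_plots, c_plots, X_grid, n_iter=200, sigma=0.1, k=5):
    """Mean and standard deviation of the total carbon stock over
    n_iter Monte Carlo perturbations of the plot-level values."""
    rng = np.random.default_rng(0)
    totals = np.empty(n_iter)
    for i in range(n_iter):
        noisy = c_plots * rng.normal(1.0, sigma, size=c_plots.shape)
        knn = KNeighborsRegressor(n_neighbors=k).fit(X_plots, noisy)
        totals[i] = knn.predict(X_grid).sum()
    return totals.mean(), totals.std()
```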