• Title/Summary/Keyword: error model


Analysis of Price Fluctuation Factors in the Vessel Demolition Market : Focusing on India & Bangladesh (선박 해체시장 가격 변동 요인 분석 : 인디아, 방글라데시를 중심으로)

  • Lee ChongWoo;Jang Chul-Ho
    • Journal of Korea Port Economic Association / v.39 no.4 / pp.243-254 / 2023
  • This study investigates the factors contributing to price fluctuations in the shipscrapping market, the final stage in a vessel's life cycle. Shipping companies decide to dismantle ships in response to factors such as declining freight rates, rising costs as vessels age, or compliance with new environmental regulations. Using the FMOLS (Fully Modified Ordinary Least Squares) and VECM (Vector Error Correction Model) methodologies, the study estimates the long-term elasticities of factors influencing shipscrapping prices and examines short-term causal relationships. The time series dataset spans December 2015 to April 2023, a total of 90 months, and covers the shipscrapping prices of Capesize vessels in India and Bangladesh, which account for a significant portion of the shipbreaking market. The findings indicate that, in the long term, shipscrapping prices are closely related to global scrap prices, prices of 20-year-old secondhand Capesize vessels, newbuilding prices, and exchange rates. In the short term, an increase in global scrap prices induces a rise in shipscrapping prices, while the remaining variables do not. Conversely, an increase in shipscrapping prices is associated with increases in 20-year-old secondhand vessel prices, newbuilding prices, and exchange rates, whereas the other variables show no significant short-term response.
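The error-correction logic behind this kind of analysis can be sketched in two steps on synthetic data (the series and coefficients below are illustrative, not the paper's FMOLS/VECM estimates): a long-run cointegrating regression, then a short-run regression of differences on the lagged error-correction term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the paper's data): a scrap-price-like
# regressor x drives shipscrapping prices y with a long-run relationship.
n = 90                                      # 90 monthly observations, as in the study
x = np.cumsum(rng.normal(size=n))           # random-walk regressor
y = 2.0 + 0.8 * x + rng.normal(scale=0.3, size=n)

# Step 1: long-run (cointegrating) regression y_t = a + b*x_t + u_t
X = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - (a + b * x)                         # error-correction term

# Step 2: short-run dynamics dy_t = c + gamma*u_{t-1} + d*dx_t + e_t
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(n - 1), u[:-1], dx])
c, gamma, d = np.linalg.lstsq(Z, dy, rcond=None)[0]

print(round(b, 2), gamma < 0)   # long-run elasticity; gamma < 0 means deviations correct
```

A negative `gamma` is the signature of error correction: when prices sit above their long-run relation, the next-period change pulls them back.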

A study of Brachytherapy for Intraocular Tumor (안구내 악성종양에 대한 저준위 방사선요법에 관한 연구)

  • Ji, Gwang-Su;Yu, Dae-Heon;Lee, Seong-Gu;Kim, Jae-Hyu;Ji, Yeong-Hun
    • The Journal of Korean Society for Radiation Therapy / v.8 no.1 / pp.19-27 / 1996
  • I. Project Title: A Study of Brachytherapy for Intraocular Tumor. II. Objective and Importance of the Project: Eye enucleation and external-beam radiation therapy, the treatments commonly used for intraocular tumors, have the drawbacks of visual loss and an insufficient effective tumor dose. Recently, brachytherapy using plaques containing a radioisotope, a treatment method that reduces these drawbacks and increases treatment effectiveness, has been introduced and performed in several countries. The purpose of this research is to design a plaque shape suitable for ophthalmic brachytherapy, to measure the absorbed doses of an Ir-192 ophthalmic plaque, and thereby to calculate the exact radiation dose to the tumor and its adjacent normal tissue. III. Scope and Contents of the Project: To perform brachytherapy for intraocular tumors, we set out 1. to define the eye model and select a suitable radioisotope, 2. to design a suitable plaque shape, 3. to measure the transmission factor and dose distribution of the custom-made plaques, and 4. to compare these data with the results of computer dose calculation models. IV. Results and Proposal for Applications: The results were as follows. 1. The eye model was defined as a sphere 25 mm in diameter, and Ir-192 was considered the most appropriate radioisotope for brachytherapy because of its size, half-life, energy, and availability. 2. Considering the biological response of human tissue and protection against exposure, we made the plaques of gold, 15 mm, 17 mm, and 20 mm in diameter and 1.5 mm in thickness. 3. The transmission factor of the plaques was 0.71 at the plaque surface by both TLD and film dosimetry, and 0.45 and 0.49, respectively, at 1.5 mm from the surface. 4. Compared with the results of the computer dose calculation model of Gary Luxton et al. and CAP-PLAN (a radiation treatment planning system), the measured absorbed doses for the plaque with Ir-192 seeds agree within ${\pm}10\%$ and distance deviations within 0.4 mm; the maximum errors are $-11.3\%$ and 0.8 mm, respectively. As a result, intraocular tumors can be treated more effectively by using custom-made gold plaques and Ir-192 seeds.
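The role of the reported transmission factor can be illustrated with a back-of-envelope point-source sketch (purely illustrative; real ophthalmic dosimetry is done seed by seed, as in the CAP-PLAN comparison above, and the reference distance here is an assumption):

```python
# Illustrative only: inverse-square falloff from a point source, scaled by
# the measured transmission factor behind the gold plaque (TF = 0.71 at
# the plaque surface, from the study's TLD/film dosimetry).
def relative_dose(r_mm, r_ref_mm=5.0, transmission=1.0):
    """Dose at distance r relative to the dose at a reference depth."""
    return transmission * (r_ref_mm / r_mm) ** 2

# same geometry in front of the plaque vs. directly behind it
front = relative_dose(10.0)
behind = relative_dose(10.0, transmission=0.71)
print(round(behind / front, 2))   # 0.71, the surface transmission factor
```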


Estimation of Reference Crop Evapotranspiration Using Backpropagation Neural Network Model (역전파 신경망 모델을 이용한 기준 작물 증발산량 산정)

  • Kim, Minyoung;Choi, Yonghun;O'Shaughnessy, Susan;Colaizzi, Paul;Kim, Youngjin;Jeon, Jonggil;Lee, Sangbong
    • Journal of The Korean Society of Agricultural Engineers / v.61 no.6 / pp.111-121 / 2019
  • Evapotranspiration (ET) of vegetation is one of the major components of the hydrologic cycle, and its accurate estimation is important for hydrologic water balance, irrigation management, crop yield simulation, and water resources planning and management. For agricultural crops, ET is often calculated in terms of a short or tall crop reference, such as well-watered, clipped grass (reference crop evapotranspiration, $ET_o$). The Penman-Monteith equation recommended by FAO (FAO 56-PM) has been accepted by researchers and practitioners as the sole $ET_o$ method. However, its accuracy is contingent on high-quality measurements of four meteorological variables, and its use has been limited by incomplete and/or inaccurate input data. Therefore, this study evaluated the applicability of a Backpropagation Neural Network (BPNN) model for estimating $ET_o$ from fewer meteorological variables than the FAO 56-PM requires. A total of six meteorological inputs, minimum temperature, average temperature, maximum temperature, relative humidity, wind speed, and solar radiation, were divided into a series of input groups (combinations of one, two, three, four, five, and six variables), and each combination was evaluated for its accuracy in estimating $ET_o$. The overall findings indicated that $ET_o$ could be reasonably estimated with a BPNN using fewer than all six meteorological variables. In addition, the proper choice of neural network architecture could not only minimize the computational error but also maximize the relationship between dependent and independent variables. These findings would be of use in instances where data availability and/or accuracy are limited.
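The backpropagation mechanism at the core of a BPNN can be sketched with a one-hidden-layer network on toy data (the inputs, target function, and architecture below are made up for illustration, not the paper's $ET_o$ model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the task: map a few weather-like inputs to a scalar
# target with a one-hidden-layer network trained by backpropagation.
X = rng.uniform(-1, 1, size=(200, 3))            # e.g. scaled Tmin, RH, Rs
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2])[:, None]

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    return h, h @ W2 + b2          # linear output layer

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)

for _ in range(500):
    h, out = forward(X)
    err = (out - y) / len(X)                 # dMSE/dout (up to a constant)
    dh = err @ W2.T * (1 - h ** 2)           # backprop through tanh
    W2 -= lr * h.T @ err; b2 -= lr * err.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

_, out1 = forward(X)
loss1 = np.mean((out1 - y) ** 2)
print(loss1 < loss0)   # training reduces the MSE
```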

Estimation of Soil Moisture Using Sentinel-1 SAR Images and Multiple Linear Regression Model Considering Antecedent Precipitations (선행 강우를 고려한 Sentinel-1 SAR 위성영상과 다중선형회귀모형을 활용한 토양수분 산정)

  • Chung, Jeehun;Son, Moobeen;Lee, Yonggwan;Kim, Seongjoon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.515-530 / 2021
  • This study estimates soil moisture (SM) using Sentinel-1A/B C-band SAR (synthetic aperture radar) images and a multiple linear regression model (MLRM) in the Yongdam-Dam watershed of South Korea. Sentinel-1A and -1B images (6-day interval, 10 m resolution) were collected over 5 years, from 2015 to 2019. Geometric, radiometric, and noise corrections were performed using the SNAP (SentiNel Application Platform) software, and the images were converted to backscattering coefficients (BSC) in VV and VH polarization. In-situ SM data measured at 6 locations using TDR were used to validate the estimated SM. Antecedent precipitation data for the preceding 5 days were also collected to overcome the estimation difficulty in vegetated areas where the radar signal does not reach the ground. MLRM modeling was performed on yearly and seasonal datasets, and correlation analysis was performed according to the number of independent variables. The estimated SM was verified against observed SM using the coefficient of determination (R2) and the root mean square error (RMSE). Using only BSC in the grass area, R2 was 0.13 and RMSE was 4.83%. When the 5-day antecedent precipitation data were added, R2 rose to 0.37 and RMSE fell to 4.11%. With the use of dry days and seasonal regression equations to reflect the decreasing pattern and seasonal variability of SM, the correlation increased significantly, with an R2 of 0.69 and an RMSE of 2.88%.
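The MLRM step and its R2/RMSE scoring can be sketched as follows (all values are synthetic stand-ins for the Yongdam-Dam observations; the coefficients and noise levels are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-ins: regress soil moisture on VV backscatter plus the
# 5-day antecedent precipitation term, then score with R2 and RMSE.
n = 120
bsc_vv = rng.normal(-12, 2, n)                 # VV backscatter, dB
p5 = rng.gamma(2.0, 3.0, n)                    # 5-day antecedent precipitation, mm
sm = 20 + 0.8 * bsc_vv + 0.6 * p5 + rng.normal(scale=2.0, size=n)  # SM, %

X = np.column_stack([np.ones(n), bsc_vv, p5])  # intercept + predictors
coef, *_ = np.linalg.lstsq(X, sm, rcond=None)
pred = X @ coef

ss_res = ((sm - pred) ** 2).sum()
ss_tot = ((sm - sm.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((sm - pred) ** 2))
print(r2 > 0.5, rmse < 3.0)
```

Adding the antecedent-precipitation column is exactly the move that lifted R2 from 0.13 to 0.37 in the study: it supplies information the backscatter alone cannot carry under vegetation.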

Generation of Daily High-resolution Sea Surface Temperature for the Seas around the Korean Peninsula Using Multi-satellite Data and Artificial Intelligence (다종 위성자료와 인공지능 기법을 이용한 한반도 주변 해역의 고해상도 해수면온도 자료 생산)

  • Jung, Sihun;Choo, Minki;Im, Jungho;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.707-723 / 2022
  • Although satellite-based sea surface temperature (SST) data are advantageous for monitoring large areas, spatiotemporal data gaps frequently occur due to various environmental or mechanical causes, so filling those gaps is crucial to maximizing usability. In this study, daily SST composite fields with a resolution of 4 km were produced through a two-step machine learning approach using polar-orbiting and geostationary satellite SST data. The first step was SST reconstruction with a Data Interpolating Convolutional AutoEncoder (DINCAE) using multi-satellite-derived SST data. The second step corrected the reconstructed SST toward in situ measurements with a light gradient boosting machine (LGBM) to produce the final daily composite fields. The DINCAE model was validated using random masks for 50 days, whereas the LGBM model was evaluated using leave-one-year-out cross-validation (LOYOCV). The SST reconstruction accuracy was high, with an R2 of 0.98 and a root-mean-square error (RMSE) of 0.97℃. The improvement from the second step was also large when compared to in situ measurements: an RMSE decrease of 0.21-0.29℃ and an MAE decrease of 0.17-0.24℃. The SST composite fields generated using all in situ data were comparable with existing data-assimilated SST composite fields. In addition, the LGBM model in the second step greatly reduced the overfitting that was reported as a limitation of the random forest used in a previous study. The spatial distribution of the corrected SST was similar to that of existing high-resolution SST composites, showing that spatial details of oceanic phenomena such as fronts, eddies, and SST gradients were well simulated. This research demonstrates the potential to produce high-resolution, seamless SST composite fields using multi-satellite data and artificial intelligence.
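The LOYOCV evaluation scheme used for the second step can be sketched generically (synthetic data and a simple linear fit stand in for the LGBM model; one fold is held out per year, exactly mirroring the 2015-2019 split structure):

```python
import numpy as np

rng = np.random.default_rng(2)

# Leave-one-year-out cross-validation (LOYOCV) sketch: each year is held
# out once, a model is fit on the remaining years, and the held-out year
# is scored. Data and model are synthetic stand-ins.
years = np.repeat(np.arange(2015, 2020), 50)        # 5 years of samples
x = rng.normal(size=len(years))
y = 1.5 * x + rng.normal(scale=0.2, size=len(years))

rmses = []
for held_out in np.unique(years):
    train, test = years != held_out, years == held_out
    a, b = np.polyfit(x[train], y[train], 1)        # fit on training years only
    pred = a * x[test] + b
    rmses.append(np.sqrt(np.mean((pred - y[test]) ** 2)))

print(len(rmses))   # one RMSE per held-out year
```

Holding out whole years, rather than random samples, prevents temporally adjacent (and thus correlated) observations from leaking between train and test, which is why it exposes overfitting that random splits hide.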

Geomagnetic Paleosecular Variation in the Korean Peninsula during the First Six Centuries (기원후 600년간 한반도 지구 자기장 고영년변화)

  • Park, Jong kyu;Park, Yong-Hee
    • The Journal of Engineering Geology / v.32 no.4 / pp.611-625 / 2022
  • One application of geomagnetic paleosecular variation (PSV) is the age dating of archeological remains (the archeomagnetic dating technique). This application requires a local PSV model that reflects the regional differences of non-dipole fields. Until now, the tentative Korean paleosecular variation curve (t-KPSV), calculated from JPSV (SW Japanese PSV), has been applied as the reference curve for individual archeomagnetic directions in Korea, but it is less reliable because of regional differences in the non-dipole magnetic field. Here, we present PSV curves for AD 1 to 600, corresponding to the Korean Three Kingdoms (including the Proto Three Kingdoms) period, using the results of archeomagnetic studies on the Korean Peninsula and published research data, and we compare our PSV with global geomagnetic prediction models and the t-KPSV. A total of 49 reliable archeomagnetic directional datasets from 16 regions were compiled for our PSV. Each dataset satisfied the statistical criteria (N > 6, 𝛼95 < 7.8°, and k > 57.8) and had radiocarbon or archeological ages in the range AD 1 to 600 with error ranges of less than ±200 years. The compiled PSV for the first six centuries (KPSV0.6k) showed declinations of 341.7° to 20.1° and inclinations of 43.5° to 60.3°. Compared to the t-KPSV, our curve revealed different variation patterns in both declination and inclination. On the other hand, KPSV0.6k and the global geomagnetic prediction models (ARCH3K.1, CALS3K.4, and SED3K.1) showed consistent variation trends over the first six centuries, with ARCH3K.1 fitting KPSV0.6k best. These results indicate that the contribution of the non-dipole field differs considerably between Korea and Japan despite their geographical proximity, and that compiling archeomagnetic data from Korean territory is essential to building a reliable PSV curve for use as an age-dating tool. Lastly, we double-checked the reliability of KPSV0.6k by showing that newly acquired, age-controlled archeomagnetic data fit well on our curve.
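The acceptance criteria quoted above (N > 6, 𝛼95 < 7.8°, k > 57.8) are Fisher statistics for clustered paleomagnetic directions; a minimal sketch of their computation, on a synthetic direction cluster rather than the study's data, looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)

def fisher_stats(decs, incs, p=0.05):
    """Fisher (1953) statistics for paleomagnetic directions (degrees)."""
    d, i = np.radians(decs), np.radians(incs)
    # unit vectors: x north, y east, z down
    x = np.cos(i) * np.cos(d); y = np.cos(i) * np.sin(d); z = np.sin(i)
    N = len(d)
    R = np.sqrt(x.sum() ** 2 + y.sum() ** 2 + z.sum() ** 2)  # resultant length
    k = (N - 1) / (N - R)                                    # precision parameter
    cos_a95 = 1 - (N - R) / R * ((1 / p) ** (1 / (N - 1)) - 1)
    return k, np.degrees(np.arccos(cos_a95))

# tight synthetic cluster around D = 10°, I = 50° (illustrative only)
decs = 10 + rng.normal(scale=2, size=8)
incs = 50 + rng.normal(scale=2, size=8)
k, a95 = fisher_stats(decs, incs)
print(k > 57.8, a95 < 7.8)   # the paper's acceptance thresholds
```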

Control of pH Neutralization Process using Simulation Based Dynamic Programming in Simulation and Experiment (ICCAS 2004)

  • Kim, Dong-Kyu;Lee, Kwang-Soon;Yang, Dae-Ryook
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.620-626 / 2004
  • For general nonlinear processes, control with linear model-based methods is difficult, so nonlinear control must be considered. Among the numerous approaches suggested, the most rigorous is dynamic optimization. Many general engineering problems, such as control, scheduling, and planning, are expressed as functional optimization problems, and most can be cast as dynamic programming (DP) problems. However, DP is used in only a few cases because, as the size of the problem grows, the approach suffers from a computational burden known as the 'curse of dimensionality'. To avoid this problem, the Neuro-Dynamic Programming (NDP) approach was proposed by Bertsekas and Tsitsiklis (1996). Interest in NDP for seriously nonlinear process control has grown, and the algorithm has been applied to diverse areas such as retailing, finance, inventory management, and communication networks, and has been extended to chemical engineering. In the NDP approach, the optimal control input policy is selected to minimize a cost value calculated as the sum of the current stage cost and the cost of future stages starting from the next state; the cost is a weighted squared sum of error and input movement. If, during the calculation of the optimal input policy, an approximate cost function built from simulation data is used in the Bellman iteration, the computational burden is relieved and the curse of dimensionality can be overcome. How to construct a cost-to-go function with good approximation performance is a very important issue. A neural network is an eager learning method that works as a global approximator of the cost-to-go function; training the network is the important and difficult part of the algorithm and significantly affects control performance. To avoid the difficulty of neural network training, a lazy learning method such as the k-nearest neighbor method can be exploited: it requires no training but demands more computation time and greater data storage. The pH neutralization process has long been a representative benchmark problem in nonlinear chemical process control because of its nonlinearity and time-varying nature. In this study, the NDP algorithm was applied to the pH neutralization process. First, control using the NDP algorithm was examined in simulations with various approximators, using both global and local approximators for the NDP calculation. The algorithm was then verified on the real system in a pH neutralization experiment. In both simulations and experiments, the control results of the NDP algorithm were compared with those of the traditionally used PI controller. The comparison showed that NDP control was faster and performed better than the PI controller, and it also gave good results in cases with disturbances and multiple set-point changes.
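The Bellman recursion at the heart of NDP can be sketched with plain value iteration on a toy plant (a scalar linear system on discretized state and input grids; the dynamics, costs, and discount factor below are illustrative assumptions, not the pH process model):

```python
import numpy as np

# Toy stand-in for the Bellman iteration behind NDP: scalar plant
# x' = a*x + b*u with a quadratic stage cost (error + input movement),
# states and inputs discretized on grids, cost-to-go stored on the grid.
a, b_gain = 0.9, 0.5
xs = np.linspace(-2, 2, 81)                  # state grid (xs[40] == 0)
us = np.linspace(-1, 1, 21)                  # input grid
X, U = np.meshgrid(xs, us, indexing="ij")
Xn = np.clip(a * X + b_gain * U, xs[0], xs[-1])   # next states
stage = X ** 2 + 0.1 * U ** 2                # weighted squared error + input cost

V = np.zeros_like(xs)                        # cost-to-go approximation
for _ in range(300):                         # Bellman iterations
    Vn = np.interp(Xn, xs, V)                # interpolate cost-to-go at next states
    V_new = (stage + 0.95 * Vn).min(axis=1)  # minimize over inputs
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print(V[40] < V[0])   # cost-to-go is smallest near the setpoint
```

NDP replaces the exact grid table `V` with a trained approximator (a neural network, or a lazy k-NN lookup) so the same recursion scales beyond small state spaces.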


Construction of a reference stature growth curve using spline function and prediction of final stature in Korean (스플라인 함수를 이용한 한국인 키 기준 성장 곡선 구성과 최종 키 예측 연구)

  • An, Hong-Sug;Lee, Shin-Jae
    • The Korean Journal of Orthodontics / v.37 no.1 s.120 / pp.16-28 / 2007
  • Objective: Evaluation of individual growth is important in orthodontics. The aim of this study was to develop convenient software that can evaluate current growth status and predict further growth. Methods: Stature data of 2- to 20-year-old Koreans (4,893 boys and 4,987 girls) were extracted from a nationwide dataset. Age- and sex-specific continuous functions describing percentile growth curves were constructed using a natural cubic spline function (NCSF). A final stature prediction algorithm was then developed, and its validity was tested using longitudinal series of stature measurements on 200 randomly selected samples. Various accuracy measurements and analyses of the errors between observed and predicted stature using the NCSF growth curves were performed. Results: NCSF growth curves proved to be excellent models for describing reference percentile stature growth curves over age. The prediction accuracy compared favorably with previous prediction models and was even more accurate, with better results in girls than in boys. Although the prediction accuracy was high, the error pattern of the validation data showed that in most cases many residuals shared the same sign, suggestive of autocorrelation among them. Conclusion: A more sophisticated growth prediction algorithm is warranted to achieve a better goodness of fit for individual growth.
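A natural cubic spline of the kind used for the NCSF curves can be implemented in a few lines by solving for the second derivatives at the knots (the age/stature knot values below are made up for illustration, not the nationwide survey percentiles):

```python
import numpy as np

# Illustrative age/stature knots (not the study's data).
ages = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 20.0])
stature = np.array([88.0, 115.0, 138.0, 163.0, 174.0, 175.0])

def natural_cubic_spline(x, y):
    """Interpolating natural cubic spline through knots (x, y)."""
    n = len(x); h = np.diff(x)
    A = np.zeros((n, n)); rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0        # natural boundary: zero curvature at ends
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)      # second derivatives at the knots

    def evaluate(t):
        i = int(np.clip(np.searchsorted(x, t) - 1, 0, n - 2))
        dx, dt, hi = x[i + 1] - t, t - x[i], h[i]
        return (M[i] * dx**3 + M[i + 1] * dt**3) / (6 * hi) \
             + (y[i] / hi - M[i] * hi / 6) * dx \
             + (y[i + 1] / hi - M[i + 1] * hi / 6) * dt
    return evaluate

spline = natural_cubic_spline(ages, stature)
print(round(float(spline(10.0)), 1))   # reproduces the knot value exactly
```

Percentile curves built this way are smooth (continuous second derivative) yet pass exactly through the reference values at each age, which is what makes them usable both for plotting a child's current status and for extrapolating along a percentile track.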

Onset of Natural Convection in Transient Hot Wire Device for Measuring Thermal Conductivity of Nanofluids (비정상열선법을 이용한 나노유체 열전도도 측정 시 자연대류 개시점에 대한 연구)

  • Lee, Seung-Hyun;Kim, Hyun-Jin;Jang, Seok-Pil
    • Transactions of the Korean Society of Mechanical Engineers B / v.35 no.3 / pp.279-285 / 2011
  • We performed a numerical study to determine the time of onset of natural convection in a transient hot wire (THW) device for measuring the thermal conductivity of nanofluids. The samples used in the simulation are water-based $Al_2O_3$ nanofluids with volume fractions of 1%, 4%, and 10%, with properties calculated from theoretical models and experimental correlations. The THW apparatus using a coated wire is modeled by the control-volume-based finite difference method, and the start of natural convection is determined by observing the temperature rise of the wire under a gravity field. The onset time is 11.5 s for water and 41.6 s for the water-based $Al_2O_3$ nanofluid with a 10% volume fraction, as predicted using the Maxwell thermal conductivity model. We confirm that the onset time of natural convection of nanofluids in the cylinder increases with the nanoparticle volume fraction, and we suggest a correlation for predicting the onset time on the basis of the numerical results. Finally, it is shown that the measurement error due to natural convection is negligible if the transient hot wire measurement is completed before the onset of natural convection in the base fluid.
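Why the onset time matters can be seen from the THW working principle: before convection starts, the line-source solution gives a wire temperature rise linear in ln(t), and conductivity follows from the slope. A minimal sketch on idealized values (illustrative, not the paper's measurements):

```python
import numpy as np

# Transient hot-wire principle: for a line source, dT = (q / 4*pi*k) * ln(t) + C,
# so the conductivity k follows from the slope of dT versus ln(t).
k_true = 0.6          # W/m-K, roughly water (illustrative)
q = 1.0               # W/m, heating power per unit wire length (illustrative)
t = np.linspace(1.0, 10.0, 50)                     # s, before convection onset
dT = q / (4 * np.pi * k_true) * np.log(t) + 0.05   # idealized wire response

slope = np.polyfit(np.log(t), dT, 1)[0]
k_est = q / (4 * np.pi * slope)
print(round(k_est, 3))   # recovers k_true = 0.6
```

Once natural convection begins, the dT-vs-ln(t) trace bends away from this straight line, biasing the slope; that is exactly why the measurement must finish before the onset time the study computes.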

COMPARISON OF LINEAR AND NON-LINEAR NIR CALIBRATION METHODS USING LARGE FORAGE DATABASES

  • Berzaghi, Paolo;Flinn, Peter C.;Dardenne, Pierre;Lagerholm, Martin;Shenk, John S.;Westerhaus, Mark O.;Cowe, Ian A.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1141-1141 / 2001
  • The aim of the study was to evaluate the performance of 3 calibration methods, modified partial least squares (MPLS), local PLS (LOCAL) and artificial neural network (ANN) on the prediction of chemical composition of forages, using a large NIR database. The study used forage samples (n=25,977) from Australia, Europe (Belgium, Germany, Italy and Sweden) and North America (Canada and U.S.A) with information relative to moisture, crude protein and neutral detergent fibre content. The spectra of the samples were collected with 10 different Foss NIR Systems instruments, which were either standardized or not standardized to one master instrument. The spectra were trimmed to a wavelength range between 1100 and 2498 nm. Two data sets, one standardized (IVAL) and the other not standardized (SVAL) were used as independent validation sets, but 10% of both sets were omitted and kept for later expansion of the calibration database. The remaining samples were combined into one database (n=21,696), which was split into 75% calibration (CALBASE) and 25% validation (VALBASE). The chemical components in the 3 validation data sets were predicted with each model derived from CALBASE using the calibration database before and after it was expanded with 10% of the samples from IVAL and SVAL data sets. Calibration performance was evaluated using standard error of prediction corrected for bias (SEP(C)), bias, slope and R2. None of the models appeared to be consistently better across all validation sets. VALBASE was predicted well by all models, with smaller SEP(C) and bias values than for IVAL and SVAL. This was not surprising as VALBASE was selected from the calibration database and it had a sample population similar to CALBASE, whereas IVAL and SVAL were completely independent validation sets. 
In most cases, the LOCAL and ANN models, but not MPLS, showed considerable improvement in the prediction of IVAL and SVAL after the calibration database had been expanded with the 10% of IVAL and SVAL samples reserved for calibration expansion. The effects of sample processing, instrument standardization, and differences in reference procedures were partially confounded in the validation sets, so it was not possible to determine which factors were most important. Further work on the development of large databases must address the problems of instrument standardization, the harmonization and standardization of laboratory procedures and, even more importantly, the definition of the database population.
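The study's main figure of merit, the bias-corrected standard error of prediction SEP(C), is simply the standard deviation of the prediction errors after removing their mean; a minimal sketch on synthetic reference/predicted values (the bias and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic reference vs. NIR-predicted values with a deliberate 0.5 bias.
ref = rng.uniform(10, 30, size=100)                  # e.g. crude protein, % DM
pred = ref + 0.5 + rng.normal(scale=1.0, size=100)   # biased, noisy predictions

e = pred - ref
bias = e.mean()
sep_c = np.sqrt(((e - bias) ** 2).sum() / (len(e) - 1))  # SEP corrected for bias
print(round(bias, 1), sep_c < 2.0)
```

Reporting bias and SEP(C) separately, as the study does, distinguishes a constant calibration offset (correctable by slope/bias adjustment) from genuine scatter that only a better model can reduce.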
