• Title/Summary/Keyword: Error variance

A Study on Mechanical Errors in Cone Beam Computed Tomography (CBCT) System (콘빔 전산화단층촬영(CBCT) 시스템에서 기계적 오류에 관한 연구)

  • Lee, Yi-Seong; Yoo, Eun-Jeong; Kim, Seung-Keun; Choi, Kyoung-Sik; Lee, Jeong-Woo; Suh, Tae-Suk; Kim, Joeng-Koo
    • Journal of radiological science and technology / v.36 no.2 / pp.123-129 / 2013
  • This study investigated the setup variance caused by rotational imbalance of the gantry in image-guided radiation therapy. The equipment comprised a linear accelerator (Elekta Synergy TM, UK) and the three-dimensional volume imaging mode (3D Volume View) of its cone beam computed tomography (CBCT) system. 2D images obtained over $360^{\circ}$ and $180^{\circ}$ rotations were reconstructed into 3D images. A Catphan503 phantom and a homogeneous phantom were used to measure the setup errors, and a ball-bearing phantom was used to check the rotation axis of the CBCT. The volume images from CBCT of the Catphan503 and homogeneous phantoms were analyzed and compared to images from conventional CT in the six-dimensional view (X, Y, Z, Roll, Pitch, and Yaw). The setup error differed by 0.6 mm in X, 0.5 mm in Y, and 0.5 mm in Z when the gantry rotated $360^{\circ}$ in orthogonal coordinates, whereas for $180^{\circ}$ rotation the errors were 0.9 mm, 0.2 mm, and 0.3 mm in X, Y, and Z, respectively. In the rotational coordinates, the greater the rotational imbalance, the larger the average setup error. The resolution of the CBCT images differed by two levels from the recommended table. CBCT showed good agreement with the recommended values for mechanical safety, geometric accuracy, and image quality. The rotational imbalance of the gantry hardly varied in orthogonal coordinates; in the rotational coordinates, however, it exceeded the recommended value of ${\pm}1^{\circ}$. Therefore, six-dimensional correction is needed for precise radiation therapy.
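
As a rough illustration of the per-axis comparison reported above, the following Python sketch computes the mean and standard deviation of setup errors from hypothetical CBCT-vs-CT registration offsets (all values are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical CBCT-vs-CT registration offsets for repeated phantom setups;
# columns follow the paper's six-dimensional view
# (X, Y, Z in mm; Roll, Pitch, Yaw in degrees). Values are illustrative only.
offsets = np.array([
    [0.6, 0.5, 0.5, 0.3, 0.2, 0.4],
    [0.5, 0.4, 0.6, 0.2, 0.3, 0.5],
    [0.7, 0.5, 0.4, 0.4, 0.2, 0.3],
])

axes = ["X", "Y", "Z", "Roll", "Pitch", "Yaw"]
mean_err = offsets.mean(axis=0)
std_err = offsets.std(axis=0, ddof=1)   # sample standard deviation per axis
for name, m, s in zip(axes, mean_err, std_err):
    print(f"{name}: {m:.2f} +/- {s:.2f}")
```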

Analysis of Causality of the Increase in the Port Congestion due to the COVID-19 Pandemic and BDI (Baltic Dry Index) (COVID-19 팬데믹으로 인한 체선율 증가와 부정기선 운임지수의 인과성 분석)

  • Lee, Choong-Ho; Park, Keun-Sik
    • Journal of Korea Port Economic Association / v.37 no.4 / pp.161-173 / 2021
  • The shipping industry plummeted into a depression after the global economic crisis triggered by the bankruptcy of Lehman Brothers in the US in 2008. In 2020 the shipping market again suffered a collapse amid the unstable global economic situation caused by the COVID-19 pandemic, but it unexpectedly turned upward from the end of 2020 and in 2021 exceeded even the boom market of 2008. According to the Clarksons report published in May 2021, the decrease in cargo volume due to the COVID-19 pandemic in 2020 had recovered to the pre-corona level by the end of 2020, and tramper bulk carrier capacity equal to 103~104% of the Panamax fleet was held up in ports due to congestion; earnings across the bulker segments have risen to ten-year highs in recent months. In this study, the factors affecting the BDI were taken to be the capacity and congestion ratio of Cape and Panamax ships on the supply side and iron ore and coal seaborne tonnage on the demand side. A Granger causality test, impulse response function (IRF), and forecast error variance decomposition (FEVD) were performed with a VAR model to analyze the impact on the BDI of the congestion caused by strengthened port quarantine during the COVID-19 pandemic and by loading and discharging delays due to stevedore infections, and to predict the shipping market after the pandemic. In the Granger causality tests of the variables against the BDI, using time series data from January 2016 to July 2021, causality was found for the Fleet and Congestion variables; in the impulse response analysis, the Congestion variable was significant at both the upper and lower limits of the confidence interval; and in the forecast error variance decomposition, the Congestion variable showed explanatory power of up to 25% for changes in the BDI. If port congestion eases after the transition to 'With Corona', there is downside risk in the shipping market. The COVID-19 crisis arose not from economic factors but from an ecological one, which distinguishes it from past economic crises and calls for analysis from a different point of view. This study is meaningful in that it analyzes the causality and explanatory power of the pandemic-driven Congestion factor.
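
A minimal sketch of the VAR workflow named above (Granger causality, IRF, FEVD) using statsmodels on synthetic stand-in series; the variable names mirror the abstract, while the monthly frequency, lag settings, and data are assumptions:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-ins for the study's series, Jan 2016 - Jul 2021
# (monthly frequency assumed): BDI plus supply- and demand-side variables.
rng = np.random.default_rng(0)
idx = pd.date_range("2016-01", periods=67, freq="MS")
df = pd.DataFrame(rng.normal(size=(67, 5)), index=idx,
                  columns=["BDI", "Fleet", "Congestion", "IronOre", "Coal"])

res = VAR(df).fit(maxlags=6, ic="aic")   # lag order selected by AIC

# Granger causality: does Congestion help predict BDI?
print(res.test_causality("BDI", ["Congestion"], kind="f").summary())

# Impulse response of BDI to a Congestion shock over 12 periods (needs matplotlib).
res.irf(12).plot(impulse="Congestion", response="BDI")

# Forecast error variance decomposition: share of BDI forecast-error
# variance attributable to each variable.
res.fevd(12).summary()
```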

Evaluating the Predictability of Heat and Cold Damages of Soybean in South Korea using PNU CGCM-WRF Chain (PNU CGCM-WRF Chain을 이용한 우리나라 콩의 고온해 및 저온해에 대한 예측성 검증)

  • Choi, Myeong-Ju; Ahn, Joong-Bae; Kim, Young-Hyun; Jung, Min-Kyung; Shim, Kyo-Moon; Hur, Jina; Jo, Sera
    • Korean Journal of Agricultural and Forest Meteorology / v.24 no.4 / pp.218-233 / 2022
  • The long-term (1986~2020) predictability of the number of days of heat and cold damage in each growth stage of soybean is evaluated using daily maximum and minimum temperature (Tmax and Tmin) data produced by the Pusan National University Coupled General Circulation Model (PNU CGCM)-Weather Research and Forecasting (WRF) chain. The predictability of the number of damage days is evaluated with the Normalized Standard Deviation (NSD), Root Mean Square Error (RMSE), Hit Rate (HR), and Heidke Skill Score (HSS). First, we verified the simulation performance for Tmax and Tmin, the variables that define heat and cold damage in soybean. Although there are some differences depending on the initial-condition month, from January (01RUN) to May (05RUN), the results after systematic bias correction with the variance scaling method are closer to the observations than the uncorrected ones. The simulation performance for the corrected Tmax and Tmin from March to October is high overall in the ensemble (ENS) obtained by averaging 01RUN through 05RUN with the Simple Composite Method (SCM). The model also simulates well the regional patterns and characteristics of the number of heat and cold damage days across the growth stages of soybean, compared with observations. In ENS, the HR and HSS for heat damage (cold damage) ranged over 0.45~0.75 and 0.02~0.10 (0.49~0.76 and -0.04~0.11) across the growth stages. In conclusion, 01RUN~05RUN and ENS of the PNU CGCM-WRF chain show reasonable performance in predicting heat and cold damage in each growth stage of soybean in South Korea.
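
The variance scaling correction named above can be sketched as rescaling the model series so its mean and standard deviation match the observations over a reference period; this is a common formulation and may differ in detail from the paper's exact implementation:

```python
import numpy as np

def variance_scaling(model_hist, obs_hist, model_raw):
    """Rescale a model series so its mean and standard deviation match
    the observations over the reference period (a common formulation;
    the paper's exact implementation may differ)."""
    scale = obs_hist.std(ddof=1) / model_hist.std(ddof=1)
    return (model_raw - model_hist.mean()) * scale + obs_hist.mean()

# Illustrative daily Tmax series (deg C): model biased cool with too much spread.
rng = np.random.default_rng(1)
obs = rng.normal(27.0, 3.0, 365)
mod = rng.normal(25.5, 4.5, 365)
corrected = variance_scaling(mod, obs, mod)
print(round(corrected.mean(), 2), round(corrected.std(ddof=1), 2))  # ~27.0, ~3.0
```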

Physical Characterization of Domestic Aggregate (국내 골재의 물리적 특성 분석)

  • Junyoung Ko; Eungyu Park; Junghae Choi; Jong-Tae Kim
    • The Journal of Engineering Geology / v.33 no.1 / pp.169-187 / 2023
  • Aggregates from 84 cities and counties in Korea were quality-tested to analyze the physical characteristics of aggregates from river, land, and forest environments. River and land aggregates were analyzed for 18 test items and forest aggregates for 12, classified by watershed and by geology, respectively. For the river aggregates, the characteristic items by basin were as follows: for the Geum River basin, the amounts passing the 2.5, 1.2, 0.6, 0.3, 0.15, and 0.08 mm sieves; for the Nakdong River basin, clay lumps; for the Seomjin River basin, the amounts passing the 10, 5, and 2.5 mm sieves; for the Youngsang River basin, the amounts passing the 1.2, 0.6, 0.3, 0.15, and 0.08 mm sieves; and for the Han River basin, the amounts passing the 10, 5, 2.5, 1.2, 0.6, 0.3, and 0.08 mm sieves, together with stability. The standard errors of the average amounts passing the 10, 0.6, and 0.08 mm sieves and the performance rate showed distribution patterns different from the other physical characteristics. Analysis of variance found statistically significant regional differences in the averages of 16 of the 18 items, the exceptions being the absorption rate and the performance rate. For the land aggregates by basin (the Geum River basin excepted), those from the Nakdong River basin were characterized by clay lumps, those from the Seomjin River basin by the amounts passing the 10 and 5 mm sieves, those from the Youngsang River basin by the amount passing the 0.08 mm sieve, and those from the Han River basin by the amounts passing the 10, 0.6, and 0.08 mm sieves; the standard error of the mean of these quantities showed a distribution pattern different from the other physical characteristics. Analysis of variance found statistically significant regional differences in the averages of all 18 items. For the forest aggregates analyzed by geology, the distribution of porosity differed from the other physical characteristics in metamorphic rocks (but not igneous rocks), and the distributions of wear rate and porosity differed in sedimentary rocks. There were statistically significant differences by rock type in the averages of the volume mass, water absorption rate, wear rate, and Sc/Rc items.
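
The analysis of variance used here is a one-way ANOVA across groups; a minimal sketch with scipy on hypothetical basin-level measurements (values invented for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical absorption-rate measurements for aggregates from three
# river basins; the real test items and values come from the 84-city survey.
rng = np.random.default_rng(2)
geum = rng.normal(1.2, 0.2, 30)
nakdong = rng.normal(1.4, 0.2, 30)
han = rng.normal(1.1, 0.2, 30)

# One-way ANOVA: are the basin means significantly different?
f_stat, p_value = stats.f_oneway(geum, nakdong, han)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```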

PCA-based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환; 조현숙; 이태수; 구용숙
    • Progress in Medical Physics / v.14 no.4 / pp.211-217 / 2003
  • Principal component analysis (PCA) is a well-known data analysis method useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance; it is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g., neurons). PCA provides, in the mean-squared-error sense, an optimal linear mapping of the signals that are spread across a group of variables. These signals are concentrated into the first few components, while the noise, i.e., variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings. Because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached with the ganglion cell side to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated golden connection lanes terminating in an $8{\times}8$ array (spacing 200 $\mu$m, electrode diameter 30 $\mu$m) in the center of the plate. The MEA 60 system was used to record retinal ganglion cell activity. The action potentials of each channel were sorted with an off-line analysis tool: spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in each waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
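
A compact sketch of the PC1/PC2 projection step described above, using scikit-learn on synthetic spike waveforms standing in for the MEA recordings:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for one MEA channel: 500 threshold-detected spike
# waveforms of 40 samples each, from two hypothetical units.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 40)
unit_a = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, (250, 40))
unit_b = 0.5 * np.sin(4 * np.pi * t) + rng.normal(0, 0.2, (250, 40))
waveforms = np.vstack([unit_a, unit_b])

# Project every waveform onto the first two principal components;
# distinct units should separate into clusters in the PC1-PC2 plane.
pcs = PCA(n_components=2).fit_transform(waveforms)
print(pcs.shape)   # (500, 2) -> scatter PC1 vs PC2 to see the clusters
```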

Basic Studies for the Breeding of High Protein Rice. I. Comparison of the analytical methods for the measurement of the protein content in the brown rice (수두 고단백 계통육성을 위한 기초적 연구 I. 계통육성을 위한 조단백질 분석법의 비교)

  • Mun-Hue Heu; Hak-Soo Suh
    • KOREAN JOURNAL OF CROP SCIENCE / v.12 / pp.1-5 / 1972
  • To compare the analytical efficiency of the Kjeldahl, dye binding, and Biuret methods for determining the nitrogen content of brown rice, correlation coefficients were calculated for the analytical data obtained by the three methods for the brown rice of 36 varieties or lines grown at 5 nitrogen levels (0, 7.5, 15.0, 22.5, and 30.0 kg/10a). Analysis of variance was performed on the data for 6 of those 36 varieties to compare the precision of the three analytical methods, and the expenditure (in terms of chemicals and labour) required for the three methods was also compared. The results are summarized as follows. 1. The correlation between D.B.C. and Kjeldahl values was generally more significant than that between Biuret and Kjeldahl values. However, the D.B.C. method generally overestimates relative to the Kjeldahl method at both extremely low and extremely high nitrogen contents, and the Biuret method carries more dispersed error than the other two methods, though its optical values parallel the Kjeldahl nitrogen values at all levels of applied nitrogen. 2. The varietal differences in nitrogen values obtained by the three methods differed with the nitrogen level applied; that is, the interactions between variety and analytical method, and between nitrogen level and analytical method, were statistically significant. 3. The coefficient of variation (C.V.) was largest in the data analyzed by the Biuret method and next largest in the data analyzed by the D.B.C. method. In the data analyzed by the Biuret method, the C.V. increased with increasing applied nitrogen, whereas in the data obtained by the D.B.C. method it increased with decreasing applied nitrogen. 4. Comparison of the expenditure (chemicals and labour) required to analyze 100 samples by the three methods showed that the Biuret and D.B.C. methods greatly reduce chemical and labour costs, with the Biuret method saving the most labour.
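
A small sketch of the method comparison: Pearson correlations of the two quick methods against the Kjeldahl reference, plus the coefficient of variation used in the precision comparison (all readings below are synthetic stand-ins):

```python
import numpy as np
from scipy import stats

# Hypothetical protein readings for the same brown-rice samples by the
# three methods; the study used 36 varieties x 5 nitrogen levels.
rng = np.random.default_rng(4)
kjeldahl = rng.normal(8.0, 1.0, 180)
dbc = kjeldahl + rng.normal(0, 0.3, 180)      # dye binding: tracks Kjeldahl closely
biuret = kjeldahl + rng.normal(0, 0.8, 180)   # Biuret: more dispersed error

# Correlation of each quick method against the Kjeldahl reference.
for name, series in [("D.B.C.", dbc), ("Biuret", biuret)]:
    r, p = stats.pearsonr(kjeldahl, series)
    print(f"{name} vs Kjeldahl: r = {r:.3f} (p = {p:.2e})")

def cv(x):
    """Coefficient of variation, in percent."""
    return x.std(ddof=1) / x.mean() * 100

print({n: round(cv(s), 2) for n, s in
       [("Kjeldahl", kjeldahl), ("D.B.C.", dbc), ("Biuret", biuret)]})
```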

Evaluate Utility of Thyroid Incidentaloma Discrimination by $^{18}F$-FDG PET/CT Delay Scan Images ($^{18}F$-FDG PET/CT검사에서 지연영상을 이용한 갑상선 우연종 감별의 유용성 평가)

  • Lee, Hyun-Kuk; Yang, Seoung-Oh; Song, Gi-Deok; Song, Chi-Ock; Lee, Gi-Heun
    • The Korean Journal of Nuclear Medicine Technology / v.12 no.3 / pp.184-191 / 2008
  • Purpose: To evaluate the malignancy of incidental thyroid lesions found on $^{18}F$-FDG PET/CT and the usefulness of the method suggested in this study, we applied a delay scan method to differentiate false-positive benign tumors and inflammation from malignancy, and to establish SUV criteria. Materials and Methods: A retrospective study was conducted on 800 patients who underwent $^{18}F$-FDG PET/CT examination at E hospital. One patient already diagnosed with primary thyroid cancer was excluded, leaving 799 patients; the indications for examination were follow-up of a known cancer or a suspicious tumorous lesion in 696 patients and disease screening in 103. $^{18}F$-FDG PET/CT imaging was performed on a Biograph Duo scanner (SIEMENS). After the standard $^{18}F$-FDG PET/CT acquisition (1 hr), patients who showed abnormal thyroid $^{18}F$-FDG uptake with SUV above 2.0 underwent a single-bed delayed acquisition of the thyroid 1 hr later (2 minutes per bed), and the $SUV_{max}$ measured on the 1 hr image with abnormal uptake was compared with that on the 2 hr delayed image. Results and Conclusion: Among the patients showing incidental $^{18}F$-FDG thyroidal uptake, the number of thyroid incidentalomas was 5 (0.63%), all with benign findings. For incidental $^{18}F$-FDG uptake in the thyroid, the $SUV_{max}$ change on the 2 hr delayed image can serve as an indirect criterion for differentiating benign tumors from malignancy and for reducing diagnostic error. When a thyroid incidentaloma is found and 1) the $SUV_{max}$ of the focal thyroid lesion is above 5.0 and 2) the $SUV_{max}$ change between the standard $^{18}F$-FDG PET/CT examination and the 2 hr delayed image is $1.0{\pm}0.5$, malignancy is suspected and a confirmatory biopsy should follow. Otherwise, a dedicated follow-up PET or CT study is a reasonable diagnostic approach.
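
The two-part criterion in the conclusion can be written as a small decision function; the thresholds come from the abstract, while the choice of applying the 5.0 cutoff to the delayed scan is an assumption here:

```python
def suspect_malignancy(suv_initial: float, suv_delayed: float) -> bool:
    """Sketch of the abstract's two-part criterion for an incidental focal
    thyroid lesion: SUVmax above 5.0 (applied here to the delayed image,
    an assumption) and a 1 hr -> 2 hr SUVmax increase within 1.0 +/- 0.5.
    Clinical confirmation still requires biopsy."""
    delta = suv_delayed - suv_initial
    return suv_delayed > 5.0 and 0.5 <= delta <= 1.5

print(suspect_malignancy(4.8, 5.9))   # True: delayed SUVmax 5.9, delta 1.1
print(suspect_malignancy(2.1, 2.4))   # False: low uptake, small change
```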

A Study on the Volatility of Global Stock Markets using Markov Regime Switching model (마코브국면전환모형을 이용한 글로벌 주식시장의 변동성에 대한 연구)

  • Lee, Kyung-Hee; Kim, Kyung-Soo
    • Management & Information Systems Review / v.34 no.3 / pp.17-39 / 2015
  • This study examined structural changes and volatility in the global stock markets using the Markov regime switching ARCH model developed by Hamilton and Susmel (1994). Firstly, the US, Italy, and Ireland showed variance in the high-volatility regime more than five times that in the low-volatility regime, while in Korea, Russia, India, and Greece the high-volatility variance was more than eight times the low. On average, a jump from regime 1 to regime 2 implied roughly a threefold increase in risk, while the risk during regime 3 was up to almost thirteen times that during regime 1 over the study period. Korea, the US, India, and Italy showed ARCH(1) and ARCH(2) effects as well as leverage and asymmetric effects. Secondly, the low-volatility regime was estimated to persist for 278 days in Korea, whose mean transition probabilities between volatility regimes exhibited the highest long-term persistence. Thirdly, Chow tests indicated unstable coefficients, structural changes, and volatility in the stock markets during the Asian, Global, and European financial crises. One-step prediction error tests showed that the stock markets were unstable during the Asian crisis of 1997-1998 except for Russia, during the Global crisis of 2007-2008 except for Korea, and during the European crisis of 2010-2011 except for Korea, the US, Russia, and India. N-step tests showed that most stock markets were unstable during the Asian and Global crises. CUSUM tests showed little change during the Asian crisis, with stock markets stable until the late 2000s except for some countries, while CUSUMSQ tests showed a mix of stable and unstable markets across countries during the crises. Fourthly, likelihood ratio tests confirmed a close relationship between the volatility of the Korean stock market and that of the other countries. Accordingly, the episodes or events that generated high stock market volatility during the financial crises were identified, and for all seven stock markets the significant switches between volatility regimes implied considerable changes in market risk. High stock market volatility appeared to be related to the business recession of the early 1990s. By closely examining the history of political and economic events in these countries, the results were found to be consistent with those of Lamoureux and Lastrapes (1990): there were structural changes and volatility during the crises, and specifically, every high-volatility regime in the SWARCH-L(3,2) Student-t model was accompanied by important policy changes, financial crises, or other critical events in the international economy. More sophisticated nonlinear models are needed for further analysis.
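
statsmodels has no SWARCH implementation, so the sketch below substitutes a plain Markov-switching model with regime-dependent variance to show the regime-estimation and persistence machinery on synthetic returns; it is a stand-in, not the paper's SWARCH-L(3,2) Student-t model:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic daily returns with a calm and a turbulent regime, standing in
# for the paper's stock-index returns.
rng = np.random.default_rng(5)
returns = np.concatenate([
    rng.normal(0, 0.5, 500),   # calm regime
    rng.normal(0, 2.0, 200),   # turbulent regime
    rng.normal(0, 0.5, 300),
])

# Two-regime Markov-switching model with regime-specific variance.
mod = sm.tsa.MarkovRegression(returns, k_regimes=2, trend="c",
                              switching_variance=True)
res = mod.fit()
print(res.summary())            # regime variances and transition probabilities
print(res.expected_durations)   # cf. the paper's 278-day low-volatility persistence
```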

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVMs. In that research, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown comparable gains. Recently, several works have reported that ensemble performance can degrade when the classifiers in an ensemble are highly correlated with one another, producing a multicollinearity problem, and have proposed differentiated learning strategies to cope with it. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms but shows no remarkable improvement for stable ones. Unstable learning algorithms, such as decision tree learners, are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners therefore enjoy some diversity among their members. By contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in multicollinearity, which degrades ensemble performance. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that the stable algorithms NN and SVM have higher predictability than the unstable DT, while among ensembles the DT ensemble improves more than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the ensembles' performance degradation is due to multicollinearity, and that ensemble optimization is needed to cope with it. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization chooses a sub-ensemble from the original ensemble so as to guarantee the diversity of the selected classifiers. CO-NN uses a GA, which has been widely applied to optimization problems, to solve the coverage optimization problem: the GA chromosomes are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the standard measures of multicollinearity, is added to ensure classifier diversity by removing high correlation among the classifiers. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances the performance of NN ensembles by choosing classifiers with the ensemble's correlations in mind. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN thereby outperforms a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% level. Further research issues remain: first, a decision optimization process to find the optimal combination function should be considered; second, various learning strategies to deal with data noise should be introduced in future work.
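
A sketch of the coverage optimization idea: a bitstring selects a sub-ensemble, fitness is majority-vote accuracy, and a VIF constraint rejects highly collinear selections. A simple mutation-only search stands in for the GA (the paper used the Evolver package), and all data are synthetic:

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Correlated classifier outputs (probabilities) standing in for trained NNs.
rng = np.random.default_rng(6)
n_clf, n_obs = 10, 200
y = rng.integers(0, 2, n_obs)
preds = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.25, (n_obs, n_clf)), 0, 1)

def fitness(bits):
    """Majority-vote accuracy of the selected sub-ensemble, rejected
    (fitness -inf) when any member's VIF signals multicollinearity."""
    sel = np.flatnonzero(bits)
    if len(sel) < 2:
        return -np.inf
    sub = preds[:, sel]
    acc = ((sub.mean(axis=1) > 0.5) == y).mean()
    vifs = [variance_inflation_factor(sub, j) for j in range(sub.shape[1])]
    return acc if max(vifs) < 10 else -np.inf   # VIF < 10: common rule of thumb

# Minimal mutation-only search over bitstrings (GA stand-in).
best = rng.integers(0, 2, n_clf)
for _ in range(300):
    cand = best.copy()
    cand[rng.integers(n_clf)] ^= 1              # flip one classifier in/out
    if fitness(cand) >= fitness(best):
        best = cand
print(best, fitness(best))
```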

Derivation of Stem Taper Equations and a Stem Volume Table for Quercus acuta in a Warm Temperate Region (난대지역 붉가시나무의 수간곡선식 도출 및 수간재적표 작성)

  • Suyoung Jung; Kwangsoo Lee; Hyunsoo Kim; Joonhyung Park; Jaeyeop Kim; Chunhee Park; Yeongmo Son
    • Journal of Korean Society of Forest Science / v.112 no.4 / pp.417-425 / 2023
  • The aim of this study was to derive stem taper equations for Quercus acuta, one of the main evergreen broad-leaved tree species of warm temperate regions, and to prepare a stem volume table from them. A total of 688 individual trees, collected from Jeonnam-do, Gyeongnam-do, and Jeju-do, were used in the analysis. The stem taper models applied to derive the stem curve pattern were the Max and Burkhart, Kozak, and Lee models. Among the three, the Kozak model best explained the stem curve shape of Q. acuta, with a fitness index of 0.9583, bias of 0.0352, percent estimated standard error of 1.1439, and mean absolute deviation of 0.6751; the stem taper of Q. acuta was therefore estimated with the Kozak model. The stem volume was then calculated by applying the Smalian formula to the diameter and length of each stem section. In addition, an analysis of variance (ANOVA) was conducted to compare the two existing Q. acuta stem volume tables (2007 and 2010) with the newly created table (2023). This analysis revealed that the stem volume table constructed in the Wando region in 2007 gave volumes about twice those of the tables constructed in 2010 and 2023. The stem volume table (2023) developed in this study rests on a broad regional collection range, a large number of sample trees, and a sound scientific basis; it can therefore be used at the national level as the official stem volume table for Q. acuta.
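
The Smalian step can be sketched directly: each section's volume is the mean of its two end cross-sectional areas times its length, summed over the stem (the section diameters below are hypothetical, and the Kozak taper fit that would supply them is omitted):

```python
import math

def smalian_volume(d1_cm: float, d2_cm: float, length_m: float) -> float:
    """Smalian's formula: section volume (m^3) as the mean of the two end
    cross-sectional areas times the section length."""
    a1 = math.pi * (d1_cm / 200.0) ** 2   # diameter in cm -> radius in m
    a2 = math.pi * (d2_cm / 200.0) ** 2
    return (a1 + a2) / 2.0 * length_m

# Illustrative stem measured in 2 m sections: (top diameter, bottom diameter, length).
sections = [(32.0, 28.5, 2.0), (28.5, 24.0, 2.0), (24.0, 18.0, 2.0)]
total = sum(smalian_volume(d1, d2, L) for d1, d2, L in sections)
print(f"stem volume = {total:.4f} m^3")
```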