• Title/Summary/Keyword: Models, statistical


Application of the Neural Networks Models for the Daily Precipitation Downscaling (일 강우량 Downscaling을 위한 신경망모형의 적용)

  • Kim, Seong-Won;Kyoung, Min-Soo;Kim, Byung-Sik;Kim, Hyung-Soo
    • Proceedings of the Korea Water Resources Association Conference / 2009.05a / pp.125-128 / 2009
  • Research on climate change impacts in hydrometeorology often relies on downscaled climate information. In this paper, two neural network models, the generalized regression neural network model (GRNNM) and the multilayer perceptron neural network model (MLP-NNM), are proposed for statistical downscaling of daily precipitation. The input nodes consist of atmospheric meteorological and pressure data at four grid points, $127.5^{\circ}E/37.5^{\circ}N$, $127.5^{\circ}E/35^{\circ}N$, $125^{\circ}E/37.5^{\circ}N$ and $125^{\circ}E/35^{\circ}N$; the output node is the daily precipitation at the Seoul station. Model performance is evaluated separately for the training and test periods. From this comparison, we evaluate how well GRNNM and MLP-NNM downscale daily precipitation and construct a credible daily precipitation series for the Seoul station by statistical downscaling. The proposed methods can be applied to future climate prediction/projection using various climate change scenarios from GCMs and RCMs.
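
As a rough illustration of the downscaling setup described above, the sketch below fits an MLP regressor to synthetic predictor and precipitation series; scikit-learn, the simulated data, and all parameter choices are assumptions for illustration only and do not reproduce the GRNNM/MLP-NNM configurations of the paper.

```python
# Minimal sketch of MLP-based statistical downscaling on synthetic data
# (stand-in for the 4-grid-point pressure predictors and Seoul precipitation).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_days = 2000
X = rng.normal(size=(n_days, 8))                      # hypothetical large-scale predictors
y = np.maximum(0.0, X @ rng.normal(size=8)            # hypothetical daily precipitation (mm)
               + rng.normal(scale=2.0, size=n_days))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)                                   # training performance
print("test R^2:", mlp.score(X_te, y_te))             # test performance
```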


Comparative analysis on digital models obtained by white light and blue LED optical scanners (백색광과 청색 LED 방식의 광학스캐너로 채득된 디지털 모형의 비교분석)

  • Choi, Seog-Soon;Kim, Jae-Hong;Kim, Ji-Hwan
    • Journal of Technologic Dentistry / v.36 no.1 / pp.17-23 / 2014
  • Purpose: The purpose of this study was to analyze and compare the relative accuracy of digitized stone models of the lower full arch obtained with two different scanning systems. Methods: Replica stone models (N=20) were produced from a lower arch acrylic model, and twenty digital models were made with the white light and blue LED ($Medit^{(R)}$, Korea) scanners. Two-dimensional distances between landmarks were measured in Delcam $CopyCAD^{(R)}$ (Delcam plc, UK). An independent samples t-test was applied to compare the groups; all statistical analyses were performed with the SPSS software package (Statistical Package for Social Sciences for Windows, version 12.0). Results: The absolute disagreement between measurements made on the two scanner-based dental digital models was 0.02-0.04 mm and was not statistically significant (P>0.05). Conclusion: The precision of the blue LED optical scanner was comparable with that of the digitization device, and the relative accuracy was similar. However, there is still room for improvement and further standardization of dental CAD technologies.
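
The group comparison reported above is an ordinary independent-samples t-test; the short sketch below shows the equivalent computation in Python with SciPy on made-up measurements (the original analysis used SPSS, and the numbers here are placeholders).

```python
# Independent-samples t-test on hypothetical landmark distances (mm)
import numpy as np
from scipy import stats

white_light = np.array([35.02, 35.04, 35.01, 35.03, 35.05])   # placeholder values
blue_led    = np.array([35.00, 35.03, 35.02, 35.04, 35.01])   # placeholder values

t_stat, p_value = stats.ttest_ind(white_light, blue_led)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # p > 0.05 -> no significant difference
```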

Cure rate proportional odds models with spatial frailties for interval-censored data

  • Yiqi, Bao;Cancho, Vicente Garibay;Louzada, Francisco;Suzuki, Adriano Kamimura
    • Communications for Statistical Applications and Methods / v.24 no.6 / pp.605-625 / 2017
  • This paper presents proportional odds cure models that allow spatial correlation by including spatial frailties in the interval-censored data setting. Parametric cure rate models with independent and dependent spatial frailties are proposed and compared. Our approach accommodates different underlying activation mechanisms that lead to the event of interest; in addition, the number of competing causes that may be responsible for the occurrence of the event follows a geometric distribution. A Markov chain Monte Carlo method is used in a Bayesian framework for inference, and several Bayesian criteria are used for model comparison. An influence diagnostic analysis is conducted to detect influential or extreme observations that may distort the results. Finally, the proposed models are applied to a real data set on smoking cessation. The results of the application show that the parametric cure model with frailties under the first activation scheme fits the data better.
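
To make the geometric competing-causes structure concrete, the sketch below evaluates the population survival function implied by a geometric number of latent causes under the first-activation scheme; the parameterization, the exponential baseline, and the parameter values are illustrative assumptions rather than the paper's fitted spatial frailty model.

```python
# Population survival under first activation with M ~ Geometric(theta),
# P(M = m) = theta * (1 - theta)**m, m = 0, 1, 2, ...  The cure fraction is theta.
import numpy as np

def pop_survival_first_activation(t, theta, baseline_survival):
    """S_pop(t) = E[S(t)^M] = theta / (1 - (1 - theta) * S(t))."""
    s = baseline_survival(t)
    return theta / (1.0 - (1.0 - theta) * s)

baseline = lambda t: np.exp(-0.1 * np.asarray(t))      # illustrative latent survival
t_grid = np.array([0.0, 5.0, 10.0, 25.0, 50.0])
print(pop_survival_first_activation(t_grid, theta=0.3, baseline_survival=baseline))
# the curve levels off at the cure fraction theta = 0.3 as t grows
```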

Likelihood Approximation of Diffusion Models through Approximating Brownian Bridge (브라운다리 근사를 통한 확산모형의 우도 근사법)

  • Lee, Eun-kyung;Sim, Songyong;Lee, Yoon Dong
    • The Korean Journal of Applied Statistics / v.28 no.5 / pp.895-906 / 2015
  • Diffusion is a mathematical tool for describing the fluctuation of financial assets and the movement of particles on a micro time scale. There are ongoing efforts to develop likelihood-based estimation methods for diffusion models. When a diffusion model is estimated by maximum likelihood from data observed at discrete time points, the transition density of the diffusion is required. To approximate the transition densities of diffusion models, we suggest a method that approximates the path integral of the random process with normal random variables, and we compare its numerical properties with those of other approximation methods.
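
For context, the simplest transition-density approximation for discretely observed diffusions is the Euler (pseudo-likelihood) scheme sketched below; the Brownian-bridge-based approximation studied in the paper refines this idea, so the code is only a baseline illustration on a simulated Ornstein-Uhlenbeck path.

```python
# Euler approximation of the transition density for dX = mu(X) dt + sigma(X) dW:
# X_{t+dt} | X_t = x  ~  Normal(x + mu(x) * dt, sigma(x)^2 * dt)
import numpy as np
from scipy.stats import norm

def euler_log_likelihood(x, dt, mu, sigma):
    x = np.asarray(x)
    x0, x1 = x[:-1], x[1:]
    return norm.logpdf(x1, loc=x0 + mu(x0) * dt, scale=sigma(x0) * np.sqrt(dt)).sum()

# simulate an Ornstein-Uhlenbeck path: dX = -0.5 X dt + 0.2 dW, dt = 1/252
rng = np.random.default_rng(1)
dt, n = 1.0 / 252, 500
x = np.zeros(n + 1)
for i in range(n):
    x[i + 1] = x[i] - 0.5 * x[i] * dt + 0.2 * np.sqrt(dt) * rng.normal()

print(euler_log_likelihood(x, dt, mu=lambda v: -0.5 * v, sigma=lambda v: 0.2 * np.ones_like(v)))
```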

Comparing the efficiency of dispersion parameter estimators in gamma generalized linear models (감마 일반화 선형 모형에서의 산포 모수 추정량에 대한 효율성 연구)

  • Jo, Seongil;Lee, Woojoo
    • The Korean Journal of Applied Statistics / v.30 no.1 / pp.95-102 / 2017
  • Gamma generalized linear models have received less attention than Poisson and binomial generalized linear models, so many long-established statistical techniques are still used for them. In particular, the existing literature and textbooks still rely on approximate estimates of the dispersion parameter. In this paper we study the efficiency of various dispersion parameter estimators in gamma generalized linear models through numerical simulations. The numerical studies show that the maximum likelihood estimator and the Cox-Reid adjusted maximum likelihood estimator are recommended, and that approximate estimates should be avoided in practice.
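
As a point of reference for the dispersion estimators discussed above, the sketch below fits a gamma GLM with statsmodels on simulated data and reports two common approximate estimates (Pearson- and deviance-based); the ML and Cox-Reid adjusted ML estimators recommended by the paper are not implemented here, and all data and settings are illustrative.

```python
# Gamma GLM with log link on simulated data; compare approximate dispersion estimates
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, phi = 500, 0.5                                  # true dispersion phi (shape = 1/phi)
x = rng.uniform(size=n)
mu = np.exp(0.5 + 1.0 * x)
y = rng.gamma(shape=1.0 / phi, scale=mu * phi)     # E[y] = mu, Var[y] = phi * mu^2

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print("Pearson estimate :", fit.pearson_chi2 / fit.df_resid)
print("Deviance estimate:", fit.deviance / fit.df_resid)
```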

Reliability analysis of simply supported beam using GRNN, ELM and GPR

  • Jagan, J;Samui, Pijush;Kim, Dookie
    • Structural Engineering and Mechanics / v.71 no.6 / pp.739-749 / 2019
  • This article deals with the application of reliability analysis to determining the safety of a simply supported beam under a uniformly distributed load. The uncertainties of the existing methods were taken into account, and hence reliability analysis was adopted. To accomplish this aim, Generalized Regression Neural Network (GRNN), Extreme Learning Machine (ELM) and Gaussian Process Regression (GPR) models were developed. Reliability analysis is a probabilistic approach to determining the probability of failure-free operation of a structure; it applies probabilistic mathematics to the quantitative aspects of a structure and thereby improves its qualitative assessment. To construct the GRNN, ELM and GPR models, the dataset contains the modulus of elasticity (E), load intensity (w) and performance function (${\delta}$), in which E and w are the inputs and ${\delta}$ is the output. The performance of the developed models was assessed by various statistical parameters; the most basic of these, the coefficient of determination ($R^2$), was 0.998 for training and 0.989 for testing. The GRNN outperformed the ELM and GPR models. Additional statistical measures of error and prediction performance were computed to confirm the capability of the developed models.
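
The sketch below reproduces the flavor of the GPR surrogate on a simulated version of the problem: E and w are drawn from hypothetical ranges, the performance function is the allowable deflection minus the midspan deflection of a simply supported beam under uniform load, and the $R^2$ on a held-out split is reported. scikit-learn, the parameter ranges, and the section properties are assumptions, not the paper's dataset.

```python
# GPR surrogate for the beam performance function on simulated (E, w) samples
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 300
E_gpa = rng.uniform(180, 220, n)                   # modulus of elasticity (GPa), hypothetical
w_kn  = rng.uniform(5, 15, n)                      # load intensity (kN/m), hypothetical
span, I = 5.0, 1e-4                                # span (m) and second moment of area (m^4)
deflection = 5 * (w_kn * 1e3) * span**4 / (384 * (E_gpa * 1e9) * I)
g = span / 360 - deflection                        # performance function: g > 0 means safe

X = np.column_stack([E_gpa, w_kn])
X_tr, X_te, y_tr, y_te = train_test_split(X, g, test_size=0.3, random_state=0)
gpr = make_pipeline(StandardScaler(),
                    GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                             normalize_y=True, alpha=1e-6))
gpr.fit(X_tr, y_tr)
print("test R^2:", r2_score(y_te, gpr.predict(X_te)))
```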

Robustness of model averaging methods for the violation of standard linear regression assumptions

  • Lee, Yongsu;Song, Juwon
    • Communications for Statistical Applications and Methods / v.28 no.2 / pp.189-204 / 2021
  • In a regression analysis, a single best model is usually selected among several candidate models. However, it is often useful to combine several candidate models to achieve better performance, especially from a prediction viewpoint. Model combining methods such as stacking and Bayesian model averaging (BMA) have been suggested from the perspective of averaging candidate models. When the candidate models include the true model, BMA is generally expected to give better performance than stacking; on the other hand, when the candidate models do not include the true model, stacking is known to outperform BMA. Since stacking and BMA have different properties, it is difficult to determine which method is more appropriate in other situations, and it is not easy to find research papers that compare stacking and BMA when regression model assumptions are violated. Therefore, in this paper, we compare the performance of model averaging methods as well as a single best model in linear regression analysis when standard linear regression assumptions are violated. Simulations were conducted to compare the model averaging methods with linear regression when the data do and do not include outliers, and when the errors come from a non-normal distribution. The model averaging methods were also applied to the water pollution data, which have strong multicollinearity among variables. The simulation studies showed that the stacking method tends to give better performance than BMA or standard linear regression analysis (including the stepwise selection method) in terms of risk (see (3.1)) or prediction error (see (3.2)) when typical linear regression assumptions are violated.
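
For readers unfamiliar with stacking, the sketch below combines a few candidate linear regressions with cross-validated stacking on data whose errors are heavy-tailed (a deliberate violation of the normality assumption); the candidate set, data, and tuning values are illustrative and are not the configurations compared in the paper.

```python
# Stacking of candidate regressions on data with heavy-tailed (t_2) errors
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import StackingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 300, 5
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -1.0]) + rng.standard_t(df=2, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
stack = StackingRegressor(
    estimators=[("ols", LinearRegression()), ("ridge", Ridge()), ("lasso", Lasso(alpha=0.1))],
    final_estimator=LinearRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("stacked-model test R^2:", stack.score(X_te, y_te))
```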

Review of Statistical Methods for Evaluating the Performance of Survival or Other Time-to-Event Prediction Models (from Conventional to Deep Learning Approaches)

  • Seo Young Park;Ji Eun Park;Hyungjin Kim;Seong Ho Park
    • Korean Journal of Radiology / v.22 no.10 / pp.1697-1707 / 2021
  • The recent introduction of various high-dimensional modeling methods, such as radiomics and deep learning, has created a much greater diversity in modeling approaches for survival prediction (or, more generally, time-to-event prediction). The newness of these modeling approaches and unfamiliarity with their outputs may confuse some researchers and practitioners about the evaluation of model performance. Methodological literacy to critically appraise the performance evaluation of such models and, ideally, the ability to conduct such an evaluation are needed by those who want to develop models or apply them in practice. This article intends to provide intuitive, conceptual, and practical explanations of the statistical methods for evaluating the performance of survival prediction models, with minimal use of mathematical descriptions. It covers methods ranging from conventional to deep learning approaches, with emphasis on recent modeling approaches. This review includes straightforward explanations of C indices (Harrell's C index, etc.), time-dependent receiver operating characteristic curve analysis, the calibration plot, other methods for evaluating calibration performance, and the Brier score.
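
Among the metrics reviewed, Harrell's C index is the most widely reported; the sketch below computes it directly from its definition on made-up follow-up times, event indicators, and predicted risks (in practice a library implementation would be used).

```python
# Harrell's C index: fraction of comparable pairs in which the subject with the
# higher predicted risk fails first; ties in predicted risk count as 1/2.
import numpy as np

def harrell_c_index(time, event, risk):
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if event[i] != 1:
            continue                      # censored subjects cannot anchor a pair
        for j in range(len(time)):
            if time[i] < time[j]:         # pair (i, j) is comparable
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# hypothetical follow-up times (months), event indicators, and predicted risks
time  = [5, 8, 12, 20, 25, 30]
event = [1, 1, 0, 1, 0, 0]
risk  = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
print("Harrell's C:", harrell_c_index(time, event, risk))   # 1.0 for this toy data
```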

Bayesian Inference for Predicting the Default Rate Using the Power Prior

  • Kim, Seong-W.;Son, Young-Sook;Choi, Sang-A
    • Communications for Statistical Applications and Methods / v.13 no.3 / pp.685-699 / 2006
  • Commercial banks and related institutions have developed internal models to better quantify their financial risks. Since an appropriate credit risk model plays a very important role in risk management at financial institutions, an accurate model for forecasting credit losses is needed, along with statistical inference on that model. In this paper, we propose a new method for estimating a default rate: a Bayesian approach using the power prior, which allows historical data to be incorporated into the estimation. Inference on current data can be more reliable when similar data from previous studies exist; Ibrahim and Chen (2000) use such data to characterize the power prior, which incorporates historical data into the estimation of the model parameters. We demonstrate our methodology on a real SOHO data set and also perform a simulation study.
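
The power prior is easiest to see in a conjugate toy version: for a single default rate with binomial counts and a beta initial prior, raising the historical likelihood to a fixed power a0 simply discounts the historical counts. The sketch below illustrates this; the counts and a0 are invented, and the paper's model (including its treatment of a0) is more elaborate.

```python
# Conjugate power-prior update for a default rate theta ~ Beta(alpha, beta):
# posterior ~ Beta(alpha + y + a0*y0, beta + (n - y) + a0*(n0 - y0))
def power_prior_posterior(y, n, y0, n0, a0, alpha=1.0, beta=1.0):
    post_alpha = alpha + y + a0 * y0               # current defaults + discounted historical defaults
    post_beta = beta + (n - y) + a0 * (n0 - y0)    # current non-defaults + discounted historical
    return post_alpha, post_beta

a, b = power_prior_posterior(y=12, n=400, y0=30, n0=1000, a0=0.5)   # made-up counts
print("posterior mean default rate:", a / (a + b))
```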

Controling the Healthy Worker Effect in Occupational Epidemiology

  • Kim, Jin-Heum;Nam, Chung-Mo
    • Proceedings of the Korean Statistical Society Conference / 2002.11a / pp.197-201 / 2002
  • The healthy worker effect is an important issue in occupational epidemiology. We propose a new statistical method to test the relationship between exposure and time to death in the presence of the healthy worker effect. In this study, the healthy worker hire effect is treated as a confounder, and the healthy worker survival effect as both a confounder and an intermediate variable. The basic idea of the proposed method is to reflect the length-biased sampling caused by changes in employment status. Simulation studies were carried out to compare the proposed method with Cox proportional hazards models. According to these simulations, both the proposed test and the test based on the Cox model with the change of employment status as a time-dependent covariate appear satisfactory at the upper 5% significance level. The Cox models, however, are inadequate when the change of employment status, if any, is treated as a time-independent covariate. The proposed test is superior in power to the test based on the Cox model with the time-dependent employment status.
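
The comparison above hinges on entering employment status as a time-dependent covariate in the Cox model; the sketch below shows the usual long-format (start, stop] data layout and fit with the lifelines package, which is assumed here. The records are invented solely to show the structure, not to reproduce the simulation study.

```python
# Cox regression with a time-dependent employment-status covariate (lifelines assumed)
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# each row is an interval (start, stop] during which the covariate values apply
long_df = pd.DataFrame({
    "id":       [1, 1, 2, 3, 3, 4, 5, 6],
    "start":    [0, 4, 0, 0, 6, 0, 0, 0],
    "stop":     [4, 9, 7, 6, 12, 10, 8, 11],
    "exposure": [1, 1, 0, 1, 1, 0, 0, 1],
    "employed": [1, 0, 1, 1, 0, 1, 0, 1],   # status changes within subjects 1 and 3
    "event":    [0, 1, 1, 0, 0, 0, 1, 1],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```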
