• Title/Summary/Keyword: 비모수 모형 (nonparametric model)


Multifactor Dimensionality Reduction(MDR) Analysis by Dummy Variables (더미(dummy) 변수를 활용한 다중인자 차원 축소(MDR) 방법)

  • Lee, Jea-Young;Lee, Ho-Guen
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.2
    • /
    • pp.435-442
    • /
    • 2009
  • Detecting interactions among multiple genes is difficult because parametric statistical methods such as logistic regression have limited power to detect gene effects that depend solely on interactions with other genes and with environmental exposures. A multifactor dimensionality reduction (MDR) statistical method using dummy variables was applied to identify interaction effects of single nucleotide polymorphisms (SNPs) responsible for longissimus dorsi muscle area (LMA), carcass cold weight (CWT) and average daily gain (ADG) in a Hanwoo beef cattle population.
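
As a rough illustration of the MDR idea described in this abstract, the sketch below implements the classic case-control MDR step in Python: each two-locus genotype cell is labelled high- or low-risk by its case/control ratio, collapsing the SNP pair into a single binary attribute. The toy data and the `mdr_high_risk_cells` helper are hypothetical; the paper's dummy-variable coding and its quantitative Hanwoo traits (LMA, CWT, ADG) are not reproduced here.

```python
import numpy as np
from itertools import combinations

def mdr_high_risk_cells(snps, case, pair, threshold=1.0):
    """Classic MDR step for one SNP pair: label each two-locus
    genotype cell high-risk when its case/control ratio exceeds
    the threshold (typically the overall case/control ratio)."""
    cells = {}
    for a, b, y in zip(snps[:, pair[0]], snps[:, pair[1]], case):
        n_case, n_ctrl = cells.get((a, b), (0, 0))
        cells[(a, b)] = (n_case + y, n_ctrl + (1 - y))
    return {g: c / max(k, 1) > threshold for g, (c, k) in cells.items()}

# hypothetical toy data: 200 animals, 4 SNPs coded 0/1/2, binary trait
rng = np.random.default_rng(0)
snps = rng.integers(0, 3, size=(200, 4))
case = rng.integers(0, 2, size=200)

for pair in combinations(range(4), 2):
    risk = mdr_high_risk_cells(snps, case, pair)
    pred = np.array([risk[(a, b)] for a, b in zip(snps[:, pair[0]], snps[:, pair[1]])])
    print(pair, round(float(np.mean(pred == case)), 3))  # classification accuracy
```

In a full MDR analysis the best pair would be chosen by cross-validated balanced accuracy rather than the raw in-sample accuracy printed here.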

Bayesian Parameter Estimation of 2D infinite Hidden Markov Model for Image Segmentation (영상분할을 위한 2차원 무한 은닉 마코프 모형의 비모수적 베이스 추정)

  • Kim, Sun-Worl;Cho, Wan-Hyun
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06a
    • /
    • pp.477-479
    • /
    • 2011
  • In this paper, to extend the one-dimensional hidden Markov model to two dimensions, we propose a model with a complete 2D HMM structure, using a Markov mesh model in which the Markov property of the nodes is causal. The Markov mesh model defines the preceding sites through a neighborhood system and makes the computation of transition probabilities possible through these causal relationships. In addition, for optimal image segmentation, we place a hierarchical Dirichlet process (HDP) prior on the model, yielding a 2D HMM with an infinite number of states rather than a fixed one. Bayesian estimation of the posterior, which combines the HDP-defined prior with the likelihood carrying the information of the observed sample data, is computed using a Gibbs sampling algorithm.
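
The infinite-state construction rests on the hierarchical Dirichlet process: a top-level stick-breaking draw supplies shared state weights, and each transition distribution is a Dirichlet-process draw centred on them. The snippet below is a minimal sketch of that prior only, with arbitrary parameter values as assumptions; the Markov mesh likelihood and the paper's Gibbs sampler are not shown.

```python
import numpy as np

def gem_stick_breaking(gamma, n_atoms, rng):
    """Truncated stick-breaking draw from GEM(gamma): the shared
    top-level DP weights that let the number of states grow."""
    betas = rng.beta(1.0, gamma, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

rng = np.random.default_rng(1)
beta = gem_stick_breaking(gamma=2.0, n_atoms=20, rng=rng)

# each causal transition context of the 2D HMM draws its own
# transition distribution from a DP centred on the shared weights
alpha = 5.0
pi_k = rng.dirichlet(alpha * beta + 1e-12)
print(beta.round(3))
print(pi_k.round(3))
```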

A Sequential Monte Carlo inference for longitudinal data with bluespotted mud hopper data (짱뚱어 자료로 살펴본 장기 시계열 자료의 순차적 몬테 칼로 추론)

  • Choi, Il-Su
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.6
    • /
    • pp.1341-1345
    • /
    • 2005
  • Sequential Monte Carlo techniques are a set of powerful and versatile simulation-based methods for performing optimal state estimation in nonlinear, non-Gaussian state-space models. Monte Carlo particle filters can be used adaptively, i.e. so that they simultaneously estimate the parameters and the signal; however, this requires special particle filtering techniques that suffer from several drawbacks. We consider here an alternative approach combining particle filtering with Sequential Hybrid Monte Carlo. We give some examples of applications in fisheries (bluespotted mud hopper data).
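
A minimal bootstrap (SIR) particle filter conveys the core mechanics the abstract refers to: propagate particles through the state equation, reweight by the observation likelihood, and resample. The scalar Gaussian state-space model here is an assumed toy example, not the paper's hybrid SMC scheme or its mudskipper data.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, rng):
    """Minimal SIR (bootstrap) particle filter for the toy model
    x_t = 0.5*x_{t-1} + v_t,  y_t = x_t + w_t,  v_t, w_t ~ N(0, 1)."""
    x = rng.normal(0.0, 1.0, n_particles)
    filtered = []
    for yt in y:
        x = 0.5 * x + rng.normal(0.0, 1.0, n_particles)  # propagate
        logw = -0.5 * (yt - x) ** 2                      # observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filtered.append(np.sum(w * x))                   # filtered posterior mean
        x = rng.choice(x, size=n_particles, p=w)         # multinomial resampling
    return np.array(filtered)

rng = np.random.default_rng(2)
truth = np.zeros(50)
for t in range(1, 50):
    truth[t] = 0.5 * truth[t - 1] + rng.normal()
y = truth + rng.normal(size=50)
print(bootstrap_particle_filter(y, 1000, rng)[:5].round(2))
```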

A Study on the Efficiency and Productivity Change of Korean Non-Life Insurance Company After Financial Crisis (금융위기 이후 국내 손해보험회사의 효율성 및 생산성 변화 연구)

  • Park, Chun-Gwang;Kim, Byeong-Chul
    • The Korean Journal of Financial Management
    • /
    • v.23 no.2
    • /
    • pp.57-83
    • /
    • 2006
  • The purpose of this paper is to analyze the efficiency and productivity change, and the causes of inefficiency, of Korean non-life insurance companies before (1993–1996) and after (1998–2004) the IMF financial crisis. We use DEA (Data Envelopment Analysis) to measure company efficiency, MPI (Malmquist productivity indices) to measure productivity change, and Tobit regression to analyze the causes of inefficiency. We utilize ten non-life insurance companies in Korea and time-series data for eleven years, from 1993 to 2004, excluding 1997. The empirical results show the following findings. First, total cost efficiency after the crisis decreased by 3.7% relative to the pre-crisis period, while the MPI indicates a productivity increase of 7.7% over the same comparison. Second, the Tobit regression on the causes of inefficiency shows that total cost efficiency is positively related to invested assets, the acquisition expense ratio and the collection expense ratio, and negatively related to the solicitor ratio, personnel expense ratio, land and buildings expense ratio, loss ratio and net operating expense ratio. In particular, the inefficiency of small-to-mid-sized companies is the main cause of the total cost inefficiency of Korean non-life insurance companies; these companies need to strengthen various aspects of their business strategies.
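
For readers unfamiliar with DEA, the sketch below solves the input-oriented CCR linear program for each decision-making unit (DMU) with `scipy.optimize.linprog`. The `ccr_efficiency` helper and the input/output numbers are made-up toy assumptions, and the Malmquist-index and Tobit stages of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR DEA score for DMU j0.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                      # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])              # X @ lam <= theta * x0
    A_out = np.hstack([np.zeros((s, 1)), -Y])       # Y @ lam >= y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    bounds = [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

X = np.array([[2., 3., 6., 9.], [4., 2., 6., 5.]])  # toy inputs (2 x 4 DMUs)
Y = np.array([[1., 1., 2., 2.]])                    # toy output (1 x 4 DMUs)
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```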

A Comparative Study of Statistical Process Control for Software Reliability Models Based on Logarithmic Learning Effects (대수형 학습효과에 근거한 소프트웨어 신뢰모형에 관한 통계적 공정관리 비교 연구)

  • Kim, Kyung-Soo;Kim, Hee-Cheul
    • Journal of Digital Convergence
    • /
    • v.11 no.12
    • /
    • pp.319-326
    • /
    • 2013
  • Many software reliability models are based on the times at which errors occur during debugging. Although software error detection techniques may be known in advance, we present and compare models that account both for errors found automatically and for the learning factors, gained from prior experience, that help the testing manager pinpoint error sources precisely. It is shown that asymptotic likelihood inference is possible for software reliability models based on infinite-failure models and non-homogeneous Poisson processes (NHPP). Statistical process control (SPC) can monitor software failure forecasts and thereby contribute significantly to the improvement of software reliability; control charts are widely used for software process control in the software industry. In this paper, we propose a control mechanism based on an NHPP whose mean value function has the logarithmic hazard learning-effect property.
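
As a hedged sketch of an NHPP-based control mechanism, the code below uses the Musa-Okumoto logarithmic mean value function, a standard infinite-failure NHPP, together with normal-approximation probability limits for the cumulative failure count. The paper's specific learning-effect hazard and chart design may differ, and all parameter values are assumptions.

```python
import numpy as np

def mo_mean_value(t, lam0, theta):
    """Musa-Okumoto logarithmic mean value function, an
    infinite-failure NHPP: m(t) = log(1 + lam0*theta*t) / theta."""
    return np.log1p(lam0 * theta * t) / theta

def control_limits(t_grid, lam0, theta, z=3.0):
    """Normal-approximation control limits for the cumulative
    failure count N(t) ~ Poisson(m(t)): m(t) +/- z*sqrt(m(t))."""
    m = mo_mean_value(t_grid, lam0, theta)
    half = z * np.sqrt(m)
    return np.maximum(m - half, 0.0), m, m + half

t = np.linspace(10, 100, 4)
lcl, cl, ucl = control_limits(t, lam0=0.5, theta=0.1)
print(np.c_[t, lcl, cl, ucl].round(2))  # columns: t, LCL, CL, UCL
```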

Predicting claim size in the auto insurance with relative error: a panel data approach (상대오차예측을 이용한 자동차 보험의 손해액 예측: 패널자료를 이용한 연구)

  • Park, Heungsun
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.5
    • /
    • pp.697-710
    • /
    • 2021
  • Relative error prediction is preferred over ordinary prediction methods when relative or percentile errors are regarded as important, especially in econometrics, software engineering and government official statistics. Relative error prediction techniques have been developed for linear and nonlinear regression, nonparametric regression using kernel smoothers, and stationary time series models; however, random effect models have not yet been used for relative error prediction. The purpose of this article is to extend relative error prediction to generalized linear mixed models (GLMMs) with panel data, i.e. random effect models based on the gamma, lognormal, or inverse Gaussian distribution. For better understanding, real auto insurance data are used to predict claim size, and the best predictor and the best relative error predictor are comparatively illustrated.
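
The key quantity behind relative error prediction can be shown concretely: minimizing E[((Y − t)/Y)²] over t gives t* = E[Y⁻¹]/E[Y⁻²], which for a Gamma(shape, rate) claim size reduces to (shape − 2)/rate. The Monte Carlo check below verifies this closed form on simulated data; it illustrates only the marginal gamma case, not the paper's GLMM predictors that condition on panel history.

```python
import numpy as np

def best_predictor_gamma(shape, rate):
    """Best mean-squared-error predictor: E[Y] = shape / rate."""
    return shape / rate

def best_rel_error_predictor_gamma(shape, rate):
    """Minimiser of E[((Y - t)/Y)^2], i.e. E[1/Y] / E[1/Y^2],
    which is (shape - 2) / rate for Gamma(shape, rate), shape > 2."""
    return (shape - 2.0) / rate

# Monte Carlo check: the empirical relative-error loss is minimised
# near the closed-form value, well below the L2-best predictor E[Y]
rng = np.random.default_rng(3)
y = rng.gamma(shape=5.0, scale=1.0 / 2.0, size=200_000)  # rate = 2
t_grid = np.linspace(0.5, 3.0, 251)
loss = [np.mean(((y - t) / y) ** 2) for t in t_grid]
print(t_grid[np.argmin(loss)],
      best_rel_error_predictor_gamma(5.0, 2.0),   # 1.5
      best_predictor_gamma(5.0, 2.0))             # 2.5
```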

Development of Freeway Traffic Incident Clearance Time Prediction Model by Accident Level (사고등급별 고속도로 교통사고 처리시간 예측모형 개발)

  • LEE, Soong-bong;HAN, Dong Hee;LEE, Young-Ihn
    • Journal of Korean Society of Transportation
    • /
    • v.33 no.5
    • /
    • pp.497-507
    • /
    • 2015
  • Nonrecurrent congestion on freeways is caused primarily by incidents, and the main cause of incidents is known to be traffic accidents. Accurate prediction of traffic incident clearance time is therefore very important in accident management. Freeway traffic accident data from 2008 to 2014 were analyzed for this study, and a KNN (K-Nearest Neighbor) algorithm was employed to develop an incident clearance time prediction model from the historical accident data. Analysis of the accident data shows that accident level significantly affects incident clearance time, so clearance time was categorized by accident level. Data were stratified by traffic volume, number of lanes and time period to account for traffic conditions and roadway geometry, and factors affecting incident clearance time were analyzed from the extracted data to identify similar types of accidents. Lastly, weights for the detailed factors were calculated to define the distance metric, using a standardization method based on the normal distribution, and incident clearance time was then predicted. The model showed a lower prediction error (MAPE) than models from previous studies. The improved model developed in this study is expected to contribute to efficient highway operation management when incidents occur.
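
A generic weighted-distance KNN regressor, sketched below, shows the prediction step the abstract describes: standardize the features, weight each coordinate in the Euclidean metric, and average the clearance times of the k nearest historical accidents. The features, weights, and data are hypothetical stand-ins for the paper's accident-level-specific factors.

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=5, weights=None):
    """Weighted-distance KNN regression: standardise features, weight
    each coordinate of the Euclidean metric, and average the targets
    of the k nearest training cases."""
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    Z, z = (X_train - mu) / sd, (x_new - mu) / sd
    w = np.ones(Z.shape[1]) if weights is None else np.asarray(weights, float)
    d = np.sqrt((w * (Z - z) ** 2).sum(axis=1))
    return y_train[np.argsort(d)[:k]].mean()

# hypothetical features: traffic volume, number of lanes, severity score
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))
y = 30 + 10 * X[:, 2] + rng.normal(scale=5.0, size=300)  # minutes
print(round(knn_predict(X, y, X[0], k=7, weights=[0.2, 0.2, 0.6]), 1))
```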

A Study on Developing Crash Prediction Model for Urban Intersections Considering Random Effects (임의효과를 고려한 도심지 교차로 교통사고모형 개발에 관한 연구)

  • Lee, Sang Hyuk;Park, Min Ho;Woo, Yong Han
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.14 no.1
    • /
    • pp.85-93
    • /
    • 2015
  • Previous studies have estimated crash prediction models with fixed effect models, which assume fixed coefficient values without considering the characteristics of individual intersections. However, the fixed effect model tends to underestimate standard errors, resulting in overestimated t-values. To overcome these shortcomings, a random effect model can be used that accounts for heterogeneity in AADT, geometric information and unobserved factors. In this study, data were collected from 89 intersections in Daejeon, and crash prediction models were estimated using both random and fixed effect negative binomial regression for comparison of the two models. In the estimation results, AADT, speed limits, number of lanes, exclusive right-turn pockets and front traffic signals were found to be significant. Comparing model fit, the random effect model achieved a log-likelihood at convergence of -1537.802, against -1691.327 for the fixed effect model, and the likelihood ratio index was 0.279 for the random effect model versus 0.207 for the fixed effect model. This means the random effect model improves statistical fit relative to the fixed effect model.
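
The model comparison in this abstract can be made concrete with a short sketch: a negative binomial log-likelihood in the mean/overdispersion parameterization common to crash models (Var = μ + αμ²), plus a likelihood-ratio statistic computed from the two log-likelihoods reported above. The chi-square reference with df = 1 is a simplification; testing a variance component on the boundary strictly calls for a mixture distribution.

```python
import numpy as np
from scipy.stats import nbinom, chi2

def nb_loglik(y, mu, alpha):
    """Negative binomial log-likelihood with mean mu and
    overdispersion alpha, so that Var(Y) = mu + alpha * mu**2."""
    r = 1.0 / alpha
    p = r / (r + mu)
    return float(nbinom.logpmf(y, r, p).sum())

def lr_test(ll_restricted, ll_full, df=1):
    """Likelihood-ratio statistic and (approximate) p-value."""
    stat = 2.0 * (ll_full - ll_restricted)
    return stat, chi2.sf(stat, df)

print(round(nb_loglik(np.array([3, 0, 5, 2, 1]), mu=2.0, alpha=0.5), 3))
# with the log-likelihoods reported above (fixed vs random effects):
print(lr_test(-1691.327, -1537.802))   # statistic ~ 307, p ~ 0
```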

Estimation of conditional mean residual life function with random censored data (임의중단자료에서의 조건부 평균잔여수명함수 추정)

  • Lee, Won-Kee;Song, Myung-Unn;Jeong, Seong-Hwa
    • Journal of the Korean Data and Information Science Society
    • /
    • v.22 no.1
    • /
    • pp.89-97
    • /
    • 2011
  • The aims of this study were to propose a method for estimating the mean residual life function (MRLF) from the conditional survival function using Buckley and James's (1979) pseudo random variables, and to assess the performance of the proposed method through simulation studies. The mean squared errors (MSE) of the proposed method were smaller than those of Cox's proportional hazards model (PHM) and Beran's nonparametric method in the non-PHM case; furthermore, in the PHM case, the MSEs of the proposed method were similar to those of Cox's PHM. Finally, to evaluate its suitability for practical use, we applied the proposed method to gastric cancer data consisting of 1,192 patients with gastric cancer who underwent surgery at the Department of Surgery, K-University Hospital.
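
A plain Kaplan-Meier route to the mean residual life, MRL(t0) = ∫ S(u)du / S(t0) integrated from t0 up to the largest observed time, is sketched below on simulated censored data. This is a hedged stand-in: the paper's method builds the conditional (covariate-indexed) MRLF from Buckley-James pseudo random variables, which is not replicated here.

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier survival estimate at the ordered observed times."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    at_risk = len(t) - np.arange(len(t))
    return t, np.cumprod(1.0 - d / at_risk)

def surv_at(t_steps, s_steps, u):
    """Right-continuous step-function lookup of the KM curve."""
    idx = np.searchsorted(t_steps, u, side="right") - 1
    return np.where(idx < 0, 1.0, s_steps[np.clip(idx, 0, None)])

def mean_residual_life(time, event, t0, n_grid=2000):
    """MRL(t0) = integral_{t0}^{t_max} S(u) du / S(t0), with the
    integral truncated at the largest observed time (the usual
    restriction under right censoring)."""
    t, s = km_curve(time, event)
    grid = np.linspace(t0, t.max(), n_grid)
    sv = surv_at(t, s, grid)
    area = np.sum(0.5 * (sv[1:] + sv[:-1]) * np.diff(grid))
    return area / sv[0]

rng = np.random.default_rng(5)
T = rng.exponential(10.0, 300)            # true lifetimes
C = rng.exponential(15.0, 300)            # censoring times
time, event = np.minimum(T, C), (T <= C).astype(float)
# memoryless exponential: MRL(5) is about 10 (truncation biases it low)
print(round(mean_residual_life(time, event, t0=5.0), 2))
```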

Nonparametric Detection of a Discontinuity Point in the Variance Function with the Second Moment Function

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society
    • /
    • v.16 no.3
    • /
    • pp.591-601
    • /
    • 2005
  • In this paper we consider detection of a discontinuity point in the variance function. When the mean function is discontinuous at a point, the variance function is usually discontinuous there as well, and in that case it is better to estimate the location of the discontinuity point from the mean function rather than the variance function. Here, by contrast, we consider the case where only the variance function has a discontinuity point. The second moment function can serve as the target function for estimating the location, since the variance function and the second moment function share the same discontinuity location and jump size. We propose a nonparametric method for detecting the discontinuity point via the second moment function, give asymptotic results for the estimators, and demonstrate by computer simulation the improved performance of the method over existing ones.
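
The estimation idea the abstract describes can be sketched directly: smooth Y² with one-sided kernels from the left and the right of each candidate point, and take the location where the two estimates differ most as the discontinuity estimate. The Gaussian one-sided kernel and the toy model with a variance jump at x = 0.5 are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def one_sided_nw(x, y2, grid, h, side):
    """One-sided Nadaraya-Watson estimate of E[Y^2 | X = g] using only
    observations to the left (side=-1) or right (side=+1) of g."""
    est = np.empty(len(grid))
    for i, g in enumerate(grid):
        u = (x - g) / h
        k = np.exp(-0.5 * u ** 2) * ((side * u) > 0)
        est[i] = np.sum(k * y2) / max(np.sum(k), 1e-12)
    return est

# toy model: smooth mean, variance jumps from 0.25 to 2.25 at x = 0.5
rng = np.random.default_rng(6)
x = rng.uniform(0.0, 1.0, 1000)
sigma = np.where(x < 0.5, 0.5, 1.5)
y = np.sin(2.0 * np.pi * x) + sigma * rng.normal(size=1000)

grid = np.linspace(0.1, 0.9, 161)
left = one_sided_nw(x, y ** 2, grid, h=0.05, side=-1)
right = one_sided_nw(x, y ** 2, grid, h=0.05, side=+1)
print("estimated discontinuity at x =",
      grid[np.argmax(np.abs(right - left))].round(3))
```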
