• Title/Summary/Keyword: 자기회귀계수 (autoregressive coefficient)

Estimation for random coefficient autoregressive model (확률계수 자기회귀 모형의 추정)

  • Kim, Ju Sung;Lee, Sung Duck;Jo, Na Rae;Ham, In Suk
• The Korean Journal of Applied Statistics / v.29 no.1 / pp.257-266 / 2016
  • Random Coefficient Autoregressive (RCA) models have attracted increased interest due to their wide range of applications in biology, economics, meteorology and finance. We consider the RCA model appropriate for capturing non-linear properties, beyond what a linear AR model offers. We study methods of RCA parameter estimation, and in particular we propose the special case in which the random coefficient ${\phi}(t)$ has the initial value ${\phi}(0)$ in the RCA model. In a practical study, we estimated the parameters and compared the Prediction Error Sum of Squares (PRESS) criterion between the AR and RCA models using Korean mumps data.
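
As a concrete illustration of the model class (a minimal Python sketch, not the authors' code; the RCA(1) form y_t = (phi + b_t) y_{t-1} + e_t and all parameter values below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an RCA(1) process: y_t = (phi + b_t) * y_{t-1} + e_t,
# where b_t perturbs the autoregressive coefficient at each step.
n, phi, sigma_b, sigma_e = 500, 0.5, 0.3, 1.0
y = np.zeros(n)
for t in range(1, n):
    b_t = rng.normal(0.0, sigma_b)
    y[t] = (phi + b_t) * y[t - 1] + rng.normal(0.0, sigma_e)

# Least squares estimate of the mean coefficient phi
# (same closed form as the AR(1) least squares estimator).
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

# PRESS-style criterion: sum of squared one-step prediction errors.
press = np.sum((y[1:] - phi_hat * y[:-1]) ** 2)
print(f"phi_hat = {phi_hat:.3f}, PRESS = {press:.1f}")
```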

Small Sample Asymptotic Inference for Autoregressive Coefficients (자기회귀계수에 대한 소표본 점근추론)

  • Na, Jong-Hwa;Kim, Jeong-Suk;Jang, Yeong-Mi
• Proceedings of the Korean Statistical Society Conference / 2005.05a / pp.209-213 / 2005
  • In this paper, we study approximate inference methods for the distribution functions of various estimators of the autoregressive coefficient in the first-order autoregressive model. The approximation, based on saddlepoint approximation results for quadratic forms, requires no separate derivation of an approximate distribution for each type of estimator, and it is highly accurate not only in small samples but also in the regions of main interest for statistical inference. Simulation studies confirm that it outperforms existing approximations, including the Edgeworth approximation.
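
The reduction underlying the saddlepoint approach is standard and worth stating (the notation below is generic, not the authors'): the least squares estimator of the AR(1) coefficient is a ratio of quadratic forms in the observation vector $Y$, so each value of its distribution function is a single quadratic-form probability,

$$\hat{\phi} = \frac{\sum_{t=2}^{n} y_t\,y_{t-1}}{\sum_{t=2}^{n} y_{t-1}^2} = \frac{Y^\top A\,Y}{Y^\top B\,Y}, \qquad P(\hat{\phi} \le r) = P\{\,Y^\top (A - rB)\,Y \le 0\,\},$$

and saddlepoint approximations for quadratic forms in normal variables apply directly to the right-hand side, one evaluation point $r$ at a time.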

Bayesian Estimation for Panel Autoregressive Processes with Covariates (공변량을 갖는 패널자기회귀 과정에 대한 베이즈추정)

  • 신민웅;신기일
• Communications for Statistical Applications and Methods / v.1 no.1 / pp.94-101 / 1994
  • This paper takes a Bayesian approach to estimating the autoregressive coefficients in a panel autoregressive model, computing the posterior distribution specifically via Gibbs sampling. A simulation study also shows that the Bayesian estimates of the autoregressive coefficients obtained by Gibbs sampling are superior to estimates obtained by non-Bayesian methods.
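
The panel model with covariates is not spelled out in the abstract, so the following is only a minimal sketch of the Gibbs sampling idea for a single AR(1) series, assuming a flat prior on phi and the usual noninformative prior on sigma^2 (all assumptions for illustration); conditional conjugacy then makes both full conditionals standard distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated AR(1) data: y_t = phi * y_{t-1} + e_t.
n, phi_true = 300, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

x, z = y[:-1], y[1:]                    # lagged regressor and response
sxx, sxy = np.sum(x * x), np.sum(x * z)

# Gibbs sampler: flat prior on phi, p(sigma^2) proportional to 1/sigma^2.
sigma2, draws = 1.0, []
for _ in range(2000):
    # phi | sigma^2, y ~ N(sxy / sxx, sigma^2 / sxx)
    phi = rng.normal(sxy / sxx, np.sqrt(sigma2 / sxx))
    # sigma^2 | phi, y ~ Inverse-Gamma(m / 2, SSE / 2) with m = len(z)
    sse = np.sum((z - phi * x) ** 2)
    sigma2 = sse / (2.0 * rng.gamma(len(z) / 2.0))
    draws.append(phi)

print("posterior mean of phi:", np.mean(draws[500:]))
```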

Small Sample Asymptotic Inferences for Autoregressive Coefficients via Saddlepoint Approximation (안장점근사를 이용한 자기회귀계수에 대한 소표본 점근추론)

  • Na, Jong-Hwa;Kim, Jeong-Sook
• The Korean Journal of Applied Statistics / v.20 no.1 / pp.103-115 / 2007
  • In this paper we study small sample asymptotic inference for the autoregressive coefficient in the AR(1) model. Based on saddlepoint approximations to the distributions of quadratic forms, we suggest a new approximation to the distribution of estimators of the noncircular autoregressive coefficient. Simulation results show that the suggested methods are very accurate even for small sample sizes and in the extreme tail areas.
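
As a hedged numerical companion (not the paper's method), the sketch below evaluates the exact CDF of the AR(1) least squares estimator by Monte Carlo through the quadratic-form identity P(phi_hat <= r) = P(Y'(A - rB)Y <= 0); this is the target quantity against which a saddlepoint approximation would be checked in small samples.

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi, r = 20, 0.5, 0.5              # small sample; CDF evaluation point r

# Covariance of a stationary Gaussian AR(1): Sigma[i, j] = phi^|i-j| / (1 - phi^2).
idx = np.arange(n)
Sigma = phi ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - phi ** 2)

# Symmetric matrices with phi_hat = (Y'AY) / (Y'BY).
A = np.zeros((n, n))
B = np.zeros((n, n))
for t in range(1, n):
    A[t, t - 1] = A[t - 1, t] = 0.5   # encodes sum of y_t * y_{t-1}
    B[t - 1, t - 1] = 1.0             # encodes sum of y_{t-1}^2

# Monte Carlo estimate of P(Y'(A - rB)Y <= 0) = P(phi_hat <= r).
L = np.linalg.cholesky(Sigma)
Y = L @ rng.standard_normal((n, 200_000))
q = np.sum(Y * ((A - r * B) @ Y), axis=0)
print("P(phi_hat <= r) ~", np.mean(q <= 0))
```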

Autocovariance based estimation in the linear regression model (선형회귀 모형에서 자기공분산 기반 추정)

  • Park, Cheol-Yong
• Journal of the Korean Data and Information Science Society / v.22 no.5 / pp.839-847 / 2011
  • In this study, we derive an estimator based on autocovariance for the vector of regression coefficients in the multiple linear regression model. The method was suggested by Park (2009), and although it does not seem intuitively attractive, the estimator is unbiased for the regression coefficients vector. When the vectors of explanatory variables satisfy some regularity conditions, and under mild conditions which hold when the errors come from autoregressive and moving average models, this estimator has asymptotically the same distribution as the least squares estimator and also converges in probability to the regression coefficients vector. Finally, we provide a simulation study showing that the aforementioned theoretical results hold in small-sample cases.
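
The autocovariance-based estimator itself is defined in Park (2009) and not reproduced in the abstract, so the sketch below sets up only the kind of Monte Carlo comparison described: generating a regression with autocorrelated errors and recording the sampling behavior of the least squares baseline. The AR(1) error model and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta0, beta1, rho = 100, 1.0, 2.0, 0.6
x = np.linspace(0.0, 1.0, n)          # equally spaced design points
X = np.column_stack([np.ones(n), x])

est = []
for _ in range(2000):
    # AR(1) errors: e_t = rho * e_{t-1} + u_t.
    u = rng.standard_normal(n)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + u[t]
    y = beta0 + beta1 * x + e
    est.append(np.linalg.lstsq(X, y, rcond=None)[0])

est = np.array(est)
print("mean of OLS estimates:", est.mean(axis=0))  # unbiasedness check
print("sd of OLS estimates:  ", est.std(axis=0))
```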

A Study on Speech Recognition using Recurrent Neural Networks (회귀신경망을 이용한 음성인식에 관한 연구)

  • 한학용;김주성;허강인
• The Journal of the Acoustical Society of Korea / v.18 no.3 / pp.62-67 / 1999
  • In this paper, we investigate a reliable model of the predictive recurrent neural network for speech recognition. Predictive neural networks are modeled in syllable units: for a given input syllable, the model that gives the minimum prediction error is taken as the recognition result. The predictive neural network was given a recurrent structure so that the dynamic features of the speech pattern are incorporated into the network. We compared the recognition ability of the recurrent networks proposed by Elman and Jordan. ETRI's SAMDORI was used as the speech database. In order to find a reliable neural network model, we compared recognition rates while (1) changing the prediction order and the number of hidden units, and (2) accumulating previous values in the context units with a self-loop coefficient. The results show that the optimum prediction order, number of hidden units, and self-loop coefficient respond differently depending on the network structure used. In general, however, Jordan's recurrent network shows a relatively higher recognition rate than Elman's. The effect of the self-loop coefficient on the recognition rate varied according to the network structure and its values.
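
A minimal sketch (not the paper's predictive network) of the structural difference the abstract turns on: an Elman network feeds the previous hidden state back into the hidden layer, while a Jordan network feeds the previous output back through context units governed by a self-loop coefficient alpha. All sizes and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d_in, d_hid, d_out, alpha = 12, 8, 12, 0.5   # alpha: Jordan self-loop coefficient

W_in = rng.normal(size=(d_hid, d_in))
W_ctx = rng.normal(size=(d_hid, d_hid))      # Elman: context = previous hidden state
W_ctx_j = rng.normal(size=(d_hid, d_out))    # Jordan: context = output feedback
W_out = rng.normal(size=(d_out, d_hid))

def elman_step(x, h_prev):
    h = np.tanh(W_in @ x + W_ctx @ h_prev)   # hidden state fed back directly
    return W_out @ h, h

def jordan_step(x, c_prev, y_prev):
    c = alpha * c_prev + y_prev              # context accumulates past outputs
    h = np.tanh(W_in @ x + W_ctx_j @ c)
    return W_out @ h, c

# One pass over a toy frame sequence with both recurrences.
h, c = np.zeros(d_hid), np.zeros(d_out)
y_e = y_j = np.zeros(d_out)
for x in rng.normal(size=(5, d_in)):
    y_e, h = elman_step(x, h)
    y_j, c = jordan_step(x, c, y_j)
print("Elman output:", np.round(y_e[:3], 2), "| Jordan output:", np.round(y_j[:3], 2))
```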

Bayesian Method for the Multiple Test of an Autoregressive Parameter in Stationary AR(1) Model (AR(1)모형에서 자기회귀계수의 다중검정을 위한 베이지안방법)

  • 김경숙;손영숙
• The Korean Journal of Applied Statistics / v.16 no.1 / pp.141-150 / 2003
  • This paper presents a multiple testing method for the autoregressive parameter in the stationary AR(1) model using the usual Bayes factor. As prior distributions for the parameters in each model, a uniform prior and noninformative improper priors are assumed. Posterior probabilities obtained through the usual Bayes factors are used for model selection. Finally, to check that these theoretical results are correct, simulated data and real data are analyzed.
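
As a hedged numerical sketch of the Bayes factor machinery (simplified to a known error variance and only the uniform-prior component, so not the paper's full setup), the code below compares M0: phi = 0 against M1: phi ~ U(-1, 1) for an AR(1) series by integrating the conditional likelihood over the prior.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated AR(1) data with known unit error variance.
n, phi_true = 200, 0.3
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()
x, z = y[:-1], y[1:]

def loglik(phi):
    # Gaussian log likelihood conditional on y_1, with sigma^2 = 1
    # (the constant term cancels in the Bayes factor).
    return -0.5 * np.sum((z - phi * x) ** 2)

# Marginal likelihood under M1: integrate the likelihood over phi ~ U(-1, 1).
grid = np.linspace(-0.999, 0.999, 4001)
ll = np.array([loglik(p) for p in grid])
step = grid[1] - grid[0]
log_m1 = ll.max() + np.log(np.sum(np.exp(ll - ll.max())) * step / 2.0)
log_m0 = loglik(0.0)                   # M0: phi = 0, no free parameter

log_bf10 = log_m1 - log_m0
post_m1 = 1.0 / (1.0 + np.exp(-log_bf10))  # equal prior model probabilities
print(f"log BF10 = {log_bf10:.2f}, P(M1 | y) = {post_m1:.3f}")
```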

An estimation method based on autocovariance in the simple linear regression model (단순 선형회귀 모형에서 자기공분산에 근거한 최적 추정 방법)

  • Park, Cheol-Yong
• Journal of the Korean Data and Information Science Society / v.20 no.2 / pp.251-260 / 2009
  • In this study, we propose a new estimation method based on autocovariance for selecting optimal estimators of the regression coefficients in the simple linear regression model. Although this method does not seem intuitively attractive, these estimators are unbiased for the corresponding regression coefficients. When the explanatory variable takes equally spaced values between 0 and 1, and under mild conditions which are satisfied when the errors follow an autoregressive moving average model, we show that these estimators have asymptotically the same distributions as the least squares estimators. Additionally, under the same conditions, we provide a self-contained proof that these estimators converge in probability to the corresponding regression coefficients.

Filtered Coupling Measures for Variable Selection in Sparse Vector Autoregressive Modeling (필터링된 잔차를 이용한 희박벡터자기회귀모형에서의 변수 선택 측도)

  • Lee, Seungkyu;Baek, Changryong
• The Korean Journal of Applied Statistics / v.28 no.5 / pp.871-883 / 2015
  • Vector autoregressive (VAR) models in high dimension suffer from noisy estimates, unstable predictions and hard interpretation. Consequently, the sparse vector autoregressive (sVAR) model, which forces many small coefficients in the VAR to exactly zero, has been suggested and proven effective for modeling high dimensional time series data. This paper studies coupling measures for selecting the non-zero coefficients in sVAR. A basic simulation study reveals that removing the effect of the other variables greatly improves the performance of coupling measures. Moreover, since sVAR model coefficients are asymmetric, asymmetric coupling measures such as Granger causality are more appropriate despite their additional computational cost. We propose two asymmetric coupling measures, filtered-cross-correlation and filtered-Granger-causality, based on filtered residual series. The simulation study shows that our proposed coupling measures remain adequate for heavy-tailed and high order sVAR models.
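
The key idea is computing coupling measures on residuals after filtering out each series' own dynamics. A minimal sketch, assuming univariate AR(1) prewhitening (the paper's exact filtering may differ):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two toy series where x drives y with a one-step lag.
n = 500
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def ar1_residuals(s):
    # Prewhiten a series by removing its own fitted AR(1) dynamics.
    phi = np.sum(s[1:] * s[:-1]) / np.sum(s[:-1] ** 2)
    return s[1:] - phi * s[:-1]

rx, ry = ar1_residuals(x), ar1_residuals(y)

# Filtered cross-correlation at lag 1 (x residuals leading y residuals):
# the directional (asymmetric) dependence survives the filtering.
fcc = np.corrcoef(rx[:-1], ry[1:])[0, 1]
print(f"filtered cross-correlation (lag 1): {fcc:.3f}")
```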

Adaptive lasso in sparse vector autoregressive models (Adaptive lasso를 이용한 희박벡터자기회귀모형에서의 변수 선택)

  • Lee, Sl Gi;Baek, Changryong
• The Korean Journal of Applied Statistics / v.29 no.1 / pp.27-39 / 2016
  • This paper considers variable selection in the sparse vector autoregressive (sVAR) model, where sparsity comes from setting small coefficients to exact zeros. From the estimation perspective, Davis et al. (2015) showed that the lasso type of regularization method is successful because it provides simultaneous variable selection and parameter estimation even for time series data. However, their simulation study reports that the regular lasso overestimates the number of non-zero coefficients, hence its finite sample performance needs improvement. In this article, we show that the adaptive lasso significantly improves performance: it finds sparsity patterns superior to those of the regular lasso. Tuning parameter selection for the adaptive lasso is also discussed based on the simulation study.
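
A minimal sketch of the adaptive lasso step for one VAR(1) system (not Davis et al.'s implementation): fit an initial ridge estimate per equation, form weights w_j = 1 / |beta_hat_j|, and solve the weighted L1 problem by rescaling the design columns before a plain lasso. All dimensions and penalty levels below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(7)

# Simulate a sparse 5-dimensional VAR(1): only a few nonzero coefficients.
k, n = 5, 400
Phi = np.zeros((k, k))
Phi[0, 0], Phi[1, 0], Phi[2, 2] = 0.6, 0.4, -0.5
Y = np.zeros((n, k))
for t in range(1, n):
    Y[t] = Y[t - 1] @ Phi.T + rng.normal(size=k)

X, Z = Y[:-1], Y[1:]              # lagged design and response

Phi_hat = np.zeros((k, k))
for j in range(k):                # one adaptive lasso per VAR equation
    init = Ridge(alpha=1.0, fit_intercept=False).fit(X, Z[:, j]).coef_
    w = 1.0 / (np.abs(init) + 1e-8)          # adaptive weights from initial fit
    lasso = Lasso(alpha=0.05, fit_intercept=False).fit(X / w, Z[:, j])
    Phi_hat[j] = lasso.coef_ / w             # undo the column rescaling
print(np.round(Phi_hat, 2))
```

The rescaling trick works because, with columns X_j / w_j, a plain lasso penalizes w_j * beta_j, which is exactly the weighted (adaptive) penalty on the original coefficients; dividing the fitted coefficients by w recovers them on the original scale.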