• Title/Summary/Keyword: VHAR

4 search results

Banded vector heterogeneous autoregression models

  • Kim, Sangtae; Baek, Changryong
    • The Korean Journal of Applied Statistics, v.36 no.6, pp.529-545, 2023
  • This paper introduces the Banded-VHAR model, suited to high-dimensional long-memory time series with a band structure. In the Banded-VHAR model, non-negligible correlations occur only between adjacent dimensions, a feature arising from data characteristics such as geographical proximity. A row-wise estimation method is adopted for fast computation, and two methods, based on BIC and on a ratio criterion, are proposed to estimate the width of the band. A simulation study demonstrates the asymptotic consistency of the proposed estimation methods. Real-data applications to PM2.5 concentrations and apartment trading volumes substantiate that the Banded-VHAR model outperforms the traditional sparse VHAR model in forecasting and yields more interpretable coefficients.
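The VHAR structure underlying these papers regresses each series on daily, weekly, and monthly averages of its past (the standard lags 1, 5, and 22), and the banded variant restricts each row's regressors to neighboring series. A minimal numpy sketch of that idea, assuming the usual (1, 5, 22) horizons and plain row-wise OLS; the paper's actual estimator and its band-width selection (BIC/ratio) are not reproduced here:

```python
import numpy as np

def vhar_design(y, h=(1, 5, 22)):
    """Build VHAR regressors: for each horizon h_k, the average of the
    previous h_k observations (daily/weekly/monthly components)."""
    T, d = y.shape
    p = max(h)
    rows = []
    for t in range(p, T):
        feats = [y[t - k:t].mean(axis=0) for k in h]
        rows.append(np.concatenate(feats))
    return np.array(rows), y[p:]          # X: (T-p, 3d), Y: (T-p, d)

def banded_mask(d, width):
    """0/1 mask keeping only coefficients within `width` of the diagonal."""
    i = np.arange(d)
    return (np.abs(i[:, None] - i[None, :]) <= width).astype(float)

def fit_banded_vhar(y, width, h=(1, 5, 22)):
    """Row-wise OLS where row j only uses regressors from series within
    `width` of series j -- a sketch of banded, row-wise estimation."""
    X, Y = vhar_design(y, h)
    d = Y.shape[1]
    mask = banded_mask(d, width)
    B = np.zeros((d, len(h) * d))
    for j in range(d):
        idx = np.where(np.tile(mask[j], len(h)) > 0)[0]
        B[j, idx] = np.linalg.lstsq(X[:, idx], Y[:, j], rcond=None)[0]
    return B
```

Row-wise fitting makes each of the d regressions independent, which is what allows the fast, parallelizable computation the abstract refers to.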

Outlier detection for multivariate long memory processes

  • Kim, Kyunghee; Yu, Seungyeon; Baek, Changryong
    • The Korean Journal of Applied Statistics, v.35 no.3, pp.395-406, 2022
  • This paper studies outlier detection for multivariate long-memory time series. Existing outlier detection methods are based on short-memory VARMA models and are therefore ill-suited to multivariate long-memory series: a high autoregressive order is required to capture long memory, but this induces estimation instability as the number of parameters grows. To resolve this issue, we propose outlier detection methods based on the VHAR structure and adopt a robust estimation method to estimate the VHAR coefficients more efficiently. Simulation results show that the proposed method performs well in detecting outliers in multivariate long-memory time series, and an empirical analysis of stock indices shows that the RVHAR model finds additional outliers that the existing model does not detect.
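A common way to operationalize model-based outlier screening is to flag time points with unusually large residuals, measured against a robust scale so the outliers themselves do not inflate the threshold. A small sketch of that generic idea (the MAD scale and the `c = 3` cutoff are illustrative choices, not the paper's actual procedure):

```python
import numpy as np

def flag_outliers(resid, c=3.0):
    """Flag time points where any series' residual exceeds c robust SDs.

    Uses median/MAD instead of mean/SD so the very outliers being hunted
    do not inflate the scale estimate (illustrative sketch only).
    """
    med = np.median(resid, axis=0)
    mad = np.median(np.abs(resid - med), axis=0) / 0.6745  # ~SD under normality
    z = np.abs(resid - med) / mad
    return np.where(z.max(axis=1) > c)[0]
```

In the VHAR setting, `resid` would be the residual matrix from a (robustly) fitted VHAR model, one column per series.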

Sparse vector heterogeneous autoregressive model with nonconvex penalties

  • Shin, Andrew Jaeho; Park, Minsu; Baek, Changryong
    • Communications for Statistical Applications and Methods, v.29 no.1, pp.53-64, 2022
  • High-dimensional time series have gained considerable attention in recent years. The sparse vector heterogeneous autoregressive (VHAR) model proposed by Baek and Park (2020) uses the adaptive lasso with a debiasing procedure in estimation and showed superb forecasting performance for realized volatilities. This paper extends the sparse VHAR model by considering non-convex penalties such as SCAD and MCP, whose penalty design promises reduced bias. Finite-sample performance of the three estimation methods is compared through Monte Carlo simulation. Our study shows, first, that taking cross-sectional correlations into account reduces bias. Second, the non-convex penalties perform better when the sample size is small, whereas the adaptive lasso with debiasing performs well as the sample size increases. An empirical analysis based on 20 multinational realized volatilities is also provided.
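The bias reduction mentioned above comes from the shape of these penalties: unlike the lasso, SCAD and MCP taper off and become flat for large coefficients, so large effects are not shrunk. A sketch of the two penalty functions with their customary default tuning constants (a = 3.7 for SCAD, γ = 3 for MCP); the penalized fitting algorithm itself is not shown:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty (Fan and Li, 2001): lasso-like near zero, quadratic
    transition, then constant beyond a*lam."""
    b = np.abs(beta)
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))

def mcp_penalty(beta, lam, gamma=3.0):
    """MCP penalty (Zhang, 2010): quadratic taper, constant beyond gamma*lam."""
    b = np.abs(beta)
    return np.where(b <= gamma * lam,
                    lam * b - b**2 / (2 * gamma),
                    gamma * lam**2 / 2)
```

Both functions equal the lasso penalty λ|β| near the origin (preserving sparsity) and plateau for large |β| (removing the lasso's constant shrinkage, hence the reduced bias).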

Controlling the false discovery rate in sparse VHAR models using knockoffs

  • Park, Minsu; Lee, Jaewon; Baek, Changryong
    • The Korean Journal of Applied Statistics, v.35 no.6, pp.685-701, 2022
  • The FDR is widely used in high-dimensional inference because it provides a more liberal criterion than the FWER, which controls Type-I errors very conservatively. This paper proposes a sparse VHAR estimation method that controls the FDR by adopting the knockoff filter introduced by Barber and Candès (2015). We also compare the knockoff with the conventional adaptive lasso (AL) approach through an extensive simulation study. We observe that AL achieves sparsistency and decent forecasting performance but is unsatisfactory in controlling the FDR; specifically, AL tends to estimate zero coefficients as non-zero. The knockoff, on the other hand, controls the FDR well below the desired level, but it selects too sparse a model when the sample size is small. The knockoff improves dramatically as the sample size increases and as the model becomes sparser.
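The knockoff filter's FDR control comes from a data-dependent threshold on feature statistics W_j, where each knockoff copy acts as a negative control: large positive W_j is evidence for a real variable, while negative W_j counts phantom selections. A sketch of the knockoff+ threshold of Barber and Candès (2015), applied to an illustrative vector of statistics (constructing the knockoff variables and the W statistics themselves is omitted):

```python
import numpy as np

def knockoff_threshold(W, q=0.2):
    """Knockoff+ threshold: smallest t with
    (1 + #{W_j <= -t}) / max(#{W_j >= t}, 1) <= q."""
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf  # nothing selected at this level

# Illustrative statistics: mostly positive (signal), a few negative (noise).
W = np.array([5.0, 4.0, -1.0, 3.0, 2.5, -0.5, 2.0, 1.5])
t = knockoff_threshold(W, q=0.3)
selected = np.where(W >= t)[0]
```

Because the sign pattern of the noise statistics is symmetric under the knockoff construction, selecting {j : W_j ≥ t} with this threshold guarantees FDR ≤ q.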