• Title/Summary/Keyword: Block Maxima Model


Usefulness and Limitations of Extreme Value Theory VAR model: The Korean Stock Market (극한치이론을 이용한 VAR 추정치의 유용성과 한계 - 우리나라 주식시장을 중심으로 -)

  • Kim, Kyu-Hyong; Lee, Joon-Haeng
    • The Korean Journal of Financial Management, v.22 no.1, pp.119-146, 2005
  • This study applies extreme value theory to obtain extreme-value VaR estimates for the Korean stock market and demonstrates the usefulness of the approach. The block maxima model and the POT model were used as extreme value models, and back-testing was used to determine which model was more appropriate. The block maxima model proved unstable: the variation of the estimate was very large depending on the confidence level, and the magnitude of the estimates depended strongly on the block size. This indicates that the block maxima model is not appropriate for the Korean stock market. The POT model, by contrast, was relatively stable, even though the extreme-value VaR depended on the selection of the critical value (threshold). Back-testing also showed that extreme-value VaR outperformed delta VaR above the 97.5% confidence level. The POT model performs better at higher confidence levels, which suggests that it is useful as a risk-management tool, especially for VaR estimates at confidence levels above 99%. This study treats the right and left tails of the return distribution separately and estimates an EVT-VaR for each, reflecting the asymmetry of the return distribution of the Korean stock market.

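The contrast the first abstract draws between the two estimators can be sketched in code. The following is a minimal, illustrative comparison on simulated fat-tailed returns, not the paper's actual data or procedure; the block size, threshold quantile, and confidence level are assumptions.

```python
# Hypothetical sketch: estimate a one-day VaR from simulated daily returns
# with (a) block maxima fitted to a GEV and (b) POT exceedances fitted to
# a GPD. All parameters here are illustrative, not from the paper.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=5000) * 0.01   # fat-tailed daily returns
losses = -returns                                  # work with the loss tail

# (a) Block maxima: split losses into blocks (~1 trading month) and
# fit a GEV to the block maxima.
block_size = 21
n_blocks = losses.size // block_size
block_max = losses[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
c, loc, scale = genextreme.fit(block_max)

# VaR at confidence p: since P(block max <= x) = F(x)^m for IID losses,
# the daily level p maps to level p**m on the block-maximum scale.
p = 0.99
var_bm = genextreme.ppf(p ** block_size, c, loc, scale)

# (b) POT: fit a GPD to exceedances over a high threshold u.
u = np.quantile(losses, 0.95)
exc = losses[losses > u] - u
c2, _, scale2 = genpareto.fit(exc, floc=0.0)
zeta = exc.size / losses.size                      # exceedance probability
var_pot = u + genpareto.ppf(1 - (1 - p) / zeta, c2, loc=0.0, scale=scale2)

print(var_bm, var_pot)
```

The block-size sensitivity the authors report corresponds to `var_bm` changing as `block_size` varies, while `var_pot` depends instead on the chosen threshold `u`.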

Estimation of VaR Using Extreme Losses, and Back-Testing: Case Study (극단 손실값들을 이용한 VaR의 추정과 사후검정: 사례분석)

  • Seo, Sung-Hyo; Kim, Sung-Gon
    • The Korean Journal of Applied Statistics, v.23 no.2, pp.219-234, 2010
  • For index investment tracking the KOSPI, we estimate Value at Risk (VaR) from the extreme losses of the daily KOSPI returns. To this end, we apply the Block Maxima (BM) model, one of the standard models in extreme value theory. We also estimate the extremal index to account for dependency in the occurrence of extreme losses. Back-testing based on the failure-rate method shows that the model is suitable for VaR estimation. We also compare this model with the GARCH model, which is commonly used for VaR estimation. Back-testing indicates no meaningful difference between the two models if the conditional returns are assumed to follow a t-distribution. However, the VaR estimated from the GARCH model is sensitive to extreme losses that occur near the time of estimation, while the BM-based estimate is not. Thus, the GARCH model is preferred for short-term prediction, whereas the BM model is better for long-term prediction.
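The failure-rate back-test mentioned in this abstract admits a compact sketch: count how often realized losses breach the VaR estimate and check that count against its binomial benchmark. The data and the stand-in VaR below are illustrative assumptions, not the paper's results.

```python
# Illustrative failure-rate (Kupiec-style) back-test: the fraction of days
# on which the realized loss exceeds the VaR estimate should be close to
# 1 - p. The simulated data and stand-in VaR are assumptions.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(1)
losses = rng.standard_t(df=5, size=2000) * 0.01
p = 0.99
var_est = np.quantile(losses, p)        # stand-in for a BM- or GARCH-based VaR

violations = int((losses > var_est).sum())
failure_rate = violations / losses.size

# Are the observed violations consistent with Binomial(n, 1 - p)?
result = binomtest(violations, n=losses.size, p=1 - p)
print(failure_rate, result.pvalue)
```

A small p-value would reject the VaR model; here the empirical quantile breaches almost exactly 1% of days by construction, so the test passes.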

Extreme Value Analysis of Statistically Independent Stochastic Variables

  • Choi, Yongho; Yeon, Seong Mo; Kim, Hyunjoe; Lee, Dongyeon
    • Journal of Ocean Engineering and Technology, v.33 no.3, pp.222-228, 2019
  • An extreme value analysis (EVA) is essential to obtain a design value for highly nonlinear variables such as long-term environmental data for wind and waves, and slamming or sloshing impact pressures. According to extreme value theory (EVT), the extreme value distribution is derived by multiplying the initial cumulative distribution functions for independent and identically distributed (IID) random variables. However, in the position mooring rules of DNVGL, the sampled global maxima of the mooring line tension are assumed to be IID stochastic variables without checking their independence, and the ITTC Recommended Procedures and Guidelines for Sloshing Model Tests do not address the independence of the sampled data at all. Hence, a design value estimated without an IID check may be under- or over-estimated, because observations far from a Weibull or generalized Pareto distribution (GPD) are treated as outliers. In this study, the sampled data are first checked for the IID property as part of the EVA. When the variables are not IID, an automatic resampling scheme is recommended, using the block maxima approach for a generalized extreme value (GEV) distribution and the peaks-over-threshold (POT) approach for a GPD. A partial autocorrelation function (PACF) is used to check whether the variables are IID. In this study, a single 5 h sample of sloshing test results was used for a feasibility study of the IID-resampling approach. Based on this study, resampling to IID variables may reduce the number of outliers, and a statistically more appropriate design value can be achieved with independent samples.
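The check-then-resample idea in this abstract can be illustrated briefly: test for serial dependence (at lag 1, the PACF coincides with the ordinary autocorrelation), and if dependence is found, resample the series to block maxima. The AR(1) data, the significance band, and the block length below are assumptions for illustration only.

```python
# Rough sketch of the IID-check-then-resample idea: measure lag-1
# autocorrelation (equal to the PACF at lag 1), and if it exceeds the
# approximate 95% IID band, resample to block maxima. Data, band, and
# block length are illustrative assumptions.
import numpy as np

def lag1_autocorr(x):
    """Sample autocorrelation at lag 1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(2)
eps = rng.normal(size=3000)
peaks = np.empty_like(eps)              # AR(1) pressure-peak series: not IID
peaks[0] = eps[0]
for t in range(1, eps.size):
    peaks[t] = 0.7 * peaks[t - 1] + eps[t]

r1 = lag1_autocorr(peaks)
ci = 1.96 / np.sqrt(peaks.size)         # approximate 95% band under IID

if abs(r1) > ci:                        # dependence detected -> resample
    m = 50                              # block length (assumed)
    n_blocks = peaks.size // m
    block_max = peaks[:n_blocks * m].reshape(n_blocks, m).max(axis=1)
    r1_resampled = lag1_autocorr(block_max)
    print(r1, r1_resampled)
```

Block maxima of a short-memory series are far less dependent than the raw peaks, which is the effect the resampling scheme relies on.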

Multivariate design estimations under copulas constructions. Stage-1: Parametrical density constructions for defining flood marginals for the Kelantan River basin, Malaysia

  • Latif, Shahid; Mustafa, Firuza
    • Ocean Systems Engineering, v.9 no.3, pp.287-328, 2019
  • Comprehensive flood risk assessment via frequency analysis often demands multivariate designs under different notions of the return period. A flood is a tri-variate random event, which exposes the unreliability of univariate return periods and calls for a joint dependency structure that accounts for the multiple intercorrelated flood vectors, i.e., flood peak, volume, and duration. Selecting the most parsimonious probability functions for the univariate flood marginal distributions is a mandatory pre-processing step before establishing the joint dependency, especially under the copula methodology, which allows the practitioner to model the univariate marginals separately from their joint construction. Parametric density approximation hypothesizes that the random samples follow some specific or predefined probability density function, and different choices usually yield different estimates, especially in the tails of the distributions. The upper tail is of particular interest in flood modelling; moreover, no evidence favours any fixed distribution, so candidates are usually characterized through a trial-and-error procedure based on goodness-of-fit measures. On the other hand, model performance evaluation and selection of the best-fitted distributions demand precise investigation by comparing relative sample-reproducing capabilities; otherwise, inconsistencies may introduce uncertainty. The strengths and weaknesses of different fitness statistics also vary in their ability to reveal gaps and discrepancies among the fitted distributions. In this study, the marginal distributions of the flood variables are selected by employing an interactive set of parametric functions for event-based (or block annual maxima) samples over 50 years of continuously distributed streamflow characteristics for the Kelantan River basin at Guillemard Bridge, Malaysia. Model fitness criteria are examined based on the degree of agreement between cumulative empirical and theoretical probabilities. Both analytical and graphical inspections are undertaken to strengthen the evidence in favour of the best-fitted probability density.
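The trial-and-error marginal selection described above can be sketched as a loop over candidate distributions ranked by a goodness-of-fit statistic. The candidate set, the simulated annual-maxima sample, and the choice of the Kolmogorov-Smirnov statistic are assumptions for illustration, not the paper's actual candidate list or criteria.

```python
# Illustrative marginal-selection loop: fit several candidate parametric
# densities to an annual-maxima flood sample and rank them by the
# Kolmogorov-Smirnov statistic. Candidates and data are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max_flow = stats.gumbel_r.rvs(loc=500.0, scale=120.0, size=50,
                                     random_state=rng)  # 50 years of maxima

candidates = {
    "gumbel_r": stats.gumbel_r,
    "genextreme": stats.genextreme,
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(annual_max_flow)              # maximum likelihood fit
    ks = stats.kstest(annual_max_flow, dist.cdf, args=params)
    results[name] = ks.statistic                    # smaller = better agreement

best = min(results, key=results.get)
print(sorted(results.items(), key=lambda kv: kv[1]))
```

In practice, as the abstract stresses, several fitness statistics and visual checks (probability and quantile plots) should be combined rather than relying on a single ranking.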

Comparison of log-logistic and generalized extreme value distributions for predicted return level of earthquake (지진 재현수준 예측에 대한 로그-로지스틱 분포와 일반화 극단값 분포의 비교)

  • Ko, Nak Gyeong; Ha, Il Do; Jang, Dae Heung
    • The Korean Journal of Applied Statistics, v.33 no.1, pp.107-114, 2020
  • Extreme value distributions have often been used for the analysis (e.g., prediction of return levels) of data observed from natural disasters. By extreme value theory, the block maxima asymptotically follow the generalized extreme value distribution as the sample size increases; however, this may not hold in small samples. To address this problem, this paper proposes the use of a log-logistic (LLG) distribution, whose validity is evaluated through goodness-of-fit tests and model selection. The proposed method is illustrated with annual maximum earthquake magnitudes from China. We present the predicted return level and confidence interval for each return period using the LLG distribution.
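The comparison in this abstract can be sketched as fitting both families to a small annual-maximum sample, comparing them by AIC, and reading off a return level from the preferred fit. The simulated magnitudes stand in for the Chinese earthquake catalogue, and fixing the log-logistic location at zero is an assumption; scipy's log-logistic is the `fisk` distribution.

```python
# Sketch of the comparison: fit a GEV and a log-logistic (scipy's `fisk`)
# to a small annual-maximum sample, compare by AIC, and compute a return
# level. Simulated stand-in data, not the paper's earthquake catalogue.
import numpy as np
from scipy.stats import genextreme, fisk

rng = np.random.default_rng(4)
annual_max_mag = 5.0 + rng.weibull(2.0, size=30)   # 30 annual maxima (assumed)

def aic(loglik, k):
    """Akaike information criterion from a log-likelihood and k parameters."""
    return 2 * k - 2 * loglik

gev_params = genextreme.fit(annual_max_mag)                 # 3 free parameters
llg_params = fisk.fit(annual_max_mag, floc=0.0)             # loc fixed at 0

aic_gev = aic(genextreme.logpdf(annual_max_mag, *gev_params).sum(), 3)
aic_llg = aic(fisk.logpdf(annual_max_mag, *llg_params).sum(), 2)

# T-year return level: the magnitude exceeded once every T years on average.
T = 100
use_gev = aic_gev < aic_llg
dist, params = (genextreme, gev_params) if use_gev else (fisk, llg_params)
return_level = dist.ppf(1 - 1 / T, *params)
print(aic_gev, aic_llg, return_level)
```

With only 30 maxima, the AIC ranking is fragile, which is exactly the small-sample concern motivating the paper's goodness-of-fit evaluation.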

Concept of Seasonality Analysis of Hydrologic Extreme Variables and Design Rainfall Estimation Using Nonstationary Frequency Analysis (극치수문자료의 계절성 분석 개념 및 비정상성 빈도해석을 이용한 확률강수량 해석)

  • Lee, Jeong-Ju; Kwon, Hyun-Han; Hwang, Kyu-Nam
    • Journal of Korea Water Resources Association, v.43 no.8, pp.733-745, 2010
  • The seasonality of hydrologic extreme variables is a significant element from a water resources management point of view. It is closely related to various fields such as dam operation, flood control, and irrigation water management. Hydrological frequency analysis based on partial duration series, rather than block maxima, offers benefits that include data expansion and the analysis of seasonality and occurrence. In this study, a nonstationary frequency analysis based on a Bayesian model is suggested that is effectively linked with the advantages of POT (peaks over threshold) analysis, which retains seasonality information. A threshold at the upper 98% of the 24-hour-duration rainfall was applied to extract the POT series at the Seoul station, and the goodness of fit of the selected GEV distribution was examined through graphical representation. The seasonal variation of the location and scale parameters ($\mu$ and $\sigma$) of the GEV distribution was represented by a Fourier series, and the posterior distributions were estimated by Bayesian Markov chain Monte Carlo simulation. The design rainfall, estimated from the GEV quantile function and the derived posterior distribution of the Fourier coefficients, is illustrated for a wide range of return periods. The nonstationary frequency analysis considering seasonality can reasonably reproduce the underlying extreme distribution and simultaneously provide a full annual cycle of the design rainfall.
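Two ingredients of this last approach are easy to sketch: extracting a POT series with an upper-98% threshold, and letting the GEV location parameter vary seasonally through a first-order Fourier series. The rainfall series and the Fourier coefficients below are illustrative assumptions; the Bayesian MCMC estimation of those coefficients is beyond this sketch.

```python
# Minimal sketch: extract a POT series above the upper-98% threshold and
# express a seasonally varying GEV location as a first-order Fourier
# series. Rainfall data and coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(3650)                               # 10 years of daily data
seasonal = 10.0 + 8.0 * np.sin(2 * np.pi * days / 365.25)
rainfall = rng.gamma(shape=2.0, scale=seasonal / 2.0)  # wetter in one season

u = np.quantile(rainfall, 0.98)                      # upper-98% threshold
pot_days = days[rainfall > u]
pot_values = rainfall[rainfall > u]

def mu(t, mu0, a1, b1, period=365.25):
    """Seasonally varying GEV location as a first-order Fourier series."""
    w = 2 * np.pi * t / period
    return mu0 + a1 * np.cos(w) + b1 * np.sin(w)

# Evaluate the (assumed) seasonal location over the POT occurrence dates;
# in the paper, mu0, a1, b1 would be posterior draws from the MCMC.
mu_t = mu(pot_days, mu0=float(u), a1=2.0, b1=5.0)
print(u, pot_values.size, float(mu_t.mean()))
```

A higher-order Fourier series adds `cos(k*w)`/`sin(k*w)` pairs in the same way, and the scale parameter $\sigma$ can be expanded identically (typically on a log scale to keep it positive).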