• Title/Summary/Keyword: extreme value distribution (극단값 분포)

Search results: 24

Prediction of recent earthquake magnitudes of Gyeongju and Pohang using historical earthquake data of the Chosun Dynasty (조선시대 역사지진자료를 이용한 경주와 포항의 최근 지진규모 예측)

  • Kim, Jun Cheol;Kwon, Sookhee;Jang, Dae-Heung;Rhee, Kun Woo;Kim, Young-Seog;Ha, Il Do
    • The Korean Journal of Applied Statistics, v.35 no.1, pp.119-129, 2022
  • In this paper, we predict the earthquake magnitudes that recently occurred in Gyeongju and Pohang, using statistical methods based on historical data. For this purpose, we use the five-year block maxima over the 1392~1771 period, which has a relatively high annual density, from the historical earthquake magnitude data of the Chosun Dynasty. We then present predictions and analysis of earthquake magnitudes via the return level over various return periods, using extreme value theory based on the generalized extreme value (GEV) distribution. We use maximum likelihood estimation (MLE) and L-moments estimation for the parameters of the GEV distribution. In particular, this study also demonstrates via goodness-of-fit tests that the GEV distribution can be an appropriate analytical model for these historical earthquake magnitude data.
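The core computation in this abstract, fitting a GEV to block maxima by MLE and reading off a return level, can be sketched as follows. The data below are simulated stand-ins, not the historical Chosun-era records, and all parameter values are illustrative assumptions.

```python
# Sketch: fit a GEV distribution to block maxima by maximum likelihood
# and compute a return level. Simulated data, illustrative parameters.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Simulated five-year block maxima (placeholder for the 1392~1771 series).
block_maxima = genextreme.rvs(c=-0.1, loc=5.0, scale=0.5, size=76, random_state=rng)

# MLE fit; note SciPy's shape c equals minus the usual GEV shape xi.
c, loc, scale = genextreme.fit(block_maxima)

# T-block return level: the level exceeded on average once every T blocks,
# i.e. the (1 - 1/T) quantile of the fitted GEV.
T = 20  # 20 five-year blocks = a 100-year return period
return_level = genextreme.ppf(1 - 1/T, c, loc=loc, scale=scale)
print(round(return_level, 3))
```

The paper also uses L-moments estimation, which is not shown here; `genextreme.fit` performs plain MLE.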

Parametric and nonparametric methods for estimating extreme value distribution (극단값 분포 추정을 위한 모수적 비모수적 방법)

  • Woo, Seunghyun;Kang, Kee-Hoon
    • The Journal of the Convergence on Culture Technology, v.8 no.1, pp.531-536, 2022
  • This paper compares the performance of parametric and nonparametric methods for estimating the tail of a heavy-tailed distribution. For the parametric method, the generalized extreme value distribution and the generalized Pareto distribution were used; for the nonparametric method, kernel density estimation was applied. To compare the two approaches, we show the function estimates obtained by applying the block maxima model and the threshold excess model to public daily fine dust data from each observatory in Seoul from 2014 to 2018. In addition, the areas where high concentrations of fine dust are expected to occur were predicted through the return level.
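The parametric-versus-nonparametric comparison described above can be sketched like this: a GPD fit to threshold excesses against a kernel density estimate, each used to estimate the same tail probability. Simulated heavy-tailed data stand in for the Seoul fine dust series; the threshold choice is an assumption.

```python
# Sketch: parametric GPD tail estimate vs. nonparametric KDE tail estimate.
# Simulated heavy-tailed data; the 90% threshold is an illustrative choice.
import numpy as np
from scipy.stats import genpareto, gaussian_kde

rng = np.random.default_rng(1)
data = genpareto.rvs(c=0.2, scale=10.0, size=2000, random_state=rng)

# Parametric route: fit a GPD to excesses over a high threshold.
u = np.quantile(data, 0.90)
excesses = data[data > u] - u
c, loc, scale = genpareto.fit(excesses, floc=0.0)

# Nonparametric route: kernel density estimate of the full sample.
kde = gaussian_kde(data)

# Tail probability P(X > x) at a far point, by each method.
x = np.quantile(data, 0.99)
p_gpd = (excesses.size / data.size) * genpareto.sf(x - u, c, loc=0.0, scale=scale)
p_kde = kde.integrate_box_1d(x, np.inf)
print(p_gpd, p_kde)
```

In heavy-tailed settings the KDE typically degrades far in the tail, which is the motivation for the parametric comparison in the paper.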

A Bayesian Extreme Value Analysis of KOSPI Data (코스피 지수 자료의 베이지안 극단값 분석)

  • Yun, Seok-Hoon
    • The Korean Journal of Applied Statistics, v.24 no.5, pp.833-845, 2011
  • This paper conducts a statistical analysis of extreme values for both daily log-returns and daily negative log-returns, computed from KOSPI data collected from January 3, 1998 to August 31, 2011. The Poisson-GPD model is used as the statistical model for extreme values, and the maximum likelihood method is applied to estimate the parameters and extreme quantiles. A Bayesian method assuming the usual noninformative prior distribution for the parameters is also added to the Poisson-GPD model, with the Markov chain Monte Carlo method applied to estimate the parameters and extreme quantiles. According to this analysis, the maximum likelihood method and the Bayesian method reach the same conclusion: the distribution of the log-returns has a shorter right tail than the normal distribution, while the distribution of the negative log-returns has a heavier right tail than the normal distribution. An advantage of the Bayesian method in extreme value analysis is that one need not rely on the classical asymptotic properties of the maximum likelihood estimators even when the regularity conditions are not satisfied, and that prediction can effectively reflect the uncertainty from both the parameters and a future observation.
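The Bayesian side of such an analysis can be sketched with a simple random-walk Metropolis sampler for the GPD scale and shape under a flat (noninformative) prior. The excesses are simulated, not the KOSPI returns, and the proposal step size is an assumption.

```python
# Sketch: random-walk Metropolis for GPD (scale, shape) under a flat prior
# on (log sigma, xi). Simulated excesses; tuning constants are assumptions.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
excess = genpareto.rvs(c=0.15, scale=1.0, size=500, random_state=rng)

def log_post(log_sigma, xi):
    # Flat prior on (log sigma, xi) plus the GPD log-likelihood.
    ll = genpareto.logpdf(excess, xi, loc=0.0, scale=np.exp(log_sigma)).sum()
    return ll if np.isfinite(ll) else -np.inf

theta = np.array([0.0, 0.1])                 # start: log sigma = 0, xi = 0.1
lp = log_post(*theta)
chain = []
for _ in range(4000):
    prop = theta + rng.normal(0, 0.05, size=2)   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain)[1000:]               # discard burn-in
sigma_post = np.exp(chain[:, 0]).mean()
xi_post = chain[:, 1].mean()
print(round(sigma_post, 3), round(xi_post, 3))
```

The full Poisson-GPD model also samples the Poisson exceedance rate; with a noninformative prior that part has a simple Gamma-like posterior and is omitted here for brevity.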

Analysis of Extreme Values of Daily Percentage Increases and Decreases in Crude Oil Spot Prices (국제현물원유가의 일일 상승 및 하락율의 극단값 분석)

  • Yun, Seok-Hoon
    • The Korean Journal of Applied Statistics, v.23 no.5, pp.835-844, 2010
  • Tools for statistical analysis of extreme values include the classical annual maximum method, the modern threshold method, and variants improving the latter. While the annual maximum method fits the generalized extreme value distribution to the annual maxima of a time series, the threshold method fits the generalized Pareto distribution to the excesses over a high threshold from the series. In this paper we deal with the Poisson-GPD method, a variant of the threshold method with the further assumption that the total number of exceedances follows the Poisson distribution, and apply it to the daily percentage increases and decreases computed from the spot prices of West Texas Intermediate, collected from January 4th, 1988 until December 31st, 2009. According to this analysis, the distribution of daily percentage increases as well as decreases turns out to have a heavy tail, unlike the normal distribution, which coincides well with the general phenomenon appearing in the analysis of much of today's financial data.
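Under the Poisson-GPD model, the N-year return level has a closed form: with yearly exceedance rate lambda over threshold u and GPD parameters (sigma, xi), it is u + (sigma/xi)((lambda*N)^xi - 1). The numbers below are illustrative assumptions, not the paper's WTI estimates.

```python
# Sketch: the closed-form N-year return level under the Poisson-GPD model.
# All parameter values are illustrative assumptions.
import numpy as np

u = 3.0                # threshold on daily percentage change (assumed)
lam = 12.0             # mean number of exceedances per year (assumed)
sigma, xi = 0.8, 0.2   # fitted GPD scale and shape (assumed)

def return_level(N, u, lam, sigma, xi):
    """Level exceeded on average once every N years under Poisson-GPD."""
    if abs(xi) < 1e-12:                       # Gumbel-type limit as xi -> 0
        return u + sigma * np.log(lam * N)
    return u + (sigma / xi) * ((lam * N) ** xi - 1.0)

for N in (10, 50):
    print(N, round(return_level(N, u, lam, sigma, xi), 3))
```

The xi -> 0 branch matters in practice because fitted shape parameters near zero otherwise cause numerical blow-up in the main formula.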

A Hierarchical Bayesian Modeling of Temporal Trends in Return Levels for Extreme Precipitations (한국지역 집중호우에 대한 반환주기의 베이지안 모형 분석)

  • Kim, Yongku
    • The Korean Journal of Applied Statistics, v.28 no.2, pp.137-149, 2015
  • Flood planning needs to recognize trends for extreme precipitation events. In particular, the r-year return level is a common measure for extreme events. In this paper, we present a nonstationary temporal model for precipitation return levels using hierarchical Bayesian modeling. For intensity, we model annual maximum daily precipitation measured in Korea with a generalized extreme value (GEV) distribution. Temporal dependence among the return levels is incorporated into the model through the GEV parameters, using a linear model with autoregressive error terms. We apply the proposed model to precipitation data collected from various stations in Korea from 1973 to 2011.
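The nonstationary ingredient of such a model, a GEV whose location parameter carries a linear trend in time, can be sketched by direct likelihood maximization. This is a frequentist stand-in for the paper's hierarchical Bayesian fit, with simulated annual maxima rather than the Korean precipitation records.

```python
# Sketch: nonstationary GEV with a linear time trend in the location
# parameter, mu_t = b0 + b1 * t, fitted by MLE. Simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(3)
t = np.arange(39)     # 39 years, mirroring 1973-2011, coded 0..38
y = genextreme.rvs(-0.1, loc=100 + 0.8 * t, scale=20, size=39, random_state=rng)

def nll(params):
    # Negative log-likelihood with time-varying location.
    b0, b1, log_scale, c = params
    val = -genextreme.logpdf(y, c, loc=b0 + b1 * t, scale=np.exp(log_scale)).sum()
    return val if np.isfinite(val) else 1e10

start = [y.mean(), 0.0, np.log(y.std()), -0.1]
res = minimize(nll, x0=start, method="Nelder-Mead")
b0, b1, log_scale, c = res.x
print(round(b1, 3))   # estimated per-year trend in the location parameter
```

The hierarchical Bayesian version would place priors on (b0, b1, scale, shape) and add autoregressive errors, but the likelihood above is the same building block.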

Estimation for the Change of Daily Maxima Temperature (일일 최고기온의 변화에 대한 추정)

  • Ko, Wang-Kyung
    • The Korean Journal of Applied Statistics, v.20 no.1, pp.1-9, 2007
  • This investigation of the change in daily maximum temperatures in Seoul, Daegu, Chuncheon, and Yeongcheon was triggered by news items that the earth is getting warmer and a recent report that Korea is getting warmer due to this climatic change. A statistical analysis of the June daily maxima in Seoul over a 45-year period revealed a positive trend of 1.1190 degrees centigrade over the 45 years, a change of 0.0249 degrees annually. Given the large variation in these maximum temperatures, one can question the significance of this increase. To check the goodness of fit of the proposed extreme value model, we show a Q-Q plot of the observed quantiles against the simulated quantiles and a probability plot, and we calculate statistics for each month and a tolerance limit. The significance is tested by simulating a large number of similar datasets from an extreme value distribution that described the observed data very well. Only 0.02% of the simulated datasets showed an increase this large or larger, meaning that the probability of such an event occurring by chance is very low.
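The simulation test described above can be sketched as a parametric bootstrap: simulate many trend-free datasets from a fitted GEV, fit a linear trend to each, and count how often a total increase at least as large as the observed one appears by chance. The GEV parameters below are assumptions, so the resulting proportion will not reproduce the paper's 0.02%.

```python
# Sketch: parametric bootstrap p-value for an observed warming trend under
# a trend-free GEV null. GEV parameters are illustrative assumptions.
import numpy as np
from scipy.stats import genextreme, linregress

rng = np.random.default_rng(4)
n_years = 45
observed_increase = 0.0249 * n_years   # 1.1190 C over 45 years (abstract)

c, loc, scale = -0.1, 29.0, 1.5        # assumed GEV fit to June daily maxima
years = np.arange(n_years)

n_sim, count = 2000, 0
for _ in range(n_sim):
    sim = genextreme.rvs(c, loc=loc, scale=scale, size=n_years, random_state=rng)
    slope = linregress(years, sim).slope
    if slope * n_years >= observed_increase:   # total increase at least as large
        count += 1
print(count / n_sim)                           # approximate p-value
```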

Comparison of log-logistic and generalized extreme value distributions for predicted return level of earthquake (지진 재현수준 예측에 대한 로그-로지스틱 분포와 일반화 극단값 분포의 비교)

  • Ko, Nak Gyeong;Ha, Il Do;Jang, Dae Heung
    • The Korean Journal of Applied Statistics, v.33 no.1, pp.107-114, 2020
  • Extreme value distributions have often been used for the analysis (e.g., prediction of the return level) of data observed from natural disasters. By extreme value theory, block maxima asymptotically follow the generalized extreme value distribution as the sample size increases; however, this may not hold in small samples. To address this problem, this paper proposes the use of a log-logistic (LLG) distribution, whose validity is evaluated through a goodness-of-fit test and model selection. The proposed method is illustrated with data on annual maximum earthquake magnitudes in China. We present the predicted return level and confidence interval for each return period using the LLG distribution.
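The log-logistic alternative is directly available in SciPy, where it is called `fisk`, so the return level for a given return period is simply a quantile of the fitted distribution. Simulated annual maxima stand in for the Chinese earthquake magnitudes, and the parameter values are assumptions.

```python
# Sketch: fit a log-logistic (SciPy's `fisk`) distribution to annual maxima
# and read return levels off its quantile function. Simulated data.
import numpy as np
from scipy.stats import fisk

rng = np.random.default_rng(5)
annual_max = fisk.rvs(c=8.0, scale=5.5, size=40, random_state=rng)

# MLE fit with location fixed at zero.
c, loc, scale = fisk.fit(annual_max, floc=0.0)

# T-year return level = (1 - 1/T) quantile of the fitted distribution.
for T in (10, 50, 100):
    level = fisk.ppf(1 - 1/T, c, loc=loc, scale=scale)
    print(T, round(level, 2))
```

The paper's point is that in small samples this parametric family can out-fit the asymptotically justified GEV, which is why it pairs the fit with goodness-of-fit tests and model selection.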

A Study of Outlier Detection Using the Mixture of Extreme Distributions Based on Deep-Sea Fishery Data (원양어선 조업 데이터의 혼합 극단분포를 이용한 이상점 탐색 연구)

  • Lee, Jung Jin;Kim, Jae Kyoung
    • The Korean Journal of Applied Statistics, v.28 no.5, pp.847-858, 2015
  • Deep-sea fishery in the Antarctic Ocean has been actively pursued by developed countries including Korea. In order to prevent environmental destruction of the Antarctic Ocean, the countries concerned have established the Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR) and have monitored illegal, unreported, or unregulated fishing. Fishing of toothfish, an expensive fish, in the Antarctic Ocean has increased recently, and high catches per unit effort (CPUE) of fishing boats, which are suspicious of illegal activity, have been frequently reported. The CPUE data in a fishing area of the Antarctic Ocean often show an extreme value distribution or a mixture of two extreme value distributions. This paper proposes an algorithm to detect outliers in CPUEs by using a mixture of two extreme value distributions. The parameters of the mixture distribution are estimated by the EM algorithm, and the log-likelihood value and posterior probabilities are used to detect outliers. Experiments show that the proposed algorithm can be adopted in place of simple criteria such as flagging any CPUE greater than 1.
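The mixture-plus-EM idea can be sketched with a two-component mixture of Gumbel (extreme value) distributions: the E-step computes posterior component probabilities, the M-step refits each component by weighted MLE, and points with high posterior probability of the rare high-CPUE component are flagged. The data and the 0.9 flagging cutoff are assumptions, not the paper's.

```python
# Sketch: EM for a two-component Gumbel mixture, then outlier flagging by
# posterior probability. Simulated CPUE-like data; cutoffs are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

rng = np.random.default_rng(6)
cpue = np.concatenate([
    gumbel_r.rvs(loc=0.3, scale=0.1, size=180, random_state=rng),  # normal boats
    gumbel_r.rvs(loc=1.2, scale=0.2, size=20, random_state=rng),   # suspicious
])

w, p1, p2 = 0.5, (0.2, 0.1), (1.0, 0.2)   # initial weight and (loc, scale) pairs

def weighted_fit(resp, start):
    """M-step: weighted Gumbel MLE via numerical optimization."""
    def nll(th):
        loc, log_scale = th
        return -(resp * gumbel_r.logpdf(cpue, loc=loc, scale=np.exp(log_scale))).sum()
    loc, log_scale = minimize(nll, [start[0], np.log(start[1])], method="Nelder-Mead").x
    return loc, np.exp(log_scale)

for _ in range(30):                        # EM iterations
    d1 = w * gumbel_r.pdf(cpue, loc=p1[0], scale=p1[1])
    d2 = (1 - w) * gumbel_r.pdf(cpue, loc=p2[0], scale=p2[1])
    r1 = d1 / (d1 + d2)                    # E-step: posterior of component 1
    w = r1.mean()
    p1 = weighted_fit(r1, p1)              # M-step for each component
    p2 = weighted_fit(1 - r1, p2)

outliers = cpue[(1 - r1) > 0.9]            # high posterior of the rare component
print(len(outliers))
```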

Extreme Quantile Estimation of Losses in KRW/USD Exchange Rate (원/달러 환율 투자 손실률에 대한 극단분위수 추정)

  • Yun, Seok-Hoon
    • Communications for Statistical Applications and Methods, v.16 no.5, pp.803-812, 2009
  • The application of extreme value theory to financial data is a fairly recent innovation. The classical annual maximum method fits the generalized extreme value distribution to the annual maxima of a data series. An alternative modern method, the so-called threshold method, fits the generalized Pareto distribution to the excesses over a high threshold from the data series. A more substantial variant is to take the point-process viewpoint of high-level exceedances: the exceedance times and excess values over a high threshold are viewed as a two-dimensional point process whose limiting form is a non-homogeneous Poisson process. In this paper, we apply the two-dimensional non-homogeneous Poisson process model to daily losses, i.e., daily negative log-returns, in the KRW/USD exchange rate series, collected from January 4th, 1982 until December 31st, 2008. The main question is how to estimate extreme quantiles of losses such as the 10-year or 50-year return level.
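In the point-process formulation, the expected number of exceedances of level x per year is (1 + xi(x - mu)/sigma)^(-1/xi) in GEV-scale parameters, so the N-year return level solves that expression equal to 1/N. A minimal sketch of the inversion, with illustrative parameter values rather than the paper's KRW/USD estimates:

```python
# Sketch: N-year return level from the point-process (NHPP) intensity
# (1 + xi*(x - mu)/sigma)^(-1/xi) per year. Illustrative parameters.
import numpy as np

mu, sigma, xi = 2.0, 0.5, 0.1   # assumed GEV-scale point-process parameters

def return_level(N):
    """Loss level exceeded on average once every N years."""
    # Set (1 + xi*(x - mu)/sigma)^(-1/xi) = 1/N and solve for x.
    return mu + (sigma / xi) * (N ** xi - 1.0)

print(round(return_level(10), 3), round(return_level(50), 3))
```

A convenience of this parameterization is that (mu, sigma, xi) do not change when the threshold is moved, unlike the GPD scale in the plain threshold method.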

Estimation of VaR Using Extreme Losses, and Back-Testing: Case Study (극단 손실값들을 이용한 VaR의 추정과 사후검정: 사례분석)

  • Seo, Sung-Hyo;Kim, Sung-Gon
    • The Korean Journal of Applied Statistics, v.23 no.2, pp.219-234, 2010
  • For index investing that follows KOSPI, we estimate the Value at Risk (VaR) from the extreme losses of the daily returns obtained from KOSPI. To this end, we apply the Block Maxima (BM) model, one of the useful models in extreme value theory. We also estimate the extremal index to account for dependency in the occurrence of extreme losses. Back-testing based on the failure rate method shows that the model is suitable for VaR estimation. We also compare this model with the GARCH model, which is commonly used for VaR estimation. Back-testing shows no meaningful difference between the two models if we assume that the conditional returns follow the t-distribution. However, the estimated VaR based on the GARCH model is sensitive to extreme losses occurring near the time of estimation, while that based on the BM model is not. Thus, the GARCH model is preferred for short-term prediction, while the BM model is better for long-term prediction.
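The BM-model VaR and the failure-rate back-test can be sketched as follows: fit a GEV to block maxima of losses, convert the daily VaR level into a GEV quantile via P(block max <= VaR) = (1 - p)^m for independent losses in blocks of size m, and then count exceedances in a hold-out period. All series are simulated, not KOSPI returns, and the independence assumption ignores the extremal index the paper estimates.

```python
# Sketch: block-maxima VaR plus a failure-rate back-test. Simulated
# t-distributed losses; block size and VaR level are assumptions.
import numpy as np
from scipy.stats import genextreme, t as student_t

rng = np.random.default_rng(7)
losses = student_t.rvs(df=4, size=2500, random_state=rng) * 0.01  # daily losses
train, test = losses[:2000], losses[2000:]

m = 20                                          # block size (about one month)
maxima = train[: (train.size // m) * m].reshape(-1, m).max(axis=1)
c, loc, scale = genextreme.fit(maxima)

# Daily VaR at level p: P(block max <= VaR) = (1 - p)^m  =>  a GEV quantile.
p = 0.01
var_p = genextreme.ppf((1 - p) ** m, c, loc=loc, scale=scale)

failure_rate = np.mean(test > var_p)            # back-test: exceedance frequency
print(round(var_p, 4), round(failure_rate, 4))
```

A back-test passes when the failure rate is statistically consistent with p; the paper's failure-rate method formalizes this comparison.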