• Title/Abstract/Keywords: statistical approach

Search Results: 2,335

Cyber risk measurement via loss distribution approach and GARCH model

  • Sanghee Kim;Seongjoo Song
    • Communications for Statistical Applications and Methods / v.30 no.1 / pp.75-94 / 2023
  • The growing prevalence of cyber risk has highlighted the importance of cyber risk management. Cyber risk is defined as an accidental or intentional risk related to information and technology assets. Although cyber risk is a subset of operational risk, it is reported to be handled differently from operational risk because its loss distribution has different features. In this study, we aim to characterize cyber losses and find a suitable model by measuring value at risk (VaR). We use the loss distribution approach (LDA) and a time series model to describe the cyber losses of financial and non-financial business sectors, provided in SAS® OpRisk Global Data. The peaks-over-threshold (POT) method is also incorporated to improve the risk measurement. For the financial sector, both the LDA and the GARCH model perform better with POT than without it. The same result is obtained for the non-financial sector, although the differences are not significant. We also build a two-dimensional model reflecting the dependence structure between the financial and non-financial sectors through a bivariate copula and check the model adequacy through VaR.
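The LDA step described above can be sketched in a few lines: simulate a compound-Poisson annual aggregate loss (Poisson frequency, lognormal severity) and read VaR off an empirical quantile. This is a minimal illustration with made-up parameters, not the distributions or values fitted in the paper, and it omits the GARCH and POT refinements.

```python
import numpy as np

def simulate_aggregate_losses(lam, mu, sigma, n_years, rng):
    """Compound-Poisson annual losses: Poisson(lam) event counts, lognormal severities."""
    counts = rng.poisson(lam, size=n_years)
    return np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

def lda_var(losses, level=0.999):
    """Value at risk as an empirical quantile of the simulated loss distribution."""
    return np.quantile(losses, level)

rng = np.random.default_rng(42)
# Hypothetical parameters: 10 events/year on average, lognormal(1.0, 1.5) severities
losses = simulate_aggregate_losses(lam=10, mu=1.0, sigma=1.5, n_years=100_000, rng=rng)
var_999 = lda_var(losses)          # 99.9% VaR, the usual operational-risk level
```

A POT refinement would replace the empirical tail above the chosen threshold with a fitted generalized Pareto distribution before taking the quantile.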

DR-LSTM: Dimension reduction based deep learning approach to predict stock price

  • Ah-ram Lee;Jae Youn Ahn;Ji Eun Choi;Kyongwon Kim
    • Communications for Statistical Applications and Methods / v.31 no.2 / pp.213-234 / 2024
  • In recent decades, increasing research attention has been directed toward predicting stock prices in financial markets using deep learning methods. For instance, the recurrent neural network (RNN) is known to be competitive for time-series datasets. Long short-term memory (LSTM) further improves the RNN by providing an alternative approach to the vanishing gradient problem, and gains predictive accuracy by retaining memory over longer horizons. In this paper, we combine both supervised and unsupervised dimension reduction methods with LSTM to enhance forecasting performance, and refer to this as the dimension reduction based LSTM (DR-LSTM) approach. As supervised dimension reduction methods, we use sliced inverse regression (SIR), sparse SIR, and kernel SIR. Principal component analysis (PCA), sparse PCA, and kernel PCA are used as unsupervised dimension reduction methods. Using datasets of real stock market indices (S&P 500, STOXX Europe 600, and KOSPI), we present a comparative study of predictive accuracy across the six DR-LSTM methods and classical time series modeling.
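The unsupervised reduction stage can be illustrated with a numpy-only PCA: project lagged features onto the top principal components before feeding them to a forecaster. This sketch covers only the dimension reduction step on synthetic data; the LSTM itself requires a deep learning framework and is omitted, and all sizes here are illustrative, not the paper's settings.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Center the data and project it onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt = component directions
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 20))          # toy data: 500 days, 20 lagged features
Z = pca_reduce(returns, n_components=3)       # reduced inputs that would feed the LSTM
```

Sparse or kernel variants replace the SVD with penalized or kernelized decompositions; supervised methods such as SIR additionally use the response when choosing directions.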

RS-based method for estimating statistical moments and its application to reliability analysis

  • Huh, Jae-Sung;Kwak, Byung-Man
    • Proceedings of the KSME Conference / 2004.11a / pp.852-857 / 2004
  • A new and efficient method for estimating the statistical moments of a system performance function has been developed. The method consists of two steps: (1) an approximate response surface is generated by a quadratic regression model, and (2) the statistical moments of the regression model are then calculated by the experimental design techniques proposed by Seo and Kwak [4]. In this approach, the size of the experimental region affects the accuracy of the statistical moments, so the region size should be selected suitably. The D-optimal design and the central composite design are adopted over the selected experimental region for the regression model. Finally, the Pearson system is adopted to decide the distribution type of the system performance function and to analyze structural reliability.

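The two-step idea above — fit a quadratic response surface, then take statistical moments of the surrogate — can be sketched as follows. Here the moments come from Monte Carlo sampling of the surrogate rather than the experimental design techniques of Seo and Kwak, and the performance function, design points, and input distribution are toy assumptions.

```python
import numpy as np

def fit_quadratic_surface(x, y):
    """Step 1: least-squares fit of y ~ c2*x^2 + c1*x + c0 (quadratic response surface)."""
    coef, *_ = np.linalg.lstsq(np.vander(x, 3), y, rcond=None)
    return coef

def surrogate_moments(coef, mu, sigma, n, rng):
    """Step 2: mean, variance, skewness, kurtosis of the surrogate under x ~ N(mu, sigma)."""
    g = np.polyval(coef, rng.normal(mu, sigma, size=n))
    c = g - g.mean()
    return g.mean(), c.var(), np.mean(c**3) / c.std()**3, np.mean(c**4) / c.var()**2

rng = np.random.default_rng(1)
x_design = np.linspace(-2.0, 2.0, 9)                   # small design over the chosen region
y_design = 1.0 + 0.5 * x_design + 0.3 * x_design**2    # toy noiseless performance function
coef = fit_quadratic_surface(x_design, y_design)       # recovers [0.3, 0.5, 1.0]
mean, var, skew, kurt = surrogate_moments(coef, 0.0, 1.0, 200_000, rng)
```

The four moments computed this way are exactly the inputs the Pearson system needs to select a distribution type for the performance function.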

Permutation Predictor Tests in Linear Regression

  • Ryu, Hye Min;Woo, Min Ah;Lee, Kyungjin;Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / v.20 no.2 / pp.147-155 / 2013
  • To determine whether each coefficient is equal to zero, the usual $t$-tests are a popular choice among practitioners in linear regression because all statistical packages provide the statistics and their corresponding $p$-values. With smaller samples, especially under non-normal errors, these tests often fail to correctly detect statistical significance. We propose a permutation approach that adopts a sufficient dimension reduction methodology to overcome this deficit. Numerical studies confirm that the proposed method has potential advantages over the $t$-tests. In addition, a data analysis is presented.
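The permutation principle behind such a test can be shown in its plainest form: shuffle the predictor, refit, and compare the observed slope against the permutation distribution. This sketch omits the sufficient dimension reduction step that is the paper's actual contribution; the data and sample sizes are made up.

```python
import numpy as np

def ols_coef(x, y):
    """Slope from a simple linear regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def permutation_pvalue(x, y, n_perm=2000, rng=None):
    """Two-sided permutation p-value for the slope: shuffle x, refit, compare."""
    if rng is None:
        rng = np.random.default_rng(0)
    observed = abs(ols_coef(x, y))
    perm = np.array([abs(ols_coef(rng.permutation(x), y)) for _ in range(n_perm)])
    return (1 + np.sum(perm >= observed)) / (1 + n_perm)  # add-one correction

rng = np.random.default_rng(7)
x = rng.normal(size=40)
y = 2.0 * x + rng.normal(size=40)     # strong signal, so a small p-value is expected
p = permutation_pvalue(x, y, rng=rng)
```

Because the reference distribution is built by resampling rather than from normal theory, the test's validity does not rest on normal errors, which is what makes it attractive for small samples.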

A Statistical Approach to the Pharmacokinetic Model

  • Lee, Eun-Kyung
    • The Korean Journal of Applied Statistics / v.23 no.3 / pp.511-520 / 2010
  • The pharmacokinetic model is a complex nonlinear model with pharmacokinetic parameters that is sometimes represented by a complex system of differential equations. A population pharmacokinetic model adds individual variability to the pharmacokinetic model using random effects, which amounts to a nonlinear mixed effects model. This paper reviews the population pharmacokinetic model from a statistical viewpoint; in addition, a population pharmacokinetic model is applied to real clinical data, along with a review of the statistical meaning of this model.
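A minimal numerical illustration of the idea — a structural PK model plus subject-level random effects — assuming a one-compartment oral model with lognormal random effects on the absorption and elimination rates. All parameter values are hypothetical, and the actual NLME estimation step is not shown.

```python
import numpy as np

def one_compartment(dose, ka, ke, V, t):
    """Oral one-compartment concentration: first-order absorption (ka) and elimination (ke)."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def population_curves(n_subjects, t, rng):
    """Add between-subject variability via lognormal random effects on ka and ke."""
    ka = 1.5 * np.exp(rng.normal(0.0, 0.2, n_subjects))   # population ka = 1.5 /h (toy value)
    ke = 0.2 * np.exp(rng.normal(0.0, 0.2, n_subjects))   # population ke = 0.2 /h (toy value)
    return np.array([one_compartment(100.0, a, e, 10.0, t) for a, e in zip(ka, ke)])

rng = np.random.default_rng(3)
t = np.linspace(0.1, 24.0, 50)        # sampling times in hours
curves = population_curves(20, t, rng)
```

Fitting the model to data would mean estimating the population parameters and the random-effect variances jointly, which is exactly the nonlinear mixed effects problem the paper reviews.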

A dynamic Bayesian approach for probability of default and stress test

  • Kim, Taeyoung;Park, Yousung
    • Communications for Statistical Applications and Methods / v.27 no.5 / pp.579-588 / 2020
  • Obligor defaults are cross-sectionally correlated because obligors share common economic conditions; in addition, obligors are longitudinally correlated, so that an economic shock like the IMF crisis in 1998 lasts for a period of time. This longitudinal correlation should be used to construct statistical scenarios for stress tests, replacing the type of artificial scenario that banks have used. We propose a Bayesian model to accommodate such correlation structures. Using 402 obligors of a domestic bank in Korea, our model with a dynamic correlation is compared to a Bayesian model with a stationary longitudinal correlation and to the classical logistic regression model. Our model generates statistical financial statements under a stress situation on an individual-obligor basis, so that the generated financial statements produce a distribution of credit grades similar to that observed during the IMF crisis and comply with the Basel IV (Basel Committee on Banking Supervision, 2017) requirement that credit grades under a stress situation not be sensitive to the business cycle.
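The classical comparator mentioned above — a plain logistic regression probability-of-default (PD) model — can be sketched with numpy alone; the Bayesian dynamic-correlation model itself would need an MCMC framework. The covariate, coefficients, and data here are synthetic assumptions, not the bank's obligor data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(x, y, lr=0.1, n_iter=2000):
    """Logistic regression by gradient descent: P(default) = sigmoid(w0 + w1*x)."""
    X = np.column_stack([np.ones(len(x)), x])
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)   # gradient of the negative log-likelihood
    return w

rng = np.random.default_rng(11)
ratio = rng.normal(size=400)                                    # toy covariate, e.g. a leverage ratio
y = (sigmoid(-1.0 + 1.5 * ratio) > rng.uniform(size=400)).astype(float)
w = fit_logistic(ratio, y)
pd_high = sigmoid(w[0] + w[1] * 2.0)   # PD for a highly levered obligor
```

A cross-sectionally static model like this treats each year independently, which is precisely the limitation the dynamic Bayesian correlation structure is meant to address.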