• Title/Summary/Keyword: Tail Probability (꼬리 확률)


A Study on Asymptotically Distribution-Free Tests for Umbrella Alternatives in a Randomized Block Design (확률화 블록 계획법에서 우산형 대립가설에 대한 점근 분포 무관 검정법의 연구)

  • 김동희;김현기;이주현
    • Communications for Statistical Applications and Methods / v.3 no.3 / pp.83-92 / 1996
  • We propose an asymptotically distribution-free test for umbrella alternatives in a randomized block design, establish the asymptotic normality of the proposed test statistic, and examine its asymptotic relative efficiency against parametric and nonparametric competitors. The statistic is constructed from the overall ranks, taken across all blocks, of observations from which estimated block effects have been removed, and its empirical power is compared in a small-sample Monte Carlo study. The results show that the proposed statistic is generally superior and robust for heavy-tailed distributions.
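
As a concrete illustration of the rank-based approach described above, the following Python sketch computes an umbrella-type aligned-rank statistic in a randomized block design and calibrates it by permuting treatments within blocks. The umbrella scores, the permutation calibration, and the use of block means to estimate block effects are illustrative assumptions, not the paper's exact statistic.

```python
import numpy as np
from scipy.stats import rankdata

def umbrella_rank_test(data, peak, n_perm=2000, seed=0):
    """Permutation test for an umbrella alternative in a randomized block design.

    data : (b, k) array of observations, b blocks x k treatments
    peak : 0-based index of the hypothesized umbrella peak
    """
    rng = np.random.default_rng(seed)
    b, k = data.shape
    # Umbrella scores: increase up to the peak, then decrease afterwards.
    scores = np.array([j if j <= peak else 2 * peak - j for j in range(k)], dtype=float)

    def statistic(x):
        aligned = x - x.mean(axis=1, keepdims=True)   # remove (estimated) block effects
        ranks = rankdata(aligned).reshape(b, k)        # overall ranks across all blocks
        return float(scores @ ranks.sum(axis=0))       # umbrella-weighted rank sums

    observed = statistic(data)
    perms = np.empty(n_perm)
    for i in range(n_perm):
        # Under H0, treatment labels are exchangeable within each block.
        shuffled = np.apply_along_axis(rng.permutation, 1, data)
        perms[i] = statistic(shuffled)
    p_value = (np.sum(perms >= observed) + 1) / (n_perm + 1)
    return observed, p_value
```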

Small Sample Asymptotic Distribution for the Sum of Product of Normal Variables with Application to FSK Communication (곱 정규확률변수의 합에 대한 소표본 점근분표와 FSK 통신에의 응용)

  • Na, Jong-Hwa;Kim, Jung-Mi
    • The Korean Journal of Applied Statistics / v.22 no.1 / pp.171-179 / 2009
  • In this paper, we study effective approximations to the distribution of a sum of products of normal variables. Based on saddlepoint approximations to quadratic forms, the suggested approximations are very accurate and easy to use. Applications to FSK (Frequency Shift Keying) communication are also considered.
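
For readers who want to see the saddlepoint recipe in action, here is a minimal sketch for the special case S = Σ XᵢYᵢ with Xᵢ, Yᵢ iid standard normal, whose product has moment generating function 1/√(1−t²). This is the generic saddlepoint density approximation under that assumption, not the paper's refined small-sample version, and the function name and interface are choices made here.

```python
import numpy as np
from scipy.optimize import brentq

def saddlepoint_density(x, n):
    """Saddlepoint density approximation for S = sum_{i=1}^n X_i*Y_i,
    with X_i, Y_i iid standard normal (MGF of each product: 1/sqrt(1 - t^2))."""
    K  = lambda t: -0.5 * n * np.log1p(-t * t)              # cumulant generating function of S
    K1 = lambda t: n * t / (1.0 - t * t)                    # K'(t)
    K2 = lambda t: n * (1.0 + t * t) / (1.0 - t * t) ** 2   # K''(t)
    # The saddlepoint equation K'(t_hat) = x has a unique root in (-1, 1).
    t_hat = brentq(lambda t: K1(t) - x, -1 + 1e-12, 1 - 1e-12)
    return np.exp(K(t_hat) - t_hat * x) / np.sqrt(2 * np.pi * K2(t_hat))
```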

Finding optimal portfolio based on genetic algorithm with generalized Pareto distribution (GPD 기반의 유전자 알고리즘을 이용한 포트폴리오 최적화)

  • Kim, Hyundon;Kim, Hyun Tae
    • Journal of the Korean Data and Information Science Society / v.26 no.6 / pp.1479-1494 / 2015
  • Since Markowitz's mean-variance framework for portfolio analysis, portfolio optimization has been an important problem in finance. Traditional approaches maximize the expected return of the portfolio while minimizing its variance, assuming that risky asset returns are normally distributed. The normality assumption, however, has been widely criticized because actual stock return distributions exhibit much heavier tails as well as asymmetry. In this paper we therefore employ a genetic algorithm to find the optimal portfolio under a Value-at-Risk (VaR) constraint, where the tails of the risky-asset return distributions are modeled with the generalized Pareto distribution (GPD), the standard distribution for exceedances in extreme value theory. An empirical study using Korean stock prices shows that the proposed method is efficient and performs better than alternative methods.
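
The tail-modeling step can be sketched as follows: fit a GPD to portfolio losses above a high threshold (peaks over threshold) and read VaR off the standard GPD quantile formula. This shows only the risk-measure piece under assumed threshold and confidence-level choices; the genetic-algorithm search over portfolio weights described in the paper is not reproduced, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_var(losses, threshold_quantile=0.95, alpha=0.99):
    """Value-at-Risk at level alpha from a GPD fitted to threshold excesses."""
    losses = np.asarray(losses, dtype=float)
    u = np.quantile(losses, threshold_quantile)      # high threshold
    excess = losses[losses > u] - u                  # exceedances over u
    xi, _, sigma = genpareto.fit(excess, floc=0)     # shape and scale (location fixed at 0)
    p_u = excess.size / losses.size                  # empirical exceedance probability
    # Standard peaks-over-threshold quantile formula (requires 1 - alpha < p_u);
    # for xi -> 0 the limiting form u + sigma*log(p_u/(1 - alpha)) should be used instead.
    return u + (sigma / xi) * (((1 - alpha) / p_u) ** (-xi) - 1.0)
```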

The Role of the Cauchy Probability Distribution in a Continuous Taboo Search (연속형 타부 탐색에서 코시 확률 분포의 역할)

  • Lee, Chang-Yong;Lee, Dong-Ju
    • Journal of KIISE:Software and Applications / v.37 no.8 / pp.591-598 / 2010
  • In this study, we propose a new method for generating candidate solutions based on the Cauchy probability distribution in order to complement the shortcomings of solutions generated from the normal distribution. The Cauchy distribution has no finite mean or variance and places considerably more probability in the tail region than the normal distribution. It can therefore generate candidate solutions with large changes in the variables, which has the advantage of searching a wider area of the variable space. To compare and analyze the performance of the proposed method against the conventional one, we carried out experiments on benchmark problems of real-valued functions. The results show that the proposed Cauchy-based method outperformed the conventional method on all benchmark problems, and its superiority was verified by statistical hypothesis tests.
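
The candidate-generation idea is easy to sketch: perturb the current solution with Cauchy noise instead of normal noise, so that occasional very large steps reach distant regions of the variable space. The function below is a minimal illustration only (tabu-list management and step-size control are omitted), and its name and arguments are assumptions.

```python
import numpy as np

def cauchy_candidates(current, scale, n_candidates, seed=None):
    """Generate candidate solutions around `current` with Cauchy-distributed steps.

    The heavy Cauchy tails occasionally produce large moves, widening the search
    relative to normally distributed perturbations of the same scale."""
    rng = np.random.default_rng(seed)
    current = np.asarray(current, dtype=float)
    steps = scale * rng.standard_cauchy(size=(n_candidates, current.size))
    return current + steps

# Example: 10 candidates around a 3-dimensional current solution.
# candidates = cauchy_candidates([0.5, -1.2, 3.0], scale=0.1, n_candidates=10)
```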

Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis (정확한 신뢰성 해석을 위한 아카이케 정보척도 기반 일반화파레토 분포의 임계점 추정)

  • Kang, Seunghoon;Lim, Woochul;Cho, Su-Gil;Park, Sanghyun;Lee, Minuk;Choi, Jong-Su;Hong, Sup;Lee, Tae Hee
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.2 / pp.163-168 / 2015
  • In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
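
To make the threshold-selection idea concrete, the sketch below scans a grid of candidate thresholds, fits a GPD to the excesses over each, and keeps the threshold whose fit has the smallest AIC. This is a simplified illustration of AIC-guided threshold choice, not the paper's exact criterion (which works with the overall samples), and comparing AIC values across fits with different numbers of exceedances should be interpreted with care.

```python
import numpy as np
from scipy.stats import genpareto

def aic_threshold_scan(samples, quantile_grid=None, min_excesses=30):
    """Return (aic, threshold, shape, scale) for the AIC-minimizing threshold."""
    x = np.asarray(samples, dtype=float)
    if quantile_grid is None:
        quantile_grid = np.linspace(0.80, 0.98, 10)   # candidate threshold quantiles
    best = None
    for q in quantile_grid:
        u = np.quantile(x, q)
        excess = x[x > u] - u
        if excess.size < min_excesses:                # too few points for a stable fit
            continue
        xi, _, sigma = genpareto.fit(excess, floc=0)
        loglik = genpareto.logpdf(excess, xi, loc=0, scale=sigma).sum()
        aic = 2 * 2 - 2 * loglik                      # two fitted parameters: xi, sigma
        if best is None or aic < best[0]:
            best = (aic, u, xi, sigma)
    return best
```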

Modeling Heavy-tailed Behavior of 802.11b Wireless LAN Traffic (무선 랜 802.11b 트래픽의 두꺼운 꼬리분포 모델링)

  • Yamkhin, Dashdorj;Won, You-Jip
    • Journal of Digital Contents Society / v.10 no.2 / pp.357-365 / 2009
  • To effectively exploit the underlying network bandwidth while maximizing user-perceivable QoS, it is mandatory to properly estimate the packet loss and queueing delay of the underlying network. This issue is even more pronounced in wireless environments, where bandwidth is a scarce resource. In this work, we develop a performance model for wireless networks. We collect packet traces from an actual wireless network environment and find that both the packet-count process and the bandwidth process exhibit long-range dependence. We extract the key performance parameters of the underlying traffic and develop an analytical model for the buffer-overflow probability and waiting time, obtaining the tail probability of the queueing system using fractional Brownian motion (FBM). We also derive the average queueing delay from the queue-length model. Our study, based on empirical data, shows that the performance model represents the physical characteristics of IEEE 802.11b network traffic well.
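
The FBM-based tail probability has a well-known large-deviations form (Norros-style): for mean input rate m, variance coefficient a, Hurst parameter H, and link capacity C, the overflow probability decays as a Weibull-like function of the buffer level. The sketch below implements that generic approximation under assumed parameter names; it is not necessarily the exact expression used in the paper.

```python
import numpy as np

def fbm_overflow_probability(x, m, a, H, C):
    """Approximate P(queue length > x) for a queue fed by fractional Brownian
    traffic A(t) = m*t + sqrt(a*m)*Z_H(t) and served at constant rate C.

    Uses the standard large-deviations approximation
        P(Q > x) ~ exp(-(C - m)^(2H) * x^(2 - 2H) / (2*a*m*kappa(H)^2)),
    where kappa(H) = H^H * (1 - H)^(1 - H)."""
    kappa = H ** H * (1.0 - H) ** (1.0 - H)
    exponent = (C - m) ** (2 * H) * x ** (2 - 2 * H) / (2.0 * a * m * kappa ** 2)
    return np.exp(-exponent)
```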

Frequency Analyses for Extreme Rainfall Data using the Burr XII Distribution (Burr XII 모형을 이용한 우리나라 극한 강우자료 빈도해석)

  • Seo, Jungho;Shin, Ju-Young;Jung, Younghun;Heo, Jun-Haeng
    • Proceedings of the Korea Water Resources Association Conference / 2018.05a / pp.335-335 / 2018
  • Due to recent abnormal climate phenomena, the frequency and intensity of extreme hydrologic events have been increasing in many regions of the world. Accordingly, the choice of an appropriate probability distribution model is crucial in frequency analyses of extreme rainfall events for the design of hydraulic structures. In hydrologic statistics, extreme value distributions such as the generalized extreme value (GEV), generalized logistic (GLO), and Gumbel (GUM) models have mainly been used for this purpose. Although the GEV and GUM distributions are known to fit Korean rainfall events relatively well, they have at most one shape parameter, which limits the statistical characteristics these models can represent. To analyze data that are not adequately reproduced by the conventional GEV or GUM distributions, distributions with two shape parameters have been studied. In this study, we therefore evaluated the applicability of the Burr XII distribution, which has two shape parameters, to extreme rainfall data in Korea. Like the gamma and exponential distributions, the Burr XII distribution is defined only for positive random variables, and like the Cauchy and Pareto distributions it exhibits a heavy tail, so it is known to be suitable for extreme events in which relatively large values occur frequently. To this end, we performed at-site and regional frequency analyses of Korean rainfall data using the Burr XII distribution and compared the results with those obtained from the GEV, GLO, and GUM distributions, which are known to fit Korean rainfall data relatively well.
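
As a minimal illustration of the at-site part of such an analysis, the sketch below fits a Burr XII distribution to annual maximum rainfall by maximum likelihood and reads off T-year return levels as quantiles. The regional frequency analysis and the comparison with GEV/GLO/GUM fits discussed in the paper are not shown, and the function name and the choice of `floc=0` are assumptions.

```python
import numpy as np
from scipy.stats import burr12

def burr12_return_levels(annual_maxima, return_periods=(10, 50, 100)):
    """Fit a Burr XII distribution and return T-year return levels
    (quantiles at non-exceedance probability 1 - 1/T)."""
    x = np.asarray(annual_maxima, dtype=float)
    c, d, loc, scale = burr12.fit(x, floc=0)          # two shape parameters c, d
    return {T: float(burr12.ppf(1.0 - 1.0 / T, c, d, loc=loc, scale=scale))
            for T in return_periods}
```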

Sample Size Determination for Comparing Tail Probabilities (극소 비율의 비교에 대한 표본수 결정)

  • Lee, Ji-An;Song, Hae-Hiang
    • The Korean Journal of Applied Statistics / v.20 no.1 / pp.183-194 / 2007
  • The problem of calculating sample sizes for comparing two independent binomial proportions is studied when one or both of the probabilities are smaller than 0.05. The use of Whittemore's (1981) corrected sample size formula for small response probabilities, which is derived from multiple logistic regression, yields much larger sample sizes than the asymptotic normal method, which is derived for comparing response probabilities in the normal range. Applied statisticians therefore need to be careful in determining sample sizes for small response probabilities to ensure the intended power during the planning stage of clinical trials. The results of this study show that using the textbook sample size formula in this setting can be risky.
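
For context, the classical asymptotic-normal sample size formula for comparing two independent proportions (two-sided test) is sketched below; this is the textbook formula the paper warns can understate the required sample size when both probabilities are below 0.05. Whittemore's (1981) logistic-regression-based correction is not reproduced here, and the function name is illustrative.

```python
from math import sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Asymptotic-normal sample size per group for testing H0: p1 = p2 (two-sided)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return (numerator / (p1 - p2)) ** 2   # round up to the next integer in practice
```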

Parametric nonparametric methods for estimating extreme value distribution (극단값 분포 추정을 위한 모수적 비모수적 방법)

  • Woo, Seunghyun;Kang, Kee-Hoon
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.531-536 / 2022
  • This paper compares the performance of a parametric method and a nonparametric method for estimating the tail of a heavy-tailed distribution. For the parametric method, the generalized extreme value distribution and the generalized Pareto distribution were used; for the nonparametric method, kernel density estimation was applied. To compare the two approaches, the block-maximum model and the threshold-excess model were fitted to public daily fine-dust data from each monitoring station in Seoul from 2014 to 2018, and the resulting estimates are presented together. In addition, the areas where high concentrations of fine dust are expected were predicted through the return level.
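
A minimal version of the comparison can be sketched as follows: estimate the upper-tail probability P(X > level) parametrically with a threshold-excess (GPD) model and nonparametrically with a Gaussian kernel density estimate. The block-maximum/GEV fit and the return-level prediction described in the paper are omitted, and the threshold choice and function interface are assumptions.

```python
import numpy as np
from scipy.stats import genpareto, gaussian_kde

def tail_prob_estimates(daily_series, level, threshold_quantile=0.95):
    """Parametric (GPD, threshold-excess) and nonparametric (KDE) estimates
    of P(X > level); `level` should lie above the chosen threshold."""
    x = np.asarray(daily_series, dtype=float)
    x = x[~np.isnan(x)]
    u = np.quantile(x, threshold_quantile)
    excess = x[x > u] - u
    xi, _, sigma = genpareto.fit(excess, floc=0)
    p_u = np.mean(x > u)                                   # exceedance probability of u
    gpd_tail = p_u * genpareto.sf(level - u, xi, loc=0, scale=sigma)
    kde_tail = gaussian_kde(x).integrate_box_1d(level, np.inf)
    return gpd_tail, kde_tail
```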

Spreadsheet Model Approach for Buffer-Sharing Line Production Systems with General Processing Times (일반 공정시간을 갖는 버퍼 공유 라인 생산시스템의 스프레드시트 모형 분석)

  • Seo, Dong-Won
    • Journal of the Korea Society for Simulation / v.28 no.2 / pp.119-129 / 2019
  • Although line production systems with finite buffers have been studied for several decades, explicit expressions for system performance measures such as waiting time (or response time) and blocking probability exist only for some special cases. Recently, a max-plus algebraic approach for buffer-sharing systems with constant processing times was introduced; it leads to analytic expressions for the (higher) moments and the tail probability of the stationary waiting time. In principle this approach can be applied to general processing times, but it does not yield a practical way of computing the performance measures. To this end, in this study we developed simulation models using the @RISK software and the expressions derived from max-plus algebra, and computed and compared the blocking probability and waiting time (or response time) under two blocking policies: communication blocking (BBS: Blocking Before Service) and production blocking (BAS: Blocking After Service). Moreover, an optimization problem that determines the minimum shared-buffer capacity satisfying a predetermined QoS (quality of service) is also considered.
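
The recursion behind such a spreadsheet model can be sketched in a few lines: for a saturated two-machine line with a finite intermediate buffer under production (BAS) blocking, departure times satisfy simple max-plus style recursions that are easy to simulate with general processing times (lognormal here, purely for illustration). The shared-buffer structure, the BBS policy, and the @RISK spreadsheet itself are not reproduced; the function below is an assumption-laden toy version.

```python
import numpy as np

def simulate_two_machine_line(n_jobs=100_000, buffer_size=3, seed=0):
    """Saturated two-machine flow line with `buffer_size` buffer spaces between
    the machines, production (BAS) blocking, and lognormal processing times.

    Returns (throughput, mean blocking time per job, mean buffer waiting time)."""
    rng = np.random.default_rng(seed)
    s1 = rng.lognormal(mean=0.0, sigma=0.5, size=n_jobs)   # machine 1 processing times
    s2 = rng.lognormal(mean=0.0, sigma=0.5, size=n_jobs)   # machine 2 processing times

    d1 = np.zeros(n_jobs)        # time job n releases machine 1
    d2 = np.zeros(n_jobs)        # time job n departs machine 2
    blocked = np.zeros(n_jobs)   # how long job n keeps machine 1 blocked

    for n in range(n_jobs):
        finish1 = (d1[n - 1] if n > 0 else 0.0) + s1[n]     # machine 1 finishes job n
        # BAS: job n may leave machine 1 only when job n - buffer_size - 1
        # (occupying the furthest downstream slot) has departed machine 2.
        gate = d2[n - buffer_size - 1] if n - buffer_size - 1 >= 0 else 0.0
        d1[n] = max(finish1, gate)
        blocked[n] = d1[n] - finish1
        d2[n] = max(d1[n], d2[n - 1] if n > 0 else 0.0) + s2[n]

    waiting = (d2 - s2) - d1     # time spent waiting between machine 1 and machine 2
    return n_jobs / d2[-1], blocked.mean(), waiting.mean()
```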