• Title/Summary/Keyword: probability distributions

Bayesian Nonstationary Probability Rainfall Estimation using the Grid Method (Grid Method 기법을 이용한 베이지안 비정상성 확률강수량 산정)

  • Kwak, Dohyun;Kim, Gwangseob
    • Journal of Korea Water Resources Association
    • /
    • v.48 no.1
    • /
    • pp.37-44
    • /
    • 2015
  • A Bayesian nonstationary probability rainfall estimation model using the Grid method is developed. A hierarchical Bayesian framework is constructed with prior and hyper-prior distributions on the parameters of the Gumbel distribution, which is selected for the rainfall extreme data. In this study, the Grid method is adopted instead of the Metropolis-Hastings algorithm for random number generation, since it has the advantage of providing a thorough sampling of the parameter space. This method is well suited to situations where the best-fit parameter values are not easily inferred a priori and where there is a high probability of false minima. The developed model was applied to estimate the target-year probability rainfall using hourly rainfall data from the Seoul station for 1973 to 2012. The results demonstrate that the target-year estimate under the nonstationary assumption is about 5~8% larger than the estimate under the stationary assumption.
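
As a rough illustration of the grid approach described above, the posterior of the Gumbel parameters can be evaluated on a dense grid instead of being sampled with Metropolis-Hastings. The sketch below uses synthetic data and flat priors, not the paper's hierarchical nonstationary setup:

```python
import numpy as np

# Hypothetical annual-maximum rainfall sample (mm); a stand-in for the
# Seoul hourly-rainfall extremes used in the paper.
np.random.seed(0)
data = 40 + 15 * np.random.gumbel(size=40)

def gumbel_loglik(mu, beta, x):
    """Log-likelihood of a Gumbel(mu, beta) sample, vectorized over grids."""
    z = (x[:, None, None] - mu) / beta
    return np.sum(-np.log(beta) - z - np.exp(-z), axis=0)

# Grid method: evaluate the (flat-prior) posterior on a dense grid of
# (mu, beta) values, giving a thorough sampling of the parameter space.
mu_grid = np.linspace(20, 80, 200)
beta_grid = np.linspace(5, 40, 200)
MU, BETA = np.meshgrid(mu_grid, beta_grid, indexing="ij")
logpost = gumbel_loglik(MU, BETA, data)
post = np.exp(logpost - logpost.max())
post /= post.sum()                       # normalize to a discrete posterior

# Posterior means, read straight off the grid.
mu_mean = np.sum(MU * post)
beta_mean = np.sum(BETA * post)
```

Because the whole grid is evaluated, false minima are visible directly in the posterior surface rather than being a trap for a sampler.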

Density estimation of summer extreme temperature over South Korea using mixtures of conditional autoregressive species sampling model (혼합 조건부 종추출모형을 이용한 여름철 한국지역 극한기온의 위치별 밀도함수 추정)

  • Jo, Seongil;Lee, Jaeyong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.5
    • /
    • pp.1155-1168
    • /
    • 2016
  • This paper considers a probability density estimation problem for climate values. In particular, we focus on estimating the probability densities of summer extreme temperatures over South Korea. It is known that the probability density of climate values at one location is similar to those at nearby locations, and that it does not follow well-known parametric distributions. To accommodate these properties, we use a mixture of conditional autoregressive species sampling models, a nonparametric Bayesian model with spatial dependency. We apply the model to a dataset of summer maximum and minimum temperatures over South Korea, obtained from the University of East Anglia.
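
The mixture of conditional autoregressive species sampling models is too involved for a short sketch, but the core task, nonparametric density estimation when no standard parametric family fits, can be illustrated with a simple kernel density estimate on synthetic bimodal temperatures (a stand-in, not the paper's model):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)

# Synthetic summer maximum temperatures (deg C) at one station: a
# bimodal sample, which no single standard parametric family fits well.
temps = np.concatenate([rng.normal(29, 1.5, 150), rng.normal(34, 1.0, 80)])

kde = gaussian_kde(temps)            # nonparametric density estimate
grid = np.linspace(24, 38, 300)
dens = kde(grid)                     # estimated density on a grid

# Sanity check: the estimated density integrates to roughly 1.
area = kde.integrate_box_1d(20, 45)
```

A spatial model like the paper's would additionally borrow strength across nearby stations instead of estimating each density independently.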

Expansion of Sensitivity Analysis for Statistical Moments and Probability Constraints to Non-Normal Variables (비정규 분포에 대한 통계적 모멘트와 확률 제한조건의 민감도 해석)

  • Huh, Jae-Sung;Kwak, Byung-Man
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.34 no.11
    • /
    • pp.1691-1696
    • /
    • 2010
  • Efforts have been made to reflect a system's uncertainties at the design step; robust optimization and reliability-based design optimization are two of the best-known methodologies. The statistical moments of a performance function and the constraints corresponding to probability conditions are involved in the formulation of these methodologies, so it is essential to calculate them effectively and accurately. Their sensitivities must also be determined when nonlinear programming is used during the optimization process. The sensitivity of statistical moments and probability constraints is expressed in integral form and has so far been limited to normal random variables; we aim to expand the sensitivity formulation to non-normal variables. No additional function evaluations are required when the statistical moments and failure or satisfaction probabilities have already been obtained at a design point. On the other hand, the accuracy of the sensitivity results can be worse than that of the moments, because the target function is expressed as a product of the performance function and explicit functions derived from probability density functions.
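
The paper derives analytic integral-form sensitivities; as a crude numerical stand-in, the sensitivity of the moments of a performance function to a design parameter of a non-normal (here lognormal) input can be approximated by finite differences with common random numbers. Everything below, including the performance function, is an illustrative assumption, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(200_000)   # common random numbers for both evaluations

def g(x):
    # Illustrative performance function (an assumption, not from the paper).
    return x**2 + 2.0 * x

def moments(mu):
    # Non-normal input: lognormal with underlying location mu, sigma 0.25.
    x = np.exp(mu + 0.25 * z)
    y = g(x)
    return y.mean(), y.var()

# Central finite-difference sensitivity of the first two statistical
# moments with respect to the design parameter mu, evaluated at mu = 0.
h = 1e-3
m1p, m2p = moments(0.0 + h)
m1m, m2m = moments(0.0 - h)
dmean = (m1p - m1m) / (2 * h)
dvar = (m2p - m2m) / (2 * h)
```

Reusing the same random numbers at both evaluation points keeps the Monte Carlo noise from swamping the small finite-difference step.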

Bivariate Frequency Analysis of Rainfall using Copula Model (Copula 모형을 이용한 이변량 강우빈도해석)

  • Joo, Kyung-Won;Shin, Ju-Young;Heo, Jun-Haeng
    • Journal of Korea Water Resources Association
    • /
    • v.45 no.8
    • /
    • pp.827-837
    • /
    • 2012
  • The estimation of the rainfall quantile is of great importance in designing hydrologic structures. Conventionally, the rainfall quantile is estimated by univariate frequency analysis with an appropriate probability distribution, which has the limitation that the rainfall duration is fixed. To overcome this limitation, bivariate frequency analysis using three copula models is performed in this study. Annual maximum rainfall events at 5 stations are used for frequency analysis, with rainfall depth and duration as the random variables. The Gumbel (GUM) and generalized logistic (GLO) distributions are applied to rainfall depth, and the generalized extreme value (GEV), GUM, and GLO distributions are applied to rainfall duration. The copula models used in this study are the Frank, Joe, and Gumbel-Hougaard models. The maximum pseudo-likelihood method is used to estimate the copula parameter, and the method of probability weighted moments is used to estimate the parameters of the marginal distributions. The rainfall quantiles from this procedure are compared across marginal distributions and copula models. As a result, when the marginal distribution is changed, the distribution of duration does not significantly affect the rainfall quantile, while there are slight differences depending on the distribution of rainfall depth: when the marginal distribution of rainfall depth is GUM, the quantile increases more steeply with the return period than with GLO. Comparing the rainfall quantiles from each copula model, the Joe and Gumbel-Hougaard models show similar trends, while the Frank model shows a rapidly increasing trend with increasing return period.
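
A minimal sketch of one piece of this pipeline: forming pseudo-observations from ranks and estimating the Gumbel-Hougaard copula parameter. For brevity the sketch inverts Kendall's tau (tau = 1 - 1/theta) instead of maximizing the pseudo-likelihood as the paper does, and the (depth, duration) data are synthetic:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)

# Hypothetical positively dependent (depth, duration) pairs, standing in
# for the annual-maximum rainfall events used in the paper.
z = rng.standard_normal((300, 2))
depth = np.exp(z[:, 0] + 0.6 * z[:, 1])
duration = np.exp(z[:, 1])

# Pseudo-observations: scaled ranks, as used in maximum pseudo-likelihood
# estimation (here we only need them for a rank-based estimate).
n = len(depth)
u = (np.argsort(np.argsort(depth)) + 1) / (n + 1)
v = (np.argsort(np.argsort(duration)) + 1) / (n + 1)

# Gumbel-Hougaard copula satisfies tau = 1 - 1/theta, so a simple
# moment-type estimate (an alternative to pseudo-likelihood) is:
tau, _ = kendalltau(u, v)
theta_hat = 1.0 / (1.0 - tau)
```

theta_hat = 1 corresponds to independence; stronger depth-duration dependence pushes it above 1.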

Comparison of Deterministic and Probabilistic Approaches through Cases of Exposure Assessment of Child Products (어린이용품 노출평가 연구에서의 결정론적 및 확률론적 방법론 사용실태 분석 및 고찰)

  • Jang, Bo Youn;Jeong, Da-In;Lee, Hunjoo
    • Journal of Environmental Health Sciences
    • /
    • v.43 no.3
    • /
    • pp.223-232
    • /
    • 2017
  • Objectives: In response to increased interest in the safety of children's products, a risk management system is being prepared through exposure assessment of hazardous chemicals. To estimate exposure levels, risk assessors use deterministic and probabilistic statistical approaches, together with commercialized Monte Carlo simulation-based tools (MCTools) that support efficient calculation of probability density functions. This study was conducted to analyze and discuss the usage patterns and problems associated with the two approaches, and with the MCTools used in the probabilistic cases, by reviewing research reports related to exposure assessment of children's products. Methods: We collected six research reports on exposure and risk assessment of children's products and summarized the exposure dose and concentration results estimated through deterministic and probabilistic approaches, together with the corresponding underlying distributions. We focused on the mechanisms of, and differences between, the MCTools used for selecting probabilistic distributions, to validate the simulation adequacy in detail. Results: The exposure dose and concentration estimates from the deterministic approaches were 0.19-3.98 times those from the probabilistic approach. In the probabilistic approach, the lognormal, Student's t, and Weibull distributions were the most frequently used underlying distributions for the input parameters. However, we could not examine the reasons for the selection of each distribution because no test statistics were reported. In addition, in some cases a discrete probability distribution model was estimated as the underlying distribution for continuous variables such as body weight. To find the cause of these abnormal simulations, we applied the two MCTools used across all reports and described their improper usage routes.
Conclusions: For transparent and realistic exposure assessment, it is necessary to 1) establish standardized guidelines for the proper use of the two statistical approaches, including notes for each MCTool, and 2) consider the development of a new software tool with proper configurations and features specialized for risk assessment. Such guidelines and software will make exposure assessment more user-friendly, consistent, and rapid in the future.
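
The contrast between the two approaches can be sketched with a toy dose equation; the model form and all input values below are hypothetical, not taken from the reviewed reports:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative exposure-dose model: dose = C * IR / BW
# (concentration x intake rate / body weight); hypothetical values.
C_point, IR_point, BW_point = 10.0, 0.05, 16.0   # deterministic point values
dose_det = C_point * IR_point / BW_point          # deterministic estimate

# Probabilistic approach: propagate lognormal input distributions (a
# common underlying-distribution choice noted in the paper) by Monte
# Carlo simulation, yielding a full distribution of the dose.
n = 100_000
C = rng.lognormal(np.log(10.0), 0.4, n)
IR = rng.lognormal(np.log(0.05), 0.3, n)
BW = rng.lognormal(np.log(16.0), 0.2, n)
dose_mc = C * IR / BW
dose_p95 = np.percentile(dose_mc, 95)             # upper-bound style summary
```

Note that body weight is modeled as a continuous distribution here; fitting a discrete distribution to such a variable is exactly the kind of misuse the review flags.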

Does Breast Cancer Drive the Building of Survival Probability Models among States? An Assessment of Goodness of Fit for Patient Data from SEER Registries

  • Khan, Hafiz;Saxena, Anshul;Perisetti, Abhilash;Rafiq, Aamrin;Gabbidon, Kemesha;Mende, Sarah;Lyuksyutova, Maria;Quesada, Kandi;Blakely, Summre;Torres, Tiffany;Afesse, Mahlet
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.17 no.12
    • /
    • pp.5287-5294
    • /
    • 2016
  • Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract the records of 2,000 breast cancer patients from the Surveillance, Epidemiology, and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by goodness-of-fit tests and to estimate the parameters of the various statistical probability distributions fitted to the survival data. Results: Summary statistics are reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness-of-fit test values were used on the survival data to identify the best-fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate differences in probability models of breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers in further investigations into social and environmental factors, in order to reduce the occurrence of and mortality due to breast cancer.
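
The model-selection step can be sketched with SciPy in place of EasyFit: fit several candidate distributions and rank them by the Kolmogorov-Smirnov statistic (smaller indicates a better fit). The survival times below are synthetic, and the candidate set is a reduced stand-in for the families EasyFit considers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical survival times (months); a stand-in for the SEER samples.
times = rng.gamma(shape=2.0, scale=24.0, size=500)

# Fit each candidate distribution by maximum likelihood (location fixed
# at 0 for these positive-support families), then compute the KS statistic.
candidates = {
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "lognorm": stats.lognorm,
}
results = {}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0)
    results[name] = stats.kstest(times, dist.cdf, args=params).statistic

best = min(results, key=results.get)   # smallest KS statistic wins
```

The same loop extends naturally to Burr- and Dagum-type families when their SciPy parameterizations are pinned down.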

Spatial Analysis of Typhoon Genesis Distribution based on IPCC AR5 RCP 8.5 Scenario (IPCC AR5 RCP 8.5 시나리오 기반 태풍발생 공간분석)

  • Lee, Sungsu;Kim, Ga Young
    • Spatial Information Research
    • /
    • v.22 no.4
    • /
    • pp.49-58
    • /
    • 2014
  • Large-scale natural disasters such as typhoons, heat waves, and snowstorms have recently increased because of climate change driven by global warming, which is most likely caused by greenhouse gases in the atmosphere. The increase in greenhouse gas concentrations has raised the earth's surface temperature, which has in turn raised the frequency of extreme weather in the northern hemisphere. In this paper, we present a spatial analysis of future typhoon genesis based on the IPCC AR5 RCP 8.5 scenario, which reflects the latest carbon dioxide concentration trend. For this analysis, we first calculated the GPI using RCP 8.5 monthly data for 1982~2100. By spatially comparing the monthly averaged GPIs with the typhoon genesis locations of 1982~2010, a probability density function (PDF) of typhoon genesis was estimated. We then defined 0.05GPI, 0.1GPI, and 0.15GPI from the GPI ranges corresponding to probability densities of 0.05, 0.1, and 0.15, respectively. Based on these PDF-related GPIs, spatial distributions of typhoon genesis probability were estimated for the periods 1982~2010, 2011~2040, 2041~2070, and 2071~2100. We also analyzed area density using historical genesis points and the spatial distributions. As a result, the area east of the Philippines, in the latitude band 10°~20°, shows a high typhoon genesis probability in the future. Using this result, we expect to estimate potential regions of typhoon genesis in the future and to develop a genesis model.
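
A rough sketch of the PDF-thresholding step, assuming (hypothetically) that GPI values at historical genesis points are available as a 1-D sample: estimate their density, then locate the GPI values at which the estimated density reaches 0.05, 0.1, and 0.15:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(11)

# Hypothetical GPI values at historical typhoon-genesis grid cells,
# standing in for the 1982~2010 record used in the paper.
gpi_at_genesis = rng.gamma(shape=3.0, scale=1.0, size=400)

# Estimate the PDF of GPI given genesis.
kde = gaussian_kde(gpi_at_genesis)
grid = np.linspace(0, gpi_at_genesis.max(), 1000)
dens = kde(grid)

def gpi_at_density(level):
    # Smallest grid GPI whose estimated density reaches the given level,
    # analogous to the paper's 0.05GPI / 0.1GPI / 0.15GPI definitions.
    idx = np.where(dens >= level)[0]
    return grid[idx[0]] if idx.size else None

thresholds = {lvl: gpi_at_density(lvl) for lvl in (0.05, 0.10, 0.15)}
```

Applying these thresholds to gridded GPI fields for each future period would reproduce the paper's spatial probability maps.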

The Development and Didactic Mediation of the Correlation Concept (상관개념의 발달과 교수학적 중재에 관한 소고)

  • Nam, Joo-Hyun;Lee, Young-Ha
    • Journal of Educational Research in Mathematics
    • /
    • v.15 no.3
    • /
    • pp.315-334
    • /
    • 2005
  • The purpose of this study is to determine when and how the correlation concept can be taught. We investigate, from a statistical perspective, the developmental timing and method of the concept, which were initially discussed in psychology by Piaget. We first reviewed the 1958 research by Inhelder and Piaget, which was the first to study the development of correlation and has become the foundation of the psychological perspective. According to them, the correlation concept requires the proportion and probability concepts ahead of its development, and they argued about the correlation coefficient from a formal, logical position. From a statistical perspective, however, the correlation concept is part of the distribution concept, so understanding of correlation grows from the comparison of conditional distributions to the conditional probability distribution, where the proportion and probability concepts are applied. As the literature review showed, 11-12-year-old students in the early formal operation stage can reason about correlation through the comparison of conditional distributions. In this study, we argue that didactic mediation for the correlation concept should be considered earlier, and that it should be approached from a distribution perspective.

The Extreme Value Analysis of Deepwater Design Wave Height and Wind Velocity off the Southwest Coast (남서 해역 심해 설계 파고 및 풍속의 극치분석)

  • Kim, Kamg-Min;Lee, Joong-Woo;Lee, Hun;Yang, Sang-Yong;Jeong, Young-Hwan
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • v.29 no.1
    • /
    • pp.245-251
    • /
    • 2005
  • When we design coastal and harbor facilities, the deepwater design wave and wind speed are important design parameters, and the analysis of this information is a vital step for disaster prevention. In this study, we performed an extreme value analysis using a series of deepwater significant wave data arranged in 16 directions and supplied by the KORDI real-time wave information system, together with wind data from the Wan-Do Weather Station for 1978-2003. The probability distributions considered in this analysis were the Weibull, Gumbel, Log-Pearson Type III, Normal, Lognormal, and Gamma distributions. The parameters of each distribution were estimated by three methods: the method of moments, maximum likelihood, and the method of probability weighted moments. Furthermore, probability distributions for the extreme data were selected using the Chi-square and Kolmogorov-Smirnov tests at the 5% significance level, i.e., the 95% confidence level. From this study we found that the Gumbel distribution is the most appropriate model for the deepwater design wave height off the southwest coast of Korea, although the best-fitting distribution for the selected site varied with each extreme data set.
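
The parameter-estimation and testing steps can be sketched for the Gumbel case: a method-of-moments fit, a maximum likelihood fit, a Kolmogorov-Smirnov test at the 5% significance level, and a return-level read-off. The wave heights below are synthetic stand-ins for the KORDI record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical annual-maximum significant wave heights (m), standing in
# for the deepwater wave record analyzed in the paper.
h = stats.gumbel_r.rvs(loc=6.0, scale=1.2, size=26, random_state=rng)

# Method of moments for the Gumbel distribution:
#   scale = std * sqrt(6) / pi,  loc = mean - 0.5772 * scale
scale_mom = h.std(ddof=1) * np.sqrt(6) / np.pi
loc_mom = h.mean() - 0.5772 * scale_mom

# Maximum likelihood fit for comparison, then a KS goodness-of-fit test.
loc_mle, scale_mle = stats.gumbel_r.fit(h)
ks = stats.kstest(h, stats.gumbel_r.cdf, args=(loc_mle, scale_mle))
accepted = ks.pvalue > 0.05            # 5% significance level

# 50-year design wave height from the fitted model: H_50 = F^-1(1 - 1/50).
h50 = stats.gumbel_r.ppf(1 - 1 / 50, loc=loc_mle, scale=scale_mle)
```

Repeating this for each candidate family and each directional data set mirrors the paper's selection procedure.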

An image sequence coding using motion-compensated transform technique based on the sub-band decomposition (움직임 보상 기법과 분할 대역 기법을 사용한 동영상 부호화 기법)

  • Paek, Hoon;Kim, Rin-Chul;Lee, Sang-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.1
    • /
    • pp.1-16
    • /
    • 1996
  • In this paper, by combining motion-compensated transform coding with the sub-band decomposition technique, we present a motion-compensated sub-band coding (MCSBC) technique for image sequence coding. Several problems related to MCSBC are discussed, such as a scheme for motion compensation in each sub-band and the efficient VWL coding of the DCT coefficients in each sub-band. For efficient coding, motion estimation and compensation are performed only on the LL sub-band, but the discrete cosine transform (DCT) is employed to encode all sub-bands in our approach. The transform coefficients in each sub-band are then scanned in a different manner depending on the energy distribution in the DCT domain, and coded using separate 2-D Huffman code tables optimized to the probability distribution of each sub-band. The performance of the proposed MCSBC technique is examined intensively by computer simulations on HDTV image sequences. The simulation results reveal that the proposed MCSBC technique outperforms other coding techniques, especially the well-known motion-compensated transform coding technique, by about 1.5 dB in terms of average peak signal-to-noise ratio.
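
The energy-adapted scanning idea can be sketched as follows: measure the average DCT-domain energy of a sub-band over many blocks, then scan coefficients in decreasing-energy order before entropy coding. The 8x8 blocks below are synthetic stand-ins, and the Huffman stage is omitted:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(4)

def toy_block():
    # Hypothetical 8x8 sub-band block; rows are weighted so that energy
    # concentrates in low frequencies, as in real image data.
    return rng.standard_normal((8, 8)) * np.linspace(8, 1, 8)[:, None]

# Average DCT-domain energy over many blocks; the scan order visits
# high-energy coefficient positions first (a stand-in for the paper's
# per-sub-band energy-adapted scan feeding separate Huffman tables).
energy = np.mean([np.abs(dctn(toy_block(), norm="ortho")) ** 2
                  for _ in range(200)], axis=0)
order = np.argsort(-energy, axis=None)
rows, cols = np.unravel_index(order, (8, 8))

coeffs = dctn(toy_block(), norm="ortho")
scanned = coeffs[rows, cols]        # coefficients in energy-scan order
```

Ordering by measured energy generalizes the fixed zigzag scan: each sub-band gets the scan that matches its own spectral statistics.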
