• Title/Summary/Keyword: Probability distributions

Search Result 741

A Study on Noninformative Priors of Intraclass Correlation Coefficients in Familial Data

  • Jin, Bong-Soo;Kim, Byung-Hwee
    • Communications for Statistical Applications and Methods
    • /
    • v.12 no.2
    • /
    • pp.395-411
    • /
    • 2005
  • In this paper, we develop the Jeffreys prior, reference prior, and probability matching priors for the difference of intraclass correlation coefficients in familial data. We prove a sufficient condition for propriety of the posterior distributions. Using marginal posterior distributions under those noninformative priors, we compare posterior quantiles and frequentist coverage probabilities.

An Investigation on the Effect of Utility Variance on Choice Probability without Assumptions on the Specific Forms of Probability Distributions (특정한 확률분포를 가정하지 않는 경우에 효용의 분산이 제품선택확률에 미치는 영향에 대한 연구)

  • Won, Jee-Sung
    • Korean Management Science Review
    • /
    • v.28 no.1
    • /
    • pp.159-167
    • /
    • 2011
  • The theory of random utility maximization (RUM) defines the probability of an alternative being chosen as the probability of its utility being perceived as higher than those of all the other competing alternatives in the choice set (Marschak 1960). According to this theory, consumers perceive the utility of an alternative not as a constant but as a probability distribution. Over the last two decades, there has been an increasing number of studies on the effect of utility variance on choice probability. The common result of the previous studies is that as the utility variance increases, the effect of the mean value of the utility (the deterministic component of the utility) on choice probability is reduced. This study provides a theoretical investigation of the effect of utility variance on choice probability without any assumptions on the specific forms of the probability distributions. It suggests that, without such distributional assumptions, firms cannot apply the marketing strategy of maximizing choice probability (or market share) directly, but can only adopt the strategy of maximizing the minimum or maximum value of the expected choice probability. This study applies the Chebyshev inequality, shows how changes in utility variance affect the maximum or minimum of choice probabilities, and provides managerial implications.
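The distribution-free bound the abstract alludes to can be sketched with Cantelli's (one-sided Chebyshev) inequality; this is an illustrative bound on the choice probability given only the mean and variance of the utility difference, not the paper's actual derivation:

```python
def choice_prob_bounds(mean_diff, var_diff):
    """Distribution-free bounds on P(U1 > U2) via Cantelli's
    (one-sided Chebyshev) inequality, given only the mean m and
    variance v of the utility difference D = U1 - U2."""
    m, v = float(mean_diff), float(var_diff)
    if m > 0:
        # P(D <= 0) <= v / (v + m^2), so the choice probability
        # is at least m^2 / (v + m^2); the upper bound is trivial.
        return m * m / (v + m * m), 1.0
    if m < 0:
        # Only the trivial lower bound; P(D >= 0) <= v / (v + m^2).
        return 0.0, v / (v + m * m)
    return 0.0, 1.0  # m == 0: both bounds are vacuous
```

With the mean difference fixed, a larger utility variance shrinks the guaranteed lower bound on the choice probability toward zero, matching the abstract's conclusion that only the extremes of the expected choice probability can be managed.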

Direct Divergence Approximation between Probability Distributions and Its Applications in Machine Learning

  • Sugiyama, Masashi;Liu, Song;du Plessis, Marthinus Christoffel;Yamanaka, Masao;Yamada, Makoto;Suzuki, Taiji;Kanamori, Takafumi
    • Journal of Computing Science and Engineering
    • /
    • v.7 no.2
    • /
    • pp.99-111
    • /
    • 2013
  • Approximating a divergence between two probability distributions from their samples is a fundamental challenge in statistics, information theory, and machine learning. A divergence approximator can be used for various purposes, such as two-sample homogeneity testing, change-point detection, and class-balance estimation. Furthermore, an approximator of a divergence between the joint distribution and the product of marginals can be used for independence testing, which has a wide range of applications, including feature selection and extraction, clustering, object matching, independent component analysis, and causal direction estimation. In this paper, we review recent advances in divergence approximation. Our emphasis is that directly approximating the divergence without estimating probability distributions is more sensible than a naive two-step approach of first estimating probability distributions and then approximating the divergence. Furthermore, despite the overwhelming popularity of the Kullback-Leibler divergence as a divergence measure, we argue that alternatives such as the Pearson divergence, the relative Pearson divergence, and the $L^2$-distance are more useful in practice because of their computationally efficient approximability, high numerical stability, and superior robustness against outliers.
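For intuition on the divergences being compared, here is a minimal sketch on fully known discrete distributions; note that the paper's point concerns estimating these quantities directly from samples, which this toy computation does not do:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence sum_i p_i * log(p_i / q_i)
    for two discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pearson_divergence(p, q):
    """Pearson chi-squared divergence sum_i (p_i - q_i)^2 / q_i.
    Unlike KL, it needs no logarithms, which is one reason its
    sample-based approximators tend to be numerically stable."""
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))
```

Both divergences are zero exactly when the distributions coincide, and both are asymmetric in their arguments.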

Simulation Input Modeling : Sample Size Determination for Parameter Estimation of Probability Distributions (시뮬레이션 입력 모형화 : 확률분포 모수 추정을 위한 표본크기 결정)

  • Park Sung-Min
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.31 no.1
    • /
    • pp.15-24
    • /
    • 2006
  • In simulation input modeling, it is important to identify a probability distribution to represent the input process of interest. In this paper, an appropriate sample size is determined for parameter estimation associated with some typical probability distributions frequently encountered in simulation input modeling. For this purpose, a statistical measure is proposed to evaluate the effect of sample size on both the precision and the accuracy of the parameter estimation: the square-rooted mean square error to parameter ratio. Based on this evaluation measure, the sample size effect can be analyzed dimensionlessly with respect to the parameter's unit and scaled regardless of the parameter's magnitude. In the Monte Carlo simulation experiments, three continuous and one discrete probability distributions are investigated: 1) exponential; 2) gamma; 3) normal; and 4) Poisson. The tested parameter magnitudes are chosen to represent distinct levels of skewness. Results show that: 1) the evaluation measure improves drastically until the sample size approaches about 200; 2) up to a sample size of about 400, the improvement continues but becomes less effective; and 3) plots of the evaluation measure show a similar plateau pattern beyond a sample size of 400. A case study with real datasets is presented to verify the experimental results.
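The evaluation measure can be reproduced in miniature. A Monte Carlo sketch for the exponential case, assuming the estimated parameter is the distribution mean (the paper's exact estimators and experimental designs may differ):

```python
import math
import random

def rmse_to_parameter_ratio(theta, n, reps=2000, seed=1):
    """Monte Carlo estimate of sqrt(MSE) / theta for the sample-mean
    estimator of an exponential distribution with true mean theta.
    The ratio is dimensionless, so it can be compared across
    parameter units and magnitudes, as the abstract argues."""
    rng = random.Random(seed)
    sq_err = 0.0
    for _ in range(reps):
        sample = [rng.expovariate(1.0 / theta) for _ in range(n)]
        estimate = sum(sample) / n
        sq_err += (estimate - theta) ** 2
    return math.sqrt(sq_err / reps) / theta
```

For this particular estimator the ratio is 1/sqrt(n), so most of the improvement happens within the first few hundred observations, consistent with the plateau pattern the paper reports.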

Reliability Estimation of Buried Gas Pipelines in terms of Various Types of Random Variable Distribution

  • Lee Ouk Sub;Kim Dong Hyeok
    • Journal of Mechanical Science and Technology
    • /
    • v.19 no.6
    • /
    • pp.1280-1289
    • /
    • 2005
  • This paper presents the effects of corrosion environments in failure pressure models for buried pipelines on failure prediction using a failure probability. The FORM (first-order reliability method) is used to estimate the failure probability of buried pipelines with corrosion defects. The effects of varying the distribution types of random variables, such as normal, lognormal and Weibull distributions, on the failure probability of buried pipelines are systematically investigated. It is found that the failure probability for the MB31G model is larger than that for the B31G model, and that the failure probability is largest for the Weibull distribution and smallest for the normal distribution. The effect of data scattering in corrosion environments on failure probability is also investigated, and it is recognized that the scattering of the wall thickness and yield strength of the pipeline affects the failure probability significantly. A normalized margin is defined and estimated, and is then used to predict the failure probability via fitted relationships between failure probability and normalized margin.
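FORM itself linearizes the limit-state function at the design point; as a simpler stand-in, a crude Monte Carlo sketch shows how the assumed distribution type of a random variable changes the estimated failure probability (illustrative only, not the paper's B31G/MB31G pressure models):

```python
import math
import random

def failure_probability(mean_r, cov_r, load, dist="normal", n=50000, seed=7):
    """Crude Monte Carlo estimate of P(resistance < load) when the
    resistance is modeled as normal or as lognormal with the same
    mean and coefficient of variation (cov_r = sd / mean)."""
    rng = random.Random(seed)
    sd = cov_r * mean_r
    failures = 0
    for _ in range(n):
        if dist == "normal":
            r = rng.gauss(mean_r, sd)
        else:  # lognormal with matched mean and standard deviation
            s2 = math.log(1.0 + cov_r ** 2)
            mu = math.log(mean_r) - 0.5 * s2
            r = rng.lognormvariate(mu, math.sqrt(s2))
        failures += r < load
    return failures / n
```

Swapping the distribution family while keeping the first two moments fixed shifts the tail mass, which is exactly why the paper's failure probabilities differ across the normal, lognormal, and Weibull assumptions.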

The Determination of Probability Distributions of Annual, Seasonal and Monthly Precipitation in Korea (우리나라의 연 강수량, 계절 강수량 및 월 강수량의 확률분포형 결정)

  • Kim, Dong-Yeob;Lee, Sang-Ho;Hong, Young-Joo;Lee, Eun-Jai;Im, Sang-Jun
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.12 no.2
    • /
    • pp.83-94
    • /
    • 2010
  • The objective of this study was to determine the best probability distributions of annual, seasonal and monthly precipitation in Korea. Data observed at 32 stations in Korea were analyzed using the L-moment ratio diagram and the average weighted distance (AWD) to identify the best probability distribution for each precipitation series. The probability distribution was best represented by the 3-parameter Weibull distribution (W3) for annual precipitation, the 3-parameter lognormal distribution (LN3) for the spring and autumn seasons, and the generalized extreme value distribution (GEV) for the summer and winter seasons. The best probability distribution models for monthly precipitation were LN3 for January, W3 for February and July, the 2-parameter Weibull distribution (W2) for March, the generalized Pareto distribution (GPA) for April, September, October and November, GEV for May and June, and log-Pearson type III (LP3) for August and December. However, in the goodness-of-fit test for the best-fit probability distributions, GPA for April, September, October and November, and LN3 for January showed considerably high rejection rates due to computational errors in the estimation of the probability distribution parameters and relatively higher AWD values. Meanwhile, analyses using data from 55 stations, including 23 additional stations, indicated insignificant differences from those using the original data. Further studies using more long-term data are needed to identify more optimal probability distributions for each precipitation series.
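The L-moment ratio diagram used for model selection plots sample L-skewness against sample L-kurtosis. A sketch of how those sample quantities are computed from order statistics, using the standard Hosking probability-weighted-moment formulas (not code from the study):

```python
def sample_l_moments(data):
    """Return (l1, l2, t3, t4): the first two sample L-moments and the
    L-skewness and L-kurtosis ratios used in L-moment ratio diagrams."""
    x = sorted(data)
    n = len(x)
    b = []
    for r in range(4):
        total = 0.0
        for i, xi in enumerate(x):
            w = 1.0
            for k in range(r):
                w *= (i - k) / (n - 1 - k)  # probability-weighted moment weight
            total += w * xi
        b.append(total / n)
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2
```

A symmetric sample has t3 near zero. Each candidate family (W3, LN3, GEV, GPA, ...) traces a curve in the (t3, t4) plane, and the curve closest to the station's sample point, weighted across stations by AWD, identifies the best-fit distribution.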

Combining Independent Permutation p Values Associated with Mann-Whitney Test Data

  • Um, Yonghwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.7
    • /
    • pp.99-104
    • /
    • 2018
  • In this paper, we compare Fisher's continuous method with an exact discrete analog of Fisher's continuous method from permutation tests for combining p values. The discrete analog of Fisher's continuous method is known to be adequate for combining independent p values from discrete probability distributions. Permutation tests are also widely used as alternatives to conventional parametric tests, since these tests are distribution-free and yield discrete probability distributions and exact p values. In this paper, we obtain permutation p values from discrete probability distributions using Mann-Whitney test data sets (real data and hypothetical data) and combine the p values by the exact discrete analog of Fisher's continuous method.
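Fisher's continuous method, the baseline being compared, combines k independent p values through X = -2 Σ ln p_i, which follows a chi-squared distribution with 2k degrees of freedom under the null. Since the degrees of freedom are even, the tail probability has a closed form:

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-squared with 2k
    degrees of freedom under H0.  For even degrees of freedom the
    survival function reduces to a finite sum, so no
    special-function library is needed."""
    k = len(p_values)
    x = -sum(math.log(p) for p in p_values)  # equals X / 2
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= x / j
        total += term
    return math.exp(-x) * total  # P(chi2_{2k} > X)
```

The discrete analog studied in the paper replaces this continuous chi-squared reference with the exact null distribution of the combined statistic over the permutation support.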

Precise Vehicle Localization Using Gaussian Mixture Map Based on Road Marking

  • Kim, Kyu-Won;Jee, Gyu-In
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.9 no.1
    • /
    • pp.23-31
    • /
    • 2020
  • It is essential to estimate vehicle localization for safe autonomous driving. In particular, since LIDAR provides precise scan data, many studies have been carried out to estimate vehicle localization using LIDAR and a pre-generated map. Road markings are always present on the road because they provide driving information; therefore, they are often used as map information. In this paper, we propose generating a Gaussian mixture map based on road-marking information and a localization method using this map. Generally, a probability distribution map stores a single Gaussian distribution for each grid cell. However, a single-resolution probability distribution map cannot express complex shapes when the grid resolution is coarse, while a fine grid resolution makes the map larger and the processing time longer. Therefore, it is difficult to represent road markings this way. On the other hand, a Gaussian mixture distribution can effectively express road markings with several probability distributions. In this paper, we generate a Gaussian mixture map and perform vehicle localization using it. Localization performance is analyzed through experimental results.
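The modeling point, that one Gaussian per cell blurs a bimodal quantity such as road-marking intensity while a small mixture preserves it, can be sketched in one dimension (the weights and variances below are hypothetical, not the paper's map parameters):

```python
import math

def gauss_pdf(x, mean, var):
    """Density of a 1-D Gaussian with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mixture_pdf(x, components):
    """Density of a Gaussian mixture; components is a list of
    (weight, mean, variance) triples with weights summing to 1."""
    return sum(w * gauss_pdf(x, m, v) for w, m, v in components)

# Bimodal cell: intensity clusters near 0 (asphalt) and 1 (paint marking).
bimodal = [(0.5, 0.0, 0.01), (0.5, 1.0, 0.01)]
# Single Gaussian with the mixture's overall mean (0.5) and variance (0.26).
single = [(1.0, 0.5, 0.26)]
```

The mixture peaks at the two observed intensities, whereas the moment-matched single Gaussian peaks between them, at a value that was never observed.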

Sensitivity analysis of probabilistic seismic behaviour of wood frame buildings

  • Gu, Jianzhong
    • Earthquakes and Structures
    • /
    • v.11 no.1
    • /
    • pp.109-127
    • /
    • 2016
  • This paper examines the contribution of three sources of uncertainties to probabilistic seismic behaviour of wood frame buildings, including ground motions, intensity and seismic mass. This sensitivity analysis is performed using three methods, including the traditional method based on the conditional distributions of ground motions at given intensity measures, a method using the summation of conditional distributions at given ground motion records, and the Monte Carlo simulation. FEMA P-695 ground motions and its scaling methods are used in the analysis. Two archetype buildings are used in the sensitivity analysis, including a two-storey building and a four-storey building. The results of these analyses indicate that using data-fitting techniques to obtain probability distributions may cause some errors. Linear interpolation combined with data-fitting technique may be employed to improve the accuracy of the calculated exceeding probability. The procedures can be used to quantify the risk of wood frame buildings in seismic events and to calibrate seismic design provisions towards design code improvement.

The Study of Flood Frequency Estimates Using Cauchy Variable Kernel

  • Moon, Young-Il;Cha, Young-Il;Sharma, Ashish
    • Water Engineering Research
    • /
    • v.2 no.1
    • /
    • pp.1-10
    • /
    • 2001
  • Frequency analyses for precipitation data in Korea were performed using daily maximum series, monthly maximum series, and annual series. For the nonparametric frequency analyses, variable kernel estimators were used. Nonparametric methods do not require assumptions about the underlying populations from which the data are obtained; therefore, they are better suited for multimodal distributions, with the advantage of not requiring a distributional assumption. In order to compare their performance with parametric distributions, we considered several probability density functions: the Gamma, Gumbel, Log-normal, Log-Pearson type III, Exponential, Generalized logistic, Generalized Pareto, and Wakeby distributions. The variable kernel estimates are comparable and fall in the middle of the range of the parametric estimates. The variable kernel estimates show a very small probability in extrapolation beyond the largest observed value in the sample; however, the log-variable kernel estimates remedy this defect using log-transformed data.
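A variable kernel estimator adapts the bandwidth to each data point; a minimal fixed-bandwidth sketch with the Cauchy kernel K(u) = 1/(π(1 + u²)) shows the basic construction (the study's estimator additionally varies the bandwidth per observation):

```python
import math

def cauchy_kde(x, data, h):
    """Kernel density estimate at x with a Cauchy kernel and fixed
    bandwidth h: (1 / (n * h)) * sum K((x - xi) / h).  The Cauchy
    kernel's heavy tails keep the estimated density positive well
    beyond the largest observation."""
    n = len(data)
    return sum(
        1.0 / (math.pi * h * (1.0 + ((x - xi) / h) ** 2)) for xi in data
    ) / n
```

Applying the same estimator to log-transformed data and back-transforming gives the log-variable kernel estimates the paper reports as remedying the thin extrapolation tail.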
