• Title/Summary/Keyword: correct estimation probability

An Improved Multi-stage Timing Offset Estimation Scheme for OFDM Systems in Multipath Fading Channel (다중경로 페이딩 환경에서 OFDM 시스템을 위한 개선된 다중단계 타이밍 옵셋 추정기법)

  • Park, Jong-In;Noh, Yoon-Kab;Yoon, Seok-Ho
    • The Journal of Korean Institute of Communications and Information Sciences, v.36 no.9C, pp.589-595, 2011
  • This paper proposes an improved multi-stage timing offset estimation scheme for orthogonal frequency division multiplexing (OFDM) systems in a multipath fading channel environment. The conventional multi-stage timing offset estimation scheme is very sensitive to random multipath components. By exploiting the sample standard deviation of the cross-correlation values, the proposed scheme achieves robustness against random multipath components. Simulation results demonstrate that the proposed scheme has a higher correct estimation probability and better mean square error (MSE) performance than the conventional scheme in multipath fading channels.
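A minimal sketch of the idea described above, assuming a known preamble and a simple threshold built from the sample standard deviation of the cross-correlation magnitudes; the function name, threshold rule, and toy signal are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def estimate_timing_offset(rx, preamble):
    """Pick a timing point from cross-correlation values, using their sample
    standard deviation to suppress weak multipath-induced peaks (sketch)."""
    L = len(preamble)
    # magnitude of the cross-correlation at each candidate offset
    corr = np.array([abs(np.vdot(preamble, rx[d:d + L]))
                     for d in range(len(rx) - L + 1)])
    # threshold derived from the sample standard deviation of the correlations
    threshold = corr.mean() + corr.std(ddof=1)
    candidates = np.flatnonzero(corr > threshold)
    # take the first offset that clears the threshold as the symbol start
    return int(candidates[0]) if candidates.size else int(corr.argmax())

# toy usage: a +/-1 preamble buried in noise after a 40-sample delay
rng = np.random.default_rng(0)
pre = rng.choice([-1.0, 1.0], size=64)
rx = np.concatenate([rng.normal(0, 0.5, 40), pre + rng.normal(0, 0.5, 64),
                     rng.normal(0, 0.5, 40)])
print(estimate_timing_offset(rx, pre))  # ideally close to 40
```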

Comparison of Benefit Estimation Models in Cost-Benefit Analysis: A Case of Chronic Hypertension Management Programs

  • Lim, Ji-Young;Kim, Mi-Ja;Park, Chang-Gi;Kim, Jung-Yun
    • Journal of Korean Academy of Nursing, v.41 no.6, pp.750-757, 2011
  • Purpose: Cost-benefit analysis is one of the most commonly used economic evaluation methods; it informs decision makers of the economic value of a program. However, the selection of a correct benefit estimation method remains critical for accurate cost-benefit analysis. This paper compares the benefit estimates produced by three different benefit estimation models. Methods: Data from community-based chronic hypertension management programs in a city in South Korea were used. Three benefit estimation methods were compared: a standard deterministic estimation model, a repeated-measures deterministic estimation model, and a transitional probability estimation model. Results: The estimated net benefits of the three methods were $1,273.01, $-3,749.42, and $-5,122.55, respectively. Conclusion: The transitional probability estimation model gave the most correct and realistic benefit estimate, as it traced the possible paths of changing status between time points and accounted for both positive and negative benefits.
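A toy illustration of the transitional-probability idea: each path between health states across two time points carries a probability and a monetary benefit, and the net benefit sums over paths. The states, probabilities, dollar values, participant count, and program cost below are made-up numbers, not the study's data.

```python
import numpy as np

# row = state before the program, column = state after the program
states = ["controlled", "uncontrolled"]
transition_prob = np.array([[0.85, 0.15],
                            [0.40, 0.60]])
# benefit in dollars attached to each transition path (negative = loss)
benefit = np.array([[200.0, -300.0],
                    [500.0, -100.0]])
# distribution of participants over states at baseline
baseline = np.array([0.3, 0.7])

# expected benefit per person sums probability * benefit over all paths
expected_benefit_per_person = (baseline[:, None] * transition_prob * benefit).sum()
net_benefit = 150 * expected_benefit_per_person - 20_000  # 150 participants, program cost
print(round(net_benefit, 2))
```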

Performance estimation for Software Reliability Growth Model that Use Plot of Failure Data (고장 데이터의 플롯을 이용한 소프트웨어 신뢰도 성장 모델의 성능평가)

  • Jung, Hye-Jung;Yang, Hae-Sool;Park, In-Soo
    • The KIPS Transactions:PartD, v.10D no.5, pp.829-836, 2003
  • Software reliability growth models have been studied in many forms, but estimating the correct parameters of such a model is not easy. In particular, the correct model for the failure data must be established before the parameters can be estimated accurately. To check this, we compute normal scores and draw the normal probability plot, and from the plot we estimate the distribution of the failure data. In this paper, we select a software reliability growth model by means of the normal probability plot, applying the model according to the distributional characteristics of the failure data. By inspecting the plot, we can determine the appropriate software reliability growth model and confirm its superiority in the performance evaluation.
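A small sketch of the plot-based check described above, using SciPy's `probplot` on made-up inter-failure times; the data and the use of the fit correlation as a normality indicator are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy import stats

# illustrative cumulative failure times (hours); the real study uses observed failure data
failure_times = np.array([12, 25, 31, 47, 58, 70, 86, 101, 119, 140], dtype=float)
inter_failure = np.diff(failure_times)

# probplot orders the data against normal scores and fits a reference line;
# a high r suggests the inter-failure times are consistent with a normal distribution
(osm, osr), (slope, intercept, r) = stats.probplot(inter_failure, dist="norm")
print(f"correlation with normal scores: r = {r:.3f}")
```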

Estimation of Geometric Mean for k Exponential Parameters Using a Probability Matching Prior

  • Kim, Hea-Jung;Kim, Dae Hwang
    • Communications for Statistical Applications and Methods, v.10 no.1, pp.1-9, 2003
  • In this article, we consider a Bayesian estimation method for the geometric mean of k exponential parameters. Using Tibshirani's orthogonal parameterization, we suggest an invariant prior distribution for the k parameters. This prior, a probability matching prior, is shown to be better than the uniform prior in the sense of correct frequentist coverage probability of the posterior quantiles. A weighted Monte Carlo method is then developed to approximate the posterior distribution of the mean. The method is easily implemented and provides the posterior mean and HPD (highest posterior density) interval for the geometric mean. A simulation study is given to illustrate the efficiency of the method.
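A rough sketch of Monte Carlo estimation of the geometric mean of k exponential means with an empirical HPD interval. The per-group posterior used here (a Jeffreys-type conjugate form) is a stand-in assumption, not the paper's probability matching prior, and the data and names are illustrative.

```python
import numpy as np

def geometric_mean_posterior(samples_by_group, n_draws=20_000, seed=1):
    """Monte Carlo draws of the geometric mean of k exponential means,
    using independent Jeffreys-type posteriors per group (a sketch)."""
    rng = np.random.default_rng(seed)
    draws = []
    for x in samples_by_group:
        n, s = len(x), np.sum(x)
        # posterior of the exponential mean under a 1/mu prior: mu = s / Gamma(n, 1)
        draws.append(s / rng.gamma(shape=n, size=n_draws))
    # geometric mean across the k groups for each posterior draw
    return np.exp(np.mean(np.log(np.vstack(draws)), axis=0))

def hpd_interval(draws, level=0.95):
    """Shortest interval containing `level` of the draws (empirical HPD)."""
    x = np.sort(draws)
    m = int(np.ceil(level * len(x)))
    widths = x[m - 1:] - x[:len(x) - m + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + m - 1]

rng = np.random.default_rng(0)
groups = [rng.exponential(scale=mu, size=30) for mu in (2.0, 3.0, 5.0)]
theta = geometric_mean_posterior(groups)
print(np.mean(theta), hpd_interval(theta))
```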

Identification of the associations between genes and quantitative traits using entropy-based kernel density estimation

  • Yee, Jaeyong;Park, Taesung;Park, Mira
    • Genomics & Informatics
    • /
    • v.20 no.2
    • /
    • pp.17.1-17.11
    • /
    • 2022
  • Genetic associations have been quantified using a number of statistical measures. Entropy-based mutual information may be one of the more direct ways of estimating the association, in the sense that it does not depend on the parametrization. For this purpose, both the entropy and conditional entropy of the phenotype distribution should be obtained. Quantitative traits, however, do not usually allow an exact evaluation of entropy. The estimation of entropy needs a probability density function, which can be approximated by kernel density estimation. We have investigated the proper sequence of procedures for combining the kernel density estimation and entropy estimation with a probability density function in order to calculate mutual information. Genotypes and their interactions were constructed to set the conditions for conditional entropy. Extensive simulation data created using three types of generating functions were analyzed using two different kernels as well as two types of multifactor dimensionality reduction and another probability density approximation method called m-spacing. The statistical power in terms of correct detection rates was compared. Using kernels was found to be most useful when the trait distributions were more complex than simple normal or gamma distributions. A full-scale genomic dataset was explored to identify associations using the 2-h oral glucose tolerance test results and γ-glutamyl transpeptidase levels as phenotypes. Clearly distinguishable single-nucleotide polymorphisms (SNPs) and interacting SNP pairs associated with these phenotypes were found and listed with empirical p-values.
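A compact sketch of entropy-based mutual information between a discrete genotype and a quantitative trait, with the differential entropies plugged in from a Gaussian kernel density estimate; the simulated genotypes/trait and the plug-in estimator are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_entropy(y):
    """Differential entropy of a 1-D sample, approximated by the
    sample average of -log(KDE density) (plug-in estimator)."""
    kde = gaussian_kde(y)
    return float(-np.mean(np.log(kde(y))))

def mutual_information(trait, genotype):
    """I(Y; G) = H(Y) - sum_g P(G = g) * H(Y | G = g), with KDE entropies."""
    h_y = kde_entropy(trait)
    h_y_given_g = 0.0
    for g in np.unique(genotype):
        mask = genotype == g
        h_y_given_g += mask.mean() * kde_entropy(trait[mask])
    return h_y - h_y_given_g

rng = np.random.default_rng(0)
genotype = rng.integers(0, 3, size=300)            # 0/1/2 minor-allele counts
trait = rng.normal(loc=0.5 * genotype, scale=1.0)  # trait mean shifted by genotype
print(mutual_information(trait, genotype))
```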

Comparison of sequential estimation in response-adaptive designs with and without covariate-adjustment

  • Park, Eunsik
    • Communications for Statistical Applications and Methods, v.23 no.4, pp.287-296, 2016
  • In a response-adaptive (RA) design without covariate adjustment, subjects on one side of the covariate population can be allocated to the inferior treatment when there is an interaction between the covariate and the treatment. An RA design gives a newly entered subject a better chance of receiving the superior treatment based on the cumulative information from previous subjects. A covariate-adjusted response-adaptive (CARA) design behaves like an RA design but additionally adjusts the allocation using each subject's covariate information. We compare the sequential estimation procedure with and without covariate adjustment to see how ignoring a significantly interactive covariate affects correct treatment allocation. Using logistic models, we present simulation results on the coverage probability of the treatment effect, correct allocation, and stopping time.
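A toy simulation of the failure mode described above: a response-adaptive rule that ignores the covariate, applied to data in which the better treatment depends on a binary covariate. The allocation rule, success probabilities, and sample size are made-up assumptions, not the paper's design.

```python
import numpy as np

def simulate_ra_trial(n=200, seed=0):
    """Response-adaptive allocation without covariate adjustment (sketch):
    each new subject is assigned to arm A with a probability that grows with
    A's estimated success rate so far."""
    rng = np.random.default_rng(seed)
    successes = np.array([1.0, 1.0])   # smoothed success counts for arms A, B
    trials = np.array([2.0, 2.0])
    correct = 0
    for _ in range(n):
        x = rng.binomial(1, 0.5)                   # binary covariate
        p_hat = successes / trials
        p_assign_a = p_hat[0] / p_hat.sum()        # adaptive allocation probability
        arm = 0 if rng.random() < p_assign_a else 1
        # true success probabilities interact with the covariate:
        # arm B is better when x = 0, arm A is better when x = 1
        p_true = [[0.3, 0.7], [0.7, 0.4]][x][arm]
        y = rng.binomial(1, p_true)
        successes[arm] += y
        trials[arm] += 1
        correct += int(arm == (0 if x == 1 else 1))  # was the covariate-wise better arm used?
    return correct / n

print(simulate_ra_trial())  # fraction of subjects allocated to their better arm
```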

Performance Analysis of Sequential Estimation Schemes for Fast Acquisition of Direct Sequence Spread Spectrum Systems (직접 수열 확산 대역 시스템의 고속 부호 획득을 위한 순차 추정 기법들의 성능 분석)

  • Lee, Seong Ro;Chae, Keunhong;Yoon, Seokho;Jeong, Min-A
    • The Journal of Korean Institute of Communications and Information Sciences, v.39A no.8, pp.467-473, 2014
  • In direct sequence spread spectrum systems, correct synchronization is very important, and several acquisition schemes based on sequential estimation have been developed for this purpose. Typical examples are the rapid acquisition sequential estimation (RASE) scheme, the seed accumulating sequential estimation (SASE) scheme, and the recursive soft sequential estimation (RSSE) scheme. However, an objective performance comparison and analysis of these estimation schemes has not been carried out so far. In this paper, we compare and analyze the performance of these sequential estimation schemes by simulating the correct chip probability and the mean acquisition time (MAT).
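A minimal sketch of the RASE idea: hard-decide the first n received chips, load them into the PN generator as its state estimate, and regenerate the local sequence. The 5-stage LFSR, loading order, and noise level are illustrative assumptions; the verification stage and the SASE/RSSE variants are omitted.

```python
import numpy as np

def lfsr_sequence(state, taps, length):
    """Generate a PN sequence from an initial register state (list of 0/1 bits)."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[-1])          # output taken from the last stage
        fb = 0
        for t in taps:
            fb ^= state[t]             # feedback = XOR of tapped stages
        state = [fb] + state[:-1]      # shift right, feedback enters stage 0
    return np.array(out)

def rase_acquire(rx_chips, taps, n):
    """RASE sketch: hard-decide the first n chips and use them as the generator state."""
    seed = [int(c > 0) for c in rx_chips[:n]][::-1]   # assumed loading order
    return lfsr_sequence(seed, taps, len(rx_chips))

# toy run: 5-stage m-sequence (taps at stages 2 and 4), chips mapped to +/-1 plus noise
rng = np.random.default_rng(0)
pn = lfsr_sequence([1, 0, 0, 1, 1], taps=[2, 4], length=31)
rx = (2.0 * pn - 1.0) + rng.normal(0, 0.3, size=31)
local = rase_acquire(rx, taps=[2, 4], n=5)
print("correct chip fraction:", np.mean(local == pn))
```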

Comparison of confidence intervals for testing probabilities of a system (시스템의 확률 값 시험을 위한 신뢰구간 비교 분석)

  • Hwang, Ik-Soon
    • The Journal of the Korea institute of electronic communication sciences, v.5 no.5, pp.435-443, 2010
  • When testing systems that incorporate probabilistic behavior, test inputs must be applied a number of times in order to give a test verdict. Interval estimation can be used to assert the correctness of probabilities, and the choice of confidence interval is one of the important issues for the quality of testing. The Wald interval has been widely accepted for interval estimation. In this paper, we compare the Wald interval and the Agresti-Coull interval for various sample sizes. The comparison is based on the test pass probability of correct implementations and the test fail probability of incorrect implementations when these confidence intervals are used for probability testing. We consider two-sided confidence intervals to check whether the probability is close to a given value, and one-sided confidence intervals to check whether the probability is not less than a given value. For testing probabilities with two-sided confidence intervals, we recommend the Agresti-Coull interval. For one-sided confidence intervals, the Agresti-Coull interval is recommended when the sample size is large, while either of the two intervals can be used for small sample sizes.
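For reference, the two intervals compared above have standard closed forms; a short two-sided, 95%-level sketch is given below with made-up pass counts (the specific counts are not from the paper).

```python
import math

def wald_interval(successes, n, z=1.96):
    """Two-sided Wald interval for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def agresti_coull_interval(successes, n, z=1.96):
    """Two-sided Agresti-Coull interval: add z^2/2 pseudo-successes and
    z^2 pseudo-trials, then apply the Wald formula to the adjusted estimate."""
    n_tilde = n + z * z
    p_tilde = (successes + z * z / 2) / n_tilde
    half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    return max(0.0, p_tilde - half), min(1.0, p_tilde + half)

# e.g. 42 "passes" out of 50 probabilistic test runs
print(wald_interval(42, 50))
print(agresti_coull_interval(42, 50))
```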

MCE Training Algorithm for a Speech Recognizer Detecting Mispronunciation of a Foreign Language (외국어 발음오류 검출 음성인식기를 위한 MCE 학습 알고리즘)

  • Bae, Min-Young;Chung, Yong-Joo;Kwon, Chul-Hong
    • Speech Sciences, v.11 no.4, pp.43-52, 2004
  • Model parameters in HMM-based speech recognition systems are normally estimated using maximum likelihood estimation (MLE). The MLE method is based mainly on the principle of statistical data fitting, in the sense of increasing the HMM likelihood. The optimality of this training criterion is conditioned on the availability of an infinite amount of training data and the correct choice of model; in practice, neither condition is satisfied. In this paper, we propose a minimum classification error (MCE) training algorithm to improve the performance of a speech recognizer that detects mispronunciation of a foreign language. During conventional MLE training, the model parameters are adjusted to increase the likelihood of the word strings corresponding to the training utterances, without taking into account the probability of other possible word strings. In contrast to MLE, the MCE training scheme takes possible competing word hypotheses into account and tries to reduce the probability of incorrect hypotheses. The discriminative training method using MCE shows better recognition results than the MLE method.
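A sketch of the standard MCE ingredients mentioned above: a misclassification measure that contrasts the correct hypothesis score with a soft maximum over competing hypotheses, smoothed by a sigmoid. The score values and the exact form of the measure relative to the paper's recognizer are illustrative assumptions.

```python
import numpy as np

def mce_loss(scores, correct_idx, eta=1.0, gamma=1.0):
    """MCE-style loss for one utterance: misclassification measure d plus a
    sigmoid that turns d into a smooth, differentiable error count."""
    g_correct = scores[correct_idx]
    competitors = np.delete(scores, correct_idx)
    # soft maximum of the competing (log-domain) hypothesis scores
    g_comp = np.log(np.mean(np.exp(eta * competitors))) / eta
    d = -g_correct + g_comp            # d > 0 roughly means "misclassified"
    return 1.0 / (1.0 + np.exp(-gamma * d))

# toy example: log-likelihood scores for 3 word hypotheses, hypothesis 0 is correct
print(mce_loss(np.array([-10.0, -12.5, -11.0]), correct_idx=0))
```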


Power Estimation by Using Testability (테스트 용이도를 이용한 전력소모 예측)

  • Lee, Jae-Hun;Min, Hyeong-Bok
    • The Transactions of the Korea Information Processing Society, v.6 no.3, pp.766-772, 1999
  • With the increase of portable systems and high-density ICs, the power consumption of VLSI circuits has become a very important factor in the design process, and power estimation is required to predict it. A simple and accurate way to estimate power is circuit simulation, but it is very time consuming and inefficient. Probabilistic methods have been proposed to overcome this problem. Transition density computed from signal probabilities is an efficient method for estimating power consumption using BDDs and the Boolean difference, but building the BDD and computing the complex Boolean differences is difficult. In this paper, we propose Propowest. Propowest builds a digraph of the circuit and computes the transition density easily and quickly using a modified COP algorithm, providing an efficient way to perform power estimation.
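A toy illustration of COP-style signal-probability propagation and the switching-activity estimate it enables; the tiny circuit and the spatial/temporal independence assumptions are illustrative, and this is not the Propowest algorithm itself.

```python
# Propagate P(signal = 1) through a small combinational circuit (COP-style),
# then estimate switching activity as 2*p*(1-p) per node under independence.
def and_p(a, b):  return a * b
def or_p(a, b):   return a + b - a * b
def not_p(a):     return 1.0 - a

# primary inputs assumed to be 1 with probability 0.5
pa, pb, pc = 0.5, 0.5, 0.5
p_n1 = and_p(pa, pb)        # n1 = a AND b
p_n2 = or_p(p_n1, pc)       # n2 = n1 OR c
p_out = not_p(p_n2)         # out = NOT n2

for name, p in [("n1", p_n1), ("n2", p_n2), ("out", p_out)]:
    activity = 2 * p * (1 - p)   # expected toggles per cycle under independence
    print(f"{name}: P(1) = {p:.3f}, switching activity ~ {activity:.3f}")
```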
