• Title/Summary/Keyword: Likelihood measure

Automatic Speech Database Verification Method Based on Confidence Measure

  • Kang Jeomja;Jung Hoyoung;Kim Sanghun
    • MALSORI
    • /
    • no.51
    • /
    • pp.71-84
    • /
    • 2004
  • In this paper, we propose an automatic speech database verification method (automatic verification) based on a confidence measure for large speech databases. The method verifies the consistency between a given transcription and the speech using the confidence measure. The automatic verification process consists of two stages: a word-level likelihood computation stage and a multi-level likelihood ratio computation stage. In the first stage, we calculate the word-level likelihood using the Viterbi decoding algorithm and produce segment information. In the second stage, we calculate word-level and phone-level likelihood ratios based on a confidence measure with an anti-phone model. With automatic verification, we achieved about 61% error reduction and cut the verification time from one month of manual work to one to two days.
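
The verification decision described above, a frame-normalized log-likelihood ratio of the word model against an anti-model compared to a threshold, can be sketched as follows. The function names, the normalization, and the zero threshold are illustrative assumptions, not the paper's actual models.

```python
def word_confidence(target_loglik, anti_loglik, n_frames):
    """Frame-normalized log-likelihood ratio of a word model
    against its anti-phone model (higher = more consistent)."""
    return (target_loglik - anti_loglik) / n_frames

def verify_word(target_loglik, anti_loglik, n_frames, threshold=0.0):
    """Flag the word's transcription as consistent with the speech
    when the confidence measure clears the threshold."""
    return word_confidence(target_loglik, anti_loglik, n_frames) >= threshold
```

A word whose target model scores well above the anti-model is accepted; otherwise it is flagged for manual inspection.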

A modification of McFadden's R2 for binary and ordinal response models

  • Ejike R. Ugba;Jan Gertheiss
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.1
    • /
    • pp.49-63
    • /
    • 2023
  • A lot of studies on the summary measures of predictive strength of categorical response models consider the likelihood ratio index (LRI), also known as the McFadden-R2, a better option than many other measures. We propose a simple modification of the LRI that adjusts for the effect of the number of response categories on the measure and that also rescales its values, mimicking an underlying latent measure. The modified measure is applicable to both binary and ordinal response models fitted by maximum likelihood. Results from simulation studies and a real data example on the olfactory perception of boar taint show that the proposed measure outperforms most of the widely used goodness-of-fit measures for binary and ordinal models. The proposed R2 interestingly proves quite invariant to an increasing number of response categories of an ordinal model.
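
The baseline LRI that the authors modify is McFadden's R2 = 1 - log L(model) / log L(null). A minimal sketch follows; the fitted probabilities are made up for illustration, and the proposed category adjustment and rescaling are not reproduced here.

```python
import math

def bernoulli_loglik(y, p):
    """Log-likelihood of binary outcomes y under predicted probabilities p."""
    return sum(math.log(pi if yi == 1 else 1.0 - pi) for yi, pi in zip(y, p))

def mcfadden_r2(loglik_model, loglik_null):
    """Likelihood ratio index: 1 - logL(model) / logL(null)."""
    return 1.0 - loglik_model / loglik_null

y = [1, 1, 0, 0]
p_model = [0.9, 0.8, 0.2, 0.1]   # illustrative fitted probabilities
p_null = [0.5, 0.5, 0.5, 0.5]    # intercept-only model: the mean of y
r2 = mcfadden_r2(bernoulli_loglik(y, p_model), bernoulli_loglik(y, p_null))
```

A model no better than the null gives R2 = 0; a perfect model approaches 1.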

Reliability measure improvement of Phoneme character extract In Out-of-Vocabulary Rejection Algorithm (미등록어 거절 알고리즘에서 음소 특성 추출의 신뢰도 측정 개선)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence
    • /
    • v.10 no.6
    • /
    • pp.219-224
    • /
    • 2012
  • In mobile communication terminals, vocabulary recognition systems suffer from low recognition rates because phoneme features are extracted from inaccurately spoken vocabulary, so phonemes are not recognized and similar phonemes are confused. To solve this problem, this paper proposes a system model based on a two-step process. First, each input phoneme is represented by a number that measures the distance between phonemes through a phoneme likelihood process. Next, the result is recognized through a reliability measure. This process minimizes the phoneme confusion errors caused by inaccurate vocabulary and improves the error correction rate for the affected vocabulary using phoneme likelihood and reliability. A performance comparison shows a 2.7% improvement in recognition over the method using error pattern learning and semantic patterns.

On Effective Speaker Verification Based on Subword Model

  • Ahn, Sung-Joo;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.49-59
    • /
    • 2002
  • This paper concerns an effective text-dependent speaker verification method for increasing speaker verification performance. While various speaker verification methods have been developed, their effectiveness has not yet been formally proven in terms of achieving acceptable performance levels. This paper proposes a weighted likelihood procedure along with a confidence measure for subword-based text-dependent speaker verification. Our aim is to remedy the low performance of speaker verification by strengthening the verification likelihood via subword-based hypothesis criteria and a weighted likelihood method. Experimental results show that the proposed method outperforms a speaker verification scheme without the proposed decision procedure by a factor of up to 1.6. These results show that the proposed method is effective and achieves reliable performance.
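
A weighted combination of subword-level log-likelihood ratios along these lines might look like the following. The plain normalized linear combination shown is an assumption, not the paper's exact weighting procedure.

```python
def weighted_verification_score(subword_llrs, weights):
    """Combine per-subword log-likelihood ratios into a single
    utterance-level verification score using normalized weights."""
    total = sum(weights)
    return sum(w * llr for w, llr in zip(weights, subword_llrs)) / total

def accept_speaker(subword_llrs, weights, threshold=0.0):
    """Accept the claimed identity when the weighted score clears the threshold."""
    return weighted_verification_score(subword_llrs, weights) >= threshold
```

Subwords that discriminate speakers well would receive larger weights, strengthening the overall verification likelihood.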

EVALUATION OF DIAGNOSTIC TESTS WITH MULTIPLE DIAGNOSTIC CATEGORIES

  • Birkett N.J.
    • Korean Society for Preventive Medicine: Conference Proceedings (대한예방의학회 학술대회논문집)
    • /
    • 1994.02b
    • /
    • pp.154-157
    • /
    • 1994
  • The evaluation of diagnostic tests attempts to obtain one or more statistical parameters that indicate the intrinsic diagnostic utility of a test. Sensitivity, specificity, and predictive value are not appropriate for this use. The likelihood ratio has been proposed as a useful measure when using a test to diagnose one of two disease states (e.g., disease present or absent). In this paper, we generalize the likelihood ratio concept to a situation in which the goal is to diagnose one of several non-overlapping disease states. A formula is derived to determine the post-test probability of a specific disease state. The post-test odds are shown to be related to the pre-test odds of a disease and to the usual likelihood ratios derived from considering the diagnosis between the target diagnosis and each alternative in turn. Hence, likelihood ratios derived from comparing pairs of diseases can be used to determine test utility in a multiple-disease diagnostic situation.
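
The generalization described above amounts to applying Bayes' rule across the disease states; the post-test odds of any pair of states equal their pre-test odds times the pairwise likelihood ratio. A worked sketch with made-up priors and test likelihoods:

```python
def post_test_probs(priors, test_likelihoods):
    """Post-test probability of each disease state via Bayes' rule:
    P(D_i | T) is proportional to P(T | D_i) * P(D_i)."""
    joint = [p * l for p, l in zip(priors, test_likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

def pairwise_post_odds(priors, test_likelihoods, i, j):
    """Post-test odds of state i versus state j: pre-test odds
    times the pairwise likelihood ratio."""
    return (priors[i] / priors[j]) * (test_likelihoods[i] / test_likelihoods[j])

priors = [0.5, 0.3, 0.2]            # hypothetical pre-test probabilities
lks = [0.8, 0.1, 0.1]               # hypothetical P(test result | disease)
probs = post_test_probs(priors, lks)
```

Note that the ratio of any two post-test probabilities reproduces the pairwise post-test odds, which is the key relationship the paper exploits.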

A Density-based Clustering Method

  • Ahn, Sung Mahn;Baik, Sung Wook
    • Communications for Statistical Applications and Methods
    • /
    • v.9 no.3
    • /
    • pp.715-723
    • /
    • 2002
  • This paper presents a clustering application of a density estimation method that utilizes the Gaussian mixture model. We define a "closeness measure" as a clustering criterion to see how close two given Gaussian components are. The closeness measure is defined as the ratio of log-likelihoods between two Gaussian components. In simulations using artificial data, the clustering algorithm turned out to be very powerful in that it correctly determines clusters in complex situations, and very flexible in that it can produce different sizes of clusters based on different threshold values.
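
One plausible reading of the closeness measure, an assumption since the abstract does not give the exact formula, is to evaluate one component's points under both components and take the ratio of the resulting log-likelihoods:

```python
import math

def gauss_loglik(xs, mu, sigma):
    """Log-likelihood of points xs under a 1-D Gaussian N(mu, sigma^2)."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2.0 * sigma ** 2) for x in xs)

def closeness(xs, mu_i, s_i, mu_j, s_j):
    """Ratio of log-likelihoods of component i's points under component j
    versus under component i itself; values near 1 suggest the two
    components describe the same cluster."""
    return gauss_loglik(xs, mu_j, s_j) / gauss_loglik(xs, mu_i, s_i)
```

Pairs of components whose closeness falls below a chosen threshold would be assigned to the same cluster, which is how different threshold values yield different cluster sizes.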

Application of Logit Model in Qualitative Dependent Variables (로짓모형을 이용한 질적 종속변수의 분석)

  • Lee, Kil-Soon;Yu, Wann
    • Journal of Families and Better Life
    • /
    • v.10 no.1 s.19
    • /
    • pp.131-138
    • /
    • 1992
  • Regression analysis has become a standard statistical tool in the behavioral sciences. Because of its widespread popularity, regression has often been misused. Such is the case when the dependent variable is a qualitative measure rather than a continuous, interval measure. Regression estimation with a qualitative dependent variable does not meet the assumptions underlying regression and can lead to serious errors in standard statistical inference. The logit model is recommended as an alternative to the regression model for qualitative dependent variables. Researchers can employ this model to measure the relationship between independent variables and a qualitative dependent variable without assuming that the logit model was derived from probabilistic choice theory. Coefficients in the logit model are typically estimated by maximum likelihood, in contrast to the ordinary regression model, which is estimated by least squares. Goodness of fit in the logit model is based on the likelihood ratio statistic, and the t-statistic is used for testing the null hypothesis.
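
The likelihood ratio statistic mentioned above compares the fitted logit model to the intercept-only model. A minimal sketch; the fitted probabilities are made up for illustration rather than produced by an actual maximum likelihood fit:

```python
import math

def loglik(y, p):
    """Bernoulli log-likelihood for binary outcomes y and probabilities p."""
    return sum(math.log(p_i if y_i == 1 else 1.0 - p_i)
               for y_i, p_i in zip(y, p))

y = [1, 1, 1, 0, 0]
p_full = [0.85, 0.75, 0.70, 0.30, 0.20]  # illustrative maximum likelihood fit
p_null = [0.6] * 5                        # intercept-only fit: the mean of y

# Likelihood ratio statistic; under H0 it is chi-square distributed with
# degrees of freedom equal to the number of added predictors.
lr_stat = -2.0 * (loglik(y, p_null) - loglik(y, p_full))
```

A large value of the statistic relative to the chi-square quantile leads to rejecting the intercept-only model.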

SVM-based Utterance Verification Using Various Confidence Measures (다양한 신뢰도 척도를 이용한 SVM 기반 발화검증 연구)

  • Kwon, Suk-Bong;Kim, Hoi-Rin;Kang, Jeom-Ja;Koo, Myong-Wan;Ryu, Chang-Sun
    • MALSORI
    • /
    • no.60
    • /
    • pp.165-180
    • /
    • 2006
  • In this paper, we present several confidence measures (CMs) for speech recognition systems to evaluate the reliability of recognition results. We propose heuristic CMs such as the mean log-likelihood score, N-best word log-likelihood ratio, and likelihood sequence fluctuation, as well as likelihood ratio testing (LRT)-based CMs using several types of anti-models. Furthermore, we propose new algorithms that add weighting terms to phone-level log-likelihood ratios when merging them into word-level log-likelihood ratios. These weighting terms are computed from the distance between acoustic models and knowledge-based phoneme classifications. LRT-based CMs substantially outperform heuristic CMs, and LRT-based CMs using phonetic information achieve a relative reduction in equal error rate of 8~13% compared to the baseline LRT-based CMs. We use a support vector machine to fuse several CMs and improve the performance of utterance verification. Our experiments show that selecting CMs with low mutual correlation is more effective than selecting highly correlated CMs.
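
Illustrative forms of three of the heuristic confidence measures named above; the exact definitions in the paper may differ, and these are plain sketches:

```python
def mean_loglik(frame_logliks):
    """Mean log-likelihood score across the frames of a word."""
    return sum(frame_logliks) / len(frame_logliks)

def nbest_llr(best_loglik, second_best_loglik):
    """N-best word log-likelihood ratio: margin of the top
    hypothesis over the runner-up."""
    return best_loglik - second_best_loglik

def likelihood_fluctuation(frame_logliks):
    """Likelihood sequence fluctuation as the variance of the frame scores."""
    m = mean_loglik(frame_logliks)
    return sum((l - m) ** 2 for l in frame_logliks) / len(frame_logliks)
```

Each utterance would yield a vector of such CM values, which the paper then fuses with a support vector machine into a single accept/reject decision.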

The Effect of Uncertainty in Roughness and Discharge on Flood Inundation Mapping (조도계수와 유량의 불확실성이 홍수범람도 구축에 미치는 영향)

  • Jung, Younghun;Yeo, Kyu Dong;Kim, Soo Young;Lee, Seung Oh
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.33 no.3
    • /
    • pp.937-945
    • /
    • 2013
  • The accuracy of flood inundation maps is determined by the uncertainty propagated from all variables involved in the overall process, including input data, model parameters, and modeling approaches. This study investigated the uncertainty arising from key variables (flow condition and Manning's n) in flood inundation mapping for the Missouri River near Boonville, Missouri, USA. The methodology involves generalized likelihood uncertainty estimation (GLUE) to quantify the uncertainty bounds of the flood inundation area. Uncertainty bounds in the GLUE procedure are evaluated by selecting two likelihood functions: the inverse of the sum of squared errors (1/SSE) and the inverse of the sum of absolute errors (1/SAE), computed from an observed water surface elevation and simulated water surface elevations. The results from GLUE show that the likelihood measure based on 1/SSE is more sensitive to the observation than the likelihood measure based on 1/SAE, and that the uncertainty propagated from the two variables produces an uncertainty bound of about 2% in the inundation area compared to the observed inundation. These results are expected to be useful for identifying the characteristics of floods.
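
The GLUE weighting step can be sketched like this, using the 1/SSE likelihood (1/SAE is analogous); the normalization shown and the absence of a behavioral threshold are simplifying assumptions:

```python
def glue_weights(observed, simulations):
    """Normalized GLUE likelihood weights for an ensemble of simulations,
    using the inverse of the sum of squared errors (1/SSE)."""
    likelihoods = []
    for sim in simulations:
        sse = sum((o - s) ** 2 for o, s in zip(observed, sim))
        likelihoods.append(1.0 / sse)
    total = sum(likelihoods)
    return [l / total for l in likelihoods]
```

The resulting weights would then be applied to the simulated inundation areas (e.g., via weighted quantiles) to form the uncertainty bounds.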

A Study on Utterance Verification Using Accumulation of Negative Log-likelihood Ratio (음의 유사도 비율 누적 방법을 이용한 발화검증 연구)

  • 한명희;이호준;김순협
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.3
    • /
    • pp.194-201
    • /
    • 2003
  • In speech recognition, confidence measuring decides whether a recognition result can be accepted or not. The confidence is measured by integrating frame-level scores into phone- and word-level scores. In word recognition, confidence measuring verifies both recognition results and out-of-vocabulary (OOV) words, so this post-processing can improve recognizer performance by rejecting errors instead of accepting them. In this paper, we measure confidence by modifying the log-likelihood ratio (LLR) used in previous confidence measures: when integrating confidence from the frame level to the phone level, we accumulate only those frames whose log-likelihood ratio is negative. Compared with the previous method on the output of a word recognizer, the false acceptance ratio (FAR) is decreased by about 3.49% for OOV and 15.25% for recognition errors when the correct acceptance ratio (CAR) is about 90%.
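
The accumulation rule described above (keep only the negative frame-level LLRs when forming the phone-level score) can be sketched as follows; the word-level integration and the threshold are illustrative assumptions:

```python
def phone_confidence(frame_llrs):
    """Accumulate only the negative frame-level log-likelihood ratios
    into the phone-level confidence score."""
    return sum(llr for llr in frame_llrs if llr < 0.0)

def accept_word(phone_scores, threshold):
    """Accept the recognized word when the summed phone-level
    confidence stays above the (negative) threshold."""
    return sum(phone_scores) >= threshold
```

Because positive frame LLRs are discarded, a few strongly mismatched frames cannot be masked by many well-matched ones, which is what sharpens the rejection of OOV words and recognition errors.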