• Title/Summary/Keyword: squared error loss


A study on combination of loss functions for effective mask-based speech enhancement in noisy environments (잡음 환경에 효과적인 마스크 기반 음성 향상을 위한 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.3
    • /
    • pp.234-240
    • /
    • 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by the mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. In this paper, we propose a combined loss to further improve speech enhancement. To effectively remove the residual noise in the speech, the positive part of the triplet loss is used together with the component loss. For the experiments, the TIMIT database is reconstructed using NOISEX92 noise and background music samples under various Signal-to-Noise Ratio (SNR) conditions. Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) are used as performance evaluation metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to the system trained only with the mean squared error.
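The masking step the abstract describes, multiplying the noisy spectrum by an estimated mask and scoring the result with mean squared error, can be sketched in a few lines. This is a minimal numpy illustration with toy magnitude spectra and an ideal-ratio mask; the array shapes and variable names are assumptions for illustration, not the paper's VF/SI models.

```python
import numpy as np

def apply_mask(noisy_spectrum, mask):
    # Enhanced spectrum = element-wise product of noisy spectrum and mask.
    return noisy_spectrum * mask

def mse_loss(estimate, target):
    # Mean squared error between two (time x frequency) magnitude spectra.
    return np.mean((estimate - target) ** 2)

# Toy magnitude spectra (4 frames x 8 frequency bins), additive noise.
rng = np.random.default_rng(0)
clean = rng.random((4, 8))
noise = 0.1 * rng.random((4, 8))
noisy = clean + noise

# Ideal ratio mask: clean-to-noisy magnitude ratio, clipped to [0, 1].
ideal_mask = np.clip(clean / np.maximum(noisy, 1e-8), 0.0, 1.0)
enhanced = apply_mask(noisy, ideal_mask)
```

With the ideal mask the enhanced spectrum recovers the clean one, so its MSE against the clean target is lower than that of the unprocessed noisy spectrum; a trained mask estimator only approximates this.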

Bayesian and maximum likelihood estimations from exponentiated log-logistic distribution based on progressive type-II censoring under balanced loss functions

  • Chung, Younshik;Oh, Yeongju
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.5
    • /
    • pp.425-445
    • /
    • 2021
  • A generalization of the log-logistic (LL) distribution, called the exponentiated log-logistic (ELL) distribution, is considered along the lines of the exponentiated Weibull distribution. In this paper, based on progressive type-II censored samples, we derive the maximum likelihood estimators and Bayes estimators for the three parameters, the survival function, and the hazard function of the ELL distribution. Then, under the balanced squared error loss (BSEL) and the balanced linex loss (BLEL) functions, the corresponding Bayes estimators are obtained using Lindley's approximation (see Jung and Chung, 2018; Lindley, 1980), the Tierney-Kadane approximation (see Tierney and Kadane, 1986), and Markov Chain Monte Carlo methods (see Hastings, 1970; Gelfand and Smith, 1990). To check the convergence of the MCMC chains, the Gelman and Rubin diagnostic (see Gelman and Rubin, 1992; Brooks and Gelman, 1997) was used. On the basis of their risks, the performances of the Bayes estimators are compared with the maximum likelihood estimators in simulation studies. The research supports the conclusion that the ELL distribution is an efficient distribution for modeling data in survival analysis, and that Bayes estimators under various loss functions are useful for many estimation problems.
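Under the balanced squared error loss L(θ, d) = ω(d − d₀)² + (1 − ω)(d − θ)², where d₀ is a target estimate (often the MLE), the Bayes estimator is the convex combination ω·d₀ + (1 − ω)·E[θ | data]. A minimal sketch computing this from posterior draws; the sample values and function name are illustrative, not the paper's ELL posteriors.

```python
import numpy as np

def bsel_bayes_estimator(posterior_samples, target_estimate, omega):
    """Bayes estimator under balanced squared error loss
    L(theta, d) = omega*(d - d0)^2 + (1 - omega)*(d - theta)^2,
    minimized by d = omega*d0 + (1 - omega)*E[theta | data]."""
    return omega * target_estimate + (1.0 - omega) * np.mean(posterior_samples)

# Illustrative posterior draws (e.g. from an MCMC chain) and MLE target.
samples = np.array([1.0, 2.0, 3.0, 4.0])
mle = 2.0
estimate = bsel_bayes_estimator(samples, target_estimate=mle, omega=0.5)
```

With ω = 0 the estimator reduces to the ordinary posterior mean (squared error loss); with ω = 1 it returns the target estimate unchanged.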

A new extension of Lindley distribution: modified validation test, characterizations and different methods of estimation

  • Ibrahim, Mohamed;Yadav, Abhimanyu Singh;Yousof, Haitham M.;Goual, Hafida;Hamedani, G.G.
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.5
    • /
    • pp.473-495
    • /
    • 2019
  • In this paper, a new extension of the Lindley distribution is introduced. Certain characterizations of the proposed distribution based on truncated moments, the hazard and reverse hazard functions, and conditional expectation are presented. Besides these characterizations, other statistical and mathematical properties of the proposed model are also discussed. The estimation of the parameters is performed through different classical methods of estimation. Bayes estimation is computed under a gamma informative prior and the squared error loss function. The performances of all estimation methods are studied via Monte Carlo simulations in the mean squared error sense. The potential of the proposed model is analyzed through two data sets. A modified goodness-of-fit test using the Nikulin-Rao-Robson statistic is investigated via two examples, and it is observed that the new extension may be used as an alternative lifetime model.

SOME POINT ESTIMATES FOR THE SHAPE PARAMETERS OF EXPONENTIATED-WEIBULL FAMILY

  • Singh Umesh;Gupta Pramod K.;Upadhyay S.K.
    • Journal of the Korean Statistical Society
    • /
    • v.35 no.1
    • /
    • pp.63-77
    • /
    • 2006
  • The maximum product of spacings estimator is proposed in this paper as a competent alternative to the maximum likelihood estimator for the parameters of the exponentiated-Weibull distribution; it works even when the maximum likelihood estimator does not exist. In addition, a Bayes-type estimator, known as the generalized maximum likelihood estimator, is also obtained for both shape parameters of this distribution. Although closed-form solutions for these proposed estimators do not exist, they can be obtained by simple, appropriate numerical techniques. The relative performances of the estimators are compared on the basis of their relative risk efficiencies obtained under symmetric and asymmetric losses. An example based on simulated data is considered for illustration.
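The maximum product of spacings idea, choosing the parameter value that maximizes the product of fitted-CDF spacings over the ordered sample, can be illustrated on a simpler one-parameter exponential model; the exponentiated-Weibull case in the paper works the same way with its own CDF, and the grid-search optimizer here is an illustrative shortcut for a proper numerical maximizer.

```python
import numpy as np

def mps_objective(rate, data):
    """Log product of spacings for Exp(rate): differences of the fitted CDF
    at the ordered sample, padded with F = 0 below and F = 1 above."""
    x = np.sort(data)
    cdf = 1.0 - np.exp(-rate * x)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    return np.sum(np.log(np.maximum(spacings, 1e-300)))

def mps_estimate(data, grid):
    # Simple grid search over candidate rate values.
    values = [mps_objective(r, data) for r in grid]
    return grid[int(np.argmax(values))]

rng = np.random.default_rng(1)
data = rng.exponential(scale=0.5, size=500)   # true rate = 2
grid = np.linspace(0.5, 4.0, 351)
rate_hat = mps_estimate(data, grid)
```

The spacings objective stays finite even in settings where the likelihood is unbounded, which is the motivation the abstract gives for preferring it when the MLE does not exist.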

Bayesian estimation for the exponential distribution based on generalized multiply Type-II hybrid censoring

  • Jeon, Young Eun;Kang, Suk-Bok
    • Communications for Statistical Applications and Methods
    • /
    • v.27 no.4
    • /
    • pp.413-430
    • /
    • 2020
  • The multiply Type-II hybrid censoring scheme has the disadvantage of an overly long experiment time. To overcome this limitation, we propose a generalized multiply Type-II hybrid censoring scheme. Some estimators of the scale parameter of the exponential distribution are derived under the generalized multiply Type-II hybrid censoring scheme. First, the maximum likelihood estimator of the scale parameter of the exponential distribution is obtained under the proposed censoring scheme. Second, we obtain the Bayes estimators under different loss functions with a noninformative prior and an informative prior. We approximate the Bayes estimators by Lindley's approximation and the Tierney-Kadane method, since the posterior distributions obtained from the two priors are complicated. In addition, the Bayes estimators are obtained using Markov Chain Monte Carlo samples. Finally, all proposed estimators are compared in the sense of the mean squared error through Monte Carlo simulation and applied to real data.
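The final comparison step, ranking estimators by mean squared error over repeated Monte Carlo samples, can be sketched generically. The two exponential-scale estimators below are illustrative stand-ins (complete-sample MLE and a simple shrinkage variant), not the paper's censored-data estimators.

```python
import numpy as np

def monte_carlo_mse(estimator, true_theta, n, reps, rng):
    """Estimate the MSE of an estimator of the exponential scale parameter
    by averaging squared errors over repeated simulated samples."""
    errors = []
    for _ in range(reps):
        sample = rng.exponential(scale=true_theta, size=n)
        errors.append((estimator(sample) - true_theta) ** 2)
    return np.mean(errors)

rng = np.random.default_rng(2)
mle = lambda x: np.mean(x)                       # MLE of the scale
shrunk = lambda x: np.sum(x) / (len(x) + 1)      # shrinkage-style estimator
mse_mle = monte_carlo_mse(mle, true_theta=1.0, n=20, reps=2000, rng=rng)
mse_shrunk = monte_carlo_mse(shrunk, true_theta=1.0, n=20, reps=2000, rng=rng)
```

Whichever estimator attains the smaller simulated MSE is preferred, which is exactly the comparison criterion the abstract uses.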

An Integrated Process Control Scheme Based on the Future Loss (미래손실에 기초한 통합공정관리계획)

  • Park, Chang-Soon;Lee, Jae-Heon
    • The Korean Journal of Applied Statistics
    • /
    • v.21 no.2
    • /
    • pp.247-264
    • /
    • 2008
  • This paper considers an integrated process control procedure for detecting special causes in an ARIMA(0,1,1) process that is being adjusted automatically after each observation using a minimum mean squared error adjustment policy. It is assumed that a special cause can change the process mean and the process variance. We derive expressions for the process deviation from target for a variety of process parameter changes, and introduce a control chart, based on the generalized likelihood ratio, for detecting special causes. We also propose an integrated process control scheme based on the future loss, which denotes the cost that will be incurred over the remaining process interval after a true out-of-control signal.
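For an ARIMA(0,1,1) process, the one-step-ahead minimum mean squared error forecast is an exponentially weighted moving average (EWMA) of past observations, and the MMSE adjustment simply cancels the forecast deviation from target. A minimal sketch, where the smoothing weight and the deviation values are illustrative:

```python
import numpy as np

def ewma_forecast(deviations, lam):
    """One-step-ahead MMSE forecast for an ARIMA(0,1,1) process:
    an EWMA of past deviations with weight lam = 1 - theta."""
    z = deviations[0]
    for d in deviations[1:]:
        z = lam * d + (1.0 - lam) * z
    return z

target = 0.0
deviations = np.array([0.4, 0.1, -0.2, 0.3])  # observed deviations from target
forecast = ewma_forecast(deviations, lam=0.3)
adjustment = target - forecast                # cancel the predicted deviation
```

After each observation the controller applies `adjustment`, so on average the next observation lands back on target; the paper's control chart then monitors the adjusted process for special causes.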

Performance Evaluation of Loss Functions and Composition Methods of Log-scale Train Data for Supervised Learning of Neural Network (신경 망의 지도 학습을 위한 로그 간격의 학습 자료 구성 방식과 손실 함수의 성능 평가)

  • Donggyu Song;Seheon Ko;Hyomin Lee
    • Korean Chemical Engineering Research
    • /
    • v.61 no.3
    • /
    • pp.388-393
    • /
    • 2023
  • The analysis of engineering data using neural networks based on supervised learning has been utilized in various engineering fields, such as optimization of chemical engineering processes, prediction of particulate matter pollution concentrations, prediction of thermodynamic phase equilibria, and prediction of physical properties for transport phenomena systems. Supervised learning requires training data, and its performance is affected by the composition and configuration of the given training data. Engineering data are frequently given on a log scale, such as DNA length and analyte concentrations. In this study, for widely distributed log-scaled training data of virtual 100×100 images, the available loss functions were quantitatively evaluated in terms of (i) the confusion matrix, (ii) the maximum relative error, and (iii) the mean relative error. As a result, the mean-absolute-percentage-error and mean-squared-logarithmic-error loss functions were optimal for the log-scaled training data. Furthermore, we found that uniformly selected training data lead to the best prediction performance. The optimal loss functions and the method for composing training data studied in this work could be applied to engineering problems such as evaluating DNA length, analyzing biomolecules, and predicting the concentration of colloidal suspensions.
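The two loss functions the study found optimal for log-scaled targets have simple closed forms: MAPE penalizes relative error, and MSLE penalizes squared error on the log scale, so targets spanning several decades contribute comparably. A sketch with illustrative toy arrays:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error: mean of |relative error|, in percent."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def msle(y_true, y_pred):
    """Mean squared logarithmic error: squared error on log1p-transformed
    values, so each decade of the target contributes on a comparable scale."""
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

# Targets spanning four decades with ~10-20% prediction errors.
y_true = np.array([1e1, 1e3, 1e5])
y_pred = np.array([1.1e1, 0.9e3, 1.2e5])
```

A plain MSE on these raw values would be dominated entirely by the 1e5 entry; both losses above treat the three decades comparably, which is why they suit log-scaled training data.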

RELIABILITY ANALYSIS FOR THE TWO-PARAMETER PARETO DISTRIBUTION UNDER RECORD VALUES

  • Wang, Liang;Shi, Yimin;Chang, Ping
    • Journal of applied mathematics & informatics
    • /
    • v.29 no.5_6
    • /
    • pp.1435-1451
    • /
    • 2011
  • In this paper, the estimation of the parameters as well as the survival and hazard functions is presented for the two-parameter Pareto distribution using Bayesian and non-Bayesian approaches under upper record values. Maximum likelihood estimation (MLE) and interval estimation are derived for the parameters. Bayes estimators of the reliability performances are obtained under symmetric (squared error) and asymmetric (Linex and general entropy (GE)) losses, when the two parameters have discrete and continuous priors, respectively. Finally, two numerical examples, with a real data set and simulated data, are presented to illustrate the proposed method. An algorithm is introduced to generate record data; then a simulation study is performed and the different estimation results are compared.
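Given posterior draws of a parameter θ, the Bayes estimators under the three losses mentioned here have closed forms: the posterior mean for squared error, −(1/a)·log E[e^{−aθ}] for Linex, and (E[θ^{−q}])^{−1/q} for general entropy. A sketch with illustrative samples, not the paper's Pareto posteriors:

```python
import numpy as np

def bayes_se(samples):
    # Squared error loss -> posterior mean.
    return np.mean(samples)

def bayes_linex(samples, a):
    # Linex loss -> -(1/a) * log E[exp(-a * theta)].
    return -np.log(np.mean(np.exp(-a * samples))) / a

def bayes_ge(samples, q):
    # General entropy loss -> (E[theta^(-q)])^(-1/q).
    return np.mean(samples ** (-q)) ** (-1.0 / q)

# Illustrative posterior draws of theta.
samples = np.array([0.8, 1.0, 1.2, 1.4])
```

With positive loss parameters a and q, both asymmetric estimators fall below the posterior mean, reflecting the heavier penalty these losses place on overestimation.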

Modeling pediatric tumor risks in Florida with conditional autoregressive structures and identifying hot-spots

  • Kim, Bit;Lim, Chae Young
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.5
    • /
    • pp.1225-1239
    • /
    • 2016
  • We investigate pediatric tumor incidence data collected by the Florida Association for Pediatric Tumor program using various models commonly used in disease mapping analysis. In particular, we consider Poisson normal models with various conditional autoregressive structures for spatial dependence, a zero-inflated component to capture excess zero counts, and a spatio-temporal model to capture spatial and temporal dependence together. We found that the intrinsic conditional autoregressive model provides the smallest Deviance Information Criterion (DIC) among the models when only spatial dependence is considered. On the other hand, adding an autoregressive structure over time decreases the DIC relative to the model without a time dependence component. We adopt the weighted ranks squared error loss to identify high-risk regions, which provides results similar to those of other researchers who have worked on the same data set (e.g. Zhang et al., 2014; Wang and Rodriguez, 2014). Our results thus provide additional statistical support for the high-risk regions identified by those researchers.
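Rank-based losses for hot-spot detection operate on the posterior distribution of each region's risk rank rather than on the risks themselves. A minimal sketch of computing posterior expected ranks from MCMC draws of relative risks; the weighted ranks squared error loss of the paper additionally weights the squared rank errors, and the toy draws below are illustrative:

```python
import numpy as np

def posterior_expected_ranks(risk_samples):
    """risk_samples: (draws, regions) array of relative-risk MCMC draws.
    Rank the regions within each draw (1 = lowest risk), then average
    the ranks over draws to get each region's posterior expected rank."""
    ranks = np.argsort(np.argsort(risk_samples, axis=1), axis=1) + 1
    return ranks.mean(axis=0)

# Three MCMC draws of relative risks for three regions.
draws = np.array([[0.5, 1.0, 2.0],
                  [0.6, 0.9, 2.2],
                  [0.4, 1.1, 1.9]])
expected = posterior_expected_ranks(draws)
```

Regions whose expected rank sits near the top across draws are flagged as hot-spot candidates; here the third region ranks highest in every draw.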

A Novel Broadband Channel Estimation Technique Based on Dual-Module QGAN

  • Li Ting;Zhang Jinbiao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.5
    • /
    • pp.1369-1389
    • /
    • 2024
  • In the era of 6G, the rapid increase in communication data volume poses higher demands on both traditional and deep-learning-based channel estimation techniques; when processing large-scale data, their computational load and real-time performance often fail to meet practical requirements. To overcome this bottleneck, this paper introduces quantum computing techniques, exploring for the first time the application of Quantum Generative Adversarial Networks (QGAN) to broadband channel estimation. Although generative adversarial techniques have been applied to channel estimation, obtaining instantaneous channel information remains a significant challenge. To address instantaneous channel estimation, this paper proposes an innovative QGAN with a dual-module generator design. The adversarial loss function and the Mean Squared Error (MSE) loss function are applied separately for the parameter updates of the two modules, facilitating the learning of statistical channel information and the generation of instantaneous channel details, respectively. Experimental results on the Pennylane quantum computing simulation platform demonstrate the efficiency and accuracy of the proposed dual-module QGAN technique in channel estimation. This research opens a new direction for physical layer techniques in wireless communication and offers expanded possibilities for the future development of wireless communication technologies.