• Title/Summary/Keyword: Maximum mutual information (MMI)

5 search results

LFMMI-based acoustic modeling by using external knowledge (External knowledge를 사용한 LFMMI 기반 음향 모델링)

  • Park, Hosung;Kang, Yoseb;Lim, Minkyu;Lee, Donghyun;Oh, Junseok;Kim, Ji-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.5
    • /
    • pp.607-613
    • /
    • 2019
  • This paper proposes LF-MMI (Lattice Free Maximum Mutual Information)-based acoustic modeling using external knowledge for speech recognition. External knowledge here refers to text data other than the training data used for the acoustic model. LF-MMI, an objective function for training DNNs (Deep Neural Networks), achieves high performance in discriminative training. In LF-MMI, a phoneme probability is used as the prior probability when predicting the posterior probability of the DNN-based acoustic model. We propose using external knowledge to train this prior probability model and thereby improve the DNN-based acoustic model. A relative improvement of 14 % is measured compared with the conventional LF-MMI-based model.
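
The MMI criterion referred to above contrasts the score of the reference hypothesis against the total score of all competing hypotheses. As a rough illustration (not the paper's implementation; the hypothesis names and scores below are made up):

```python
import math

def mmi_objective(acoustic_loglik, prior_logprob, ref):
    """MMI criterion for one utterance: log-posterior of the reference
    hypothesis under combined acoustic and prior scores."""
    # Numerator: joint score of the reference hypothesis.
    num = acoustic_loglik[ref] + prior_logprob[ref]
    # Denominator: total score over all competing hypotheses
    # (in LF-MMI this sum is taken over a phone-level denominator graph).
    den = math.log(sum(math.exp(acoustic_loglik[w] + prior_logprob[w])
                       for w in acoustic_loglik))
    return num - den

# Toy example with three competing hypotheses (scores are made up).
loglik = {"yes": -10.0, "no": -12.0, "maybe": -13.0}
priors = {"yes": math.log(0.5), "no": math.log(0.3), "maybe": math.log(0.2)}
obj = mmi_objective(loglik, priors, "yes")  # closer to 0 is better
```

Maximizing this objective pushes the reference's score up while pushing competing hypotheses down, which is what makes the training discriminative.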

Maximum mutual information estimation linear spectral transform based adaptation (Maximum mutual information estimation을 이용한 linear spectral transformation 기반의 adaptation)

  • Yoo, Bong-Soo;Kim, Dong-Hyun;Yook, Dong-Suk
    • Proceedings of the KSPS conference
    • /
    • 2005.04a
    • /
    • pp.53-56
    • /
    • 2005
  • In this paper, we propose a transformation-based robust adaptation technique that uses maximum mutual information (MMI) estimation for the objective function and the linear spectral transformation (LST) for adaptation. LST is an adaptation method that deals with environmental noise in the linear spectral domain, so a small number of parameters suffices for fast adaptation. The proposed technique, called MMI-LST, is evaluated on the TIMIT and FFMTIMIT corpora and shown to be advantageous when only a small amount of adaptation speech is available.
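
As a rough sketch of what adaptation in the linear spectral domain can look like (the parameterization below is an assumption for illustration, not the authors' exact formulation): log-spectral features are mapped back to the linear domain, scaled and shifted by a small number of parameters, and returned to the log domain.

```python
import math

def adapt_linear_spectral(log_spectrum, a, b):
    """Illustrative LST-style adaptation: map log-spectral features to the
    linear spectral domain, apply s' = a*s + b (a: a channel-like scaling,
    b: an additive-noise-like offset), and return to the log domain.
    This exact form is an assumption, not the paper's formulation."""
    return [math.log(max(a * math.exp(v) + b, 1e-10)) for v in log_spectrum]

# Identity transform (a=1, b=0) leaves the features unchanged.
adapted = adapt_linear_spectral([0.0, 1.0], 1.0, 0.0)
```

Because only a handful of parameters (here `a` and `b`) need to be estimated, such a transform can be fit reliably from very little adaptation speech.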


Adaptive Quantization Scheme for Multi-Level Cell NAND Flash Memory (멀티 레벨 셀 낸드 플래시 메모리용 적응적 양자화기 설계)

  • Lee, Dong-Hwan;Sung, Wonyong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.6
    • /
    • pp.540-549
    • /
    • 2013
  • An adaptive non-uniform quantization scheme is proposed for soft-decision error correction in NAND flash memory. Although the conventional maximum mutual information (MMI) quantizer shows optimal post-FEC (forward error correction) bit error rate (BER) performance, it demands heavy computational overhead because of the exhaustive search needed to find the optimal parameter values. The proposed quantization scheme has a simple structure defined by only six parameters, whose optimal values are found by maximizing the mutual information between the input and the output symbols. It is demonstrated that the proposed scheme improves the BER performance of soft-decision decoding with only a small computational overhead.
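
The quantity being maximized here, the mutual information between the quantizer's input and output symbols, can be computed directly from a joint probability table. A minimal illustration with a made-up binary channel (not the paper's data):

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]           # input marginal
    py = [sum(col) for col in zip(*joint)]     # output marginal
    mi = 0.0
    for x, row in enumerate(joint):
        for y, pxy in enumerate(row):
            if pxy > 0.0:
                mi += pxy * math.log2(pxy / (px[x] * py[y]))
    return mi

# Made-up joint distribution for a binary stored bit and a 2-level
# quantized read value with 10 % crossover noise.
joint = [[0.45, 0.05],
         [0.05, 0.45]]
mi = mutual_information(joint)  # 1 - H(0.1) ~= 0.531 bits
```

An MMI quantizer sweeps its parameters (read thresholds in the NAND case) and keeps the setting that maximizes this value; the paper's contribution is reducing that sweep to six parameters.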

On Learning of HMM-Net Classifiers Using Hybrid Methods (하이브리드법에 의한 HMM-Net 분류기의 학습)

  • 김상운;신성효
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1273-1276
    • /
    • 1998
  • The HMM-Net is an architecture for a neural network that implements a hidden Markov model (HMM). The architecture was developed to combine the discriminant power of neural networks with the time-domain modeling capability of HMMs. Criteria used for learning HMM-Net classifiers are maximum likelihood (ML), maximum mutual information (MMI), and minimization of mean squared error (MMSE). In this paper we propose an efficient learning method for HMM-Net classifiers using the hybrid criteria ML/MMSE and MMI/MMSE, and report an experimental study comparing the performance of HMM-Net classifiers trained by the gradient descent algorithm under these criteria. Experimental results for the isolated numeric digits /0/ to /9/ show that the proposed method outperforms the others in both learning and recognition rates.
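
The abstract does not spell out how the hybrid criteria combine their components. One plausible reading (an assumption for illustration, not the paper's method) is staged training that optimizes one criterion first and then switches to the other:

```python
def hybrid_loss(log_lik, target_post, predicted_post, epoch, switch_epoch=10):
    """Hybrid ML/MMSE criterion, one plausible reading: optimize the ML
    criterion for the first epochs, then fine-tune with MMSE.
    (switch_epoch is a hypothetical knob, not from the paper.)"""
    if epoch < switch_epoch:
        # ML stage: maximize log-likelihood, i.e. minimize its negative.
        return -log_lik
    # MMSE stage: squared error between target and predicted posteriors.
    return sum((t - p) ** 2 for t, p in zip(target_post, predicted_post))
```

For example, `hybrid_loss(-2.0, [1.0, 0.0], [0.8, 0.2], epoch=1)` returns the ML-stage loss `2.0`, while the same call at `epoch=20` returns the MMSE-stage squared error.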


On learning of HMM-Net classifiers (HMM-Net 분류기의 학습)

  • 김상운;오수환
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.9
    • /
    • pp.61-67
    • /
    • 1997
  • The HMM-Net is an architecture for a neural network that implements a hidden Markov model (HMM). The architecture was developed to combine the classification power of neural networks with the time-domain modeling capability of HMMs. Criteria used for learning HMM-Net classifiers are maximum likelihood (ML), maximum mutual information (MMI), and minimization of mean squared error (MMSE). In this paper we report an experimental study comparing the performance of HMM-Net classifiers trained by the gradient descent algorithm under the above criteria. Experimental results for the isolated numbers /young/ to /koo/ show that with binary inputs the performance of MMSE is better than the others, while with fuzzy inputs the performance of MMI is better than the others.
