• Title/Summary/Keyword: Variance reduction


Dimensionality Reduction in Speech Recognition by Principal Component Analysis

  • Lee, Chang-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.8 no.9, pp.1299-1305, 2013
  • In this paper, we investigate a method of reducing the computational cost of speech recognition by dimensionality reduction of MFCC feature vectors. Eigendecomposition of the feature vectors yields a linear transformation that orders the vector components by variance. The first component has the largest variance and hence is the most important one for the pattern classification at hand. We may therefore reduce the computational cost, without degrading recognition performance, by excluding the least-variance components. Experimental results show that the MFCC components can be reduced by about half without a significant adverse effect on the recognition error rate.
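
As a rough, hedged illustration of the procedure described in this abstract (not the authors' implementation), the sketch below eigendecomposes the covariance of synthetic MFCC-like vectors, orders the components by variance, and keeps roughly half of them; the data, dimensions, and retained fraction are placeholders.

```python
import numpy as np

# Synthetic stand-in for 13-dimensional MFCC feature vectors; a real
# speech front end would supply these.
rng = np.random.default_rng(0)
n_frames, n_mfcc = 1000, 13
features = rng.normal(size=(n_frames, n_mfcc)) * np.linspace(5.0, 0.5, n_mfcc)

# Eigendecomposition of the feature covariance matrix.
centered = features - features.mean(axis=0)
cov = centered.T @ centered / (n_frames - 1)
eigvals, eigvecs = np.linalg.eigh(cov)

# eigh returns ascending eigenvalues; reverse so the largest-variance
# component comes first, as in the abstract.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep roughly half of the components (the abstract reports that about
# half can be dropped without significantly hurting the error rate).
k = n_mfcc // 2 + 1
reduced = centered @ eigvecs[:, :k]

print("retained variance fraction:", eigvals[:k].sum() / eigvals.sum())
print("reduced feature shape:", reduced.shape)
```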

NONNEGATIVE MINIMUM BIASED ESTIMATION IN VARIANCE COMPONENT MODELS

  • Lee, Jong-Hoo
    • East Asian Mathematical Journal, v.5 no.1, pp.95-110, 1989
  • In a general variance component model, nonnegative quadratic estimators of the components of variance are considered which are invariant with respect to mean value translation and have minimum bias (analogously to the estimation theory of mean value parameters). Here the minimum is taken over an appropriate cone of positive semidefinite matrices, after a reduction by invariance. Among these estimators, which always exist, the one of minimum norm is characterized. This characterization is achieved by systems of necessary and sufficient conditions and by a cone-restricted pseudoinverse. In models where the decomposing covariance matrices span a commutative quadratic subspace, a representation of the considered estimator is derived that requires merely solving an ordinary convex quadratic optimization problem. As an example, we present the two-way nested classification random model. An unbiased estimator is derived for the mean squared error of any unbiased or biased estimator that is expressible as a linear combination of independent sums of squares. Further, it is shown that, for the classical balanced variance component models, this estimator is the best invariant unbiased estimator of the variance of the ANOVA estimator and of the mean squared error of the nonnegative minimum biased estimator. As an example, the balanced two-way nested classification model with random effects is considered.
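
The paper is theoretical; as a loosely related, much simpler illustration of nonnegativity in variance component estimation, the sketch below computes the classical ANOVA estimators for a balanced one-way random-effects model and truncates the between-group component at zero. The model, sizes, and truncation rule are assumptions of this sketch, not the paper's minimum-biased estimator.

```python
import numpy as np

# Balanced one-way random-effects model: y_ij = mu + a_i + e_ij,
# a_i ~ N(0, sigma_a^2), e_ij ~ N(0, sigma_e^2).
rng = np.random.default_rng(1)
groups, per_group = 10, 6
a = rng.normal(scale=np.sqrt(2.0), size=groups)                    # sigma_a^2 = 2
y = a[:, None] + rng.normal(scale=1.0, size=(groups, per_group))   # sigma_e^2 = 1

# Classical ANOVA sums of squares and mean squares.
grand_mean = y.mean()
group_means = y.mean(axis=1)
ms_between = per_group * np.sum((group_means - grand_mean) ** 2) / (groups - 1)
ms_within = np.sum((y - group_means[:, None]) ** 2) / (groups * (per_group - 1))

sigma_e2_hat = ms_within
# The unbiased ANOVA estimator of sigma_a^2 can go negative; truncation at
# zero is the crude nonnegativity fix used here (the paper studies a
# minimum-biased nonnegative estimator instead).
sigma_a2_hat = max((ms_between - ms_within) / per_group, 0.0)

print("sigma_e^2 estimate:", round(sigma_e2_hat, 3))
print("sigma_a^2 estimate:", round(sigma_a2_hat, 3))
```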


Real variance estimation in iDTMC-based depletion analysis

  • Inyup Kim; Yonghee Kim
    • Nuclear Engineering and Technology, v.55 no.11, pp.4228-4237, 2023
  • The Improved Deterministic Truncation of Monte Carlo (iDTMC) method is a powerful acceleration and variance reduction scheme for Monte Carlo analysis. The concept of the iDTMC method and correlated sampling-based real variance estimation are briefly introduced. Moreover, the application of the iterative scheme to the correlated sampling is discussed. The iDTMC method is applied to a 3-dimensional small modular reactor (SMR) model problem. The real variances of the burnup-dependent criticality and power distribution are evaluated and compared with those obtained from 30 independent iDTMC calculations. The impact of the inactive cycles on the correlated sampling is also evaluated to investigate the consistency of the correlated sampling scheme. In addition, numerical performance and sensitivity analyses of the real variance estimation are carried out in view of the figure of merit of the iDTMC method. The numerical results show that correlated sampling estimates the real variances accurately and with high computational efficiency.
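
A minimal sketch of the correlated-sampling idea referred to above, under toy assumptions: the same random numbers are reused for a nominal and a perturbed integrand so that the variance of the estimated difference collapses. This is not iDTMC itself; the integrand, perturbation, and sample size are invented for illustration.

```python
import numpy as np

# Nominal and perturbed "tallies" evaluated on the SAME random samples
# (correlated sampling) versus on independent samples.
rng = np.random.default_rng(2)
n = 100_000
x = rng.random(n)

f_nominal = np.exp(-x)            # nominal problem
f_perturbed = np.exp(-1.02 * x)   # slightly perturbed problem

diff_correlated = f_perturbed - f_nominal          # common random numbers
x2 = rng.random(n)
diff_independent = np.exp(-1.02 * x2) - f_nominal  # fresh samples

for name, d in [("correlated", diff_correlated), ("independent", diff_independent)]:
    print(name, "diff:", round(d.mean(), 6),
          "std err:", round(d.std(ddof=1) / np.sqrt(n), 6))
```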

Case study: application of fused sliced average variance estimation to near-infrared spectroscopy of biscuit dough data

  • Um, Hye Yeon; Won, Sungmin; An, Hyoin; Yoo, Jae Keun
    • The Korean Journal of Applied Statistics, v.31 no.6, pp.835-842, 2018
  • The so-called sliced average variance estimation (SAVE) is a popular methodology in the sufficient dimension reduction literature. SAVE is sensitive to the number of slices in practice. To overcome this, a fused SAVE (FSAVE) was recently proposed, which combines the kernel matrices obtained from various numbers of slices. In this paper, we consider practical applications of FSAVE to large p, small n data. For this, near-infrared spectroscopy data on biscuit dough are analyzed. In this case study, the usefulness of FSAVE in high-dimensional data analysis is confirmed by showing that the results from FSAVE are superior to existing analysis results.
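
A schematic sketch of SAVE and the FSAVE-style fusion over several slice counts, on synthetic data; the slice counts, data-generating model, and implementation details are assumptions of this sketch rather than the authors' code.

```python
import numpy as np

def save_kernel(X, y, n_slices):
    """SAVE kernel matrix for predictors X and response y."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Cholesky-based whitening so the predictors have identity covariance.
    L = np.linalg.cholesky(np.cov(Xc, rowvar=False))
    Z = Xc @ np.linalg.inv(L).T
    # Slice the response into roughly equal-count slices.
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    labels = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_slices - 1)
    M = np.zeros((p, p))
    for h in range(n_slices):
        Zh = Z[labels == h]
        if len(Zh) < 2:
            continue
        D = np.eye(p) - np.cov(Zh, rowvar=False)
        M += (len(Zh) / n) * D @ D
    return M

# FSAVE-style fusion: combine SAVE kernels computed with several slice counts.
rng = np.random.default_rng(3)
n, p = 500, 5
X = rng.normal(size=(n, p))
y = X[:, 0] ** 2 + 0.5 * rng.normal(size=n)      # true direction: first axis

M_fused = sum(save_kernel(X, y, h) for h in (3, 5, 7))
eigvals, eigvecs = np.linalg.eigh(M_fused)
print("leading direction (should load mainly on x1):", eigvecs[:, -1].round(2))
```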

Variance Distributions of the DFT and CDFT

  • 최태영
    • Journal of the Korean Institute of Telematics and Electronics, v.21 no.4, pp.7-12, 1984
  • A composite discrete Fourier transform (CDFT) is developed, which can diagonalize a real symmetric circulant matrix. In general, circulant matrices can be diagonalized by the discrete Fourier transform (DFT). By analyzing the variance distributions of the DFT and CDFT for a general symmetric covariance matrix of real signals, the DFT and CDFT are compared with respect to the rate-distortion performance measure. The results show that the CDFT is more efficient than the DFT in bit-rate reduction. In addition, for a particular 64$\times$64-point covariance matrix ($f(q)=(0.95)^q$), the relative average bit-rate reduction of the CDFT with respect to the DFT is found by numerical calculation to be 0.0095 bit.
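
As a hedged numerical illustration of the DFT side of this comparison (the CDFT construction itself is not reproduced), the sketch below computes the variance distribution of the DFT coefficients for a covariance matrix with $f(q)=(0.95)^q$ and a high-rate transform-coding bit-rate measure.

```python
import numpy as np

# Symmetric covariance matrix with f(q) = 0.95**q for a 64-point signal.
N = 64
q = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
R = 0.95 ** q

# Unitary DFT matrix and the variances of the DFT coefficients,
# i.e. the diagonal of F R F^H.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
dft_var = np.real(np.diag(F @ R @ F.conj().T))

# High-rate transform-coding measure: average bits per sample is, up to a
# constant, the mean of 0.5*log2(coefficient variance); smaller is better.
rate_dft = 0.5 * np.mean(np.log2(dft_var))
rate_none = 0.5 * np.mean(np.log2(np.diag(R)))   # no transform (unit variances)
print("DFT coefficient variances (first 8):", dft_var[:8].round(3))
print("relative rate, DFT vs. no transform:", round(rate_dft - rate_none, 4), "bit")
```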


A Study on Human Training System for Prosthetic Arm Control

  • 장영건; 홍승홍
    • Journal of Biomedical Engineering Research, v.15 no.4, pp.465-474, 1994
  • This study is concerned with a method that helps a human subject generate EMG signals accurately and consistently, so as to produce reliable design samples for the function discriminator of a prosthetic arm controller. We intend to ensure signal accuracy and consistency by training the human as a signal generation source. For this purpose, we construct a human training system on a digital computer, which generates visual graphs that compare the real motion trajectory with the desired one and display the EMG signals and their features. To evaluate how the training system affects feature variance and feature separability between motion classes, we select four features: integral absolute value, zero crossing counts, AR coefficients, and LPC cepstrum coefficients. We performed the experiment four times over two months. The experimental results show that the human training system is effective for accurate and consistent EMG signal generation and for reducing feature variance, but shows no correlation with feature separability. Among the features used, the cepstrum coefficients are the most preferable for variance reduction, class separability, and robustness to the time-varying properties of EMG signals.
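
A small sketch of two of the EMG features named in the abstract, integral absolute value and zero-crossing count, computed on a synthetic signal window; the signal, window length, and omission of the AR and LPC-cepstrum features are choices of this sketch.

```python
import numpy as np

# One synthetic signal window standing in for a segment of surface EMG.
rng = np.random.default_rng(4)
window = rng.normal(size=256) * np.hanning(256)

iav = np.sum(np.abs(window))                                     # integral absolute value
zc = np.sum(np.signbit(window[:-1]) != np.signbit(window[1:]))   # zero crossings

print("IAV:", round(float(iav), 2), " zero crossings:", int(zc))
```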


The methods of CADIS-NEE and CADIS-DXTRAN in NECP-MCX and their applications

  • Qingming He; Zhanpeng Huang; Liangzhi Cao; Hongchun Wu
    • Nuclear Engineering and Technology, v.56 no.7, pp.2748-2755, 2024
  • This paper presents two new methods of variance reduction for shielding calculations in Monte Carlo radiation transport. One is CADIS-NEE, which combines Consistent Adjoint Driven Importance Sampling (CADIS) with the next-event estimator (NEE) to increase the calculation efficiency of point tallies. The other is CADIS-DXTRAN, which combines CADIS with deterministic transport (DXTRAN) to obtain higher performance than using CADIS and DXTRAN separately. The combination procedures are derived and implemented in the hybrid Monte Carlo-deterministic particle-transport code NECP-MCX. Various problems are tested to demonstrate the effectiveness of the two methods. According to the results, the two combined methods are more efficient than using CADIS, NEE, or DXTRAN separately. In a long-distance photon-transport problem, CADIS-NEE converges faster than NEE, and its figure of merit (FOM) is 75.6 times that of NEE. In a labyrinth problem, the FOM of CADIS-DXTRAN surpasses those of DXTRAN and CADIS by factors of 45.3 and 17.7, respectively. Therefore, it is advisable to employ these two novel methods selectively in appropriate scenarios to reduce variance.
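
The figure of merit quoted above is FOM = 1/(R^2 T), with R the relative error and T the computing time. The sketch below illustrates it on a toy deep-penetration problem with simple exponential path-length biasing as the variance reduction; CADIS, NEE, and DXTRAN themselves are not implemented, and all problem parameters are invented.

```python
import time
import numpy as np

# Toy deep-penetration problem: probability that an exponential path length
# exceeds 10 mean free paths (true answer exp(-10)). Analog sampling is
# compared with exponential path-length biasing via the figure of merit.
rng = np.random.default_rng(5)
n, depth = 1_000_000, 10.0

def figure_of_merit(estimates, elapsed):
    mean = estimates.mean()
    rel_err = estimates.std(ddof=1) / np.sqrt(len(estimates)) / mean
    return mean, 1.0 / (rel_err ** 2 * elapsed)

# Analog Monte Carlo: score 1 if the sampled path exceeds the slab depth.
t0 = time.perf_counter()
x = rng.exponential(1.0, n)
mean_a, fom_a = figure_of_merit((x > depth).astype(float), time.perf_counter() - t0)

# Biased sampling: stretch the path distribution (rate 0.2) and weight by
# the ratio of the true density to the biased density.
t0 = time.perf_counter()
lam = 0.2
y = rng.exponential(1.0 / lam, n)
weights = np.exp(-y) / (lam * np.exp(-lam * y))
mean_b, fom_b = figure_of_merit(weights * (y > depth), time.perf_counter() - t0)

print(f"true answer : {np.exp(-depth):.3e}")
print(f"analog      : mean {mean_a:.3e}  FOM {fom_a:.1f}")
print(f"biased      : mean {mean_b:.3e}  FOM {fom_b:.1f}")
```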

Optimal Design of Ferromagnetic Pole Pieces for Transmission Torque Ripple Reduction in a Magnetic-Geared Machine

  • Kim, Sung-Jin; Park, Eui-Jong; Kim, Yong-Jae
    • Journal of Electrical Engineering and Technology, v.11 no.6, pp.1628-1633, 2016
  • This paper derives an effective shape of the ferromagnetic pole pieces (low-speed rotor) for the reduction of transmission torque ripple in a magnetic-geared machine based on a Box-Behnken design (BBD). In particular, using a non-linear finite element method (FEM) based on 2-D numerical analysis, we numerically investigate the relationship between the independent variables (selected by the BBD) and the response variables. In addition, we derive a regression equation for the response variables in terms of the independent variables by using multiple regression analysis and analysis of variance (ANOVA). We assess the validity of the optimized design by comparing the characteristics of the optimized model derived from a response surface analysis with those of the initial model.
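
A minimal sketch of the workflow described above: a three-factor Box-Behnken design in coded units and a full quadratic response-surface fit by least squares. The factors, response values, and noise level are placeholders standing in for the pole-piece shape variables and FEM-computed torque ripple.

```python
import numpy as np
from itertools import combinations

# Three-factor Box-Behnken design in coded units: every pair of factors at
# the +/-1 corners with the third factor at 0, plus three center points.
levels = (-1.0, 1.0)
runs = []
for i, j in combinations(range(3), 2):
    for a in levels:
        for b in levels:
            point = np.zeros(3)
            point[i], point[j] = a, b
            runs.append(point)
runs += [np.zeros(3)] * 3
X = np.array(runs)                       # 15 runs x 3 factors

# Placeholder response: a known quadratic surface plus noise (the paper's
# response would be FEM-computed torque ripple).
rng = np.random.default_rng(6)
y = (5 + 2 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2]
     + 1.2 * X[:, 0] * X[:, 1] + 0.8 * X[:, 0] ** 2
     + rng.normal(0, 0.1, len(X)))

# Full quadratic model: intercept, linear, squared, and interaction terms.
M = np.column_stack([np.ones(len(X))]
                    + [X[:, k] for k in range(3)]
                    + [X[:, k] ** 2 for k in range(3)]
                    + [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print("fitted regression coefficients:", coef.round(2))
```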

Face Recognition Using A New Methodology For Independent Component Analysis

  • 류재흥; 고재흥
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2000.11a, pp.305-309, 2000
  • In this paper, we present a new methodology for face recognition after analysing the conventional ICA (Independent Component Analysis) based approach. In the literature, ICA-based methods have followed the same procedure without exception: first, PCA (Principal Component Analysis) is used for feature extraction; next, an ICA learning method is applied for feature enhancement in the reduced dimension. However, it is contradictory that features meant to be extracted with higher-order moments are selected by variance, a second-order statistic; a necessary component may well lie in the discarded feature space. In the new methodology, features are extracted using the magnitude of kurtosis (the 4th-order central moment or cumulant). This corresponds to PCA-based feature extraction using eigenvalues (2nd-order central moments, i.e., variances). The synergy of PCA and ICA can be achieved if PCA is used as a noise-reduction filter. The ICA methodology is analysed using SVD (Singular Value Decomposition): PCA performs whitening and noise reduction, while ICA performs the feature extraction. Simulation results show the effectiveness of the methodology compared to the conventional ICA approach.
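
A hedged sketch of the selection rule advocated in this abstract: PCA whitening followed by ranking components by the magnitude of their kurtosis instead of by eigenvalue. The synthetic data and dimensions are assumptions; face-image features would replace them in the real application.

```python
import numpy as np

# Synthetic data: several Gaussian directions plus one heavy-tailed source,
# linearly mixed, so the directions differ in kurtosis but not obviously
# in variance.
rng = np.random.default_rng(7)
n, d = 2000, 6
sources = rng.normal(size=(n, d))
sources[:, 3] = rng.laplace(size=n)           # heavy-tailed source
data = sources @ rng.normal(size=(d, d))

# PCA whitening (the noise-reduction / decorrelation step).
centered = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
whitened = centered @ eigvecs / np.sqrt(eigvals)

# Rank the whitened components by |excess kurtosis| (4th-order statistic)
# instead of by eigenvalue (2nd-order statistic).
z = (whitened - whitened.mean(axis=0)) / whitened.std(axis=0)
kurt = (z ** 4).mean(axis=0) - 3.0
order = np.argsort(-np.abs(kurt))
print("components ranked by |kurtosis|:", order)
print("excess kurtosis:", kurt.round(2))
```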


Asymptotic Properties of Variance Change-point in the Long-memory Process

  • Chu Minjeong; Cho Sinsup
    • Proceedings of the Korean Statistical Society Conference, 2000.11a, pp.23-26, 2000
  • It is noted that many econometric time series have long-memory properties. A long-memory, or strongly dependent, process is characterized by hyperbolically decaying autocorrelations and an unbounded spectral density at the origin. Since the long-memory property is observed in data collected over a rather long period, there is some possibility of a parameter change within the process. In this paper, we consider the estimation of the change point when there is a change in the variance of a long-memory process. The estimator is based on a reasonable statistic, and its consistency is shown using Taqqu's strong reduction theorem.
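
As an illustrative sketch only, the following applies the standard CUSUM-of-squares estimator of a variance change point to a short-memory toy series; the paper's contribution concerns its asymptotics under long-range dependence, which this example does not attempt to reproduce.

```python
import numpy as np

# Series whose variance quadruples after observation 600.
rng = np.random.default_rng(8)
n, true_cp = 1000, 600
x = np.concatenate([rng.normal(0, 1.0, true_cp),
                    rng.normal(0, 2.0, n - true_cp)])

# CUSUM of squares, centered so it peaks (in absolute value) near the
# change point.
s = np.cumsum(x ** 2)
k = np.arange(1, n + 1)
d = s - (k / n) * s[-1]
cp_hat = int(np.argmax(np.abs(d))) + 1

print("true change point:", true_cp, " estimated:", cp_hat)
```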
