• Title/Summary/Keyword: Variance Reduction Method

Face Recognition Using A New Methodology For Independent Component Analysis (새로운 독립 요소 해석 방법론에 의한 얼굴 인식)

  • 류재흥;고재흥
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.11a
    • /
    • pp.305-309
    • /
    • 2000
  • In this paper, we present a new methodology for face recognition after analyzing the conventional ICA (Independent Component Analysis) based approach. In the literature, ICA-based methods have followed the same procedure without exception: first, PCA (Principal Component Analysis) is used for feature extraction; next, an ICA learning method is applied for feature enhancement in the reduced dimension. However, it is contradictory for features extracted using higher-order moments to depend on variance, a second-order statistic, and the possibility that a necessary component lies in the discarded feature space is not considered. In the new methodology, features are extracted using the magnitude of kurtosis (the 4th-order central moment, or cumulant), which corresponds to PCA-based feature extraction using eigenvalues (the 2nd-order central moment, or variance). A synergy effect of PCA and ICA can be achieved if PCA is used as a noise-reduction filter. The ICA methodology is analyzed using SVD (Singular Value Decomposition): PCA performs whitening and noise reduction, while ICA performs the feature extraction. Simulation results show the effectiveness of the methodology compared to the conventional ICA approach.

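
The kurtosis-based selection rule described in this abstract can be sketched as follows (a minimal illustration on synthetic components, not the authors' code; the ICA/PCA extraction itself is replaced by three known source distributions):

```python
import numpy as np

def excess_kurtosis(x):
    """4th-order standardized central moment minus 3 (zero for a Gaussian)."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(0)
# Three candidate components: heavy-tailed, sub-Gaussian, and Gaussian.
laplace = rng.laplace(size=50_000)    # excess kurtosis ~ +3
uniform = rng.uniform(-1, 1, 50_000)  # excess kurtosis ~ -1.2
gauss = rng.normal(size=50_000)       # excess kurtosis ~ 0

components = {"laplace": laplace, "uniform": uniform, "gauss": gauss}
# PCA would rank components by variance (2nd order); the paper's rule
# ranks them by the magnitude of kurtosis (4th order) instead.
ranked = sorted(components,
                key=lambda k: abs(excess_kurtosis(components[k])),
                reverse=True)
print(ranked)  # the near-Gaussian component is ranked last
```

Ranking by |kurtosis| keeps the strongly non-Gaussian components that ICA can exploit, whereas an eigenvalue ranking could discard them.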
Large-Scale Phase Retrieval via Stochastic Reweighted Amplitude Flow

  • Xiao, Zhuolei;Zhang, Yerong;Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.11
    • /
    • pp.4355-4371
    • /
    • 2020
  • Phase retrieval, recovering a signal from phaseless measurements, is generally considered an NP-hard problem. This paper adopts an amplitude-based nonconvex optimization cost function to develop a new stochastic gradient algorithm, named stochastic reweighted phase retrieval (SRPR). SRPR is a stochastic gradient iteration algorithm that runs in two stages: first, a truncated-sample stochastic variance reduction algorithm is used to initialize the objective function; the second stage is gradient refinement, in which continual updates of the amplitude-based stochastic weighted gradient improve the initial estimate. Because of the stochastic method, each iteration in either stage of SRPR involves only one equation. Therefore, SRPR is simple, scalable, and fast. Compared with state-of-the-art phase retrieval algorithms, simulation results show that SRPR converges faster and requires fewer magnitude-only measurements to reconstruct the signal, in both the real- and complex-valued cases.
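
The one-equation-per-iteration refinement stage can be sketched in the real-valued case (an illustration only, not the SRPR algorithm: the variance-reduced initialization is replaced by an assumed starting point near the truth, and the reweighted gradient by a plain Kaczmarz-style amplitude update):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 200
x = rng.normal(size=n)          # ground-truth signal
A = rng.normal(size=(m, n))     # Gaussian measurement vectors
y = np.abs(A @ x)               # magnitude-only (phaseless) measurements

def rel_residual(z):
    return np.linalg.norm(np.abs(A @ z) - y) / np.linalg.norm(y)

# Assumption: stand-in for the paper's initialization stage -- start near x.
z = x + 0.3 * rng.normal(size=n)
r0 = rel_residual(z)

# Refinement: each step touches exactly one randomly chosen equation.
for _ in range(5000):
    i = rng.integers(m)
    ai = A[i]
    s = ai @ z
    # Project z onto the set {z : |a_i^T z| = y_i}, keeping the current sign.
    z -= ((s - y[i] * np.sign(s)) / (ai @ ai)) * ai
r1 = rel_residual(z)
print(r0, r1)
```

With a good initialization and enough measurements, the residual shrinks toward zero; each iteration costs only one inner product, which is what makes such stochastic schemes scalable.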

A Study on Human Training System for Prosthetic Arm Control (의수제어를 위한 인체학습시스템에 관한 연구)

  • 장영건;홍승홍
    • Journal of Biomedical Engineering Research
    • /
    • v.15 no.4
    • /
    • pp.465-474
    • /
    • 1994
  • This study concerns a method that helps a human subject generate EMG signals accurately and consistently, so as to produce reliable design samples for the function discriminator of a prosthetic arm controller. We aim to ensure signal accuracy and consistency by training the human as a signal-generation source. For this purpose, we construct a human training system on a digital computer that generates visual graphs comparing the actual target motion trajectory with the desired one and displays the EMG signals and their features. To evaluate how the human training system affects feature variance and feature separability between motion classes, we select four features: integral absolute value, zero crossing counts, AR coefficients, and LPC cepstrum coefficients. We performed the experiment four times over two months. The results show that the human training system is effective for accurate and consistent EMG signal generation and for reducing feature variance, but has no correlated effect on feature separability. Among the features used, the cepstrum coefficients are the most preferable for variance reduction, class separability, and robustness to the time-varying properties of EMG signals.

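
Two of the four EMG features named in the abstract (integral absolute value and zero crossing counts) are simple enough to sketch directly; the AR and LPC cepstrum coefficients are omitted here:

```python
import numpy as np

def integral_absolute_value(x):
    """IAV: sum of absolute amplitudes over the analysis window."""
    return float(np.sum(np.abs(x)))

def zero_crossing_count(x):
    """Number of sign changes between consecutive samples."""
    return int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))

# A tiny synthetic window standing in for one EMG analysis frame:
sig = np.array([1.0, -0.5, 0.25, 0.75, -1.0, -0.25, 0.5])
iav = integral_absolute_value(sig)   # 1 + 0.5 + 0.25 + 0.75 + 1 + 0.25 + 0.5
zc = zero_crossing_count(sig)        # sign flips at 4 of the 6 transitions
print(iav, zc)  # 4.25 4
```

In practice these features are computed per motion class and per window, and the study's concern is how much their variance shrinks as the subject is trained.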
Tutorial: Methodologies for sufficient dimension reduction in regression

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.2
    • /
    • pp.105-117
    • /
    • 2016
  • In this paper, as a sequel to the first tutorial, we discuss sufficient dimension reduction methodologies used to estimate the central subspace (sliced inverse regression, sliced average variance estimation), the central mean subspace (ordinary least squares, principal Hessian directions, iterative Hessian transformation), and the central $k^{th}$-moment subspace (covariance method). Large-sample tests to determine the structural dimensions of the three target subspaces are well developed for most of the methodologies; in addition, a permutation test that does not require large-sample distributions is introduced and can be applied to all the methodologies discussed in the paper. Theoretical relationships among the sufficient dimension reduction methodologies are also investigated, and real data analysis is presented for illustration purposes. A seeded dimension reduction approach is then introduced so that the methodologies can be applied to large p, small n regressions.
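
Sliced inverse regression, the first central-subspace estimator listed above, can be sketched in a few lines (a textbook-style illustration under a single-index model, not the tutorial's own code):

```python
import numpy as np

def sir_directions(X, y, n_slices=10):
    """Sliced inverse regression: eigenvectors of Cov(E[Z | y-slice]),
    where Z is the standardized predictor."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals**-0.5) @ vecs.T   # cov^{-1/2}
    Z = (X - mu) @ inv_sqrt
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)                      # slice mean of Z
        M += (len(idx) / n) * np.outer(m, m)
    _, v = np.linalg.eigh(M)
    # Map back to the original scale; leading columns estimate the subspace.
    return inv_sqrt @ v[:, ::-1]

rng = np.random.default_rng(2)
n, p = 2000, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = np.exp(X @ beta) + 0.1 * rng.normal(size=n)      # single-index response

top = sir_directions(X, y)[:, 0]
cos = abs(top @ beta) / np.linalg.norm(top)          # alignment with truth
print(round(cos, 3))
```

The leading SIR direction should align closely with the true index vector; the structural dimension here is one, which is what the large-sample or permutation tests in the tutorial are designed to detect.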

Evaluation Method for Measurement System and Process Capability Using Gage R&R and Performance Indices (게이지 R&R과 성능지수를 이용한 측정시스템과 공정능력 평가 방법)

  • Ju, Youngdon;Lee, Dongju
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.42 no.2
    • /
    • pp.78-85
    • /
    • 2019
  • High variance in the measurement system can cause high process variation, which adversely affects process capability; measurement system analysis is therefore closely related to process capability analysis. In industry, however, the measurement system and process variance are generally evaluated separately: measurement system analysis is implemented before process monitoring, process capability, and process performance analysis, even though these analyses are closely related. This paper presents an effective concurrent evaluation procedure for measurement system analysis and process capability analysis, using a table that combines Process Performance (Pp), Gage Repeatability & Reproducibility (%R&R), and the Number of Distinct Categories (NDC). Furthermore, the long-term process capability index (Pp), which accounts for both gage variance and process variance, is used instead of the short-term process capability index (Cp), which considers only process variance; the long-term index better reflects the relationship between the measurement system and process capability. Quality measurement and improvement guidelines for each region of the table are also described in detail. In conclusion, this research proposes a procedure that performs measurement system analysis and process capability analysis at the same time, which can reduce the measurement staff's effort and improve the accuracy of the evaluation.
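
The three indices combined in the paper's table can be computed from the gage and part variance components using the conventional MSA-style formulas (a sketch with assumed example values, not the paper's data or its exact region table):

```python
import math

def gage_and_capability(sigma_part, sigma_gage, lsl, usl):
    """Indices linking measurement-system and process-capability analysis."""
    sigma_total = math.sqrt(sigma_part**2 + sigma_gage**2)
    pct_rr = 100.0 * sigma_gage / sigma_total        # %R&R vs. total variation
    ndc = int(1.41 * sigma_part / sigma_gage)        # number of distinct categories
    pp = (usl - lsl) / (6.0 * sigma_total)           # long-term Pp: gage + process
    cp_like = (usl - lsl) / (6.0 * sigma_part)       # process-only analogue (Cp-style)
    return pct_rr, ndc, pp, cp_like

# Hypothetical study: part-to-part sd 1.0, gage sd 0.3, spec limits 94..106.
pct_rr, ndc, pp, cp = gage_and_capability(sigma_part=1.0, sigma_gage=0.3,
                                          lsl=94.0, usl=106.0)
print(round(pct_rr, 1), ndc, round(pp, 2), round(cp, 2))
```

Note how the long-term Pp (which folds in gage variance) is lower than the process-only index: that gap is exactly the relationship between the measurement system and process capability that the paper's joint table makes visible.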

A Study of Nitrous Oxide Thermal Decomposition and Reaction Rate in High Temperature Inert Gas (고온 불활성 기체 분위기에서 아산화질소 열분해 및 반응속도에 관한 연구)

  • Lee, Han Min;Yun, Jae Geun;Hong, Jung Goo
    • Journal of ILASS-Korea
    • /
    • v.25 no.3
    • /
    • pp.132-138
    • /
    • 2020
  • N2O is a hazardous atmospheric pollutant that can damage the ozone layer and contribute to the greenhouse effect. Emission controls exist for the other nitrogen oxides, but there is no method specific to N2O. To prevent further environmental pollution and global warming, it is essential to control N2O emissions from industrial equipment. In this study, a thermal decomposition experiment on an N2O gas mixture was conducted in a cylindrical reactor to characterize N2O reduction and NO formation, and a CHEMKIN calculation was performed to determine the reaction rates and mechanism. The residence time of the N2O gas in the reactor was set as the experimental variable to imitate a real SNCR system. As a result, most of the nitrogen components were converted into N2, and the reaction rate of the N2O gas decreased with the emitted N2O concentration. At 800℃ and 900℃, the N2O reduction and NO concentration increased with residence time and temperature; however, at 1000℃, both decreased at 40 s because the forward reaction rate diminished and the reverse reaction appeared.

Statistical algorithm and application for the noise variance estimation (영상 잡음의 분산 추정에 관한 통계적 알고리즘 및 응용)

  • Kim, Yeong-Hwa;Nam, Ji-Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • v.20 no.5
    • /
    • pp.869-878
    • /
    • 2009
  • Image restoration techniques such as noise reduction and contrast enhancement have been studied to enhance images contaminated by noise. An image degraded by additive random noise can be enhanced by noise reduction, and sigma filtering is one of the most widely used methods for reducing such noise. In this paper, we propose a new sigma filter algorithm based on noise variance estimation that effectively enhances images degraded by noise. Specifically, the Bartlett test is used to measure the degree of noise relative to the degree of image features. Simulation results are also given to show the performance of the proposed algorithm.

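
The classical sigma filter that the paper builds on can be sketched as follows (the Bartlett-test variance estimation that is the paper's actual contribution is not shown; the noise sigma is simply assumed known here):

```python
import numpy as np

def sigma_filter(img, sigma, half=1, k=2.0):
    """Lee-style sigma filter: average only the neighbours whose intensity
    lies within k*sigma of the centre pixel, so edges are preserved while
    additive noise is smoothed."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - half), min(h, i + half + 1)
            j0, j1 = max(0, j - half), min(w, j + half + 1)
            win = img[i0:i1, j0:j1].astype(float)
            mask = np.abs(win - img[i, j]) <= k * sigma   # centre always kept
            out[i, j] = win[mask].mean()
    return out

rng = np.random.default_rng(3)
clean = np.zeros((16, 16))
clean[:, 8:] = 100.0                                # a sharp step edge
noisy = clean + rng.normal(0.0, 5.0, clean.shape)   # additive random noise
filtered = sigma_filter(noisy, sigma=5.0)

err_noisy = np.mean((noisy - clean) ** 2)
err_filt = np.mean((filtered - clean) ** 2)
print(err_noisy > err_filt)
```

The filter's effectiveness depends directly on how well sigma is estimated, which is why the paper couples it with a statistical noise-variance estimator.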
Combined Correlation Methods for Multipopulation Metamodel (다분포 대형 시뮬레이션 모형에 대한 결합상관방법)

  • 권치명
    • Journal of the Korea Society for Simulation
    • /
    • v.1 no.1
    • /
    • pp.1-16
    • /
    • 1992
  • This research develops two variance reduction methods for estimating the parameters of an experimental simulation model with multiple design points, based on an approach that focuses on reducing the variances of the mean responses across the design points. The first method extends a combined approach of antithetic variates and control variates for a single design point to the multipopulation context with independent streams across the design points. The second method improves on the first by applying the same strategy in conjunction with the Schruben-Margolin method. We illustrate the implementation of the second method with an example. We expect these two approaches to improve the estimation of the parameters of interest compared with the control variates method.

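
The single-design-point combination of antithetic variates and control variates that the paper extends can be illustrated on a toy integrand with a known answer (E[e^U] = e - 1 for U ~ Uniform(0,1)); this is a generic textbook sketch, not the paper's multipopulation estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
f = np.exp                                   # toy response: estimate E[f(U)]

u = rng.uniform(size=n)
crude = f(u)                                 # crude Monte Carlo
anti = 0.5 * (f(u) + f(1.0 - u))             # antithetic pair (U, 1-U)

# Control variate with known mean: Z = (U^2 + (1-U)^2)/2 has E[Z] = 1/3.
z = 0.5 * (u**2 + (1.0 - u) ** 2)
beta = np.cov(anti, z)[0, 1] / np.var(z)     # optimal CV coefficient (estimated)
combined = anti - beta * (z - 1.0 / 3.0)     # antithetic + control variates

for name, est in [("crude", crude), ("antithetic", anti), ("combined", combined)]:
    print(f"{name:10s} mean={est.mean():.4f} var={est.var():.2e}")
```

All three estimators target the same mean, but each layer of variance reduction shrinks the spread further, which is the effect the paper pursues simultaneously across multiple design points.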
Determination of Incentive Level of Direct Load Control using Probabilistic Technique with Variance Reduction Technique (확률적 기법을 통한 직접부하제어의 제어지원금 산정)

  • Jeong Yun-Won;Park Jong-Bae;Shin Joong-Rin
    • Journal of Energy Engineering
    • /
    • v.14 no.1
    • /
    • pp.46-53
    • /
    • 2005
  • This paper presents a new approach for determining accurate incentive levels for a Direct Load Control (DLC) program using probabilistic techniques. Economic analysis of DLC resources needs to identify the hour-by-hour expected energy not served resulting from the random outage characteristics of generators, as well as to reflect the availability and duration of DLC resources, which leads to a computational explosion. The conventional methods are therefore based on scenario approaches that reduce the computation time and avoid the complexity of the economic studies. In this paper, we have developed a new technique based on sequential Monte Carlo simulation to evaluate the required expected load-control amount in each hour and to decide the incentive level satisfying the economic constraints. In addition, we have applied a variance reduction technique to enhance the efficiency of the simulation. To show the efficiency and effectiveness of the suggested method, numerical studies have been performed on the modified IEEE 24-bus reliability test system.
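
The core Monte Carlo step, sampling generator outage states and accumulating energy not served, can be sketched with antithetic variates as the variance reduction layer (a toy four-generator system with assumed capacities and outage rates; the paper does not specify which variance reduction technique it applies):

```python
import numpy as np

rng = np.random.default_rng(5)
caps = np.array([200.0, 200.0, 150.0, 100.0])   # generator capacities (MW)
outage_rate = np.array([0.05, 0.05, 0.08, 0.10])
load = 560.0                                     # hourly load (MW)

def ens(u):
    """Energy not served for sampled outage states driven by uniforms u."""
    avail = np.where(u < outage_rate, 0.0, caps).sum(axis=1)
    return np.maximum(0.0, load - avail)

n = 20_000
u = rng.uniform(size=(n, len(caps)))
crude = ens(u)                        # crude sampling of the outage states
# ens() is monotone in u, so the antithetic pair (u, 1-u) is negatively
# correlated and the paired average has lower variance (two evals per sample).
anti = 0.5 * (ens(u) + ens(1.0 - u))
print(crude.mean(), anti.mean(), crude.var(), anti.var())
```

Both estimators are unbiased for the hourly expected energy not served; the variance reduction means fewer Monte Carlo replications are needed per hour, which matters when the simulation must be repeated for every hour and incentive level.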

Comparison of Subsampling Error Associated with Analysis of Explosive Compounds in Soil (화약물질 오염토양의 부시료 제조방법에 따른 오차 비교)

  • Bae, Bumhan
    • Journal of Soil and Groundwater Environment
    • /
    • v.22 no.6
    • /
    • pp.57-65
    • /
    • 2017
  • Six soil subsampling methods were evaluated on soils contaminated with explosive compounds to quantify the variance associated with each method. The methods were modified grab sampling, simplified riffle splitting, fractional shoveling, coning & quartering, degenerate fractional shoveling, and rolling & quartering. All of these methods resulted in significantly lower CVs (coefficients of variation) of 1~5%, compared with the 8~98% CV of common grab sampling, possibly owing to the reduction of the grouping and segregation errors described by Gy's sampling theory. Among the methods, simplified riffle splitting tended to yield lower explosive-compound concentrations, while rolling & quartering gave the opposite result. Fractional shoveling showed the least variance and the highest reproducibility in the analysis.
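
The coefficient of variation used to compare the subsampling methods is computed as below (the replicate concentrations are hypothetical values chosen to mirror the paper's reported CV ranges, not its measurements):

```python
import numpy as np

def coefficient_of_variation(x):
    """CV (%): relative standard deviation of replicate analyses."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical replicate concentrations (mg/kg) under two schemes:
grab = [120.0, 45.0, 210.0, 88.0, 160.0]        # grab sampling: heterogeneous
fractional = [101.0, 99.0, 102.0, 98.0, 100.0]  # fractional shoveling: uniform

cv_grab = coefficient_of_variation(grab)
cv_frac = coefficient_of_variation(fractional)
print(round(cv_grab, 1), round(cv_frac, 1))
```

A lower CV across replicate subsamples indicates that the subsampling step itself contributes less variance, which is the basis on which the six methods were ranked.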