• Title/Summary/Keyword: Fisher linear discriminant

Search results: 53 items (processing time: 0.021 s)

A Study on the Diagnosis of Cutting Tool States Using Cutting Conditions and Cutting Force Parameters (I) - Signal Processing and Feature Extraction -

  • Cheong, C.Y.;Yu, K.H.;Suh, N.S.
    • 한국정밀공학회지 / Vol. 14, No. 10 / pp.135-140 / 1997
  • The detection of cutting tool states in machining is important for automation. The information on cutting tool states in the metal cutting process is uncertain, so industry needs a system that can detect cutting tool states in real time and control the feed motion. Cutting signal features must be selected before classification. In this paper, Fisher's linear discriminant function was applied successfully to the pattern recognition of cutting tool states. Cutting conditions and cutting force parameters have been shown to be sensitive to tool states, so they can be used as features for tool state detection.
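
The classification step described above rests on Fisher's two-class linear discriminant. As a hedged illustration (synthetic data, not the paper's cutting-force measurements), the discriminant direction and midpoint decision rule can be sketched as:

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a two-class Fisher
# linear discriminant on synthetic "cutting force feature" vectors.
# Class 0 stands for a sharp tool, class 1 for a worn tool; the feature
# values are invented for illustration.
rng = np.random.default_rng(0)
sharp = rng.normal([1.0, 2.0], 0.3, size=(50, 2))   # hypothetical features
worn = rng.normal([2.5, 3.5], 0.3, size=(50, 2))

m0, m1 = sharp.mean(axis=0), worn.mean(axis=0)
# Pooled within-class scatter matrix S_w
Sw = np.cov(sharp.T) * (len(sharp) - 1) + np.cov(worn.T) * (len(worn) - 1)
w = np.linalg.solve(Sw, m1 - m0)                    # discriminant direction
threshold = w @ (m0 + m1) / 2                       # midpoint decision rule

def classify(x):
    """Return 1 (worn) if the projection exceeds the midpoint threshold."""
    return int(w @ x > threshold)

acc = np.mean([classify(x) == 0 for x in sharp] +
              [classify(x) == 1 for x in worn])
print(f"training accuracy: {acc:.2f}")
```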


An Improved method of Two Stage Linear Discriminant Analysis

  • Chen, Yarui;Tao, Xin;Xiong, Congcong;Yang, Jucheng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 3 / pp.1243-1263 / 2018
  • Two-stage linear discriminant analysis (TSLDA) is a feature extraction technique for the small sample size problem in image recognition. TSLDA retains all subspace information of the between-class and within-class scatter. However, the feature information in the four subspaces may not be entirely beneficial for classification, and the regularization procedure for eliminating singular matrices in TSLDA has high time complexity. To address these drawbacks, this paper proposes an improved two-stage linear discriminant analysis (Improved TSLDA). The Improved TSLDA introduces a selection and compression method that extracts superior feature information from the four subspaces to constitute an optimal projection space, where a single Fisher criterion measures the importance of each individual feature vector. The Improved TSLDA also applies an approximation matrix method to eliminate the singular matrices and reduce the time complexity. Comparative experiments on five face databases and one handwritten digit database validate the effectiveness of the Improved TSLDA.
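
The "single Fisher criterion" used to rank feature vectors can be illustrated as follows; the data and the candidate directions are invented, and this shows only the scoring step, not the full Improved TSLDA algorithm:

```python
import numpy as np

# Hedged sketch of scoring candidate projection vectors with the Fisher
# criterion J(w) = (w' Sb w) / (w' Sw w). Synthetic data: only axis 0
# carries the class separation.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0, 0], 1.0, (40, 3)),
               rng.normal([3, 0, 0], 1.0, (40, 3))])
y = np.array([0] * 40 + [1] * 40)

mean_all = X.mean(axis=0)
Sb = np.zeros((3, 3))   # between-class scatter
Sw = np.zeros((3, 3))   # within-class scatter
for c in (0, 1):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    Sw += (Xc - mc).T @ (Xc - mc)

def fisher_score(w):
    """J(w): between-class over within-class variance along direction w."""
    return (w @ Sb @ w) / (w @ Sw @ w)

candidates = np.eye(3)                  # score each coordinate axis
scores = [fisher_score(w) for w in candidates]
best = int(np.argmax(scores))
print("best axis:", best)
```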

Wavelet-Based FLD for Face Recognition

  • 이완수;이형지;정재호
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 제13회 신호처리 합동 학술대회 논문집 / pp.435-438 / 2000
  • This paper proposes a wavelet-based FLD (Fisher Linear Discriminant) method, addressing both the speed and the accuracy of face recognition. A 128×128 image is decomposed by the wavelet transform into 16×16 subimages, and the two subimages corresponding to the low band and the mid band are used for training and recognition. Experimental results show that the proposed method maintains the recognition rate of the conventional FLD method while running faster; in our experiments it was about six times faster.
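
The repeated low-band wavelet decomposition (128×128 down to 16×16) can be sketched with a one-level 2-D Haar transform applied three times; this is a generic Haar implementation, not the authors' code:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform: returns (LL, LH, HL, HH)
    subimages at half resolution."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4          # low/low: coarse approximation
    lh = (a + b - c - d) / 4          # horizontal detail
    hl = (a - b + c - d) / 4          # vertical detail
    hh = (a - b - c + d) / 4          # diagonal detail
    return ll, lh, hl, hh

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
sub = img
for _ in range(3):                    # three levels: 128 -> 64 -> 32 -> 16
    sub, lh, hl, hh = haar2d(sub)
print(sub.shape)                      # (16, 16) low-band subimage
```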


Real-time BCI for imagery movement and Classification for uncued EEG signal

  • 강성욱;전성찬
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2009년도 학술대회 / pp.2083-2085 / 2009
  • Brain-computer interface (BCI) is a communication pathway between devices (computers) and the human brain. It processes brain signals in real time and extracts information about what the brain is doing. In this work, we develop an EEG BCI system using common spatial pattern (CSP) feature extraction and a Fisher linear discriminant analysis (FLDA) classifier. Two-class EEG motor imagery datasets, both cued and uncued, are tested to verify its feasibility.
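
A minimal sketch of the CSP-plus-FLDA pipeline on synthetic two-class "EEG" trials; channel count, trial length, and the variance structure are all invented, so this shows only the shape of the method:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_t = 4, 256

def trials(scale, n=30):
    """Trials where channel 0's variance differs between the two classes."""
    s = np.ones(n_ch); s[0] = scale
    return [rng.normal(0, s[:, None], (n_ch, n_t)) for _ in range(n)]

left, right = trials(3.0), trials(0.5)
C1 = sum(t @ t.T / n_t for t in left) / len(left)    # class covariances
C2 = sum(t @ t.T / n_t for t in right) / len(right)

# CSP: eigenvectors of (C1+C2)^-1 C1; the extreme eigenvalues give the
# most discriminative spatial filters.
vals, vecs = np.linalg.eig(np.linalg.solve(C1 + C2, C1))
order = np.argsort(vals.real)
W = vecs.real[:, [order[0], order[-1]]].T            # two spatial filters

def features(trial):
    """Log-variance of the CSP-filtered signals (standard CSP features)."""
    z = W @ trial
    return np.log(z.var(axis=1))

F = np.array([features(t) for t in left + right])
y = np.array([0] * len(left) + [1] * len(right))

# Fisher LDA on the two CSP features
m0, m1 = F[y == 0].mean(0), F[y == 1].mean(0)
Sw = np.cov(F[y == 0].T) + np.cov(F[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
thr = w @ (m0 + m1) / 2
acc = np.mean((F @ w > thr) == y)
print(f"accuracy: {acc:.2f}")
```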


Principal Discriminant Variate (PDV) Method for Classification of Multicollinear Data: Application to Diagnosis of Mastitic Cows Using Near-Infrared Spectra of Plasma Samples

  • Jiang, Jian-Hui;Tsenkova, Roumiana;Yu, Ru-Qin;Ozaki, Yukihiro
    • 한국근적외분광분석학회:학술대회논문집 / 한국근적외분광분석학회 2001년도 NIR-2001 / pp.1244-1244 / 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables; one can optimize stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects, and is thus exposed to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability, such that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should strike a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations for the PDV method are suggested, and an effective computing procedure is proposed.
Near-infrared (NIR) spectra of blood plasma samples from mastitic and healthy cows have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability. The NIR spectra of blood plasma samples from mastitic and healthy cows are clearly discriminated by the PDV method. Moreover, the proposed method provides superior performance to PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences, thereby providing a useful means for spectroscopy-based clinical applications.
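
One simple way to illustrate the separability-versus-stability trade-off the abstract describes, though not the PDV formulation itself, is to restrict Fisher LDA to a leading principal subspace of multicollinear data, which stabilises the scatter estimate:

```python
import numpy as np

# Sketch under stated assumptions: synthetic "spectra" with rank-3
# structure plus tiny noise stand in for NIR measurements; the class
# offset and dimensions are invented.
rng = np.random.default_rng(3)
n, p, k = 40, 100, 5                      # few samples, many collinear variables
base = rng.normal(size=(n, 3)) @ rng.normal(size=(3, p))   # rank-3 structure
X = base + 0.01 * rng.normal(size=(n, p)) # nearly multicollinear data
y = np.array([0] * 20 + [1] * 20)
X[y == 1] += 1.0                          # small class offset on every variable

Xc = X - X.mean(axis=0)
# PCA: project onto the k leading right singular vectors (principal subspace)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                         # scores in the high-spread subspace

# Fisher LDA in the reduced space (well-conditioned scatter matrix)
m0, m1 = T[y == 0].mean(0), T[y == 1].mean(0)
Sw = np.cov(T[y == 0].T) + np.cov(T[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
thr = w @ (m0 + m1) / 2
acc = np.mean((T @ w > thr) == y)
print(f"accuracy: {acc:.2f}")
```

Fitting LDA directly in the 100-dimensional space would require inverting a singular within-class scatter matrix, which is exactly the instability the PDV method is designed to avoid.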


PRINCIPAL DISCRIMINANT VARIATE (PDV) METHOD FOR CLASSIFICATION OF MULTICOLLINEAR DATA WITH APPLICATION TO NEAR-INFRARED SPECTRA OF COW PLASMA SAMPLES

  • Jiang, Jian-Hui;Yuqing Wu;Yu, Ru-Qin;Yukihiro Ozaki
    • 한국근적외분광분석학회:학술대회논문집 / 한국근적외분광분석학회 2001년도 NIR-2001 / pp.1042-1042 / 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables; one can optimize stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects, and is thus exposed to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability, such that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should strike a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations for the PDV method are suggested, and an effective computing procedure is proposed.
Near-infrared (NIR) spectra of blood plasma samples from daily monitoring of two Japanese cows have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability. The NIR spectra of blood plasma samples from the two cows are clearly discriminated by the PDV method. Moreover, the proposed method provides superior performance to PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences.


Classification of pathological and normal voice based on dimension reduction of feature vectors

  • 이지연;정상배;최홍식;한민수
    • 대한음성학회:학술대회논문집 / 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집 / pp.123-126 / 2007
  • This paper suggests a method to improve the performance of pathological/normal voice classification. The effectiveness of mel-frequency filter bank energies is analyzed using the Fisher discriminant ratio (FDR), and mel-frequency cepstral coefficients (MFCCs) and feature vectors obtained by a linear discriminant analysis (LDA) transformation of the filter bank energies (FBE) are implemented. This paper shows that the FBE-LDA-based GMM is a more discriminative method for pathological/normal voice classification than the MFCC-based GMM.
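
The Fisher discriminant ratio used to assess individual filter-bank energies can be sketched as follows; the energies are simulated, and band 2 is made discriminative on purpose:

```python
import numpy as np

# Per-feature FDR sketch: how well does each (hypothetical) mel filter-bank
# energy separate the two voice classes?
rng = np.random.default_rng(4)
n_bands = 8
normal = rng.normal(0.0, 1.0, size=(100, n_bands))
pathological = rng.normal(0.0, 1.0, size=(100, n_bands))
pathological[:, 2] += 2.0                 # invented discriminative band

def fdr(a, b):
    """Per-feature FDR: (mu1 - mu2)^2 / (var1 + var2)."""
    return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0))

scores = fdr(normal, pathological)
best_band = int(np.argmax(scores))
print("most discriminative band:", best_band)
```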


A Spatial Regularization of LDA for Face Recognition

  • Park, Lae-Jeong
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 10, No. 2 / pp.95-100 / 2010
  • This paper proposes a new spatial regularization of Fisher linear discriminant analysis (LDA) to reduce the overfitting due to the small sample size (SSS) problem in face recognition. Many regularized LDAs have been proposed to alleviate the overfitting by regularizing an estimate of the within-class scatter matrix. Spatial regularization methods have been suggested that make the discriminant vectors spatially smooth, leading to mitigation of the overfitting. As a generalized version of spatially regularized LDA, the proposed regularized LDA exploits the non-uniformity of spatial correlation structures in face images when adding a spatial smoothness constraint to the LDA framework. The region-dependent spatial regularization is advantageous for capturing the non-flat spatial correlation structure within a face image as well as for obtaining a spatially smooth LDA projection. Experimental results on public face databases such as ORL and CMU PIE show that the proposed regularized LDA performs well, especially when the number of training images per individual is quite small, compared with other regularized LDAs.
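
The smoothness idea can be illustrated in one dimension by adding a graph-Laplacian penalty to the within-class scatter; the paper's method is 2-D and region-dependent, so this is only a schematic sketch on synthetic data:

```python
import numpy as np

# Spatially regularized discriminant: w = (Sw + lam * L)^-1 (m1 - m0),
# where L penalises differences between neighbouring "pixels".
p = 20
# Laplacian of a 1-D pixel chain
L = 2 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)

rng = np.random.default_rng(5)
X0 = rng.normal(0, 1, (15, p))
X1 = rng.normal(0, 1, (15, p))
X1[:, 8:12] += 1.0                      # spatially contiguous class difference

m0, m1 = X0.mean(0), X1.mean(0)
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)

w_plain = np.linalg.solve(Sw + 1e-6 * np.eye(p), m1 - m0)
w_smooth = np.linalg.solve(Sw + 100.0 * L, m1 - m0)
w_plain /= np.linalg.norm(w_plain)      # compare at unit norm
w_smooth /= np.linalg.norm(w_smooth)

def roughness(w):
    """Sum of squared neighbour differences (plus boundary terms): w' L w."""
    return float(w @ L @ w)

# The regularised discriminant vector should be markedly smoother.
print(roughness(w_plain), roughness(w_smooth))
```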

Hierarchical Gabor Feature and Bayesian Network for Handwritten Digit Recognition

  • 성재모;방승양
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 31, No. 1 / pp.1-7 / 2004
  • For handwritten digit recognition, this paper proposes a method for extracting structured features that hierarchically represent information at different levels, together with a Bayesian network that classifies using the dependencies among those features. To extract the hierarchical features, Gabor filters are defined level by level, and optimal Gabor filters are selected using the FLD (Fisher Linear Discriminant) measure. Hierarchical Gabor features are extracted with the selected filters, with lower levels representing more local information. To improve classification performance, a Bayesian network is constructed that models the hierarchical dependencies among the Gabor features. Applied to handwritten digit recognition alongside naive Bayesian, k-nearest-neighbor, and neural network classifiers, the proposed method demonstrates the effectiveness of the hierarchical Gabor features and shows that a Bayesian network exploiting hierarchical dependencies can improve classification performance.
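
A multi-scale Gabor filter bank of the kind defined level by level can be sketched as follows; the sizes, wavelengths, and orientations are illustrative, not the paper's:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# One set of filters per level: the wavelength halves as the level gets
# finer, so coarse levels respond to global structure and fine levels to
# local structure.
bank = {level: [gabor_kernel(15, 16 / 2 ** level, t, 4.0)
                for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
        for level in range(3)}
print(len(bank), len(bank[0]), bank[0][0].shape)
```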

고속철도 열차지연 유형의 구분지표 및 기준 (Types of Train Delay of High-Speed Rail : Indicators and Criteria for Classification)

  • 김한수;강중혁;배영규
    • 한국경영과학회지 / Vol. 38, No. 3 / pp.37-50 / 2013
  • The purpose of this study is to determine the indicators and the criteria for classifying types of train delays on high-speed rail in South Korea. Train delays are divided into chronic delays and knock-on delays. Indicators were selected on the basis of relevance, reliability, and comparability: the rate of arrival delays over five minutes, the median arrival delay of preceding and following trains, the rate of knock-on delays over five minutes, the correlation of delays between preceding and following trains at intermediate and final stations, average train headway, average number of passengers per train, and average seat usage. Delay types were separated using Ward's hierarchical cluster analysis, and the classification criteria were derived with Fisher's linear discriminant. The analysis of the situational characteristics of train delays is as follows: if the train headway at the final station is short, the probability of chronic delay is high; if the planned running times are tight, the seriousness of chronic delay is high. The main causes of train delays are short train headways, tightly planned running times, delays of the preceding train, and an excessive number of passengers per train.
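
The two analysis steps, Ward's agglomerative clustering followed by a Fisher linear discriminant between the resulting delay types, can be sketched on invented indicator values; none of the numbers below come from the study:

```python
import numpy as np

# Hypothetical two-indicator data: (delay rate, median arrival delay in min).
rng = np.random.default_rng(6)
chronic = rng.normal([0.4, 8.0], [0.05, 1.0], size=(10, 2))
knock_on = rng.normal([0.1, 2.0], [0.05, 1.0], size=(10, 2))
X = np.vstack([chronic, knock_on])

# Ward's method: repeatedly merge the pair of clusters whose union gives
# the smallest increase in within-cluster sum of squares.
clusters = [[i] for i in range(len(X))]

def ward_cost(a, b):
    ma, mb = X[a].mean(0), X[b].mean(0)
    return len(a) * len(b) / (len(a) + len(b)) * np.sum((ma - mb) ** 2)

while len(clusters) > 2:
    pairs = [(i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))]
    i, j = min(pairs, key=lambda p: ward_cost(clusters[p[0]], clusters[p[1]]))
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]

# Fisher's linear discriminant between the two clusters found: an explicit
# classification criterion for new observations.
A, B = X[clusters[0]], X[clusters[1]]
Sw = np.cov(A.T) * (len(A) - 1) + np.cov(B.T) * (len(B) - 1)
w = np.linalg.solve(Sw, A.mean(0) - B.mean(0))
thr = w @ (A.mean(0) + B.mean(0)) / 2
labels = (X @ w > thr).astype(int)       # 1 = same side as cluster A
print(labels)
```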