• Title/Summary/Keyword: Fisher Linear Discriminant


A Study on the Diagnosis of Cutting Tool States Using Cutting Conditions and Cutting Force Parameters (I) - Signal Processing and Feature Extraction - (절삭조건과 절삭력 파라메타를 이용한 공구상태 진단에 관한 연구(I) - 신호처리 및 특징추출 -)

  • Cheong, C.Y.;Yu, K.H.;Suh, N.S.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.14 no.10
    • /
    • pp.135-140
    • /
    • 1997
  • The detection of cutting tool states in machining is important for automation. Because the information on cutting tool states in the metal cutting process is uncertain, industry needs a system that can detect cutting tool states in real time and control the feed motion. Cutting signal features must be sifted before classification. In this paper, Fisher's linear discriminant function was successfully applied to pattern recognition of cutting tool states. Cutting conditions and cutting force parameters have been shown to be sensitive to tool states, so they can be used as features for tool state detection.

  • PDF
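
Several of the results above apply Fisher's linear discriminant to two-class problems such as tool-state detection. As background, here is a minimal sketch of the classic two-class Fisher discriminant on synthetic data (not the authors' implementation): the projection direction is w = Sw⁻¹(m₁ − m₂), where Sw is the within-class scatter matrix.

```python
import numpy as np

def fisher_ld(X1, X2):
    """Two-class Fisher linear discriminant: w = Sw^-1 (m1 - m2)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of the two per-class scatter matrices
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(Sw, m1 - m2)
    # Decision threshold at the midpoint of the projected class means
    c = 0.5 * w @ (m1 + m2)
    return w, c

# Synthetic two-class data standing in for "tool state" feature vectors
rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X2 = rng.normal([3.0, 3.0], 0.5, size=(50, 2))
w, c = fisher_ld(X1, X2)
# Class 1 projects above the threshold, class 2 below
acc = np.mean(np.concatenate([X1 @ w > c, X2 @ w < c]))
```

A new sample x is then assigned to class 1 when x·w > c; the threshold choice at the midpoint assumes roughly balanced, similarly spread classes.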

An Improved method of Two Stage Linear Discriminant Analysis

  • Chen, Yarui;Tao, Xin;Xiong, Congcong;Yang, Jucheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1243-1263
    • /
    • 2018
  • Two-stage linear discriminant analysis (TSLDA) is a feature extraction technique for the small sample size problem in image recognition. TSLDA retains all subspace information of the between-class and within-class scatter. However, the feature information in the four subspaces may not be entirely beneficial for classification, and the regularization procedure for eliminating singular matrices in TSLDA has high time complexity. To address these drawbacks, this paper proposes an improved two-stage linear discriminant analysis (Improved TSLDA). The Improved TSLDA uses a selection and compression method to extract superior feature information from the four subspaces to constitute an optimal projection space, where it defines a single Fisher criterion to measure the importance of each feature vector. Meanwhile, the Improved TSLDA also applies an approximation matrix method to eliminate the singular matrices and reduce its time complexity. This paper presents comparative experiments on five face databases and one handwritten digit database to validate the effectiveness of the Improved TSLDA.
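
The "single Fisher criterion" used to score individual feature vectors can be illustrated with the usual ratio J(w) = (wᵀSb·w)/(wᵀSw·w). The sketch below, on synthetic data, ranks candidate directions by this criterion; it illustrates the criterion only, not the Improved TSLDA selection-and-compression procedure itself.

```python
import numpy as np

def fisher_criterion(w, Sb, Sw):
    """Fisher criterion J(w) = (w' Sb w) / (w' Sw w) for a single vector."""
    return (w @ Sb @ w) / (w @ Sw @ w)

# Two synthetic classes separated only along the first axis
rng = np.random.default_rng(3)
X1 = rng.normal([4.0, 0.0, 0.0], 1.0, size=(30, 3))
X2 = rng.normal([0.0, 0.0, 0.0], 1.0, size=(30, 3))
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sb = np.outer(m1 - m2, m1 - m2)                         # between-class scatter
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)  # within-class scatter
scores = [fisher_criterion(e, Sb, Sw) for e in np.eye(3)]
best = int(np.argmax(scores))  # the axis carrying the class separation
```

Scoring each candidate vector independently in this way is what allows a method to keep only directions whose criterion value is high and discard the rest.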

Wavelet-Based FLD for Face Recognition (웨이블렛에 기반한 FLD를 사용한 얼굴인식)

  • 이완수;이형지;정재호
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.435-438
    • /
    • 2000
  • This paper proposes a wavelet-based FLD (Fisher Linear Discriminant) method, addressing both the speed and accuracy of face recognition. An image with 128×128 resolution is decomposed by the wavelet transform into 16×16 subimages, and the two subimages corresponding to the low band and the middle band are used for training and recognition. Experimental results show that the proposed method maintains the recognition rate of the conventional FLD method while running faster; our experiments show a speedup of about six times.

  • PDF

Real-time BCI for imagery movement and Classification for uncued EEG signal (상상 움직임에 대한 실시간 뇌전도 뇌 컴퓨터 상호작용, 큐 없는 상상 움직임에서의 뇌 신호 분류)

  • Kang, Sung-Wook;Jun, Sung-Chan
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.2083-2085
    • /
    • 2009
  • A Brain Computer Interface (BCI) is a communication pathway between devices (computers) and the human brain. It processes brain signals in real time and discriminates information about what the brain is doing. In this work, we develop an EEG BCI system using common spatial pattern (CSP) feature extraction and a classifier based on Fisher linear discriminant analysis (FLDA). Two-class EEG motor imagery datasets, both cued and uncued, are tested to verify its feasibility.

  • PDF

Principal Discriminant Variate (PDV) Method for Classification of Multicollinear Data: Application to Diagnosis of Mastitic Cows Using Near-Infrared Spectra of Plasma Samples

  • Jiang, Jian-Hui;Tsenkova, Roumiana;Yu, Ru-Qin;Ozaki, Yukihiro
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1244-1244
    • /
    • 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables; one can optimize stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects, exposing them to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability such that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should find a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed.
Near-infrared (NIR) spectra of blood plasma samples from mastitic and healthy cows have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA) and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability, and the NIR spectra of blood plasma samples from mastitic and healthy cows are clearly discriminated by the PDV method. Moreover, the proposed method outperforms PCA, DPLS, SIMCA and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences, thereby providing a useful means for spectroscopy-based clinical applications.

  • PDF
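
The stability issue described in this abstract, discriminant directions falling into low-spread, noise-dominated subspaces when the variables are multicollinear, is commonly addressed by regularizing the within-class scatter. The sketch below shows a generic shrinkage-regularized Fisher direction on synthetic collinear data; it illustrates the general idea only, not the PDV method itself.

```python
import numpy as np

def regularized_fisher(X1, X2, alpha=0.1):
    """Fisher direction with a shrinkage-regularized within-class scatter.

    Shrinking Sw toward a scaled identity damps the low-spread directions
    that multicollinear variables (e.g. neighboring NIR wavelengths) would
    otherwise inflate. This is a generic regularization, not the PDV method.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    d = Sw.shape[0]
    Sw_reg = (1 - alpha) * Sw + alpha * (np.trace(Sw) / d) * np.eye(d)
    return np.linalg.solve(Sw_reg, m1 - m2)

# Five nearly identical "wavelength" columns -> strongly multicollinear data
rng = np.random.default_rng(1)
base1 = rng.normal(0.0, 1.0, size=(40, 1))
base2 = rng.normal(1.0, 1.0, size=(40, 1))
X1 = np.hstack([base1 + 1e-6 * rng.normal(size=(40, 1)) for _ in range(5)])
X2 = np.hstack([base2 + 1e-6 * rng.normal(size=(40, 1)) for _ in range(5)])
w = regularized_fisher(X1, X2)
```

Here the raw Sw is numerically near-singular, so the unregularized solve would be dominated by the tiny-noise directions; the shrinkage term keeps the solution well conditioned.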

PRINCIPAL DISCRIMINANT VARIATE (PDV) METHOD FOR CLASSIFICATION OF MULTICOLLINEAR DATA WITH APPLICATION TO NEAR-INFRARED SPECTRA OF COW PLASMA SAMPLES

  • Jiang, Jian-Hui;Yuqing Wu;Yu, Ru-Qin;Yukihiro Ozaki
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1042-1042
    • /
    • 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables; one can optimize stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects, exposing them to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability such that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should find a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes, but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed.
Near-infrared (NIR) spectra of blood plasma samples from daily monitoring of two Japanese cows have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA) and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability, and the NIR spectra of blood plasma samples from the two cows are clearly discriminated by the PDV method. Moreover, the proposed method outperforms PCA, DPLS, SIMCA and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences.

  • PDF

Classification of pathological and normal voice based on dimension reduction of feature vectors (피처벡터 축소방법에 기반한 장애음성 분류)

  • Lee, Ji-Yeoun;Jeong, Sang-Bae;Choi, Hong-Shik;Hahn, Min-Soo
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.123-126
    • /
    • 2007
  • This paper suggests a method to improve the performance of pathological/normal voice classification. The effectiveness of mel frequency-based filter bank energies is analyzed using the Fisher discriminant ratio (FDR), and mel frequency cepstrum coefficients (MFCCs) and feature vectors obtained through the linear discriminant analysis (LDA) transformation of the filter bank energies (FBE) are implemented. This paper shows that the FBE LDA-based GMM is a more discriminative method for pathological/normal voice classification than the MFCC-based GMM.

  • PDF
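
The Fisher discriminant ratio used in this abstract to assess individual filter bank energies is typically computed per feature as (m₁ − m₂)² / (σ₁² + σ₂²). A small sketch on synthetic data (the column names are illustrative stand-ins, not the authors' voice features):

```python
import numpy as np

def fdr(Xa, Xb):
    """Per-feature Fisher discriminant ratio: (ma - mb)^2 / (va + vb)."""
    ma, mb = Xa.mean(axis=0), Xb.mean(axis=0)
    va, vb = Xa.var(axis=0), Xb.var(axis=0)
    return (ma - mb) ** 2 / (va + vb)

# Synthetic stand-ins: column 0 separates the classes, column 1 is pure noise
rng = np.random.default_rng(2)
X_path = np.column_stack([rng.normal(2.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])
X_norm = np.column_stack([rng.normal(0.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])
scores = fdr(X_path, X_norm)
best = int(np.argmax(scores))  # the discriminative column ranks first
```

Features with a high FDR separate the two classes well relative to their spread, so ranking by this score is a cheap way to shortlist features before building a classifier.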

A Spatial Regularization of LDA for Face Recognition

  • Park, Lae-Jeong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.10 no.2
    • /
    • pp.95-100
    • /
    • 2010
  • This paper proposes a new spatial regularization of Fisher linear discriminant analysis (LDA) to reduce the overfitting due to the small sample size (SSS) problem in face recognition. Many regularized LDAs have been proposed to alleviate the overfitting by regularizing an estimate of the within-class scatter matrix. Spatial regularization methods have been suggested that make the discriminant vectors spatially smooth, leading to mitigation of the overfitting. As a generalized version of the spatially regularized LDA, the proposed regularized LDA utilizes the non-uniformity of spatial correlation structures in face images when adding a spatial smoothness constraint to an LDA framework. The region-dependent spatial regularization is advantageous for capturing the non-flat spatial correlation structure within a face image as well as obtaining a spatially smooth projection of LDA. Experimental results on public face databases such as ORL and CMU PIE show that the proposed regularized LDA performs well, especially when the number of training images per individual is quite small, compared with other regularized LDAs.

Hierarchical Gabor Feature and Bayesian Network for Handwritten Digit Recognition (계층적인 가버 특징들과 베이지안 망을 이용한 필기체 숫자인식)

  • 성재모;방승양
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.1
    • /
    • pp.1-7
    • /
    • 2004
  • For handwritten digit recognition, this paper proposes a hierarchical Gabor feature extraction method and a Bayesian network for the extracted features. The proposed Gabor features can represent information at different levels of the hierarchy, and the Bayesian network is constructed to represent hierarchically structured dependencies among these Gabor features. To extract such features, we define Gabor filters level by level and choose optimal Gabor filters using Fisher's linear discriminant measure. Hierarchical Gabor features are extracted by the optimal Gabor filters and represent more localized information at the lower levels. The proposed methods were successfully applied to handwritten digit recognition with the well-known naive Bayesian classifier, k-nearest neighbor classifier, and backpropagation neural network, and showed good performance.

Types of Train Delay of High-Speed Rail : Indicators and Criteria for Classification (고속철도 열차지연 유형의 구분지표 및 기준)

  • Kim, Hansoo;Kang, Joonghyuk;Bae, Yeong-Gyu
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.38 no.3
    • /
    • pp.37-50
    • /
    • 2013
  • The purpose of this study is to determine the indicators and criteria for classifying types of train delays on high-speed rail in South Korea. Types of train delays were divided into chronic delays and knock-on delays. The indicators, selected for relevance, reliability, and comparability, were the rate of arrival delays over five minutes, the median of arrival delays of preceding and following trains, the rate of knock-on delays over five minutes, the correlation of delay between preceding and following trains at intermediate and last stations, average train headway, average number of passengers per train, and average seat usage. Types of train delays were separated using Ward's hierarchical cluster analysis, and the criteria for classification were presented by Fisher's linear discriminant. The analysis of the situational characteristics of train delays is as follows: if the train headway at the last station is short, the probability of chronic delay is high; if the planned running time of a train is short, the seriousness of chronic delay is high. The important causes of train delays are short train headway, short planned running times, delays of the preceding train, and an excessive number of passengers per train.