Title/Summary/Keyword: feature vector selection

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea / v.24 no.4E / pp.144-151 / 2005
  • We evaluate the performance of emotion recognition from speech signals when an ordinary speaker talks to an entertainment robot. For each frame of a speech utterance, we extract frame-based features: pitch, energy, formants, band energies, mel-frequency cepstral coefficients (MFCCs), and the velocity/acceleration of pitch and MFCCs. For the discriminative classifier, a fixed-length utterance-based feature vector is computed from statistics of the frame-based features. Using a speaker-independent database, we evaluate two promising classifiers: the support vector machine (SVM) and the hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that this accuracy is significant when compared to the performance of foreign human listeners.
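
A minimal sketch of the utterance-level statistics step described in this abstract, assuming the frame-based features (pitch, energy, MFCCs, deltas) have already been extracted; the choice of statistics, the random placeholder data, and the SVM settings are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_vector(frame_features):
    """Collapse a (n_frames, n_features) matrix of frame-based features
    (e.g., pitch, energy, MFCCs, deltas) into one fixed-length vector
    by stacking per-feature statistics."""
    return np.concatenate([
        frame_features.mean(axis=0),
        frame_features.std(axis=0),
        frame_features.min(axis=0),
        frame_features.max(axis=0),
    ])

# Hypothetical data: one frame-feature matrix per utterance, plus emotion labels.
rng = np.random.default_rng(0)
utterances = [rng.normal(size=(rng.integers(80, 200), 16)) for _ in range(40)]
labels = rng.integers(0, 5, size=40)          # angry/bored/happy/neutral/sad

X = np.vstack([utterance_vector(u) for u in utterances])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print(clf.score(X, labels))
```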

Combining genetic algorithms and support vector machines for bankruptcy prediction

  • Min, Sung-Hwan;Lee, Ju-Min;Han, In-Goo
    • Proceedings of the Korea Intelligent Information System Society Conference / 2004.11a / pp.179-188 / 2004
  • Bankruptcy prediction is an important and widely studied topic, since it can have a significant impact on bank lending decisions and profitability. Recently, the support vector machine (SVM) has been applied to the problem of bankruptcy prediction. SVM-based methods have been compared with other methods, such as neural networks and logistic regression, and have shown good results. The genetic algorithm (GA) has increasingly been applied in conjunction with other AI techniques, such as neural networks and case-based reasoning (CBR). However, few studies have dealt with the integration of GA and SVM, even though there is great potential for useful applications in this area. This study proposes methods for improving SVM performance in two aspects: feature subset selection and parameter optimization. A GA is used to optimize the feature subset and the parameters of the SVM simultaneously for bankruptcy prediction.
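
The GA-wrapped SVM idea described in this abstract can be sketched roughly as follows: a chromosome concatenates a binary feature mask with two real-valued genes for C and gamma, and fitness is cross-validated accuracy. The population size, operators, ranges, and synthetic data are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=200, n_features=20, n_informative=6, random_state=0)
n_feat = X.shape[1]

def decode(chrom):
    """Chromosome = [feature mask bits | C gene | gamma gene] (genes in [0, 1))."""
    mask = chrom[:n_feat].astype(bool)
    C = 2.0 ** (chrom[n_feat] * 10 - 5)           # C in [2^-5, 2^5)
    gamma = 2.0 ** (chrom[n_feat + 1] * 10 - 10)  # gamma in [2^-10, 2^0)
    return mask, C, gamma

def fitness(chrom):
    mask, C, gamma = decode(chrom)
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()

# Initial population: random masks plus random parameter genes.
pop = np.column_stack([rng.integers(0, 2, (20, n_feat)), rng.random((20, 2))])
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-8:]]             # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, 8, 2)]
        cut = rng.integers(1, n_feat + 1)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        flip = rng.random(n_feat) < 0.05               # bit-flip mutation on the mask only
        child[:n_feat] = np.where(flip, 1 - child[:n_feat], child[:n_feat])
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
mask, C, gamma = decode(best)
print("selected features:", mask.sum(), "C=%.3g gamma=%.3g" % (C, gamma))
```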

A Novel Image Classification Method for Content-based Image Retrieval via a Hybrid Genetic Algorithm and Support Vector Machine Approach

  • Seo, Kwang-Kyu
    • Journal of the Semiconductor & Display Technology / v.10 no.3 / pp.75-81 / 2011
  • This paper presents a novel image classification method based on a hybrid genetic algorithm (GA) and support vector machine (SVM) approach, which can significantly improve classification performance for content-based image retrieval (CBIR). Although SVM has been widely applied to CBIR, it has problems such as kernel parameter setting and feature subset selection, both of which affect classification accuracy during learning. This study aims to simultaneously optimize the SVM parameters and the feature subset, without degrading the classification accuracy of the SVM, by using a GA for CBIR. With the hybrid GA and SVM model, more images in the database can be classified effectively. Experiments were carried out on a large database of images, and the results show that the classification accuracy of a conventional SVM can be improved significantly by the proposed model. We also found that the proposed model outperformed all the other models tested, including neural network and typical SVM models.

Genetic Algorithm for Image Feature Selection (영상 특징 선택을 위한 유전 알고리즘)

  • Shin Youns-Geun;Park Sang-Sung;Jang Dong-Sik
    • Proceedings of the Korean Information Science Society Conference / 2006.06b / pp.193-195 / 2006
  • As the amount of multimedia information increases sharply, image retrieval requires methods that can analyze image data quickly and accurately. Because each image contains a large amount of information, there is a trade-off between retrieval accuracy and speed. To address this problem, we propose a feature vector extraction process that uses a genetic algorithm to build a fast and accurate image clustering system for the retrieval of large image databases. After color and texture features are extracted, the representative feature vector is selected from among these features by using the genetic algorithm.
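
A rough illustration of building the pool of candidate color and texture features that a GA-based selector like the one above would then search over; the specific descriptors (per-channel RGB histograms and simple gradient-based gray-level statistics) and the random placeholder image are assumptions, not the paper's descriptors.

```python
import numpy as np

def color_texture_features(image, bins=8):
    """Build a candidate feature vector from an RGB image array (H, W, 3):
    per-channel color histograms plus simple gray-level texture statistics."""
    feats = []
    for c in range(3):                                   # color: per-channel histograms
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)
        feats.append(hist)
    gray = image.mean(axis=2)
    gx, gy = np.gradient(gray)                           # texture: gradient-based statistics
    feats.append(np.array([gray.mean(), gray.std(),
                           np.abs(gx).mean(), np.abs(gy).mean()]))
    return np.concatenate(feats)

# Hypothetical usage with a random "image"; a GA would then search for the
# subset of these features that best preserves clustering/retrieval quality.
img = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3))
print(color_texture_features(img).shape)   # (3*8 + 4,) = (28,)
```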

Study of Machine-Learning Classifier and Feature Set Selection for Intent Classification of Korean Tweets about Food Safety

  • Yeom, Ha-Neul;Hwang, Myunggwon;Hwang, Mi-Nyeong;Jung, Hanmin
    • Journal of Information Science Theory and Practice / v.2 no.3 / pp.29-39 / 2014
  • In recent years, several studies have proposed making use of the Twitter micro-blogging service to track various trends in online media and discussion. In this study, we specifically examine the use of Twitter to track discussions of food safety in the Korean language. Given the irregularity of keyword use in most tweets, we focus on selecting a machine-learning classifier and feature set to classify the collected tweets. We build classifier models using Naive Bayes, Multinomial Naive Bayes, support vector machine, and decision tree algorithms, all of which show good performance. To select an optimum feature set, we construct a basic feature set as a standard for performance comparison, against which further test feature sets are evaluated. Experiments show that precision and F-measure are best when a Multinomial Naive Bayes classifier is used with a test feature set defined by extracting the Substantive, Predicate, Modifier, and Interjection parts of speech.
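
A minimal sketch of the best configuration reported above (Multinomial Naive Bayes over tokens restricted to content-bearing parts of speech). The tiny pre-tagged tweets, the English tag names, and the labels are invented placeholders; the paper's Korean morphological analysis pipeline is not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical pre-tagged tweets: lists of (token, POS) pairs with intent labels.
KEEP_TAGS = {"Substantive", "Predicate", "Modifier", "Interjection"}
tagged_tweets = [
    ([("recall", "Substantive"), ("milk", "Substantive"), ("unsafe", "Predicate")], "warning"),
    ([("restaurant", "Substantive"), ("clean", "Modifier"), ("yum", "Interjection")], "opinion"),
    ([("news", "Substantive"), ("report", "Predicate"), ("today", "Modifier")], "information"),
]

def pos_filtered_text(tokens):
    """Keep only tokens whose POS tag is in the selected feature set."""
    return " ".join(tok for tok, tag in tokens if tag in KEEP_TAGS)

docs = [pos_filtered_text(toks) for toks, _ in tagged_tweets]
labels = [lab for _, lab in tagged_tweets]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(docs), labels)
print(clf.predict(vec.transform(["milk recall unsafe"])))
```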

Protein Motif Extraction via Feature Interval Selection

  • Sohn, In-Suk;Hwang, Chang-Ha;Ko, Jun-Su;Chiu, David;Hong, Dug-Hun
    • Journal of the Korean Data and Information Science Society / v.17 no.4 / pp.1279-1287 / 2006
  • The purpose of this paper is to present a new algorithm for extracting the consensus pattern, or motif, from sequences belonging to the same family. Two methods are considered for feature interval partitioning: equal-probability and equal-width interval partitioning. C2H2 zinc finger protein and epidermal growth factor protein sequences are used to demonstrate the effectiveness of the proposed algorithm for motif extraction. For the two protein families, the equal-width interval partitioning method performs better than the equal-probability interval partitioning method.
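
The two partitioning schemes compared above can be expressed directly in NumPy; the number of intervals and the synthetic feature values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(size=1000)      # stand-in for a numeric feature derived from sequences
k = 5                               # number of intervals

# Equal-width partitioning: k intervals of identical length over the observed range.
width_edges = np.linspace(values.min(), values.max(), k + 1)

# Equal-probability (equal-frequency) partitioning: k intervals with roughly equal counts.
prob_edges = np.quantile(values, np.linspace(0, 1, k + 1))

width_bins = np.digitize(values, width_edges[1:-1])
prob_bins = np.digitize(values, prob_edges[1:-1])
print(np.bincount(width_bins), np.bincount(prob_bins))
```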

Analyzing Factors Contributing to Research Performance using Backpropagation Neural Network and Support Vector Machine

  • Ermatita, Ermatita;Sanmorino, Ahmad;Samsuryadi, Samsuryadi;Rini, Dian Palupi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.153-172 / 2022
  • In this study, the authors analyze factors contributing to research performance using a backpropagation neural network and a support vector machine. The analysis of factors contributing to lecturer research performance starts with defining the features. The next stage is to collect a dataset based on the defined features and transform the raw dataset into data ready to be processed. After the data are transformed, features are selected, with research performance as the target feature. Feature selection uses the chi-square statistic (U) and the Pearson correlation coefficient (CM), and yields eight factors contributing to lecturer research performance: Scientific Papers (U: 154.38, CM: 0.79), Number of Citations (U: 95.86, CM: 0.70), Conference (U: 68.67, CM: 0.57), Grade (U: 10.13, CM: 0.29), Grant (U: 35.40, CM: 0.36), IPR (U: 19.81, CM: 0.27), Qualification (U: 2.57, CM: 0.26), and Grant Awardee (U: 2.66, CM: 0.26). To analyze the factors, two data mining classifiers were used: a backpropagation neural network (BPNN) and a support vector machine (SVM). The classifiers achieved accuracy scores of 95 percent for the BPNN and 92 percent for the SVM. The essence of this analysis is not to find the highest accuracy score, but to determine whether the factors can pass the test phase with the expected results. The findings reveal which factors have a significant impact on research performance and which do not.
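
The feature-scoring step above (chi-square statistic U and Pearson correlation CM against the performance target) might be sketched as follows; the synthetic count data stands in for the real lecturer dataset, and the feature indices are placeholders.

```python
import numpy as np
from sklearn.feature_selection import chi2

rng = np.random.default_rng(0)
# Hypothetical non-negative counts for features such as papers, citations, grants, ...
X = rng.integers(0, 50, size=(120, 8))
y = rng.integers(0, 2, size=120)         # research performance target (low/high)

U, _ = chi2(X, y)                        # chi-square score per feature
CM = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

for j in range(X.shape[1]):
    print(f"feature {j}: U={U[j]:.2f}  CM={CM[j]:.2f}")
```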

Study on Rub Vibration of Rotary Machine for Turbine Blade Diagnosis (터빈 블레이드 진단을 위한 회전기계 마찰 진동에 관한 연구)

  • Yu, Hyeon Tak;Ahn, Byung Hyun;Lee, Jong Myeong;Ha, Jeong Min;Choi, Byeong Keun
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.26 no.6_spc / pp.714-720 / 2016
  • Rubbing and misalignment are the most common faults that occur in rotating machinery, and they severely affect power plant availability. Blade rubbing in particular is hard to detect in the FFT spectrum of the vibration signal. In this paper, the feasibility of feature-based analysis of the vibration signal is confirmed under blade rubbing and shaft misalignment conditions, which are reproduced on a lab-scale rotor test device. Feature selection based on a genetic algorithm (GA) is applied to features extracted in the time domain, and the selected features are classified using a support vector machine (SVM), a machine learning algorithm. The results of GA-based feature selection are compared with those based on principal component analysis (PCA). According to the results, the feasibility of the feature-based analysis is confirmed; therefore, blade rubbing and shaft misalignment can be diagnosed from features of the vibration signal.
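
A small sketch of the time-domain feature extraction that precedes the GA selection and SVM classification described above; the particular statistics (RMS, crest factor, kurtosis, skewness, peak-to-peak) are common choices assumed here, and the random segments stand in for measured vibration signals.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC

def time_domain_features(signal):
    """Common time-domain statistics of one vibration signal segment."""
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([
        rms,
        np.max(np.abs(signal)) / rms,    # crest factor
        kurtosis(signal),
        skew(signal),
        np.ptp(signal),                  # peak-to-peak amplitude
    ])

# Hypothetical segments labelled normal / rubbing / misalignment.
rng = np.random.default_rng(0)
segments = rng.normal(size=(60, 2048))
labels = rng.integers(0, 3, size=60)
X = np.vstack([time_domain_features(s) for s in segments])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```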

Feature selection for text data via topic modeling (토픽 모형을 이용한 텍스트 데이터의 단어 선택)

  • Woosol, Jang;Ye Eun, Kim;Won, Son
    • The Korean Journal of Applied Statistics / v.35 no.6 / pp.739-754 / 2022
  • Text data usually consists of many variables, some of which are closely correlated. Such multi-collinearity often results in inefficient or inaccurate statistical analysis. For supervised learning, one can select features by examining the relationship between the target variables and the explanatory variables. For unsupervised learning, however, target variables are absent, so such a feature selection procedure cannot be used. In this study, we propose a word selection procedure that employs topic models to find latent topics. We substitute topics for the target variables and select terms that show high relevance for each topic. Applying the procedure to real data, we found that the proposed word selection procedure gives clear topic interpretation by removing high-frequency words prevalent across topics. In addition, we observed that, when the selected variables are fed to classifiers such as naïve Bayes classifiers and support vector machines, the proposed feature selection procedure gives results comparable to those obtained by using class label information.
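
The word-selection idea above (fit a topic model, then keep the terms most relevant to each latent topic as the reduced vocabulary) might be sketched with scikit-learn's LDA as below; the tiny corpus, number of topics, and per-topic cutoff are placeholders, not the paper's settings.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "stock market prices rise on strong earnings",
    "team wins the final match of the season",
    "central bank raises interest rates again",
    "star player injured before the big game",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
top_k = 5
selected = set()
for topic in lda.components_:                       # topic-word weight matrix
    selected.update(terms[i] for i in topic.argsort()[-top_k:])
print(sorted(selected))                             # reduced vocabulary for downstream classifiers
```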

Electricity Demand Forecasting based on Support Vector Regression (Support Vector Regression에 기반한 전력 수요 예측)

  • Lee, Hyoung-Ro;Shin, Hyun-Jung
    • IE interfaces / v.24 no.4 / pp.351-361 / 2011
  • Forecasting electricity demand is difficult because the forecast must adapt to abrupt weather changes along with radical shifts in regional and global climate. This has led to increasing attention to research on immediate and accurate forecasting models. Technically, such a model should require only a few input variables, all of which are easily obtainable, while its predictive performance remains comparable with competing models. To meet these ends, this paper presents an electricity demand forecasting model that uses the variable selection or extraction methods of data mining to choose only relevant input variables, and employs support vector regression for accurate prediction. It also proposes a novel performance measure for time-series prediction, the shift index, and describes the preprocessing procedure. A comparative evaluation of the proposed method against representative data mining models, including an auto-regression model, an artificial neural network model, and an ordinary support vector regression model, was carried out by forecasting monthly electricity demand from 2000 to 2008 based on data provided by the Korea Energy Economics Institute. Among the models tested, the proposed method showed more promising results than the others.
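
A minimal sketch, in the spirit of the model above, of support vector regression driven by a few lagged demand values; the synthetic monthly series, the 12-month lag structure, and the SVR hyperparameters are assumptions for illustration (the paper's shift index measure is not reproduced here).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic monthly demand with trend and yearly seasonality (stand-in for real data).
t = np.arange(120)
demand = (100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
          + np.random.default_rng(0).normal(0, 2, 120))

lags = 12                                              # use the previous 12 months as inputs
X = np.column_stack([demand[i:i + len(demand) - lags] for i in range(lags)])
y = demand[lags:]

split = 96                                             # simple train/test split in time order
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:split], y[:split])
print("test MAE:", np.mean(np.abs(model.predict(X[split:]) - y[split:])))
```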