• Title/Summary/Keyword: classifiers


Double-Bagging Ensemble Using WAVE

  • Kim, Ahhyoun;Kim, Minji;Kim, Hyunjoong
    • Communications for Statistical Applications and Methods / v.21 no.5 / pp.411-422 / 2014
  • A classification ensemble method aggregates different classifiers obtained from training data to classify new data points. Voting algorithms are typical tools for summarizing the outputs of the classifiers in an ensemble. WAVE, proposed by Kim et al. (2011), is a weight-adjusted voting algorithm that equips an ensemble of classifiers with an optimal weight vector. In this study, we applied the WAVE algorithm to the double-bagging method (Hothorn and Lausen, 2003) when constructing an ensemble, to see whether any significant improvement in performance can be achieved. The results showed that double-bagging using the WAVE algorithm performs better than other ensemble methods that employ plurality voting. In addition, double-bagging with the WAVE algorithm is comparable to the random forest ensemble method when the ensemble size is large.
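
To make the voting mechanics concrete, here is a minimal sketch of weighted majority voting over a bagged ensemble. It is not the WAVE optimal-weight derivation of Kim et al. (2011), and it omits the additional discriminant-analysis step of double-bagging; as a stand-in, each bagged tree's weight is simply its out-of-bag accuracy, and the iris data and depth-3 trees are arbitrary illustration choices.

```python
# Hedged sketch: weighted majority voting over a bagged tree ensemble.
# NOTE: weights are plain out-of-bag accuracies, a simple stand-in for the
# optimal WAVE weight vector; the double-bagging discriminant step is omitted.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_estimators, n = 25, len(X_tr)
trees, weights = [], []
for _ in range(n_estimators):
    idx = rng.integers(0, n, n)                    # bootstrap sample
    oob = np.setdiff1d(np.arange(n), idx)          # out-of-bag rows
    tree = DecisionTreeClassifier(max_depth=3).fit(X_tr[idx], y_tr[idx])
    trees.append(tree)
    weights.append(tree.score(X_tr[oob], y_tr[oob]) if len(oob) else 0.5)

weights = np.array(weights) / np.sum(weights)      # normalized weight vector

def weighted_vote(X_new):
    """Accumulate each tree's weight on its predicted class, then take argmax."""
    votes = np.zeros((len(X_new), len(np.unique(y))))
    for w, tree in zip(weights, trees):
        votes[np.arange(len(X_new)), tree.predict(X_new)] += w
    return votes.argmax(axis=1)

print("weighted-vote accuracy:", np.mean(weighted_vote(X_te) == y_te))
```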

RECOGNIZING SIX EMOTIONAL STATES USING SPEECH SIGNALS

  • Kang, Bong-Seok;Han, Chul-Hee;Youn, Dae-Hee;Lee, Chungyong
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.366-369 / 2000
  • This paper examines three algorithms for recognizing a speaker's emotion from speech signals. The target emotions are happiness, sadness, anger, fear, boredom, and the neutral state. MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) algorithms are used as the pattern matching techniques. In all cases, pitch and energy are used as the features. The feature vectors for MLB and NN are composed of pitch mean, pitch standard deviation, energy mean, energy standard deviation, etc. For HMM, vectors of delta pitch with delta-delta pitch and delta energy with delta-delta energy are used. We recorded a corpus of emotional speech data and performed a subjective evaluation of the data. The subjective recognition rate was 56% and was compared with the classifiers' recognition rates. The MLB, NN, and HMM classifiers achieved recognition rates of 68.9%, 69.3%, and 89.1%, respectively, for speaker-dependent, context-independent classification.

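As a rough illustration of the MLB/NN feature pipeline described above, the sketch below stacks utterance-level pitch and energy statistics into a four-dimensional vector and classifies it with a nearest-neighbor rule. The pitch and energy contours are synthetic placeholders (real ones would come from a pitch tracker and a frame-energy front end), and the class layout of the toy corpus is an assumption.

```python
# Sketch of the statistics-based feature vector used by the MLB/NN classifiers.
# Pitch and energy contours are synthetic placeholders for real front-end output.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

EMOTIONS = ["happiness", "sadness", "anger", "fear", "boredom", "neutral"]

def utterance_features(pitch, energy):
    """Mean and standard deviation of pitch and frame energy (4-dim vector)."""
    return np.array([np.mean(pitch), np.std(pitch), np.mean(energy), np.std(energy)])

# Toy corpus: (pitch contour, energy contour, label index) per utterance.
rng = np.random.default_rng(1)
corpus = []
for k in range(120):
    lab = k % len(EMOTIONS)
    corpus.append((rng.normal(160 + 15 * lab, 10, 100),   # synthetic pitch (Hz)
                   rng.normal(55 + 3 * lab, 2, 100),      # synthetic energy (dB)
                   lab))

X = np.stack([utterance_features(p, e) for p, e, _ in corpus])
y = np.array([lab for _, _, lab in corpus])

knn = KNeighborsClassifier(n_neighbors=1).fit(X[:90], y[:90])
print("toy nearest-neighbor accuracy:", knn.score(X[90:], y[90:]))
```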

Design of Polynomial Neural Network Classifier for Pattern Classification with Two Classes

  • Park, Byoung-Jun;Oh, Sung-Kwun;Kim, Hyun-Ki
    • Journal of Electrical Engineering and Technology / v.3 no.1 / pp.108-114 / 2008
  • Polynomial networks are known to have excellent properties as classifiers and as universal approximators to the optimal Bayes classifier. In this paper, the use of polynomial neural networks is proposed for efficient implementation of polynomial-based classifiers. The polynomial neural network is a trainable device consisting of a set of rules and three processes: assumption, effect, and fuzzy inference. The assumption process is driven by fuzzy c-means clustering, and the effect process deals with a polynomial function. A learning algorithm for the polynomial neural network is developed, and its performance is compared with that of previous studies.
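
The rule structure can be sketched as follows: soft cluster memberships act as the "assumption" part and gate per-rule polynomial discriminants (the "effect" part), whose outputs are combined by the memberships. This is an illustrative simplification rather than the authors' exact network or learning algorithm; the k-means centers standing in for fuzzy c-means, the quadratic polynomial order, and the fuzzifier value are all assumptions.

```python
# Illustrative fuzzy-rule polynomial classifier: soft cluster memberships
# (the "assumption" part) gate per-rule polynomial discriminants (the
# "effect" part); predictions are the membership-weighted sum of rule outputs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

X, y = make_classification(n_samples=400, n_features=4, random_state=0)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

n_rules, m = 3, 2.0                      # number of rules, fuzzifier (assumed)
centers = KMeans(n_clusters=n_rules, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def memberships(X):
    """FCM-style membership of every sample in every rule (rows sum to 1)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    u = d ** (-2.0 / (m - 1.0))
    return u / u.sum(axis=1, keepdims=True)

poly = PolynomialFeatures(degree=2).fit(X_tr)     # quadratic "effect" terms
U_tr = memberships(X_tr)
rules = [LogisticRegression(max_iter=1000)
         .fit(poly.transform(X_tr), y_tr, sample_weight=U_tr[:, r])
         for r in range(n_rules)]

def predict(X):
    U = memberships(X)
    # membership-weighted sum of each rule's class-1 probability
    p = sum(U[:, r] * rules[r].predict_proba(poly.transform(X))[:, 1]
            for r in range(n_rules))
    return (p >= 0.5).astype(int)

print("toy accuracy:", np.mean(predict(X_te) == y_te))
```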

Polynomial Fuzzy Radial Basis Function Neural Network Classifiers Realized with the Aid of Boundary Area Decision

  • Roh, Seok-Beom;Oh, Sung-Kwun
    • Journal of Electrical Engineering and Technology / v.9 no.6 / pp.2098-2106 / 2014
  • In the area of clustering, there are numerous approaches to constructing clusters in the input space. For regression problems, when the clusters form part of the overall model, the relationships between the input space and the output space are essential and have to be taken into consideration. Conditional Fuzzy C-Means (c-FCM) clustering offers an opportunity to analyze the structure of the input space with a mechanism of supervision implied by the distribution of data in the output space. However, like other clustering methods, c-FCM focuses on the distribution of the data. In this paper, we introduce a new method which, by making use of an ambiguity index, focuses on the boundaries of the clusters, whose determination is essential to the quality of the ensuing classification procedures. The introduced design is illustrated with numeric examples that provide detailed insight into the performance of the fuzzy classifiers and quantify several essential design aspects.
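
One common way to make the "boundary area" idea concrete is an ambiguity index derived from fuzzy memberships, for example one minus the gap between a sample's two largest membership degrees: a small gap means the sample lies between clusters. The sketch below uses that generic definition (not necessarily the paper's exact index, and without the conditional supervision step), with FCM-style memberships computed against k-means centers.

```python
# Sketch of a membership-based ambiguity index for flagging boundary samples.
# The index used here (1 - gap between the two largest memberships) is an
# assumed, generic definition, not necessarily the paper's exact formula.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=2.0, random_state=0)
centers = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X).cluster_centers_

# FCM-style memberships (fuzzifier m = 2) with respect to the fixed centers.
d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
u = (d ** -2) / (d ** -2).sum(axis=1, keepdims=True)

# Ambiguity index: 1 minus the gap between the two largest memberships.
top2 = np.sort(u, axis=1)[:, -2:]
ambiguity = 1.0 - (top2[:, 1] - top2[:, 0])

boundary = ambiguity > 0.8          # assumed threshold for the "boundary area"
print(f"{boundary.sum()} of {len(X)} samples fall in the boundary area")
```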

A Simple Speech/Non-speech Classifier Using Adaptive Boosting

  • Kwon, Oh-Wook;Lee, Te-Won
    • The Journal of the Acoustical Society of Korea / v.22 no.3E / pp.124-132 / 2003
  • We propose a new method for speech/non-speech classification based on concepts from the adaptive boosting (AdaBoost) algorithm, in order to detect speech for robust speech recognition. The method combines simple base classifiers through the AdaBoost algorithm with a set of optimized speech features and spectral subtraction. Its key benefits are simple implementation, low computational complexity, and avoidance of the over-fitting problem. We validated the method by comparing its performance with the speech/non-speech classifier used in a standard voice activity detector. For speech recognition purposes, additional performance improvements were achieved by adopting new features, including speech band energies and MFCC-based spectral distortion. At the same false-alarm rate, the method reduced miss errors by 20-50%.
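
A minimal sketch of the boosting setup follows: AdaBoost over decision stumps applied to per-frame features. The features here are synthetic placeholders for the band energies and MFCC-based spectral distortion mentioned in the abstract, and the frame labels are randomly generated for illustration only.

```python
# Minimal AdaBoost speech/non-speech sketch over synthetic frame features.
# Real band energies and MFCC-based spectral distortion would replace the toys.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_frames = 2000
speech = rng.integers(0, 2, n_frames)                 # 1 = speech frame, 0 = non-speech

# Toy per-frame features: 4 "band energies" + 1 "spectral distortion" value.
band_energy = rng.normal(0.0, 1.0, (n_frames, 4)) + 1.5 * speech[:, None]
distortion = rng.normal(0.0, 1.0, n_frames) - 1.0 * speech
X = np.column_stack([band_energy, distortion])

# AdaBoost's default base learner is a depth-1 decision tree (a stump),
# matching the "simple base classifiers" idea in the abstract.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X[:1500], speech[:1500])
print("toy frame accuracy:", clf.score(X[1500:], speech[1500:]))
```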

Performance Analysis of Viola & Jones Face Detection Algorithm

  • Oh, Jeong-su;Heo, Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.477-480 / 2018
  • The Viola and Jones object detection algorithm is a representative face detection algorithm. It uses Haar-like features to represent the face and a cascade-AdaBoost classifier, in which each strong classifier is a linear combination of weak classifiers, for classification. The algorithm requires several parameter settings for its implementation, and the chosen values affect its performance. This paper analyzes face detection performance according to the parameters set in the algorithm.

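As an illustration of the kind of parameters the paper varies, the sketch below runs OpenCV's stock Haar cascade face detector; scaleFactor, minNeighbors, and minSize are the detectMultiScale tuning knobs, and the image path is a placeholder.

```python
# Sketch: OpenCV's Haar cascade face detector with the tunable parameters
# (scaleFactor, minNeighbors, minSize) that strongly affect detection results.
import cv2

# Stock frontal-face cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")                  # placeholder image path
if img is None:
    raise SystemExit("put an input image at group_photo.jpg first")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,      # image-pyramid scaling step between scans
    minNeighbors=5,       # how many overlapping detections a face needs
    minSize=(30, 30),     # ignore candidate windows smaller than this
)
print(f"detected {len(faces)} face(s)")

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_out.jpg", img)
```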

Reinforcement Learning Algorithm Using Domain Knowledge

  • Young, Jang-Si;Hong, Suh-Il;Hak, Kong-Sung;Rok, Oh-Sang
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.173.5-173 / 2001
  • Q-learning is one of the most widely used reinforcement learning methods; it addresses the question of how an autonomous agent can learn to choose optimal actions to achieve its goal for a given problem. Q-learning can acquire optimal control strategies from delayed rewards, even when the agent has no prior knowledge of the effects of its actions on the environment. If the agent is able to use prior knowledge, it can be expected to speed up learning while interacting with the environment. We present a novel reinforcement learning method using domain knowledge, which is represented by problem-independent features and their classifiers; here, neural networks are employed as the knowledge classifiers. To show that an agent using domain knowledge can achieve better performance than an agent with a standard Q-learner, computer simulations are ...

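For reference, the core update behind Q-learning is Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]. The sketch below implements it on a toy one-dimensional chain, with a hypothetical prior_action_bias hook standing in for the paper's domain-knowledge classifier (the neural-network knowledge component itself is not reproduced).

```python
# Tabular Q-learning on a toy 1-D chain; prior_action_bias is a hypothetical
# hook standing in for the paper's domain-knowledge classifier.
import numpy as np

N_STATES, ACTIONS = 8, (0, 1)          # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

def prior_action_bias(state):
    """Hypothetical domain knowledge: mildly prefer moving right (toward the goal)."""
    return np.array([0.0, 0.1])

def step(state, action):
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(300):
    s, done = 0, False
    while not done:
        if rng.random() < EPS:                       # epsilon-greedy exploration
            a = int(rng.integers(len(ACTIONS)))
        else:                                        # greedy w.r.t. Q plus the knowledge bias
            a = int(np.argmax(Q[s] + prior_action_bias(s)))
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
        s = s2

print("greedy policy (0=left, 1=right):", Q.argmax(axis=1))
```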

Genetic Algorithm based Hybrid Ensemble Model

  • Min, Sung-Hwan
    • Journal of Information Technology Applications and Management / v.23 no.1 / pp.45-59 / 2016
  • An ensemble classifier is a method that combines the outputs of multiple classifiers, and it is widely accepted that ensemble classifiers can improve prediction accuracy. Recently, ensemble techniques have been successfully applied to bankruptcy prediction. Bagging and random subspace are the most popular ensemble techniques, and each has proved very effective at improving generalization ability. However, few studies have focused on integrating bagging with random subspace. In this study, we propose a new hybrid ensemble model that integrates bagging and the random subspace method using a genetic algorithm to improve model performance. The proposed model is applied to bankruptcy prediction for Korean companies and compared with other models. The experimental results show that the proposed model performs better than the other models, such as the single classifier, the original ensemble models, and the simple hybrid model.
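
The integration idea can be sketched as a pool of base learners that each combine a bootstrap sample (bagging) with a random feature subset (random subspace), plus a small genetic algorithm that searches for the member subset whose majority vote does best on a validation split. The GA below is a bare-bones binary GA written for illustration; the population size, mutation rate, truncation selection, and plain majority voting are assumptions rather than the paper's exact design.

```python
# Sketch: bagging + random-subspace pool, with a tiny binary GA selecting
# which pool members join the final majority-vote ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, y_tr = X[:400], y[:400]          # training split
X_va, y_va = X[400:500], y[400:500]    # validation split scored by the GA
X_te, y_te = X[500:], y[500:]          # held-out test split

# --- pool of hybrid base learners: bootstrap rows + random feature subset ---
POOL = 30
pool = []
for _ in range(POOL):
    rows = rng.integers(0, len(X_tr), len(X_tr))          # bagging
    feats = rng.choice(20, size=10, replace=False)        # random subspace
    clf = DecisionTreeClassifier(max_depth=5).fit(X_tr[rows][:, feats], y_tr[rows])
    pool.append((clf, feats))

def vote(mask, X_new):
    """Majority vote of the pool members selected by the binary mask."""
    idx = np.flatnonzero(mask) if mask.any() else np.arange(POOL)
    preds = np.stack([pool[i][0].predict(X_new[:, pool[i][1]]) for i in idx])
    return (preds.mean(axis=0) >= 0.5).astype(int)

def fitness(mask):
    return np.mean(vote(mask, X_va) == y_va)

# --- bare-bones binary GA over member-selection masks ---
POP, GENS, MUT = 20, 25, 0.05
population = rng.random((POP, POOL)) < 0.5
for _ in range(GENS):
    scores = np.array([fitness(m) for m in population])
    parents = population[np.argsort(scores)[-POP // 2:]]          # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, POOL)                               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(POOL) < MUT                           # bit-flip mutation
        children.append(child)
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(m) for m in population])]
print("selected members:", int(best.sum()),
      "test accuracy:", np.mean(vote(best, X_te) == y_te))
```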

Modularity and Modality in ‘Second’ Language Learning: The Case of a Polyglot Savant

  • Smith, Neil
    • Korean Journal of English Language and Linguistics / v.3 no.3 / pp.411-426 / 2003
  • I report on the case of a polyglot ‘savant’ (C), who is mildly autistic, severely apraxic, and of limited intellectual ability; yet who can read, write, speak and understand about twenty languages. I outline his abilities, both verbal and non-verbal, noting the asymmetry between his linguistic ability and his general intellectual inability and, within the former, between his unlimited morphological and lexical prowess as opposed to his limited syntax. I then spell out the implications of these findings for modularity. C's unique profile suggested a further project in which we taught him British Sign Language. I report on this work, paying particular attention to the learning and use of classifiers, and discuss its relevance to the issue of modality: whether the human language faculty is preferentially tied to the oral domain, or is ‘modality-neutral’ as between the spoken and the visual modes.


Ensemble of Classifiers Constructed on Class-Oriented Attribute Reduction

  • Li, Min;Deng, Shaobo;Wang, Lei
    • Journal of Information Processing Systems / v.16 no.2 / pp.360-376 / 2020
  • Many heuristic attribute reduction algorithms have been proposed to find a single reduct that can stand in for the entire set of original attributes without loss of classification capability; however, a single reduct is not always adequate for multiclass datasets. In this study, based on a probabilistic rough set model, we propose the class-oriented attribute reduction (COAR) algorithm, which finds a separate reduct for each target class, so that there is a strong dependence between each reduct and its target class. Consequently, we propose a type of ensemble constructed from a group of classifiers based on the class-oriented reducts, with a customized weighted majority voting strategy. We evaluated the performance of the proposed algorithm on five real multiclass datasets. Experimental results confirm the superiority of the proposed method in terms of four general evaluation metrics.
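
The ensemble structure can be illustrated with one specialist classifier per class, each restricted to its own attribute subset as a stand-in for a class-oriented reduct, and combined by a weighted vote toward its target class. The random attribute subsets and accuracy-based weights below are assumptions for illustration; the COAR reduction itself (the probabilistic rough-set step) is not reproduced.

```python
# Sketch: one specialist classifier per class, each on its own attribute
# subset (a stand-in for a class-oriented reduct), combined by weighted voting.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
classes = np.unique(y)
rng = np.random.default_rng(0)

specialists = []
for ci, c in enumerate(classes):
    feats = rng.choice(X.shape[1], size=6, replace=False)    # assumed "reduct" for class c
    clf = LogisticRegression(max_iter=2000).fit(X_tr[:, feats], (y_tr == c).astype(int))
    weight = clf.score(X_tr[:, feats], (y_tr == c).astype(int))   # accuracy-based weight
    specialists.append((ci, feats, clf, weight))

def weighted_majority(X_new):
    scores = np.zeros((len(X_new), len(classes)))
    for ci, feats, clf, w in specialists:
        # each specialist adds its weighted confidence only to its target class
        scores[:, ci] += w * clf.predict_proba(X_new[:, feats])[:, 1]
    return classes[scores.argmax(axis=1)]

print("weighted-vote accuracy:", np.mean(weighted_majority(X_te) == y_te))
```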