• Title/Summary/Keyword: Classification Performance


Development of Portable Cable Fault Detection System with Automatic Fault Distinction and Distance Measurement (자동 고장 판별 및 거리 측정 기능을 갖는 휴대용 케이블 고장 검출 장치 개발)

  • Kim, Jae-Jin;Jeon, Jeong-Chay
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.10
    • /
    • pp.1774-1779
    • /
    • 2016
  • This paper proposes a portable cable fault detection system with automatic fault distinction and distance measurement. The system combines a time-frequency correlation method, a reference-signal elimination method, and an automatic fault classification algorithm so that faults can be determined and located more accurately than with a conventional time-domain reflectometry (TDR) system, even when signal attenuation increases because of the long distance to the cable fault. The performance of the developed system was validated experimentally in a test field constructed for the standardized performance testing of power cable fault location equipment. The evaluation showed that the distance measurement error of the developed system is less than 1.34%. In addition, no errors in automatic fault type determination or fault location occurred when the fault phase and peak value were detected through elimination of the reference signal, normalization of the correlation coefficient, and the automatic fault classification algorithm.
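
A minimal sketch of correlation-based TDR fault location in the spirit of the abstract above: inject a reference pulse, subtract it from the measurement, and cross-correlate the residual with the reference to estimate the echo delay (distance) and its polarity (fault type). The pulse shape, sampling rate, and propagation velocity are illustrative assumptions, not values from the paper.

```python
# Sketch: locate a cable fault by cross-correlating a reference pulse with its
# reflection after the reference (incident) signal has been removed.
# Pulse shape, sampling rate, and velocity factor are assumptions for illustration.
import numpy as np

fs = 1e9                        # sampling rate [Hz] (assumed)
v = 0.66 * 3e8                  # propagation velocity in the cable [m/s] (assumed)
t = np.arange(0, 2e-6, 1 / fs)

# Reference pulse injected into the cable (Gaussian shape, width ~20 ns).
ref = np.exp(-((t - 0.1e-6) ** 2) / (2 * (20e-9) ** 2))

# Simulated measurement: incident pulse plus an attenuated, inverted echo from
# a short-circuit fault 150 m away (round-trip delay = 2 * d / v), plus noise.
d_true = 150.0
delay = 2 * d_true / v
echo = -0.4 * np.exp(-((t - 0.1e-6 - delay) ** 2) / (2 * (20e-9) ** 2))
meas = ref + echo + 0.01 * np.random.randn(t.size)

# Eliminate the reference signal, then correlate the residual with it.
residual = meas - ref
corr = np.correlate(residual, ref, mode="full")
lags = np.arange(-t.size + 1, t.size)

peak = np.argmax(np.abs(corr))
tau = lags[peak] / fs                       # round-trip delay of the echo
print("estimated fault distance: %.1f m" % (v * tau / 2))
print("fault type:", "short (inverted echo)" if corr[peak] < 0 else "open (same-polarity echo)")
```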

Use of Word Clustering to Improve Emotion Recognition from Short Text

  • Yuan, Shuai;Huang, Huan;Wu, Linjing
    • Journal of Computing Science and Engineering
    • /
    • v.10 no.4
    • /
    • pp.103-110
    • /
    • 2016
  • Emotion recognition is an important component of affective computing and is significant for implementing natural and friendly human-computer interaction. An effective approach to recognizing emotion from text is based on machine learning, which treats emotion recognition as a classification problem. In emotion recognition, however, the texts involved are usually very short, which leaves a very large, sparse feature space and degrades classification performance. This paper proposes to resolve the feature sparseness problem and substantially improve emotion recognition performance on short texts by representing short texts with word cluster features, offering a novel word clustering algorithm, and using a new feature weighting scheme. Emotion classification experiments were performed with different features and weighting schemes on a publicly available dataset. The experimental results suggest that the word cluster features and the proposed weighting scheme can partly resolve the feature sparseness problem and improve emotion recognition performance.
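
A minimal sketch of the general idea of word-cluster features for short texts, under stated assumptions: word vectors come from a truncated SVD of the term-document matrix and are grouped with k-means, which stand in for the paper's novel clustering algorithm and weighting scheme.

```python
# Sketch: map sparse word features to denser word-cluster features for
# short-text emotion classification. Clustering method and weighting are
# illustrative stand-ins, not the paper's algorithms.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

texts = ["i am so happy today", "this makes me very angry",
         "what a sad and lonely night", "great news, feeling wonderful",
         "i hate waiting in line", "tears again, everything hurts"]
labels = ["joy", "anger", "sadness", "joy", "anger", "sadness"]

# 1) Word-document matrix and low-dimensional word vectors.
vec = CountVectorizer()
X = vec.fit_transform(texts)                      # docs x words
word_vectors = TruncatedSVD(n_components=3, random_state=0).fit_transform(X.T)

# 2) Cluster the words; every word gets a cluster id.
n_clusters = 4
clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(word_vectors)

# 3) Represent each short text by its word-cluster counts (denser than raw words).
def cluster_features(doc_row):
    feats = np.zeros(n_clusters)
    for word_idx in doc_row.nonzero()[1]:
        feats[clusters[word_idx]] += doc_row[0, word_idx]
    return feats

X_clusters = np.vstack([cluster_features(X[i]) for i in range(X.shape[0])])

# 4) Train an ordinary classifier on the cluster features.
clf = LogisticRegression(max_iter=1000).fit(X_clusters, labels)
print(clf.predict(X_clusters))
```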

Performance Comparison of Decision Trees of J48 and Reduced-Error Pruning

  • Jin, Hoon;Jung, Yong Gyu
    • International journal of advanced smart convergence
    • /
    • v.5 no.1
    • /
    • pp.30-33
    • /
    • 2016
  • With the advent of big data, data mining is increasingly used in various decision-making fields to extract hidden, meaningful information from large amounts of data. As the demand for uncovering the meaning hidden in data grows exponentially, it becomes ever more important to decide which data mining algorithm to select and how to use it. Several data mining algorithms are widely used in biology and the clinic, including logistic regression, neural networks, support vector machines, and a variety of statistical techniques. In this paper we compare the classification performance of two exemplary machine learning algorithms, J48 and REPTree. The performance comparison confirms which algorithm provides the more accurate classification, and more accurate prediction for the goal of the experiment is possible with that algorithm. Based on this, it is expected that detailed classification and distinction, which are relatively difficult to perform visually, will become possible.
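
J48 and REPTree are Weka implementations, so the hedged sketch below uses scikit-learn decision trees as rough analogues: an unpruned tree versus a cost-complexity-pruned tree, compared by cross-validated accuracy on a clinical-style dataset.

```python
# Sketch: compare two decision-tree variants (stand-ins for J48 and REPTree)
# by 10-fold cross-validated accuracy on a clinical-style dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

trees = {
    "unpruned tree": DecisionTreeClassifier(random_state=0),
    "cost-complexity-pruned tree (ccp_alpha=0.01)": DecisionTreeClassifier(ccp_alpha=0.01, random_state=0),
}

for name, clf in trees.items():
    scores = cross_val_score(clf, X, y, cv=10)
    print("%s: mean accuracy %.3f (+/- %.3f)" % (name, scores.mean(), scores.std()))
```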

Estimating Prediction Errors in Binary Classification Problem: Cross-Validation versus Bootstrap

  • Kim Ji-Hyun;Cha Eun-Song
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.1
    • /
    • pp.151-165
    • /
    • 2006
  • It is important to estimate the true misclassification rate of a given classifier when an independent set of test data is not available. Cross-validation and the bootstrap are two possible approaches in this case. In the related literature, bootstrap estimators of the true misclassification rate have been asserted to perform better than cross-validation estimators for small samples. We compare the two estimators empirically when the classification rule is so adaptive to the training data that its apparent misclassification rate is close to zero. We confirm that bootstrap estimators perform better for small samples because of their small variance, and we find that their bias tends to be significant even for moderate to large samples, in which case cross-validation estimators perform better with less computation.
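
A minimal sketch of the two estimators being compared: a k-fold cross-validation estimate and a simple out-of-bag bootstrap estimate of the misclassification rate, using a deliberately adaptive 1-nearest-neighbor rule whose apparent error is near zero. The data and classifier are illustrative, not the paper's experimental setup.

```python
# Sketch: estimate the true misclassification rate without an independent test
# set, by cross-validation and by the (out-of-bag) bootstrap.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=60, n_features=5, random_state=0)
clf = KNeighborsClassifier(n_neighbors=1)    # very adaptive: apparent error ~ 0

# Cross-validation estimate of the misclassification rate.
cv_error = 1.0 - cross_val_score(clf, X, y, cv=10).mean()

# Bootstrap estimate: train on a bootstrap sample, test on the samples left out.
B, errors, n = 200, [], len(y)
for _ in range(B):
    idx = rng.randint(0, n, n)
    oob = np.setdiff1d(np.arange(n), idx)
    if oob.size == 0:
        continue
    clf.fit(X[idx], y[idx])
    errors.append(np.mean(clf.predict(X[oob]) != y[oob]))
boot_error = np.mean(errors)

print("10-fold CV error estimate : %.3f" % cv_error)
print("bootstrap error estimate  : %.3f" % boot_error)
```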

Predicting the Performance of Forecasting Strategies for Naval Spare Parts Demand: A Machine Learning Approach

  • Moon, Seongmin
    • Management Science and Financial Engineering
    • /
    • v.19 no.1
    • /
    • pp.1-10
    • /
    • 2013
  • A hierarchical forecasting strategy does not always outperform a direct forecasting strategy; the relative performance generally depends on features of the demand. This research guides the choice between the alternative forecasting strategies according to demand features. The paper develops and evaluates various classification models, including logistic regression (LR), artificial neural networks (ANN), decision trees (DT), boosted trees (BT), and random forests (RF), for predicting the relative performance of the alternative forecasting strategies for the South Korean navy's spare parts demand, which has non-normal characteristics. ANN minimized classification errors and inventory costs, whereas LR minimized the Brier scores and the sum of forecasting errors.
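
A minimal sketch of the model comparison described above, with synthetic placeholders for the naval spare-parts demand features: several classifiers predict which forecasting strategy wins, scored by classification error and the Brier score. Boosted trees are omitted for brevity.

```python
# Sketch: compare classifiers that predict which forecasting strategy will
# perform better, scored by classification error and the Brier score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Features might describe demand (mean, variance, intermittency, ...);
# label 1 = "hierarchical forecasting wins", 0 = "direct forecasting wins".
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR":  LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "DT":  DecisionTreeClassifier(random_state=0),
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    err = 1.0 - accuracy_score(y_te, model.predict(X_te))
    brier = brier_score_loss(y_te, model.predict_proba(X_te)[:, 1])
    print("%-3s  classification error %.3f  Brier score %.3f" % (name, err, brier))
```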

Dimensionality reduction for pattern recognition based on difference of distribution among classes

  • Nishimura, Masaomi;Hiraoka, Kazuyuki;Mishima, Taketoshi
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1670-1673
    • /
    • 2002
  • For pattern recognition on high-dimensional data such as images, dimensionality reduction is an effective preprocessing step. By reducing the dimensionality we can (1) reduce storage capacity and the amount of computation, and (2) avoid "the curse of dimensionality" and improve classification performance. Popular tools for dimensionality reduction are Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and, more recently, Independent Component Analysis (ICA). Among them, only LDA takes the class labels into consideration. Nevertheless, it has been reported that the classification performance with ICA is better than that with LDA, because LDA restricts the number of dimensions after reduction. To overcome this dilemma, we propose a new dimensionality reduction technique based on an information-theoretic measure of the difference of distribution among classes. It takes the class labels into consideration and still places no restriction on the number of dimensions after reduction. Improvement of classification performance has been confirmed experimentally.
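
The proposed distribution-difference criterion is not reproduced here, but the baseline comparison it addresses can be sketched: PCA ignores class labels, while LDA uses them but can produce at most (number of classes - 1) dimensions.

```python
# Sketch: dimensionality reduction with PCA (label-agnostic) and LDA
# (label-aware, but limited to n_classes - 1 dimensions), followed by a
# k-NN classifier to measure classification performance.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)      # 64-dimensional digit images, 10 classes

reducers = {
    "PCA (20 dims)": PCA(n_components=20),
    "LDA (at most 9 dims for 10 classes)": LinearDiscriminantAnalysis(n_components=9),
}

for name, reducer in reducers.items():
    pipe = make_pipeline(reducer, KNeighborsClassifier(n_neighbors=3))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print("%s: accuracy %.3f" % (name, acc))
```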


The Prediction Performance of the CART Using Bank and Insurance Company Data (CART의 예측 성능:은행 및 보험 회사 데이터 사용)

  • Park, Jeong-Seon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1468-1472
    • /
    • 1996
  • In this study, the performance of CART (Classification and Regression Trees) is compared with that of the discriminant analysis method. In most experiments using bank data, discriminant analysis shows better performance in terms of total cost. In contrast, most experiments using insurance data show that CART is better than discriminant analysis in terms of total cost. These contradictory results are analysed using the characteristics of the data sets. The performance of both CART and discriminant analysis depends on the following parameters: failure prior probability, data used, type I error cost, type II error cost, and validation method.
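
A minimal sketch of a cost-based comparison between a classification tree and discriminant analysis, assuming synthetic credit-style data and illustrative type I / type II error costs rather than the bank and insurance data of the paper.

```python
# Sketch: compare CART with discriminant analysis under asymmetric
# misclassification costs; data and cost values are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Class 1 = "failure" (e.g. default/claim); misclassifying a failure as a
# non-failure (type II error) is assumed to cost 5x a type I error.
COST_TYPE_I, COST_TYPE_II = 1.0, 5.0

X, y = make_classification(n_samples=1000, n_features=8, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def total_cost(y_true, y_pred):
    type_i = np.sum((y_true == 0) & (y_pred == 1))    # non-failure flagged as failure
    type_ii = np.sum((y_true == 1) & (y_pred == 0))   # failure missed
    return COST_TYPE_I * type_i + COST_TYPE_II * type_ii

for name, clf in [("CART", DecisionTreeClassifier(random_state=0)),
                  ("discriminant analysis", LinearDiscriminantAnalysis())]:
    clf.fit(X_tr, y_tr)
    print("%s: total cost %.0f" % (name, total_cost(y_te, clf.predict(X_te))))
```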


Optimization of Domain-Independent Classification Framework for Mood Classification

  • Choi, Sung-Pil;Jung, Yu-Chul;Myaeng, Sung-Hyon
    • Journal of Information Processing Systems
    • /
    • v.3 no.2
    • /
    • pp.73-81
    • /
    • 2007
  • In this paper, we introduce a domain-independent classification framework based on both the k-nearest neighbor and Naive Bayesian classification algorithms. The architecture of our system is simple and modularized, so that each sub-module of the system can be changed or improved efficiently. Moreover, it provides various feature selection mechanisms that can be applied to optimize the general-purpose classifiers for a specific domain. To enhance classification performance, our system provides a conditional probability boosting (CPB) mechanism that can be used in various domains. In the mood classification domain, our optimized framework using the CPB algorithm showed a 1% improvement in precision and a 2% improvement in recall compared with the baseline.
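
A minimal sketch of a modular classification pipeline in the same spirit: interchangeable k-NN and Naive Bayes classifiers behind a common interface, with a pluggable feature-selection step. The CPB mechanism is specific to the paper and is not reproduced here.

```python
# Sketch: a modular text-classification pipeline whose vectorizer, feature
# selector, and classifier can each be swapped independently.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

texts = ["rainy gloomy morning again", "sunny day at the beach",
         "stuck in traffic so annoyed", "won the game, celebrating tonight"]
moods = ["gloomy", "cheerful", "gloomy", "cheerful"]

def build_pipeline(classifier, k_features=10):
    # Each sub-module (features, selection, classifier) is replaceable,
    # which is the point of a modular framework.
    return Pipeline([
        ("features", TfidfVectorizer()),
        ("select", SelectKBest(chi2, k=k_features)),
        ("clf", classifier),
    ])

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=1)),
                  ("Naive Bayes", MultinomialNB())]:
    pipe = build_pipeline(clf).fit(texts, moods)
    print(name, pipe.predict(["such a gloomy rainy evening"]))
```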

AUTOMATIC SELECTION AND ADJUSTMENT OF FEATURES FOR IMAGE CLASSIFICATION

  • Saiki, Kenji;Nagao, Tomoharu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.525-528
    • /
    • 2009
  • Recently, image classification has become an important task in various fields. In general, the performance of image classification is poor without adjustment of the image features, so a method for automatic feature extraction and adjustment is desirable. In this paper, we propose an image classification method that adjusts image features automatically. We assume that texture features are useful in image classification tasks because natural images are composed of several types of texture; the classification accuracy rate is therefore improved by using the distribution of texture features. We obtain texture features by calculating image features from the pixel under consideration and its neighboring pixels, and we then calculate image features from the distribution of texture features. Those image features are adjusted to the image classification task using a genetic algorithm. We apply the proposed method to classifying images into "head" or "non-head" and "male" or "female".
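
A bare-bones sketch of the final step, a genetic algorithm that adjusts (here, selects) features for a classifier; the ready-made digits features stand in for the texture features, and the GA operators are generic rather than the authors' implementation.

```python
# Sketch: a simple genetic algorithm evolving binary feature masks, with
# cross-validated k-NN accuracy as the fitness function.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]                  # subsample to keep the demo fast
n_features = X.shape[1]

def fitness(mask):
    # Classification accuracy using only the selected features.
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Initial population of random binary feature masks.
pop = rng.randint(0, 2, size=(10, n_features))

for generation in range(8):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:5]]   # keep the best half
    children = []
    for _ in range(5):
        a, b = parents[rng.randint(5)], parents[rng.randint(5)]
        cut = rng.randint(1, n_features)          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.rand(n_features) < 0.02] ^= 1   # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected %d of %d features, accuracy %.3f" % (best.sum(), n_features, fitness(best)))
```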


Comparison of wavelet-based decomposition and empirical mode decomposition of electrohysterogram signals for preterm birth classification

  • Janjarasjitt, Suparerk
    • ETRI Journal
    • /
    • v.44 no.5
    • /
    • pp.826-836
    • /
    • 2022
  • Signal decomposition is a computational technique that dissects a signal into its constituent components, providing supplementary information. In this study, the capability of two common signal decomposition techniques, wavelet-based decomposition and empirical mode decomposition, for preterm birth classification was investigated. Ten time-domain features were extracted from the constituent components of electrohysterogram (EHG) signals, namely EHG subbands and EHG intrinsic mode functions, and employed for preterm birth classification. Preterm birth classification and anticipation are crucial tasks that can help reduce preterm birth complications. The computational results show that the preterm birth classification obtained using wavelet-based decomposition is superior, which implies that EHG subbands obtained through wavelet-based decomposition provide more applicable information for preterm birth classification. Furthermore, an accuracy of 0.9776 and a specificity of 0.9978, the best performance on preterm birth classification among state-of-the-art signal processing techniques, were obtained using the time-domain features of the EHG subbands.
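
A minimal sketch of the wavelet-based branch, assuming the PyWavelets library and synthetic signals in place of EHG recordings: decompose each signal into subbands, extract simple time-domain features per subband, and classify.

```python
# Sketch: wavelet subband decomposition, per-subband time-domain features,
# and a classifier. Signals, wavelet choice, and features are illustrative.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)

def subband_features(signal, wavelet="db4", level=4):
    # Wavelet decomposition into approximation + detail subbands.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(c), np.std(c), np.sqrt(np.mean(c ** 2)),  # mean, std, RMS
                  np.max(np.abs(c))]
    return feats

# Synthetic stand-in for EHG recordings: two classes differing in low-frequency content.
def make_signal(label, n=1024):
    t = np.arange(n)
    return np.sin(2 * np.pi * (0.01 if label else 0.03) * t) + 0.5 * rng.randn(n)

y = np.array([0, 1] * 40)
X = np.array([subband_features(make_signal(lab)) for lab in y])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())
```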