• Title/Summary/Keyword: Optimal Classification (최적 분류)

Search Result 1,052

A Methodological Overview of Reservoir Operation Methods (저수지 운영방안의 방법론적인 개괄)

  • 김승권
    • Water for future / v.21 no.1 / pp.16-24 / 1988
  • Reservoir operation methods are classified into operations for stable water supply (water use), operations for flood control, and real-time reservoir operation. Because the number and variety of research papers on optimal operation is truly vast, the review surveys them from a methodological perspective rather than enumerating them individually.

Voice Personality Transformation Using an Optimum Classification and Transformation (최적 분류 변환을 이용한 음성 개성 변환)

  • 이기승
    • The Journal of the Acoustical Society of Korea / v.23 no.5 / pp.400-409 / 2004
  • In this paper, a voice personality transformation method is proposed, which makes one person's voice sound like another person's voice. To transform the voice personality, the vocal tract transfer function is used as the transformation parameter. Compared with previous methods, the proposed method makes the transformed speech closer to the target speaker's voice from both subjective and objective points of view. Conversion between vocal tract transfer functions is implemented by classification of the entire vector space followed by a linear transformation for each cluster. The LPC cepstrum is used as the feature parameter. A joint classification and transformation method is proposed, in which optimum clusters and transformation matrices are simultaneously estimated under a minimum mean square error criterion. To evaluate the performance of the proposed method, transformation rules were generated from 150 sentences uttered by three male speakers and one female speaker. These rules were then applied to another 150 sentences uttered by the same speakers, and objective evaluation and subjective listening tests were performed.
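
As a rough illustration of the classification-plus-transformation structure described in the abstract, the sketch below clusters aligned source-speaker LPC-cepstrum frames with k-means and fits one least-squares affine transform per cluster. This is only an approximation: the paper estimates clusters and transformation matrices jointly under a minimum mean square error criterion, whereas here the two steps are decoupled, and the function names and data layout are hypothetical.

```python
# Hypothetical sketch: cluster the source feature space, then fit one affine
# transform per cluster by least squares. This decouples the two steps that
# the paper optimizes jointly under an MMSE criterion.
import numpy as np
from sklearn.cluster import KMeans

def fit_clustered_transforms(src, tgt, n_clusters=8):
    """src, tgt: (N, D) time-aligned LPC-cepstrum frames of source/target speakers."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(src)
    transforms = []
    for c in range(n_clusters):
        idx = km.labels_ == c
        X = np.hstack([src[idx], np.ones((idx.sum(), 1))])   # append bias column
        W, *_ = np.linalg.lstsq(X, tgt[idx], rcond=None)      # (D+1, D) affine map
        transforms.append(W)
    return km, transforms

def convert(frames, km, transforms):
    """Map source frames toward the target speaker using the per-cluster transforms."""
    labels = km.predict(frames)
    out = np.empty_like(frames)
    for i, (x, c) in enumerate(zip(frames, labels)):
        out[i] = np.append(x, 1.0) @ transforms[c]
    return out
```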

Naive Bayes Learner for Propositionalized Attribute Taxonomy (명제화된 어트리뷰트 택소노미를 이용하는 나이브 베이스 학습 알고리즘)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.10a / pp.406-409 / 2008
  • We consider the problem of exploiting a taxonomy of propositionalized attributes in order to learn compact and robust classifiers. We introduce the Propositionalized Attribute Taxonomy guided Naive Bayes Learner (PAT-NBL), an inductive learning algorithm that exploits a taxonomy of propositionalized attributes as prior knowledge to generate compact and accurate classifiers. PAT-NBL uses top-down and bottom-up search over the propositionalized attribute taxonomy and the data to find a locally optimal cut that corresponds to the instance space. Our experimental results on University of California-Irvine (UCI) repository data sets show that the proposed algorithm can generate classifiers that are sometimes comparable in compactness and accuracy to those produced by standard Naive Bayes learners.
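
For orientation only, the snippet below runs a standard Naive Bayes learner over synthetic propositionalized (binary) attributes; it is the baseline that PAT-NBL is compared against, not the taxonomy-guided cut search itself, and the data are made up for the demo.

```python
# Illustrative baseline only: a standard Naive Bayes learner over
# propositionalized (0/1) attributes. The taxonomy-guided cut search that
# defines PAT-NBL is not reproduced here.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 30))      # 30 propositionalized binary attributes
y = (X[:, :5].sum(axis=1) > 2).astype(int)  # synthetic labels for the demo

print(cross_val_score(BernoulliNB(), X, y, cv=5).mean())
```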

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against this dataset was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
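
The sketch below illustrates the random subspace KNN ensemble that this study optimizes, assuming scikit-learn: each member gets a random feature subset and a randomly drawn k, and predictions are combined by majority vote. The genetic-algorithm search over k values and feature subsets that constitutes the paper's contribution is not reproduced here, and the class name and parameter defaults are illustrative.

```python
# Hedged sketch of a random-subspace KNN ensemble with majority voting.
# The paper tunes each member's k and feature subset with a genetic algorithm;
# here both are drawn at random, so this is the baseline being improved upon.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class RandomSubspaceKNN:
    def __init__(self, n_members=10, subspace_size=8, k_choices=(1, 3, 5, 7), seed=0):
        self.n_members, self.subspace_size = n_members, subspace_size
        self.k_choices, self.rng = k_choices, np.random.default_rng(seed)

    def fit(self, X, y):
        self.members = []
        for _ in range(self.n_members):
            feats = self.rng.choice(X.shape[1], self.subspace_size, replace=False)
            k = int(self.rng.choice(self.k_choices))
            clf = KNeighborsClassifier(n_neighbors=k).fit(X[:, feats], y)
            self.members.append((feats, clf))
        return self

    def predict(self, X):
        votes = np.stack([clf.predict(X[:, feats]) for feats, clf in self.members])
        # majority vote over ensemble members (binary 0/1 labels assumed,
        # matching the bankruptcy / non-bankruptcy setting)
        return (votes.mean(axis=0) >= 0.5).astype(int)
```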

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility / v.13 no.1 / pp.47-60 / 2010
  • Most classification research has used learning-based models such as kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), or statistics-based methods such as the Bayesian classifier and neural network algorithms (NNA). However, these approaches face space and time limitations when classifying the very large number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which does not capture the real meaning of words well. Korean web page classification faces an additional problem because Korean words are frequently polysemous. For these reasons, LSA (Latent Semantic Analysis) is proposed for classification in this environment (large data sets and polysemous words). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimension. This decomposition creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and exposes the latent meaning of words and documents (or web pages). Although LSA performs well, it has a drawback for classification: as SVD reduces the dimensions of the matrix and creates the new semantic space, it considers which dimensions represent the vectors well, not which dimensions discriminate between them. This is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA method that selects the optimal dimensions to both discriminate and represent vectors well, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain further improvement in classification by creating and selecting features, reducing stopwords, and statistically weighting specific values.
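
A minimal LSA sketch, assuming a scikit-learn bag-of-words pipeline: the term-document matrix is factored with truncated SVD and documents are classified in the reduced semantic space. The supervised, discrimination-aware dimension selection proposed in the paper is not shown; plain LSA keeps the top singular directions regardless of class separability. The toy corpus and labels are invented for the example.

```python
# Plain LSA pipeline: TF-IDF term-document matrix -> truncated SVD -> kNN
# classification in the reduced semantic space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

docs = ["web page about sports", "market prices and finance news",
        "football match results", "stock market report"]      # toy corpus
labels = ["sports", "finance", "sports", "finance"]

model = make_pipeline(TfidfVectorizer(),
                      TruncatedSVD(n_components=2, random_state=0),
                      KNeighborsClassifier(n_neighbors=1))
model.fit(docs, labels)
print(model.predict(["latest football scores"]))
```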

A Dual Filter-based Channel Selection for Classification of Motor Imagery EEG (동작 상상 EEG 분류를 위한 이중 filter-기반의 채널 선택)

  • Lee, David;Lee, Hee Jae;Park, Sang-Hoon;Lee, Sang-Goog
    • Journal of KIISE / v.44 no.9 / pp.887-892 / 2017
  • A brain-computer interface (BCI) is a technology that controls a computer and transmits intention by measuring and analyzing the multi-channel electroencephalogram (EEG) signals generated during mental work. Optimal EEG channel selection is necessary not only for the convenience and speed of the BCI but also for improved accuracy. The optimal channels are obtained by removing duplicate (redundant) channels or noisy channels. This paper proposes a dual filter-based channel selection method to select the optimal EEG channels. The proposed method first removes duplicate channels using Spearman's rank correlation to eliminate redundancy between channels. Then, using the F-score, the relevance between channels and class labels is obtained, and only the top m channels are selected. The proposed method can provide good classification accuracy by using features obtained from channels that are associated with class labels and have no duplicates. The proposed channel selection method greatly reduces the number of channels required while improving the average classification accuracy.
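
A hedged sketch of the two filter stages described above, assuming SciPy and scikit-learn: channels whose Spearman rank correlation with an already-kept channel exceeds a threshold are dropped, and the survivors are ranked by ANOVA F-score against the class labels, keeping the top m. The threshold, the per-channel feature layout, and the function name are illustrative, not taken from the paper.

```python
# Dual filter sketch: (1) remove redundant channels via Spearman correlation,
# (2) rank the remaining channels by F-score relevance to the class labels.
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_selection import f_classif

def select_channels(X, y, corr_thresh=0.9, m=8):
    """X: (n_trials, n_channels) per-channel features, y: class labels."""
    rho, _ = spearmanr(X)                      # (n_channels, n_channels) rank correlation
    corr = np.abs(rho)
    keep = []
    for ch in range(X.shape[1]):               # first stage: drop near-duplicates
        if all(corr[ch, k] < corr_thresh for k in keep):
            keep.append(ch)
    keep = np.array(keep)
    scores, _ = f_classif(X[:, keep], y)       # second stage: relevance to labels
    return keep[np.argsort(scores)[::-1][:m]]  # indices of the top-m channels
```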

Detection of the Optimum Spectral Roll-off Point using Violin as a Sound Source (바이올린 음원을 이용한 스펙트랄 롤오프 포인트의 최적점 검출)

  • Kim, Jae-Chun
    • Journal of the Korea Society of Computer and Information / v.12 no.1 s.45 / pp.51-56 / 2007
  • Feature functions were used for the classification of music. The spectral roll-off, variance, average peak level, and class were chosen to make up a feature function vector. Among these, the spectral roll-off is the feature defined by the ratio of low-frequency to high-frequency energy. To find the optimal roll-off point, roll-off points from 0.05 to 0.95 were swept, and the classification success rate was monitored as the roll-off point changed. The data used for the experiments were taken from the sounds of a modern violin and a baroque violin. Their shapes and sounds are similar, but they differ slightly in sound texture; as such, the data obtained from these two kinds of violin are useful for finding an adequate roll-off point. The optimal roll-off point determined through the experiment was 0.85. At this point, the classification success rate was 85%, the highest observed.
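
For concreteness, the NumPy sketch below computes the spectral roll-off point for a chosen fraction p (the quantity swept from 0.05 to 0.95 in the experiment): the lowest frequency below which a fraction p of the total spectral magnitude is contained. The synthetic two-tone signal is only a stand-in for the violin recordings.

```python
# Spectral roll-off: lowest frequency below which a fraction p of the total
# spectral magnitude is contained.
import numpy as np

def spectral_rolloff(signal, sr, p=0.85):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    cumulative = np.cumsum(spectrum)
    idx = np.searchsorted(cumulative, p * cumulative[-1])
    return freqs[idx]

# Sweep candidate roll-off points, as in the experiment above
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 2200 * t)
for p in np.arange(0.05, 1.0, 0.10):
    print(round(p, 2), spectral_rolloff(tone, sr, p))
```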

Optimal EEG Channel Selection using BPSO with Channel Impact Factor (Channel Impact Factor 접목한 BPSO 기반 최적의 EEG 채널 선택 기법)

  • Kim, Jun-Yeup;Park, Seung-Min;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.6 / pp.774-779 / 2012
  • A brain-computer interface based on motor imagery is a system that transforms a subject's intention into a control signal by classifying the EEG signals obtained while the subject imagines moving his or her limbs. For a new paradigm, we do not know in advance which electrode positions will be activated. A simple approach is to use as many channels as possible, but using many channels introduces its own problems: when applying common spatial patterns (CSP), an EEG feature extraction method, many channels lead to overfitting, and the resulting models are also difficult to use for medical analysis. To overcome these problems, we suggest a channel selection method based on binary particle swarm optimization (BPSO) with a channel impact factor, which selects channels close to the most important ones. This paper examines whether the channel impact factor can improve classification accuracy with a Support Vector Machine (SVM).
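
The sketch below shows a plain binary PSO for channel selection with cross-validated SVM accuracy as the fitness, assuming scikit-learn. The channel impact factor weighting that the paper adds is not modeled, and all swarm constants are illustrative defaults.

```python
# Binary PSO sketch: each particle is a 0/1 mask over EEG channels; fitness is
# cross-validated SVM accuracy on the selected channels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(bits, X, y):
    if bits.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, bits.astype(bool)], y, cv=3).mean()

def bpso_select(X, y, n_particles=20, n_iter=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n_ch = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, n_ch))
    vel = np.zeros((n_particles, n_ch))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_ch))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # sigmoid transfer function turns velocities into bit probabilities
        pos = (rng.random((n_particles, n_ch)) < 1 / (1 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return np.flatnonzero(gbest)   # indices of the selected channels
```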

Search for Optimal Data Augmentation Policy for Environmental Sound Classification with Deep Neural Networks (심층 신경망을 통한 자연 소리 분류를 위한 최적의 데이터 증대 방법 탐색)

  • Park, Jinbae;Kumar, Teerath;Bae, Sung-Ho
    • Journal of Broadcast Engineering / v.25 no.6 / pp.854-860 / 2020
  • Deep neural networks have shown remarkable performance in various areas, including image classification and speech recognition. The variety of data generated by augmentation plays an important role in improving the performance of a neural network: transforming the data during augmentation allows the network to learn more general representations from more diverse forms of the data. In the image-processing field, not only have new augmentation methods been proposed to improve performance, but methods have also been explored for finding an optimal augmentation policy that can change according to the dataset and the network architecture. Inspired by this prior work, this paper searches for an optimal augmentation policy for sound data. We carried out many experiments, randomly combining augmentation methods such as adding noise, pitch shifting, and time stretching, to empirically determine which combination is most effective. As a result, by applying the optimal data augmentation policy, we achieve improved classification accuracy on the environmental sound classification dataset (ESC-50).
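
As a sketch of the random policy search described above, the code combines the three augmentation operations named in the abstract (noise, pitch shift, time stretch), assuming librosa is available for the pitch and stretch transforms. The evaluation step (training a network on ESC-50 and measuring accuracy) is abstracted behind a placeholder evaluate() call, so the search loop is left commented out.

```python
# Random search over combinations of noise, pitch shift, and time stretch.
import numpy as np
import librosa

def add_noise(y, snr=0.005):
    return y + snr * np.random.randn(len(y))

def pitch_shift(y, sr, steps):
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)

def time_stretch(y, rate):
    return librosa.effects.time_stretch(y, rate=rate)

def sample_policy(rng):
    """Randomly pick a subset of augmentations and their magnitudes."""
    ops = []
    if rng.random() < 0.5:
        ops.append(("noise", {"snr": rng.uniform(0.001, 0.02)}))
    if rng.random() < 0.5:
        ops.append(("pitch", {"steps": int(rng.integers(-4, 5))}))
    if rng.random() < 0.5:
        ops.append(("stretch", {"rate": rng.uniform(0.8, 1.25)}))
    return ops

def apply_policy(y, sr, ops):
    for name, params in ops:
        if name == "noise":
            y = add_noise(y, params["snr"])
        elif name == "pitch":
            y = pitch_shift(y, sr, params["steps"])
        elif name == "stretch":
            y = time_stretch(y, params["rate"])
    return y

# Search loop (evaluate() is a placeholder for training and validating the
# classifier on ESC-50 with the augmented data):
# rng = np.random.default_rng(0)
# best = max((sample_policy(rng) for _ in range(20)), key=lambda ops: evaluate(ops))
```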

Terms Based Sentiment Classification for Online Review Using Support Vector Machine (Support Vector Machine을 이용한 온라인 리뷰의 용어기반 감성분류모형)

  • Lee, Taewon;Hong, Taeho
    • Information Systems Review / v.17 no.1 / pp.49-64 / 2015
  • Customer reviews containing subjective opinions about products or services in online stores are being generated rapidly, and their influence on customers has become immense due to the widespread use of SNS. A number of studies have therefore focused on opinion mining to analyze positive and negative opinions and to find better solutions for customer support and sales. For opinion mining, it is very important to select the key terms that reflect the customers' sentiment in the reviews. We propose a document-level, terms-based sentiment classification model that selects the optimal terms using part-of-speech tags. SVMs (support vector machines) are used to build the opinion-mining predictor, and a combination of POS tags and four term extraction methods is used for SVM feature selection. To validate the proposed opinion-mining model, we applied it to customer reviews on Amazon. After crawling 80,000 reviews, we eliminated meaningless terms (stopwords) and extracted useful terms using a part-of-speech tagging approach. Terms were ranked by document frequency, TF-IDF, information gain, and the chi-squared statistic, and the top 20 ranked terms were used as features of the SVM model. Our experimental results show that the performance of the SVM model with the four POS tags is superior to the benchmark model built from adjective terms only. In addition, among the SVM models with the four different term extraction methods, the model based on the chi-squared statistic shows the best performance. The proposed opinion-mining model is expected to improve customer service and provide a competitive advantage for online stores.
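
A hedged sketch of the overall pipeline using scikit-learn: TF-IDF term weights, chi-squared feature ranking to keep the top-k terms, and a linear SVM. The POS-tag filtering and the other three ranking schemes from the study (document frequency, TF-IDF ranking, information gain) are not shown, and the toy reviews and k value are invented for the example (the study used the top 20 terms).

```python
# Term-based sentiment classification sketch: TF-IDF features, chi-squared
# term selection, and a linear SVM classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reviews = ["great product, works perfectly",
           "terrible quality, broke after a week",
           "absolutely love it, highly recommend",
           "waste of money, very disappointed"]
labels = [1, 0, 1, 0]                    # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      SelectKBest(chi2, k=10),   # keep the top-ranked terms
                      LinearSVC())
model.fit(reviews, labels)
print(model.predict(["totally disappointed, do not recommend"]))
```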