• Title/Summary/Keyword: feature vector selection

Search Results: 184

Feature Selection to Predict Very Short-term Heavy Rainfall Based on Differential Evolution (미분진화 기반의 초단기 호우예측을 위한 특징 선택)

  • Seo, Jae-Hyun;Lee, Yong Hee;Kim, Yong-Hyuk
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.6
    • /
    • pp.706-714
    • /
    • 2012
  • The Korea Meteorological Administration provided four years of recent weather records for our very short-term heavy rainfall prediction. We divided the dataset into training, validation, and test sets. Through feature selection, we kept only the important features among the original 72, avoiding the explosion of the solution space that grows exponentially with dimensionality. We used a differential evolution algorithm with two classifiers as fitness functions of the evolutionary computation to select a more accurate feature subset: Support Vector Machine (SVM), which generally shows high accuracy, and k-Nearest Neighbor (k-NN), which is generally fast. In our experiments, the SVM test results were better than those of k-NN. We also preprocessed the weather data with undersampling and normalization. The feature subsets found by differential evolution performed about five times better than using all features, and about 1.36 times better than those found by a genetic algorithm, the best previously known approach; the genetic algorithm also ran about twenty times longer than differential evolution.
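A minimal sketch of differential evolution searching over binary feature masks, as the abstract describes. Everything here is invented for illustration: the real fitness function is a classifier's validation accuracy, not the toy score below, and the paper works with 72 features rather than 8.

```python
import random

random.seed(0)

N_FEATURES = 8          # toy stand-in for the paper's 72 weather features
POP_SIZE = 10
F, CR = 0.8, 0.9        # DE mutation factor and crossover rate

# Hypothetical fitness: reward masks that keep "informative" features 0-3
# and drop the rest (a real run would score k-NN or SVM on validation data).
def fitness(mask):
    return sum(m for i, m in enumerate(mask) if i < 4) - \
           sum(m for i, m in enumerate(mask) if i >= 4)

def evolve(generations=50):
    # continuous genomes thresholded at 0.5 to form a feature mask
    pop = [[random.random() for _ in range(N_FEATURES)] for _ in range(POP_SIZE)]
    to_mask = lambda v: [1 if x > 0.5 else 0 for x in v]
    for _ in range(generations):
        for i, target in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation + binomial crossover
            trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR else target[k]
                     for k in range(N_FEATURES)]
            # greedy selection: keep the trial if it is at least as fit
            if fitness(to_mask(trial)) >= fitness(to_mask(target)):
                pop[i] = trial
    return to_mask(max(pop, key=lambda v: fitness(to_mask(v))))

print(evolve())
```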

A Technique for On-line Automatic Signature Verification based on a Structural Representation (필기의 구조적 표현에 의한 온라인 자동 서명 검증 기법)

  • Kim, Seong-Hoon;Jang, Mun-Ik;Kim, Jai-Hie
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.11
    • /
    • pp.2884-2896
    • /
    • 1998
  • For on-line signature verification, the local shape of a signature is important information. Existing approaches, which represent a signature as a function of time or as a feature vector without regard to local shape, have not exploited the various properties of local shapes, such as a signer's local variation, the local complexity of the signature, or the local difficulty of forgery. In this paper, we propose a new technique for on-line signature verification based on a structural signature representation, so as to analyze local shape and select the important local parts in the matching process. That is, based on the structural representation of a signature, we introduce local importance weighting and a personalized decision threshold, and compare experimental results under different conditions.
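A toy sketch of the two ideas named in the abstract, local importance weighting and a personalized threshold. The segment values, weights, and threshold are all invented; the paper's actual structural representation of strokes is far richer than a flat list of numbers.

```python
# A "signature" here is a list of local segment features; weights encode how
# stable/important each local part is for this particular signer.
def weighted_distance(ref, sample, weights):
    return sum(w * abs(r - s) for r, s, w in zip(ref, sample, weights)) / sum(weights)

def verify(ref, sample, weights, threshold):
    # personalized decision threshold: accept only if the weighted
    # local-shape distance stays below this signer's own cutoff
    return weighted_distance(ref, sample, weights) <= threshold

ref = [1.0, 2.0, 3.0, 4.0]
weights = [2.0, 1.0, 1.0, 0.5]   # stable local parts weighted higher
genuine = [1.1, 2.2, 2.9, 4.3]
forgery = [0.2, 3.5, 1.0, 6.0]
print(verify(ref, genuine, weights, 0.3), verify(ref, forgery, weights, 0.3))
```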


Machine Learning Based Automatic Categorization Model for Text Lines in Invoice Documents

  • Shin, Hyun-Kyung
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.12
    • /
    • pp.1786-1797
    • /
    • 2010
  • Automatic understanding of the contents of a document image is a very hard problem, mainly because the document segmentation process induces a mathematically challenging over-determined system. In both academia and industry, there have been continual efforts to improve the core content-retrieval technologies by separating out segmentation issues using semi-structured documents, e.g., invoices. In this paper we propose classification models for text lines in invoice documents, where text lines are clustered into five categories according to their contents: purchase order header, invoice header, summary header, surcharge header, and purchase items. Our investigation focused on the performance of machine-learning models in two groups: linear-discriminant-analysis (LDA) and non-LDA (logic-based). In the LDA group, naïve Bayesian, k-nearest neighbor, and SVM were used; in the non-LDA group, decision tree, random forest, and boosting. We describe the details of feature vector construction and of model and parameter selection, including training and validation, and present experimental comparisons of training and classification error levels for the models employed.
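One of the classifiers named above, naïve Bayes, is compact enough to sketch end to end on text lines. The training lines, tokens, and the two categories shown are invented examples; the paper uses five categories and real invoice data.

```python
from collections import Counter, defaultdict
import math

# Tiny multinomial naive Bayes over line tokens (two of the paper's five
# invoice-line categories shown; all training lines are made up).
train = [
    ("invoice number date total", "invoice header"),
    ("invoice no billing date", "invoice header"),
    ("qty item description price", "purchase items"),
    ("2 widgets 9.99 each", "purchase items"),
]

def fit(rows):
    docs, vocab = defaultdict(list), set()
    for text, label in rows:
        toks = text.split()
        docs[label].append(toks)
        vocab.update(toks)
    model = {}
    for label, tok_lists in docs.items():
        counts = Counter(t for toks in tok_lists for t in toks)
        total = sum(counts.values())
        # log prior, per-word Laplace-smoothed log likelihoods, unseen-word score
        model[label] = (math.log(len(tok_lists) / len(rows)),
                        {w: math.log((counts[w] + 1) / (total + len(vocab)))
                         for w in vocab},
                        math.log(1 / (total + len(vocab))))
    return model

def predict(model, text):
    def score(label):
        prior, logp, unk = model[label]
        return prior + sum(logp.get(t, unk) for t in text.split())
    return max(model, key=score)

model = fit(train)
print(predict(model, "invoice date"))   # → "invoice header"
```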

Statistical Analysis for Feature Subset Selection Procedures

  • Kim, In-Young;Lee, Sun-Ho;Kim, Sang-Cheol;Rha, Sun-Young;Chung, Hyun-Cheol;Kim, Byung-Soo
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2003.10a
    • /
    • pp.101-106
    • /
    • 2003
  • In this paper, we propose using Hotelling's T2 statistic to detect a set of differentially expressed (DE) genes in colorectal cancer, based on their expression levels in tumor tissue compared with normal tissue, and to evaluate its predictive power, which lets us rank genes for the development of biomarkers for population screening of colorectal cancer. We compared the prediction rates based on the DE genes selected by Hotelling's T2 statistic and by the univariate t statistic, using various prediction methods: regularized discriminant analysis and a support vector machine. The results show that the prediction rate based on T2 is better than that of the univariate t. This implies that it may not be sufficient to look at each gene in isolation, and that evaluating combinations of genes reveals interesting information that would not be discovered otherwise.
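The two-sample Hotelling's T2 statistic the abstract relies on can be computed directly for a pair of genes (a 2x2 pooled covariance keeps the matrix inverse explicit). The expression values below are fabricated for illustration only.

```python
def mean(xs):
    return sum(xs) / len(xs)

def pooled_cov(a, b):
    # 2x2 pooled covariance and group means for two groups of 2-feature samples
    n1, n2 = len(a), len(b)
    ma = [mean([p[i] for p in a]) for i in range(2)]
    mb = [mean([p[i] for p in b]) for i in range(2)]
    s = [[0.0, 0.0], [0.0, 0.0]]
    for grp, m in ((a, ma), (b, mb)):
        for p in grp:
            for i in range(2):
                for j in range(2):
                    s[i][j] += (p[i] - m[i]) * (p[j] - m[j])
    return [[v / (n1 + n2 - 2) for v in row] for row in s], ma, mb

def hotelling_t2(a, b):
    # T2 = (n1*n2 / (n1+n2)) * d' S^-1 d, with d the mean difference vector
    s, ma, mb = pooled_cov(a, b)
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    quad = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
    n1, n2 = len(a), len(b)
    return (n1 * n2 / (n1 + n2)) * quad

# invented expression levels for one gene pair in tumor vs. normal tissue
tumor  = [(2.0, 3.1), (2.4, 3.3), (1.9, 2.8), (2.2, 3.0)]
normal = [(1.0, 1.2), (1.3, 1.1), (0.8, 1.4), (1.1, 1.0)]
print(hotelling_t2(tumor, normal))
```

Unlike two separate univariate t tests, T2 also accounts for the correlation between the two genes, which is the "combinations of genes" point the abstract makes.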


Classification of Ground-Glass Opacity Nodules with Small Solid Components using Multiview Images and Texture Analysis in Chest CT Images (흉부 CT 영상에서 다중 뷰 영상과 텍스처 분석을 통한 고형 성분이 작은 폐 간유리음영 결절 분류)

  • Lee, Seon Young;Jung, Julip;Lee, Han Sang;Hong, Helen
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.7
    • /
    • pp.994-1003
    • /
    • 2017
  • Ground-glass opacity nodules (GGNs) in chest CT images are associated with lung cancer, and their malignancy rate depends on whether a solid component is present in the nodule. In this paper, we propose a method to classify pure GGNs and part-solid GGNs with solid components of 5 mm or smaller, using multiview images and texture analysis. We extracted 1,521 features from the GGNs segmented from chest CT images and classified them using an SVM model built on features chosen, through a feature selection method, to distinguish pure GGNs from part-solid GGNs. Our method achieved 85% accuracy using the SVM classifier with the top 10 features selected from the multiview images.
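The "top-k features before classification" step can be sketched with a simple t-like separation score. This is only an assumed stand-in for the paper's unstated selection method, and the tiny feature rows are invented (the paper ranks 1,521 texture features and classifies with an SVM).

```python
# Rank features by |mean difference| / pooled std and keep the top k.
def t_score(col_a, col_b):
    ma, mb = sum(col_a) / len(col_a), sum(col_b) / len(col_b)
    var = (sum((x - ma) ** 2 for x in col_a) + sum((x - mb) ** 2 for x in col_b)) \
          / (len(col_a) + len(col_b) - 2) or 1e-9   # guard against zero variance
    return abs(ma - mb) / var ** 0.5

def select_top_k(class_a, class_b, k):
    n_feat = len(class_a[0])
    scores = [(t_score([r[i] for r in class_a], [r[i] for r in class_b]), i)
              for i in range(n_feat)]
    return sorted(i for _, i in sorted(scores, reverse=True)[:k])

pure       = [[0.1, 5.0, 0.2], [0.2, 5.1, 0.1]]   # feature 1 separates classes
part_solid = [[0.1, 9.0, 0.2], [0.2, 9.2, 0.1]]
print(select_top_k(pure, part_solid, 1))          # → [1]
```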

A Novel Multifocus Image Fusion Algorithm Based on Nonsubsampled Contourlet Transform

  • Liu, Cuiyin;Cheng, Peng;Chen, Shu-Qing;Wang, Cuiwei;Xiang, Fenghong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.3
    • /
    • pp.539-557
    • /
    • 2013
  • A novel multifocus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper. To retain the focusing properties and visual information of the source images in the fused image, while remaining sensitive to human visual perception, a local multidirection variance (LEOV) fusion rule is proposed for the lowpass subband coefficients. To introduce more visual saliency, a modified local contrast is defined. In addition, according to the distribution of the highpass subband coefficients, a direction vector is proposed to constrain the modified local contrast and construct a new fusion rule for highpass subband coefficient selection. The NSCT is a flexible multiscale, multidirection, shift-invariant tool for image decomposition, which can be implemented via the à trous algorithm. The proposed NSCT-based fusion algorithm not only prevents artifacts and errors from being introduced into the fused image, but also eliminates the 'block effect' and 'frequency aliasing' phenomena. Experimental results show that the proposed method achieves better fusion results, in contrast and clarity, than wavelet-based and contourlet-based fusion methods.
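The core variance-driven selection idea can be shown on raw pixels: at each position, take the value from whichever source is locally "busier". This is only a simplified analogue; the paper's rules operate on NSCT subband coefficients with the modified local contrast, and the two tiny images below are invented.

```python
# For each pixel, keep the value from the source whose 3x3 neighborhood
# has the higher local variance (a proxy for being in focus).
def local_variance(img, y, x):
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(len(img), y + 2))
            for i in range(max(0, x - 1), min(len(img[0]), x + 2))]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def fuse(a, b):
    return [[a[y][x] if local_variance(a, y, x) >= local_variance(b, y, x)
             else b[y][x]
             for x in range(len(a[0]))] for y in range(len(a))]

sharp   = [[10, 200, 10], [200, 10, 200], [10, 200, 10]]   # high local contrast
blurred = [[90, 100, 90], [100, 90, 100], [90, 100, 90]]
print(fuse(sharp, blurred))
```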

A Comparative Study on Feature Selection and Classification Methods Using Closed Frequent Patterns Mining (닫힌 빈발 패턴을 기반으로 한 특징 선택과 분류방법 비교)

  • Zhang, Lei;Jin, Cheng Hao;Ryu, Keun Ho
    • Annual Conference of KIPS
    • /
    • 2010.11a
    • /
    • pp.148-151
    • /
    • 2010
  • Classification is one of the best-known data mining techniques, and includes methods such as decision trees, SVM (Support Vector Machine), and ANN (Artificial Neural Network). From multivariate observations belonging to several known, mutually exclusive groups, classification builds a model of the characteristics of each group, and then decides to which group a new observation of unknown membership should be assigned. Classification normally assumes that the feature space is well represented. In real applications, however, a feature space composed of single features is often not discriminative enough, so classification performs poorly. As a remedy, this paper studies classification based on closed frequent patterns, which carry rich information while losing none of the information in the frequent patterns. In our experiments, we performed meaningful feature selection using the ${\chi}^2$ (chi-square) and information gain attribute selection measures. As a result, feature selection with these measures yielded better classification performance than classifiers such as C4.5 and SVM.
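The chi-square attribute selection measure used above reduces, for a binary feature against a binary class, to a 2x2 contingency test. The feature/label vectors below are invented toy data.

```python
# Chi-square score of a binary feature against a binary class label,
# as used for picking informative pattern features before classification.
def chi_square(feature, label):
    n = len(feature)
    obs = [[0, 0], [0, 0]]            # observed 2x2 contingency counts
    for f, c in zip(feature, label):
        obs[f][c] += 1
    row = [sum(obs[0]), sum(obs[1])]
    col = [obs[0][0] + obs[1][0], obs[0][1] + obs[1][1]]
    # sum of (observed - expected)^2 / expected over the four cells
    return sum((obs[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(2) for j in range(2))

label      = [0, 0, 0, 1, 1, 1]
relevant   = [0, 0, 0, 1, 1, 1]   # tracks the class perfectly
irrelevant = [0, 1, 0, 1, 0, 1]   # independent of the class
print(chi_square(relevant, label), chi_square(irrelevant, label))
```

A higher score means stronger dependence between feature and class, so ranking by this score and keeping the top features is the selection step the abstract describes.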

Comparative Analysis of Intrusion Detection Attack Based on Machine Learning Classifiers

  • Surafel Mehari;Anuja Kumar Acharya
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.10
    • /
    • pp.115-124
    • /
    • 2024
  • Today, information travels from place to place over network communication technology, so networked systems require a highly secure environment. The main strategy for securing this environment is to correctly identify each packet and detect whether it is malicious or whether any illegal activity has occurred in the network. This is the job of an intrusion detection system (IDS): a security technology designed to detect intrusions and automatically alert or notify a responsible person. However, building an efficient IDS faces several challenges, namely false detections and data with a large number of features. Many researchers now use machine learning techniques to overcome these limitations and to increase the accuracy with which packets are identified as normal or malicious. Many machine-learning techniques have been applied to intrusion detection, but the question is which classifiers are best suited to the problem; choosing appropriate techniques is required to improve the accuracy of an intrusion detection system. In this work, three machine-learning classifiers are analyzed: Support Vector Machine, Naïve Bayes, and K-Nearest Neighbor. The algorithms were tested on the NSL-KDD dataset using a combination of chi-square and extra-trees feature selection, with Python used to implement, analyze, and evaluate the classifiers. Experimental results show that the K-Nearest Neighbor classifier outperforms the other methods in categorizing packets as normal or malicious.
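The winning classifier, K-Nearest Neighbor, is simple enough to sketch in full. The feature rows below are invented miniature stand-ins for NSL-KDD records, and the feature selection stage is omitted.

```python
from collections import Counter

# Minimal k-NN packet classifier: label a query by majority vote of its
# k nearest training records under Euclidean distance.
def knn_predict(train, query, k=3):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbors = sorted(train, key=lambda row: dist(row[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# toy (features, label) records; real NSL-KDD rows have dozens of features
train = [
    ((0.1, 0.0, 0.2), "normal"),    ((0.2, 0.1, 0.1), "normal"),
    ((0.0, 0.2, 0.0), "normal"),    ((0.9, 0.8, 1.0), "malicious"),
    ((1.0, 0.9, 0.8), "malicious"), ((0.8, 1.0, 0.9), "malicious"),
]
print(knn_predict(train, (0.95, 0.9, 0.9)))   # → "malicious"
```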

Runtime Prediction Based on Workload-Aware Clustering (병렬 프로그램 로그 군집화 기반 작업 실행 시간 예측모형 연구)

  • Kim, Eunhye;Park, Ju-Won
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.38 no.3
    • /
    • pp.56-63
    • /
    • 2015
  • Several fields of science demand support for large-scale workflows requiring thousands of CPU cores or more. To support such large-scale scientific workflows, high-capacity parallel systems such as supercomputers are widely used. To increase the utilization of these systems, most schedulers use a backfilling policy: small jobs are moved ahead to fill holes in the schedule as long as large jobs are not delayed. Since backfilling needs an estimate of each job's runtime, most parallel systems rely on the user's estimated runtime, which turns out to be extremely inaccurate because users overestimate their jobs' runtimes. In this paper, we therefore propose a novel system for runtime prediction based on workload-aware clustering, with the goal of improving prediction performance. The proposed method for predicting the runtime of parallel applications consists of three main phases. First, feature selection based on factor analysis identifies the important input features. Next, the history data are clustered with a self-organizing map, followed by hierarchical clustering to find the cluster boundaries from the weight vectors. Finally, prediction models are constructed using support vector regression on the clustered workload data. Multiple prediction models, one per clustered data pattern, can reduce the error rate compared with a single model over the whole data. In the experiments, we use workload logs from parallel systems (iPSC, LANL-CM5, SDSC-Par95, SDSC-Par96, and CTC-SP2) to evaluate the effectiveness of our approach. Experimental results show that, compared with other techniques, the proposed method improves accuracy by up to 69.08%.
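The central claim, that one predictor per workload cluster beats a single global predictor, can be demonstrated with a deliberately simplified sketch: the SOM/hierarchical clustering is replaced by a threshold on core count, SVR by per-cluster means, and the job records are invented.

```python
# (cores, runtime-in-seconds) records for two workload patterns:
# small debug jobs and large production jobs.
jobs = [(8, 100.0), (8, 110.0), (8, 90.0),
        (512, 900.0), (512, 950.0), (512, 850.0)]

def mae(pred, data):
    # mean absolute error of a runtime predictor over the job records
    return sum(abs(pred(cores) - rt) for cores, rt in data) / len(data)

# single global model: predict the overall mean runtime for every job
global_mean = sum(rt for _, rt in jobs) / len(jobs)

# per-cluster models: mean runtime within each workload cluster
small = [rt for c, rt in jobs if c < 100]
large = [rt for c, rt in jobs if c >= 100]
cluster_pred = lambda c: (sum(small) / len(small) if c < 100
                          else sum(large) / len(large))

print(mae(lambda c: global_mean, jobs), mae(cluster_pred, jobs))
```

Because the two workload patterns have very different typical runtimes, the per-cluster predictors track each pattern closely while the global mean sits far from both.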

A Study on the Development of Search Algorithm for Identifying the Similar and Redundant Research (유사과제파악을 위한 검색 알고리즘의 개발에 관한 연구)

  • Park, Dong-Jin;Choi, Ki-Seok;Lee, Myung-Sun;Lee, Sang-Tae
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.11
    • /
    • pp.54-62
    • /
    • 2009
  • To avoid redundant investment during the project selection process, it is necessary to check whether a submitted research topic has already been proposed or carried out at another institution. This is possible through search engines that adopt keyword-matching algorithms based on Boolean techniques over a nationwide research-results database. Even though the accuracy and speed of information retrieval have improved, they still have fundamental limits caused by keyword matching. This paper examines an implemented TF-IDF-based algorithm and presents an experiment in which a search engine retrieves documents that are similar or redundant with respect to a research proposal and ranks them by priority. In addition to the generic TF-IDF algorithm, feature weighting and K-Nearest Neighbors classification are implemented. Documents extracted from the NDSL (National Digital Science Library) web directory service are used to test the algorithm.
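The generic TF-IDF ranking step the abstract builds on can be sketched as follows; the three archived "proposals" and the query are invented, and the paper's feature weighting and k-NN steps are omitted.

```python
import math
from collections import Counter

# Bare-bones TF-IDF ranking of archived proposals against a new submission.
docs = ["deep learning rainfall prediction",
        "rainfall prediction using neural networks",
        "bridge vibration sensor analysis"]

def tfidf(text, idf):
    # term frequency times inverse document frequency, per token
    tf = Counter(text.split())
    return {w: c * idf.get(w, 0.0) for w, c in tf.items()}

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vocab = {w for d in docs for w in d.split()}
idf = {w: math.log(len(docs) / sum(w in d.split() for d in docs)) for w in vocab}

query = tfidf("rainfall prediction model", idf)
ranked = sorted(docs, key=lambda d: cosine(query, tfidf(d, idf)), reverse=True)
print(ranked[0])
```

Documents sharing the query's rarer terms rank first, while documents with no overlapping terms score zero, which is exactly the similar-vs-unrelated separation a redundancy check needs.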