• Title/Summary/Keyword: classification algorithms


A Comparative Performance Analysis of Blocking Artifact Reduction Algorithms (블록화 현상 제거 알고리듬의 성능 비교 분석)

  • 소현주;장익훈;김남철
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.907-910
    • /
    • 1998
  • In this paper, we present a comparative performance analysis of several blocking artifact reduction algorithms. For the analysis, we propose a block boundary region classification algorithm that classifies each horizontal and vertical block boundary into one of four regions using the brightness change near the boundary. The PSNR performance of each algorithm is compared, as is the MSE for each block boundary region. Experimental results show that the wavelet transform based blocking artifact reduction algorithms perform better than the other methods.
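
The abstract does not define the four boundary regions, but the idea of labeling a block boundary by the brightness change around it can be sketched as follows; the region names, thresholds, and decision rules here are purely illustrative assumptions, not the paper's definitions.

```python
def classify_boundary(left, right, flat_thresh=4, edge_thresh=20):
    """Assign one of four hypothetical region labels to a block boundary,
    given pixel brightness values on its left and right sides.
    Labels and thresholds are illustrative, not the paper's definitions."""
    def activity(pixels):
        # brightness variation within one side of the boundary
        return max(pixels) - min(pixels)
    step = abs(left[-1] - right[0])  # brightness jump across the boundary
    la, ra = activity(left), activity(right)
    if la < flat_thresh and ra < flat_thresh:
        # two flat sides: a small jump suggests a blocking artifact, a large one a real edge
        return "smooth" if step < edge_thresh else "step"
    if la < flat_thresh or ra < flat_thresh:
        return "mixed"      # one flat side, one textured side
    return "textured"       # both sides active; artifacts are visually masked here
```

A region-wise MSE, as in the paper's comparison, would then be computed separately over the boundaries falling into each label.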


SET-VALUED QUASI VARIATIONAL INCLUSIONS

  • Noor, Muhammad Aslam
    • Journal of applied mathematics & informatics
    • /
    • v.7 no.1
    • /
    • pp.101-113
    • /
    • 2000
  • In this paper, we introduce and study a new class of variational inclusions, called the set-valued quasi variational inclusions. The resolvent operator technique is used to establish the equivalence between the set-valued variational inclusions and the fixed point problem. This equivalence is used to study the existence of a solution and to suggest a number of iterative algorithms for solving the set-valued variational inclusions. We also study the convergence criteria of these algorithms.
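
In one standard form of the resolvent technique mentioned above (the paper's set-valued quasi variational formulation is more general), the equivalence reads:

```latex
% For a maximal monotone operator A with resolvent J_A = (I + \rho A)^{-1}, \rho > 0:
u \text{ solves } 0 \in T(u) + A(u)
\iff u = J_A\!\left(u - \rho\, T(u)\right),
% which suggests the iterative scheme
u_{n+1} = J_A\!\left(u_n - \rho\, T(u_n)\right).
```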

Performance Comparison of Welding Flaws Classification using Ultrasonic Nondestructive Inspection Technique (초음파 비파괴 검사기법에 의한 용접결함 분류성능 비교)

  • 김재열;유신;김창현;송경석;양동조;김유홍
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2004.10a
    • /
    • pp.280-285
    • /
    • 2004
  • In this study, we made a comparative study of the backpropagation neural network, the probabilistic neural network, the Bayesian classifier, and the perceptron as shape recognition algorithms for welding flaws. For this purpose, the same variables were applied to all four algorithms. Here, the feature variables consist of the time domain signal itself and the frequency domain signal itself. Through this process, we confirmed the advantages and disadvantages of the four algorithms and identified application methods for each.
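
Of the four classifiers compared above, the perceptron is the simplest; a minimal binary version is sketched below. The features, labels, and learning rate are illustrative assumptions, and the paper's welding-flaw setup is not reproduced.

```python
def train_perceptron(X, y, epochs=20, lr=1.0):
    """Train a minimal binary perceptron.
    X: list of feature vectors, y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if t * score <= 0:  # misclassified (or on the boundary): update
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
    return w, b
```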


A Study on Classifying Sea Ice of the Summer Arctic Ocean Using Sentinel-1 A/B SAR Data and Deep Learning Models (Sentinel-1 A/B 위성 SAR 자료와 딥러닝 모델을 이용한 여름철 북극해 해빙 분류 연구)

  • Jeon, Hyungyun;Kim, Junwoo;Vadivel, Suresh Krishnan Palanisamy;Kim, Duk-jin
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_1
    • /
    • pp.999-1009
    • /
    • 2019
  • The importance of high-resolution sea ice maps of the Arctic Ocean is increasing due to the possibility of pioneering North Pole routes and the necessity of precise climate prediction models. In this study, sea ice classification algorithms based on two deep learning models were examined using Sentinel-1 A/B SAR data to generate high-resolution sea ice classification maps. Based on current ice charts, training data sets with three classes (Open Water, First Year Ice, Multi Year Ice) were generated by Arctic sea ice and remote sensing experts. Ten sea ice classification algorithms were generated by combining two deep learning models (i.e. Simple CNN and Resnet50) and five cases of input bands, including incidence angles and thermal noise corrected HV bands. For the ten algorithms, analyses were performed by comparing classification results with ground truth points. A confusion matrix and Cohen's kappa coefficient were produced for the case that showed the best result. Furthermore, the classification results were compared with those of the Maximum Likelihood Classifier, which has traditionally been employed to classify sea ice. In conclusion, the Convolutional Neural Network case, which has two convolution layers and two max pooling layers, with HV and incidence angle input bands shows a classification accuracy of 96.66% and a Cohen's kappa coefficient of 0.9499. All deep learning cases show better classification accuracy than the Maximum Likelihood Classifier.
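
Cohen's kappa, used above to score the best case (0.9499), corrects raw accuracy for chance agreement and can be computed directly from the confusion matrix:

```python
def cohens_kappa(cm):
    """Cohen's kappa coefficient from a square confusion matrix
    (rows: true classes, columns: predicted classes)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n  # observed agreement (accuracy)
    # chance agreement from the row and column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / n ** 2
    return (po - pe) / (1 - pe)
```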

Comparison of Data Mining Classification Algorithms for Categorical Feature Variables (범주형 자료에 대한 데이터 마이닝 분류기법 성능 비교)

  • Sohn, So-Young;Shin, Hyung-Won
    • IE interfaces
    • /
    • v.12 no.4
    • /
    • pp.551-556
    • /
    • 1999
  • In this paper, we compare the performance of three data mining classification algorithms (neural network, decision tree, logistic regression) in consideration of various characteristics of categorical input and output data. A $2^{4-1}\times 3$ fractional factorial design is used to simulate the comparison situation, where the factors are (1) the categorical ratio of the input variables, (2) the complexity of the functional relationship between the output and input variables, (3) the size of the randomness in the relationship, (4) the categorical ratio of the output variable, and (5) the classification algorithm. Experimental results indicate the following: the decision tree performs better than the others when the relationship between the output and input variables is simple, while logistic regression is better when it is complex; and the neural network appears to be a better choice than the others when the randomness in the relationship is relatively large. We also use a Taguchi design to improve the practicality of our results by treating the relationship between the output and input variables as a noise factor. As a result, the classification accuracy of the neural network and the decision tree turns out to be higher than that of logistic regression when the categorical proportion of the output variable is even.
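
The $2^{4-1}$ half-fraction over the four two-level data factors can be generated in a few lines; the generator D = ABC is the usual resolution-IV choice, assumed here since the abstract does not state it.

```python
from itertools import product

def fractional_factorial_2_4_1():
    """Eight runs of a 2^(4-1) fractional factorial design with
    defining relation I = ABCD (i.e. generator D = ABC)."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append((a, b, c, a * b * c))  # D aliased with the ABC interaction
    return runs
```

Crossing these eight runs with the three classification algorithms gives the 24-run comparison layout described above.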


Comparing Classification Accuracy of Ensemble and Clustering Algorithms Based on Taguchi Design (다구찌 디자인을 이용한 앙상블 및 군집분석 분류 성능 비교)

  • Shin, Hyung-Won;Sohn, So-Young
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.27 no.1
    • /
    • pp.47-53
    • /
    • 2001
  • In this paper, we compare the classification performance of several ensemble and clustering algorithms (Data Bagging, Variable Selection Bagging, Parameter Combining, Clustering) with that of logistic regression in consideration of various characteristics of the input data. The four factors used to simulate the logistic model are (1) the correlation among input variables, (2) the variance of the observations, (3) the training data size, and (4) the input-output function. In view of the unknown relationship between the input and output function, we use a Taguchi design to improve the practicality of our results by treating it as a noise factor. Experimental results indicate the following: when the level of the variance is medium, Bagging & Parameter Combining performs worse than Logistic Regression, Variable Selection Bagging, and Clustering. However, the classification performances of Logistic Regression, Variable Selection Bagging, Bagging, and Clustering are not significantly different when the variance of the input data is either small or large. When there is strong correlation among the input variables, Variable Selection Bagging outperforms both Logistic Regression and Parameter Combining. In general, the Parameter Combining algorithm, to our disappointment, appears to be the worst.
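
Data Bagging, the first of the ensembles compared above, trains one base learner per bootstrap resample and majority-votes the predictions. A minimal generic sketch (the `base_fit` interface and the parameter values are illustrative assumptions, not the paper's setup):

```python
import random

def bagging_predict(X, y, x_new, base_fit, n_bags=25, seed=0):
    """Data Bagging sketch: fit the same base learner on n_bags bootstrap
    resamples of (X, y) and return the majority-vote prediction for x_new.
    base_fit: function (X, y) -> predictor, where predictor(x) -> label."""
    rng = random.Random(seed)
    n = len(X)
    votes = []
    for _ in range(n_bags):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample with replacement
        predictor = base_fit([X[i] for i in idx], [y[i] for i in idx])
        votes.append(predictor(x_new))
    return max(set(votes), key=votes.count)  # majority vote
```

Variable Selection Bagging would additionally resample the input variables used by each bag, rather than only the rows.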


Discretization of Continuous-Valued Attributes for Classification Learning (분류학습을 위한 연속 애트리뷰트의 이산화 방법에 관한 연구)

  • Lee, Chang-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.6
    • /
    • pp.1541-1549
    • /
    • 1997
  • Many classification algorithms require that training examples contain only discrete values. In order to use these algorithms when some attributes have continuous numeric values, the numeric attributes must be converted into discrete ones. This paper describes a new way of discretizing numeric values using information theory. Our method is context-sensitive in the sense that it takes into account the value of the target attribute. The amount of information each interval gives to the target attribute is measured using the Hellinger divergence, and the interval boundaries are chosen so that the intervals contain as nearly equal amounts of information as possible. To compare our discretization method with several current methods, a number of popular classification data sets are selected for the experiments. We use the back propagation algorithm and ID3 as classification tools to compare the accuracy of our method with that of the others.
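
A rough sketch of the equal-information idea: weight each distinct attribute value by the Hellinger divergence of its class distribution from the global one, then place boundaries so each interval carries about the same cumulative weight. The weighting scheme and greedy cut placement below are simplifying assumptions, not the paper's exact procedure.

```python
import math
from collections import Counter, defaultdict

def hellinger(p, q):
    """Hellinger divergence between two discrete distributions (same support)."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q)))

def equal_information_cuts(samples, k):
    """samples: list of (numeric_value, class_label); k: number of intervals.
    Returns up to k-1 cut values placed greedily so that each interval
    accumulates roughly total_information / k."""
    classes = sorted({c for _, c in samples})
    def class_dist(pairs):
        counts = Counter(c for _, c in pairs)
        return [counts[c] / len(pairs) for c in classes]
    global_dist = class_dist(samples)
    groups = defaultdict(list)
    for v, c in sorted(samples):
        groups[v].append((v, c))
    # each distinct value weighted by divergence of its class mix from the global mix
    weighted = [(v, hellinger(class_dist(g), global_dist)) for v, g in sorted(groups.items())]
    total = sum(w for _, w in weighted) or 1.0
    cuts, acc = [], 0.0
    for v, w in weighted:
        acc += w
        if acc >= total / k and len(cuts) < k - 1:
            cuts.append(v)
            acc = 0.0
    return cuts
```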


Implementation of simple statistical pattern recognition methods for harmful gases classification using gas sensor array fabricated by MEMS technology (MEMS 기술로 제작된 가스 센서 어레이를 이용한 유해가스 분류를 위한 간단한 통계적 패턴인식방법의 구현)

  • Byun, Hyung-Gi;Shin, Jeong-Suk;Lee, Ho-Jun;Lee, Won-Bae
    • Journal of Sensor Science and Technology
    • /
    • v.17 no.6
    • /
    • pp.406-413
    • /
    • 2008
  • We have implemented simple statistical pattern recognition methods for harmful gas classification using a gas sensor array fabricated by MEMS (Micro Electro Mechanical System) technology. The performance of a pattern recognition method as a gas classifier depends strongly on the choice of pre-processing techniques for the sensor and sensor array signals and on the choice of classification algorithm among the various available techniques. We carried out pre-processing of each sensor's signal as well as the sensor array signals to extract features for each gas. We adopted simple statistical pattern recognition algorithms, PCA (Principal Component Analysis) for visualizing pattern clustering and MLR (Multi-Linear Regression) for real-time system implementation, to classify the harmful gases. Experimental results show that the adopted pattern recognition methods, combined with the pre-processing techniques, achieve good clustering performance and promise straightforward implementation in a real-time sensing system.
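
The PCA step used above for cluster visualization amounts to projecting the pre-processed sensor-array feature vectors onto the top principal components; a minimal sketch (the MLR stage and any sensor-specific pre-processing are omitted):

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project row-vector samples onto the top n_components principal
    components, computed via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)                   # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T           # scores in the principal subspace
```

Plotting the two score columns against each other gives the kind of cluster visualization described in the abstract.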

Missing Value Imputation based on Locally Linear Reconstruction for Improving Classification Performance (분류 성능 향상을 위한 지역적 선형 재구축 기반 결측치 대치)

  • Kang, Pilsung
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.38 no.4
    • /
    • pp.276-284
    • /
    • 2012
  • Classification algorithms generally assume that the data are complete. However, missing values are common in real data sets for various reasons. In this paper, we propose to use locally linear reconstruction (LLR) for missing value imputation to improve classification performance when missing values exist. We first investigate how much missing values degrade classification performance with regard to various missing ratios. Then, we compare the proposed missing value imputation method (LLR) with three well-known single imputation methods over three different classifiers using eight data sets. The experimental results showed that (1) all of the imputation methods, although some of them are very simple, helped to improve the classification accuracy; (2) among the imputation methods, the proposed LLR imputation was the most effective over all missing ratios; and (3) when the missing ratio is relatively high, LLR was outstanding and its classification accuracy was as high as the classification accuracy derived from the complete data set.
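
A simplified sketch of the LLR idea: express the incomplete row as a least-squares combination of its nearest fully-observed rows on the observed columns, then apply the same weights to the neighbors' values in the missing column. The neighbor selection, ridge term, and weight normalization here are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def llr_impute(X, row, col, k=3, reg=1e-6):
    """Impute the missing entry X[row, col] by locally linear reconstruction,
    using the k nearest fully-observed rows measured on the observed columns."""
    obs = [j for j in range(X.shape[1]) if j != col]
    target = X[row, obs]
    # candidate neighbors: rows with no missing values
    cand = [i for i in range(X.shape[0]) if i != row and not np.isnan(X[i]).any()]
    dists = [np.linalg.norm(X[i, obs] - target) for i in cand]
    nn = [cand[i] for i in np.argsort(dists)[:k]]
    N = X[np.ix_(nn, obs)]                     # k x (d-1) neighbor matrix
    # least-squares reconstruction weights, stabilized with a small ridge term
    G = N @ N.T + reg * np.eye(len(nn))
    w = np.linalg.solve(G, N @ target)
    w = w / w.sum()                            # normalize weights to sum to 1
    return float(w @ X[nn, col])               # weighted neighbor values in the missing column
```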

Issues and Empirical Results for Improving Text Classification

  • Ko, Young-Joong;Seo, Jung-Yun
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.2
    • /
    • pp.150-160
    • /
    • 2011
  • Automatic text classification has a long history, and many studies have been conducted in this field. In particular, many machine learning algorithms and information retrieval techniques have been applied to text classification tasks. Even though much technical progress has been made, there is still room for improvement in text classification. In this paper, we discuss the remaining issues. Three improvement issues are presented: automatic training data generation, noisy data treatment, and term weighting and indexing; four actual studies and their empirical results for those issues are introduced. First, the semi-supervised learning technique is applied to text classification to efficiently create training data. For effective noisy data treatment, a noisy data reduction method and a text classifier robust to noisy data are developed as a solution. Finally, the term weighting and indexing technique is revised by reflecting the importance of sentences in the term weight calculation using summarization techniques.
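
The summarization-based term weighting idea above can be illustrated by scaling each term occurrence by the importance of the sentence it appears in; the function below is a hypothetical rendering, with the tokenizer and importance scores as placeholder assumptions.

```python
from collections import Counter

def importance_weighted_tf(sentences, importance):
    """Term frequencies in which each occurrence contributes the importance
    score of its sentence (e.g. from a summarizer) instead of a flat 1."""
    tf = Counter()
    for sentence, weight in zip(sentences, importance):
        for term in sentence.lower().split():  # placeholder tokenizer
            tf[term] += weight
    return tf
```

These weighted frequencies would then replace the raw counts in a TF-IDF style indexing scheme.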