• Title/Summary/Keyword: K-NN Classification Model

Performance Evaluation of Car Model Recognition System Using HOG and Artificial Neural Network (HOG와 인공신경망을 이용한 자동차 모델 인식 시스템 성능 분석)

  • Park, Ki-Wan;Bang, Ji-Sung;Kim, Byeong-Man
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.21 no.5
    • /
    • pp.1-10
    • /
    • 2016
  • In this paper, a car model recognition system using image processing and machine learning is proposed and its performance is evaluated. The system recognizes the front of the car, because the front differs for every car model and manufacturer and is difficult to remodel. The proposed method extracts HOG features from a training data set and then builds a classification model from those features. When a user takes a photo of the front of a car, HOG features are extracted from the photo and used to determine the car model with the trained classification model. Experimental results show a high average recognition rate of 98%.
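As an illustration of the pipeline this abstract describes, the following is a minimal Python sketch of HOG feature extraction followed by a classifier, assuming scikit-image and scikit-learn; the image size, HOG parameters, and choice of a linear SVM are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a HOG-plus-classifier pipeline; image size, HOG
# parameters, and the classifier choice are illustrative assumptions.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_vector(image, size=(128, 128)):
    """Resize a grayscale car-front image and extract its HOG descriptor."""
    img = resize(image, size, anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# train_images / train_labels are assumed to be lists of grayscale arrays
# and their car-model labels loaded elsewhere.
def train_car_model_classifier(train_images, train_labels):
    X = np.array([hog_vector(img) for img in train_images])
    clf = LinearSVC()          # any classifier could be substituted here
    clf.fit(X, train_labels)
    return clf

def predict_car_model(clf, photo):
    """Classify a new photo of a car front with the trained model."""
    return clf.predict([hog_vector(photo)])[0]
```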

Simultaneous Motion Recognition Framework using Data Augmentation based on Muscle Activation Model (근육 활성화 모델 기반의 데이터 증강을 활용한 동시 동작 인식 프레임워크)

  • Sejin Kim;Wan Kyun Chung
    • The Journal of Korea Robotics Society
    • /
    • v.19 no.2
    • /
    • pp.203-212
    • /
    • 2024
  • Simultaneous motion is essential in the activities of daily living (ADL). For motion intention recognition, surface electromyogram (sEMG) signals and the corresponding motion labels are necessary. However, this process is time-consuming and may increase the burden on the user. Therefore, we propose a simultaneous motion recognition framework using data augmentation based on a muscle activation model. The model consists of multiple point sources to be optimized, while the number of point sources and their initial parameters are determined automatically. The experimental results show that the framework generates data similar to the real measurements, which is quantified with two metrics: the structural similarity index measure (SSIM) and the mean squared error (MSE). Furthermore, with a k-nearest neighbor (k-NN) or support vector machine (SVM) classifier, the classification accuracy is also enhanced by the proposed framework. From these results, it can be concluded that the generalization property of the training data is improved and the classification accuracy increases accordingly. We expect this framework to reduce the user's burden of excessive and time-consuming data acquisition.
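The sketch below illustrates, under assumed array shapes and variable names, how generated data could be scored against real recordings with SSIM and MSE and then pooled with the real data to train a k-NN classifier, as the abstract outlines; it does not reproduce the muscle activation model itself.

```python
# Minimal sketch: score augmented sEMG-like data against real recordings with
# SSIM/MSE, then train k-NN on the pooled data. Shapes and names are assumed.
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error
from sklearn.neighbors import KNeighborsClassifier

def augmentation_quality(real_map, generated_map):
    """Compare a real and a generated muscle-activation map (2D arrays)."""
    ssim = structural_similarity(real_map, generated_map,
                                 data_range=real_map.max() - real_map.min())
    mse = mean_squared_error(real_map, generated_map)
    return ssim, mse

# X_real/y_real: measured feature vectors and motion labels;
# X_aug/y_aug: augmented samples produced by the muscle-activation model.
def train_with_augmentation(X_real, y_real, X_aug, y_aug, k=5):
    X = np.vstack([X_real, X_aug])
    y = np.concatenate([y_real, y_aug])
    return KNeighborsClassifier(n_neighbors=k).fit(X, y)
```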

A dominant hyperrectangle generation technique of classification using IG partitioning (정보이득 분할을 이용한 분류기법의 지배적 초월평면 생성기법)

  • Lee, Hyeong-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.1
    • /
    • pp.149-156
    • /
    • 2014
  • NGE (Nested Generalized Exemplar) is a distance-based classification method that uses a matching rule; it can improve performance on noisy data while reducing the size of the model. However, hyperrectangles that cross or overlap each other during learning have been noted as factors that degrade its performance. In this paper, we propose the DHGen (Dominant Hyperrectangle Generation) algorithm, which avoids overlapping and crossing between hyperrectangles and splits mixed hyperrectangles using interval weights based on mutual information. DHGen improves classification performance and reduces the number of hyperrectangles by processing the training set incrementally. On benchmark data sets from the UCI Machine Learning Repository, the proposed DHGen shows classification performance comparable to k-NN and better results than the EACH system, which implements the NGE theory.
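For context, the following is a minimal sketch of the NGE-style matching rule the abstract refers to: a query point has zero distance to a hyperrectangle that contains it, and otherwise the distance to its nearest face. The DHGen weighting and splitting steps are not reproduced.

```python
# Minimal sketch of the nested-generalized-exemplar matching rule: distance
# from a point to an axis-aligned hyperrectangle, used to pick the nearest one.
import numpy as np

def hyperrectangle_distance(x, lower, upper):
    """Euclidean distance from point x to the box [lower, upper]; 0 if inside."""
    x, lower, upper = map(np.asarray, (x, lower, upper))
    gap = np.maximum(lower - x, 0) + np.maximum(x - upper, 0)
    return np.linalg.norm(gap)

def classify(x, rectangles):
    """rectangles: list of (lower, upper, label); return the nearest label."""
    return min(rectangles,
               key=lambda r: hyperrectangle_distance(x, r[0], r[1]))[2]
```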

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.239-251
    • /
    • 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction. However, these statistical methods have not produced superior performance. In recent years, machine learning techniques have been widely used in stock market prediction, including artificial neural networks, SVMs, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may have to take more cases into account even when fewer cases are applicable to the subject. Second, it may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through k-nearest neighbor and compares the predictability of k-nearest neighbor with the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted by dividing the learning dataset into two types. For the prediction of the next day's closing price, we used four variables: the opening, daily high, daily low, and daily closing prices. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process; in the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data cover January 1, 2018 to August 31, 2018 for both experiments. We compared the performance of k-NN with the random walk model using the two learning datasets. With the smaller learning dataset, the mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN; with the larger learning dataset, the MAPE was 1.3497 for the random walk model and 1.2928 for k-NN. These results show that predictive power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. Also, to produce better results, it is recommended that k-nearest neighbor find its nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
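A minimal sketch of the experiment's structure, assuming a NumPy price array and scikit-learn: predict the next day's close from the four daily variables with k-NN regression and compare the MAPE with a random-walk baseline; the split index and k are placeholders, not the study's settings.

```python
# Minimal sketch: k-NN regression on (open, high, low, close) vs. a random-walk
# baseline, both scored with MAPE. Data loading and the split are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_percentage_error

def knn_vs_random_walk(prices, n_train, k=5):
    """prices: (n, 4) array of daily [open, high, low, close]."""
    X, y = prices[:-1], prices[1:, 3]          # features today, close tomorrow
    X_tr, y_tr = X[:n_train], y[:n_train]
    X_te, y_te = X[n_train:], y[n_train:]

    knn = KNeighborsRegressor(n_neighbors=k).fit(X_tr, y_tr)
    mape_knn = mean_absolute_percentage_error(y_te, knn.predict(X_te))

    # Random walk: tomorrow's close is predicted to equal today's close.
    mape_rw = mean_absolute_percentage_error(y_te, X_te[:, 3])
    return mape_knn * 100, mape_rw * 100       # expressed as percentages
```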

Prediction of Blast Vibration in Quarry Using Machine Learning Models (머신러닝 모델을 이용한 석산 개발 발파진동 예측)

  • Jung, Dahee;Choi, Yosoon
    • Tunnel and Underground Space
    • /
    • v.31 no.6
    • /
    • pp.508-519
    • /
    • 2021
  • In this study, a model was developed to predict the peak particle velocity (PPV) that affects people and the surrounding environment during blasting. Four machine learning models using the k-nearest neighbors (kNN), classification and regression tree (CART), support vector regression (SVR), and particle swarm optimization (PSO)-SVR algorithms were developed and compared with each other to predict the PPV. Mt. Yogmang, located in Changwon-si, Gyeongsangnam-do, was selected as the study area, and 1,048 blasting records were acquired to train the machine learning models. The blasting data consisted of hole length, burden, spacing, maximum charge per delay, powder factor, number of holes, ratio of emulsion, monitoring distance, and PPV. To evaluate the performance of the trained models, the mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE) were used. The PSO-SVR model showed superior performance, with MAE, MSE, and RMSE of 0.0348, 0.0021, and 0.0458, respectively. Finally, a method was proposed to predict the degree of influence on the surrounding environment using the developed machine learning models.
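The comparison workflow can be sketched as below with scikit-learn's kNN, CART, and SVR regressors and the three reported error metrics; the PSO tuning of SVR and the actual blasting features are omitted, so this is only an illustration of the procedure.

```python
# Minimal sketch: fit kNN, CART, and SVR regressors for PPV and report
# MAE/MSE/RMSE for each. Hyperparameters and PSO tuning are not reproduced.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

def compare_ppv_models(X_train, y_train, X_test, y_test):
    models = {
        "kNN": KNeighborsRegressor(n_neighbors=5),
        "CART": DecisionTreeRegressor(),
        "SVR": SVR(kernel="rbf"),
    }
    scores = {}
    for name, model in models.items():
        pred = model.fit(X_train, y_train).predict(X_test)
        mae = mean_absolute_error(y_test, pred)
        mse = mean_squared_error(y_test, pred)
        scores[name] = {"MAE": mae, "MSE": mse, "RMSE": np.sqrt(mse)}
    return scores
```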

Classification of Imbalanced Data Based on MTS-CBPSO Method: A Case Study of Financial Distress Prediction

  • Gu, Yuping;Cheng, Longsheng;Chang, Zhipeng
    • Journal of Information Processing Systems
    • /
    • v.15 no.3
    • /
    • pp.682-693
    • /
    • 2019
  • Traditional classification methods mostly assume that the class distribution of the data is balanced, whereas imbalanced data are widely found in the real world, so it is important to solve the problem of classification with imbalanced data. In the Mahalanobis-Taguchi system (MTS) algorithm, the classification model is constructed from a reference space and a measurement scale derived from a single normal group, which makes it suitable for handling the imbalanced data problem. In this paper, an improved MTS-CBPSO method is constructed by introducing chaotic mapping and a binary particle swarm optimization algorithm, instead of the orthogonal array and signal-to-noise ratio (SNR), to select the valid variables, with G-means, F-measure, and dimensionality reduction regarded as the classification optimization targets. The proposed method is also applied to the financial distress prediction of Chinese listed companies. Compared with the traditional MTS and common classification methods such as SVM, C4.5, and k-NN, the MTS-CBPSO method shows better prediction accuracy and dimensionality reduction.
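A minimal sketch of the MTS idea the abstract builds on: score samples by Mahalanobis distance to a reference space built from the single normal group, and evaluate with G-mean and F-measure; the chaotic-mapping BPSO variable selection is not reproduced, and the decision threshold is an assumption.

```python
# Minimal sketch: Mahalanobis distance to a reference space built from the
# normal group, plus G-mean and F-measure for imbalanced evaluation.
import numpy as np
from sklearn.metrics import f1_score, recall_score

def mahalanobis_scores(X_normal, X):
    """Distance of each row of X to the reference space of the normal group."""
    mu = X_normal.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X_normal, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

def evaluate(y_true, y_pred):
    """y = 1 marks the abnormal (minority, e.g. distressed) class."""
    sensitivity = recall_score(y_true, y_pred, pos_label=1)
    specificity = recall_score(y_true, y_pred, pos_label=0)
    g_mean = np.sqrt(sensitivity * specificity)
    return g_mean, f1_score(y_true, y_pred, pos_label=1)

# Example decision rule (threshold is a hypothetical choice):
# y_pred = (mahalanobis_scores(X_normal, X_test) > threshold).astype(int)
```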

APPLICATION OF NEURAL NETWORK FOR THE CLOUD DETECTION FROM GEOSTATIONARY SATELLITE DATA

  • Ahn, Hyun-Jeong;Ahn, Myung-Hwan;Chung, Chu-Yong
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.34-37
    • /
    • 2005
  • An efficient and robust neural network-based scheme is introduced in this paper to perform automatic cloud detection. Unlike many existing cloud detection schemes, which use thresholding and statistical methods, we used artificial neural network methods, namely multi-layer perceptrons (MLP) with the back-propagation algorithm and radial basis function (RBF) networks, for cloud detection from geostationary satellite images. We used a simple scene (a mixed scene containing only cloud and clear sky). The main results show that the neural networks are able to handle complex atmospheric and meteorological phenomena. The experimental results show that both methods performed well, reaching a classification accuracy of over 90 percent. Moreover, the RBF model was the more effective method for cloud classification.
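The MLP branch of such a scheme can be sketched as follows with scikit-learn, assuming per-pixel feature vectors and binary cloud/clear labels; the RBF network branch and the actual satellite channels used as features are not reproduced here.

```python
# Minimal sketch: an MLP classifier for per-pixel cloud vs. clear-sky labels.
# The input features (satellite channels) are assumptions.
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_cloud_detector(X_pixels, y_labels):
    """X_pixels: per-pixel feature vectors; y_labels: 1 = cloud, 0 = clear."""
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
    )
    return clf.fit(X_pixels, y_labels)
```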

A Study of the Feature Classification and the Predictive Model of Main Feed-Water Flow for Turbine Cycle (주급수 유량의 형상 분류 및 추정 모델에 대한 연구)

  • Yang, Hac Jin;Kim, Seong Kun;Choi, Kwang Hee
    • Journal of Energy Engineering
    • /
    • v.23 no.4
    • /
    • pp.263-271
    • /
    • 2014
  • Corrective thermal performance analysis is required for thermal power plants to determine the performance status of the turbine cycle. We developed a classification method for the main feed-water flow to make a precise correction for performance analysis based on the ASME (American Society of Mechanical Engineers) PTC (Performance Test Code). The classification is based on feature identification of the status of the main feed-water flow. We also developed predictive algorithms for the corrected main feed-water flow using a Support Vector Machine (SVM) model for each classified feature area. The results were compared to estimations using a Neural Network (NN) and Kernel Regression (KR). The feature classification and predictive model of the main feed-water flow provide a more practical method for corrective thermal performance analysis of the turbine cycle.
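A minimal sketch of the classify-then-regress structure the abstract describes: an SVM classifier assigns each sample to a feature area, and a separate SVR model predicts the corrected flow within that area; the feature definitions and the ASME PTC corrections themselves are assumptions left out of the sketch.

```python
# Minimal sketch: classify the feature area with an SVM, then predict the
# corrected main feed-water flow with a per-area SVR model. X, y_flow, and
# y_class are assumed to be NumPy arrays prepared elsewhere.
import numpy as np
from sklearn.svm import SVC, SVR

class ClassifiedFlowPredictor:
    def fit(self, X, y_flow, y_class):
        self.classifier = SVC().fit(X, y_class)           # stage 1: feature area
        self.regressors = {                                # stage 2: per-area SVR
            c: SVR().fit(X[y_class == c], y_flow[y_class == c])
            for c in np.unique(y_class)
        }
        return self

    def predict(self, X):
        areas = self.classifier.predict(X)
        return np.array([
            self.regressors[c].predict(x.reshape(1, -1))[0]
            for c, x in zip(areas, X)
        ])
```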

Determining the optimal number of cases to combine in a case-based reasoning system for eCRM

  • Hyunchul Ahn;Kim, Kyoung-jae;Ingoo Han
    • Proceedings of the KAIS Fall Conference
    • /
    • 2003.11a
    • /
    • pp.178-184
    • /
    • 2003
  • Case-based reasoning (CBR) often shows significant promise for improving the effectiveness of complex and unstructured decision making. Consequently, it has been applied to various problem-solving areas including manufacturing, finance, and marketing. However, the design of appropriate case indexing and retrieval mechanisms to improve the performance of CBR is still a challenging issue. Most previous studies on improving the effectiveness of CBR have focused on the similarity function or on optimization of case features and their weights. However, according to some prior research, finding the optimal k parameter for the k-nearest neighbor (k-NN) is also crucial to improving the performance of a CBR system. Nonetheless, there have been few attempts to optimize the number of neighbors, especially using artificial intelligence (AI) techniques. In this study, we introduce a genetic algorithm (GA) to optimize the number of neighbors to combine. This study applies the new model to a real-world case provided by an online shopping mall in Korea. Experimental results show that a GA-optimized k-NN approach outperforms other AI techniques for purchasing behavior forecasting.
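A minimal sketch of the idea of using a GA to choose the number of neighbors, with cross-validated accuracy as the fitness function; the encoding, operators, and GA parameters here are illustrative assumptions rather than the study's actual design.

```python
# Minimal sketch: a toy genetic algorithm that searches for the k in k-NN that
# maximizes cross-validated accuracy. Operators and parameters are assumptions.
import random
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def ga_optimize_k(X, y, k_max=50, pop_size=10, generations=15):
    def fitness(k):
        return cross_val_score(KNeighborsClassifier(n_neighbors=k),
                               X, y, cv=5).mean()

    population = [random.randint(1, k_max) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]                      # selection
        children = [(random.choice(parents) + random.choice(parents)) // 2
                    for _ in range(pop_size - len(parents))]   # crossover
        children = [max(1, min(k_max, k + random.randint(-2, 2)))
                    for k in children]                          # mutation
        population = parents + children
    return max(population, key=fitness)
```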

Wind Power Pattern Forecasting Based on Projected Clustering and Classification Methods

  • Lee, Heon Gyu;Piao, Minghao;Shin, Yong Ho
    • ETRI Journal
    • /
    • v.37 no.2
    • /
    • pp.283-294
    • /
    • 2015
  • A model that precisely forecasts how much wind power is generated is critical for making decisions on power generation and infrastructure updates. Existing studies have estimated wind power from wind speed using forecasting models such as ANFIS, SMO, k-NN, and ANN. This study applies a projected clustering technique to identify the wind power patterns of wind turbines, profiles the resulting characteristics, and defines hourly and daily power patterns using wind power data collected over a year-long period. The wind power pattern prediction stage uses a time-interval feature, which is essential for producing representative patterns through the projected clustering technique, along with the existing temperature and wind direction inputs to the classifier. During this stage, this feature is applied to the wind speed, which is the most significant input of a forecasting model. As the test results show, nine hourly power patterns and seven daily power patterns are produced for the Korean wind turbines used in this study. When forecasting the hourly and daily power patterns using the temperature, wind direction, and time-interval features for the wind speed, the ANFIS and SMO models show excellent performance.
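The two stages the abstract describes can be sketched as below, with KMeans standing in for the projected clustering algorithm and a k-NN classifier standing in for ANFIS/SMO; the feature set and the number of patterns are assumptions.

```python
# Minimal sketch: cluster daily power profiles into representative patterns,
# then predict the pattern label from weather features. KMeans and k-NN are
# stand-ins for the projected clustering and ANFIS/SMO models in the paper.
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def build_pattern_forecaster(daily_profiles, weather_features, n_patterns=7):
    """daily_profiles: (n_days, 24) hourly power; weather_features: (n_days, m)
    inputs such as temperature, wind direction, and a time-interval feature."""
    km = KMeans(n_clusters=n_patterns, n_init=10).fit(daily_profiles)
    clf = KNeighborsClassifier().fit(weather_features, km.labels_)
    return km, clf

def forecast_pattern(km, clf, weather_today):
    label = clf.predict(weather_today.reshape(1, -1))[0]
    return km.cluster_centers_[label]     # representative daily power pattern
```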