• Title/Summary/Keyword: Incremental Machine Learning

Incremental Support Vector Learning Method for Function Approximation (함수 근사를 위한 점증적 서포트 벡터 학습 방법)

  • 임채환;박주영
    • Proceedings of the IEEK Conference / 2002.06c / pp.135-138 / 2002
  • This paper addresses an incremental learning method for regression. The SVM (support vector machine) is a recently proposed learning method, and training one generally requires solving a QP (quadratic programming) problem, which can be impractical for very large or incrementally growing datasets. This paper therefore presents an incremental support vector learning method for function approximation problems. (A batch-wise sketch follows below.)

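A common way to realize incremental SVR, shown here as a minimal sketch under the assumption that carrying only the support vectors between batches preserves most of the information (this is not the paper's exact algorithm, and the data and hyperparameters are illustrative), is to refit on the retained support vectors plus each new chunk:

```python
# Hypothetical sketch of incremental SVR: keep only support vectors between
# batches so each QP stays small. Not the paper's exact algorithm.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(600)

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
X_keep = np.empty((0, 1))
y_keep = np.empty(0)

for X_batch, y_batch in zip(np.split(X, 6), np.split(y, 6)):
    # Train on the retained support vectors plus the new batch only.
    X_train = np.vstack([X_keep, X_batch])
    y_train = np.concatenate([y_keep, y_batch])
    model.fit(X_train, y_train)
    # Carry forward just the support vectors to the next increment.
    X_keep = X_train[model.support_]
    y_keep = y_train[model.support_]

print(f"retained {len(y_keep)} of {len(y)} samples as support vectors")
```

Because non-support vectors do not affect the fitted function, discarding them between batches usually changes the model little while keeping each QP small.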

A novel visual tracking system with adaptive incremental extreme learning machine

  • Wang, Zhihui;Yoon, Sook;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.451-465 / 2017
  • This paper presents a novel discriminative visual tracking algorithm with an adaptive incremental extreme learning machine. The parameters of the adaptive incremental extreme learning machine are initialized at the first frame with a manually assigned target. At each frame, training samples are collected and random Haar-like features are extracted. The proposed tracker updates the overall output weights at each frame, and the updated tracker is used to estimate the new location of the target in the next frame. The adaptive learning rate for the update of the overall output weights is estimated from the confidence of the predicted target location at the current frame. Our experimental results indicate that the proposed tracker can handle various tracking difficulties and achieves better performance than other state-of-the-art trackers. (A sketch of the confidence-scaled update follows below.)
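
The core update the abstract describes, a fixed random ELM hidden layer whose output weights are blended toward the least-squares solution for each new frame at a confidence-driven rate, might look roughly like the sketch below; the feature extraction and tracking loop are omitted, and all names and the blending rule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_hidden = 20, 64
W = rng.standard_normal((n_features, n_hidden))  # fixed random input weights
b = rng.standard_normal(n_hidden)                # fixed random biases

def hidden(X):
    # ELM hidden layer: random projection followed by a sigmoid
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def update_output_weights(beta, X_frame, t_frame, confidence):
    """Blend the current output weights toward the least-squares fit for
    this frame's samples; the blend rate stands in for the paper's
    confidence-based adaptive learning rate (an assumption)."""
    H = hidden(X_frame)
    beta_frame = np.linalg.pinv(H) @ t_frame    # least-squares fit to the frame
    eta = float(np.clip(confidence, 0.0, 1.0))  # adaptive learning rate
    return (1.0 - eta) * beta + eta * beta_frame

# one illustrative frame: 30 samples labeled target (1) or background (0)
X_frame = rng.standard_normal((30, n_features))
t_frame = rng.integers(0, 2, (30, 1)).astype(float)
beta = np.zeros((n_hidden, 1))
beta = update_output_weights(beta, X_frame, t_frame, confidence=0.7)
```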

Stepwise Constructive Method for Neural Networks Using a Flexible Incremental Algorithm (Flexible Incremental 알고리즘을 이용한 신경망의 단계적 구축 방법)

  • Park, Jin-Il;Jung, Ji-Suk;Cho, Young-Im;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.574-579 / 2009
  • Constructing an optimized neural network for complex nonlinear regression problems is difficult: the network structure must be selected, and overfitting caused by noisy data must be avoided. In this paper, we propose a stepwise constructive method for neural networks using a flexible incremental algorithm. As hidden nodes are added, the flexible incremental algorithm adaptively controls their number using a validation dataset so as to minimize the prediction residual error. The ELM (Extreme Learning Machine) is used for fast training. The proposed neural network can act as a universal approximator without user intervention in the training process, while also training faster and using fewer hidden nodes. Experimental results on various benchmark datasets show that the proposed method outperforms previous methods on real-world regression problems. (A minimal sketch follows below.)
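
A stripped-down version of this construction grows random ELM hidden nodes one at a time and stops once a validation set stops improving. The patience rule and toy data below are assumptions, not the paper's flexible control scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_fit(X, T, W, b):
    # analytic ELM training: least-squares output weights for random features
    H = np.tanh(X @ W + b)
    return np.linalg.pinv(H) @ T

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression task with a held-out validation split
X = rng.uniform(-2, 2, (400, 1))
T = np.sin(3 * X) + 0.1 * rng.standard_normal((400, 1))
X_tr, T_tr, X_va, T_va = X[:300], T[:300], X[300:], T[300:]

W = np.empty((1, 0)); b = np.empty(0)
best_rmse, patience = np.inf, 0
while patience < 3 and W.shape[1] < 100:
    # add one random hidden node and retrain the output weights
    W = np.hstack([W, rng.standard_normal((1, 1))])
    b = np.append(b, rng.standard_normal())
    beta = elm_fit(X_tr, T_tr, W, b)
    rmse = np.sqrt(np.mean((elm_predict(X_va, W, b, beta) - T_va) ** 2))
    # stop growing once validation error stops improving (simple patience rule)
    if rmse < best_rmse - 1e-4:
        best_rmse, patience = rmse, 0
    else:
        patience += 1
print(f"stopped at {W.shape[1]} hidden nodes, validation RMSE {best_rmse:.4f}")
```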

Pseudoinverse Matrix Decomposition Based Incremental Extreme Learning Machine with Growth of Hidden Nodes

  • Kassani, Peyman Hosseinzadeh;Kim, Euntai
    • International Journal of Fuzzy Logic and Intelligent Systems / v.16 no.2 / pp.125-130 / 2016
  • This study proposes a fast version of the conventional extreme learning machine (ELM), called the pseudoinverse matrix decomposition based incremental ELM (PDI-ELM). One of the main problems in ELM is determining the number of hidden nodes; in this study, that number is determined automatically. The proposed model is an incremental version of ELM that adds neurons with the goal of minimizing the error of the ELM network. To speed up the model, the pseudoinverse information from the previous step is reused in the current iteration. To demonstrate the ability of PDI-ELM, it is applied to several benchmark classification datasets from the University of California Irvine (UCI) repository. Compared to the ELM learner and two other versions of incremental ELM, the proposed PDI-ELM is faster. (Greville's column update, sketched below, is one standard way to reuse a previous pseudoinverse.)
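
When adding a hidden node appends a column to the hidden-layer output matrix H, the new pseudoinverse can be obtained from the previous one with Greville's column update, a standard identity that illustrates the kind of reuse the abstract alludes to (not necessarily the paper's exact decomposition):

```python
import numpy as np

def pinv_add_column(A, A_pinv, a):
    """Update pinv(A) to pinv([A | a]) via Greville's formula,
    avoiding a full recomputation when one column is appended."""
    a = a.reshape(-1, 1)
    d = A_pinv @ a
    c = a - A @ d
    if np.linalg.norm(c) > 1e-10:
        b = c.T / (c.T @ c)                    # new column adds a new direction
    else:
        b = d.T @ A_pinv / (1.0 + d.T @ d)     # new column is linearly dependent
    return np.vstack([A_pinv - d @ b, b])

rng = np.random.default_rng(3)
H = rng.standard_normal((50, 5))       # hidden-layer output matrix
H_pinv = np.linalg.pinv(H)
h_new = rng.standard_normal(50)        # column for a newly added hidden node
H_pinv = pinv_add_column(H, H_pinv, h_new)
H = np.hstack([H, h_new.reshape(-1, 1)])
print(np.allclose(H_pinv, np.linalg.pinv(H)))  # True: matches the direct pinv
```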

SVM-Based Incremental Learning Algorithm for Large-Scale Data Stream in Cloud Computing

  • Wang, Ning;Yang, Yang;Feng, Liyuan;Mi, Zhenqiang;Meng, Kun;Ji, Qing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.10 / pp.3378-3393 / 2014
  • We have witnessed the rapid development of information technology in recent years, one key phenomenon being the fast, near-exponential increase of data. Consequently, most traditional data classification methods fail to meet the dynamic, real-time demands of today's data processing and analysis, especially for continuous data streams. This paper proposes an improved incremental learning algorithm for large-scale data streams, based on the SVM (Support Vector Machine) and named DS-IILS. DS-IILS takes the load condition of the entire system and the performance of individual nodes into consideration to improve efficiency. The algorithm defines a threshold on the distance to the optimal separating hyperplane: samples from the history sample set and the incremental sample set that fall within this threshold are all reserved, and these reserved samples are treated as the training sample set. To build a more accurate classifier, the differing data volumes of the history and incremental sample sets are handled by weighting. Finally, the algorithm is implemented in a cloud computing system and applied to the study of user behavior. Experimental results, compared with other incremental learning algorithms, show that DS-IILS improves training efficiency while maintaining relatively high classification accuracy, which is consistent with the theoretical analysis. (A single-node sketch of the reservation step follows below.)
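
On a single node, the reservation and weighting steps the abstract describes might be sketched as follows; the threshold value, the weighting scheme, and the toy data are assumptions, and the distributed, cloud-level parts of DS-IILS are out of scope:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)

def reserve(clf, X, y, threshold):
    # keep only the samples lying within `threshold` of the separating hyperplane
    margin = np.abs(clf.decision_function(X))
    keep = margin <= threshold
    return X[keep], y[keep]

# history data and the current classifier
X_hist = rng.standard_normal((500, 2)); y_hist = (X_hist.sum(axis=1) > 0).astype(int)
clf = LinearSVC(C=1.0).fit(X_hist, y_hist)

# a new incremental chunk arrives from the stream
X_inc = rng.standard_normal((200, 2)); y_inc = (X_inc.sum(axis=1) > 0).astype(int)

Xh, yh = reserve(clf, X_hist, y_hist, threshold=1.0)
Xi, yi = reserve(clf, X_inc, y_inc, threshold=1.0)

# one plausible weighting: balance the two sets by their relative sizes
n = len(yh) + len(yi)
w = np.concatenate([np.full(len(yh), len(yi) / n), np.full(len(yi), len(yh) / n)])
clf = LinearSVC(C=1.0).fit(np.vstack([Xh, Xi]),
                           np.concatenate([yh, yi]), sample_weight=w)
```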

An Incremental Rule Extraction Algorithm Based on Recursive Partition Averaging (재귀적 분할 평균에 기반한 점진적 규칙 추출 알고리즘)

  • Han, Jin-Chul;Kim, Sang-Kwi;Yoon, Chung-Hwa
    • Journal of KIISE: Software and Applications / v.34 no.1 / pp.11-17 / 2007
  • One of the popular methods used for pattern classification is the MBR (Memory-Based Reasoning) algorithm. Since it simply computes distances between a test pattern and the training patterns or hyperplanes stored in memory, and then assigns the class of the nearest training pattern, it cannot explain how the classification result is obtained. In order to overcome this problem, we propose an incremental learning algorithm based on RPA (Recursive Partition Averaging) to extract IF-THEN rules that describe regularities inherent in the training patterns. However, the rules generated by RPA tend to overfit, because they depend too strongly on the details of the given training patterns; RPA also produces more rules than necessary, owing to over-partitioning of the pattern space. Consequently, we present IREA (Incremental Rule Extraction Algorithm), which overcomes the overfitting problem by removing useless conditions from rules while simultaneously reducing the number of rules. We verify the performance of the proposed algorithm using benchmark datasets from the UCI Machine Learning Repository. (A toy pruning sketch follows below.)
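
The condition-removal idea can be illustrated on axis-aligned interval rules: a bound is dropped whenever removing it still admits only patterns of the rule's own class. This is a hedged toy sketch, not IREA itself, and all names and data are invented:

```python
import numpy as np

def covers(rule, X):
    # rule: {axis: (low, high)}; a pattern is covered if it lies in every interval
    mask = np.ones(len(X), dtype=bool)
    for axis, (lo, hi) in rule.items():
        mask &= (X[:, axis] >= lo) & (X[:, axis] <= hi)
    return mask

def prune(rule, rule_class, X, y):
    """Remove each condition whose deletion admits no wrong-class pattern."""
    for axis in list(rule):
        trial = {a: iv for a, iv in rule.items() if a != axis}
        admitted = covers(trial, X)
        if np.all(y[admitted] == rule_class):
            rule = trial  # the condition was useless for purity: drop it
    return rule

# toy data: the class is decided by feature 0 alone, so the bound on
# feature 1 in the rule below is useless and gets pruned away
X = np.array([[0.1, 0.2], [0.2, 0.9], [0.8, 0.1], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
rule = {0: (0.0, 0.5), 1: (0.0, 1.0)}
print(prune(rule, 0, X, y))  # {0: (0.0, 0.5)}
```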

Unsupervised Incremental Learning of Associative Cubes with Orthogonal Kernels

  • Kang, Hoon;Ha, Joonsoo;Shin, Jangbeom;Lee, Hong Gi;Wang, Yang
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.1 / pp.97-104 / 2015
  • An 'associative cube', a class of auto-associative memories, is revisited here, in which training data and hidden orthogonal basis functions, such as wavelet packets or Fourier kernels, are combined in a weight cube. The weight cube has hidden units along its depth, forming a three-dimensional cubic structure. We develop an unsupervised incremental learning mechanism based on the adaptive least squares method. Training data are mapped onto orthogonal basis vectors in a least-squares sense by updating the weights to minimize an energy function, so that a prescribed orthogonal kernel is incrementally assigned to each incoming datum. We then show how a decoding procedure finds the closest match using a competitive network in the hidden layer. When noisy test data are applied to an associative cube, the nearest of the original training data is restored in an optimal sense. The simulation results confirm the robustness of associative cubes even when test data are heavily distorted by various types of noise. (A flattened toy sketch follows below.)
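
As a rough illustration only, the sketch below flattens the cube into a matrix: each training pattern is incrementally bound to the next orthonormal kernel by an outer product, and decoding scores the kernels competitively before the winner restores its stored pattern. Everything here simplifies the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(5)
dim, n_keys = 64, 10
# orthonormal hidden kernels (stand-ins for the wavelet/Fourier bases)
K = np.linalg.qr(rng.standard_normal((dim, n_keys)))[0]

def store(W, x, slot):
    # incremental outer-product binding of pattern x to kernel number `slot`
    return W + np.outer(x, K[:, slot])

def recall(W, x_noisy):
    # competitive decoding: score each kernel, let the winner restore its pattern
    scores = K.T @ (W.T @ x_noisy)    # scores[i] == <x_i, x_noisy>
    winner = int(np.argmax(scores))
    return W @ K[:, winner]

X = rng.standard_normal((n_keys, dim))   # training patterns
W = np.zeros((dim, dim))
for i, x in enumerate(X):                # incremental, one pattern at a time
    W = store(W, x, i)

x_noisy = X[3] + 0.5 * rng.standard_normal(dim)
print(np.allclose(recall(W, x_noisy), X[3]))  # True: pattern 3 is restored
```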

Construction of Incremental Federated Learning System using Flower (Flower을 사용한 점진적 연합학습시스템 구성)

  • Yun-Hee Kang;Myungju Kang
    • Journal of Platform Technology / v.11 no.4 / pp.80-88 / 2023
  • To construct a learning model in the field of artificial intelligence, a dataset must normally be collected and delivered to a central server where the model is built. Federated learning is a machine learning method that collaboratively builds a global model without transmitting the data held on each client, and can therefore be used to protect privacy: after a local model is trained on each individual client, the parameters of the local models are aggregated centrally to update the global model. In this paper, we reuse existing learned parameters to improve federated learning and describe incremental federated learning. For this work, we run experiments using the federated learning framework named Flower and evaluate the results in terms of elapsed time and precision for several optimization algorithms. (A client-side skeleton follows below.)

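A client-side skeleton of the parameter-reuse idea might look as follows. It assumes Flower's classic NumPyClient API from the flwr package; the checkpoint file name, the placeholder model, and the metrics are illustrative assumptions rather than the paper's code:

```python
import numpy as np
import flwr as fl

PARAMS_FILE = "global_params.npz"  # hypothetical checkpoint from a prior run

class IncrementalClient(fl.client.NumPyClient):
    def __init__(self):
        try:  # reuse previously learned parameters if a checkpoint exists
            data = np.load(PARAMS_FILE)
            self.weights = [data[k] for k in data.files]
        except FileNotFoundError:
            self.weights = [np.zeros((10, 2))]  # fresh placeholder model

    def get_parameters(self, config):
        return self.weights

    def fit(self, parameters, config):
        self.weights = parameters
        # ... local training on this client's data would go here ...
        np.savez(PARAMS_FILE, *self.weights)  # persist for the next increment
        return self.weights, 1, {}

    def evaluate(self, parameters, config):
        return 0.0, 1, {"accuracy": 0.0}  # placeholder loss and metrics

fl.client.start_numpy_client(server_address="127.0.0.1:8080",
                             client=IncrementalClient())
```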

Projection spectral analysis: A unified approach to PCA and ICA with incremental learning

  • Kang, Hoon;Lee, Hyun Su
    • ETRI Journal / v.40 no.5 / pp.634-642 / 2018
  • Projection spectral analysis is investigated and refined in this paper in order to unify principal component analysis and independent component analysis. Singular value decomposition and spectral theorems are applied to nonsymmetric correlation or covariance matrices with multiplicities or singularities, from which projections and nilpotents are obtained. The suggested approach not only utilizes a sum-product of orthogonal projection operators and real distinct eigenvalues for squared singular values, but also reduces the dimension of the correlation or covariance matrix when there are multiple zero eigenvalues. Moreover, incremental learning strategies for projection spectral analysis are suggested to improve performance. (A sketch of the spectral-theorem core follows below.)
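
The symmetric core this approach builds on, writing a covariance matrix as a sum of distinct eigenvalues times orthogonal projection operators and reading the effective dimension off the nonzero part, can be sketched as follows; the paper's nonsymmetric refinement with nilpotents is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((100, 6)) @ rng.standard_normal((6, 8))
C = A.T @ A / len(A)                 # 8x8 covariance matrix of rank 6

eigval, eigvec = np.linalg.eigh(C)
spectral = []
for lam in np.unique(np.round(eigval, 10)):
    V = eigvec[:, np.isclose(eigval, lam, atol=1e-10)]  # eigenspace (multiplicity >= 1)
    spectral.append((lam, V @ V.T))                     # orthogonal projection operator

# spectral theorem: C equals the sum of eigenvalues times their projections
print(np.allclose(C, sum(lam * P for lam, P in spectral)))  # True

# multiple zero eigenvalues let the effective dimension be reduced
rank = round(sum(np.trace(P) for lam, P in spectral if lam > 1e-10))
print(f"effective dimension: {rank} of {C.shape[0]}")       # 6 of 8
```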

An Incremental Multi Partition Averaging Algorithm Based on Memory Based Reasoning (메모리 기반 추론 기법에 기반한 점진적 다분할평균 알고리즘)

  • Yih, Hyeong-Il
    • Journal of IKEEE / v.12 no.1 / pp.65-74 / 2008
  • One of the popular methods used for pattern classification is the MBR (Memory-Based Reasoning) algorithm. Since it simply computes distances between a test pattern and the training patterns or hyperplanes stored in memory, and then assigns the class of the nearest training pattern, it is notorious for its memory usage and cannot learn additional information from new data. In order to overcome these problems, we propose an incremental learning algorithm, iMPA. iMPA divides the entire pattern space into a fixed number of partitions and generates representatives from each partition; it can learn additional information from new data without requiring access to the original training data. Using benchmark datasets from the UCI Machine Learning Repository, the proposed method is shown to achieve performance comparable to k-NN with far fewer stored patterns, and better results than the EACH system, which implements the NGE theory. (A toy sketch of partitioned running-mean representatives follows below.)

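As a hedged illustration of partition averaging, the sketch below grid-partitions a bounded pattern space, keeps one running-mean representative per (cell, class) pair, updates those means incrementally as new data arrives without storing the raw patterns, and classifies by the nearest representative. The grid size, bounds, and toy data are assumptions:

```python
import numpy as np
from collections import defaultdict

GRID = 4                       # fixed number of partitions per axis (assumed)
LO, HI = 0.0, 1.0              # assumed bounds of the pattern space

def cell(x):
    # index of the hypercube cell that pattern x falls into
    idx = ((x - LO) / (HI - LO) * GRID).astype(int)
    return tuple(np.minimum(idx, GRID - 1))

reps = defaultdict(lambda: [0, None])   # (cell, class) -> [count, running mean]

def learn(X, y):
    """Incrementally update representatives; raw patterns are never stored."""
    for x, label in zip(X, y):
        entry = reps[(cell(x), label)]
        entry[0] += 1
        if entry[1] is None:
            entry[1] = x.astype(float).copy()
        else:
            entry[1] += (x - entry[1]) / entry[0]   # running-mean update

def classify(x):
    # MBR-style decision, but against per-partition means instead of raw data
    (cell_idx, label), _ = min(reps.items(),
                               key=lambda kv: np.linalg.norm(kv[1][1] - x))
    return label

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, (300, 2)); y = (X[:, 0] > X[:, 1]).astype(int)
learn(X[:200], y[:200])        # initial training
learn(X[200:], y[200:])        # later increment: only counts and means change
print(classify(np.array([0.8, 0.2])))   # expected: 1
```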