• Title/Summary/Keyword: 점진적 학습법 (incremental learning)


Building an Ensemble Machine by Constructive Selective Learning Neural Networks (건설적 선택학습 신경망을 이용한 앙상블 머신의 구축)

  • Kim, Seok-Jun;Jang, Byeong-Tak
    • Journal of KIISE:Software and Applications, v.27 no.12, pp.1202-1210, 2000
  • This paper presents a new approach to constructing an effective ensemble machine. Previous work has shown that, for an effective ensemble, the correlation among ensemble members should be very low, and each member should learn the whole problem reasonably accurately while still disagreeing with the others on parts of it. To generate multiple candidate ensemble networks that learn different aspects of a given problem, we use a neural network training algorithm that combines constructive learning with active learning. Training proceeds by gradually increasing the number of hidden nodes from a minimum to a maximum while incrementally selecting training data from a candidate data set. A candidate ensemble network is produced at each point where a hidden node is added, and one such pass of training is defined as a chain. Over multiple chains, candidate networks of various sizes trained on various data distributions are generated. The candidate networks are then selected by probabilistic proportional selection and combined by the generalized ensemble method (GEM) to obtain the final ensemble. The proposed algorithm was applied to one artificial data set and one real-world data set. Experiments show that the maximum generalization performance of the ensemble constructed by the proposed algorithm is superior to that of ensembles built by other algorithms.
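As a rough illustration of the chain idea in this abstract, the sketch below grows candidate networks by adding hidden nodes while progressively enlarging the selected training set, then weights them with the generalized ensemble method. The use of scikit-learn MLPs, the data-selection rule, and the regularized GEM weighting are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_chain(X, y, hidden_sizes=(2, 4, 8, 16), init_frac=0.3, seed=0):
    """One 'chain': grow hidden nodes step by step, snapshot a candidate at each growth point."""
    rng = np.random.default_rng(seed)
    n = len(X)
    chosen = rng.choice(n, size=int(init_frac * n), replace=False).tolist()
    candidates = []
    for h in hidden_sizes:                      # constructive part: more hidden nodes
        net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000, random_state=seed)
        net.fit(X[chosen], y[chosen])
        candidates.append(net)
        # selective part: add the currently worst-predicted points to the training set
        errs = (net.predict(X) - y) ** 2
        worst = np.argsort(errs)[::-1][: n // len(hidden_sizes)]
        chosen = list(set(chosen) | set(worst.tolist()))
    return candidates

def gem_weights(candidates, X_val, y_val):
    """Generalized ensemble method: weights from the inverse error-correlation matrix."""
    E = np.stack([m.predict(X_val) - y_val for m in candidates])   # misfits, one row per model
    C = E @ E.T / len(y_val) + 1e-6 * np.eye(len(candidates))      # regularized correlation
    w = np.linalg.solve(C, np.ones(len(candidates)))
    return w / w.sum()                          # ensemble prediction: sum_i w_i f_i(x)
```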


Face Detection Based on Incremental Learning from Very Large Size Training Data (대용량 훈련 데이타의 점진적 학습에 기반한 얼굴 검출 방법)

  • 박지영;이준호
    • Journal of KIISE:Software and Applications, v.31 no.7, pp.949-958, 2004
  • Face detection using a boosting-based algorithm requires a very large set of face and non-face data. In addition, the constant need to add training data for better detection rates demands an efficient incremental learning algorithm. In the design of incrementally learned classifiers, the final classifier should represent the characteristics of the entire training dataset. Conventional methods have a critical problem when combining intermediate classifiers: weight updates depend solely on the performance on each individual dataset. In this paper, for the purpose of application to face detection, we present a new method to combine an intermediate classifier with previously acquired ones in an optimal manner. Our algorithm creates a validation set by incrementally adding sampled instances from each dataset so that it represents the entire training data. The weight of each classifier is determined based on its performance on this validation set. This approach guarantees that the resulting final classifier is learned from the entire training dataset. Experimental results show that a classifier trained by the proposed algorithm performs better than one trained by AdaBoost operating in batch mode, as well as by Learn++.
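The combination step described here can be sketched roughly as follows: each incremental batch produces an intermediate classifier, a shared validation set is grown by sampling from every batch, and all classifiers are re-weighted on that set. The decision-tree base learner, sampling fraction, and log-odds weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class IncrementalCommittee:
    def __init__(self, val_frac=0.1):
        self.models, self.weights = [], []
        self.val_X, self.val_y = [], []
        self.val_frac = val_frac

    def add_batch(self, X, y, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        # keep a sample of every batch so the validation set mirrors all data seen so far
        idx = rng.choice(len(X), size=max(1, int(self.val_frac * len(X))), replace=False)
        self.val_X.append(X[idx]); self.val_y.append(y[idx])
        clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
        self.models.append(clf)
        Xv, yv = np.concatenate(self.val_X), np.concatenate(self.val_y)
        # re-weight *all* intermediate classifiers on the shared validation set
        self.weights = []
        for m in self.models:
            err = max(1e-6, 1.0 - m.score(Xv, yv))
            self.weights.append(np.log((1.0 - err) / err))   # AdaBoost-style weight

    def predict(self, X):
        votes = {}
        for m, w in zip(self.models, self.weights):
            for i, p in enumerate(m.predict(X)):
                votes.setdefault(i, {}).setdefault(p, 0.0)
                votes[i][p] += w
        return np.array([max(v, key=v.get) for _, v in sorted(votes.items())])
```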

Gradient Estimation for Progressive Photon Mapping (점진적 광자 매핑을 위한 기울기 계산 기법)

  • Donghee Jeon;Jeongmin Gu;Bochang Moon
    • Journal of the Korea Computer Graphics Society, v.30 no.3, pp.141-147, 2024
  • Progressive photon mapping is a widely adopted rendering technique that conducts kernel-density estimation on photons progressively generated from lights. Its hyperparameter, which controls the reduction rate of the density estimation, strongly affects the quality of the rendered image due to the bias-variance tradeoff of pixel estimates in photon-mapped results. We can minimize the errors of rendered pixel estimates in progressive photon mapping by estimating the optimal parameters using gradient-based optimization techniques. To this end, we derived the gradients of pixel estimates with respect to the parameters during progressive photon mapping and verified the estimated gradients against finite differences. The gradients estimated in this paper can be applied in future work to an online learning algorithm that performs progressive photon mapping and parameter optimization simultaneously.
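To make the finite-difference verification concrete, the sketch below checks an analytic derivative against central differences for a commonly used PPM radius-reduction sequence, r_{i+1}^2 = r_i^2 (i + alpha)/(i + 1). This is only an illustration of the verification idea, not the paper's renderer or its actual pixel-estimate gradients.

```python
import numpy as np

def radius_sq(alpha, r0=1.0, iters=50):
    """Squared kernel radius after `iters` PPM iterations for reduction parameter alpha."""
    r2 = r0 ** 2
    for i in range(1, iters + 1):
        r2 *= (i + alpha) / (i + 1)
    return r2

def grad_radius_sq(alpha, r0=1.0, iters=50):
    # d/dalpha of prod_i (i+alpha)/(i+1) equals the product times sum_i 1/(i+alpha)
    return radius_sq(alpha, r0, iters) * sum(1.0 / (i + alpha) for i in range(1, iters + 1))

alpha, h = 0.7, 1e-5
analytic = grad_radius_sq(alpha)
numeric = (radius_sq(alpha + h) - radius_sq(alpha - h)) / (2 * h)   # central difference
print(f"analytic={analytic:.6f}  finite-diff={numeric:.6f}")        # should agree closely
```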

Accelerated Learning of Latent Topic Models by Incremental EM Algorithm (점진적 EM 알고리즘에 의한 잠재토픽모델의 학습 속도 향상)

  • Chang, Jeong-Ho;Lee, Jong-Woo;Eom, Jae-Hong
    • Journal of KIISE:Software and Applications, v.34 no.12, pp.1045-1055, 2007
  • Latent topic models are statistical models that automatically capture salient patterns or correlations among features underlying a data collection in a probabilistic way. They are gaining popularity as an effective tool for automatic semantic feature extraction from text corpora, multimedia data analysis including image data, and bioinformatics. One of the important issues in applying latent topic models effectively to massive data sets is efficient learning of the model. This paper proposes an accelerated learning technique for the PLSA model, one of the popular latent topic models, using an incremental EM algorithm instead of the conventional EM algorithm. The incremental EM algorithm is characterized by a series of partial E-steps performed on subsets of the entire data collection, unlike the conventional EM algorithm, where one batch E-step is done for the whole data set. By replacing a single batch E-step and M-step with a series of partial E-steps and M-steps, the inference result for the previous data subset can be reflected directly in the next inference step, which speeds up learning over the entire data set. The algorithm is also advantageous in that it is guaranteed to converge to a local maximum and can be implemented with only a slight modification of the existing conventional-EM-based algorithm. We present the basic application of the incremental EM algorithm to the learning of PLSA and empirically evaluate the acceleration with several possible data partitioning methods for practical application. Experimental results on a real-world news data set show that the proposed approach achieves a meaningful improvement in convergence rate when learning a latent topic model. Additionally, we present a result suggesting a possible synergistic effect from combining the incremental EM algorithm with parallel computing.
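For orientation, a compact sketch of the partial E-step idea is given below: responsibilities are computed for one block of documents at a time and the topic-word statistics are refreshed immediately, instead of running one batch E-step over the whole term-document matrix. The block count, initialization, smoothing, and the exponential forgetting of old statistics are illustrative simplifications rather than the paper's algorithm.

```python
import numpy as np

def plsa_incremental_em(N, K=10, n_blocks=5, epochs=20, seed=0):
    """N: (D, W) term-document count matrix. Returns P(w|z) (K x W) and P(z|d) (D x K)."""
    rng = np.random.default_rng(seed)
    D, W = N.shape
    p_w_z = rng.dirichlet(np.ones(W), size=K)            # topic-word distributions
    p_z_d = rng.dirichlet(np.ones(K), size=D)            # document-topic distributions
    ss_wz = np.full((K, W), 1e-2)                        # running topic-word statistics
    blocks = np.array_split(np.arange(D), n_blocks)
    for _ in range(epochs):
        for block in blocks:
            # partial E-step: responsibilities P(z|d,w) only for this block of documents
            post = p_z_d[block][:, :, None] * p_w_z[None, :, :]      # |B| x K x W
            post /= post.sum(axis=1, keepdims=True) + 1e-12
            counts = N[block][:, None, :] * post                     # expected counts
            # partial M-step: fold the block in right away (older statistics slowly forgotten)
            ss_wz = 0.9 * ss_wz + counts.sum(axis=0)
            p_w_z = ss_wz / ss_wz.sum(axis=1, keepdims=True)
            nz = counts.sum(axis=2)                                  # |B| x K
            p_z_d[block] = nz / (nz.sum(axis=1, keepdims=True) + 1e-12)
    return p_w_z, p_z_d
```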

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering, v.7 no.9, pp.351-360, 2018
  • The LWR (Locally Weighted Regression) model is traditionally a lazy learning model: it produces a prediction for a given input, the query point, by fitting a regression equation over a short interval in which training samples closer to the query point receive higher weights. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed method sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain the solution at a specific query point. A weakness of existing LWR is that multiple LWR models can be generated depending on the indicator function and the selected data samples, and prediction quality varies with the chosen model; however, little research has addressed how to select or combine multiple LWR models. In this study, after generating an initial LWR model from an indicator function and a sample data set, we iterate an evolutionary learning process to obtain a proper indicator function and evaluate the LWR models on other sample data sets to overcome data set bias. We adopt an eager learning strategy that gradually generates and stores LWR models as data are generated for all sections. To obtain a prediction at a specific point in time, an LWR model is built from newly generated data within a predetermined interval and then combined with the existing LWR models in that section using a genetic algorithm. The proposed method shows better results than simply averaging multiple LWR models. The results are also compared with predictions from multiple regression analysis on real data such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
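As a rough sketch of the two components mentioned here, the code below fits a Gaussian-kernel locally weighted linear regression at a query point and uses a tiny genetic algorithm to search for non-negative combination weights over the stored models' predictions. The bandwidth, population size, mutation rate, and negative-MSE fitness are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=1.0):
    """Locally weighted linear regression at a single query point (Gaussian kernel)."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xb = np.hstack([np.ones((len(X), 1)), X])                 # add a bias column
    A = (Xb.T * w) @ Xb + 1e-6 * np.eye(Xb.shape[1])          # weighted normal equations
    beta = np.linalg.solve(A, (Xb.T * w) @ y)
    return np.concatenate(([1.0], x_query)) @ beta

def ga_combine(preds, y_true, pop=30, gens=50, mut=0.1, seed=0):
    """Evolve non-negative combination weights over model predictions (columns of `preds`)."""
    rng = np.random.default_rng(seed)
    P = rng.random((pop, preds.shape[1]))

    def fitness(pop_w):
        W = pop_w / pop_w.sum(axis=1, keepdims=True)
        return -np.mean((preds @ W.T - y_true[:, None]) ** 2, axis=0)   # negative MSE

    for _ in range(gens):
        parents = P[np.argsort(fitness(P))[-pop // 2:]]        # keep the better half
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = np.abs(children + mut * rng.normal(size=children.shape))
        P = np.vstack([parents, children])
    best = P[np.argmax(fitness(P))]
    return best / best.sum()
```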

Distributed Neural Network Optimization Study using Adaptive Approach for Multi-Agent Collaborative Learning Application (다중 에이전트 협력학습 응용을 위한 적응적 접근법을 이용한 분산신경망 최적화 연구)

  • Junhak Yun;Sanghun Jeon;Yong-Ju Lee
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.442-445, 2023
  • Recent advances in deep learning and robotics have expanded research into areas that collect and process large amounts of data quickly. One such area is distributed learning with multiple robots, which makes it easier to collect and process large amounts of data than with a single agent. In this study, unlike the static distributed learning method of the original Distributed Neural Network Optimization (DiNNO) algorithm, we propose a new staged distributed learning method: instead of fixing the number of steps used to approximate the primal variable at a constant, the number of approximation steps is gradually increased as the communication rounds progress, improving model performance. To evaluate the existing and proposed algorithms qualitatively and quantitatively, we performed MNIST classification and 2D floor-plan mapping experiments; the results show that the proposed algorithm achieves higher accuracy than the original DiNNO algorithm at the same communication round and converges faster to the global optimum.
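The adaptive element described in this abstract, gradually raising the number of primal-approximation steps as communication rounds progress, can be sketched as a simple schedule. The linear-growth-with-cap rule and the abstracted local-update and exchange callbacks below are illustrative assumptions, not the actual DiNNO modification.

```python
def primal_steps(round_idx, start=2, growth=0.5, cap=20):
    """Gradually increase the number of inner primal-approximation steps per round."""
    return min(cap, start + int(growth * round_idx))

def distributed_training(agents, rounds, local_update, exchange):
    """Generic skeleton: more local primal steps in later communication rounds."""
    for r in range(rounds):
        k = primal_steps(r)
        for agent in agents:
            for _ in range(k):          # later rounds approximate the primal more closely
                local_update(agent)
        exchange(agents)                # communicate consensus/dual variables between agents
```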

Real Image Super-Resolution based on Easy-to-Hard Transfer-Learning (실제 이미지 초해상도를 위한 학습 난이도 조절 기반 전이학습)

  • Cho, Sunwoo;Soh, Jae Woong;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2020.07a, pp.701-704, 2020
  • Image super-resolution has achieved remarkable performance gains by exploiting advances in deep learning. Most deep-learning-based super-resolution research has focused on the structure of the network model. Recently, however, it has become apparent that deep-learning-based super-resolution performs well on synthesized data but not on real data. Since changing the model structure alone offers limited gains, research on data usage and training methods is increasingly needed. This paper therefore proposes a difficulty-scheduled transfer learning method for image super-resolution. In the proposed method, transfer learning proceeds sequentially from lower, easier upscaling factors to higher ones, because training becomes harder as the upscaling factor increases. The results show that training for high-factor super-resolution is faster and more effective when it starts from low-factor, i.e., easier, training and proceeds gradually. With the proposed transfer learning method and a small number of update steps, we obtained a PSNR gain of about 0.18 dB over conventional training, reaching 28.56 dB on the RealSR [9] dataset, which is high performance relative to the number of parameters.
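A hedged sketch of the easy-to-hard schedule is given below: a small super-resolution network is trained at x2 first, and its compatible weights initialize training at x4. The toy network, loss, optimizer settings, and the placeholder loaders loader_x2 and loader_x4 are assumptions; only the curriculum over upscaling factors mirrors the abstract.

```python
import torch
import torch.nn as nn

class TinySR(nn.Module):
    def __init__(self, scale):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(64, 3 * scale * scale, 3, padding=1))
        self.up = nn.PixelShuffle(scale)            # scale-specific upsampling tail

    def forward(self, x):
        return self.up(self.body(x))

def train_scale(model, loader, epochs=1, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for lr_img, hr_img in loader:               # loader yields (low-res, high-res) pairs
            opt.zero_grad()
            loss_fn(model(lr_img), hr_img).backward()
            opt.step()
    return model

# Easy-to-hard curriculum: train x2 first, then transfer compatible weights to x4.
# loader_x2 and loader_x4 are assumed data loaders for the two scales.
model_x2 = train_scale(TinySR(scale=2), loader_x2)
model_x4 = TinySR(scale=4)
target = model_x4.state_dict()
state = {k: v for k, v in model_x2.state_dict().items()
         if k in target and v.shape == target[k].shape}
model_x4.load_state_dict(state, strict=False)       # scale-specific layers stay freshly initialized
model_x4 = train_scale(model_x4, loader_x4)
```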


Biological Early Warning System for Toxicity Detection (독성 감지를 위한 생물 조기 경보 시스템)

  • Kim, Sung-Yong;Kwon, Ki-Yong;Lee, Won-Don
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.9, pp.1979-1986, 2010
  • A biological early warning system detects toxicity by observing the behavior of organisms in water. The system uses a classifier to judge whether toxicity is present in the water and in what amount. Boosting is one way to improve the classifier's performance. Boosting repeatedly changes the training set by focusing the base classifier on difficult examples. As a result, prediction performance improves for events that are difficult to classify, but the information contained in events that are easy to classify is discarded. In this paper, an incremental learning method is proposed to overcome this shortcoming by using the extended data expression. In this algorithm, a decision tree classifier defines class distribution information using the weight parameter of the extended data expression, exploiting the necessary information not only from well-classified but also from weakly classified events. Experimental results show that the new algorithm outperforms the previous Learn++ method, which does not use the weight parameter.
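In generic terms, the idea of keeping information from easily classified events rather than discarding it can be sketched with a sample weight attached to every event, as below. The decision-tree base learner and the soften-instead-of-discard weight update are illustrative assumptions, not the paper's extended data expression or its exact rule.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def incremental_weighted_trees(batches, rounds_per_batch=3):
    """batches: iterable of (X, y) arrays arriving over time (e.g., new monitoring data)."""
    models = []
    for X, y in batches:
        w = np.full(len(y), 1.0 / len(y))        # extended expression: (features, class, weight)
        for _ in range(rounds_per_batch):
            tree = DecisionTreeClassifier(max_depth=3)
            tree.fit(X, y, sample_weight=w)
            wrong = tree.predict(X) != y
            # soften instead of discard: correctly classified events keep a reduced, nonzero weight
            w = np.where(wrong, w * 1.5, w * 0.8)
            w /= w.sum()
            models.append(tree)
    return models
```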

Fine-Grain Weighted Logistic Regression Model (가중치 세분화 기반의 로지스틱 회귀분석 모델)

  • Lee, Chang-Hwan
    • Journal of the Institute of Electronics and Information Engineers, v.53 no.9, pp.77-81, 2016
  • Logistic regression (LR) has been widely used for predicting the relationships among variables in various fields. We propose a new logistic regression model with a fine-grained weighting method, called value-weighted logistic regression, which assigns a different weight to each feature value. A gradient approach is used to obtain the optimal weights of the feature values. Experiments on several data sets show that the proposed method yields meaningful improvements in prediction accuracy.
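A minimal sketch of the fine-grained weighting idea: learn one weight per (feature, value) pair for integer-coded categorical inputs and fit the weights by plain gradient descent on the logistic loss. The encoding, learning rate, and update form are illustrative choices, not the paper's exact model.

```python
import numpy as np

def fit_value_weighted_lr(X_cat, y, lr=0.1, epochs=200):
    """X_cat: integer-coded categorical matrix of shape (n, d); y: labels in {0, 1}."""
    n, d = X_cat.shape
    n_vals = X_cat.max(axis=0) + 1
    W = [np.zeros(v) for v in n_vals]            # one weight vector per feature's value set
    b = 0.0
    for _ in range(epochs):
        z = b + sum(W[j][X_cat[:, j]] for j in range(d))   # sum of weights of observed values
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - y                                 # gradient of the logistic loss w.r.t. z
        for j in range(d):
            np.add.at(W[j], X_cat[:, j], -lr * g / n)      # accumulate per observed value
        b -= lr * g.mean()
    return W, b
```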

Utilizing Minimal Label Data for Tomato Leaf Disease Classification: An Approach through Recursive Learning Based on YOLOv8 (토마토 잎 병해 분류를 위한 최소 라벨 데이터 활용: YOLOv8 기반 재귀적 학습 방식을 통한 접근)

  • Junhyuk Lee;Namhyoung Kim
    • The Journal of Bigdata, v.9 no.1, pp.61-73, 2024
  • Class imbalance is one of the significant challenges in deep learning tasks, and it is particularly pronounced in domains with limited data. This study proposes a new approach that uses minimal labeled data to classify tomato leaf diseases effectively. We introduce a recursive learning method based on the YOLOv8 model: by using the model's detection predictions on training images as additional training data, the amount of labeled data is progressively increased. Unlike conventional data augmentation and up/down-sampling techniques, this method seeks to fundamentally solve the class imbalance problem by maximizing the utility of real data. Based on the labeled data secured this way, tomato leaves were extracted and diseases were classified using an EfficientNet model, achieving a high accuracy of 98.92%. Notably, a 12.9% improvement over the baseline was observed in the detection of late blight, the disease with the least data. This research presents a methodology that addresses data imbalance while offering high-precision disease classification, and it is expected to be applicable to other crops.
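Framework-agnostically, the recursive labeling loop reads roughly as below: a detector trained on the current labeled pool predicts boxes on still-unlabeled images, and confident predictions are folded back in as additional training data. The callbacks train_detector and predict_boxes, the .confidence attribute, and the 0.8 threshold are hypothetical placeholders, not the actual YOLOv8/EfficientNet pipeline used in the paper.

```python
def recursive_labeling(labeled, unlabeled, train_detector, predict_boxes,
                       rounds=3, conf_threshold=0.8):
    """labeled: list of (image, annotations) pairs; unlabeled: list of images."""
    pool = list(labeled)
    remaining = list(unlabeled)
    for _ in range(rounds):
        detector = train_detector(pool)              # retrain on the enlarged labeled pool
        still_unlabeled = []
        for image in remaining:
            boxes = [b for b in predict_boxes(detector, image)
                     if b.confidence >= conf_threshold]   # keep only confident detections
            if boxes:
                pool.append((image, boxes))          # fold pseudo-labels back into training data
            else:
                still_unlabeled.append(image)
        remaining = still_unlabeled
    return pool
```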