• Title/Abstract/Keyword: Lazy Learning


사례기반 추론에서 사례별 속성 가중치 부여 방법 (A Case-Specific Feature Weighting Method in Case-Based Reasoning)

  • 이재식;전용준
    • 한국지능정보시스템학회:학술대회논문집 / 한국지능정보시스템학회 1999년도 추계학술대회-지능형 정보기술과 미래조직 Information Technology and Future Organization / pp.391-398 / 1999
  • Lazy learning methods, including case-based reasoning, have several relative advantages over eager learning methods such as artificial neural networks and decision trees. However, lazy learning methods also have relative disadvantages. First, they require a large amount of space to store cases; second, they take considerable time at problem-solving time. A more serious problem, however, is that when cases contain many features of low relevance, lazy learning methods can be confused when comparing cases, which can severely degrade classification accuracy. To resolve this problem, many feature weighting methods for lazy learning have been studied. However, most previously published methods apply feature weights globally. This study therefore proposes a new local feature weighting method. The proposed method (CBDFW: case-based dynamic feature weighting) assigns different feature weights to each case by applying the principles of case-based reasoning to the feature weighting process itself. The advantages of CBDFW are that (1) it is simple to execute, (2) its logical processing cost is lower than that of existing methods, and (3) it is flexible. We applied CBDFW to a credit evaluation problem and obtained relatively good results in comparison with other techniques.
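
The CBDFW procedure itself is not reproduced in this listing, but the underlying idea of letting feature weights depend on the query case can be illustrated with a small weighted nearest-neighbor sketch. The data, the weight values, and the function name below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def weighted_knn_predict(query, X, y, weights, k=3):
    # Feature-weighted Euclidean distance, then a majority vote among
    # the k nearest stored cases (illustrative only).
    d = np.sqrt((((X - query) ** 2) * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy case base: two informative features plus one irrelevant feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # feature 2 is pure noise

query = np.array([0.5, 0.4, -2.0])
global_weights = np.ones(3)                    # same weights for every query
case_weights = np.array([1.0, 1.0, 0.1])       # hypothetical case-specific weights
print(weighted_knn_predict(query, X, y, global_weights))
print(weighted_knn_predict(query, X, y, case_weights))
```

With the irrelevant third feature down-weighted for this particular query, the vote is driven by the two informative features, which is the effect the abstract describes.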


퍼지 k-Nearest Neighbors 와 Reconstruction Error 기반 Lazy Classifier 설계 (Design of Lazy Classifier based on Fuzzy k-Nearest Neighbors and Reconstruction Error)

  • 노석범;안태천
    • 한국지능시스템학회논문지 / Vol. 20, No. 1 / pp.101-108 / 2010
  • This paper proposes the design of a lazy classifier using fuzzy k-NN and feature selection based on the reconstruction error. The reconstruction error is the evaluation index of locally linear reconstruction. When a new input is given, fuzzy k-NN defines the local region in which the local classifier is valid and assigns weights to the data patterns included in that region. After the local region and the weights are defined, feature selection is carried out to reduce the dimensionality of the feature space. Once several features with good performance in terms of the reconstruction error are selected, a polynomial-type classifier is determined by weighted least squares. The experimental results compare the proposed classifier with existing classifiers: standard neural networks, support vector machines, linear discriminant analysis, and C4.5 trees.
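
As a rough illustration of the pipeline described above (distance-decaying neighbor weights followed by a weighted least-squares fit of a local polynomial model), the sketch below uses the standard fuzzy k-NN membership formula and a degree-1 polynomial; the paper's exact membership definition, local-region rule, and reconstruction-error-based feature selection are not reproduced, and the toy data is an assumption.

```python
import numpy as np

def fuzzy_knn_weights(query, X, k=5, m=2.0, eps=1e-12):
    # Distance-decaying weights over the k nearest patterns
    # (standard fuzzy k-NN form); m is the usual fuzzifier parameter.
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + eps)
    return idx, w / w.sum()

# Toy patterns standing in for the paper's data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)

q = np.array([0.2, -0.3])
idx, w = fuzzy_knn_weights(q, X, k=15)
A = np.hstack([np.ones((len(idx), 1)), X[idx]])   # degree-1 polynomial terms
S = np.sqrt(w)[:, None]                            # weighted least-squares scaling
coef, *_ = np.linalg.lstsq(S * A, S[:, 0] * y[idx], rcond=None)
print("local model output at q:", np.array([1.0, *q]) @ coef)
```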

Avoiding collaborative paradox in multi-agent reinforcement learning

  • Kim, Hyunseok;Kim, Hyunseok;Lee, Donghun;Jang, Ingook
    • ETRI Journal / Vol. 43, No. 6 / pp.1004-1012 / 2021
  • Productive collaboration among multiple agents has become an emerging issue in real-world applications. In reinforcement learning, multi-agent environments present challenges beyond the tractable issues of single-agent settings. Such collaborative environments have highly complex attributes: sparse rewards for task completion, limited communication between agents, and only partial observations. In particular, adjustments in an agent's action policy make the environment nonstationary from the other agents' perspective, which causes high variance in the learned policies and prevents the direct use of reinforcement learning approaches. Unexpected social loafing caused by this high dispersion makes it difficult for all agents to succeed in collaborative tasks. We therefore address a paradox in which social loafing significantly reduces total returns after a certain timestep of multi-agent reinforcement learning. We further demonstrate that the collaborative paradox in multi-agent environments can be avoided by our proposed early stop method, which leverages a metric for social loafing.
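
The social-loafing metric itself is not given in this abstract; as a generic stand-in, the sketch below only shows the mechanical shape of an early-stop rule that halts training once a moving-average team return has stopped improving. The class name, window, patience, and margin are assumptions for illustration, not the paper's method.

```python
from collections import deque

class EarlyStopMonitor:
    """Generic early-stop rule on a moving-average team return (a sketch;
    it does not implement the paper's social-loafing metric)."""

    def __init__(self, window=20, patience=5, margin=1.0):
        self.returns = deque(maxlen=window)
        self.best = float("-inf")
        self.bad_evals = 0
        self.patience = patience
        self.margin = margin

    def update(self, episode_return):
        # Record one episode return; return True when training should stop.
        self.returns.append(episode_return)
        avg = sum(self.returns) / len(self.returns)
        if avg > self.best:
            self.best, self.bad_evals = avg, 0
        elif avg < self.best - self.margin:
            self.bad_evals += 1
        return self.bad_evals >= self.patience
```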

Chatting Pattern Based Game BOT Detection: Do They Talk Like Us?

  • Kang, Ah Reum;Kim, Huy Kang;Woo, Jiyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 6, No. 11 / pp.2866-2879 / 2012
  • Among the various security threats in online games, the use of game bots is the most serious problem. Previous studies on game bot detection have proposed many methods for finding behaviors that discriminate bots from humans, based on the fact that a bot's playing pattern differs from a human's. In this paper, we look at chatting data that reflects gamers' communication patterns and propose a communication pattern analysis framework for online game bot detection. In massively multiplayer online role-playing games (MMORPGs), game bots use chatting messages in a different way from normal users. We derive four kinds of features: a network feature, a descriptive feature, a diversity feature, and a text feature. To measure the diversity of communication patterns, we propose lightly summarized indices, which are computationally inexpensive and intuitive. For the text features, we derive lexical, syntactic, and semantic features from chatting contents using text mining techniques. To build the learning model for game bot detection, we test and compare three classification models: random forest, logistic regression, and lazy learning. We apply the proposed framework to AION, operated by NCsoft, a leading online game company in Korea. As a result of our experiments, we found that the random forest outperforms logistic regression and lazy learning. The model that employs the entire feature set gives the highest performance, with a precision of 0.893 and a recall of 0.965.
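
The feature extraction is specific to the chat logs, but the three-way model comparison can be sketched with common scikit-learn estimators, with KNeighborsClassifier standing in for the lazy learner. The synthetic, class-imbalanced data below merely substitutes for the chat-derived feature matrix and will not reproduce the reported precision and recall.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate

# Imbalanced toy data (bots are rare), standing in for chat-derived features.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "lazy learning (k-NN)": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring=("precision", "recall"))
    print(name,
          "precision=%.3f" % scores["test_precision"].mean(),
          "recall=%.3f" % scores["test_recall"].mean())
```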

Instance Based Learning Revisited: Feature Weighting and its Applications

  • Song Doo-Heon;Lee Chang-Hun
    • 한국멀티미디어학회논문지 / Vol. 9, No. 6 / pp.762-772 / 2006
  • The instance-based learning algorithm is the best-known lazy learner and has been successfully used in many areas such as pattern analysis, medical analysis, bioinformatics, and internet applications. However, its feature weighting scheme is so naive that many extensions have been proposed. Our version of IB3, named eXtended IBL (XIBL), improves the feature weighting scheme by backward stepwise regression and the distance function by the VDM family, which avoids overestimating discrete-valued attributes. XIBL also adopts leave-one-out as its noise filtering scheme. Experiments on common artificial domains show that XIBL is better than the original IBL in terms of accuracy and noise tolerance. XIBL is applied to two important applications, intrusion detection and spam mail filtering, and the results are promising.
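
The VDM family of distance functions mentioned above compares discrete attribute values through their class-conditional distributions rather than by simple mismatch counting. The sketch below shows that idea with a plain value difference metric and per-feature weights; XIBL's backward stepwise weighting and leave-one-out filtering are not reproduced, and the toy data is an assumption.

```python
import numpy as np
from collections import Counter, defaultdict

def vdm_tables(X, y):
    # For each discrete attribute, estimate P(class | attribute value).
    classes = sorted(set(y))
    tables = []
    for a in range(X.shape[1]):
        counts = defaultdict(Counter)
        for val, label in zip(X[:, a], y):
            counts[val][label] += 1
        tables.append({v: np.array([c[cl] for cl in classes], float) / sum(c.values())
                       for v, c in counts.items()})
    return tables

def vdm_distance(x1, x2, tables, weights, q=2):
    # Feature-weighted value difference metric between two discrete cases.
    d = 0.0
    for a, table in enumerate(tables):
        p1, p2 = table[x1[a]], table[x2[a]]
        d += weights[a] * np.sum(np.abs(p1 - p2) ** q)
    return d

X = np.array([["red", "small"], ["red", "large"], ["blue", "small"]])
y = np.array([0, 1, 0])
tables = vdm_tables(X, y)
print(vdm_distance(X[0], X[2], tables, weights=[1.0, 0.5]))
```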


사례기반 추론을 위한 동적 속성 가중치 부여 방법 (A Dynamic Feature Weighting Method for Case-based Reasoning)

  • 이재식;전용준
    • 지능정보연구 / Vol. 7, No. 1 / pp.47-61 / 2001
  • Lazy learning techniques such as case-based reasoning have several advantages over eager learning techniques such as artificial neural networks and decision trees. However, lazy learning techniques suffer degraded performance when the case representation includes features of low relevance. To overcome this drawback, feature weighting methods have been studied. Most existing feature weighting methods assign feature weights globally. This study proposes CBDFW, a new local feature weighting method. CBDFW stores randomly generated feature weights together with whether they led to successful classification; when a new case is given, it retrieves the weights that produced successful classifications and dynamically generates new weights from them. In experiments on credit evaluation data, CBDFW outperformed the classification accuracies reported in previous studies.
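
The retrieval step of the dynamic weighting scheme described above can be sketched as a memory of (case, feature weights, success) records from which the successful weights of similar past cases are averaged for a new case. The class, the averaging rule, and the random fallback below are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

class WeightMemory:
    """Memory of (case, feature weights, success) records; for a new case,
    average the weight vectors that succeeded on its nearest stored cases.
    A rough sketch of the retrieval idea only."""

    def __init__(self, n_features):
        self.n_features = n_features
        self.cases, self.weights, self.success = [], [], []

    def remember(self, case, weights, succeeded):
        self.cases.append(np.asarray(case, float))
        self.weights.append(np.asarray(weights, float))
        self.success.append(bool(succeeded))

    def propose(self, case, k=5):
        # Fall back to random weights until successful records exist.
        if not any(self.success):
            return rng.random(self.n_features)
        d = np.linalg.norm(np.array(self.cases) - np.asarray(case, float), axis=1)
        idx = np.argsort(d)[:k]
        chosen = [self.weights[i] for i in idx if self.success[i]]
        return np.mean(chosen, axis=0) if chosen else rng.random(self.n_features)

mem = WeightMemory(n_features=3)
mem.remember([0.1, 0.2, 0.9], [1.0, 1.0, 0.1], succeeded=True)
print(mem.propose([0.15, 0.25, 0.8]))
```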


Plurality Rule-based Density and Correlation Coefficient-based Clustering for K-NN

  • Aung, Swe Swe;Nagayama, Itaru;Tamaki, Shiro
    • IEIE Transactions on Smart Processing and Computing / Vol. 6, No. 3 / pp.183-192 / 2017
  • k-nearest neighbor (K-NN) is a well-known classification algorithm in machine learning that classifies a sample based on its nearest training examples in the feature space. However, K-NN is a lazy learning method. Therefore, if a K-NN-based system depends heavily on a huge amount of historical data to achieve an accurate prediction result for a particular task, it gradually faces a processing-time performance-degradation problem. We have noticed that many researchers consider only classification accuracy, but estimation speed also plays an essential role in real-time prediction systems. To compensate for this weakness, this paper proposes correlation coefficient-based clustering (CCC), aimed at upgrading the performance of K-NN by improving processing-time speed, and plurality rule-based density (PRD) to improve estimation accuracy. For experiments, we used real datasets (on breast cancer, breast tissue, heart, and the iris) from the University of California, Irvine (UCI) machine learning repository. Moreover, real traffic data collected from Ojana Junction, Route 58, Okinawa, Japan, was also utilized to demonstrate the efficiency of this method. Using these datasets, we showed better processing-time performance with the new approach by comparing it with classical K-NN. Besides, via experiments on real-world datasets, we compared the prediction accuracy of our approach with density peaks clustering based on K-NN and principal component analysis (DPC-KNN-PCA).
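
The speed-up idea in this abstract, searching only a relevant part of the training set instead of all of it, can be sketched with scikit-learn. KMeans below stands in for the paper's correlation coefficient-based clustering, which is not reproduced, and the plurality rule-based density step is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

def predict(query, k=5):
    # Route the query to its nearest cluster, then run plain k-NN inside it,
    # so distances are computed over a fraction of the training set.
    c = km.predict(query.reshape(1, -1))[0]
    mask = km.labels_ == c
    knn = KNeighborsClassifier(n_neighbors=int(min(k, mask.sum())))
    knn.fit(X[mask], y[mask])
    return knn.predict(query.reshape(1, -1))[0]

print(predict(X[0]), y[0])
```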

유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습 (Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm)

  • 김상훈;정병희;이건호
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 7, No. 9 / pp.351-360 / 2018
  • Locally weighted regression (LWR), traditionally a lazy learning model, obtains a prediction for a query point by fitting a regression over a short interval, learning from the training data within a certain range with weights that vary according to the distance from the query point. This study proposes an incremental ensemble learning process for LWR, a form of memory-based learning. The proposed incremental ensemble learning method sequentially generates and integrates LWR models over time using a genetic algorithm. A limitation of conventional LWR is that multiple LWR models can be generated depending on the indicator function and the selection of training data, and the quality of the prediction can vary with the chosen model; nevertheless, the problems of selecting or combining multiple LWR models have not been studied. In this study, after initial LWR models are generated according to indicator functions and training data, an evolutionary learning process is repeated to select appropriate indicator functions, and the bias caused by the training data is overcome by evaluating and improving LWR models applied to other training data. The method takes an eager learning style in that LWR models are incrementally generated and stored as data arrive for every interval. To obtain a prediction at a particular time, an LWR model is built from the data newly generated within the interval and is then combined with the existing LWR models of that interval using the genetic algorithm. The proposed learning method shows better fitness evaluation results than the existing approach of selecting among multiple LWR models by simple averaging. Using real data such as hourly traffic volumes in a specific area and hourly sales of a highway service area, the connected pattern of the results produced by the proposed LWR is compared with predictions obtained by multiple regression analysis.
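
A minimal sketch of the base LWR model described above (a weighted least-squares fit around the query point, with weights decaying with distance) is given below; the indicator functions and the genetic-algorithm ensemble are not reproduced, and the Gaussian kernel, bandwidth, and toy data are assumptions.

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.3):
    # Locally weighted linear regression at a single query point:
    # weight each training example by a Gaussian kernel of its distance
    # to the query, then solve a weighted least-squares fit.
    w = np.exp(-(X - x_query) ** 2 / (2 * tau ** 2))   # 1-D inputs here
    A = np.column_stack([np.ones_like(X), X])          # bias + slope terms
    W = np.diag(w)
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return theta[0] + theta[1] * x_query

# Toy 1-D example: noisy sine curve.
rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0, 2 * np.pi, 80))
y = np.sin(X) + rng.normal(scale=0.1, size=X.shape)
print(lwr_predict(np.pi / 2, X, y))   # should be close to 1.0
```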

Prediction of concrete compressive strength using non-destructive test results

  • Erdal, Hamit;Erdal, Mursel;Simsek, Osman;Erdal, Halil Ibrahim
    • Computers and Concrete / Vol. 21, No. 4 / pp.407-417 / 2018
  • Concrete, a composite material, is one of the most important construction materials. Compressive strength is a commonly used parameter for the assessment of concrete quality, and its accurate prediction is an important issue. In this study, we utilized an experimental procedure for the assessment of concrete quality. First, the concrete mix was prepared according to C 20 type concrete, and the slump of the fresh concrete was about 20 cm. After placement of the fresh concrete into formworks, compaction was achieved using a vibrating screed. After a 28-day period, a total of 100 core samples of 75 mm diameter were extracted. Pulse velocity determination tests and compressive strength tests were performed on the core samples; Windsor probe penetration tests and Schmidt hammer tests were also performed. After setting up the data set, twelve artificial intelligence (AI) models were compared for predicting the concrete compressive strength. These models fall into three categories: (i) functions (i.e., Linear Regression, Simple Linear Regression, Multilayer Perceptron, Support Vector Regression), (ii) lazy-learning algorithms (i.e., IBk Linear NN Search, KStar, Locally Weighted Learning), and (iii) tree-based learning algorithms (i.e., Decision Stump, Model Trees Regression, Random Forest, Random Tree, Reduced Error Pruning Tree). Four validation procedures (i.e., 10-fold cross validation, 5-fold cross validation, 10% split-sample validation, and 20% split-sample validation) are used to examine the performance of the predictive models. This study shows that machine learning regression techniques are promising tools for predicting the compressive strength of concrete.
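
The comparison setup can be sketched with one scikit-learn regressor from each of the three categories, evaluated by 10-fold cross validation. The synthetic data below only stands in for the pulse velocity, Windsor probe, and Schmidt hammer measurements; it is not the study's data set, and the models are generic stand-ins rather than the twelve listed.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor      # lazy learner (IBk-like)
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))                           # 3 assumed NDT measurements
strength = 25 + 4 * X[:, 0] + 3 * X[:, 1] + rng.normal(scale=2, size=100)

for name, model in [("linear regression", LinearRegression()),
                    ("k-NN (lazy)", KNeighborsRegressor(n_neighbors=5)),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, strength, cv=10, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```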

Control of pH Neutralization Process using Simulation Based Dynamic Programming in Simulation and Experiment (ICCAS 2004)

  • Kim, Dong-Kyu;Lee, Kwang-Soon;Yang, Dae-Ryook
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2004년도 ICCAS / pp.620-626 / 2004
  • For general nonlinear processes, control with a linear model-based method is difficult, so nonlinear control is considered. Among the numerous approaches suggested, the most rigorous is dynamic optimization. Many general engineering problems, such as control, scheduling, and planning, are expressed as functional optimization problems, and most of them can be converted into dynamic programming (DP) problems. However, DP is used in only a few cases because, as the size of the problem grows, the dynamic programming approach suffers from a computational burden known as the 'curse of dimensionality'. To avoid this problem, the neuro-dynamic programming (NDP) approach was proposed by Bertsekas and Tsitsiklis (1996). To obtain solutions for seriously nonlinear process control, interest in the NDP approach has grown, and NDP algorithms have been applied to diverse areas such as retailing, finance, inventory management, and communication networks, and have been extended to chemical engineering. In the NDP approach, we select the optimal control input policy to minimize the cost, calculated as the sum of the current stage cost and the cost of future stages starting from the next state. The cost value is related to a weighted square sum of the error and the input movement. During the calculation of the optimal input policy, if an approximate cost function built from simulation data is used with Bellman iteration, the computational burden can be relieved and the curse-of-dimensionality problem of DP can be overcome. How to construct a cost-to-go function with good approximation performance is a very important issue. The neural network is one of the eager learning methods and works as a global approximator of the cost-to-go function. In this algorithm, training the neural network is an important and difficult part and has a significant effect on control performance. To avoid the difficulty of neural network training, a lazy learning method such as the k-nearest neighbor method can be exploited. Training is unnecessary for this method, but it requires more computation time and greater data storage. The pH neutralization process has long been taken as a representative benchmark problem of nonlinear chemical process control due to its nonlinearity and time-varying nature. In this study, the NDP algorithm was applied to the pH neutralization process. First, control of the pH neutralization process using the NDP algorithm was examined through simulations with various approximators; both global and local approximators were used for the NDP calculation. After that, the NDP approach was verified on the real system by a pH neutralization experiment. The control results of the NDP algorithm were compared with those of the traditionally used PI controller in both simulations and experiments. The comparison shows that control by the NDP algorithm achieved faster and better performance than the PI controller. In addition, the NDP algorithm gave good results when applied to cases with disturbances and multiple set point changes.
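
A toy illustration of the lazy cost-to-go approximation mentioned above is sketched below: fitted value iteration on a simple one-dimensional regulation problem, where the cost-to-go at successor states is read off by a k-nearest-neighbor regressor over sampled states. The dynamics, stage cost, discretization, and discount factor are assumptions for illustration, not the pH process model or the authors' NDP implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

gamma = 0.95
actions = np.linspace(-1.0, 1.0, 11)
step = lambda x, u: 0.9 * x + u                 # toy linear dynamics
cost = lambda x, u: x ** 2 + 0.1 * u ** 2       # quadratic stage cost

states = np.linspace(-5, 5, 201)                # sampled states
J = np.zeros_like(states)                       # cost-to-go samples

for _ in range(100):                            # Bellman iteration
    # Lazy (k-NN) approximator of the current cost-to-go estimate.
    approx = KNeighborsRegressor(n_neighbors=3).fit(states.reshape(-1, 1), J)
    J_new = np.empty_like(J)
    for i, x in enumerate(states):
        nxt = step(x, actions)
        q = cost(x, actions) + gamma * approx.predict(nxt.reshape(-1, 1))
        J_new[i] = q.min()                      # greedy Bellman backup
    J = J_new

print("cost-to-go at x = 0:", J[np.argmin(np.abs(states))])  # should stay near zero
```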
