
An Information-theoretic Approach for Value-Based Weighting in Naive Bayesian Learning (나이브 베이시안 학습에서 정보이론 기반의 속성값 가중치 계산방법)

  • Lee, Chang-Hwan
    • Journal of KIISE:Databases / v.37 no.6 / pp.285-291 / 2010
  • In this paper, we propose a new paradigm of weighting methods for naive Bayesian learning: a more fine-grained method, called value weighting. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. Using the Kullback-Leibler divergence, we develop new methods for both value weighting and feature weighting in the context of naive Bayesian learning. The performance of the proposed methods has been compared with attribute weighting and standard naive Bayesian learning; the proposed method shows better performance in most cases.
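To make the value-weighting idea concrete, here is a minimal Python sketch (not the authors' code; the toy data and function names are hypothetical): each attribute value is weighted by the Kullback-Leibler divergence between the class distribution conditioned on that value and the class prior, so values that shift the class distribution strongly get larger weights.

```python
import math
from collections import Counter, defaultdict

def value_weights(X, y):
    """Weight each attribute value v of attribute j by
    KL( P(class | attribute j = v) || P(class) )."""
    n = len(y)
    prior = Counter(y)
    weights = {}
    for j in range(len(X[0])):
        # class counts conditioned on each value of attribute j
        cond = defaultdict(Counter)
        for xi, yi in zip(X, y):
            cond[xi[j]][yi] += 1
        for v, class_counts in cond.items():
            total = sum(class_counts.values())
            kl = 0.0
            for c, cnt in class_counts.items():
                p = cnt / total   # P(c | attribute j = v)
                q = prior[c] / n  # P(c), the prior
                kl += p * math.log(p / q)
            weights[(j, v)] = kl
    return weights

# tiny illustrative data set: attribute 0 determines the class,
# attribute 1 is uninformative
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "yes"]
w = value_weights(X, y)
```

In the weighted classifier, each log-likelihood term log P(v|c) would then be multiplied by its value weight w[(j, v)] instead of a single per-attribute weight.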

Learning Distribution Graphs Using a Neuro-Fuzzy Network for Naive Bayesian Classifier (퍼지신경망을 사용한 네이브 베이지안 분류기의 분산 그래프 학습)

  • Tian, Xue-Wei;Lim, Joon S.
    • Journal of Digital Convergence / v.11 no.11 / pp.409-414 / 2013
  • Naive Bayesian classifiers are a powerful and well-known type of classifier that can be easily induced from a dataset of sample cases. However, their strong conditional independence assumption can sometimes lead to weak classification performance. Naive Bayesian classifiers normally use Gaussian distributions to handle continuous attributes and to represent the likelihood of the features conditioned on the classes; the probability density of an attribute, however, is not always well fitted by a Gaussian distribution. Another prominent type of classifier is the neuro-fuzzy classifier, which can learn fuzzy rules and fuzzy sets using supervised learning. Since there are structural similarities between a neuro-fuzzy classifier and a naive Bayesian classifier, the purpose of this study is to apply distribution graphs learned by a neuro-fuzzy network to naive Bayesian classifiers. We compare Gaussian distribution graphs with fuzzy distribution graphs for the naive Bayesian classifier, applying both to the classification of leukemia and colon DNA microarray data sets. The results demonstrate that a naive Bayesian classifier with fuzzy distribution graphs is more reliable than one with Gaussian distribution graphs.
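The contrast the abstract draws can be sketched in a few lines (a toy illustration, not the paper's neuro-fuzzy network; the triangular membership function stands in for a learned fuzzy distribution graph): a Gaussian likelihood versus a fuzzy membership used as the per-attribute likelihood term in naive Bayes.

```python
import math

def gaussian_likelihood(x, mu, sigma):
    """Standard Gaussian density, the usual naive Bayes choice
    for a continuous attribute."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def triangular_membership(x, a, b, c):
    """Triangular fuzzy set: degree rises from a to a peak at b,
    then falls back to zero at c. A learned fuzzy distribution graph
    could replace the Gaussian with shapes like this."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

Either function can serve as the likelihood of a continuous feature given a class; the fuzzy version is useful exactly when, as the abstract notes, the attribute's density is not well fitted by a Gaussian.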

Gradient Descent Approach for Value-Based Weighting (점진적 하강 방법을 이용한 속성값 기반의 가중치 계산방법)

  • Lee, Chang-Hwan;Bae, Joo-Hyun
    • The KIPS Transactions:PartB / v.17B no.5 / pp.381-388 / 2010
  • Naive Bayesian learning has been widely used in many data mining applications and performs surprisingly well. However, because naive Bayesian learning assumes that all attributes are equally important, the posterior probabilities it estimates are sometimes poor. In this paper, we propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value, and we investigate how the proposed value weighting affects the performance of naive Bayesian learning. We develop new methods, based on gradient descent, for both value weighting and feature weighting. The performance of the proposed methods has been compared with attribute weighting and standard naive Bayesian learning, and the value weighting method performed better in most cases.
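A minimal sketch of learning per-value weights by gradient descent (illustrative only; the loss, learning rate, and feature encoding are assumptions, not the paper's formulation): each attribute value contributes a log-likelihood term, the weighted sum is squashed to a posterior estimate, and one squared-error gradient step adjusts the weights.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_step(w, features, target, lr=0.1):
    """One gradient-descent update of per-value weights w.
    features[k] is the log-likelihood contribution of attribute value k;
    the weighted sum is squashed to a posterior estimate p, and each
    weight moves down the gradient of the squared error (p - target)^2."""
    z = sum(w[k] * f for k, f in features.items())
    p = sigmoid(z)
    grad_common = 2 * (p - target) * p * (1 - p)
    for k, f in features.items():
        w[k] -= lr * grad_common * f
    return w, p

# one hypothetical training example: a single active attribute value
w = {("outlook", "sunny"): 0.0}
features = {("outlook", "sunny"): 1.0}
w, p = gradient_step(w, features, target=1.0)
```

Starting from a zero weight the prediction is 0.5; since the target is 1 and the feature is positive, the step increases the weight, as expected for gradient descent on this loss.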

Calculating the Importance of Attributes in Naive Bayesian Classification Learning (나이브 베이시안 분류학습에서 속성의 중요도 계산방법)

  • Lee, Chang-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.5 / pp.83-87 / 2011
  • Naive Bayesian learning has been widely used in machine learning. However, traditional naive Bayesian learning makes two assumptions: (1) the attributes are independent of one another, and (2) all attributes are equally important for learning. In reality, however, not all attributes are equally important. In this paper, we propose a new paradigm for calculating the importance of attributes in naive Bayesian learning. The performance of the proposed method has been compared with that of other methods, including SBC and standard naive Bayesian learning; the proposed method shows better performance in most cases.
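One common way to score attribute importance (a sketch of the general idea, not necessarily the measure this paper proposes) is the mutual information between an attribute and the class: an attribute that determines the class scores high, an irrelevant one scores zero.

```python
import math
from collections import Counter

def attribute_importance(column, y):
    """Importance of one attribute as the mutual information I(A; C)
    between its values and the class labels."""
    n = len(y)
    pa = Counter(column)           # value counts
    pc = Counter(y)                # class counts
    pac = Counter(zip(column, y))  # joint counts
    mi = 0.0
    for (a, c), cnt in pac.items():
        p_joint = cnt / n
        mi += p_joint * math.log(p_joint / ((pa[a] / n) * (pc[c] / n)))
    return mi
```

A weighted naive Bayesian classifier would then raise each attribute's likelihood term to a power derived from this importance, so uninformative attributes contribute little to the posterior.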

PERFORMANCE EVALUATION OF INFORMATION CRITERIA FOR THE NAIVE-BAYES MODEL IN THE CASE OF LATENT CLASS ANALYSIS: A MONTE CARLO STUDY

  • Dias, Jose G.
    • Journal of the Korean Statistical Society / v.36 no.3 / pp.435-445 / 2007
  • This paper addresses for the first time the use of complete-data information criteria in unsupervised learning of the Naive-Bayes model. A Monte Carlo study with a large experimental design, unusual in the Bayesian network literature, assesses these criteria. The simulation results show that complete-data information criteria underperform the Bayesian information criterion (BIC) for these Bayesian networks.
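For reference, the BIC the study favors is simple to compute and compare across candidate numbers of latent classes (the log-likelihoods and parameter counts below are made-up illustration values, not the paper's results):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; lower is better."""
    return -2 * log_likelihood + n_params * math.log(n_obs)

# hypothetical candidate models: k latent classes -> (log-likelihood, #parameters)
candidates = {2: (-120.0, 9), 3: (-115.0, 14), 4: (-114.0, 19)}
n_obs = 100
best_k = min(candidates, key=lambda k: bic(*candidates[k], n_obs))
```

The penalty term grows with both the parameter count and log of the sample size, so the richer models must buy their extra parameters with a sufficiently large likelihood gain.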

User and Item based Collaborative Filtering Using Classification Property Naive Bayesian (분류 속성과 Naive Bayesian을 이용한 사용자와 아이템 기반의 협력적 필터링)

  • Kim, Jong-Hun;Kim, Yong-Jip;Rim, Kee-Wook;Lee, Jung-Hyun;Chung, Kyung-Yong
    • The Journal of the Korea Contents Association / v.7 no.11 / pp.23-33 / 2007
  • Collaborative filtering has traditionally used the nearest-neighbor method based on preferences and on similarity computed with the Pearson correlation coefficient. It therefore does not reflect the content of items and also suffers from the problems of sparsity and scalability. Item-based collaborative filtering has been used in practice to mitigate these defects, but it still does not reflect the attributes of items. In this paper, we propose user- and item-based collaborative filtering using classification properties and Naive Bayesian to overcome the defects of existing recommendation systems. To handle the sparsity problem, the proposed method combines item similarity based on explicit data with user similarity based on implicit data, and applies Naive Bayesian to the combined result. It also improves accuracy by reflecting the correlation ranks among the classification properties when computing item similarity, so that item attributes are taken into account.
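A toy sketch of the Naive Bayesian step in such a recommender (purely illustrative; the data, smoothing, and binary like/dislike framing are assumptions, not the paper's method): other users' profiles serve as training cases for predicting whether the active user will like a target item.

```python
import math

def nb_predict_like(ratings, target_item, user_profile, alpha=1.0):
    """Naive Bayes over other users' profiles: predict whether the active
    user will like `target_item` given the items they already liked.
    ratings: {user: set of liked items}."""
    classes = {True: [], False: []}
    for user, liked in ratings.items():
        classes[target_item in liked].append(liked)
    scores = {}
    for cls, profiles in classes.items():
        n = len(profiles)
        # Laplace-smoothed class prior and per-item likelihoods, in log space
        score = math.log((n + alpha) / (len(ratings) + 2 * alpha))
        for item in user_profile:
            cnt = sum(1 for p in profiles if item in p)
            score += math.log((cnt + alpha) / (n + 2 * alpha))
        scores[cls] = score
    return scores[True] > scores[False]

# hypothetical ratings: users who liked A also liked B
ratings = {"u1": {"A", "B"}, "u2": {"A", "B"}, "u3": {"C"}}
```

With this data, a user who liked A is predicted to like B, while a user who only liked C is not.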

Recommendation Method using Naive Bayesian algorithm in Hybrid User and Item based Collaborative Filtering (사용자와 아이템의 혼합 협력적 필터링에서 Naive Bayesian 알고리즘을 이용한 추천 방법)

  • 김용집;정경용;한승진;고종철;이정현
    • Proceedings of the Korean Information Science Society Conference / pp.184-186 / 2003
  • Research has sought to use item-based collaborative filtering to address the sparsity and scalability problems of conventional user-based collaborative filtering. Although much progress has been made, sparsity persists because the approach still relies on explicit data, and item attributes are not reflected. In this paper, to remedy these shortcomings of item-based collaborative filtering, we propose a recommendation method that uses a Naive Bayesian algorithm in hybrid user- and item-based collaborative filtering. The proposed method builds similarity lookup tables for each user and item, and then improves performance by predicting and recommending items with the Naive Bayesian algorithm. For performance evaluation, the method was compared with existing item-based collaborative filtering techniques.


Performance Comparison of Naive Bayesian Learning and Centroid-Based Classification for e-Mail Classification (전자메일 분류를 위한 나이브 베이지안 학습과 중심점 기반 분류의 성능 비교)

  • Kim, Kuk-Pyo;Kwon, Young-S.
    • IE interfaces / v.18 no.1 / pp.10-21 / 2005
  • With the increasing proliferation of the World Wide Web, electronic mail systems have become very widely used communication tools. Research on e-mail classification is important because an e-mail classification system is the major engine of an e-mail response management system, which mines unstructured e-mail messages and automatically categorizes them. In this research, we compare the performance of Naive Bayesian learning and Centroid-Based Classification using data sets from an on-line shopping mall and a credit card company, and analyze which method performs better under which conditions. We compared their classification accuracy as a function of the structure and size of the training set and the number of classes. The experimental results indicate that Naive Bayesian learning achieves higher accuracy, while Centroid-Based Classification is more robust in terms of classification accuracy.
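The centroid-based side of the comparison is the less familiar of the two, so here is a minimal sketch (illustrative only; the tokenized toy documents are hypothetical): each class is summarized by the average term-frequency vector of its training documents, and a new message goes to the class whose centroid is closest by cosine similarity.

```python
import math
from collections import Counter

def centroid(docs):
    """Average term-frequency vector of a class (docs are token lists)."""
    total = Counter()
    for d in docs:
        total.update(d)
    return {t: c / len(docs) for t, c in total.items()}

def cosine(u, v):
    dot = sum(u.get(t, 0) * v.get(t, 0) for t in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(doc, centroids):
    """Assign the document to the class with the most similar centroid."""
    tf = Counter(doc)
    return max(centroids, key=lambda c: cosine(tf, centroids[c]))
```

Unlike Naive Bayesian learning, which multiplies per-term likelihoods, this method reduces each class to a single prototype vector, which is part of why the two behave differently as training-set size and class count change.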

A Design of FHIDS(Fuzzy logic based Hybrid Intrusion Detection System) using Naive Bayesian and Data Mining (나이브 베이지안과 데이터 마이닝을 이용한 FHIDS(Fuzzy Logic based Hybrid Intrusion Detection System) 설계)

  • Lee, Byung-Kwan;Jeong, Eun-Hee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.5 no.3 / pp.158-163 / 2012
  • This paper proposes the design of an FHIDS (Fuzzy logic based Hybrid Intrusion Detection System) that detects anomaly and misuse attacks by using a Naive Bayesian algorithm, data mining, and fuzzy logic. The NB-AAD (Naive Bayesian based Anomaly Attack Detection) technique within the FHIDS detects anomaly attacks with a Naive Bayesian algorithm. The DM-MAD (Data Mining based Misuse Attack Detection) technique analyzes the correlation rules among packets and detects new or transformed attacks by generating new rule-based patterns or by extracting the transformed rule-based patterns. The FLD (Fuzzy Logic based Decision) technique judges the attacks by using the results of NB-AAD and DM-MAD. The FHIDS is therefore a hybrid attack detection system that improves the detection ratio for transformed attacks and reduces the false-positive ratio by making it possible to detect both anomaly and misuse attacks.
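The final fuzzy-decision stage can be sketched as follows (a toy max-min combiner with made-up membership breakpoints and threshold; the paper's actual FLD rules are not specified in this abstract): each detector's score is mapped to a fuzzy "attack" membership, and the rules are OR-ed with max.

```python
def fuzzy_decision(anomaly_score, misuse_score, threshold=0.5):
    """Toy fuzzy-logic combiner for two detector outputs in [0, 1].
    Each score's membership in 'high' ramps linearly from 0.3 to 0.7;
    the alert degree is the max (fuzzy OR) of the two rule activations."""
    high_anomaly = min(1.0, max(0.0, (anomaly_score - 0.3) / 0.4))
    high_misuse = min(1.0, max(0.0, (misuse_score - 0.3) / 0.4))
    attack_degree = max(high_anomaly, high_misuse)
    return attack_degree >= threshold, attack_degree
```

A strong signal from either the anomaly detector (NB-AAD) or the misuse detector (DM-MAD) is enough to raise an alert, while two weak signals stay below the threshold.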

A Naive Bayesian-based Model of the Opponent's Policy for Efficient Multiagent Reinforcement Learning (효율적인 멀티 에이전트 강화 학습을 위한 나이브 베이지안 기반 상대 정책 모델)

  • Kwon, Ki-Duk
    • Journal of Internet Computing and Services / v.9 no.6 / pp.165-177 / 2008
  • An important issue in multiagent reinforcement learning is how an agent should learn its optimal policy in a dynamic environment where other agents can influence its performance. Most previous work on multiagent reinforcement learning tends to apply single-agent reinforcement learning techniques without any extension, or requires unrealistic assumptions even when explicit models of other agents are used. In this paper, a Naive Bayesian policy model of the opponent agent is introduced, and a multiagent reinforcement learning method using this model is explained. Unlike previous work, the proposed method utilizes a Naive Bayesian model of the opponent's policy rather than a model of its Q function. Moreover, it can improve learning efficiency because this model is simpler than richer but time-consuming policy models such as finite state machines (FSMs) and Markov chains. The Cat and Mouse game is introduced as an adversarial multiagent environment, and the effectiveness of the proposed Naive Bayesian policy model is analyzed through experiments using this game as a test-bed.
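A minimal sketch of an opponent policy model (a simplification: the paper's Naive Bayesian model would factor the state into features, whereas here the state is treated as atomic, with Laplace smoothing over the action set): observed state-action pairs are counted, and the learning agent queries the model for the opponent's most probable next action.

```python
from collections import Counter, defaultdict

class OpponentModel:
    """Smoothed frequency model of the opponent's policy P(action | state)."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.counts = defaultdict(Counter)

    def observe(self, state, action):
        """Record one observed opponent move."""
        self.counts[state][action] += 1

    def prob(self, state, action):
        """Laplace-smoothed estimate of P(action | state)."""
        c = self.counts[state]
        return (c[action] + 1) / (sum(c.values()) + len(self.actions))

    def predict(self, state):
        """Opponent's most probable action in this state."""
        return max(self.actions, key=lambda a: self.prob(state, a))
```

In a game like Cat and Mouse, the learning agent would fold these predicted opponent moves into its own value updates instead of maintaining a full Q-function model of the opponent.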
