• Title/Summary/Keyword: naive Bayes

Text-independent Speaker Identification Using Soft Bag-of-Words Feature Representation

  • Jiang, Shuangshuang;Frigui, Hichem;Calhoun, Aaron W.
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.240-248
    • /
    • 2014
  • We present a robust speaker identification algorithm that uses novel features based on a soft bag-of-words representation and a simple Naive Bayes classifier. The bag-of-words (BoW) based histogram feature descriptor is typically constructed by summarizing and identifying representative prototypes from low-level spectral features extracted from training data. In this paper, we define a generalization of the standard BoW. In particular, we define three types of BoW that are based on crisp voting, fuzzy memberships, and possibilistic memberships. We analyze our mapping with three common classifiers: Naive Bayes classifier (NB); K-nearest neighbor classifier (KNN); and support vector machines (SVM). The proposed algorithms are evaluated using large datasets that simulate medical crises. We show that the proposed soft bag-of-words feature representation approach achieves a significant improvement when compared to state-of-the-art methods.
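
For orientation, the sketch below shows one plausible way to realize a soft bag-of-words pipeline with a naive Bayes classifier in scikit-learn: MFCC-like frame features are quantized against a learned codebook with a Gaussian-kernel soft assignment (an illustrative stand-in for the paper's fuzzy/possibilistic memberships, not the authors' exact formulation), and the resulting histograms are fed to a naive Bayes model. All data here are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def soft_bow(frames, codebook, gamma=1.0):
    """Map a (n_frames, n_dims) matrix of spectral features to one soft histogram."""
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    d2 -= d2.min(axis=1, keepdims=True)          # per-row shift for numerical stability
    w = np.exp(-gamma * d2)                      # soft membership of each frame in each codeword
    w /= w.sum(axis=1, keepdims=True)
    return w.sum(axis=0) / len(frames)           # average memberships into a histogram

# toy data: each "utterance" is a block of MFCC-like frames labelled by speaker
rng = np.random.default_rng(0)
utterances = [rng.normal(s, 1.0, size=(200, 13)) for s in (0, 1, 2) for _ in range(10)]
speakers = [s for s in (0, 1, 2) for _ in range(10)]

codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    np.vstack(utterances)).cluster_centers_
X = np.array([soft_bow(u, codebook) for u in utterances])

clf = GaussianNB().fit(X, speakers)
print(clf.predict(X[:3]))                        # first three utterances belong to speaker 0
```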

Selecting Machine Learning Model Based on Natural Language Processing for Shanghanlun Diagnostic System Classification (자연어 처리 기반 『상한론(傷寒論)』 변병진단체계(辨病診斷體系) 분류를 위한 기계학습 모델 선정)

  • Young-Nam Kim
    • 대한상한금궤의학회지
    • /
    • v.14 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • Objective: The purpose of this study is to explore the most suitable machine learning model algorithm for Shanghanlun diagnostic system classification using natural language processing (NLP). Methods: A total of 201 data items were collected from 『Shanghanlun』 and 『Clinical Shanghanlun』; 'Taeyangbyeong-gyeolhyung' and 'Eumyangyeokchahunobokbyeong' were excluded to prevent oversampling or undersampling. The data were preprocessed with the Twitter Korean tokenizer and trained by logistic regression, ridge regression, lasso regression, naive Bayes classifier, decision tree, and random forest algorithms. The accuracies of the models were then compared. Results: Ridge regression and the naive Bayes classifier showed an accuracy of 0.843, logistic regression and random forest showed an accuracy of 0.804, decision tree showed an accuracy of 0.745, and lasso regression showed an accuracy of 0.608. Conclusions: Ridge regression and the naive Bayes classifier are suitable NLP machine learning models for Shanghanlun diagnostic system classification.
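
As a rough illustration of the comparison described above, the sketch below cross-validates ridge, naive Bayes, and logistic regression text classifiers over TF-IDF features in scikit-learn. The toy English clauses stand in for the Korean Shanghanlun clauses and the default tokenizer stands in for the Twitter Korean (Okt) tokenizer; it is not the study's exact pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# placeholder clause texts and diagnosis labels; the study used 201 clauses
texts = ["fever headache chills", "abdominal pain diarrhea",
         "cough sore throat", "fever cough sweating"] * 10
labels = [0, 1, 2, 0] * 10

models = {
    "ridge": RidgeClassifier(),
    "naive_bayes": MultinomialNB(),
    "logistic": LogisticRegression(max_iter=1000),
}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)   # tokenize + TF-IDF, then classify
    acc = cross_val_score(pipe, texts, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```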

Identifying the Optimal Machine Learning Algorithm for Breast Cancer Prediction

  • ByungJoo Kim
    • International journal of advanced smart convergence
    • /
    • v.13 no.3
    • /
    • pp.80-88
    • /
    • 2024
  • Breast cancer remains a significant global health burden, necessitating accurate and timely detection for improved patient outcomes. Machine learning techniques have demonstrated remarkable potential in assisting breast cancer diagnosis by learning complex patterns from multi-modal patient data. This study comprehensively evaluates several popular machine learning models, including logistic regression, decision trees, random forests, support vector machines (SVMs), naive Bayes, k-nearest neighbors (KNN), XGBoost, and ensemble methods for breast cancer prediction using the Wisconsin Breast Cancer Dataset (WBCD). Through rigorous benchmarking across metrics like accuracy, precision, recall, F1-score, and area under the ROC curve (AUC), we identify the naive Bayes classifier as the top-performing model, achieving an accuracy of 0.974, F1-score of 0.979, and highest AUC of 0.988. Other strong performers include logistic regression, random forests, and XGBoost, with AUC values exceeding 0.95. Our findings showcase the significant potential of machine learning, particularly the robust naive Bayes algorithm, to provide highly accurate and reliable breast cancer screening from fine needle aspirate (FNA) samples, ultimately enabling earlier intervention and optimized treatment strategies.
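
A minimal sketch of this kind of benchmark, using the copy of the Wisconsin Breast Cancer data bundled with scikit-learn and a Gaussian naive Bayes model; the study's exact split, preprocessing, and ensemble comparisons are not reproduced here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

nb = GaussianNB().fit(X_tr, y_tr)                 # Gaussian NB on the 30 FNA features
pred = nb.predict(X_te)
proba = nb.predict_proba(X_te)[:, 1]              # probability of the malignant class

print("accuracy:", accuracy_score(y_te, pred))
print("F1-score:", f1_score(y_te, pred))
print("AUC:     ", roc_auc_score(y_te, proba))
```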

An Active Learning-based Method for Composing Training Document Set in Bayesian Text Classification Systems (베이지언 문서분류시스템을 위한 능동적 학습 기반의 학습문서집합 구성방법)

  • 김제욱;김한준;이상구
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.12
    • /
    • pp.966-978
    • /
    • 2002
  • There are two important problems in improving text classification systems based on a machine learning approach. The first, called the "selection problem", is how to select a minimum number of informative documents from a given document collection. The second, called the "composition problem", is how to reorganize the selected training documents so that they fit the adopted learning method. The former is addressed by "active learning" algorithms, and the latter by "boosting" algorithms. This paper proposes a new learning method, called AdaBUS, which proactively solves both problems in the context of Naive Bayes classification systems. The proposed method constructs a more accurate classification hypothesis by increasing the variance of the "weak" hypotheses that determine the final classification hypothesis. Consequently, the proposed algorithm yields a perturbation effect that makes the boosting algorithm work properly. Through empirical experiments on the Reuters-21578 document collection, we show that the AdaBUS algorithm improves the Naive Bayes-based classification system significantly more than other conventional learning methods.
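
The abstract does not spell out AdaBUS itself, so the sketch below only illustrates the generic "selection problem" with pool-based uncertainty sampling around a naive Bayes text classifier: the documents the current model is least certain about are queried and added to the training set. The toy two-topic corpus is a placeholder for Reuters-21578, and nothing here reproduces the boosting-based composition step.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# toy two-topic document pool (placeholder for Reuters-21578)
rng = np.random.default_rng(0)
space_words = ["rocket", "orbit", "satellite", "launch", "astronaut"]
auto_words = ["engine", "tires", "brake", "sedan", "mileage"]
docs = [" ".join(rng.choice(space_words, 6)) for _ in range(100)] + \
       [" ".join(rng.choice(auto_words, 6)) for _ in range(100)]
y = np.array([0] * 100 + [1] * 100)

X = TfidfVectorizer().fit_transform(docs)

labeled = [0, 1, 100, 101]                         # tiny seed set covering both classes
pool = [i for i in range(len(docs)) if i not in labeled]

for _ in range(20):                                # 20 uncertainty-sampling rounds
    clf = MultinomialNB().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    margin = np.abs(proba[:, 0] - proba[:, 1])     # small margin = uncertain document
    pick = pool[int(np.argmin(margin))]            # query the least certain document
    labeled.append(pick)
    pool.remove(pick)

print("labeled set size:", len(labeled), "accuracy on pool:", clf.score(X, y))
```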

Development of Incident Detection Algorithm Using Naive Bayes Classification (나이브 베이즈 분류기를 이용한 돌발상황 검지 알고리즘 개발)

  • Kang, Sunggwan;Kwon, Bongkyung;Kwon, Cheolwoo;Park, Sangmin;Yun, Ilsoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.17 no.6
    • /
    • pp.25-39
    • /
    • 2018
  • The purpose of this study is to develop an efficient incident detection algorithm by applying machine learning, which is being widely used in the transport sector. As a first step, a network of the target site was constructed with a micro-simulation model. Second, data were collected under various incident scenarios produced with combinations of variables that are expected to affect the incident situation. Detection results from the McMaster algorithm, a well-known incident detection algorithm, and the Naive Bayes algorithm developed in this study were then compared. As a result of the comparison, the Naive Bayes algorithm showed fewer negative effects and a better detection rate (DR) than the McMaster algorithm. However, as the DR increased, so did the false alarm rate (FAR). Also, while the McMaster algorithm requires four cycles for detection, the Naive Bayes algorithm determines the situation within a single cycle, which increases the DR but also appears to increase the FAR. Consequently, the Naive Bayes algorithm is identified as having great potential for traffic incident detection.
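
For concreteness, the following sketch trains a Gaussian naive Bayes detector on hypothetical loop-detector features (speed, occupancy, volume) and reports the detection rate and false alarm rate; the simulated distributions are placeholders, not the study's micro-simulation scenarios.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n = 2000
incident = rng.random(n) < 0.1                        # ~10% of intervals are incidents
speed = np.where(incident, rng.normal(35, 8, n), rng.normal(90, 10, n))
occupancy = np.where(incident, rng.normal(40, 8, n), rng.normal(12, 4, n))
volume = np.where(incident, rng.normal(900, 150, n), rng.normal(1500, 200, n))
X = np.column_stack([speed, occupancy, volume])
y = incident.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
pred = GaussianNB().fit(X_tr, y_tr).predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("detection rate (DR):   ", tp / (tp + fn))
print("false alarm rate (FAR):", fp / (fp + tn))
```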

Effective Fingerprint Classification using Subsumed One-Vs-All Support Vector Machines and Naive Bayes Classifiers (포섭구조 일대다 지지벡터기계와 Naive Bayes 분류기를 이용한 효과적인 지문분류)

  • Hong, Jin-Hyuk;Min, Jun-Ki;Cho, Ung-Keun;Cho, Sung-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.10
    • /
    • pp.886-895
    • /
    • 2006
  • Fingerprint classification reduces the number of matches required in automated fingerprint identification systems by categorizing fingerprints into predefined classes. Support vector machines (SVMs), widely used in pattern classification, have produced a high accuracy rate when performing fingerprint classification. In order to effectively apply SVMs to multi-class fingerprint classification systems, we propose a novel method in which SVMs are generated with the one-vs-all (OVA) scheme and dynamically ordered with naïve Bayes classifiers. More specifically, it uses representative fingerprint features such as the FingerCode, singularities, and pseudo ridges to train the OVA SVMs and naïve Bayes classifiers. The proposed method has been validated on the NIST-4 database and produced a classification accuracy of 90.8% for 5-class classification. In particular, it effectively manages the tie problems that usually occur when applying OVA SVMs to multi-class classification.
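
As a simplified illustration of combining OVA SVMs with a naïve Bayes tie-breaker (an approximation of the dynamic-ordering idea, not the authors' exact procedure), the sketch below lets each per-class SVM claim a sample and resolves multiple or zero claims with the naive Bayes posterior, on generic synthetic features rather than FingerCode data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC

# generic 5-class feature vectors stand in for FingerCode/singularity features
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classes = np.unique(y_tr)
ova = {c: LinearSVC(max_iter=5000).fit(X_tr, (y_tr == c).astype(int)) for c in classes}
nb = GaussianNB().fit(X_tr, y_tr)

scores = np.column_stack([ova[c].decision_function(X_te) for c in classes])
posterior = nb.predict_proba(X_te)
claimed = scores > 0                          # which per-class SVMs claim the sample
# if at least one SVM claims the sample, keep the claimant with the highest
# naive Bayes posterior; otherwise fall back to the naive Bayes prediction
pred = np.where(claimed.any(axis=1),
                np.where(claimed, posterior, -np.inf).argmax(axis=1),
                posterior.argmax(axis=1))
print("accuracy:", (pred == y_te).mean())
```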

Improving Accuracy of Multi-label Naive Bayes Classifier (다중 레이블 나이브 베이지안 분류기의 정확도 개선 연구)

  • Kim, Hae-Choen;Lee, Jae-Sung
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2018.01a
    • /
    • pp.147-148
    • /
    • 2018
  • The multi-label classification problem is the problem of predicting the multiple labels associated with a given multi-label data instance. In this paper, label dependencies are computed and incorporated into the naive Bayes classifier, one of the techniques for multi-label classification, and the results confirm that this improves performance on multi-label classification problems.
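
For reference, the sketch below shows the plain binary-relevance baseline, one independent naive Bayes model per label; the paper's contribution of modelling label dependencies goes beyond this independence assumption.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.metrics import hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.naive_bayes import BernoulliNB

# synthetic multi-label data: each sample may carry several of 5 labels
X, Y = make_multilabel_classification(n_samples=500, n_features=30,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# binary relevance: one independent Bernoulli naive Bayes model per label
clf = MultiOutputClassifier(BernoulliNB()).fit(X_tr, Y_tr)
print("Hamming loss:", hamming_loss(Y_te, clf.predict(X_te)))
```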

Empirical Bayes Confidence Intervals of the Burr Type XII Failure Model

  • Choi, Dal-Woo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.10 no.1
    • /
    • pp.155-162
    • /
    • 1999
  • This paper is concerned with the empirical Bayes estimation of one of the two shape parameters (θ) in the Burr(β, θ) type XII failure model based on type-II censored data. We obtain bootstrap empirical Bayes confidence intervals of θ by the parametric bootstrap introduced by Laird and Louis (1987). Comparisons among the bootstrap and the naive empirical Bayes confidence intervals through a Monte Carlo study are also presented.
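
To illustrate only the parametric-bootstrap step on the Burr XII shape parameter, the sketch below fits the distribution by maximum likelihood on uncensored synthetic data and forms a percentile interval; the paper's empirical Bayes procedure for type-II censored samples is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta_true, theta_true = 2.0, 3.0
x = stats.burr12.rvs(c=beta_true, d=theta_true, size=200, random_state=rng)

# maximum-likelihood fit with location fixed at 0 and scale fixed at 1
beta_hat, theta_hat, _, _ = stats.burr12.fit(x, floc=0, fscale=1)

boot = []
for _ in range(500):
    # resample from the fitted model and refit (parametric bootstrap)
    xb = stats.burr12.rvs(c=beta_hat, d=theta_hat, size=len(x), random_state=rng)
    _, th_b, _, _ = stats.burr12.fit(xb, floc=0, fscale=1)
    boot.append(th_b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap percentile CI for theta: ({lo:.2f}, {hi:.2f})")
```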

Naive Bayes Learner for Propositionalized Attribute Taxonomy (명제화된 어트리뷰트 택소노미를 이용하는 나이브 베이스 학습 알고리즘)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.406-409
    • /
    • 2008
  • We consider the problem of exploiting a taxonomy of propositionalized attributes in order to learn compact and robust classifiers. We introduce Propositionalized Attribute Taxonomy guided Naive Bayes Learner (PAT-NBL), an inductive learning algorithm that exploits a taxonomy of propositionalized attributes as prior knowledge to generate compact and accurate classifiers. PAT-NBL uses top-down and bottom-up search to find a locally optimal cut that corresponds to the instance space from the propositionalized attribute taxonomy and data. Our experimental results on University of California-Irvine (UCI) repository data sets show that the proposed algorithm can generate classifiers that are sometimes comparable in compactness and accuracy to those produced by standard Naive Bayes learners.
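
The cut-selection idea can be pictured with the toy sketch below: candidate "cuts" partition the propositionalized (binary) attributes, grouped attributes are OR-merged, and the cut whose merged representation yields the best cross-validated naive Bayes accuracy is preferred. The actual PAT-NBL top-down/bottom-up search over an attribute taxonomy is more involved than this stand-in.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 8))              # propositionalized (binary) attributes
y = (X[:, 0] | X[:, 1] | X[:, 4]).astype(int)      # toy target concept

# each candidate cut is a partition of the 8 attributes into groups
cuts = {
    "leaves": [[i] for i in range(8)],             # no merging
    "pairs":  [[0, 1], [2, 3], [4, 5], [6, 7]],
    "coarse": [[0, 1, 2, 3], [4, 5, 6, 7]],
}

def apply_cut(X, cut):
    # merge each attribute group with a logical OR (max over binary columns)
    return np.column_stack([X[:, g].max(axis=1) for g in cut])

for name, cut in cuts.items():
    acc = cross_val_score(BernoulliNB(), apply_cut(X, cut), y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```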

Propositionalized Attribute Taxonomy Guided Naive Bayes Learning Algorithm (명제화된 어트리뷰트 택소노미를 이용하는 나이브 베이스 학습 알고리즘)

  • Kang, Dae-Ki;Cha, Kyung-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.12
    • /
    • pp.2357-2364
    • /
    • 2008
  • In this paper, we consider the problem of exploiting a taxonomy of propositionalized attributes in order to generate compact and robust classifiers. We introduce Propositionalized Attribute Taxonomy guided Naive Bayes Learner (PAT-NBL), an inductive learning algorithm that exploits a taxonomy of propositionalized attributes as prior knowledge to generate compact and accurate classifiers. PAT-NBL uses top-down and bottom-up search to find a locally optimal cut that corresponds to the instance space from the propositionalized attribute taxonomy and data. Our experimental results on University of California-Irvine (UCI) repository data sets show that the proposed algorithm can generate classifiers that are sometimes comparable in compactness and accuracy to those produced by standard Naive Bayes learners.