• Title/Summary/Keyword: the number of training documents

Classification Accuracy by Deviation-based Classification Method with the Number of Training Documents (학습문서의 개수에 따른 편차기반 분류방법의 분류 정확도)

  • Lee, Yong-Bae
    • Journal of Digital Convergence
    • /
    • v.12 no.6
    • /
    • pp.325-332
    • /
    • 2014
  • It is generally accepted that classification accuracy is affected by the number of training documents, but few studies show how this influences automatic text classification. This study focuses on evaluating the deviation-based classification model, recently developed for genre-based classification, and comparing it with other classification algorithms as the number of training documents changes. Experimental results show that the deviation-based classification model achieves a superior accuracy of 0.8 when categorizing 7 genres with only 21 training documents, exceeding the accuracy of the Bayesian and SVM classifiers. The deviation-based classification model retains strong feature selection capability even with a small number of training documents because it learns subject information within each genre, whereas the other methods use a different learning process.
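A rough sense of this kind of learning-curve experiment can be given with standard tools. The sketch below is a minimal illustration, not the paper's deviation-based model (which is not shown here): it only mimics the Naive Bayes and SVM baselines, using the 20 Newsgroups corpus as an assumed stand-in for the genre collection, and reports accuracy as the number of training documents per category grows.

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups        # assumed stand-in corpus, not the paper's genre data
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

categories = ["rec.autos", "sci.med", "talk.politics.misc"]
train = fetch_20newsgroups(subset="train", categories=categories,
                           remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", categories=categories,
                          remove=("headers", "footers", "quotes"))
docs, labels = np.array(train.data, dtype=object), train.target

rng = np.random.default_rng(0)
for n_per_class in (3, 10, 50, 200):                    # training documents per category
    idx = np.concatenate([rng.choice(np.where(labels == c)[0], n_per_class, replace=False)
                          for c in range(len(categories))])
    vec = TfidfVectorizer(max_features=5000)
    X_tr, X_te = vec.fit_transform(docs[idx]), vec.transform(test.data)
    for name, clf in (("NaiveBayes", MultinomialNB()), ("LinearSVM", LinearSVC())):
        acc = accuracy_score(test.target, clf.fit(X_tr, labels[idx]).predict(X_te))
        print(f"{n_per_class:4d} docs/class  {name:10s} accuracy = {acc:.3f}")
```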

A Study on Incremental Learning Model for Naive Bayes Text Classifier (Naive Bayes 문서 분류기를 위한 점진적 학습 모델 연구)

  • 김제욱;김한준;이상구
    • The Journal of Information Technology and Database
    • /
    • v.8 no.1
    • /
    • pp.95-104
    • /
    • 2001
  • In the text classification domain, labeling the training documents is an expensive process because it requires human expertise and is a tedious, time-consuming task. Therefore, it is important to reduce the manual labeling of training documents while improving the text classifier. Selective sampling, a form of active learning, reduces the number of training documents that need to be labeled by examining the unlabeled documents and selecting the most informative ones for manual labeling. We apply this methodology to Naive Bayes, a text classifier renowned as a successful method in text classification. One of the most important issues in selective sampling is determining the criterion for selecting training documents from the large pool of unlabeled documents. In this paper, we propose two measures that determine this criterion: the Mean Absolute Deviation (MAD) and the entropy measure. The experimental results, using the Reuters-21578 corpus, show that the proposed learning method improves the Naive Bayes text classifier more than existing methods do.
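Of the two selection criteria, the entropy measure is the easier one to illustrate. The sketch below is a minimal, assumed version of entropy-based selective sampling around a scikit-learn Naive Bayes classifier: the toy documents and two categories are placeholders rather than the Reuters-21578 setup, and the MAD measure is not reproduced.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labeled seed set and unlabeled pool (placeholders, not Reuters-21578).
labeled_docs    = ["oil prices rise", "wheat harvest falls", "crude futures climb", "corn exports grow"]
labeled_classes = [0, 1, 0, 1]                          # 0 = energy, 1 = grain (toy categories)
unlabeled_pool  = ["barley and corn supply", "opec cuts crude output",
                   "rain delays wheat and oil shipments"]

vec = CountVectorizer()
X_lab  = vec.fit_transform(labeled_docs)
X_pool = vec.transform(unlabeled_pool)

nb = MultinomialNB().fit(X_lab, labeled_classes)
proba = nb.predict_proba(X_pool)                        # class probabilities per pool document
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)  # higher entropy = more uncertain

# The most uncertain (most informative) documents are proposed for manual labeling.
for i in np.argsort(entropy)[::-1]:
    print(f"entropy={entropy[i]:.3f}  {unlabeled_pool[i]}")
```

In a full active-learning loop, the top-ranked documents would be labeled by a human, added to the labeled set, and the classifier retrained.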

Automatic Text Categorization based on Semi-Supervised Learning (준지도 학습 기반의 자동 문서 범주화)

  • Ko, Young-Joong;Seo, Jung-Yun
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.5
    • /
    • pp.325-334
    • /
    • 2008
  • The goal of text categorization is to classify documents into a certain number of pre-defined categories. Previous studies in this area have used a large number of labeled training documents for supervised learning. One problem is that it is difficult to create the labeled training documents: while it is easy to collect unlabeled documents, it is not so easy to manually categorize them to create training documents. In this paper, we propose a new text categorization method based on semi-supervised learning. The proposed method uses only unlabeled documents and the keywords of each category, and it automatically constructs training data from them. A text classifier then learns from this data and classifies text documents. The proposed method shows a similar degree of performance compared with traditional supervised learning methods. Therefore, this method can be used in areas where low-cost text categorization is needed. It can also be used for creating labeled training documents.
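The keyword-driven construction of training data can be sketched in a few lines. The example below is only an assumed simplification: hypothetical category keywords pseudo-label unlabeled documents by keyword counting, and an ordinary Naive Bayes classifier is then trained on the automatically built training set; the paper's actual bootstrapping procedure is more elaborate.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical category keywords and unlabeled documents (illustrative only).
category_keywords = {"sports": ["game", "score", "team"],
                     "economy": ["market", "stock", "price"]}
unlabeled = ["the team won the game with a late score",
             "stock market price falls on weak earnings",
             "investors watch the market closely",
             "the coach praised the team after the game"]

def pseudo_label(doc):
    # Count keyword hits per category; keep the document only if some keyword matched.
    hits = {c: sum(doc.split().count(w) for w in kws) for c, kws in category_keywords.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else None

train_docs, train_labels = [], []
for doc in unlabeled:
    label = pseudo_label(doc)
    if label is not None:
        train_docs.append(doc)
        train_labels.append(label)

# Train a supervised classifier on the automatically constructed training set.
vec = TfidfVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train_docs), train_labels)
print(clf.predict(vec.transform(["a late score decides the game"])))
```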

The Study on the Effective Automatic Classification of Internet Document Using the Machine Learning (기계학습을 기반으로 한 인터넷 학술문서의 효과적 자동분류에 관한 연구)

  • 노영희
    • Journal of Korean Library and Information Science Society
    • /
    • v.32 no.3
    • /
    • pp.307-330
    • /
    • 2001
  • This study examined the performance of categorization methods using the kNN classifier. Most sample-based automatic text categorization techniques, such as the kNN classifier, reduce the feature set of the training documents. We sought to find out which percentage of feature-set reduction results in high performance. In addition, the kNN classifier has to find the k training documents most similar to each test document, so we also sought to determine the most appropriate value of k through experiments.
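The two experimental variables in the abstract, the degree of feature-set reduction and the value of k, map onto a small grid search. The sketch below is a toy illustration (the corpus is made up, not the study's Internet document collection), using chi-square feature selection to shrink the vocabulary and scikit-learn's kNN classifier for several values of k.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neighbors import KNeighborsClassifier

# Toy labeled corpus (placeholder for the study's document collection).
docs = ["stock market falls", "bank raises interest rates", "shares and bonds rally",
        "team wins the final", "striker scores a goal", "coach praises the players"]
labels = ["econ", "econ", "econ", "sport", "sport", "sport"]
test_docs, test_labels = ["market shares rally", "the team scores"], ["econ", "sport"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
X_test = vec.transform(test_docs)

for n_features in (3, 6, 12):                       # size of the reduced feature set
    sel = SelectKBest(chi2, k=min(n_features, X.shape[1])).fit(X, labels)
    for k in (1, 3, 5):                             # number of nearest neighbours
        knn = KNeighborsClassifier(n_neighbors=k).fit(sel.transform(X), labels)
        acc = knn.score(sel.transform(X_test), test_labels)
        print(f"features={n_features:2d}  k={k}  accuracy={acc:.2f}")
```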

Optimization of Number of Training Documents in Text Categorization (문헌범주화에서 학습문헌수 최적화에 관한 연구)

  • Shim, Kyung
    • Journal of the Korean Society for Information Management
    • /
    • v.23 no.4 s.62
    • /
    • pp.277-294
    • /
    • 2006
  • This paper examines the level of categorization performance on a real-life collection of abstract articles in the fields of science and technology, and tests the optimal number of documents per category in a training set using a kNN classifier. The corpus is built by first choosing categories that hold more than 2,556 documents and then randomly selecting 2,556 documents per category. It is further divided into eight subsets with different numbers of training documents: each set is randomly sampled to provide from 20 documents (Tr-20) to 2,000 documents (Tr-2000) per category. The categorization performances of the eight subsets are compared. The average performance of the eight subsets is 30% in the $F_1$ measure, which is relatively poor compared to the findings of previous studies. The experimental results suggest that, among the eight subsets, Tr-100 appears to be the optimal size for training a kNN classifier. In addition, the correctness of the subject categories assigned to the training sets is probed by manually reclassifying the training sets, in order to support the above conclusion by establishing a relation between correctness and categorization performance.
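The subset construction in this experiment (Tr-20 through Tr-2000) amounts to sampling a fixed number of documents per category and re-measuring the $F_1$ score. The sketch below reproduces only that bookkeeping, on synthetic feature vectors that stand in for the indexed abstracts (an assumption), with a kNN classifier and macro-averaged $F_1$; the subset sizes are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification    # synthetic stand-in for the abstract collection
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score

# Synthetic feature vectors play the role of indexed documents in three categories.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)
X_train, y_train, X_test, y_test = X[:4500], y[:4500], X[4500:], y[4500:]

rng = np.random.default_rng(0)
for n_per_class in (20, 100, 500, 1000):             # Tr-20, Tr-100, ... style subsets
    idx = np.concatenate([rng.choice(np.where(y_train == c)[0], n_per_class, replace=False)
                          for c in np.unique(y_train)])
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train[idx], y_train[idx])
    macro_f1 = f1_score(y_test, knn.predict(X_test), average="macro")
    print(f"Tr-{n_per_class:<5d} macro-F1 = {macro_f1:.3f}")
```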

An Analysis on the Factors Affecting Online Search Effect (온라인 정보탐색의 효과변인 분석)

  • Kim Sun-Ho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.22
    • /
    • pp.361-396
    • /
    • 1992
  • The purpose of this study is to verify the correlations between the amount of online searchers' search experience and their search effect. To achieve this purpose, 28 online searchers working at selected libraries and information centers participated in the study as subjects. The subjects were classified into two types of cognitive style by the Group Embedded Figures Test (GEFT). As a result of the GEFT, two groups were identified: 15 Field Independence (FI) searchers and 13 Field Dependence (FD) searchers. Each subject's search experience consists of three elements: disciplinary, training, and working experience. To obtain data on these elements, a questionnaire was sent to the 28 subjects. An online search request form prepared by an actual user was sent to all subjects, who searched overseas databases through Dialog to retrieve what was requested. The search outputs were collected and sent back to the user, who evaluated the relevance and pertinence of the results. In this study, the search effect is divided into relevance and pertinence. Relevance is subdivided into three elements: the number of relevant documents, the recall ratio, and the cost per relevant document. Pertinence is subdivided into three elements: the number of pertinent documents, the utility ratio, and the cost per pertinent document. The correlations between the three elements of the subjects' experience and the six elements of the search effect were analysed separately for the FI and FD searchers. At the 0.01 significance level, the findings and conclusions of the study are summarised as follows: 1. For FI searchers, there are strong correlations between the amount of training and the recall ratio, the number of pertinent documents, and the utility ratio. 2. For FD searchers, there are strong correlations between the amount of working experience and both the number of relevant documents and the recall ratio; however, there is also a significant inverse correlation between the amount of working experience and the search cost per pertinent document. 3. For FD searchers, the amount of working experience correlates more strongly with the number of pertinent documents and the utility ratio than the amount of training does. 4. There is a strong correlation between the amount of training and pertinence for both FI and FD searchers.
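The statistical core of the study is a set of pairwise correlations tested at the 0.01 significance level. As a purely illustrative sketch, the snippet below runs a Pearson correlation between hours of training and recall ratio for a handful of hypothetical searchers; the numbers are made up and are not the study's measurements.

```python
from scipy.stats import pearsonr

# Hypothetical data for eight searchers (illustrative only).
training_hours = [5, 12, 20, 8, 30, 15, 25, 10]
recall_ratio   = [0.42, 0.55, 0.70, 0.48, 0.81, 0.60, 0.77, 0.50]

r, p_value = pearsonr(training_hours, recall_ratio)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
print("significant at the 0.01 level" if p_value < 0.01 else "not significant at the 0.01 level")
```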

An Enhanced Feature Selection Method Based on the Impurity of Words Considering Unbalanced Distribution of Documents (문서의 불균등 분포를 고려한 단어 불순도 기반 특징 선택 방법)

  • Kang, Jin-Beom;Yang, Jae-Young;Choi, Joong-Min
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.9
    • /
    • pp.804-816
    • /
    • 2007
  • Sample training data for machine learning often contain irrelevant information or redundant concepts, and the original data may also include noise. If the information collected for constructing the learning model is not reliable, it is difficult to obtain accurate results, because the system attempts to find relations or rules between features and categories in the learning phase. Feature selection removes irrelevant or redundant information before the learning model is constructed, in order to improve its performance. Existing feature selection methods assume that the distribution of documents is balanced in terms of the number of documents per class and the length of each document. In practice, however, it is difficult not only to prepare a set of documents of almost equal length, but also to define classes with a fixed number of documents. In this paper, we propose a new feature selection method that considers the impurity of words and the unbalanced distribution of documents across categories. Feature candidates are obtained using the word impurity, and the final features are then selected by taking the unbalanced document distribution into account. Experiments demonstrate that our method performs better than other existing methods.
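The core idea, scoring words by how impure their distribution over categories is while normalising for category size, can be sketched directly. The snippet below is an assumed Gini-style illustration with toy documents; it is not the paper's exact impurity formula or weighting scheme.

```python
import numpy as np
from collections import Counter

# Toy corpus of (text, category) pairs with an unbalanced category distribution.
docs = [("stocks fall on market news", "econ"),
        ("market prices and stocks rise", "econ"),
        ("market reacts to game results", "econ"),
        ("team wins the final game", "sport"),
        ("game ends in a draw", "sport")]

categories = sorted({c for _, c in docs})
cat_sizes = Counter(c for _, c in docs)               # documents per category

# word -> per-category document frequency
word_cat = {}
for text, cat in docs:
    for w in set(text.split()):
        word_cat.setdefault(w, Counter())[cat] += 1

def gini_impurity(word):
    # Normalise frequencies by category size so large categories do not dominate.
    rel = np.array([word_cat[word][c] / cat_sizes[c] for c in categories])
    p = rel / rel.sum()
    return 1.0 - np.sum(p ** 2)                       # 0 = pure (discriminative), higher = mixed

ranked = sorted(word_cat, key=gini_impurity)          # purest words first
print(ranked[:5])
```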

Case Report on the Survey Results of Educational Satisfaction According to the Operation of Occupational Safety and Health Training Institute (산업안전보건교육 기관의 운영에 따른 교육 만족도 조사결과 사례 보고)

  • Kim, Hyeon-Yeong;Heo, Mi-Jin;Shin, In-Jae
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.29 no.4
    • /
    • pp.603-609
    • /
    • 2019
  • Objectives: This case report aims to contribute to the enhancement of training quality for occupational accident prevention by conducting surveys on customers' satisfaction with the training course management of the Occupational Safety and Health Training Institute (OSHTI) of KOSHA. Methods: Surveys were conducted through phone calls, customer service documents, and questionnaires from January 1, 2018 to December 31, 2018. Results: The surveys showed an answer rate of 97.36% and a handling rate of 97.47% in 2018, an increase of 1.15% compared to 2017. The number of monthly inbound calls in 2018 was 5,902, rising 0.10% year-on-year, and the average number of inbound calls per day in 2018 was 289, a decline from 291 the year before. The number of customer service provisions in 2018 was 68,952, increasing 1.89% year-on-year. The number of inquiries on the training curriculum was 58,744 in 2018, an increase of 3.98% compared to the 56,498 recorded in 2017. Inquiries on job training centers were the most common, numbering 27,114 (39.32%), followed by e-learning at 18,470 (26.79%) and expert courses at 13,160 (19.09%). Of the 149 answers to the customer service survey, 'Nothing to complain about' accounted for 86 (56.72%) and 'Diversifying training time and programs' for 22 (14.77%). Conclusions: Customer satisfaction in 2018 increased compared to 2017. However, in order to maintain high-quality training course management, there is a need to reflect customers' demands for diversifying training time and programs, to offer practice-centered training, and to collect opinions on providing information.

Reinforcement Method for Automated Text Classification using Post-processing and Training with Definition Criteria (학습방법개선과 후처리 분석을 이용한 자동문서분류의 성능향상 방법)

  • Choi, Yun-Jeong;Park, Seung-Soo
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.811-822
    • /
    • 2005
  • Automated text categorization classifies free-text documents into predefined categories automatically, and its main goal is to reduce the considerable manual effort the task otherwise requires. Recent research on improving text categorization performance has focused on enhancing existing classification models and algorithms themselves, but its scope has been limited to feature-based statistical methodology. In this paper, we propose the RTPost system, which differs from traditional methods by taking a fault-tolerant system approach and a data mining strategy. The two important parts of the RTPost system are the reinforcement training part and the post-processing part. First, the training method addresses the problem of defining the categories to be classified before selecting the training sample documents. Second, the post-processing method addresses the problem of assigning categories rather than the performance of the classification algorithms. In experiments, we applied our system to documents with low classification accuracy that lay near a decision boundary. The experiments show that our system has high accuracy and stability under realistic conditions. It did not depend on variables that strongly influence classification power, such as the number of training documents, the selection problem, and the performance of the classification algorithms. In addition, we can expect a self-learning effect that decreases the training cost and increases the training power by exploiting the advantages of active learning.
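Only the post-processing side of the abstract lends itself to a short illustration, and even that has to be approximated because the RTPost system is not described here in executable detail. The sketch below flags documents whose top two predicted class probabilities are close, i.e., documents near a decision boundary, and routes them to a second pass instead of accepting the prediction outright; the classifier, the toy documents, and the 0.2 margin threshold are all assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_docs   = ["rates rise again", "central bank policy", "cup final tonight", "striker scores twice"]
train_labels = ["econ", "econ", "sport", "sport"]
new_docs     = ["bank sponsors the cup final", "policy on interest rates"]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_docs), train_labels)

proba = clf.predict_proba(vec.transform(new_docs))
sorted_proba = np.sort(proba, axis=1)
margin = sorted_proba[:, -1] - sorted_proba[:, -2]    # gap between the two most likely classes

for doc, m, p in zip(new_docs, margin, proba):
    if m < 0.2:                                       # hypothetical threshold: near the boundary
        print(f"POST-PROCESS  margin={m:.2f}  {doc}")
    else:
        print(f"ACCEPT {clf.classes_[np.argmax(p)]}  margin={m:.2f}  {doc}")
```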

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually. However, with manual categorization, not only can the accuracy of the categorization not be guaranteed, but the categorization also requires a large amount of time and incurs high costs. Many studies have been conducted on automatic categorization to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they assume that one document can be assigned to only one category. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we attempt to find the relationship between documents and topics by using the results of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate the matching score of each document for multiple categories. A document is classified into a certain category if and only if its matching score is higher than a predefined threshold; for example, a document can be classified into three categories whose matching scores all exceed the threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and the use of vulgar language and slang is less common than in other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies widely across categories, because readers have different levels of interest in each category and because events occur with different frequencies in each category. In order to minimize the distortion caused by the different numbers of articles per category, we extracted 3,000 articles equally from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
Using the news articles that we collected, we calculated the document/category correspondence scores by combining the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of the 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top-1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. It was very interesting to find a large variation in precision, recall, and F-score among the eight categories.
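The scoring step described above is essentially a matrix product followed by a threshold. The sketch below illustrates it with made-up document/topic weights and a made-up topic/category correspondence table; the 0.3 threshold and all numbers are assumptions, not values from the paper.

```python
import numpy as np

topics     = ["finance", "sports", "politics"]
categories = ["Economy", "Sports", "Politics", "Society"]

# Document/topic weights, e.g. from a topic model (toy numbers).
doc_topic = np.array([[0.7, 0.1, 0.2],               # document 0
                      [0.1, 0.8, 0.1]])              # document 1

# Topic/category correspondence table (toy numbers), rows ordered as in `topics`.
topic_category = np.array([[0.9, 0.0, 0.2, 0.3],     # finance
                           [0.0, 0.9, 0.0, 0.1],     # sports
                           [0.2, 0.0, 0.9, 0.4]])    # politics

doc_category = doc_topic @ topic_category            # document/category correspondence scores
threshold = 0.3                                      # hypothetical cut-off

for d, scores in enumerate(doc_category):
    assigned = [categories[j] for j, s in enumerate(scores) if s >= threshold]
    print(f"document {d}: scores={np.round(scores, 2)}  categories={assigned}")
```

Every category whose score clears the threshold is assigned, which is how a single-categorized document can pick up additional categories.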