• Title/Summary/Keyword: Document Frequency


Comparison of Document Features Extraction Methods for Automatic Classification of Real World FAQ Mails (실세계의 FAQ 메일 자동분류를 위한 문서 특징추출 방법의 성능 비교)

  • 홍진혁;류중원;조성배
    • Proceedings of the Korean Information Science Society Conference / 2001.04b / pp.271-273 / 2001
  • Recently, the importance of automatic document classification has been widely recognized and various studies are under way. This paper implements various feature extraction methods for effective automatic classification of Korean documents and identifies an efficient feature extraction method for real-world query mails. Seven feature extraction methods were used in the experiments, including document frequency, information gain, mutual information, and χ²; applied to 463 real test query mails, the χ² method performed best, with a recognition rate of 74.7%. In contrast, information gain, one of the most frequently used methods alongside χ², achieved a recognition rate of at most 40.6%.

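The χ² term scoring used for feature selection in the abstract above can be sketched from the standard 2x2 contingency-table formula. The corpus and class labels below are hypothetical toy data, not the paper's query-mail dataset:

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic for a term/class 2x2 contingency table.
    n11: in-class docs containing the term, n10: in-class docs without it,
    n01: out-of-class docs containing it, n00: out-of-class docs without it."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den if den else 0.0

# Hypothetical toy corpus: (tokens, class label)
docs = [
    (["refund", "delivery"], "billing"),
    (["refund", "invoice"], "billing"),
    (["login", "password"], "account"),
    (["password", "reset"], "account"),
]

def score_term(term, cls):
    """Build the contingency counts for one term/class pair and score it."""
    n11 = sum(1 for t, c in docs if c == cls and term in t)
    n10 = sum(1 for t, c in docs if c == cls and term not in t)
    n01 = sum(1 for t, c in docs if c != cls and term in t)
    n00 = sum(1 for t, c in docs if c != cls and term not in t)
    return chi_square(n11, n10, n01, n00)

print(score_term("refund", "billing"))    # 4.0 (perfectly associated term)
print(score_term("delivery", "billing"))  # ≈ 1.33 (weaker association)
```

Terms are then ranked by their best per-class score, and the top-k terms form the feature set.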

Neural Based Approach to Keyword Extraction from Documents (문서의 키워드 추출에 대한 신경망 접근)

  • 조태호;서정현
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.317-319 / 2000
  • A document is unstructured data composed of natural language. To process it, the document must be represented and stored as structured data, called a document surrogate. A document surrogate is typically the list of words extracted by indexing. Not every word in a document reflects its content, so only the important words that do must be selected. Such words are called keywords, and conventionally they have been selected by a formula based on term frequency and inverse document frequency. In practice, other factors should also be considered, such as whether a word appears in the title and its position in the document, but adding these factors makes the formula complex. This paper instead extracts such factors as word features, forms feature vectors from them, and proposes a neural network approach, backpropagation, that learns to select keywords. Keyword selection using backpropagation shows improved performance over the formula-based approach.

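The conventional tf x idf keyword formula that the abstract above takes as its baseline can be sketched as follows. The three-document corpus is a hypothetical toy example:

```python
import math

def tf_idf(term, doc, corpus):
    """Classic tf*idf weight: term frequency in the document times the
    log inverse document frequency across the corpus."""
    tf = doc.count(term)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

corpus = [
    ["neural", "network", "keyword"],
    ["keyword", "extraction", "document"],
    ["document", "frequency", "weight"],
]

# "neural" appears in 1 of 3 docs -> high idf; "keyword" in 2 of 3 -> lower
print(tf_idf("neural", corpus[0], corpus))   # ≈ 1.10
print(tf_idf("keyword", corpus[0], corpus))  # ≈ 0.41
```

The paper's point is that this single formula cannot easily absorb extra evidence (title membership, word position), which is why those signals are fed to a neural network as features instead.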

A Study of Research on Methods of Automated Biomedical Document Classification using Topic Modeling and Deep Learning (토픽모델링과 딥 러닝을 활용한 생의학 문헌 자동 분류 기법 연구)

  • Yuk, JeeHee;Song, Min
    • Journal of the Korean Society for Information Management / v.35 no.2 / pp.63-88 / 2018
  • This research evaluated differences in classification performance across feature selection methods (the LDA topic model and Doc2Vec, a deep-learning-based word embedding method), feature corpus sizes, and classification algorithms. To find the feature corpus with the best classification performance, experiments were conducted with feature corpora composed differently according to location within the document and with varying corpus sizes. In the deep-learning experiments, training frequency and the information considered for context inference were also evaluated. The study constructed a biomedical document dataset, Disease-35083, consisting of scholarly biomedical documents provided by PMC and categorized by disease. The study verifies which type and size of feature corpus produces the highest performance, suggests feature corpora that extend well to specific features while training efficiently, compares deep learning with existing methods, and suggests an appropriate method for each classification environment.

Gathering Common-word and Document Reclassification to improve Accuracy of Document Clustering (문서 군집화의 정확률 향상을 위한 범용어 수집과 문서 재분류 알고리즘)

  • Shin, Joon-Choul;Ock, Cheol-Young;Lee, Eung-Bong
    • The KIPS Transactions: Part B / v.19B no.1 / pp.53-62 / 2012
  • Clustering is used to handle large numbers of retrieved documents efficiently in information retrieval systems, but its accuracy satisfies the requirements of only some domains. This paper proposes two methods to increase clustering accuracy. First, we define a common-word: a word that is used frequently but should carry low weight during clustering. We propose a method that automatically gathers common-words from the retrieved documents and calculates their weights. In experiments, clustering with common-words reduced the error rate by 34% compared with clustering using a stop-word list. Second, after generating initial clusters from the retrieved documents with average-link clustering, we propose an algorithm that re-evaluates the similarity between each document and the clusters and reclassifies documents into more similar clusters. In experiments using the Naver JiSikIn category, the accuracy of the reclassified clusters increased by 1.81% over the initial clusters without reclassification.
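The common-word gathering step described above can be sketched as a document-frequency filter over the retrieved set. The threshold value and the toy query-result documents are assumptions for illustration, not the paper's actual criterion:

```python
def gather_common_words(docs, df_threshold=0.5):
    """Flag terms whose document-frequency ratio across the retrieved
    documents exceeds a threshold as common-words (low clustering weight).
    The 0.5 threshold is an illustrative assumption."""
    n = len(docs)
    df = {}
    for d in docs:
        for term in set(d):           # count each term once per document
            df[term] = df.get(term, 0) + 1
    return {t for t, c in df.items() if c / n > df_threshold}

# Hypothetical retrieved documents (token lists)
docs = [
    ["please", "help", "printer", "error"],
    ["please", "help", "password", "reset"],
    ["please", "account", "locked"],
]
print(sorted(gather_common_words(docs)))  # ['help', 'please']
```

Words flagged this way are then down-weighted, rather than discarded like stop-words, when document similarity is computed during clustering.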

A Study on Classification of Medical Information Documents using Word Correlation (색인어 연관성을 이용한 의료정보문서 분류에 관한 연구)

  • Lim, Hyeong-Geon;Jang, Duk-Sung
    • The KIPS Transactions: Part B / v.8B no.5 / pp.469-476 / 2001
  • As web-based information services have grown, hospitals receive many questions and consultation requests through their home pages and e-mail, but handling them is burdensome and answers are often delayed. In this paper, we investigate document classification methods as preliminary research for an automatic answering system. Of 1,200 documents containing patients' questions, 66% were used for training and 34% for testing. The documents were classified using a Naive Bayes Classifier (NBC) and two extensions based on common words and correlation coefficients. In the experiments, the two proposed methods, common words and correlation coefficients, outperformed the basic NBC by 3% and 5%, respectively. This result shows that the correlation between index terms and categories is more effective than word frequency for document classification.

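The baseline NBC mentioned above is the standard multinomial Naive Bayes text classifier; a minimal sketch with Laplace smoothing is below. The tiny training set is hypothetical, not the paper's patient-question data:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Collect class priors and per-class word counts from (tokens, class) pairs."""
    class_docs = defaultdict(int)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, cls in docs:
        class_docs[cls] += 1
        word_counts[cls].update(tokens)
        vocab.update(tokens)
    return class_docs, word_counts, vocab, len(docs)

def classify(tokens, model):
    """Pick the class maximizing log P(class) + sum log P(word|class),
    with add-one (Laplace) smoothing."""
    class_docs, word_counts, vocab, n = model
    best, best_lp = None, float("-inf")
    for cls, dc in class_docs.items():
        lp = math.log(dc / n)
        total = sum(word_counts[cls].values())
        for w in tokens:
            lp += math.log((word_counts[cls][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = cls, lp
    return best

# Hypothetical question documents labeled with a department
train = [
    (["fever", "cough"], "internal"),
    (["fever", "headache"], "internal"),
    (["fracture", "xray"], "orthopedic"),
    (["xray", "joint"], "orthopedic"),
]
model = train_nb(train)
print(classify(["fever", "cough"], model))  # internal
```

The paper's contribution is to augment this word-frequency evidence with index-term/category correlation, which the experiments found more effective.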

A Proofreader Matching Method Based on Topic Modeling Using the Importance of Documents (문서 중요도를 고려한 토픽 기반의 논문 교정자 매칭 방법론)

  • Son, Yeonbin;An, Hyeontae;Choi, Yerim
    • Journal of Internet Computing and Services / v.19 no.4 / pp.27-33 / 2018
  • When submitting a manuscript to a journal to present research results, researchers often have the manuscript proofread so that it communicates the results more effectively. Currently, most manuscript proofreading companies assign proofreaders manually, according to the subjective judgment of a matching manager. In this paper, we therefore propose a topic-based proofreader matching method for effective proofreading. The proposed method has two steps. First, topic modeling is performed using Latent Dirichlet Allocation; in this process, the frequency with which each document contributes to a user's representative document is determined by the document's importance. Second, user similarity is calculated using cosine similarity. Experiments on a real-world dataset confirmed that the proposed method outperforms the comparison method, and the validity of the matching results was verified by qualitative evaluation.
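The second step above, matching by cosine similarity over topic distributions, can be sketched as follows. The topic proportions are hypothetical LDA outputs, not values from the paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic-distribution vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical LDA topic proportions for a manuscript and two proofreaders'
# representative documents
manuscript = [0.7, 0.2, 0.1]
proofreaders = {"A": [0.6, 0.3, 0.1], "B": [0.1, 0.2, 0.7]}

best = max(proofreaders, key=lambda p: cosine(manuscript, proofreaders[p]))
print(best)  # A
```

Weighting each of a user's documents by importance before aggregating them into the representative topic vector is the paper's refinement of this basic scheme.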

Adjusting Weights of Single-word and Multi-word Terms for Keyphrase Extraction from Article Text

  • Kang, In-Su
    • Journal of the Korea Society of Computer and Information / v.26 no.8 / pp.47-54 / 2021
  • Given a document, keyphrase extraction automatically extracts the words or phrases that topically represent its content. In unsupervised approaches, candidate words or phrases are first extracted from the input document, scores are computed for the candidates, and the final keyphrases are selected based on those scores. For this score computation, this study proposes adjusting the scores of keyphrase candidates according to their type: word-type or phrase-type. To this end, the type-token ratios of word-type and phrase-type candidates, as well as the information content of high-frequency word-type and phrase-type candidates, are collected from the input document and used to adjust candidate scores. In experiments on four keyphrase extraction evaluation datasets constructed from full-text English articles, the proposed method outperformed a baseline and comparison methods on three of the datasets.

Analysis of ICT Education Trends using Keyword Occurrence Frequency Analysis and CONCOR Technique (키워드 출현 빈도 분석과 CONCOR 기법을 이용한 ICT 교육 동향 분석)

  • Youngseok Lee
    • Journal of Industrial Convergence / v.21 no.1 / pp.187-192 / 2023
  • In this study, trends in ICT education were investigated by analyzing keyword appearance frequency and applying the convergence of iterated correlations (CONCOR) technique. A total of 304 papers published from 2018 to the present in registered journals were retrieved from Google Scholar using "ICT education" as the search keyword, and 60 papers on ICT education were selected through a systematic literature review. Keywords were then extracted from the titles and abstracts of these papers. For word frequency and indicator data, 49 frequently appearing keywords were extracted by analyzing term frequency via the term frequency-inverse document frequency (TF-IDF) technique, together with words of high co-occurrence frequency. The degree of relationship was verified by analyzing the connection structure and degree centrality between words, and clusters of similar words were derived via CONCOR analysis. First, "education," "research," "result," "utilization," and "analysis" were identified as the main keywords. Second, an N-gram network graph with "education" as the keyword showed that "curriculum" and "utilization" had the highest correlation. Third, cluster analysis with "education" as the keyword yielded five groups: "curriculum," "programming," "student," "improvement," and "information." These results indicate that analyzing ICT education trends and identifying their direction can inform the practical research needed for ICT education.

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Jeong-Hoon, Kim;Jun-Yeong, Kim;Jun, Park;Sung-Wook, Park;Se-Hoon, Jung;Chun-Bo, Sim
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • Open-domain machine reading comprehension adds a retrieval step to reading comprehension, since no paragraph relevant to the question is given in advance. Document retrieval suffers lower performance as the number of documents grows, despite abundant research on word-frequency-based TF-IDF. Paragraph selection often fails to capture paragraph context, including sentence-level characteristics, despite much research on word-based embeddings. Document reading comprehension trains slowly because of the growing number of parameters, despite much research on BERT. To address these three issues, this study uses BM25, which also takes document length into account, for retrieval; InferSent, to capture sentence context, for paragraph selection; and ALBERT, which reduces the number of parameters, for reading comprehension. Experiments were conducted on the SQuAD 1.1 dataset. BM25 achieved 3.2% higher document retrieval performance than TF-IDF; InferSent achieved 0.9% higher paragraph selection performance than Transformer; and as the number of paragraphs increased, ALBERT scored 0.4% higher on EM and 0.2% higher on F1.
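The BM25 retrieval step above, which extends TF-IDF with term-frequency saturation and document-length normalization, can be sketched as follows. The corpus and query are hypothetical toy data, and k1/b are the conventional defaults rather than the paper's tuned values:

```python
import math

def bm25_score(query, doc, corpus, k1=1.2, b=0.75):
    """Okapi BM25: idf-weighted term matches, with tf saturated by k1 and
    normalized by document length relative to the corpus average (b)."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    score = 0.0
    for term in query:
        df = sum(1 for d in corpus if term in d)
        if df == 0:
            continue
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
        tf = doc.count(term)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

# Hypothetical document collection (token lists)
corpus = [
    ["machine", "reading", "comprehension", "model"],
    ["document", "retrieval", "with", "tf", "idf"],
    ["bm25", "ranks", "documents", "for", "retrieval"],
]
query = ["retrieval", "documents"]

ranked = sorted(corpus, key=lambda d: bm25_score(query, d, corpus), reverse=True)
print(ranked[0])  # the document matching both query terms ranks first
```

Unlike raw TF-IDF, repeated occurrences of a term yield diminishing returns here, and long documents are penalized relative to the average length.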

A Study on Change in Perception of Community Service and Demand Prediction based on Big Data

  • Chun-Ok, Jang
    • International Journal of Advanced Culture Technology / v.10 no.4 / pp.230-237 / 2022
  • The Community Social Service Investment project started as a state subsidy program in 2007 and has grown very rapidly in quantitative terms in a short period. It is a bottom-up project that identifies people's welfare needs and plans and provides suitable services. The purpose of this study is to analyze big data to determine the social response to community service investment projects. Data were collected and analyzed by crawling the Google and Naver sites with keywords specific to the community service investment project. The analysis covered monthly search volume, related keywords, search rate by age, and search rate by gender. As a result, 10 related keywords were found on Google and 3 on Naver. The overall results from Google and Naver differed slightly but rose and fell at almost the same times. Therefore, the community service investment project can be seen to continue attracting users' interest.