• Title/Summary/Keyword: Term Frequency-Inverse Document Frequency

Comparison of Term-Weighting Schemes for Environmental Big Data Analysis (환경 빅데이터 이슈 분석을 위한 용어 가중치 기법 비교)

  • Kim, JungJin; Jeong, Hanseok
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.236-236 / 2021
  • Recently, as the rate at which unstructured data such as text is generated has increased rapidly, the need for techniques to analyze it has grown. Text mining is one technique that uses natural language processing to structure unstructured text and extract valuable information from documents. Text mining methods generally represent the importance of terms with a document-term frequency matrix, which records how often each term is used in each document, and this is applied in many research fields. However, because the raw frequencies in a document-term matrix do not readily capture how documents differ or how important individual terms are, applying term weights to characterize documents is essential. Various term-weighting methods have been developed and applied, but in the environmental field there has been little research evaluating their effectiveness. Moreover, for environmental issue analysis it is relatively important not merely to characterize and classify a given set of documents, but also to reflect each document's characteristics over time. Therefore, this study applied text mining to environmental news data for the Seoul area from 2015-2020 and compared term-weighting schemes suitable for environmental issue analysis. The schemes compared were TF-IDF (term frequency-inverse document frequency), BM25, TF-IGM (TF-inverse gravity moment), and TF-IDF-ICSDF (TF-IDF-inverse class space density frequency). Through this work we aim to suggest an optimal term-weighting scheme for classifying environmental documents and entities and to provide keyword-extraction information related to environmental issues in the Seoul area. (A minimal sketch of the TF-IDF and BM25 weighting formulas follows this entry.)

  • PDF
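
As background for the comparison above, here is a minimal sketch of the two most common schemes, TF-IDF and Okapi BM25, computed over a toy corpus; the toy documents and the BM25 constants (k1 = 1.5, b = 0.75) are illustrative defaults, not values from the paper.

```python
import math
from collections import Counter

# Toy corpus standing in for the environmental news documents (illustrative only).
docs = [
    "fine dust alert issued for seoul",
    "seoul expands electric bus fleet to cut fine dust",
    "heavy rain warning for the han river basin",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
df = Counter(t for doc in tokenized for t in set(doc))   # document frequency per term
avgdl = sum(len(doc) for doc in tokenized) / N           # average document length

def tf_idf(term, doc):
    """Classic TF-IDF: raw term frequency times smoothed inverse document frequency."""
    tf = doc.count(term)
    idf = math.log((1 + N) / (1 + df[term])) + 1         # smoothed idf, as in scikit-learn
    return tf * idf

def bm25(term, doc, k1=1.5, b=0.75):
    """Okapi BM25 weight of a term in one document (conventional defaults for k1, b)."""
    tf = doc.count(term)
    idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))

for i, doc in enumerate(tokenized):
    print(i, "fine:", round(tf_idf("fine", doc), 3), round(bm25("fine", doc), 3))
```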

Empirical Analysis & Comparisons of Web Document Classification Methods (문서분류 기법을 이용한 웹 문서 분류의 실험적 비교)

  • Lee, Sang-Soon; Choi, Jung-Min; Jang, Geun; Lee, Byung-Soo
    • Proceedings of the Korean Information Science Society Conference / 2002.10d / pp.154-156 / 2002
  • With the growth of the Internet, we can obtain a great deal of information and knowledge online, and it exists as web documents such as HTML pages, newsgroup articles, and e-mail. These web documents need to be classified for various purposes, and systems that do so include Personal WebWatcher, InfoFinder, Webby, and NewT. Web document classification systems use document classification techniques to decide which class a web document belongs to; representative algorithms include Naive Bayesian, k-NN (k-Nearest Neighbor), and TFIDF (Term Frequency Inverse Document Frequency) methods. In this paper, we compare and evaluate the performance of each of these document classification algorithms on web documents. (A minimal classifier-comparison sketch follows this entry.)

  • PDF
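
A minimal scikit-learn sketch of the ingredients compared above (TF-IDF features with Naive Bayes and k-NN classifiers); the tiny labelled documents stand in for the web-document corpus used in the paper, whose parameter settings are not reproduced here.

```python
# Minimal sketch: TF-IDF features with Naive Bayes and k-NN classifiers (scikit-learn).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

train_docs = [
    "stock market rises on tech earnings", "central bank holds interest rates",
    "team wins championship after overtime", "star striker signs new contract",
]
train_labels = ["economy", "economy", "sports", "sports"]
test_docs = ["bank cuts rates amid market slowdown", "striker scores in overtime win"]

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=1))]:
    model = make_pipeline(TfidfVectorizer(), clf)   # vectorize, then classify
    model.fit(train_docs, train_labels)
    print(name, model.predict(test_docs))
```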

Dynamic Analytic Data Preprocessing Techniques for Malware Detection (악성코드 탐지를 위한 동적 분석 데이터 전처리 기법)

  • Hae-Soo Kim; Mi-Hui Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.230-231 / 2023
  • Among malware detection approaches, time-series data such as dynamic-analysis data differs from program to program in the number of API calls invoked. However, when analyzing with a deep learning model, all inputs to the model must have the same size. This paper therefore proposes a preprocessing technique that uses TF-IDF (Term Frequency-Inverse Document Frequency) and a sliding window to make the data a uniform length while preserving each program's dynamic characteristics; combined with an LSTM (Long Short-Term Memory) model, it achieved 95.89% accuracy, 97.08% recall, 95.9% precision, and an F1-score of 96.48%.
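
One plausible reading of the preprocessing described above, sketched minimally: score sliding windows over each API-call trace by summed TF-IDF and keep the top-scoring windows, padding short traces so every program yields an input of the same length. The window size, number of kept windows, and scoring rule here are assumptions, not the paper's exact scheme.

```python
# Hedged sketch of one way TF-IDF plus a sliding window can yield fixed-length inputs
# from variable-length API-call traces; the paper's exact scheme is not reproduced here.
import math
from collections import Counter

traces = [  # stand-in API-call traces of different lengths
    ["CreateFile", "WriteFile", "CloseHandle", "RegOpenKey", "RegSetValue", "CloseHandle"],
    ["VirtualAlloc", "WriteProcessMemory", "CreateRemoteThread"],
]
N = len(traces)
df = Counter(api for t in traces for api in set(t))

def tfidf(api, trace):
    # relative term frequency times idf over the trace collection
    return trace.count(api) / len(trace) * math.log(N / df[api]) if df[api] else 0.0

def fixed_length(trace, window=3, keep=2, pad="PAD"):
    """Score each sliding window by summed TF-IDF, keep the top `keep` windows,
    and pad when the trace is short, so every program yields window*keep tokens."""
    windows = [trace[i:i + window] for i in range(max(1, len(trace) - window + 1))]
    windows.sort(key=lambda w: sum(tfidf(a, trace) for a in w), reverse=True)
    flat = [api for w in windows[:keep] for api in w]
    return (flat + [pad] * (window * keep))[: window * keep]

for t in traces:
    print(fixed_length(t))
```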

Text Summarization on Large-scale Vietnamese Datasets

  • Ti-Hon, Nguyen; Thanh-Nghi, Do
    • Journal of information and communication convergence engineering / v.20 no.4 / pp.309-316 / 2022
  • This investigation is aimed at automatic text summarization on large-scale Vietnamese datasets. Vietnamese articles were collected from newspaper websites and plain text was extracted to build the dataset, which included 1,101,101 documents. Next, a new single-document extractive text summarization model was proposed to evaluate this dataset. In this summary model, the k-means algorithm is used to cluster the sentences of the input document using different text representations, such as BoW (bag-of-words), TF-IDF (term frequency-inverse document frequency), Word2Vec (word-to-vector), GloVe, and FastText. The summary algorithm then uses the trained k-means model to rank the candidate sentences and create a summary from the highest-ranked sentences. The empirical F1-score results achieved 51.91% ROUGE-1, 18.77% ROUGE-2, and 29.72% ROUGE-L, compared to 52.33% ROUGE-1, 16.17% ROUGE-2, and 33.09% ROUGE-L for a competitive abstractive model. The advantage of the proposed model is that it performs well with O(n, k, p) = O(n(k + 2/p)) + O(n log₂ n) + O(np) + O(nk²) + O(k) time complexity.
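
A minimal sketch of the cluster-then-rank idea described above, using TF-IDF sentence vectors and scikit-learn's KMeans; selecting the sentence closest to each centroid is one plausible reading of how the trained k-means model ranks candidates, not necessarily the paper's exact criterion.

```python
# Extractive-summarization sketch: cluster sentences with k-means over TF-IDF vectors,
# then pick the sentence closest to each centroid (an assumed ranking rule).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

sentences = [
    "The city council approved the new transit budget on Monday.",
    "Officials said the budget prioritizes bus rapid transit lines.",
    "Local teams prepare for the regional football tournament.",
    "The tournament will be hosted at the renovated stadium.",
]
k = 2  # number of summary sentences to extract
X = TfidfVectorizer().fit_transform(sentences)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

summary_idx = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[members].toarray() - km.cluster_centers_[c], axis=1)
    summary_idx.append(members[int(np.argmin(dists))])

for i in sorted(summary_idx):
    print(sentences[i])
```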

A Study on Automatic Indexing of Korean Texts based on Statistical Criteria (통계적기법에 의한 한글자동색인의 연구)

  • Woo, Dong-Chin
    • Journal of the Korean Society for Information Management / v.4 no.1 / pp.47-86 / 1987
  • The purpose of this study is to present an effective automatic indexing method for Korean texts based on statistical criteria. Titles and abstracts of 299 documents randomly selected from ETRI's DOCUMENT database are used as the experimental data. The experimental data is divided into 4 word groups, and these 4 word groups are respectively analyzed and evaluated by applying 3 automatic indexing methods: Transition Phenomena of Word Occurrence, the Inverse Document Frequency Weighting Technique, and the Term Discrimination Weighting Technique. (A small sketch of the classical term discrimination value follows this entry.)

  • PDF
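
For the term-discrimination weighting named above, a small sketch of the classical term discrimination value: a term's value is the change in average pairwise document similarity when that term is removed from the vectors. This is the textbook formulation, not necessarily the exact variant used in the 1987 experiments.

```python
# Term discrimination value sketch: DV(t) = density_without_t - density, where "density"
# is the average pairwise cosine similarity of the document vectors. Good discriminators
# make documents look more similar once removed (DV > 0).
from itertools import combinations
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [  # stand-in titles/abstracts (illustrative only)
    "hangeul automatic indexing with statistical criteria",
    "statistical indexing of korean technical abstracts",
    "deep learning models for image recognition",
]
vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()
terms = vec.get_feature_names_out()

def density(matrix):
    sims = cosine_similarity(matrix)
    pairs = list(combinations(range(len(matrix)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

base = density(X)
for idx, term in enumerate(terms):
    dv = density(np.delete(X, idx, axis=1)) - base   # remove one term column
    print(f"{term:12s} {dv:+.4f}")
```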

Chatbot Design Method Using Hybrid Word Vector Expression Model Based on Real Telemarketing Data

  • Zhang, Jie; Zhang, Jianing; Ma, Shuhao; Yang, Jie; Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1400-1418 / 2020
  • In the development of commercial promotion, the chatbot is known as one of the significant applications of natural language processing (NLP). Conventional design methods use the bag-of-words model (BOW) alone, based on the Google database and other online corpora. For one thing, in the bag-of-words model the vectors are unrelated to one another: although this method is friendly to discrete features, it is not conducive to the machine understanding continuous statements, because the connections between words are lost in the encoded word vector. For another, existing methods are tested on state-of-the-art online corpora but are hard to apply in real applications such as telemarketing data. In this paper, we propose an improved chatbot design that uses a hybrid bag-of-words and skip-gram model based on real telemarketing data. Specifically, we first collect real data in the telemarketing field and perform data cleaning and data classification on the constructed corpus. Second, the word representation adopts a hybrid of the bag-of-words and skip-gram models: the skip-gram model maps synonyms into the vicinity of one another in vector space, so the correlation between words is expressed and the amount of information contained in the word vector increases, making up for the shortcomings of using the bag-of-words model alone. Third, we use term frequency-inverse document frequency (TF-IDF) weighting to increase the weight of key words and then output the final word representation. Finally, the answer is produced using a hybrid of a retrieval model and a generative model: the retrieval model accurately answers questions within the field, while the generative model supplements open-domain questions, with the final reply completed by long short-term memory (LSTM) training and prediction. Experimental results show that the hybrid word vector expression model improves the accuracy of the responses and that the whole system can communicate with humans.
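
A minimal sketch of one common way to fuse the two representations described above: a TF-IDF-weighted average of skip-gram word vectors (gensim Word2Vec with sg=1). The toy utterances, vector size, and fusion rule are illustrative assumptions; the paper's telemarketing corpus and exact hybrid construction are not reproduced.

```python
# Hedged sketch: TF-IDF-weighted average of skip-gram (Word2Vec, sg=1) word vectors,
# one common way to combine bag-of-words weights with distributed embeddings.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [  # stand-in utterances (illustrative only)
    "hello i am calling about your mobile data plan",
    "can you tell me the monthly price of the plan",
    "i would like to cancel my current subscription",
]
tokenized = [s.split() for s in corpus]

w2v = Word2Vec(tokenized, vector_size=50, window=3, min_count=1, sg=1, epochs=50, seed=1)
tfidf = TfidfVectorizer(token_pattern=r"\S+")   # keep 1-character tokens like "i"
X = tfidf.fit_transform(corpus)
vocab = tfidf.vocabulary_                        # token -> column index

def sentence_vector(i, tokens):
    """TF-IDF-weighted average of the skip-gram vectors of one sentence's tokens."""
    weights = np.array([X[i, vocab[t]] for t in tokens])
    vectors = np.array([w2v.wv[t] for t in tokens])
    return (weights[:, None] * vectors).sum(axis=0) / weights.sum()

print(sentence_vector(0, tokenized[0])[:5])
```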

A Study of the Influence of Choice of Record Fields on Retrieval Performance in the Bibliographic Database (서지 데이터베이스에서의 레코드 필드 선택이 검색 성능에 미치는 영향에 관한 연구)

  • Heesop Kim
    • Journal of the Korean Society for Library and Information Science / v.35 no.4 / pp.97-122 / 2001
  • This empirical study investigated the effect of the choice of record field(s) upon which to search on retrieval performance for a large operational bibliographic database. The query terms used in the study were identified algorithmically from each target set in four different ways: (1) controlled terms derived from index term frequency weights, (2) uncontrolled terms derived from index term frequency weights, (3) controlled terms derived from inverse document frequency weights, and (4) uncontrolled terms derived from inverse document frequency weights. Six possible choices of record field were recognised. Using INSPEC terminology, these were the fields: (1) Abstract, (2) 'Anywhere' (i.e., all fields), (3) Descriptors, (4) Identifiers, (5) 'Subject' (i.e., 'Descriptors' plus 'Identifiers'), and (6) Title. The study was undertaken in an operational web-based IR environment using the INSPEC bibliographic database. Retrieval performance was evaluated using the D measure (bivariate in Recall and Precision). The main findings were that: (1) there exist significant differences in search performance arising from the choice of field, using 'mean performance measure' as the criterion statistic; (2) the rankings of field choices for each of these performance measures are sensitive to the choice of query; and (3) the optimal choice of field for the D-measure is Title. (A minimal recall/precision computation per field choice is sketched after this entry.)

  • PDF
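
The D measure itself is not defined in the abstract, so it is not reproduced here; the sketch below only shows the per-field recall/precision computation that any such bivariate measure builds on, over toy retrieved/relevant sets.

```python
# Minimal recall/precision computation per field choice (toy data, not INSPEC results).
relevant = {"d1", "d2", "d3", "d4"}                      # ground-truth relevant documents
retrieved_by_field = {                                   # stand-in retrieval results
    "Title":    {"d1", "d2", "d9"},
    "Abstract": {"d1", "d2", "d3", "d7", "d8"},
    "Anywhere": {"d1", "d2", "d3", "d4", "d5", "d6", "d7"},
}

for field, retrieved in retrieved_by_field.items():
    hits = len(relevant & retrieved)
    precision = hits / len(retrieved)                    # relevant share of what was retrieved
    recall = hits / len(relevant)                        # retrieved share of what is relevant
    print(f"{field:9s} precision={precision:.2f} recall={recall:.2f}")
```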

Reviews Analysis of Korean Clinics Using LDA Topic Modeling (토픽 모델링을 활용한 한의원 리뷰 분석과 마케팅 제언)

  • Kim, Cho-Myong; Jo, A-Ram; Kim, Yang-Kyun
    • The Journal of Korean Medicine / v.43 no.1 / pp.73-86 / 2022
  • Objectives: In the health care industry, the influence of online reviews is growing. As medical services are provided mainly by providers, those services have been managed by hospitals and clinics; however, direct promotion of medical services by providers is legally forbidden. For this reason, consumers such as patients and clients search many reviews on the Internet to get information about hospitals, treatments, prices, etc. Online reviews can therefore be taken to indicate the quality of hospitals, and their analysis is needed for sustainable hospital marketing. Method: Using a Python-based crawler, we collected more than 14,000 reviews written by real patients who had experienced Korean medicine. To extract the most representative words, reviews were divided into positive and negative groups; the reviews were then pre-processed to keep only nouns and adjectives, and TF (Term Frequency), DF (Document Frequency), and TF-IDF (Term Frequency-Inverse Document Frequency) were computed. Finally, to obtain topics from the reviews, the aggregated extracted words were analyzed using LDA (Latent Dirichlet Allocation); the number of topics was set using the LDAvis visualization to avoid overlap. Results and Conclusions: 6 and 3 topics were extracted from the positive and negative reviews, respectively, by the LDA topic model. The main factors constituting the topics were: 1) response to patients and customers, 2) customized treatment (consultation) and management, and 3) the hospital/clinic environment.
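
A minimal LDA sketch of the pipeline described above (bag-of-words counts into scikit-learn's LatentDirichletAllocation); the crawled Korean review corpus, the noun/adjective preprocessing, and the topic counts chosen in the paper (6 and 3) are not reproduced here.

```python
# Minimal LDA topic-modeling sketch over a few stand-in review snippets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "kind staff and detailed consultation before acupuncture",
    "friendly doctor explained the herbal medicine clearly",
    "long waiting time and the clinic room felt crowded",
    "parking was difficult and the waiting area was noisy",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]   # highest-weight words per topic
    print(f"topic {k}: {', '.join(top)}")
```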

An Investigation of Automatic Term Weighting Techniques

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Information Management / v.1 no.1 / pp.43-62 / 1984
  • The present study has two main objectives. The first is to devise a new term-weighting technique which can be used to weight the significance value of each word stem in a test collection of documents on the subject of "enteral hyperalimentation." The second is to evaluate the retrieval performance of the proposed term-weighting technique, together with four other term-weighting techniques, by conducting a set of experiments. The experimental results show that Sparck Jones's inverse document frequency weighting and the proposed term significance weighting technique produced better recall and precision ratios than the other three complex weighting techniques. (The conventional form of the inverse document frequency weight is given after this entry.)

  • PDF
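
For reference, the Sparck Jones inverse document frequency weight evaluated above is conventionally written as follows (a common modern form; the 1984 experiments may have used a slightly different variant):

```latex
% N:   number of documents in the collection
% n_t: number of documents containing term t
\mathrm{idf}(t) = \log\frac{N}{n_t},
\qquad
w_{t,d} = \mathrm{tf}(t,d)\cdot\mathrm{idf}(t)
```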

A Study on Fashion Startup Ecosystem Trends in Korea Using Big Data Analysis - Focusing on Newspaper Articles in 2012-2022 - (빅데이터 분석을 활용한 우리나라 패션 스타트업 생태계의 추세 연구 - 2012~2022년 신문기사를 중심으로 -)

  • Soojung Lim; Sunjin Hwang
    • Journal of Fashion Business / v.27 no.1 / pp.1-15 / 2023
  • This study divided newspaper articles from 2012 to 2022 into two time periods, with the aim of using big data analysis to examine patterns in the fashion startup ecosystem. The research method extracted top keywords based on TF (Term Frequency) and TF-IDF (Term Frequency-Inverse Document Frequency), analyzed the keyword network, and derived centrality values. Comparing the first- and second-period fashion startup ecosystems, elements of policy, support, market, finance, and human capital were derived in the first period, while elements of policy, support, market, finance, and culture were derived in the second period. In the first period, the fashion startup ecosystem focused on fostering new designer startups, emphasizing support, finance, and human capital factors and focusing on policies. In the second period, online-based fashion platform startups and fashion tech startups appeared with the support of digital transformation and fulfillment services triggered by COVID-19 (Coronavirus Disease 2019), private finance was emphasized, and cultural factors were derived along with success stories of fashion startups. This study is meaningful in that it helps in developing strategies for fashion startups to grow into sustainable companies.
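
A minimal sketch of the keyword-network and centrality step described above, using networkx over a few stand-in tokenized headlines; the 2012-2022 article corpus and the paper's specific centrality measures are not reproduced here.

```python
# Keyword co-occurrence network with degree centrality (networkx), on toy headlines.
from itertools import combinations
import networkx as nx

headlines = [  # stand-in tokenized headlines (illustrative only)
    ["fashion", "startup", "platform", "investment"],
    ["designer", "startup", "support", "policy"],
    ["fashion", "platform", "online", "investment"],
]

G = nx.Graph()
for tokens in headlines:
    for a, b in combinations(sorted(set(tokens)), 2):   # co-occurrence within one headline
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

centrality = nx.degree_centrality(G)
for word, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{word:12s} {score:.2f}")
```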