• Title/Abstract/Keyword: Text Network

Search results: 1,103

가변적 클러스터 개수에 대한 문서군집화 평가방법 (The Evaluation Measure of Text Clustering for the Variable Number of Clusters)

  • 조태호
    • 한국정보과학회:학술대회논문집 / Proceedings of the 2006 KIISE Fall Conference, Vol. 33, No. 2 (B) / pp. 233-237 / 2006
  • This study proposes an innovative measure for evaluating the performance of text clustering. When the K-means algorithm or Kohonen Networks are used for text clustering, the number of clusters is fixed in advance as a parameter, whereas with the single-pass algorithm the number of clusters is not predictable. With labeled documents, the result of clustering by K-means or a Kohonen Network can be evaluated by setting the number of clusters to the number of given target categories, mapping each cluster to a target category, and applying standard evaluation measures. With the single-pass algorithm, however, if the number of clusters differs from the number of target categories, such measures are useless for evaluating the clustering result. This study therefore proposes an evaluation measure of text clustering based on intra-cluster similarity and inter-cluster similarity, called the Clustering Index (CI) in this article.

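
The abstract defines CI from intra-cluster and inter-cluster similarity but does not give the exact formula. A minimal sketch, assuming cosine similarity over document vectors and a hypothetical ratio form (mean intra-cluster similarity over the sum of mean intra- and mean inter-cluster similarity), could look like:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two document vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def clustering_index(vectors, labels):
    """Hypothetical CI: mean intra-cluster similarity divided by the sum of
    mean intra- and mean inter-cluster similarity (higher is better)."""
    intra, inter = [], []
    n = len(vectors)
    for i in range(n):
        for j in range(i + 1, n):
            s = cosine(vectors[i], vectors[j])
            (intra if labels[i] == labels[j] else inter).append(s)
    m_intra = sum(intra) / len(intra)
    m_inter = sum(inter) / len(inter)
    return m_intra / (m_intra + m_inter)

# Two tight, well-separated toy clusters: CI should be close to 1.
docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = [0, 0, 1, 1]
ci = clustering_index(docs, labels)
```

Because the measure only needs a label per document, it applies however many clusters the single-pass algorithm happens to produce, which is the property the paper is after.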

Multi-layered attentional peephole convolutional LSTM for abstractive text summarization

  • Rahman, Md. Motiur;Siddiqui, Fazlul Hasan
    • ETRI Journal / Vol. 43, No. 2 / pp. 288-298 / 2021
  • Abstractive text summarization is the process of summarizing a given text by paraphrasing its facts while keeping the meaning intact. Manual summary generation is laborious and time-consuming. We present a summary generation model based on a multilayered attentional peephole convolutional long short-term memory (MAPCoL) network that extracts abstractive summaries of large texts in an automated manner. We add attention to a peephole convolutional LSTM to improve the overall quality of a summary by weighting important parts of the source text during training. We evaluated the semantic coherence of our MAPCoL model on the popular CNN/Daily Mail dataset and found that MAPCoL outperformed other traditional LSTM-based models. We also found performance improvements for MAPCoL in different internal settings when compared with state-of-the-art models of abstractive text summarization.
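
The attention step the abstract describes, weighting important parts of the source, can be illustrated with plain dot-product attention; this is a deliberate simplification, not the paper's MAPCoL architecture, and all arrays below are toy values:

```python
import numpy as np

def attention_context(encoder_states, decoder_state):
    """Dot-product attention: score each source state against the current
    decoder state, softmax the scores into weights, and form a weighted
    context vector over the source."""
    scores = encoder_states @ decoder_state
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over source positions
    context = weights @ encoder_states  # weighted sum of source states
    return weights, context

# Three toy encoder states of dimension 2 and one decoder state.
enc = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dec = np.array([1.0, 0.0])
w, ctx = attention_context(enc, dec)
```

Source positions aligned with the decoder state receive larger weights, which is how "important parts of the source text" end up dominating the context vector.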

텍스트마이닝 기법을 활용한 국내 음식관광 연구 동향 분석 (Analyzing Research Trends of Food Tourism Using Text Mining Techniques)

  • 신서영;이범준
    • 한국식생활문화학회지 / Vol. 35, No. 1 / pp. 65-78 / 2020
  • The objective of this study was to review and evaluate the growing body of food tourism research and thereby identify its trends. Using text-mining techniques, this paper traces the literature on food tourism published from 2004 to 2018. The study reviewed 201 articles in the KCI database whose abstracts include the words 'food' and 'tourism'. Word-cloud analysis showed that the research subjects were predominantly 'Festival', 'Region', 'Culture', and 'Tourist', with slight differences in frequency across time periods. Based on main path analysis, we extracted meaningful paths between domestically published cited references, yielding a total of 12 networks from 2004 to 2018. Text network analysis indicated that the words with high centrality showed similarities and differences in the food tourism literature according to the time period, displayed in a sociogram, a visualization tool. This study offers a new perspective for comprehending the overall flow of the relevant research.
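
The text-network step, linking words that co-occur and ranking them by centrality, can be sketched as follows; the toy documents and the weighted-degree notion of centrality are illustrative assumptions, not the paper's data or exact measure:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(documents):
    # Undirected co-occurrence network: words appearing in the same
    # document are linked; edge weight = number of shared documents.
    edges = defaultdict(int)
    for doc in documents:
        for u, v in combinations(sorted(set(doc.lower().split())), 2):
            edges[(u, v)] += 1
    return edges

def degree_centrality(edges):
    # Weighted degree of each node: sum of the weights of its edges.
    deg = defaultdict(int)
    for (u, v), w in edges.items():
        deg[u] += w
        deg[v] += w
    return dict(deg)

docs = [
    "food festival region culture",
    "festival tourist region",
    "food culture tourist",
]
deg = degree_centrality(cooccurrence_network(docs))
```

High-degree nodes in such a network are the candidates for the "high centrality" keywords the study reports.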

Urdu News Classification using Application of Machine Learning Algorithms on News Headline

  • Khan, Muhammad Badruddin
    • International Journal of Computer Science & Network Security / Vol. 21, No. 2 / pp. 229-237 / 2021
  • Our modern 'information-hungry' age demands delivery of information at unprecedented rates. Timely delivery of noteworthy information about recent events can help people from different segments of life in a number of ways. As the world has become a global village, the volume and speed of news flow demand that machines help humans handle the enormous data. News is presented to the public as video, audio, images, and text. News text available on the internet is a source of knowledge for billions of internet users. Urdu is spoken and understood by millions of people in the Indian subcontinent, and the availability of online Urdu news enables them to improve their understanding of the world and make better decisions. This paper uses available online Urdu news data to train machines to automatically categorize news. Various machine learning algorithms were trained on news headlines, and the results demonstrate that the Bernoulli Naïve Bayes (Bernoulli NB) and Multinomial Naïve Bayes (Multinomial NB) algorithms outperformed the others on all performance measures. The maximum accuracy achieved on the dataset was 94.278% by the Multinomial NB classifier, followed by the Bernoulli NB classifier at 94.274%, when Urdu stop words were removed from the dataset. The results suggest that the short text of news headlines can serve as input for the text categorization process.
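
The best-performing model, Multinomial Naïve Bayes, is simple enough to sketch from scratch with Laplace smoothing; the English stand-in headlines below are hypothetical toy data, not the paper's Urdu dataset:

```python
import math
from collections import Counter

class MultinomialNaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        vocab = set()
        for text, c in zip(texts, labels):
            words = text.lower().split()
            self.counts[c].update(words)
            vocab.update(words)
        self.vocab = vocab
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = math.log(self.prior[c])
            for w in text.lower().split():
                # Laplace smoothing over the shared vocabulary.
                lp += math.log((self.counts[c][w] + 1) / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

headlines = ["team wins final match", "government passes new budget",
             "striker scores late goal", "parliament debates tax bill"]
topics = ["sports", "politics", "sports", "politics"]
clf = MultinomialNaiveBayes().fit(headlines, topics)
```

Headlines are short, so smoothing matters: without it a single unseen word would zero out an entire class probability.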

Enhancing the Text Mining Process by Implementation of Average-Stochastic Gradient Descent Weight Dropped Long-Short Memory

  • Annaluri, Sreenivasa Rao;Attili, Venkata Ramana
    • International Journal of Computer Science & Network Security / Vol. 22, No. 7 / pp. 352-358 / 2022
  • Text mining is an important process for analyzing data collected from different sources such as video, audio, and social media. Tools such as Natural Language Processing (NLP) are mostly used in real-time applications. In earlier research, text mining approaches were implemented using long short-term memory (LSTM) networks. In this paper, text mining is performed using the average stochastic gradient descent weight-dropped (AWD) LSTM technique to obtain better accuracy and performance. The proposed model is demonstrated on Internet Movie Database (IMDB) reviews. Python was used to implement the proposed model, for its adaptability and flexibility when dealing with massive data sets and databases. The results show that the proposed LSTM plus weight-drop plus embedding model achieved an accuracy of 88.36%, compared with 85.64% for the previous AWD-LSTM model; this is also far better than the 85.16% accuracy obtained by a plain LSTM model. Finally, the loss decreased from 0.341 to 0.299 with the proposed model.
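
The "weight-dropped" part of AWD-LSTM is DropConnect applied to the hidden-to-hidden weight matrix of the LSTM. A NumPy sketch of just that step, not the paper's full model, might be:

```python
import numpy as np

def weight_drop(w_hh, p, rng):
    """DropConnect: zero each hidden-to-hidden weight independently with
    probability p. In AWD-LSTM the same mask is reused for the whole
    forward pass of a sequence, unlike ordinary activation dropout."""
    mask = rng.random(w_hh.shape) >= p
    return w_hh * mask

rng = np.random.default_rng(0)
hidden = 4
w_hh = rng.standard_normal((hidden, hidden))   # toy recurrent weights
w_dropped = weight_drop(w_hh, p=0.5, rng=rng)  # use in place of w_hh for one pass
```

Regularizing the recurrent weights directly is what distinguishes this scheme from dropping hidden activations, which disturbs the recurrence itself.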

조현병 관련 주요 일간지 기사에 대한 텍스트 마이닝 분석 (Text-Mining Analyses of News Articles on Schizophrenia)

  • 남희정;류승형
    • 대한조현병학회지 / Vol. 23, No. 2 / pp. 58-64 / 2020
  • Objectives: In this study, we conducted an exploratory analysis of current media trends on schizophrenia using text-mining methods. Methods: First, web-crawling techniques extracted text data from 575 news articles in 10 major newspapers between 2018 and 2019, selected by searching for "schizophrenia" in Naver News. We developed a document-term matrix (DTM) and a term-document matrix (TDM) through pre-processing. Using the DTM and TDM, we conducted frequency analysis, co-occurrence network analysis, and topic-model analysis. Results: Frequency analysis showed that keywords such as "police," "mental illness," "admission," "patient," "crime," "apartment," "lethal weapon," "treatment," "Jinju," and "residents" were frequently mentioned in news articles on schizophrenia. Within the article text, many of these keywords were highly correlated with the term "schizophrenia" and interconnected with each other in the co-occurrence network. The latent Dirichlet allocation model presented 10 topics comprising combinations of keywords: "police-Jinju," "hospital-admission," "research-finding," "care-center," "schizophrenia-symptom," "society-issue," "family-mind," "woman-school," and "disabled-facilities." Conclusion: The results of the present study highlight that in recent years the media has been reporting violence by patients with schizophrenia, raising the important issue of hospitalization and community management of patients with schizophrenia.
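
The pre-processing step, building a document-term matrix before frequency analysis, can be sketched as follows; the toy articles are illustrative, not the study's crawled corpus:

```python
from collections import Counter

def build_dtm(documents):
    """Toy document-term matrix: one term-frequency row per document,
    plus corpus-wide term totals used for ranking frequent keywords."""
    rows = [Counter(doc.lower().split()) for doc in documents]
    totals = Counter()
    for row in rows:
        totals.update(row)
    return rows, totals

articles = [
    "police report on patient admission",
    "patient treatment and admission policy",
    "residents discuss treatment",
]
dtm, totals = build_dtm(articles)
top_terms = [w for w, _ in totals.most_common(3)]
```

The same matrix feeds the later steps: column sums give keyword frequencies, and pairs of terms sharing a row give the co-occurrence network.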

코로나19 판데믹 이후 컨테이너선 운임 상승 요인분석: 텍스트 분석을 중심으로 (Analysis of Factors Affecting Surge in Container Shipping Rates in the Era of Covid19 Using Text Analysis)

  • 나진성
    • 한국산업정보학회논문지 / Vol. 27, No. 1 / pp. 111-123 / 2022
  • During the COVID-19 pandemic, container shipping rates have risen at an unprecedented pace. Although various analyses of the factors behind the rate increase have been conducted, none has used unstructured text data. This study therefore identifies the drivers of the recent surge in container shipping rates from related news articles using network text analysis, one of the text-mining techniques, and LDA topic modeling. Articles published in Lloyd's List from January 2020 to July 2021 were analyzed. The analysis identifies the US-China trade friction, the sharp reduction in port calls and increase in blank sailings by global carriers anticipating a fall in global production, terminal congestion, and unexpected incidents such as the Suez Canal blockage as the main factors.

사회네트워크분석과 텍스트마이닝을 이용한 배구 경기력 분석 (Performance analysis of volleyball games using the social network and text mining techniques)

  • 강병욱;허만규;최승배
    • Journal of the Korean Data and Information Science Society / Vol. 26, No. 3 / pp. 619-630 / 2015
  • The purpose of this study is to identify the attack and pass patterns of a domestic men's professional volleyball team using social network analysis and text mining, to extract key keywords related to performance and evaluate it, and thereby to provide basic data for establishing the team's future game strategy. The group variables derived from the social network analysis were reorganized into group '0' (6 players) and group '1' (11 players) in order to test for differences in the win/loss outcomes obtained by text mining. Judging by the rankings of degree centrality and betweenness centrality from the social network analysis, group '1' showed better performance than group '0'. A significance test of the text-mining-generated win/loss groups against the reorganized groups '0' and '1' showed a significant difference (p-value: 0.001). Cluster analysis by group showed that in group '0', players 'D' and 'E' scored accurately through 'set' plays. In group '1', player 'K' often failed when attacking after a 'dig', while players 'C' and 'P' made accurate 'set' plays.

Data Dictionary 기반의 R Programming을 통한 비정형 Text Mining Algorithm 연구 (A study on unstructured text mining algorithm through R programming based on data dictionary)

  • 이종화;이현규
    • 한국산업정보학회논문지 / Vol. 20, No. 2 / pp. 113-124 / 2015
  • Unlike structured data collected and stored under a pre-declared schema, unstructured data written in the natural language that ordinary users employ every day in the Web 2.0 era has a far wider range of applications than before. Text mining is the big-data analysis technique for extracting meaningful information from text, which is not only growing explosively in volume but also carries human sentiment as it was expressed; it is the subject of this study. We used R, the open-source statistical analysis software, and studied algorithm implementations for collecting, storing, pre-processing, analyzing, and visualizing (frequency analysis, cluster analysis, word cloud, social network analysis) unstructured text documents in a web environment. In particular, to sharpen the focus on the researcher's own domain of analysis, we used a keyword-extraction technique that references a data dictionary. Applied to a real case, R proved very useful as statistical analysis software, running on various operating systems and providing interfaces to common languages.
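
A data-dictionary keyword filter of the kind described can be sketched as follows (shown in Python rather than the paper's R; the dictionary and text are toy values):

```python
def extract_keywords(text, dictionary):
    """Keep only tokens found in a domain data dictionary: a toy version
    of dictionary-based keyword extraction for focusing the analysis on
    one research domain."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    return [t for t in tokens if t in dictionary]

domain_dict = {"mining", "cluster", "network", "frequency", "visualization"}
text = "Text mining covers frequency analysis, cluster analysis and network visualization."
kws = extract_keywords(text, domain_dict)
```

Restricting tokens to a curated dictionary is what keeps generic words out of the frequency counts and word clouds downstream.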

Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son;Lee, Gueesang
    • International Journal of Contents / Vol. 14, No. 1 / pp. 1-6 / 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich, precise information embodied in text is useful in a wide range of vision-based applications: text extracted from images can support automatic annotation, indexing, language translation, and assistance systems for impaired persons. Natural-scene text detection is therefore an active and important research topic in computer vision and document analysis. Previous methods performed poorly, producing numerous false-positive and false-negative regions. In this paper, a fully convolutional network (FCN) based method with a supervised architecture is used to localize textual regions. The model was trained directly on images, with pixel values as inputs and binary ground truth as labels. Evaluated on the ICDAR-2013 dataset, the method proved comparable to other feature-based methods and could expedite future research on deep-learning-based text detection.