• Title/Summary/Keyword: Document clustering


A Method of Descriptor Extraction for Automatic Document Clustering (자동 문서 클러스터링을 위한 디스크립터 추출 방안)

  • Yun, Bo-Hyun;Kang, Hyun-Kyu;Ko, Hyung-Dae
    • Proceedings of the Korea Information Processing Society Conference / 2000.04a / pp.230-233 / 2000
  • Existing search engines list their results in order of relevance, which makes it difficult for users to find the documents they want. As a solution to this problem, automatic clustering is performed on the retrieved documents so that documents with similar content fall into the same cluster. This paper proposes a method for extracting the descriptors needed when clustering search-result documents. To extract descriptors within each cluster, we use the term-weighting scheme employed in the indexing stage of information retrieval.
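The term-weighting idea behind this descriptor extraction can be sketched as follows. The abstract does not give the exact weighting formula, so standard tf-idf is assumed: terms are weighted by their frequency inside the cluster times their inverse document frequency over the whole collection, and the top-weighted terms become the cluster's descriptors.

```python
import math
from collections import Counter

def extract_descriptors(cluster_docs, all_docs, top_k=3):
    """Pick descriptor terms for a cluster by tf-idf term weighting,
    as used in the indexing stage of information retrieval.
    Assumes cluster_docs is a subset of all_docs (tokenized documents)."""
    # Document frequency of each term over the whole collection.
    df = Counter()
    for doc in all_docs:
        df.update(set(doc))
    n = len(all_docs)
    # Term frequency inside the cluster.
    tf = Counter()
    for doc in cluster_docs:
        tf.update(doc)
    # tf-idf weight; terms occurring in every document get weight 0.
    weights = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return [t for t, _ in sorted(weights.items(), key=lambda kv: -kv[1])[:top_k]]
```

A term such as a stopword that appears in every document receives idf = log(1) = 0 and is never chosen as a descriptor.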


Design and Implementation of a Web Document Clustering System Using an Incremental Algorithm (점진적 알고리즘을 이용한 웹 문서 클러스터링 시스템의 설계 및 구현)

  • 황태호;손기락
    • Proceedings of the Korean Information Science Society Conference / 1999.10a / pp.207-209 / 1999
  • Cluster analysis is a statistical technique used to generate a classification structure suited to a set of observations. Clustering the many high-dimensional data sets typically found in information retrieval applications requires considerable space and time. The SLINK algorithm runs in O(n²) time and O(n) space and can be applied incrementally. Using SLINK, we implemented a system that clusters search-engine results online. The implemented system provides relatively high accuracy and advantages in storing and representing each cluster; its relatively slow execution speed is not a problem, since documents are downloaded online at an even slower rate.
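The linkage criterion SLINK computes can be illustrated with a deliberately naive single-linkage sketch. SLINK itself reaches the same dendrogram in O(n²) time and O(n) space through a pointer representation, which this sketch does not reproduce; here clusters are merged whenever their closest members are within a distance threshold.

```python
def single_linkage(points, dist, threshold):
    """Naive single-linkage clustering: start from singleton clusters and
    repeatedly merge the two clusters whose *closest* members lie within
    `threshold` of each other. Illustrates the linkage criterion only;
    SLINK computes the same result far more efficiently."""
    clusters = [[p] for p in points]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest pair of members.
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if d <= threshold:
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters
```

With points on a line and a threshold of 1.5, the chained points 0, 1, 2 end up in one cluster even though 0 and 2 are further than the threshold apart, which is the characteristic "chaining" behaviour of single linkage.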


Document Clustering using Genetic Algorithm and Cluster Measurement (클러스터 측정과 유전자 알고리즘을 이용한 문서 클러스터링)

  • Choi, Lim Cheon;Park, Soon Cheol
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.490-493 / 2010
  • This paper proposes a document clustering algorithm using cluster measurement and a genetic algorithm. We implemented document clustering by mapping the clustering problem onto the elements of a genetic algorithm and using a cluster measurement as the fitness function. For performance evaluation, we used the 한국일보-20000/한국일보-40075 document categorization test collections. The clustering evaluation shows that the AS index performs better than the DB index and the RS index. The proposed algorithm also performs consistently better than the K-means clustering algorithm.
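The abstract does not define the AS or RS indices, so as an illustration of a "cluster measurement" usable as a GA fitness function, here is a minimal Davies-Bouldin (DB) index sketch for one-dimensional data: lower values mean more compact, better-separated clusters, so a genetic algorithm would minimize it (or maximize its inverse as fitness).

```python
def db_index(clusters):
    """Davies-Bouldin index for clusters of 1-D values (lower is better).
    A genetic algorithm can use the inverse of this as its fitness.
    Assumes clusters are non-empty and have distinct centroids."""
    centroids = [sum(c) / len(c) for c in clusters]
    # Within-cluster scatter: mean absolute deviation from the centroid.
    scatter = [sum(abs(x - m) for x in c) / len(c)
               for c, m in zip(clusters, centroids)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        # Worst-case similarity of cluster i to any other cluster.
        total += max((scatter[i] + scatter[j]) / abs(centroids[i] - centroids[j])
                     for j in range(k) if j != i)
    return total / k
```

A compact, well-separated partition scores far lower than one that mixes the same values across clusters, which is exactly what a fitness function needs to distinguish.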

A Semantic Text Model with Wikipedia-based Concept Space (위키피디어 기반 개념 공간을 가지는 시멘틱 텍스트 모델)

  • Kim, Han-Joon;Chang, Jae-Young
    • The Journal of Society for e-Business Studies / v.19 no.3 / pp.107-123 / 2014
  • Current text mining techniques suffer from the problem that conventional text representation models cannot express the semantic or conceptual information of textual documents written in natural language. The conventional models represent documents as bags of words; they include the vector space model, the Boolean model, the statistical model, and the tensor space model. These models express documents only with term literals for indexing and frequency-based weights for the corresponding terms; that is, they ignore the semantic, sequential-order, and structural information of terms. Most text mining techniques have been developed assuming that documents are represented by such 'bag-of-words' models. In the big data era, however, a new text representation paradigm is required that can analyse huge amounts of textual documents more precisely. Our text model regards the 'concept' as an independent space on a par with the 'term' and 'document' spaces of the vector space model, and it expresses the relatedness among the three spaces. To develop the concept space, we use Wikipedia data, in which each article defines a single concept. Consequently, a document collection is represented as a 3-order tensor with semantic information; we call the proposed model the text cuboid model. Through experiments on the popular 20NewsGroup document corpus, we demonstrate the superiority of the proposed text model in terms of document clustering and concept clustering.
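The term × concept × document structure can be sketched as a sparse 3-order tensor. The term-to-concept mapping below is a toy stand-in (in the paper, each Wikipedia article defines one concept; the exact mapping procedure is not given here), and the tensor simply counts how often a term occurs in a document under each concept.

```python
from collections import defaultdict

# Toy term -> concept mapping (hypothetical; in the paper every
# Wikipedia article defines a single concept).
CONCEPTS = {"goal": "Football", "match": "Football",
            "vote": "Election", "ballot": "Election"}

def text_cuboid(docs):
    """Build a sparse 3-order tensor indexed by (term, concept, doc_id),
    in the spirit of the text cuboid model. Terms with no known
    concept are skipped in this sketch."""
    tensor = defaultdict(int)
    for doc_id, tokens in enumerate(docs):
        for t in tokens:
            c = CONCEPTS.get(t)
            if c is not None:
                tensor[(t, c, doc_id)] += 1
    return tensor
```

Slicing the tensor along the concept axis then yields a concept-level view of each document, which is what enables concept clustering alongside ordinary document clustering.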

An Improved Combined Content-similarity Approach for Optimizing Web Query Disambiguation

  • Kamal, Shahid;Ibrahim, Roliana;Ghani, Imran
    • Journal of Internet Computing and Services / v.16 no.6 / pp.79-88 / 2015
  • Web search engines face uncertainty when ambiguous queries are given as input for retrieving accurate results. Ambiguous queries constitute a significant fraction of search instances and pose real challenges to web search engines. Web search has also drawn researchers' interest in handling search context from a location perspective. Our proposed disambiguation approach is designed to improve the user experience by combining location relevance with document relevance. The aim is that giving the user a comprehensive location perspective on a topic is more informative than retrieving a result that contains only temporal or context information. From a user's perspective, the capacity to use this information in a location-aware manner is potentially useful for several tasks, including query understanding and location-based clustering. To carry out the approach, we developed a Java-based prototype that derives contextual information from web results for queries drawn from well-known datasets. The queries are further classified in order to search broadly. After results are shown to users and selections are made, feedback is recorded implicitly to improve web search based on contextual information. The experimental results demonstrate the strong performance of our approach: precision 75%, accuracy 73%, recall 81%, and F-measure 78% compared with a generic temporal evaluation approach, and precision 86%, accuracy 71%, recall 67%, and F-measure 75% compared with a web document clustering approach.
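The reported figures are internally consistent if the F-measure is the usual F1 score, the harmonic mean of precision and recall, which can be checked directly:

```python
def f_measure(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

For the first comparison, f_measure(0.75, 0.81) ≈ 0.78, and for the second, f_measure(0.86, 0.67) ≈ 0.75, matching the F-measures reported in the abstract.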

Analysis of the abstracts of research articles in food related to climate change using a text-mining algorithm (텍스트 마이닝 기법을 활용한 기후변화관련 식품분야 논문초록 분석)

  • Bae, Kyu Yong;Park, Ju-Hyun;Kim, Jeong Seon;Lee, Yung-Seop
    • Journal of the Korean Data and Information Science Society / v.24 no.6 / pp.1429-1437 / 2013
  • Research articles on food related to climate change were analyzed with a text-mining algorithm, one of the nonstructural data analysis tools in big data analysis, focusing on the frequencies of terms appearing in the abstracts. As a first step, a term-document matrix was established; a hierarchical clustering algorithm was then applied, based on dissimilarities among the selected terms together with domain expertise, to classify the documents into a few labeled groups. Through this research we identified important topics in the field of food related to climate change and their trends over recent years. The results can be used in future research to support systematic responses and adaptation to climate change.
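The first two steps described above, building a term-document matrix and deriving the dissimilarities fed to hierarchical clustering, can be sketched as follows. Cosine dissimilarity is assumed here; the article does not state which dissimilarity measure was used.

```python
import math
from collections import Counter

def term_document_matrix(docs):
    """Rows are terms (sorted vocabulary), columns are documents;
    entries are raw term counts."""
    vocab = sorted({t for d in docs for t in d})
    counts = [Counter(d) for d in docs]
    return vocab, [[c[t] for c in counts] for t in vocab]

def cosine_dissimilarity(u, v):
    """1 - cosine similarity: a common document dissimilarity that can
    feed a hierarchical clustering algorithm."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm
```

Transposing the matrix gives one count vector per document; identical documents have dissimilarity 0 and documents sharing no terms have dissimilarity 1.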

Automatic Response and Conceptual Browsing of Internet FAQs Using Self-Organizing Maps (자기구성 지도를 이용한 인터넷 FAQ의 자동응답 및 개념적 브라우징)

  • Ahn, Joon-Hyun;Ryu, Jung-Won;Cho, Sung-Bae
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.5 / pp.432-441 / 2002
  • Although many Internet services offer useful information, computer users are often unfamiliar with them and need an assistant system to use them easily. In the case of web sites, for example, operators answer users' e-mail questions, but the growing number of users makes it hard to answer the questions efficiently. In this paper, we propose an assistant system that responds to users' questions automatically and helps them browse the Hanmail Net FAQ (Frequently Asked Questions) conceptually. The system uses a two-level self-organizing map (SOM): a keyword clustering SOM and a document classification SOM. The keyword clustering SOM reduces a variable-length question to a normalized vector, and the document classification SOM classifies the question into an answer class. Experiments on 2,206 e-mail questions collected over one month from Hanmail Net show that the system finds the correct answers with a recognition rate of 95%, and that browsing based on the map is conceptual and efficient.
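The core SOM mechanics, finding the best-matching unit for an input vector and pulling its weights toward that input, can be sketched minimally. This is a bare illustration only: it has a fixed learning rate and no neighbourhood function or decay, unlike the two-level SOM described in the paper.

```python
import random

def train_som(vectors, n_nodes=2, epochs=50, lr=0.3, seed=0):
    """Minimal self-organizing map sketch (no neighbourhood or decay):
    each node holds a weight vector; for every input, the best-matching
    unit (BMU) is pulled toward that input."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in vectors:
            # BMU: node with the smallest squared distance to the input.
            bmu = min(nodes, key=lambda w: sum((a - b) ** 2 for a, b in zip(w, x)))
            for i in range(dim):
                bmu[i] += lr * (x[i] - bmu[i])
    return nodes

def classify(nodes, x):
    """Map an input vector to the index of its best-matching node."""
    return min(range(len(nodes)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], x)))
```

After training on two well-separated groups of vectors, each node settles near one group, so classification amounts to a nearest-node lookup, which is also how a question vector would be routed to an answer class.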

Word Image Decomposition from Image Regions in Document Images using Statistical Analyses (문서 영상의 그림 영역에서 통계적 분석을 이용한 단어 영상 추출)

  • Jeong, Chang-Bu;Kim, Soo-Hyung
    • The KIPS Transactions:PartB / v.13B no.6 s.109 / pp.591-600 / 2006
  • This paper describes the development and implementation of an algorithm for decomposing word images from image regions containing mixed text and graphics in document images, using statistical analyses. To decompose word images from image regions, character components must first be separated from graphic components. For this step, we propose a method that separates them through box-plot analysis of statistics of the structural components. The accuracy of this method is not sensitive to changes in the images, because the separation criterion is defined by the statistics of the components. The character regions are then determined by analyzing the local crowdedness of the separated character components. Finally, we divide the character regions into text lines and word images using projection profile analysis, gap clustering, special symbol detection, etc. The proposed system reduces the influence of image variation because it uses a criterion based on the statistics of the image regions. We also tested the proposed method in a document image processing system for keyword spotting, and showed the need for further study of the proposed method.

A Study of using Emotional Features for Information Retrieval Systems (감정요소를 사용한 정보검색에 관한 연구)

  • Kim, Myung-Gwan;Park, Young-Tack
    • The KIPS Transactions:PartB / v.10B no.6 / pp.579-586 / 2003
  • In this paper, we propose a novel approach that employs emotional features in document retrieval systems. Five emotional features, HAPPY, SAD, ANGRY, FEAR, and DISGUST, are used to represent Korean documents, and users are allowed to use these features to retrieve their documents. The retrieved documents are then learned by classification methods such as the cohesion factor, naive Bayes, and k-nearest neighbour approaches. To combine the various approaches, a voting method is used. In addition, k-means clustering was used in our experiments. Our approach proved more accurate than the other methods, and performed better on short texts than on large documents.
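The combination step described above, pooling the outputs of several classifiers by voting, can be sketched in a few lines; a simple plurality vote is assumed, as the abstract does not specify a weighting scheme.

```python
from collections import Counter

def vote(predictions):
    """Combine classifier outputs (e.g. from cohesion factor, naive
    Bayes, and k-nearest neighbour) by simple plurality vote; ties go
    to the label encountered first."""
    return Counter(predictions).most_common(1)[0][0]
```

For example, if two of three classifiers label a document HAPPY and one labels it SAD, the combined prediction is HAPPY.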

Web Document Classification Using User Intention Information (사용자 의도 정보를 사용한 웹문서 분류)

  • Jang, Yeong-Cheol
    • Proceedings of the Korea Society for Industrial Systems Conference / 2008.10b / pp.292-297 / 2008
  • To accurately categorize web documents containing complex semantics and to automate this process, standardized, intelligent, and automated document representation and classification techniques that can accommodate human knowledge systems are needed. To this end, we introduce a categorization and adjustment process that applies user intention information to keyword frequency, the relations among keywords within a document, thesaurus use, and probabilistic techniques. We designed a base framework so that intention information can be used both in knowledge-based document classification, which uses a thesaurus, and in similarity-based document classification, which performs unsupervised learning without a priori knowledge; the differences between the two methods are then reconciled in a hybrid adjustment process. The HDCI (Hybrid Document Classification with Intention) model designed in this study consists of the above web document classification process and a user intention analysis process that controls and supports it. In the intention analysis process, the user intention provided along with keywords is used, together with domain knowledge, to construct an intention hierarchy tree; this tree serves as a constraint or guide during document classification, from which a user intention profile or representative keywords of document characteristics are extracted. HDCI controls and guides the bottom-up probabilistic approach based on inter-document similarity, and improves the accuracy of establishing relations among keywords, whose diversity is limited in the knowledge-base (thesaurus) approach.
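The intention hierarchy tree used as a classification constraint can be sketched as a simple parent-to-children mapping. The intentions below are hypothetical examples, as the abstract does not list concrete intentions; walking the subtree under a user's stated intention yields the set of intentions allowed to constrain candidate document categories.

```python
# Hypothetical intention hierarchy: parent intention -> child intentions.
INTENTION_TREE = {"buy": ["buy/phone", "buy/laptop"],
                  "learn": ["learn/history"]}

def candidate_intentions(root):
    """Walk the intention hierarchy tree below a root intention; the
    resulting set can act as a constraint (or guide) on which document
    categories the classifier may consider."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(INTENTION_TREE.get(node, []))
    return out
```

A classifier would then discard candidate categories whose associated intention does not appear in this set, which is the "constraint or guide" role the abstract describes.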
