• Title/Summary/Keyword: Document information retrieval

Search results: 411

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, as it is extremely tedious and time-consuming and requires a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are mainly two approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, there is a given vocabulary, and the aim is to match its entries to the texts; that is, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiments likewise show that 37% of author-assigned keywords are not included in the full text. This is why we adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not yet have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating the keywords with high similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
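
The five-step procedure above reduces to scoring each candidate keyword set against the target document's term-frequency vector by cosine similarity. A minimal sketch of that scoring loop, with toy keyword weights and whitespace tokenization standing in for the paper's preprocessing:

```python
import math
from collections import Counter

def cosine_similarity(vec_a, vec_b):
    """Cosine of the angle between two sparse vectors represented as dicts."""
    dot = sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def assign_keywords(document_text, keyword_sets, top_k=5):
    """Steps (2)-(5): parse the target document, build its term-frequency
    vector, score every keyword set by cosine similarity, and return the
    labels of the highest-scoring sets."""
    doc_vector = Counter(document_text.lower().split())  # naive parsing
    ranked = sorted(keyword_sets,
                    key=lambda k: cosine_similarity(keyword_sets[k], doc_vector),
                    reverse=True)
    return ranked[:top_k]

# Step (1): each keyword label maps to a weight vector precomputed from
# training documents already carrying that keyword (toy values here).
keyword_sets = {
    "information retrieval": {"retrieval": 2.0, "query": 1.5, "index": 1.0},
    "text mining": {"mining": 2.0, "corpus": 1.2, "cluster": 1.0},
}
print(assign_keywords("a query retrieval index study", keyword_sets, top_k=1))
```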

Fast, Flexible Text Search Using Genomic Short-Read Mapping Model

  • Kim, Sung-Hwan;Cho, Hwan-Gue
    • ETRI Journal
    • /
    • v.38 no.3
    • /
    • pp.518-528
    • /
    • 2016
  • The searching of an extensive document database for documents that are locally similar to a given query document, and the subsequent detection of similar regions between such documents, is considered an essential task in the fields of information retrieval and data management. In this paper, we present a framework for such a task. The proposed framework employs the method of short-read mapping, which is used in bioinformatics to reveal similarities between genomic sequences. Documents are treated as biological objects; consequently, edit operations between locally similar documents are viewed as an evolutionary process. Accordingly, we are able to apply the method of evolution tracing to the detection of similar regions between documents. In addition, we propose heuristic methods to address issues arising at the different stages of the framework, for example, a frequency-based fragment ordering method and a locality-aware interval aggregation method. Extensive experiments covering various scenarios of searching a large document database for documents locally similar to a query document show that the proposed framework outperforms existing methods.
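
The abstract does not give the mapping details, but the core idea of treating query fragments like genomic short reads can be sketched as k-gram seeding plus hit aggregation; the fragment length and hit threshold below are illustrative assumptions, not the paper's parameters:

```python
from collections import defaultdict

K = 8  # fragment ("short read") length; an illustrative choice

def build_index(documents):
    """Index every length-K character fragment of every document,
    analogous to indexing a reference genome."""
    index = defaultdict(list)
    for doc_id, text in enumerate(documents):
        for pos in range(len(text) - K + 1):
            index[text[pos:pos + K]].append((doc_id, pos))
    return index

def find_similar_regions(query, index, min_hits=3):
    """Map the query's fragments onto the indexed documents and report
    documents accumulating enough co-located hits, a crude stand-in for
    the paper's locality-aware interval aggregation."""
    hits = defaultdict(list)
    for pos in range(len(query) - K + 1):
        for doc_id, doc_pos in index.get(query[pos:pos + K], ()):
            hits[doc_id].append((doc_pos, pos))
    return {doc_id: sorted(h) for doc_id, h in hits.items() if len(h) >= min_hits}

docs = ["the quick brown fox jumps over the lazy dog", "a completely unrelated sentence"]
print(find_similar_regions("quick brown fox", build_index(docs)))
```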

A Model of Natural Language Information Retrieval Using Main Keywords and Sub-keywords (주 키워드와 부 키워드를 이용한 자연언어 정보 검색 모델)

  • Kang, Hyun-Kyu;Park, Se-Young
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3052-3062
    • /
    • 1997
  • Information Retrieval (IR) aims to retrieve relevant information that satisfies a user's information needs. However, a major role of IR systems is not just to generate sets of relevant documents, but to help determine which documents are most likely to be relevant to the given requirements. Various attempts have been made in the recent past to use syntactic analysis methods to generate the complex constructions that are essential for content identification in automatic text analysis systems. Unfortunately, it is known that methods based on syntactic understanding alone are not sufficiently powerful to produce complete analyses of arbitrary text samples. In this paper, we present a document ranking method based on two-level ranking: the first level retrieves the documents, and the second level reorders the retrieved documents. The main keywords used in the first level are nouns and/or compound nouns that possess good document discrimination power. The sub-keywords used in the second level are adjectives, adverbs, and/or verbs that are not main keywords, plus function words. An empirical study was conducted on a Korean encyclopedia with 23,113 entries and 161 Korean natural language queries collected from end users; 85.0% of the natural language queries contained sub-keywords. The two-level document ranking method provides significant improvement in retrieval effectiveness over traditional ranking methods.
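
A minimal sketch of the two-level ranking idea: main-keyword overlap acts as the first-level retrieval filter and score, and sub-keyword overlap reorders within it. The part-of-speech tagging that separates the two keyword classes is assumed already done:

```python
def two_level_rank(query_main, query_sub, documents):
    """First level: retrieve documents containing any main keyword,
    ranked by main-keyword overlap. Second level: break ties by
    sub-keyword overlap."""
    results = []
    for doc_id, text in documents.items():
        tokens = set(text.lower().split())
        main_score = len(query_main & tokens)
        if main_score == 0:
            continue  # the first level acts as the retrieval filter
        sub_score = len(query_sub & tokens)
        results.append((main_score, sub_score, doc_id))
    # Sort primarily on main keywords, secondarily on sub-keywords.
    return [doc_id for _, _, doc_id in sorted(results, reverse=True)]

documents = {
    "d1": "fast retrieval of encyclopedia entries",
    "d2": "retrieval methods compared very carefully",
}
# Main keywords: nouns with good discrimination power; sub-keywords:
# adjectives/adverbs/verbs from the query (toy examples).
print(two_level_rank({"retrieval"}, {"fast"}, documents))  # ['d1', 'd2']
```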


A Study on Paper Retrieval System based on OWL Ontology (OWL 온톨로지를 기반으로 하는 논문 검색 시스템에 관한 연구)

  • Sun, Bok-Keun;We, Da-Hyun;Han, Kwang-Rok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.2
    • /
    • pp.169-180
    • /
    • 2009
  • Conventional paper retrieval is keyword-based, and as a huge amount of data is published, such search has increasing difficulty retrieving the information users want to find. In order to search for information according to the user's intent, we need to introduce the Semantic Web, which represents the semantics of Web document resources as ontologies and enables computers to understand them. We therefore describe a paper retrieval system based on reasoning over an OWL (Web Ontology Language) ontology. We build a paper ontology in OWL, a popular ontology language for the Semantic Web, represent the correlations among diverse paper properties as DL (description logic) queries, and let the system infer correct results from the paper ontology using those queries, making intelligent retrieval possible. Finally, we compare our experimental results with those of conventional retrieval.
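
The paper's DL queries are not reproduced in the abstract; as a rough analogue of ontology-based retrieval, the following sketch runs a SPARQL query over a toy RDF graph of paper metadata using rdflib. The ex: vocabulary is invented for illustration and is not the authors' ontology:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/paper#")  # illustrative namespace
g = Graph()
g.add((EX.p1, RDF.type, EX.Paper))
g.add((EX.p1, EX.hasKeyword, Literal("ontology")))
g.add((EX.p1, EX.citedBy, EX.p2))

# Retrieve papers by a modeled property rather than surface keywords alone.
query = """
PREFIX ex: <http://example.org/paper#>
SELECT ?paper WHERE { ?paper a ex:Paper ; ex:hasKeyword "ontology" . }
"""
for row in g.query(query):
    print(row.paper)
```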

Collection and Extraction Algorithm of Field-Associated Terms (분야연상어의 수집과 추출 알고리즘)

  • Lee, Sang-Kon;Lee, Wan-Kwon
    • The KIPS Transactions:PartB
    • /
    • v.10B no.3
    • /
    • pp.347-358
    • /
    • 2003
  • A field-associated term is a single or compound word that occurs in documents and makes it possible to recognize the field of a text using common human knowledge. For example, a person recognizes the field of a document, such as baseball or politics, upon encountering the word 'pitcher' or 'election', respectively. We propose an efficient method of constructing field-associated terms (FTs) for deciding the field of a text. A document classification scheme can be fixed from a well-classified document database or corpus. Considering the focus field, we discuss the levels and stability ranks of field-associated terms. To build a balanced FT collection, we first construct single FTs; from this collection, FT levels and stability ranks can be derived automatically. We then propose a new FT extraction algorithm for document classification that uses each FT's concentration rate and occurrence frequencies.
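
One plausible reading of the concentration rate is the share of a term's occurrences that fall in a single field of a classified corpus; a hedged sketch of that measure follows (the paper's exact formula may differ):

```python
from collections import Counter, defaultdict

def field_concentration(corpus):
    """For each term, return the share of its occurrences falling in its
    dominant field (1.0 = perfectly field-associated) and that field."""
    per_field = defaultdict(Counter)
    totals = Counter()
    for field, text in corpus:
        for term in text.lower().split():
            per_field[term][field] += 1
            totals[term] += 1
    return {
        term: max((count / totals[term], field) for field, count in fields.items())
        for term, fields in per_field.items()
    }

corpus = [
    ("baseball", "the pitcher threw a fastball"),
    ("politics", "the election results surprised the whole town"),
]
rates = field_concentration(corpus)
print(rates["pitcher"], rates["election"], rates["the"])
# 'pitcher' and 'election' concentrate fully in one field; 'the' does not.
```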

Word Extraction from Table Regions in Document Images (문서 영상 내 테이블 영역에서의 단어 추출)

  • Jeong, Chang-Bu;Kim, Soo-Hyung
    • The KIPS Transactions:PartB
    • /
    • v.12B no.4 s.100
    • /
    • pp.369-378
    • /
    • 2005
  • A document image is segmented and classified into text, picture, or table regions by document layout analysis, and the words in table regions are significant for keyword spotting because they are more meaningful than words in other regions. This paper proposes a method to extract words from table regions in document images. As word extraction from a table region practically amounts to extracting words from the cell regions composing the table, the cells must be extracted correctly. In the cell extraction module, the table frame is extracted first by analyzing connected components, and the intersection points are then extracted from the table frame. False intersections are corrected using the correlation between neighboring intersections, and the cells are extracted using the intersection information. Text regions in the individual cells are located using the connected-component information obtained during cell extraction and are segmented into text lines using projection profiles. Finally, the segmented lines are divided into words using gap clustering and special symbol detection. An experiment performed on table images extracted from Korean documents shows 99.16% word extraction accuracy.
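
As an illustration of the projection-profile step that splits a cell's text region into lines, here is a small NumPy sketch over a binary image; the array layout (1 = ink, 0 = background) is an assumption, not the paper's representation:

```python
import numpy as np

def segment_lines(binary_image):
    """Split a binary image into horizontal text lines by finding runs of
    rows whose horizontal projection profile is non-zero."""
    profile = binary_image.sum(axis=1)  # ink pixels per row
    lines, start = [], None
    for row, count in enumerate(profile):
        if count > 0 and start is None:
            start = row                  # a text line begins
        elif count == 0 and start is not None:
            lines.append((start, row))   # a text line ends
            start = None
    if start is not None:
        lines.append((start, len(profile)))
    return lines

# Two "text lines" separated by an empty row.
img = np.array([[1, 1, 0], [1, 0, 1], [0, 0, 0], [0, 1, 1]])
print(segment_lines(img))  # [(0, 2), (3, 4)]
```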

Implementation of an XML-Based Editor/Transformer for Large Volume of Similar Documents (XML 기반의 대용량 유사 문서 편집기/변환기 구현)

  • 황인준
    • The Journal of Society for e-Business Studies
    • /
    • v.9 no.1
    • /
    • pp.21-38
    • /
    • 2004
  • With its recent popularity, the Web is now considered a huge repository of information. Most documents on the Web have been created using HTML (HyperText Markup Language). Even though HTML is simple and easy to learn, it has several features that are obstacles to efficient information retrieval. XML (eXtensible Markup Language) can provide a solution to such problems and has in fact already been used in many applications. XML is a standard markup language for exchanging data on the Web; it can describe a document structure freely through its DTD, which enables efficient integration and retrieval of data on the Web. In this paper, we propose a versatile and efficient XML document manager. Its features include (i) a form-based XML editor that enables easy creation of new XML documents, (ii) an automatic document converter that transforms HTML documents with similar structure into XML documents, and (iii) a GUI-based DTD editor.
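
The converter's actual transformation rules are not given in the abstract; as a toy analogue, the sketch below maps the cells of similarly structured HTML tables into XML records using only the standard library. The field names are supplied by the caller and are illustrative assumptions:

```python
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

class CellCollector(HTMLParser):
    """Collect the text of every <td> cell, in document order."""
    def __init__(self):
        super().__init__()
        self.in_cell, self.cells = False, []
    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True
    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
    def handle_data(self, data):
        if self.in_cell:
            self.cells.append(data.strip())

def html_table_to_xml(html, field_names):
    """Map each group of len(field_names) cells to one XML record."""
    parser = CellCollector()
    parser.feed(html)
    root = ET.Element("records")
    for i in range(0, len(parser.cells), len(field_names)):
        record = ET.SubElement(root, "record")
        for name, value in zip(field_names, parser.cells[i:i + len(field_names)]):
            ET.SubElement(record, name).text = value
    return ET.tostring(root, encoding="unicode")

html = "<table><tr><td>IR</td><td>2004</td></tr></table>"
print(html_table_to_xml(html, ["title", "year"]))
```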


Keyword Extraction from News Corpus using Modified TF-IDF (TF-IDF의 변형을 이용한 전자뉴스에서의 키워드 추출 기법)

  • Lee, Sung-Jick;Kim, Han-Joon
    • The Journal of Society for e-Business Studies
    • /
    • v.14 no.4
    • /
    • pp.59-73
    • /
    • 2009
  • Keyword extraction is an important and essential technique for text mining applications such as information retrieval, text categorization, summarization, and topic detection. A set of keywords extracted from large-scale electronic document data provides significant features for text mining algorithms and contributes to improving the performance of document browsing, topic detection, and automated text classification. This paper presents a keyword extraction technique that can be used to detect topics for each news domain from a large document collection of Internet news portal sites. Basically, we use six variants of the traditional TF-IDF weighting model. On top of the TF-IDF model, we propose a word filtering technique called 'cross-domain comparison filtering'. To prove the effectiveness of our method, we analyze the usefulness of keywords extracted from Korean news articles and present how the keywords of each news domain change over time.
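
For reference, the baseline TF-IDF weighting that the six variants modify can be sketched as follows; the cross-domain comparison filtering layered on top is not reproduced here:

```python
import math
from collections import Counter

def tf_idf(documents):
    """Classic TF-IDF: term frequency scaled by inverse document frequency.
    This is only the baseline weighting, not any of the paper's variants."""
    doc_tokens = [doc.lower().split() for doc in documents]
    df = Counter(term for tokens in doc_tokens for term in set(tokens))
    n_docs = len(documents)
    weights = []
    for tokens in doc_tokens:
        tf = Counter(tokens)
        weights.append({
            term: (count / len(tokens)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

docs = ["stocks fell sharply today", "stocks rose as markets calmed"]
for w in tf_idf(docs):
    top = max(w, key=w.get)  # the highest-weighted candidate keyword
    print(top, round(w[top], 3))
```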


A Design and Implementation of Ontology-based Retrieval System for the Electronic Records of Universities (대학 전자기록물을 위한 온톨로지 기반 검색시스템 설계 및 구현)

  • Lee, Jung-Hee;Kim, Hee-Sop
    • Journal of the Korean Society for information Management
    • /
    • v.24 no.3
    • /
    • pp.343-362
    • /
    • 2007
  • The purpose of this study is to design and implement an ontology-based retrieval system for the electronic records of universities and to compare its performance with an existing keyword-based retrieval system. We used OntoStudio 1.4 to implement the ontology-based retrieval system, and the test collection consisted of the following: (1) 5,099 electronic records of the 'personnel management notification' created by Korea Maritime University, (2) 20 topics (10 short topics and 10 long topics), and (3) relevance assessments conducted by a group of human experts. Ten university staff members participated in the keyword-based search experiment, using the same test collection as the ontology-based search experiment. The ontology-based retrieval system outperformed the keyword-based retrieval system in terms of recall and precision, and the same result held in both the short-topic and long-topic comparisons.
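
The reported comparison uses set-based precision and recall against expert relevance judgments, which compute as in this small sketch; the record IDs are hypothetical:

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall against expert relevance judgments."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical run: the system returns 4 records, 3 of which the experts
# marked relevant, out of 5 relevant records overall.
print(precision_recall(["r1", "r2", "r3", "r9"], ["r1", "r2", "r3", "r4", "r5"]))
# (0.75, 0.6)
```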

Retrieval Scheme of XML Documents Using Link Queries (링크 질의를 통한 XML 문서의 검색 기법)

  • Mun, Chan-Ho;Gang, Hyeon-Cheol
    • The KIPS Transactions:PartD
    • /
    • v.8D no.4
    • /
    • pp.313-326
    • /
    • 2001
  • XML, which was proposed as a next-generation standard for describing Web documents, is widely used in various Web-based applications, and XML documents on the Web link to each other through hyperlinks. Current work on XML focuses on storage systems that can efficiently store, manage, and retrieve XML documents; until now, however, little research has been conducted on query languages that support XML links or on retrieval systems that process them. In this paper, we propose an extension of an XML query language for expressing XML link queries, together with a processing scheme for it. A link query retrieves content from an XML document (the query document) and from the XML documents (referenced documents) referred to by links in the query document. As far as retrieval from referenced documents is concerned, current practice is to manually generate queries to obtain partial results and to repeat that procedure; the link query processing proposed in this paper eliminates this manual work altogether while producing the complete query result. Performance analysis shows that our link query processing strategy outperforms the conventional approach that includes the manual tasks, and query processing time decreases further as the number of links to referenced documents grows and as more referenced documents reside in the same site as the query document.
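
A toy sketch of what automated link-query evaluation replaces: run the same query against the query document and, recursively, against every document reachable through its XLink references. The in-memory document store and depth limit are illustrative assumptions, not the paper's query language:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

XLINK = "{http://www.w3.org/1999/xlink}href"

def collect_linked(load, url, query, depth=1, seen=None):
    """Evaluate an XPath query against a document and against every document
    it links to, automating the repeated manual sub-queries described above.
    `load` fetches and parses a URL into an Element."""
    seen = seen if seen is not None else set()
    if url in seen or depth < 0:
        return []
    seen.add(url)
    root = load(url)
    results = root.findall(query)
    for elem in root.iter():
        href = elem.get(XLINK)
        if href:
            results += collect_linked(load, urljoin(url, href), query, depth - 1, seen)
    return results

# Toy in-memory "web" of two linked XML documents (illustrative only).
docs = {
    "a.xml": '<doc xmlns:xlink="http://www.w3.org/1999/xlink">'
             '<title>A</title><ref xlink:href="b.xml"/></doc>',
    "b.xml": "<doc><title>B</title></doc>",
}
load = lambda url: ET.fromstring(docs[url])
print([e.text for e in collect_linked(load, "a.xml", ".//title")])  # ['A', 'B']
```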
