• Title/Summary/Keyword: Document information retrieval

Search Results: 410

A Document Summary System based on Personalized Web Search Systems (개인화 웹 검색 시스템 기반의 문서 요약 시스템)

  • Kim, Dong-Wook;Kang, Soo-Yong;Kim, Han-Joon;Lee, Byung-Jeong;Chang, Jae-Young
    • Journal of Digital Contents Society
    • /
    • v.11 no.3
    • /
    • pp.357-365
    • /
    • 2010
  • A personalized web search engine provides personalized results to users through query expansion, re-ranking, or other methods that represent the user's intention. The personalized result page includes the URL, page title, and a small text fragment of each web document, known as a snippet. The snippet is a summary of the document that includes the keywords issued by either the user or the search engine itself, so users can easily verify the relevance of the whole document from the snippet alone. The document summary (snippet) is important information that lets users decide whether or not to click through to the whole document. Hence, if a search engine generates personalized document summaries, it can provide more satisfactory search results. In this paper, we propose a personalized document summary system for personalized web search engines. The proposed system increases user satisfaction with only marginal overhead.
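The idea of a query-biased snippet can be illustrated with a small sketch. This is not the paper's personalized summarizer, only a toy that extracts text windows around query (or profile) keywords; the function name and parameters are our own.

```python
import re

def query_biased_snippet(text, query_terms, window=20, max_words=40):
    """Pick text windows that contain query (or profile) terms.

    A toy illustration of query-biased snippet generation; the paper's
    personalized summarizer is not described at this level of detail.
    """
    terms = {t.lower() for t in query_terms}
    words = text.split()
    hits = [i for i, w in enumerate(words) if re.sub(r"\W", "", w).lower() in terms]
    if not hits:
        return " ".join(words[:max_words])          # fallback: leading text
    pieces, used = [], 0
    for i in hits:
        lo, hi = max(0, i - window // 2), min(len(words), i + window // 2)
        pieces.append(" ".join(words[lo:hi]))
        used += hi - lo
        if used >= max_words:
            break
    return " ... ".join(pieces)

print(query_biased_snippet(
    "Personalized search re-ranks results using the user profile. "
    "The snippet summarizes each document around the matched keywords.",
    ["snippet", "profile"]))
```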

An Automatic Classification System of Korean Documents Using Weight for Keywords of Document and Word Cluster (문서의 주제어별 가중치 부여와 단어 군집을 이용한 한국어 문서 자동 분류 시스템)

  • Hur, Jun-Hui;Choi, Jun-Hyeog;Lee, Jung-Hyun;Kim, Joong-Bae;Rim, Kee-Wook
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.447-454
    • /
    • 2001
  • Automatic document classification is a method that assigns unlabeled documents to existing classes. It can be applied to the classification of newsgroup articles and web documents, and to producing more precise information retrieval results by learning from users. In this paper, we use a weighted Bayesian classifier that gives extra weight to the keywords of a document to improve classification accuracy. If the system cannot classify a document properly because the document has too few words to serve as features, it uses related word clusters to supplement the document's features. The clusters are built by automatic word clustering over the corpus. As a result, the proposed system outperformed existing classification systems in classification accuracy on Korean documents (a minimal sketch of this scheme follows this entry).

  • PDF
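A minimal sketch of the two ideas in the abstract, assuming a multinomial naive Bayes classifier with uniform class priors, a simple per-keyword boost, and a word-cluster back-off when a document has too few known features. The actual weighting and clustering schemes are not specified in the abstract.

```python
import math
from collections import defaultdict

def train(docs):
    """docs: list of (label, tokens). Returns per-class token counts, totals, vocabulary."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    vocab = set()
    for label, tokens in docs:
        for t in tokens:
            counts[label][t] += 1
            totals[label] += 1
            vocab.add(t)
    return counts, totals, vocab

def classify(tokens, counts, totals, vocab, keywords, boost=2.0, clusters=None):
    """Weighted naive Bayes: tokens in `keywords` count `boost` times (class priors
    assumed uniform). If the document has too few known tokens, expand it with
    cluster mates, mirroring the paper's word-cluster back-off."""
    known = [t for t in tokens if t in vocab]
    if clusters is not None and len(known) < 3:
        tokens = tokens + [m for t in tokens for m in clusters.get(t, [])]
    best, best_score = None, float("-inf")
    for label in counts:
        score = 0.0
        for t in tokens:
            w = boost if t in keywords else 1.0
            p = (counts[label][t] + 1) / (totals[label] + len(vocab))  # Laplace smoothing
            score += w * math.log(p)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [("sports", ["goal", "match", "league"]), ("tech", ["xml", "database", "index"])]
counts, totals, vocab = train(docs)
print(classify(["xml", "retrieval"], counts, totals, vocab, keywords={"xml"},
               clusters={"retrieval": ["index"]}))
```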

Conceptual Object Grouping for Multimedia Document Management

  • Lee, Chong-Deuk;Jeong, Taeg-Won
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.9 no.3
    • /
    • pp.161-165
    • /
    • 2009
  • The increase of multimedia information on the Web requires a new method to manage and serve multimedia documents efficiently. This paper proposes a conceptual object grouping method based on fuzzy filtering that is constructed automatically as the number of multimedia documents grows. The proposed method automatically composes subsumption relations between conceptual objects by fuzzy filtering of the document objects extracted from each domain. The grouping of such conceptual objects is treated as a subsumption relation decided by a $\mu$-cut. This paper proposes the $\mu$-cut, FAS (Fuzzy Average Similarity), and DSR (Direct Subsumption Relation) to drive the fuzzy filtering, which groups related document objects easily. About 1,000 conceptual objects were used in the performance test of the proposed method. The simulation results showed that the proposed method had better retrieval performance than OGM (Optimistic Genealogy Method) and BGM (Balanced Genealogy Method).
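A minimal sketch of $\mu$-cut grouping, assuming Jaccard similarity as a stand-in for the paper's FAS measure (which is not defined in the abstract); DSR is omitted. Each object is attached to the first existing group whose average similarity exceeds the $\mu$ threshold.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def fuzzy_average_similarity(obj, group):
    """Average similarity between one object and the members of a group
    (a stand-in for FAS; the paper's exact measure is not given here)."""
    return sum(jaccard(obj, g) for g in group) / len(group)

def mu_cut_group(objects, mu=0.4):
    """Greedy grouping: attach an object to the first group whose average
    similarity exceeds the mu-cut, otherwise start a new group."""
    groups = []
    for obj in objects:
        for group in groups:
            if fuzzy_average_similarity(obj, group) >= mu:
                group.append(obj)
                break
        else:
            groups.append([obj])
    return groups

objects = [["video", "sports"], ["video", "soccer"], ["image", "painting"]]
print(mu_cut_group(objects))
```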

Question Analysis and Expansion based on Semantics (의미 기반의 질의 분석 및 확장)

  • Shin, Seung-Eun;Park, Hee-Guen;Seo, Young-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.7
    • /
    • pp.50-59
    • /
    • 2007
  • This paper describes semantics-based question analysis and expansion for efficient information retrieval. The results of all information retrieval systems include many non-relevant documents because the index cannot naturally reflect the contents of documents and because the queries used in information retrieval systems cannot represent enough of the information in the user's question. To solve this problem, we analyze the user's question semantically, determine the answer type, and extract semantic features. We then expand the user's question using them, together with the syntactic structures that are used to express the answer. Our approach is to rank documents that contain the expanded queries in high positions. In particular, we found that efficient document retrieval is possible through semantics-based question analysis and expansion on natural language questions that are comparatively short but fully express the user's information need.
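A toy sketch of answer-type detection and query expansion. The tiny rule and synonym tables below are hypothetical stand-ins; the paper's system derives answer types and semantic features linguistically rather than from a hand-made lookup.

```python
# Illustrative only: hand-made tables stand in for the paper's semantic analysis.
ANSWER_TYPE_RULES = {"who": "PERSON", "when": "DATE", "where": "LOCATION"}
SYNONYMS = {"movie": ["film"], "director": ["filmmaker"]}

def analyze_and_expand(question):
    """Determine a coarse answer type and expand content terms with synonyms."""
    tokens = question.lower().rstrip("?").split()
    answer_type = next((t for w, t in ANSWER_TYPE_RULES.items() if w in tokens), "OTHER")
    content = [t for t in tokens if t not in ANSWER_TYPE_RULES]
    expanded = set(content)
    for t in content:
        expanded.update(SYNONYMS.get(t, []))
    return answer_type, sorted(expanded)

print(analyze_and_expand("Who directed the movie?"))
```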

A Development of the Test Set for Estimating the Retrieval Performance of an Automatic Indexer (자동색인기 성능시험을 위한 Test Set 개발)

  • 김성혁;서은경;이원규;김명철;김영환;김재군
    • Journal of the Korean Society for information Management
    • /
    • v.11 no.1
    • /
    • pp.81-102
    • /
    • 1994
  • With the development of various information retrieval systems suitable for Korean databases, many researchers have realized the need for a test collection that can be readily used for evaluating a retrieval system. Therefore, this study developed a test set that helps objectively evaluate the retrieval performance of a Hangul automatic indexer or a Korean information retrieval system. The developed test set consists of four files: 1) Korean Document Set (*.all); 2) Natural Language Query Set (KTset.nq1); 3) Boolean Query Set (KTset.bq1); 4) Query-Relevance Judgment Set (KTset.rel). A sketch of how such a collection can drive evaluation follows this entry.

  • PDF
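A minimal evaluation sketch using a relevance-judgment file of the kind described above. The "query_id doc_id" line layout assumed in `load_judgments` is hypothetical; the abstract does not specify the actual KTSET file formats.

```python
def precision_recall(retrieved, relevant):
    """Per-query precision and recall given retrieved and judged-relevant doc ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def load_judgments(path):
    """Assumed layout: one 'query_id doc_id' pair per line (hypothetical format)."""
    rel = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            qid, did = line.split()[:2]
            rel.setdefault(qid, set()).add(did)
    return rel

print(precision_recall(retrieved=["d1", "d2", "d3"], relevant=["d2", "d4"]))
```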

Service-centric Object Fragmentation Model for Efficient Retrieval and Management of Huge XML Documents (대용량 XML 문서의 효율적인 검색과 관리를 위한 SCOF 모델)

  • Jeong, Chang-Hoo;Choi, Yun-Soo;Jin, Du-Seok;Kim, Jin-Suk;Yoon, Hwa-Mook
    • Journal of Internet Computing and Services
    • /
    • v.9 no.1
    • /
    • pp.103-113
    • /
    • 2008
  • The vast amount of XML documents raises interest in how they will be used and how far their usage can be expanded. This paper has two central goals: 1) easy and fast retrieval of XML documents or relevant elements; and 2) efficient and stable management of large XML documents. The keys to developing such a practical system are how to segment a large XML document into smaller fragments and how to store them. To achieve these goals, we designed the SCOF (Service-centric Object Fragmentation) model, a semi-decomposition method based on conversion rules provided by XML database managers. Keyword-based search using the SCOF model then retrieves specific elements or attributes of XML documents, just as a typical XML query language does. Even though this approach depends on the judgment of the managers of the XML document collection, the SCOF model makes both retrieval and management of massive XML documents efficient. A fragmentation sketch follows this entry.

  • PDF
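A minimal fragmentation sketch. A fixed depth rule stands in for SCOF's manager-provided conversion rules, which the abstract does not spell out: the document is split into element-level fragments keyed by their path.

```python
import xml.etree.ElementTree as ET

def fragment(xml_text, max_depth=1):
    """Split an XML document into fragments rooted at elements at max_depth.
    A simple depth rule stands in for SCOF's manager-provided conversion rules."""
    root = ET.fromstring(xml_text)
    fragments = {}

    def walk(elem, path, depth):
        if depth == max_depth:
            fragments[path] = ET.tostring(elem, encoding="unicode")
            return
        for i, child in enumerate(elem):
            walk(child, f"{path}/{child.tag}[{i}]", depth + 1)

    walk(root, root.tag, 0)
    return fragments

doc = "<book><chapter><title>XML</title></chapter><chapter><title>IR</title></chapter></book>"
for path, frag in fragment(doc).items():
    print(path, "->", frag)
```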

A Knowledge Service Using Automatic Document Sharing based on Intelligent OMDR (지능형 OMDR 기반의 자동 문서 공유 에이전트를 이용한 지식서비스)

  • Su-Kyoung Kim;Kee-Hong Ahn
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2008.11a
    • /
    • pp.747-750
    • /
    • 2008
  • This study aims at the overall application and use of semantic web technologies, such as ontologies, natural language processing, and metadata, for semantic web applications. To this end, OWL-based domain ontologies for each knowledge topic of an organization or institution are built and interconnected, as an MDR, with the existing WordNet, Dublin Core metadata, and the schemas of the databases defined in the organization, providing a way to combine the ontology's intelligent inference and rule services with standardized metadata. This is of high research value for the reuse and alignment of existing ontologies and metadata. In addition, when a user in the organization writes a document, natural language processing and ontology techniques are used to automatically suggest appropriate terms and metadata for its content, improving the sharing and reusability of the document. The written document is stored in an XML-based intelligent document database, and a Document Registry and Retrieval Agent provides it to users who need to write or use similar documents, minimizing the privatization of document knowledge and reducing the time and cost needed to rewrite similar documents or to produce particular documents. Furthermore, if documents can also be retrieved and used through the registry and retrieval agent on the Web or on personal mobile devices such as PDAs, a practical ubiquitous and semantic web service available anytime and anywhere could be realized.
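A small sketch of the kind of metadata-tagged record such an XML-based document database might hold. The helper below is hypothetical; only the use of Dublin Core fields and XML storage is taken from the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical helper: wrap a document and a few Dublin Core fields into an
# XML record of the kind the described document database might store.
DC_NS = "http://purl.org/dc/elements/1.1/"

def to_dc_record(doc_id, title, creator, subject_terms, body):
    ET.register_namespace("dc", DC_NS)
    record = ET.Element("record", {"id": doc_id})
    for tag, value in (("title", title), ("creator", creator)):
        ET.SubElement(record, f"{{{DC_NS}}}{tag}").text = value
    for term in subject_terms:                 # e.g. terms suggested by the ontology
        ET.SubElement(record, f"{{{DC_NS}}}subject").text = term
    ET.SubElement(record, "body").text = body
    return ET.tostring(record, encoding="unicode")

print(to_dc_record("doc-001", "Quarterly report", "S. Kim",
                   ["logistics", "knowledge sharing"], "..."))
```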

Design and Implementation of XML Documents Storage and Retrieval System based on Object-Relational Database (객체관계형 데이터베이스에 기반한 XML 문서 저장 및 검색 시스템의 설계 및 구현)

  • 이성대;곽용원;박휴찬
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.2
    • /
    • pp.183-193
    • /
    • 2003
  • XML has emerged as the Internet standard for information exchange among e-businesses and applications. It therefore becomes necessary to store XML documents in a database for efficient management. This paper describes the design and implementation of an XML document storage and retrieval system based on an object-relational database. The storage method first decomposes an XML document into elements and then stores them according to element type. The system also supports various search methods to retrieve XML documents from the database.
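A minimal shredding sketch: one row per element with a pointer to its parent, stored in SQLite for illustration. The schema is an assumption; the paper's actual object-relational design is not given in the abstract.

```python
import sqlite3
import xml.etree.ElementTree as ET

def store(xml_text, conn):
    """Decompose an XML document into element rows (tag, text, parent pointer)."""
    conn.execute("""CREATE TABLE IF NOT EXISTS element
                    (id INTEGER PRIMARY KEY, parent INTEGER, tag TEXT, text TEXT)""")
    root = ET.fromstring(xml_text)

    def insert(elem, parent_id):
        cur = conn.execute("INSERT INTO element(parent, tag, text) VALUES (?, ?, ?)",
                           (parent_id, elem.tag, (elem.text or "").strip()))
        for child in elem:
            insert(child, cur.lastrowid)

    insert(root, None)
    conn.commit()

conn = sqlite3.connect(":memory:")
store("<paper><title>XML storage</title><year>2003</year></paper>", conn)
print(conn.execute("SELECT tag, text FROM element WHERE tag = 'title'").fetchall())
```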

Design and Implementation of an XML Document Management System Based on $O_2$ ($O_2$기반의 XML 문서관리 시스템 설계 및 구현)

  • 유재수
    • The Journal of Information Technology and Database
    • /
    • v.7 no.1
    • /
    • pp.27-39
    • /
    • 2000
  • In this paper, we design and implement an XML management system based on an OODBMS that supports structured information retrieval of XML documents. We also propose an object-oriented model to store and fetch XML documents, manage image data, and support versioning for the XML document management system (XMS). The XMS consists of a repository manager that maintains the interfaces for external application programs, an XML instance storage manager that stores XML documents in the database, an XML instance manager that fetches XML documents stored in the database, an XML index manager that creates indexes for the structure information and the contents of documents, and a query processor that processes various queries. A component outline follows this entry.

  • PDF
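An interface-only outline of the five XMS components named in the abstract. All method names are hypothetical; the paper's actual APIs are not given.

```python
class XMLInstanceStorageManager:
    def store(self, document): ...          # persist an XML document in the database

class XMLInstanceManager:
    def fetch(self, doc_id): ...            # reassemble a stored XML document

class XMLIndexManager:
    def index(self, document): ...          # index structure information and content

class QueryProcessor:
    def run(self, query): ...               # evaluate structure/content queries

class RepositoryManager:
    """Facade exposed to external application programs."""
    def __init__(self):
        self.storage = XMLInstanceStorageManager()
        self.instances = XMLInstanceManager()
        self.index = XMLIndexManager()
        self.queries = QueryProcessor()
```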

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful to them. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming, requiring a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of keywords assigned by the authors can be found in the full text of an article. Inversely, this means that 10% to 36% of the keywords assigned by the authors do not appear in the article and cannot be generated through keyword extraction algorithms. Our preliminary experimental result also shows that 37% of keywords assigned by the authors are not included in the full text. This is why we decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment named IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers, and it has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
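The five-step process lends itself to a small sketch: build a term-frequency vector for the target document, hold one weight vector per keyword set, and rank keyword sets by cosine similarity. The keyword-set weights and the whitespace tokenization below are simplifying assumptions, not the paper's exact preprocessing.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term->weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = (math.sqrt(sum(w * w for w in a.values()))
            * math.sqrt(sum(w * w for w in b.values())))
    return dot / norm if norm else 0.0

def assign_keywords(document_text, keyword_sets, top_n=5):
    """keyword_sets: {keyword: {term: weight}} built from training documents.
    Steps (2)-(5) of the abstract: parse the document, build its TF vector,
    score every keyword set by cosine similarity, return the best matches."""
    doc_vec = Counter(document_text.lower().split())   # term-frequency vector
    scored = [(kw, cosine(vec, doc_vec)) for kw, vec in keyword_sets.items()]
    scored.sort(key=lambda x: x[1], reverse=True)
    return [kw for kw, score in scored[:top_n] if score > 0]

keyword_sets = {
    "information retrieval": {"query": 2.0, "ranking": 1.5, "retrieval": 3.0},
    "logistics": {"shipping": 2.5, "port": 2.0, "distribution": 1.0},
}
print(assign_keywords("query ranking and retrieval of shipping documents",
                      keyword_sets, top_n=1))
```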