• Title/Summary/Keyword: inverted file

Use of CDS/ISIS in Information Processing (정보처리시(情報處理時)의 CDS/ISIS 이용(利用))

  • Seo, Jinn-Y
    • Journal of Information Management
    • /
    • v.23 no.2
    • /
    • pp.57-72
    • /
    • 1992
  • This paper looks at the definition and characteristics of information and data and introduces CDS/ISIS, one of the total systems for information processing. It explains the main functions, characteristics, and advantages of CDS/ISIS and, furthermore, seeks to promote the use of the functions of the current ISIS. A program supporting the Korean language is needed.

  • PDF

A Search Method for Components Based on XML Component Specification (XML 컴포넌트 명세서 기반의 컴포넌트 검색 기법)

  • Park, Seo-Young;Shin, Yoeng-Gil;Wu, Chi-Su
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.2
    • /
    • pp.180-192
    • /
    • 2000
  • Recently, component technology has played a main role in software reuse. It has changed code-based reuse into binary code-based reuse, because components can easily be combined into the software under development through component interfaces alone. Since components and component users have increased rapidly, users need to be able to search for the most appropriate components among the enormous number of components on the Internet. It is desirable to use web-document-typed specifications for component specifications on the Internet. This paper proposes using XML component specifications instead of HTML specifications, because it is impossible to represent the semantics of contexts using HTML. We also propose an XML context-search method based on XML component specifications. Component users use contexts for the component properties and terms for the values of component properties in their queries when searching for components. The index structure for the context-based search method is an inverted file indexing structure of term-context-component specification (a minimal sketch of this structure follows this entry). Not only the XML context-based search method but also a variety of search methods built on it, such as keyword search, faceted search, and browsing, are provided for the convenience of users. The search engine uses a 3-layer architecture, with an interface layer, a query expansion layer, and an XML search engine layer, for an efficient index scheme. In this paper, an XML DTD (Document Type Definition) for component specification is defined, and experimental results comparing the search performance of XML with that of HTML are discussed.

  • PDF
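Below is a minimal sketch of a term-context-component inverted index of the kind the abstract describes; it is not the authors' implementation, and the class name, the specification format, and the lowercasing rule are illustrative assumptions.

```python
from collections import defaultdict

# Sketch of a "term-context-component specification" inverted file:
# for each term, keep the contexts (component properties) in which it
# appears, and for each context the component specification IDs.
class ContextInvertedIndex:
    def __init__(self):
        # index[term][context] is the set of component specification IDs
        self.index = defaultdict(lambda: defaultdict(set))

    def add_specification(self, spec_id, spec):
        # spec: mapping of context (property name) -> list of term values,
        # e.g. {"interface": ["Stack", "push", "pop"], "platform": ["EJB"]}
        for context, terms in spec.items():
            for term in terms:
                self.index[term.lower()][context].add(spec_id)

    def context_search(self, term, context=None):
        postings = self.index.get(term.lower(), {})
        if context is None:                       # plain keyword search
            return set().union(*postings.values()) if postings else set()
        return set(postings.get(context, set()))  # context-restricted search

idx = ContextInvertedIndex()
idx.add_specification("comp-001", {"interface": ["Stack", "push"], "platform": ["EJB"]})
print(idx.context_search("stack", context="interface"))   # {'comp-001'}
```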

Cost-based Optimization of Extended Boolean Queries (확장 불리언 질의에 대한 비용 기반 최적화)

  • 박병권
    • Journal of the Korean Society for Information Management
    • /
    • v.18 no.3
    • /
    • pp.29-40
    • /
    • 2001
  • In this paper, we suggest a query optimization algorithm to select the optimal processing method for an extended Boolean query on inverted files. There can be many methods for processing an extended Boolean query, depending on the processing sequence of the keywords contained in the query. In this sense, the problem of optimizing an extended Boolean query is essentially that of optimizing the keyword sequence in the query. We show that the problem is basically analogous to that of finding the optimal join order in database query optimization, and we apply ideas from that area to the problem. We establish a cost model for processing an extended Boolean query and develop an algorithm to find the optimal keyword-processing sequence based on the concept of keyword rank, using keyword selectivity and the access costs of the inverted file (a minimal sketch of this idea follows this entry). We prove that the method selected by the optimization algorithm is indeed optimal and show, through experiments, that the optimal method is superior to the others in performance. We believe that the suggested optimization algorithm will contribute to a significant enhancement of information retrieval performance.

  • PDF
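The sketch below illustrates the general idea of ordering keywords by selectivity and access cost before processing a conjunctive query over an inverted file; the rank formula and the early-exit rule are assumptions for illustration, not the paper's exact cost model.

```python
# Choose a keyword-processing order for an AND-style query over an inverted
# file: process the most selective (shortest) postings lists first, by analogy
# with join ordering in database query optimization.

def keyword_rank(postings, total_docs, access_cost=1.0):
    # Illustrative rank: selectivity weighted by a per-list access cost.
    selectivity = len(postings) / total_docs
    return selectivity * access_cost

def process_and_query(inverted_file, keywords, total_docs):
    # Order keywords by increasing rank so intermediate results stay small.
    ordered = sorted(keywords,
                     key=lambda k: keyword_rank(inverted_file.get(k, []), total_docs))
    result = None
    for kw in ordered:
        postings = set(inverted_file.get(kw, []))
        result = postings if result is None else result & postings
        if not result:            # early exit once the intersection is empty
            break
    return result or set()

inverted_file = {"inverted": [1, 2, 3, 7], "file": [2, 3, 7, 9, 11], "query": [3, 7]}
print(process_and_query(inverted_file, ["inverted", "file", "query"], total_docs=20))
```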

Term Clustering and Duplicate Distribution for Efficient Parallel Information Retrieval (효율적인 병렬정보검색을 위한 색인어 군집화 및 분산저장 기법)

  • 강재호;양재완;정성원;류광렬;권혁철;정상화
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.129-139
    • /
    • 2003
  • The PC cluster architecture is considered a cost-effective alternative to existing supercomputers for realizing a high-performance information retrieval (IR) system. To implement an efficient IR system on a PC cluster, it is essential to achieve maximum parallelism by distributing the data appropriately to the local hard disks of the PCs so that the disk I/O and the subsequent computation are spread as evenly as possible across all the PCs. If the terms in the inverted index file can be classified into closely related clusters, parallelism can be maximized by distributing them to the PCs in an interleaved manner. One of the goals of this research is the development of methods for automatically clustering the terms based on the likelihood of the terms' co-occurrence in the same query. We also propose a method for duplicate distribution of inverted index records among the PCs to achieve fault tolerance as well as dynamic load balancing (a minimal distribution sketch follows this entry). Experiments with a large corpus revealed the efficiency and effectiveness of our method.
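A minimal sketch of interleaved, duplicated placement of clustered terms across nodes is shown below; the cluster contents, node count, and next-node backup rule are illustrative assumptions, and the paper's clustering is driven by query co-occurrence statistics rather than hand-made groups.

```python
# Distribute clustered index terms across PCs in an interleaved manner, with
# each term's postings duplicated on a second node for fault tolerance.

def distribute_terms(clusters, num_nodes):
    """clusters: list of lists of terms that tend to co-occur in queries."""
    placement = {}
    slot = 0
    for cluster in clusters:
        for term in cluster:                     # interleave within a cluster
            primary = slot % num_nodes
            backup = (primary + 1) % num_nodes   # simple duplicate placement
            placement[term] = (primary, backup)
            slot += 1
    return placement

clusters = [["korea", "seoul", "busan"], ["index", "inverted", "postings"]]
for term, nodes in distribute_terms(clusters, num_nodes=4).items():
    print(term, "-> primary node", nodes[0], ", backup node", nodes[1])
```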

Light weight architecture for acoustic scene classification (음향 장면 분류를 위한 경량화 모형 연구)

  • Lim, Soyoung;Kwak, Il-Youp
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.6
    • /
    • pp.979-993
    • /
    • 2021
  • Acoustic scene classification (ASC) categorizes an audio file based on the environment in which it was recorded. This has long been studied in the detection and classification of acoustic scenes and events (DCASE). In this study, we considered the constraint that ASC faces in real-world applications: the model used should have low complexity. We compared several models that apply light-weight techniques. First, a base CNN model was proposed using log mel-spectrogram, delta, and delta-delta features. Second, depthwise separable convolution and linear-bottleneck inverted residual blocks were applied to the convolutional layers, and quantization was applied to the models to develop a low-complexity model (a minimal sketch of these blocks follows this entry). The low-complexity model performed similarly to, or slightly below, the base model, but the model size was significantly reduced from 503 KB to 42.76 KB.
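For reference, here is a minimal PyTorch sketch of the two building blocks named in the abstract, a depthwise separable convolution and a MobileNetV2-style linear-bottleneck inverted residual block; the channel sizes, expansion factor, and activation choices are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Depthwise 3x3 convolution followed by a pointwise 1x1 convolution.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class InvertedResidual(nn.Module):
    # Expand -> depthwise conv -> project back with a linear bottleneck.
    def __init__(self, ch, expansion=4):
        super().__init__()
        hidden = ch * expansion
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(),
            nn.Conv2d(hidden, ch, 1, bias=False), nn.BatchNorm2d(ch),  # linear bottleneck
        )

    def forward(self, x):
        return x + self.block(x)   # residual connection (input/output shapes match)

x = torch.randn(1, 16, 64, 64)     # e.g. a batch of log mel-spectrogram feature maps
y = InvertedResidual(16)(DepthwiseSeparableConv(16, 16)(x))
print(y.shape)                     # torch.Size([1, 16, 64, 64])
```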

Development of an Automatic Hypertext Indexer for Dynamic Information Storage (동적 정보 저장을 위한 자동 하이퍼텍스트 색인 기법의 개발)

  • Yi, Dong-Ae;Jang, Duk-Sung
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.9
    • /
    • pp.2333-2341
    • /
    • 1997
  • The hyperlinks to related nodes should be changed when we insert or modify information in a hypertext database. We can find more information by means of hyperlinks that are based upon hypertext indexes. Therefore, the management of hypertext indexes is an important component of dynamic information storage. In this paper, we suggest a method to manage the hypertext indexes and to determine hyperlinks automatically by using a dynamic indexer (a minimal sketch follows this entry). We also construct index, stopword, and postposition dictionaries, an inverted index file, and a thesaurus to support the dynamic indexer.

  • PDF
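The sketch below shows one simple way such a dynamic indexer could work: each newly inserted node is posted to an inverted index after stopword removal, and candidate hyperlinks are proposed to existing nodes that share index terms. The stopword list, the shared-term linking rule, and all names are illustrative assumptions rather than the paper's method.

```python
from collections import defaultdict

STOPWORDS = {"the", "a", "of", "and", "to"}   # toy stopword dictionary

class DynamicHypertextIndexer:
    def __init__(self):
        self.inverted = defaultdict(set)   # term -> node ids
        self.links = defaultdict(set)      # node id -> linked node ids

    def add_node(self, node_id, text):
        terms = {w.lower() for w in text.split() if w.lower() not in STOPWORDS}
        for term in terms:
            for other in self.inverted[term]:
                if other != node_id:       # link nodes that share an index term
                    self.links[node_id].add(other)
                    self.links[other].add(node_id)
            self.inverted[term].add(node_id)

idx = DynamicHypertextIndexer()
idx.add_node("n1", "inverted index file structures")
idx.add_node("n2", "maintaining the inverted index dynamically")
print(idx.links["n2"])   # {'n1'} via the shared terms "inverted" and "index"
```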

Resolving the Ambiguities in Word Sense by Using an Automatic Keyword Network in Information Retrieval (정보검색에서의 어의 중의성 해소를 위한 자동 키워드망의 이용)

  • Kim, Jung-Sae;Jang, Duk-Sung
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.12
    • /
    • pp.3855-3865
    • /
    • 2000
  • Automatic indexing is a compulsory part of a text retrieval system. However, with automatic indexing alone it is impossible to guarantee that the appropriate texts are ranked at the top, and it is even more difficult to prevent inappropriate texts containing homonyms from being ranked at the top. In this paper, we propose a two-level retrieval system to enhance retrieval efficiency, in which an Automatic Keyword Network (AKN) is used in the second-level process. The first-level search is carried out with an inverted index file generated by automatic indexing, while the second-level search exploits the AKN based on the degree of association between terms (a minimal re-ranking sketch follows this entry). We have developed several formulas for rearranging the rank of texts in the second-level search and have evaluated their effect on resolving word sense ambiguities.

  • PDF
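A minimal sketch of the two-level idea is given below: a first-level inverted-file lookup retrieves candidates, and a second level re-scores them with association weights between the query term and each text's other keywords, standing in for the paper's Automatic Keyword Network. The scoring formula and all data here are illustrative assumptions.

```python
def first_level(inverted_file, term):
    # First level: plain inverted-file lookup of candidate texts.
    return list(inverted_file.get(term, []))

def second_level(candidates, doc_terms, query_term, association):
    # Second level: re-rank candidates by summed association degrees between
    # the query term and the other keywords of each text (illustrative rule).
    def score(doc_id):
        return sum(association.get((query_term, t), 0.0) for t in doc_terms[doc_id])
    return sorted(candidates, key=score, reverse=True)

inverted_file = {"bank": ["d1", "d2"]}
doc_terms = {"d1": {"river", "water"}, "d2": {"loan", "interest"}}
association = {("bank", "loan"): 0.9, ("bank", "interest"): 0.7, ("bank", "river"): 0.2}
print(second_level(first_level(inverted_file, "bank"), doc_terms, "bank", association))
# ['d2', 'd1'] -- the financial sense ranks higher under these association weights
```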

A Study on the Performance Evaluation of Semantic Retrieval Engines (시맨틱검색엔진의 성능평가에 관한 연구)

  • Noh, Young-Hee
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.22 no.2
    • /
    • pp.141-160
    • /
    • 2011
  • This study suggests a knowledge base and search engines for libraries that hold large-scale data. For this purpose, three kinds of knowledge bases (a triple ontology, a concept-based knowledge base, and an inverted file) were constructed, and three search engines (the rule-based reasoning engine Jena for the ontology, a concept-based search engine, and a keyword-based Lucene retrieval engine) were implemented to measure their performance. As a result, the concept-based retrieval engine showed the best performance, followed by the ontology-based Jena retrieval engine, and then by the normal keyword search engine.

A Comparison of Commands in Online Information Retrieval Systems (온라인 정보검색 시스템의 명령어비교)

  • 김정현
    • Journal of Korean Library and Information Science Society
    • /
    • v.15
    • /
    • pp.207-243
    • /
    • 1988
  • This study attempts to furnish helpful data for the maximum use of online information retrieval systems, based on a comparative analysis of the commands and operating terms employed by DIALOG, ORBIT, and BRS, the three largest information retrieval systems in the world. To begin with, the historical development, service contents, and retrieval functions of the three systems are reviewed in the second chapter, on the basis of which the concrete functions and contents of the commands and operating terms of the three systems are compared and analyzed against one another. Commands are explained as basic and special ones. The basic commands, subdivided into 44 items, are presented in the form of a diagram (Table 1). The special commands are first subdivided into search commands, output-related commands, and supplementary commands, and 14 items of these are comparatively analyzed and represented as a chart (Table 2). In addition, stand-alone commands employed by each system are also analyzed as a chart (Table 3). The operating terms other than commands are subdivided into 16 items and represented as a chart (Table 4). Password, inverted file, search step number in a search session, etc. are explained in considerable detail. A self-evident conclusion is that a thorough understanding of commands is essential to the maximum use of all the functions of each of the three systems.

  • PDF

A Study on the DB-IR Integration: Per-Document Basis Online Index Maintenance

  • Jin, Du-Seok;Jung, Hoe-Kyung
    • Journal of information and communication convergence engineering
    • /
    • v.7 no.3
    • /
    • pp.275-280
    • /
    • 2009
  • While database (DB) and information retrieval (IR) systems have been developed independently, there are emerging requirements that both data management and efficient text retrieval be supported simultaneously in information systems such as health care, customer support, XML data management, and digital libraries. The great divide between DB and IR has led to different approaches to index maintenance for newly arriving documents. While DB has extended its SQL layer to cope with text fields, for lack of an intact mechanism to build an IR-like index, IR usually treats a block of new documents as the logical unit of index maintenance, since it has no concept of integrity constraints. In DB-IR integration, however, a transaction that adds or updates a document should include maintenance of the posting lists associated with that document. Although DB-IR integration has begun to bud as a research field, the issue will remain a difficult and rewarding area for a while; one of the primary reasons is the lack of efficient online transactional index maintenance. In this paper, the performance of a few strategies for per-document-basis transactional index maintenance, namely direct index update, pulsing auxiliary index, and posting segmentation index, is evaluated (a minimal sketch of the pulsing auxiliary idea follows this entry). The results show that the pulsing auxiliary strategy and the posting segmentation indexing scheme can be promising candidates for text field indexing in DB-IR integration.
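Below is a minimal sketch of the pulsing auxiliary idea as described above: each per-document update goes into a small in-memory auxiliary index, which is merged ("pulsed") into the main index once it grows past a threshold, and queries consult both. The threshold, the in-memory stand-in for the on-disk index, and the merge policy are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

class PulsingIndex:
    def __init__(self, pulse_threshold=1000):
        self.main = defaultdict(list)        # stands in for the on-disk index
        self.auxiliary = defaultdict(list)   # small in-memory auxiliary index
        self.pulse_threshold = pulse_threshold
        self.aux_postings = 0

    def add_document(self, doc_id, terms):
        # Per-document maintenance unit: post all terms of one document.
        for term in terms:
            self.auxiliary[term].append(doc_id)
            self.aux_postings += 1
        if self.aux_postings >= self.pulse_threshold:
            self._pulse()

    def _pulse(self):
        # Merge the auxiliary index into the main index and reset it.
        for term, postings in self.auxiliary.items():
            self.main[term].extend(postings)
        self.auxiliary.clear()
        self.aux_postings = 0

    def search(self, term):
        # A query must consult both indexes to see not-yet-merged documents.
        return self.main.get(term, []) + self.auxiliary.get(term, [])

idx = PulsingIndex(pulse_threshold=4)
idx.add_document("d1", ["inverted", "file"])
idx.add_document("d2", ["inverted", "index"])
print(idx.search("inverted"))   # ['d1', 'd2'] even though a pulse just occurred
```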