• Title/Summary/Keyword: retrieval effectiveness

A Buffer Management Scheme to Maximize the Utilization of System Resources for Variable Bit Rate Video-On-Demand Servers (가변 비트율 주문형 비디오 서버에서 자원 활용률을 높이기 위한 버퍼 관리 기법)

  • Kim Soon-Cheol
    • Journal of Korea Society of Industrial Information Systems / v.9 no.3 / pp.1-10 / 2004
  • Video-On-Demand servers use compression techniques to reduce storage and bandwidth requirements. These techniques make the bit rate of compressed video data vary significantly from frame to frame, so Video-On-Demand servers that retrieve data at a constant bit rate cannot maximize resource utilization. Because variable bit rate video data is stored in advance, an accurate description of its bit rate changes can be computed a priori. In this paper, I propose a buffer management scheme called MAX for Video-On-Demand servers handling variable bit rate continuous media. By caching and prefetching data, the MAX buffer management scheme reduces the variation of the compressed data stream, increases the number of clients that can be served simultaneously, and maximizes the utilization of system resources. Results of trace-driven simulations show the effectiveness of the scheme.
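
The following is a minimal sketch of the general idea this abstract describes, prefetching variable-bit-rate data into a buffer to smooth disk bandwidth demand; the per-round demands, buffer size, and smoothing policy are hypothetical assumptions, not the MAX scheme itself.

```python
# Sketch: smoothing a variable-bit-rate stream by prefetching into a fixed buffer.
# This is NOT the paper's MAX scheme, only an illustration of the general idea:
# when a round needs less than the smoothed rate, read ahead and cache the surplus;
# when it needs more, serve the excess from the buffer instead of the disk.

def smoothed_schedule(frame_sizes, buffer_capacity):
    """Return per-round disk reads for a constant target rate, using a prefetch buffer."""
    target = sum(frame_sizes) / len(frame_sizes)   # constant smoothed rate
    buffered = 0.0
    disk_reads = []
    for need in frame_sizes:
        if need <= target:
            # prefetch ahead with the spare bandwidth, up to the buffer capacity
            prefetch = min(target - need, buffer_capacity - buffered)
            buffered += prefetch
            disk_reads.append(need + prefetch)
        else:
            # serve as much of the excess as possible from the buffer
            from_buffer = min(need - target, buffered)
            buffered -= from_buffer
            disk_reads.append(need - from_buffer)
    return disk_reads

sizes = [3.1, 0.9, 0.7, 4.2, 1.0, 0.8, 3.9, 1.1]   # hypothetical per-round demands (Mbit)
reads = smoothed_schedule(sizes, buffer_capacity=2.0)
print("peak demand:", max(sizes), "-> peak disk read:", round(max(reads), 2))
```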

A New Semantic Distance Measurement Method using TF-IDF in Linked Open Data (링크드 오픈 데이터에서 TF-IDF를 이용한 새로운 시맨틱 거리 측정 기법)

  • Cho, Jung-Gil
    • Journal of the Korea Convergence Society / v.11 no.10 / pp.89-96 / 2020
  • Linked Data allows structured data to be published in a standard way so that datasets from various domains can be interlinked. With the rapid evolution of Linked Open Data (LOD), researchers are exploiting it to solve particular problems such as semantic similarity assessment. In this paper, we propose a method, built on the basic concept of Linked Data Semantic Distance (LDSD), for calculating the semantic distance between resources that can be used in LOD-based recommender systems. The proposed semantic distance model combines the LOD-based semantic distance with a new link weight based on TF-IDF, which is well known in the field of information retrieval. To verify the effectiveness of this approach, performance was evaluated in the context of an LOD-based recommender system using mixed data from DBpedia and MovieLens. Experimental results show that the proposed method achieves higher accuracy than other similar methods. In addition, it improves the accuracy of the recommender system by expanding the range of semantic distance calculation.
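
A minimal sketch of the general idea of weighting shared links between LOD resources with IDF-style (TF-IDF-inspired) link weights; the data and the distance formula are illustrative assumptions, not the paper's exact LDSD extension.

```python
# Sketch: a simplified link-based semantic distance with TF-IDF-style link weights.
# Not the paper's formulation, only an illustration: links through rare (high-IDF)
# property/object pairs count more than links through very common ones.

import math
from collections import Counter

def idf_weights(resource_links, n_resources):
    """IDF for each (property, object) link: rare links are more informative."""
    df = Counter(link for links in resource_links.values() for link in set(links))
    return {link: math.log(n_resources / df[link]) for link in df}

def semantic_distance(r1, r2, resource_links):
    """Distance in [0, 1]; 0 means the two resources share all (weighted) links."""
    idf = idf_weights(resource_links, len(resource_links))
    links1, links2 = set(resource_links[r1]), set(resource_links[r2])
    shared = sum(idf[l] for l in links1 & links2)
    total = sum(idf[l] for l in links1 | links2)
    return 1.0 - (shared / total if total else 0.0)

# Hypothetical LOD-style data: resource -> list of (property, object) links
links = {
    "film/Inception":    [("director", "Nolan"), ("genre", "SciFi"), ("country", "USA")],
    "film/Interstellar": [("director", "Nolan"), ("genre", "SciFi"), ("country", "USA")],
    "film/Amelie":       [("director", "Jeunet"), ("genre", "Comedy"), ("country", "France")],
}
print(semantic_distance("film/Inception", "film/Interstellar", links))  # small distance
print(semantic_distance("film/Inception", "film/Amelie", links))        # large distance
```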

Name Disambiguation using Cycle Detection Algorithm Based on Social Networks (사회망 기반 순환 탐지 기법을 이용한 저자명 명확화 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Jeong, Ha-Na;Choi, Joong-Min
    • Journal of KIISE:Software and Applications / v.36 no.4 / pp.306-319 / 2009
  • A name is a key feature for distinguishing people, but name-based identification often fails because an author may use multiple names or multiple authors may share the same name. Such name ambiguity problems degrade the performance of document retrieval, web search and database integration. In particular, bibliographic data may contain many errors, since different authors can have the same name and an author's name may be misspelled or abbreviated. To solve these problems, the names entered into the database must be disambiguated. In this paper, we propose a method that resolves name ambiguity by using social networks constructed from the relations between authors. We evaluated the effectiveness of the proposed system on DBLP data, which provides computer science bibliographic information.
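
A minimal sketch of the underlying intuition, assuming a toy co-authorship network: two records with the same printed name are merged when their co-authors are connected, i.e. when the records close a cycle in the social network. The actual cycle-detection criterion in the paper is more elaborate.

```python
# Sketch: grouping ambiguous author records via a co-authorship network.
# Not the paper's algorithm; it only illustrates the idea that two records with the
# same name likely denote one person if their co-authors are connected.

from collections import defaultdict, deque

def build_coauthor_graph(papers):
    """papers: list of author-record lists, e.g. [["J. Kim#1", "H. Lee"], ...]."""
    graph = defaultdict(set)
    for authors in papers:
        for a in authors:
            for b in authors:
                if a != b:
                    graph[a].add(b)
    return graph

def connected(graph, start, goal, excluded):
    """BFS path search that may not pass through the excluded (ambiguous) records."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node]:
            if nxt not in seen and nxt not in excluded:
                seen.add(nxt)
                queue.append(nxt)
    return False

def same_person(record1, record2, graph):
    """Two ambiguous records are merged if their co-author neighborhoods are linked."""
    excluded = {record1, record2}
    return any(connected(graph, n1, n2, excluded)
               for n1 in graph[record1] for n2 in graph[record2])

# Hypothetical records: "J. Kim#1" and "J. Kim#2" share the same printed name.
papers = [["J. Kim#1", "H. Lee"], ["H. Lee", "S. Park"], ["J. Kim#2", "S. Park"]]
g = build_coauthor_graph(papers)
print(same_person("J. Kim#1", "J. Kim#2", g))  # True: the two records close a cycle
```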

An Evaluation of an Information Sharing Workflow Using Data Provenance Semantics (데이터 생성의미를 활용한 정보공유구조의 효과성 비교 연구)

  • Lee, Choon Yeul
    • Journal of Digital Convergence / v.11 no.6 / pp.175-185 / 2013
  • For effective information sharing, data provenance semantics need to be managed effectively. Based on a scheme for representing data provenance semantics, we propose a model to calculate information sharing costs. Information sharing costs are derived from the probabilities of type I and type II errors that occur in organizational information sharing, the costs related to these errors, and the information sharing distances between organizational units, which are determined by the information sharing workflow. We apply the model to various types of information sharing workflows, including departmental information systems, hierarchical information systems, a hub, and a stand-alone system. The calculated costs show that a hub with data standardization is best for information sharing; without standardization, however, its cost deteriorates to that of a departmental information system. Any information sharing workflow is better than a stand-alone system. The model thus proves useful for analyzing the effectiveness and characteristics of information sharing workflows.
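
A small illustrative calculation using the ingredients the abstract names (type I/II error probabilities, their costs, and workflow-dependent sharing distances), with hypothetical numbers; the paper's own cost model is more detailed.

```python
# Sketch: an expected-cost comparison of information sharing workflows.
# All probabilities, costs, and distances below are hypothetical; the paper derives
# its own model from data provenance semantics.

def sharing_cost(p_type1, p_type2, cost_type1, cost_type2, distance):
    """Expected cost of one sharing step, growing with the workflow distance."""
    return distance * (p_type1 * cost_type1 + p_type2 * cost_type2)

workflows = {
    # workflow: error probabilities per hop and average distance between units
    "hub_with_standardization": {"p1": 0.01, "p2": 0.02, "distance": 1},
    "departmental_systems":     {"p1": 0.05, "p2": 0.08, "distance": 2},
    "stand_alone":              {"p1": 0.20, "p2": 0.25, "distance": 3},
}

for name, w in workflows.items():
    c = sharing_cost(w["p1"], w["p2"], cost_type1=100, cost_type2=300, distance=w["distance"])
    print(f"{name:28s} expected cost = {c:.1f}")
```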

Design and Implementation of Contents based on XML for Efficient e-Learning System (e-Learning 시스템을 위한 XML기반 효율적인 교육 컨텐츠의 설계 및 구현)

  • Kim, Young-Gi;Han, Sun-Gwan
    • Journal of The Korean Association of Information Education / v.5 no.2 / pp.279-287 / 2001
  • In this paper, we define and design the structure of standardized XML content for supplying efficient e-Learning content. We also implement a prototype XML content generator so that educational content can be created easily. In addition, we suggest a content searching method using Case-Based Reasoning and a Bayesian belief network to supply XML content suited to a learner's request. Existing HTML-based e-Learning systems could be neither customized nor standardized, whereas XML content can be reused and supports intelligent learning by supplying adaptive content according to the learner's level. To evaluate the efficiency of the designed XML content, we build standard XML content for learning Java programming in an e-Learning system and discuss the integrity and extensibility of the educational content. Finally, we show the architecture and effectiveness of the knowledge-based XML content retrieval manager.
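
A minimal sketch of what a standardized XML learning-content record might look like and how a system could select level-appropriate content from it; the element names and attributes are hypothetical, not the schema defined in the paper.

```python
# Sketch: a made-up standardized XML learning-content record and a simple check that
# picks content matching a learner's subject and level. Not the paper's schema.

import xml.etree.ElementTree as ET

CONTENT = """
<content id="java-101" subject="Java" level="beginner">
  <title>Variables and Types</title>
  <objective>Declare variables and choose primitive types.</objective>
  <body>A variable is a named storage location ...</body>
  <quiz answer="int">Which type stores a whole number?</quiz>
</content>
"""

def matches_learner(xml_text, subject, level):
    """Return the title if the content fits the learner's subject and level."""
    root = ET.fromstring(xml_text)
    if root.get("subject") == subject and root.get("level") == level:
        return root.findtext("title")
    return None

print(matches_learner(CONTENT, subject="Java", level="beginner"))  # Variables and Types
```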

Classification of Brain Magnetic Resonance Images using 2 Level Decision Tree Learning (2 단계 결정트리 학습을 이용한 뇌 자기공명영상 분류)

  • Kim, Hyung-Il;Kim, Yong-Uk
    • Journal of KIISE:Software and Applications / v.34 no.1 / pp.18-29 / 2007
  • In this paper we present a system that classifies brain MR images by using two-level decision tree learning. Two kinds of information can be obtained from images. One is low-level features such as size, color, texture, and contour that can be acquired directly from the raw images; the other is high-level features such as the existence of certain objects or the spatial relations between different parts, which must be obtained by interpreting segmented images. Learning and classification should be performed on the high-level features in order to classify images according to their semantic meaning. The proposed system applies decision tree learning to each level separately, and the high-level features are synthesized from the results of the low-level classification. Experimental results on a set of brain MR images with tumors are discussed, and several experiments demonstrating the effectiveness of the proposed system are presented.
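
A minimal sketch of a two-level decision-tree pipeline in the spirit of the abstract, using scikit-learn with made-up features: level 1 maps low-level region features to a high-level label, and level 2 classifies the image from features synthesized out of the level-1 results. It is not the paper's system.

```python
# Sketch: two-level decision-tree learning. All feature names, values, and labels
# below are hypothetical.

from sklearn.tree import DecisionTreeClassifier

# Level 1: low-level features per segmented region -> high-level region label
region_features = [[12, 0.8], [3, 0.2], [15, 0.9], [2, 0.1]]   # [size, contrast]
region_labels   = [1, 0, 1, 0]                                  # 1 = suspicious region
level1 = DecisionTreeClassifier().fit(region_features, region_labels)

# Level 2: high-level image features (synthesized from level-1 output) -> diagnosis
image_features = [[1, 2], [0, 0], [1, 1], [0, 1]]   # [suspicious region?, #regions]
image_labels   = [1, 0, 1, 0]                        # 1 = tumor image
level2 = DecisionTreeClassifier().fit(image_features, image_labels)

# Classify a new image: run level 1 on its regions, then level 2 on the summary
new_regions = [[14, 0.85], [2, 0.15]]
suspicious = int(level1.predict(new_regions).max())
print(level2.predict([[suspicious, len(new_regions)]]))  # e.g. [1]
```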

Development of the Potential Query Recommendation System using User's Search History (사용자 검색이력 기반의 잠재적 질의어 추천 시스템 개발)

  • Park, Jeongbae;Park, Kinam;Lim, Heuiseok
    • Journal of Digital Convergence / v.11 no.7 / pp.193-199 / 2013
  • In this paper, a potential query recommendation system based on users' search histories is proposed, so that users of an information retrieval system can express their latent information needs as queries and find the desired information more easily. The proposed system analyzes associations with existing users' search histories based on the user's search query and extracts the user's potential information needs. The extracted needs are expressed as recommended queries and presented to the user. To analyze the effectiveness of the proposed system, we conducted behavioral experiments using 27,656 search histories. The subjects showed statistically significantly higher satisfaction when using the proposed system than when using general search engines.
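
A minimal sketch of the basic co-occurrence idea behind search-history-based query recommendation, with made-up session data; the paper's association analysis is more sophisticated.

```python
# Sketch: recommend queries that other users issued in sessions containing the
# current query. Session data are hypothetical; not the paper's method.

from collections import Counter

sessions = [
    ["jeju flights", "jeju hotels", "jeju rental car"],
    ["jeju flights", "jeju weather"],
    ["seoul hotels", "seoul museums"],
    ["jeju hotels", "jeju restaurants"],
]

def recommend(query, sessions, top_k=3):
    co = Counter()
    for s in sessions:
        if query in s:
            co.update(q for q in s if q != query)
    return [q for q, _ in co.most_common(top_k)]

print(recommend("jeju flights", sessions))  # e.g. ['jeju hotels', 'jeju rental car', 'jeju weather']
```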

A Study on Building Internal Tables in Christianity of the 5th Edition of Korean Decimal Classification (기독교 분야 내부보조표 설정에 관한 연구 - 한국십진분류법 제5판을 중심으로 -)

  • Jeong, Yu Na;Chung, Yeon-Kyoung
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.24 no.3 / pp.29-51 / 2013
  • The purpose of this study is to develop internal tables for the Christianity class in the 5th edition of the Korean Decimal Classification. The scope of Christianity, its structure in various classification schemes, and the concept of internal tables were analyzed. The contents of several textbooks were examined to determine the scope of the discipline, and the classification schemes and internal tables of DDC, UDC, NDC, LCC, the Classification of the Library of Union Theological Seminary, and the Classification of the Korea Theological Library were compared. Internal tables for the Bible, sermons, worship, and church history were then built and evaluated by librarians and experts in the field. Finally, internal tables for Christianity and new headings were suggested. The new internal tables in Christianity will increase the effectiveness of information retrieval and provide a foundation for developing internal tables in other disciplines.

Efficient Subsequence Searching in Sequence Databases : A Segment-based Approach (시퀀스 데이터베이스를 위한 서브시퀀스 탐색 : 세그먼트 기반 접근 방안)

  • Park, Sang-Hyun;Kim, Sang-Wook;Loh, Woong-Kee
    • Journal of KIISE:Databases / v.28 no.3 / pp.344-356 / 2001
  • This paper deals with the subsequence searching problem under time warping in sequence databases. Our work is motivated by the observation that subsequence searches slow down quadratically as the average length of data sequences increases. To resolve this problem, the Segment-Based Approach for Subsequence Searches (SBASS) is proposed. SBASS divides data and query sequences into a series of segments and retrieves all data subsequences that satisfy two conditions: (1) the number of segments is the same as the number of segments in the query sequence, and (2) the distance of every segment pair is less than or equal to a tolerance. Our segmentation scheme allows segments to have different lengths; thus we employ the time warping distance as a similarity measure for each segment pair. For efficient retrieval of similar subsequences, we extract feature vectors from all data segments by exploiting their monotonically changing properties, and build a spatial index using these feature vectors. Using this index, queries are processed in four steps: (1) R-tree filtering, (2) feature filtering, (3) successor filtering, and (4) post-processing. The effectiveness of our approach is verified through extensive experiments.
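
A minimal sketch of the matching condition the abstract describes, assuming simple numeric sequences: sequences are cut into monotonic segments, and a candidate matches when the segment counts agree and every segment pair is within the tolerance under the time-warping distance. The R-tree and filtering steps of SBASS are omitted.

```python
# Sketch: segment-based matching under time warping. Only the final check is shown;
# the indexing and filtering steps of SBASS are not implemented here.

def dtw(a, b):
    """Plain O(len(a)*len(b)) time-warping distance with absolute-difference cost."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = abs(a[i-1] - b[j-1]) + min(d[i-1][j], d[i][j-1], d[i-1][j-1])
    return d[len(a)][len(b)]

def monotonic_segments(seq):
    """Split a sequence at the points where it changes direction."""
    segs, cur, rising = [], [seq[0]], None
    for prev, x in zip(seq, seq[1:]):
        direction = x >= prev
        if rising is not None and direction != rising:
            segs.append(cur)
            cur = [prev]
        cur.append(x)
        rising = direction
    segs.append(cur)
    return segs

def matches(query, candidate, tolerance):
    q_segs, c_segs = monotonic_segments(query), monotonic_segments(candidate)
    return (len(q_segs) == len(c_segs) and
            all(dtw(q, c) <= tolerance for q, c in zip(q_segs, c_segs)))

query     = [1, 3, 7, 5, 2, 4, 8]
candidate = [1, 2, 6, 8, 4, 2, 5, 9]
print(matches(query, candidate, tolerance=6.0))  # True for these toy sequences
```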

An XML Tag Indexing Method Using Lexical Similarity (XML 태그를 분류에 따른 가중치 결정)

  • Jeong, Hye-Jin;Kim, Yong-Sung
    • The KIPS Transactions:PartB / v.16B no.1 / pp.71-78 / 2009
  • For more effective index extraction and index weight determination, studies on extracting index terms make use of document structure as well as document content. However, most of these studies concentrate on calculating the importance of content rather than the importance of XML tags, and they determine tag importance from common sense rather than verifying it through objective experiments. For automatic indexing using the tag information of XML documents, which have become the standard for web document management, this paper classifies the major tags that make up a paper according to their importance and calculates the weights of terms extracted from low-weight tags. Using these weights, it proposes a method of calculating the final weight while updating the weights of terms extracted from high-weight tags. To determine more objective weights, the paper surveys which tags users consider important and reflects the resulting importance classification in the weight calculation. Finally, by comparing retrieval performance with index weights calculated by an existing method of determining tag importance, it verifies the effectiveness of the index weights calculated by the proposed method.
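
A minimal sketch of weighting index terms by the XML tag they occur in; the tag weights and the document below are hypothetical, not the values determined in the paper.

```python
# Sketch: a term found in a high-importance tag (e.g. <title>) contributes more to
# its final index weight than the same term found in a low-importance tag (<body>).
# The tag weights are made-up illustration values.

from collections import defaultdict
import xml.etree.ElementTree as ET

TAG_WEIGHTS = {"title": 3.0, "keyword": 2.5, "abstract": 2.0, "body": 1.0}

DOC = """
<paper>
  <title>xml indexing with tag weights</title>
  <abstract>indexing terms from xml tags with different weights</abstract>
  <keyword>indexing</keyword>
  <body>long running text about weights and other things ...</body>
</paper>
"""

def term_weights(xml_text):
    weights = defaultdict(float)
    for elem in ET.fromstring(xml_text):
        tag_w = TAG_WEIGHTS.get(elem.tag, 1.0)
        for term in (elem.text or "").lower().split():
            weights[term] += tag_w
    return dict(weights)

w = term_weights(DOC)
print(sorted(w.items(), key=lambda kv: -kv[1])[:5])  # 'indexing' ranks highest
```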