• Title/Summary/Keyword: Case Retrieval


A Study on the Clustering Technique Associated with Statistical Term Relatedness in Information Retrieval (정보검색(情報檢索)에 있어서 용어(用語)의 통계적(統計的) 관련성(關聯性)을 응용(應用)한 클러스터링기법(技法))

  • Jeong, Jun-Min
    • Journal of Information Management
    • /
    • v.18 no.4
    • /
    • pp.98-117
    • /
    • 1985
  • At the present time, the role and importance of information retrieval have greatly increased for two main reasons: the coverage of searchable collections is now extensive, and collection size may exceed several million documents; furthermore, search results can now be obtained more or less instantaneously using online procedures and computer terminal devices that provide interaction and communication between the system and its users. The large collection size makes it plausible to users that relevant information will in fact be retrieved as a result of a search operation, and the possibility of obtaining the search output without delay creates a substantial user demand for retrieval services.

  • PDF

Investigation of the Effect of Calculation Method of Offset Correction Factor on the GEMS Sulfur Dioxide Retrieval Algorithm (GEMS 이산화황 산출 현업 알고리즘에서 오프셋 보정 계수 산정 방법에 대한 영향 조사)

  • Park, Jeonghyeon;Yang, Jiwon;Choi, Wonei;Kim, Serin;Lee, Hanlim
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.2
    • /
    • pp.189-198
    • /
    • 2022
  • In the present study, we investigated the effect of the offset correction factor calculation method on the sulfur dioxide (SO2) column density in the SO2 retrieval algorithm of the Geostationary Environment Monitoring Spectrometer (GEMS), launched in February 2020. The GEMS operational SO2 retrieval algorithm is a Differential Optical Absorption Spectroscopy (DOAS) - Principal Component Analysis (PCA) hybrid algorithm. In the GEMS hybrid algorithm, the offset correction process is essential to correct for the ozone absorption effect appearing in the SO2 slant column density (SCD) obtained after spectral fitting with DOAS. Since the SO2 column density may depend on the conditions under which the offset correction factor is calculated, an appropriate offset correction value must be applied. In this study, offset correction values were calculated for a day with many cloud pixels and a day with few cloud pixels, and the SO2 column densities retrieved by applying each offset correction factor to the GEMS operational SO2 retrieval algorithm were compared. When the offset correction value calculated from GEMS radiance data on the day with many cloud pixels was used, the standard deviation of the SO2 column density around India and the Korean Peninsula, at the edges of the GEMS observation area, was 1.27 DU and 0.58 DU, respectively; around Hong Kong, where there were many cloud pixels, the SO2 standard deviation was 0.77 DU. In contrast, when the offset correction value calculated from GEMS data on the day with few cloud pixels was used, the standard deviation of the SO2 column density decreased slightly around India (0.72 DU), the Korean Peninsula (0.38 DU), and Hong Kong (0.44 DU), and the SO2 retrieval was relatively stable compared with the case using the offset correction value from the day with many cloud pixels. Accordingly, to minimize the uncertainty of the GEMS SO2 retrieval algorithm and to obtain a stable retrieval, the offset correction factor should be calculated under appropriate conditions.
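The offset correction step described in the abstract can be sketched as follows. This is a minimal illustration, assuming the correction factor is a background statistic (here a median) estimated over presumably SO2-free reference pixels and subtracted from every slant column density; the function names and values are invented, not the actual GEMS algorithm code.

```python
def offset_correction_factor(reference_scds):
    """Estimate the offset as the median SCD over reference pixels."""
    s = sorted(reference_scds)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

def apply_offset(scds, offset):
    """Subtract the offset from every slant column density."""
    return [v - offset for v in scds]

def stddev(values):
    """Population standard deviation, used as a stability diagnostic."""
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

# Illustrative SCDs in Dobson Units: background pixels plus one enhanced pixel.
reference = [0.30, 0.32, 0.28, 0.31, 0.29]
scene = [0.30, 0.33, 0.29, 1.50, 0.31]

offset = offset_correction_factor(reference)
corrected = apply_offset(scene, offset)
```

The paper's finding maps onto this sketch directly: if the reference set is contaminated (e.g. by cloud pixels), the estimated offset shifts, and the standard deviation of the corrected column densities grows.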

Building a Conceptual Model Using Ontology for the Efficient Retrieval of Cases from Fuzzy-CBR of Collision Avoidance Support System

  • Park, Gyei-Kark;Benedictos, John Leslie RM;Shin, Sung-Chul;Im, Nam-Kyun;Yi, Mi-Ra
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.04a
    • /
    • pp.245-250
    • /
    • 2007
  • We have proposed Fuzzy-CBR to find a solution by retrieving past knowledge from the database and adapting it to a new situation. However, an ontology is needed to identify the concepts, relations, and instances involved in a situation in order to improve and facilitate the efficient retrieval of similar cases from the CBR database. This paper proposes a way to apply ontology for identifying the concepts involved in a new case, used as inputs, for a ship collision avoidance support system, and for computing similarity through document articulation and abstraction levels. These ontologies will be used to build a conceptual model of a maneuvering situation.

  • PDF
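The case-retrieval step that the abstract improves with ontology can be sketched as a weighted similarity search over stored situations. The attributes (relative bearing, distance, speed ratio), their weights, and their ranges below are illustrative assumptions, not the paper's actual ontology or feature set.

```python
def attribute_similarity(a, b, value_range):
    """Similarity in [0, 1] from a normalized absolute difference."""
    return 1.0 - min(abs(a - b) / value_range, 1.0)

def case_similarity(query, case, weights, ranges):
    """Weighted average of per-attribute similarities."""
    total = sum(weights.values())
    score = sum(w * attribute_similarity(query[k], case[k], ranges[k])
                for k, w in weights.items())
    return score / total

def retrieve_best(query, case_base, weights, ranges):
    """Return the stored case most similar to the query situation."""
    return max(case_base, key=lambda c: case_similarity(query, c, weights, ranges))

# Hypothetical collision-avoidance cases described by three numeric attributes.
weights = {"bearing": 2.0, "distance": 1.0, "speed_ratio": 1.0}
ranges = {"bearing": 180.0, "distance": 10.0, "speed_ratio": 2.0}
cases = [
    {"id": 1, "bearing": 45.0, "distance": 3.0, "speed_ratio": 1.2},
    {"id": 2, "bearing": 170.0, "distance": 8.0, "speed_ratio": 0.5},
]
query = {"bearing": 50.0, "distance": 3.5, "speed_ratio": 1.1}
best = retrieve_best(query, cases, weights, ranges)
```

In the paper's design, the ontology's abstraction levels would decide which attributes are comparable at all before a numeric similarity like this is computed.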

A Video Shot Verification System (비디오 샷 검증 시스템)

  • Chung, Ji-Moon
    • Journal of Digital Convergence
    • /
    • v.7 no.2
    • /
    • pp.93-102
    • /
    • 2009
  • Since video consists of unstructured data in massive, linear form, various studies are needed to provide the required contents for users who are accustomed to dealing with structured data such as documents and images. Previous studies have shown the occurrence of undetected and falsely detected shots. This thesis suggests a shot verification and video retrieval system using visual rhythm to reduce these kinds of errors. The results of this study are summarized as follows. First, the proposed system detects the parts assumed to be shot boundaries easily and quickly, just by examining changes in the visual rhythm without playing the video; this makes it possible to delete falsely detected shots and to generate undetected shots and key frames. Second, during retrieval, queries by thumbnail and by keyword are possible, and the user can weight color and shape features differently; the corresponding shot or scene is then displayed. When the preferred shot is not found, the key frame of a similar shot is supplied and can be used in a further query for the next scene.

  • PDF
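Shot-boundary candidate detection of the kind the abstract describes can be sketched as below. The paper builds visual rhythm from pixel slices of each frame; here each "frame" is reduced to a simple intensity signature, and an abrupt change between consecutive signatures marks a candidate boundary. The signature, threshold, and data are illustrative assumptions.

```python
def frame_signature(frame):
    """Mean intensity of a frame given as a flat list of pixel values."""
    return sum(frame) / len(frame)

def candidate_boundaries(frames, threshold):
    """Indices where the signature jumps by more than the threshold."""
    sigs = [frame_signature(f) for f in frames]
    return [i for i in range(1, len(sigs))
            if abs(sigs[i] - sigs[i - 1]) > threshold]

# Two synthetic shots: dark frames, then bright frames (a hard cut at index 3).
frames = [[10, 12, 11]] * 3 + [[200, 198, 205]] * 3
cuts = candidate_boundaries(frames, threshold=50.0)
```

The verification step in the paper then lets a user confirm or reject each candidate, which is what removes the false detections a fixed threshold inevitably produces.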

Evaluation of the Newspaper Library -With Emphasis on the Document Delivery Capability and Retrieval Effectiveness- (신문사 자료실에 대한 평가 -문헌전달능력과 검색효율을 중심으로-)

  • 노동조
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.7 no.1
    • /
    • pp.319-351
    • /
    • 1994
  • This research is a case study of the newspaper libraries in Seoul, and its primary purposes are to investigate their document delivery capability and retrieval effectiveness. To achieve this purpose, representative users visited seven newspaper libraries and measured their searching time. Document delivery capability was measured in hours, minutes, and seconds (searching time), and retrieval effectiveness was tested through the recall ratio and the precision ratio. The major findings of the study are summarized as follows: 1) Most of the newspaper libraries showed excellent document delivery capability; six newspaper libraries delivered the data related to the subject. 2) Across all materials, the newspaper libraries showed a mean recall ratio of 50.1% and a mean precision ratio of 84.8%. 3) For their own articles, the newspaper libraries showed a recall ratio of 71.4% and a precision ratio of 90.0%, meaning that their own articles were retrieved more effectively than other materials. 4) The Kookmin Ilbo library had the best system, and the precision ratio of The Dong-A Ilbo library was higher than its recall ratio. The Han Kyoreh Shinmun library had an excellent arrangement of its own articles, but The Segye Times library had problems in every area.

  • PDF
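The recall and precision ratios used in this evaluation are standard IR measures and can be computed as below; the sample relevance judgments are invented for illustration.

```python
def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(relevant)

def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved)

# A search returns four documents; three documents are actually relevant.
retrieved = ["d1", "d2", "d3", "d4"]
relevant = ["d1", "d3", "d5"]
```

A library can therefore show high precision with modest recall, as the Dong-A Ilbo result in the study illustrates: what it returns is mostly relevant, but it misses a larger share of the relevant collection.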

A Study on Modeling of Bibliographic Framework Based on FRBR for Television Program Materials (방송영상자료의 FRBR기반 서지구조모형에 관한 연구)

  • Chung, Jin-Gyoo
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.41 no.1
    • /
    • pp.185-214
    • /
    • 2007
  • This study intends to design a bibliographic framework based on the IFLA-FRBR model for television program materials and to evaluate it in terms of retrieval effectiveness and system usability. The FRBR model supplies a more suitable bibliographic framework for audio-visual material, which has rich hierarchical relations and related bibliographic records. The research methods designed by this study are as follows: (1) An experimental metadata system named FbCS, based on FRBR, was developed using an entity-relationship database and composed of multiple hierarchical layers. FbCS was developed by benchmarking a case study of the FRBR-based iMMix model in the Netherlands. (2) To evaluate the retrieval effectiveness and usability of FbCS, this study conducted an experiment and a survey with groups of professional users.
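The hierarchical relations that make FRBR attractive for broadcast material can be sketched with the model's Group 1 entities (Work, Expression, Manifestation). The class and field names below are an illustrative reading of FRBR applied to a television program, not the paper's FbCS schema.

```python
class Work:
    """The program as an abstract creation."""
    def __init__(self, title):
        self.title = title
        self.expressions = []

class Expression:
    """A realized version of the work, e.g. broadcast edit vs. re-edit."""
    def __init__(self, work, version):
        self.work = work
        self.version = version
        self.manifestations = []
        work.expressions.append(self)

class Manifestation:
    """A physical or digital embodiment, e.g. master tape or digital file."""
    def __init__(self, expression, carrier):
        self.expression = expression
        self.carrier = carrier
        expression.manifestations.append(self)

program = Work("Documentary on Hangeul")
broadcast = Expression(program, "broadcast edit")
master = Manifestation(broadcast, "digital master file")
```

A retrieval system built over this hierarchy can collapse all edits and carriers of one program into a single result, which flat bibliographic records cannot do.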

Object Retrieval Using the Corners Area Variability Based on Correlogram (코너영역 분산치 기반 코렐로그램을 이용한 형태검출)

  • An, Young-Eun;Lee, Ji-Min;Yang, Won-Ii;Choi, Young-Il;Chang, Min-Hyuk
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.6
    • /
    • pp.283-288
    • /
    • 2011
  • This paper proposes an object retrieval method using the corner area variability based on the correlogram. The proposed algorithm proceeds as follows. First, the corner points of the object in an image are extracted and the feature vectors are obtained. These are rearranged according to dimension to form sequence vectors, and the similarity is measured based on the maximum of the sequence vectors. The proposed technique is invariant to rotation or translation of the objects and is more efficient when the objects have a simple structure. In a simulation using Wang's database, the method improves the recall property by 0.03% or more over the standard corner patch histogram.
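The rotation invariance claimed above can be sketched with sequence vectors: rotating an object cyclically shifts its sequence of corner descriptors, so taking the maximum match over all cyclic shifts makes the comparison rotation-invariant. The descriptor values and similarity function below are illustrative assumptions, not the paper's exact formulation.

```python
def shift_similarity(a, b, shift):
    """Similarity of b cyclically shifted against a (1 / (1 + L1 distance))."""
    n = len(a)
    dist = sum(abs(a[i] - b[(i + shift) % n]) for i in range(n))
    return 1.0 / (1.0 + dist)

def sequence_similarity(a, b):
    """Maximum similarity over all cyclic shifts (rotation invariance)."""
    return max(shift_similarity(a, b, s) for s in range(len(a)))

query = [0.9, 0.1, 0.4, 0.7]
rotated = [0.4, 0.7, 0.9, 0.1]   # the same descriptors, cyclically shifted
other = [0.2, 0.2, 0.2, 0.2]
```

Because the maximum is taken over shifts rather than a fixed alignment, the rotated object scores a perfect match against the query while an unrelated object does not.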

Performance Evaluation of SSD-Index Maintenance Schemes in IR Applications

  • Jin, Du-Seok;Jung, Hoe-Kyung
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.4
    • /
    • pp.377-382
    • /
    • 2010
  • With the advent of the flash-memory-based storage device (SSD), there is considerable interest within the computer industry in using flash-memory-based storage devices for many different types of application. Among these, maintaining a dynamic index structure over large text collections has been a primary issue in information retrieval applications. Previous studies have proven three approaches to be effective: in-place update, a merge-based index structure, and a combination of both. These strategies were researched on the traditional storage device (HDD), which is constrained by the need to keep dynamic data contiguous. The new storage device has no such contiguity constraint because of its low access latency. However, although the new storage device offers low access latency and improved I/O throughput, it is still not well suited to traditional dynamic index structures because of its poor random write throughput in practical systems. Therefore, based on an experimental performance evaluation of various index maintenance schemes on the new storage device, we propose an efficient index structure for it that significantly improves index maintenance speed without degrading query performance.
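The merge-based strategy mentioned above can be sketched as follows: new postings accumulate in an in-memory buffer and are periodically merged into the main inverted index in one sequential pass, avoiding the random writes that hurt both HDDs (seeks) and SSDs (poor random write throughput). The dictionaries below are simplified stand-ins for on-disk structures.

```python
def merge_index(disk_index, memory_buffer):
    """Merge buffered postings into the main index, keeping lists sorted.

    disk_index and memory_buffer map a term to a sorted list of doc ids;
    in a real system the merged result would be written out sequentially.
    """
    merged = {term: list(postings) for term, postings in disk_index.items()}
    for term, postings in memory_buffer.items():
        merged[term] = sorted(set(merged.get(term, [])) | set(postings))
    return merged

# Existing on-"disk" index plus postings buffered from newly added documents.
disk_index = {"retrieval": [1, 4], "index": [2]}
memory_buffer = {"retrieval": [7], "ssd": [7]}
new_index = merge_index(disk_index, memory_buffer)
```

An in-place scheme would instead append doc 7 directly into the stored "retrieval" list; the paper's evaluation asks which trade-off suits SSD write characteristics.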

An efficient storing method of multiple streams based on fixed blocks in disk partitions (디스크 파티션내 고정 블록에 기반한 다중 스트림의 효율적 저장 방식)

  • 최성욱;박승규;최덕규
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.9
    • /
    • pp.2080-2089
    • /
    • 1997
  • Recent advances in computer technology have made multimedia processing widely available. Conventional storage systems do not meet the requirements of multimedia data, and several approaches have been suggested to improve disk storage methods for them. Bocheck proposed a disk partitioning technique for multiple streams, assuming that all streams have the same retrieval interval with the same amount of data for each access. While Bocheck's method works well for streams with the same period, it does not consider continuous media streams with different periods. This paper proposes a new partitioning technique in which a fixed number of blocks is assigned to streams with different retrieval periodicities. The analysis shows that this problem is the same as scheduling the streams into a given sequence. A simulation was performed to compare the proposed m-sequence merge method with the conventional Scan-EDF and partitioning methods.

  • PDF
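The scheduling view taken above can be sketched with a simple earliest-deadline-first service order for streams with fixed retrieval periods. This is an illustrative baseline of the kind Scan-EDF represents, not the paper's m-sequence merge algorithm; the periods and slot count are invented.

```python
def build_schedule(periods, slots):
    """Earliest-deadline-first service order for streams with fixed periods."""
    deadline = dict(periods)            # slot by which each stream is next due
    schedule = []
    for _ in range(slots):
        pick = min(deadline, key=deadline.get)   # stream with earliest deadline
        schedule.append(pick)
        deadline[pick] += periods[pick]          # next deadline one period later
    return schedule

# Stream A must be serviced every 2 slots, stream B every 3 slots.
order = build_schedule({"A": 2, "B": 3}, 6)
```

Mapping such a service sequence onto fixed blocks within a disk partition is what lets streams with different periodicities share one layout without missing their retrieval deadlines.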

Pathway Retrieval for Transcriptome Analysis using Fuzzy Filtering Technique and Web Service

  • Lee, Kyung-Mi;Lee, Keon-Myung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.2
    • /
    • pp.167-172
    • /
    • 2012
  • In biology, the advent of high-throughput technology for sequencing, probing, and screening has produced huge volumes of data that cannot be handled manually, so biologists have resorted to software tools to handle them effectively. This paper introduces a bioinformatics tool to help biologists find potentially interesting pathway maps from a transcriptome data set in which the expression levels of genes are described for both case and control samples. The tool accepts a transcriptome data set, then selects and categorizes genes into four classes using a fuzzy filtering technique in which the classes are defined by membership functions. It collects and edits the pathway maps related to the selected genes without the analyst's intervention, invoking a sequence of web service functions from KEGG, an online pathway database, to retrieve related information, locate pathway maps, and manipulate them. It maintains all retrieved pathway maps in a local database and presents them to analysts through a graphical user interface. The tool has been successfully used to identify target genes for further analysis in a transcriptome study of human cytomegalovirus. It is very helpful in that it can considerably save analysts' time and effort by collecting and presenting the pathway maps that contain interesting genes once a transcriptome data set is given.
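The fuzzy filtering step can be sketched as below: each gene's expression change is mapped through membership functions to candidate classes such as "up-regulated" or "down-regulated", and a gene is kept only when its best membership grade passes a cutoff. The two-class setup, the function shapes, and the cutoff are illustrative assumptions; the paper uses four classes with its own membership functions.

```python
def up_membership(fold_change):
    """0 below a log2 fold change of 1.0, rising linearly to 1 at 2.0."""
    if fold_change <= 1.0:
        return 0.0
    return min((fold_change - 1.0) / 1.0, 1.0)

def down_membership(fold_change):
    """Mirror image of the up-regulated membership function."""
    return up_membership(-fold_change)

def classify(fold_change, cutoff=0.5):
    """Return the best class label, or None if no membership passes the cutoff."""
    grades = {"up": up_membership(fold_change),
              "down": down_membership(fold_change)}
    label, grade = max(grades.items(), key=lambda kv: kv[1])
    return label if grade >= cutoff else None

# Illustrative log2 fold changes between case and control samples.
genes = {"geneA": 1.8, "geneB": -2.4, "geneC": 0.3}
selected = {g: classify(fc) for g, fc in genes.items()}
```

Genes classified as None are dropped, and the remaining gene ids are the ones the tool would pass to the KEGG web service calls to locate their pathway maps.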