• Title/Summary/Keyword: 베이스 (base)


Concurrency Control Using the Update Graph in Replicated Database Systems (중복 데이터베이스 시스템에서 갱신그래프를 이용한 동시성제어)

  • Choe, Hui-Yeong;Lee, Gwi-Sang;Hwang, Bu-Hyeon
    • The KIPS Transactions:PartD / v.9D no.4 / pp.587-602 / 2002
  • Replicated database systems emerged to address the reduced availability and reliability caused by the communication failures and site errors that arise in centralized database systems. However, when many update transactions occur, every update must be executed identically on all replicated data, which creates problems such as the message overhead generated by synchronization and reduced concurrency caused by delayed transactions. In this paper, I propose a new concurrency control algorithm that enhances the degree of transaction parallelism in a fully replicated database designed to improve availability and reliability. To improve system performance in the replicated database, transactions should perform their final operations at the site where they are submitted, and update-only transactions composed of write-only operations should be executed independently at all sites. I propose a concurrency control method that maintains the consistency of the replicated database while reflecting the results of update-only transactions at all sites. The superiority of the proposed method was evaluated in terms of response time and transaction abort rate, and the results confirm its advantage over existing methods.
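
The abstract describes the mechanism only in outline. As a rough illustration of the underlying idea (not the authors' exact algorithm), the sketch below models an update graph as a directed conflict graph over update-only transactions and treats acyclicity as the condition for a serializable, committable schedule; all class and function names are hypothetical.

```python
# Minimal sketch: an "update graph" over update-only transactions.
# An edge T1 -> T2 means T1's write must precede T2's; the schedule
# is serializable (and all transactions may commit) iff the graph is acyclic.
from collections import defaultdict

class UpdateGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # txn id -> txn ids it must precede

    def add_conflict(self, before, after):
        """Record that `before` wrote an item that `after` also writes."""
        self.edges[before].add(after)

    def is_serializable(self):
        """Serializable iff the update graph contains no cycle (DFS coloring)."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)

        def has_cycle(node):
            color[node] = GRAY
            for nxt in self.edges[node]:
                if color[nxt] == GRAY:
                    return True          # back edge: cycle found
                if color[nxt] == WHITE and has_cycle(nxt):
                    return True
            color[node] = BLACK
            return False

        return not any(color[n] == WHITE and has_cycle(n)
                       for n in list(self.edges))

# Usage: T1 and T2 both update item x at different replicas.
g = UpdateGraph()
g.add_conflict("T1", "T2")   # T1's write must precede T2's
print(g.is_serializable())   # True: no cycle, both may commit
```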

Design and Implementation of PS-Block Timing Model Using PS-Block Structure (PS-Block 구조를 사용한 PS-Block Timing Model의 설계 및 구현)

  • Kim Yun-Kwan;Shin Won;Chang Chun-Hyon;Kim Tae-Wan
    • The KIPS Transactions:PartD / v.13D no.3 s.106 / pp.399-404 / 2006
  • Real-time systems are used in various settings, from small embedded systems to distributed enterprise systems. Because they are characterized by providing services on time, developers must take care to preserve this timing property when developing real-time applications. As research on real-time systems indicates, the TMO model supports various functions for time processing according to real-time concepts and guarantees the response times that developers define. Developers therefore need a point of reference for defining deadlines and checking timing correctness. This paper proposes an improved PS-Block as an infrastructure for TMO analysis tools that provides such a reference point. The existing PS-Block suffers from overhead caused by a policy that creates duplicated blocks. Accordingly, this paper implements a PS-Block Timing Model that reduces the overhead due to block duplication, and defines a base class for searching within a PS-Block. The PS-Block Timing Model, using the improved PS-Block structure, offers a reference point for deadlines and an infrastructure for execution time analysis according to the PS-Block configuration policy. TMO developers can thus easily verify the deadlines of real-time methods, improve reliability, and shorten development time.
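
As a hedged illustration of what a block-based timing reference might look like (the actual PS-Block Timing Model is more elaborate, and every name below is hypothetical), one can attach a worst-case execution time to each block and check a method's deadline by summing along the nested block structure:

```python
# Hedged sketch: each block carries a worst-case execution time (WCET),
# and the deadline check sums WCETs over the nested block structure.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    wcet_ms: float                      # worst-case execution time of this block
    children: list = field(default_factory=list)

    def total_wcet(self):
        """WCET of the block plus all nested blocks (sequential composition)."""
        return self.wcet_ms + sum(c.total_wcet() for c in self.children)

def meets_deadline(root: Block, deadline_ms: float) -> bool:
    return root.total_wcet() <= deadline_ms

method = Block("tmo_method", 2.0,
               [Block("read_sensor", 1.5), Block("update_state", 3.0)])
print(meets_deadline(method, deadline_ms=10.0))  # True: 6.5 ms <= 10 ms
```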

A Study on the Development of Standard Profiles Management System which supports the Technical Reference Model for Information Technology Architecture (정보기술 아키텍처를 위한 기술참조모델을 지원하는 표준프로파일 관리시스템 개발에 관한 연구)

  • Yang, Jin-Hyeok;Kim, Yeong-Do;Jeong, Hui-Jun;Yang, Jin-Yeong;Yu, Myeong-Hwan
    • The KIPS Transactions:PartD / v.8D no.6 / pp.665-672 / 2001
  • ITA (Information Technology Architecture) satisfies the requirements of an information system, supports the information used in an institution's business to guarantee interoperability and security, and analyzes the components of the information system. ITA consists of EA (Enterprise Architecture), TRM (Technical Reference Model), and SP (Standard Profile). The SP, one of the major components of ITA, is a set of information technology standards. In this paper, to construct and utilize an ITA, we describe the application of information technology to an SP management system implemented on the basis of the TRM. The SP management system implemented in this paper is the first of its kind in Korea; its software is designed with object-oriented technologies such as JSP and Java. Moreover, basic and detailed specifications based on UML notation, component-based system design, and design patterns in the software architecture enhance software reusability. The constructed system also shows lower maintenance costs by using freely available software such as Linux, a Korean DBMS, Apache, and Tomcat. Finally, the system includes an SP reference facility covering standards used in other institutions, a feature not found in comparable systems, as well as diverse additional service modules that support the follow-up processes for establishing and revising standards via the Internet.


A Korean Homonym Disambiguation System Using Refined Semantic Information and Thesaurus (정제된 의미정보와 시소러스를 이용한 동형이의어 분별 시스템)

  • Kim Jun-Su;Ock Cheol-Young
    • The KIPS Transactions:PartB / v.12B no.7 s.103 / pp.829-840 / 2005
  • Word sense disambiguation (WSD) is one of the most difficult problems in Korean information processing. We propose a WSD model that filters semantic information using the specific characteristics of dictionary definitions, augmented with information useful for sense determination such as statistical, distance, and case information. The model resolves the issues arising from the scarcity of semantic information data by drawing on the word hierarchy system (thesaurus) developed by the University of Ulsan's UOU Word Intelligent Network, a dictionary-based lexical database. Among the WSD models elaborated in this study, the one using statistical information, distance, and case information along with the thesaurus (hereinafter the 'SDJ-X model') performed best. In an experiment on the sense-tagged corpus of 1,500,000 eojeols provided by the Sejong project, the SDJ-X model improved on maximum-frequency sense determination (MFC, the accuracy baseline) by 18.87% (21.73% for nouns), with the inter-eojeol distance weights contributing 10.49% (8.84% for nouns, 11.51% for verbs). Finally, the accuracy of the SDJ-X model was higher than that of the model using only statistical, distance, and case information without the thesaurus by a margin of 6.12% (5.29% for nouns, 6.64% for verbs).
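
As a loose sketch of how the three evidence sources named above (sense frequency statistics, inter-eojeol distance, and case information) might combine into a single sense score; the weights, data structures, and example senses below are invented for illustration and this is not the SDJ-X model itself:

```python
# Sketch: score each candidate sense by (1) corpus frequency, (2) distance-
# weighted co-occurrence with context words, (3) a bonus when the case
# (josa) information agrees with the sense's expected case frame.
def score_sense(sense, context_words, freq, cooccur, case_match,
                alpha=1.0, beta=0.5, gamma=2.0):
    s = alpha * freq.get(sense, 0)                        # statistical information
    for dist, word in enumerate(context_words, start=1):
        s += beta * cooccur.get((sense, word), 0) / dist  # distance-weighted evidence
    if case_match:                                        # case information agrees
        s += gamma
    return s

freq = {"bank/river": 3, "bank/finance": 10}
cooccur = {("bank/river", "water"): 5, ("bank/finance", "loan"): 7}
best = max(["bank/river", "bank/finance"],
           key=lambda s: score_sense(s, ["loan", "interest"],
                                     freq, cooccur, case_match=False))
print(best)  # bank/finance
```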

Medical Image Compression Using JPEG International Standard (JPEG 표준안을 이용한 의료 영상 압축)

  • Ahn, Chang-Beom;Han, Sang-Woo;Kim, Il-Yoen
    • Proceedings of the KIEE Conference / 1993.07a / pp.504-506 / 1993
  • The Joint Photographic Experts Group (JPEG) standard was proposed by the International Organization for Standardization (ISO/SC 29/WG 10) and CCITT SG VIII as an international standard for digital continuous-tone still image compression. The JPEG standard has been widely accepted in electronic imaging, computer graphics, and multimedia applications; however, due to the lossy nature of JPEG compression, its application in the field of medical imaging has been limited. In this paper, the JPEG standard was applied to a series of head-section magnetic resonance (MR) images (256 gray levels, 256×256 pixels) and its performance was investigated. For this purpose, the DCT-based sequential mode of the JPEG standard was implemented using the CL550 compression chip, and progressive and lossless coding were implemented in software without additional hardware. The experiments show that compression ratios of about 10 to 20 were obtained for the MR images without noticeable distortion. It is also noted that the error signal between the image reconstructed by JPEG and the original image was nearly random noise, without any special pattern-related artifact. Although the coding efficiency of progressive and hierarchical coding is identical to that of sequential coding in compression ratio and SNR, it offers useful features for quickly searching patient images in a huge image database and for remote diagnosis over slow public communication channels.
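
The experiment described lends itself to a small software reproduction. The sketch below (assuming Pillow and NumPy; the original work used the CL550 hardware codec, not this software path) JPEG-compresses a 256×256 8-bit grayscale image and reports the compression ratio and PSNR:

```python
# Sketch: JPEG round-trip of an 8-bit grayscale image, measuring
# compression ratio and PSNR against the original.
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(gray: np.ndarray, quality: int = 75):
    """Compress a 2-D uint8 image as JPEG; return (ratio, PSNR in dB)."""
    buf = io.BytesIO()
    Image.fromarray(gray).save(buf, format="JPEG", quality=quality)
    compressed_bytes = buf.tell()       # bytes written by the encoder
    buf.seek(0)
    recon = np.asarray(Image.open(buf), dtype=np.float64)
    mse = np.mean((gray.astype(np.float64) - recon) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return gray.nbytes / compressed_bytes, psnr

# A random image stands in for an MR head section here.
img = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
ratio, psnr = jpeg_roundtrip(img, quality=50)
print(f"compression ratio {ratio:.1f}:1, PSNR {psnr:.1f} dB")
```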


Development of Collaborative Environment for Community-driven Scientific Data Curation (커뮤니티 주도적 과학 데이터 큐레이션 협업 환경의 개발)

  • Choi, Dong-Hoon;Park, Jae-Won;Kim, ByungKyu;Shin, Jin-Sup
    • The Journal of the Korea Contents Association / v.17 no.9 / pp.1-11 / 2017
  • The importance of data curation is increasingly recognized as the need for data reuse grows dramatically. Due to the recent data explosion, scientists invest almost 90% of their effort in retrieving and collecting the data needed for their studies. In this paper, we deal with the development and application of a collaborative environment for community-driven data curation, which is essential for enhancing the reusability and citability of scientific data. The collaborative scientific data curation environment focuses on cross-linking between data (or data collections) and their associated literature to capture and organize the inter-relations among research results in a specific domain. Plenty of contextual information is also provided as metadata to help users understand the data. The cross-linking is realized using the DOI system to guarantee global accessibility to data and their relationships to the literature. The curation environment has been adopted by a globally well-known intrinsically disordered protein research group to build a community-driven curated DB, which will drastically reduce researchers' effort to retrieve and collect the data required for scientific discovery.
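
As a purely hypothetical sketch of what one curated cross-link record could look like (field names and DOIs are invented; the paper does not specify its schema), linking a dataset DOI to its associated literature along with contextual metadata:

```python
# Hypothetical curated record: a dataset DOI cross-linked to literature,
# plus contextual metadata that helps users understand the data.
curated_record = {
    "dataset_doi": "10.1234/example.dataset.001",        # invented DOI
    "title": "Binding measurements for an IDP target",
    "linked_literature": [
        {"doi": "10.1234/example.paper.2017", "relation": "isSupplementTo"},
    ],
    "context": {"organism": "H. sapiens", "method": "NMR", "units": "uM"},
    "curator": "community-member-42",
}
print(curated_record["linked_literature"][0]["doi"])
```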

Source Tracking of Fecal Contamination at Ansan Stream Using Multiple Antibiotic Resistance Analysis (Multiple Antibiotic Resistance Analysis를 이용한 안산천 분변성 미생물 오염원 추적)

  • Lee, Sang-Min;Lee, Jin;Kim, Moon-Il;Yoon, Hyun-Sik
    • Journal of Korean Society of Environmental Engineers / v.33 no.11 / pp.827-833 / 2011
  • In this study, tracking of fecal non-point pollutant sources was conducted on the Ansan stream. The Multiple Antibiotic Resistance Analysis (MARA) method used in this study is based on the premise that fecal bacteria derived from the intestines of humans and different animals show distinct resistance patterns to antibiotics. First, a database of known sources must be established; an unknown sample is then matched against the database to identify its source by statistical analysis. To examine the environmental influence of the stream basin, the Ansan stream was divided into three parts: upstream (livestock farming area), midstream (old section of the city), and downstream (new section of the city). The statistical analysis indicated that the upstream area was influenced by animals, consistent with the livestock farms located there: 45.8% of isolates were classified as livestock under a 3-way classification into livestock, pet, and human. For the midstream and downstream areas, the human influence was remarkable, at 60% and 80%, respectively. From these results, the MARA method appears useful for tracking non-point pollutant sources, because the MARA results correspond to the sources predicted by a field study. It is also expected that more effective source tracking will become possible as databases are established for each area.
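
A rough sketch of the MARA workflow under stated assumptions: a library of binary resistance profiles from known sources, with an unknown isolate assigned by a simple nearest-centroid rule standing in for the discriminant analysis typically used; all data below are fabricated for illustration.

```python
# Sketch: build mean resistance profiles ("centroids") per known source,
# then assign an unknown isolate to the closest centroid.
import numpy as np

# Rows: isolates; columns: resistant (1) / susceptible (0) per antibiotic.
known = {
    "livestock": np.array([[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 0, 0]]),
    "pet":       np.array([[0, 1, 1, 0], [0, 1, 1, 1]]),
    "human":     np.array([[0, 0, 1, 1], [1, 0, 1, 1]]),
}
centroids = {src: profiles.mean(axis=0) for src, profiles in known.items()}

def classify(isolate):
    """Return the source whose mean profile is nearest to the isolate."""
    return min(centroids, key=lambda s: np.linalg.norm(isolate - centroids[s]))

print(classify(np.array([1, 1, 0, 1])))  # -> livestock
```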

In-network Aggregation Query Processing using the Data-Loss Correction Method in Data-Centric Storage Scheme (데이터 중심 저장 환경에서 소실 데이터 보정 기법을 이용한 인-네트워크 병합 질의 처리)

  • Park, Jun-Ho;Lee, Hyo-Joon;Seong, Dong-Ook;Yoo, Jae-Soo
    • Journal of KIISE:Databases / v.37 no.6 / pp.315-323 / 2010
  • In wireless sensor networks (WSNs), various data-centric storage (DCS) schemes have been proposed to store collected data and process queries efficiently. A DCS scheme assigns distributed data regions to sensor nodes and stores the collected data at the sensor responsible for each data region so that queries can be processed efficiently. However, since all data stored in a node are lost when the node fails, the accuracy of query processing becomes low. In this paper, we propose an in-network aggregation query processing method that assures highly accurate query results when data are lost due to node failures in a DCS scheme. When data loss occurs, the proposed method builds a compensation model for the lost data region using linear regression and returns a query result that includes the resulting virtual data, guaranteeing highly accurate results in spite of node failures. To show the superiority of the proposed method, we compare E-KDDCS (KDDCS with the proposed method) against existing DCS schemes without the data-loss correction method. The results show that the proposed method increases accuracy and reduces query processing costs compared with the existing schemes.
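
A minimal sketch of the compensation step, assuming one-dimensional positions and NumPy (names and data are illustrative, not the paper's implementation): fit a linear model over the surviving readings around the failed node's region, then aggregate over real plus virtual values.

```python
# Sketch: linear-regression compensation for a lost data region,
# followed by an AVG aggregation over real + virtual readings.
import numpy as np

# Surviving (position, temperature) pairs around a failed node's region.
positions = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
readings  = np.array([20.1, 20.9, 22.2, 25.8, 27.1, 27.9])

slope, intercept = np.polyfit(positions, readings, deg=1)  # linear regression

lost_positions = np.array([4.0, 5.0])             # region of the failed node
virtual = slope * lost_positions + intercept      # compensated "virtual" data

avg = np.concatenate([readings, virtual]).mean()  # corrected AVG aggregate
print(f"virtual readings: {virtual.round(2)}, corrected AVG: {avg:.2f}")
```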

Reordering Scheme of Location Identifiers for Indexing RFID Tags (RFID 태그의 색인을 위한 위치 식별자 재순서 기법)

  • Ahn, Sung-Woo;Hong, Bong-Hee
    • Journal of KIISE:Databases / v.36 no.3 / pp.198-214 / 2009
  • Trajectories of RFID tags can be modeled as lines, called tag intervals, captured by RFID readers and indexed in a three-dimensional domain whose axes are the tag identifier (TID), the location identifier (LID), and time (TIME). The distribution of tag intervals in the domain space is an important factor for efficiently processing queries that trace tags, and it changes according to how the coordinates of each domain are arranged. In particular, the arrangement of LIDs affects the performance of queries retrieving the traces of tags over time, because LIDs provide the tags' location information. It is therefore necessary to determine an optimal ordering of LIDs so that queries retrieving tag intervals from the index perform efficiently. To this end, we propose the notion of LID proximity for reordering previously assigned LIDs, and define an LID proximity function so that tag intervals accessed together by a query are stored close to one another in index nodes. To determine the sequence of LIDs in the domain, we also propose a reordering scheme based on LID proximity. Our experiments show that the proposed reordering scheme considerably improves the performance of queries tracing tag locations compared with the previous method of assigning LIDs.
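
As a hedged illustration of proximity-based reordering (not the paper's exact proximity function; all names are invented): given co-access frequencies between LIDs, greedily chain LIDs so that frequently co-accessed ones receive adjacent new identifiers.

```python
# Sketch: greedily order LIDs by pairwise co-access frequency so that
# LIDs touched together by trace queries end up with adjacent identifiers.
def reorder_lids(lids, proximity):
    """proximity[(a, b)]: co-access frequency of LIDs a and b (symmetric)."""
    prox = lambda a, b: proximity.get((a, b), proximity.get((b, a), 0))
    order = [lids[0]]
    remaining = set(lids[1:])
    while remaining:
        # Append the LID most often co-accessed with the current tail.
        nxt = max(remaining, key=lambda l: prox(order[-1], l))
        order.append(nxt)
        remaining.remove(nxt)
    # New LID = position in the order, so co-accessed LIDs become adjacent.
    return {old: new for new, old in enumerate(order)}

proximity = {("dock", "belt"): 9, ("belt", "gate"): 7, ("dock", "shelf"): 1}
print(reorder_lids(["dock", "shelf", "belt", "gate"], proximity))
# {'dock': 0, 'belt': 1, 'gate': 2, 'shelf': 3}
```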

Declustering of High-dimensional Data by Cyclic Sliced Partitioning (주기적 편중 분할에 의한 다차원 데이터 디클러스터링)

  • Kim Hak-Cheol;Kim Tae-Wan;Li Ki-Joune
    • Journal of KIISE:Databases / v.31 no.6 / pp.596-608 / 2004
  • A lot of work has been done to reduce disk access time in I/O-intensive systems that store and handle massive amounts of data, by distributing the data across multiple disks and accessing them in parallel. Most previous work has focused on an efficient mapping from a grid cell to a disk number, on the assumption that the data space is partitioned into a regular grid. Although grid-like partitioning achieves good performance for low-dimensional data, its performance degrades as the dimensionality of the data grows, even with a good disk allocation scheme. This is because such schemes partition the entire data space equally, regardless of the distribution of the data objects, whereas most data in a high-dimensional space lie near the surface of the space. For that reason, we propose a new declustering algorithm based on a partitioning scheme that partitions the data space from the surface inward. Several experimental results show that this unbalanced partitioning scheme remarkably reduces the number of data blocks touched by a query as the dimensionality of the data and the query size grow. We also propose disk allocation schemes based on the layout of the resulting data blocks after partitioning. To show the performance of the proposed algorithm, we performed several experiments with data of different dimensionalities and a wide range of disk counts. Our proposed disk allocation method performs within 10 additional disk accesses of a strictly optimal allocation scheme. We compared our algorithm with the Kronecker-sequence-based declustering algorithm, reported to be the best among declustering algorithms based on grid partitioning and mapping functions, and improved declustering performance by up to 14 times as the dimensionality of the data grows.
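
A simplified sketch of the declustering goal under stated assumptions (a plain round-robin mapping stands in for the paper's allocation scheme): spread the blocks a range query touches across disks so the parallel response time, i.e. the maximum number of blocks any one disk must serve, stays near the optimum ceil(k/M).

```python
# Sketch: round-robin block-to-disk allocation and the resulting
# parallel cost of a range query (max blocks served by a single disk).
def allocate(num_blocks, num_disks):
    return {b: b % num_disks for b in range(num_blocks)}   # block -> disk

def parallel_cost(query_blocks, placement, num_disks):
    load = [0] * num_disks
    for b in query_blocks:
        load[placement[b]] += 1
    return max(load)   # optimum is ceil(len(query_blocks) / num_disks)

placement = allocate(num_blocks=64, num_disks=8)
query = range(10, 23)                      # 13 consecutive blocks hit by a range query
print(parallel_cost(query, placement, 8))  # 2 = ceil(13/8): optimal here
```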