• Title/Summary/Keyword: Data Parallelism (데이타 병렬)


Four Consistency Levels in Trigger Processing (트리거 처리 4 단계 일관성 레벨)

  • Eric Hanson
    • Journal of KIISE:Databases
    • /
    • v.29 no.6
    • /
    • pp.492-501
    • /
    • 2002
  • An asynchronous trigger processor (ATP) is a software system that processes triggers after update transactions to databases are complete. In an ATP, discrimination networks are used to check trigger conditions efficiently. Discrimination networks store their internal states in memory nodes. TriggerMan is an ATP that uses the Gator network as its discrimination network. Changes in databases are delivered to TriggerMan in the form of tokens. Processing tokens against a Gator network updates the memory nodes of the network and checks the condition of the trigger for which the network is built. Parallel token processing is one of the methods that can improve system performance. However, uncontrolled parallel processing breaks the semantic consistency of trigger processing. In this paper, we propose four trigger processing consistency levels that allow parallel token processing with minimal anomalies. For each consistency level, a parallel token processing technique is developed. The techniques are proven to be valid and are also applicable to materialized view maintenance.
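A minimal sketch of the idea behind controlled parallel token processing: worker threads consume insert/delete tokens concurrently, while a per-node lock serializes updates to a memory node so its stored state stays consistent. All names here (MemoryNode, the token format, the predicate) are illustrative, not TriggerMan's actual Gator-network structures or the paper's four consistency levels.

```python
import threading
from queue import Queue

class MemoryNode:
    """Stores the tuples that currently satisfy one trigger condition."""
    def __init__(self, predicate):
        self.predicate = predicate
        self.tuples = set()
        self.lock = threading.Lock()  # serializes updates to this node only;
                                      # tokens for other nodes could run in parallel

    def process(self, token):
        op, row = token
        if not self.predicate(row):
            return False              # token does not affect this condition
        with self.lock:
            if op == "+":
                self.tuples.add(row)  # insert token
            else:
                self.tuples.discard(row)  # delete token
        return True

def worker(node, tokens):
    while True:
        token = tokens.get()
        if token is None:             # sentinel: shut the worker down
            break
        node.process(token)
        tokens.task_done()

if __name__ == "__main__":
    node = MemoryNode(lambda row: row[1] > 100)  # condition: value > 100
    tokens = Queue()
    threads = [threading.Thread(target=worker, args=(node, tokens))
               for _ in range(4)]
    for t in threads:
        t.start()
    for i in range(10):
        tokens.put(("+", ("item%d" % i, i * 30)))
    tokens.join()
    for _ in threads:
        tokens.put(None)
    for t in threads:
        t.join()
    print(sorted(node.tuples))        # only rows with value > 100 remain
```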

Complexity Analysis of an Improved Parallel Logic Programming System (개선된 병렬 논리프로그래밍 시스템의 복잡도 분석)

  • 정인정
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.2 no.1
    • /
    • pp.59-77
    • /
    • 1992
  • This paper concerns logic programming as it relates to the protection of computer systems and the encryption of data communications. We propose an improved control strategy for parallel logic programming systems and define its formal syntax and semantics. We classify the kinds of parallelism in parallel logic programs and explain how they are exploited. We also propose a parallel computation model, the alternating Turing machine (ATM), whose derivation process and computation style match those of parallel logic programs, and we use the ATM to analyze the complexity of logic programs and to validate the proposed ideas.
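One common kind of parallelism in logic programs is OR-parallelism, where alternative clauses for the same goal are explored concurrently. A toy sketch under that reading, with a made-up fact base and goal (not the paper's formalism or control strategy):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy fact base: parent(X, Y) facts, each acting as one clause alternative.
facts = {
    "parent": [("tom", "bob"), ("tom", "liz"), ("bob", "ann")],
}

def try_clause(fact, goal_args):
    """Unify the goal against a single fact; None in the goal is a variable."""
    x, y = goal_args
    fx, fy = fact
    if (x is None or x == fx) and (y is None or y == fy):
        return (fx, fy)
    return None

def solve_or_parallel(pred, goal_args):
    """Try every clause alternative for the goal concurrently."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda f: try_clause(f, goal_args), facts[pred])
    return [r for r in results if r is not None]

print(solve_or_parallel("parent", ("tom", None)))  # -> [('tom', 'bob'), ('tom', 'liz')]
```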

A $CST^+$ Tree Index Structure for Range Search (범위 검색을 위한 $CST^+$ 트리 인덱스 구조)

  • Lee, Jae-Won;Kang, Dae-Hee;Lee, Sang-Goo
    • Journal of KIISE:Databases
    • /
    • v.35 no.1
    • /
    • pp.17-28
    • /
    • 2008
  • Recently, main memory access has become a performance bottleneck for many computer applications. Cache memory was introduced to reduce memory access latency; however, it reduces latency only when the desired data are already resident in the cache. The CST tree was proposed to solve this problem by improving the T tree. However, when performing a range search, the CST tree has to visit unnecessary nodes. Therefore, this paper proposes the $CST^+$ tree, which keeps the merits of the CST tree while supporting range search by linking its data nodes with linked lists. Experiments show that $CST^+$ is $4{\sim}10$ times as fast as CST and $CSB^+$. In addition, rebuilding an index is an essential step in database recovery from a system failure. In this paper, we propose a fast tree index rebuilding algorithm called MaxPL. MaxPL has no node-split overhead and employs parallelism for reading the data records and inserting the keys into the index. We show that MaxPL is $2{\sim}11$ times as fast as sequential insert and batch insert.
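A minimal sketch of the range-search idea the abstract describes: once leaf-level data nodes are linked in key order, a range query can walk the chain instead of re-descending the tree for every node. The node layout and names below are illustrative, not the paper's actual $CST^+$ structure.

```python
class LeafNode:
    """A data node holding sorted keys, linked to its right sibling."""
    def __init__(self, keys):
        self.keys = sorted(keys)
        self.next = None  # linked list pointer enabling the range scan

def range_search(start_leaf, lo, hi):
    """Collect all keys in [lo, hi] by walking the leaf chain once."""
    results, leaf = [], start_leaf
    while leaf is not None:
        for k in leaf.keys:
            if lo <= k <= hi:
                results.append(k)
            elif k > hi:
                return results        # keys are ordered: we can stop early
        leaf = leaf.next
    return results

a, b, c = LeafNode([1, 4, 7]), LeafNode([9, 12, 15]), LeafNode([18, 21])
a.next, b.next = b, c
print(range_search(a, 5, 16))  # -> [7, 9, 12, 15]
```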

On the Data Mining and Security (데이터 탐사와 보안성)

  • 심갑식
    • Review of KIISC
    • /
    • v.7 no.4
    • /
    • pp.73-79
    • /
    • 1997
  • Data mining is the technology that transforms data held in a warehouse or other database into useful information. That is, data mining is the process of issuing a series of appropriate queries to extract previously unknown information from the large volumes of data in a database. Data mining technology is a blend of diverse techniques, including statistics, machine learning, database management, and parallel processing. This study examines the security threats that can arise from data mining, techniques for handling these threats, and the use of data mining as a tool for addressing security problems.


Verifying Validation for Metadata Documents in RDF (RDF 기반 메타데이타 문서를 위한 유효성 검증 가능한 툴 개발)

  • 조성훈;김동혁;조현규;송병렬;이무훈;최의인
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2004.05a
    • /
    • pp.447-450
    • /
    • 2004
  • With the spread of the Internet, enormous amounts of data are being produced, and metadata standards have been published repeatedly to manage these data efficiently. However, the existing standards are either restricted to specific domains or so broad that they are inefficient for data management. Moreover, even metadata created according to a standard cannot be called correct metadata, because its validity cannot be verified. This paper therefore develops an RDF (Resource Description Framework) based authoring tool that provides an authoring function for efficiently creating metadata, a validation function for verifying the generated metadata, and an N-Triple Generator that supports and produces N-Triple representations of the metadata.
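A rough analogue of the pipeline the abstract describes, using Python's rdflib library as a stand-in for the authors' own tool: parse an RDF/XML document (treating parse failures as validation errors) and emit an N-Triples serialization.

```python
from rdflib import Graph

# Sample RDF/XML metadata describing one resource with Dublin Core terms.
rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/doc1">
    <dc:title>Sample Document</dc:title>
    <dc:creator>Jane Doe</dc:creator>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
try:
    g.parse(data=rdf_xml, format="xml")  # malformed RDF raises an exception
except Exception as e:
    print("validation failed:", e)
else:
    print(g.serialize(format="nt"))      # N-Triples: one triple per line
```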


Declustering of High-dimensional Data by Cyclic Sliced Partitioning (주기적 편중 분할에 의한 다차원 데이터 디클러스터링)

  • Kim Hak-Cheol;Kim Tae-Wan;Li Ki-Joune
    • Journal of KIISE:Databases
    • /
    • v.31 no.6
    • /
    • pp.596-608
    • /
    • 2004
  • A lot of work has been done to reduce disk access time in I/O-intensive systems, which store and handle massive amounts of data, by distributing data across multiple disks and accessing them in parallel. Most previous work has focused on an efficient mapping from a grid cell to a disk number, on the assumption that the data space is partitioned like a regular grid. Although grid-like partitioning achieves good performance for low-dimensional data, its performance degrades as the dimension of the data grows, even with a good disk allocation scheme. This stems from the fact that such schemes partition the entire data space equally regardless of the distribution of data objects: most of the data in a high-dimensional space lie near the surface of the space. For that reason, we propose a new declustering algorithm based on a partitioning scheme that partitions the data space from the surface inward. Several experimental results show that with this unbalanced partitioning scheme we can remarkably reduce the number of data blocks touched by a query as the dimension of the data and the query size grow. In this paper, we also propose disk allocation schemes based on the layout of the resulting data blocks after partitioning. To show the performance of the proposed algorithm, we performed several experiments with data of different dimensions and over a wide range of numbers of disks. Our proposed disk allocation method performs within 10 additional disk accesses of a strictly optimal allocation scheme. We compared our algorithm with the Kronecker-sequence-based declustering algorithm, which is reported to be the best among the grid-partition and mapping-function-based declustering algorithms, and improve declustering performance by up to 14 times as the dimension of the data grows.
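A minimal sketch of declustering itself: data blocks produced by some partitioning step are assigned to disks so that the blocks touched by one query land on different disks and can be read in parallel. Plain round-robin below stands in for the paper's cyclic allocation, which is tailored to its sliced partitioning; the block ids and query are made up.

```python
NUM_DISKS = 4

def assign_disk(block_id):
    """Cyclic (round-robin) allocation of blocks to disks."""
    return block_id % NUM_DISKS

def query_cost(touched_blocks):
    """Parallel I/O cost = max number of blocks any single disk must serve."""
    per_disk = [0] * NUM_DISKS
    for b in touched_blocks:
        per_disk[assign_disk(b)] += 1
    return max(per_disk)

touched = [3, 4, 5, 6, 7, 8]   # blocks hit by one range query
print(query_cost(touched))     # -> 2: six blocks spread nearly evenly over 4 disks
```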

The Integer Superscalar Processor Performance Model Using Dependency Trees and the Relative ILP (종속 트리와 상대적 병렬도를 이용하는 수퍼스칼라 프로세서의 정수형 성능 예측 모델)

  • 이종복
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10c
    • /
    • pp.13-15
    • /
    • 2001
  • Recently, as research on predicting processor parallelism with analytical techniques has become active, performance prediction models for processors have grown in importance. Existing work, however, does not provide a recursive performance model independent of branch degree for processors that use the multiple branch prediction now in wide use. To address this, this paper constructs an instruction dependency tree every cycle and assigns relative instruction-level parallelism values between dependent instructions to measure the input data for the performance prediction model. As a result, the performance of integer programs on a processor using multiple branch prediction can be predicted with a smaller relative error than with existing performance models.
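A toy sketch of the measurement idea: build a dependency structure over an instruction window and take instruction-level parallelism as instruction count divided by critical-path depth. The instruction encoding and this flat ILP ratio are simplifications; the paper's per-cycle trees and relative ILP weighting are richer.

```python
# Each instruction: (destination_register, [source_registers])
window = [
    ("r1", []),            # r1 = load (no dependences)
    ("r2", ["r1"]),        # r2 depends on r1
    ("r3", ["r1"]),        # r3 depends on r1 (can run in parallel with r2)
    ("r4", ["r2", "r3"]),  # r4 joins both chains
]

def window_ilp(instrs):
    """ILP estimate = #instructions / depth of the dependency critical path."""
    depth = {}             # register -> depth of the tree producing it
    max_depth = 0
    for dest, srcs in instrs:
        d = 1 + max((depth.get(s, 0) for s in srcs), default=0)
        depth[dest] = d
        max_depth = max(max_depth, d)
    return len(instrs) / max_depth

print(window_ilp(window))  # -> 4 instructions / depth 3 = 1.33...
```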


The Cooperative Parallel X-Match Data Compression Algorithm (협동 병렬 X-Match 데이타 압축 알고리즘)

  • 윤상균
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.10
    • /
    • pp.586-594
    • /
    • 2003
  • X-Match is a lossless compression algorithm suitable for hardware implementation owing to its simplicity. It can compress 32 bits per clock cycle and is suitable for real-time compression. However, as bus widths increase to 64 bits, the compression unit also needs to grow. This paper proposes the cooperative parallel X-Match (X-MatchCP) algorithm, which improves compression speed by performing two X-Match operations in parallel. Whereas the previous parallel X-Match algorithm searches individual dictionaries, X-MatchCP searches the whole dictionary for both words, combines the compression codes of the two words generated by the parallel X-Match compressors, and outputs the combined code. The compression ratio of X-MatchCP is almost the same as that of X-Match. The X-MatchCP algorithm is described and simulated in the Verilog hardware description language.
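A simplified software sketch of X-Match-style dictionary coding on 4-byte words: a full dictionary hit emits just the index, a miss emits the literal word and inserts it at the front. Real X-Match also codes partial matches (some bytes match) and maintains the dictionary with a move-to-front policy; both are omitted here for brevity.

```python
DICT_SIZE = 16  # small fixed dictionary, as in hardware implementations

def xmatch_compress(data: bytes):
    """Code a byte string 4-byte word at a time against a small dictionary."""
    dictionary, out = [], []
    for i in range(0, len(data), 4):
        word = data[i:i + 4]
        if word in dictionary:
            out.append(("hit", dictionary.index(word)))  # index only
        else:
            out.append(("miss", word))                   # literal word
            dictionary.insert(0, word)                   # newest entry in front
            del dictionary[DICT_SIZE:]                   # bound dictionary size
    return out

sample = b"ABCDWXYZABCDABCD"
for code in xmatch_compress(sample):
    print(code)  # ('miss', b'ABCD'), ('miss', b'WXYZ'), ('hit', 1), ('hit', 1)
```

X-MatchCP's contribution, per the abstract, is running two such compressors against a shared dictionary and merging their output codes each cycle; the sketch above shows only the single-word coding step.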

A Study of designing Parallel File System for Massive Information Processing (대규모 정보처리를 위한 병렬 화일시스템 설계에 관한 연구)

  • Jang, Si-Ung;Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.5
    • /
    • pp.1221-1230
    • /
    • 1997
  • In this study, the performance of a parallel file system (N-PFS), implemented using conventional disks as disk arrays on a workstation cluster, is analyzed using an analytical model and actual experimental values. N-PFS can be used as a high-performance file server in small-scale server systems and can efficiently process massive data I/O such as multimedia and scientific data. In this paper, an analytical model is suggested, and its correctness is verified by analyzing experimental values measured on a real system. The appropriate striping unit for processing massive data on a workstation cluster with 8 disks is 64-128 Kbytes, and the maximum throughput is 15.8 Mbytes/sec. In addition, the performance of the parallel file system on massive data is bounded by the time required to copy data between buffers.
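A minimal sketch of the striping mechanism such a parallel file system relies on: a file is split into fixed-size stripe units assigned round-robin across the disks, so consecutive units can be read in parallel. The 64 KB unit echoes the paper's measured 64-128 KB optimum; disks are simulated with in-memory lists, and nothing here is N-PFS's actual design.

```python
STRIPE_UNIT = 64 * 1024   # bytes per stripe unit
NUM_DISKS = 8

def stripe(data: bytes):
    """Split a file into stripe units and assign them round-robin to disks."""
    disks = [[] for _ in range(NUM_DISKS)]
    for i in range(0, len(data), STRIPE_UNIT):
        unit = data[i:i + STRIPE_UNIT]
        disks[(i // STRIPE_UNIT) % NUM_DISKS].append(unit)
    return disks

def read_back(disks):
    """Reassemble the file: one unit per disk per round (these reads would
    proceed in parallel on real hardware; here they run sequentially)."""
    out, round_no = [], 0
    while True:
        units = [d[round_no] for d in disks if round_no < len(d)]
        if not units:
            return b"".join(out)
        out.extend(units)
        round_no += 1

data = bytes(range(256)) * 2048          # 512 KB of sample data
assert read_back(stripe(data)) == data   # round-trip preserves the file
```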
