• Title/Summary/Keyword: shared disks

Search results: 13

Affinity-based Dynamic Transaction Routing in Shared Disks Clusters (공유 디스크 클러스터에서 친화도 기반 동적 트랜잭션 라우팅)

  • 온경오;이상호;조행래
    • Proceedings of the Korean Information Science Society Conference / 2003.04a / pp.542-544 / 2003
  • A shared disks (SD) cluster couples multiple computers for online transaction processing, and each node shares a common database at the disk level. In the SD cluster, transaction routing means deciding which node will execute an incoming user transaction. Cache invalidation overhead can be minimized by executing transactions of the same class on the same node whenever possible; this technique is called affinity-based transaction routing. However, the arrival rate of each transaction class can change dynamically, and a static affinity-based routing policy alone is insufficient when a particular transaction class becomes congested. This paper proposes a dynamic transaction routing scheme that distributes the load of dynamically changing transaction classes evenly across all nodes of the SD cluster while taking reference locality into account.

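The routing policy sketched in the abstract above, affinity first with a dynamic fallback when one class congests a node, can be outlined roughly as follows. This is an illustrative sketch only, not the authors' algorithm; the affinity table, overload threshold, and all names are assumptions.

```python
# Illustrative sketch of affinity-based routing with a dynamic load-balancing
# fallback, in the spirit of the abstract above (not the paper's actual algorithm).

class AffinityRouter:
    def __init__(self, nodes, overload_threshold=1.5):
        self.nodes = list(nodes)
        self.load = {n: 0 for n in self.nodes}        # running count of active transactions
        self.affinity = {}                            # transaction class -> preferred node
        self.overload_threshold = overload_threshold  # relative to the average node load

    def route(self, txn_class):
        # Prefer the node with affinity for this class, keeping its buffer warm
        # and avoiding cache invalidation between nodes.
        node = self.affinity.setdefault(
            txn_class, min(self.nodes, key=lambda n: self.load[n]))
        avg = sum(self.load.values()) / len(self.nodes)
        # If the affinity node is overloaded (e.g. one class is congested),
        # fall back to the least-loaded node and move the affinity there.
        if avg > 0 and self.load[node] > self.overload_threshold * avg:
            node = min(self.nodes, key=lambda n: self.load[n])
            self.affinity[txn_class] = node
        self.load[node] += 1
        return node

    def finished(self, node):
        self.load[node] -= 1


router = AffinityRouter(nodes=["node1", "node2", "node3"])
print(router.route("order_entry"))   # routed by affinity, rebalanced under congestion
```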

Performance Evaluation of Real-Time Transaction Processing Algorithms in Shared Disks Clusters (공유 디스크 클러스터 기반의 실시간 트랜잭션 처리 알고리즘 성능 평가)

  • 이상호;온경오;조행래
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.82-84 / 2004
  • As applications that require real-time processing, such as Internet-based electronic commerce and management systems, continue to grow, high-performance real-time transaction processing systems are increasingly in demand. However, most real-time systems proposed so far rely on multiprocessor or distributed processing architectures, and no real-time transaction processing system based on cluster technology has yet been proposed. A cluster-based real-time transaction processing system has the advantage of providing, at low cost, high availability and high-performance transaction processing through parallelism. From this point of view, this paper develops an experiment model that integrates traditional shared disks (SD) cluster algorithms, such as cache consistency control and transaction routing, with concurrency control schemes for real-time transactions, in order to build an SD-cluster-based real-time transaction processing system. Through simulation in various environments, we evaluate and analyze the interactions among the algorithms and the performance of the SD cluster in a real-time environment.

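The abstract does not spell out which real-time concurrency control scheme is coupled with the SD cluster algorithms. As a minimal, assumed illustration of deadline-driven transaction dispatching (an Earliest-Deadline-First policy, not necessarily the one used in the paper):

```python
# Minimal sketch of Earliest-Deadline-First (EDF) dispatching of real-time
# transactions; an illustration only, not the experiment model from the paper.
import heapq
import time

class RealTimeScheduler:
    def __init__(self):
        self._queue = []   # min-heap ordered by absolute deadline

    def submit(self, txn_id, deadline):
        heapq.heappush(self._queue, (deadline, txn_id))

    def next_transaction(self, now=None):
        """Return the transaction with the earliest deadline, dropping ones
        that have already missed it (a firm real-time assumption)."""
        now = time.time() if now is None else now
        while self._queue:
            deadline, txn_id = heapq.heappop(self._queue)
            if deadline >= now:
                return txn_id
            # deadline missed: abort instead of executing late
        return None

sched = RealTimeScheduler()
sched.submit("T1", deadline=time.time() + 0.5)
sched.submit("T2", deadline=time.time() + 0.1)
print(sched.next_transaction())   # "T2" runs first under EDF
```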

Design and implementation of a Shared-Concurrent File System in distributed UNIX environment (분산 UNIX 환경에서 Shared-Concurrent File System의 설계 및 구현)

  • Jang, Si-Ung;Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society / v.3 no.3 / pp.617-630 / 1996
  • In this paper, a shared-concurrent file system (S-CFS) is designed and implemented using conventional disks as disk arrays on a Workstation Cluster, which can be used as a small-scale server. Since it is implemented on UNIX operating systems, S-CFS is not only portable and flexible but also efficient in resource usage because it does not require additional I/O nodes. The results show that on small-scale systems with enough disks, the performance of the concurrent file system on transaction processing applications is bounded by the CPUs' computing power, while its performance on massive data I/O is bounded by the time required to copy data between buffers. The concurrent file system, implemented on a Workstation Cluster with 8 disks, shows a throughput of 388 tps for transaction processing applications and provides a bandwidth of 15.8 Mbytes/sec for massive data processing applications. Moreover, the concurrent file system has been designed to enhance the throughput of applications requiring high-performance I/O by letting the user control the parallelism of the concurrent file system.

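As a rough illustration of the parallel I/O idea behind using conventional disks as a disk array, the sketch below stripes a large write across several "disks" in round-robin fashion. The stripe size, file names, and layout are assumptions, not S-CFS internals.

```python
# Illustrative striping of a large buffer across several plain files standing in
# for conventional disks; not the actual S-CFS layout.
import os

def striped_write(data, disk_paths, stripe_size=64 * 1024):
    """Round-robin the stripes of `data` over the given 'disks' (here: directories)."""
    handles = [open(os.path.join(p, "chunk.dat"), "wb") for p in disk_paths]
    try:
        for i in range(0, len(data), stripe_size):
            handles[(i // stripe_size) % len(handles)].write(data[i:i + stripe_size])
    finally:
        for h in handles:
            h.close()

# Usage (directories must exist):
# striped_write(os.urandom(1 << 20), ["/tmp/d0", "/tmp/d1", "/tmp/d2"])
```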

Failure Recovery in the Linux Cluster File System SANique™ (리눅스 클러스터 화일 시스템 SANique™의 오류 회복 기법)

  • Lee, Gyu-Ung
    • The KIPS Transactions:PartA / v.8A no.4 / pp.359-366 / 2001
  • This paper overviews the design of SANique™, a shared file system for Linux clusters based on the SAN environment. SANique™ can transfer user data from network-attached SAN disks to client applications directly, without the control of a centralized file server. The paper also presents the characteristics of each SANique™ subsystem: CFM (Cluster File Manager), CVM (Cluster Volume Manager), CLM (Cluster Lock Manager), CBM (Cluster Buffer Manager) and CRM (Cluster Recovery Manager). Under the SANique™ design layout, the "split-brain" syndrome in shared file system environments is then described and defined. The work first generalizes and illustrates the situations in which a shared file system environment may split into two or more separate pieces of brain. Finally, the work describes the SANique™ approach to the "split-brain" problem using a SAN disk, and develops the overall recovery procedure of the shared file system.

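The abstract states that SANique™ resolves split-brain with the help of a SAN disk but gives no protocol details. The sketch below shows one common pattern, heartbeats written to a shared disk region and checked before fencing; it is an assumed, generic illustration, not SANique™'s actual recovery procedure.

```python
# Assumed illustration of split-brain detection via heartbeats on a shared disk;
# a generic pattern, not SANique's actual protocol. Paths and timeouts are made up.
import json
import time

HEARTBEAT_FILE = "/shared_san_disk/heartbeats.json"   # hypothetical region on the SAN disk
TIMEOUT = 5.0                                          # seconds of silence => node presumed partitioned

def write_heartbeat(node_id, path=HEARTBEAT_FILE):
    try:
        with open(path) as f:
            beats = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        beats = {}
    beats[node_id] = time.time()
    with open(path, "w") as f:
        json.dump(beats, f)

def stale_nodes(path=HEARTBEAT_FILE, timeout=TIMEOUT):
    """Nodes silent on the shared disk are candidates for fencing
    before file-system recovery proceeds."""
    with open(path) as f:
        beats = json.load(f)
    now = time.time()
    return [n for n, t in beats.items() if now - t > timeout]
```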

Performance Evaluation of Real-Time Transaction Processing in a Shared Disk Cluster (공유 디스크 클러스터에서 실시간 트랜잭션 처리의 성능 평가)

  • Lee Sangho;Ohn Kyungoh;Cho Haengrae
    • Journal of KIISE:Databases / v.32 no.2 / pp.142-150 / 2005
  • A shared disks (SD) cluster couples multiple computing nodes, and every node shares a common database at the disk level. A great deal of research indicates that the SD cluster is suitable for high-performance transaction processing, but the combination of the SD cluster with real-time processing has not been investigated. A real-time transaction has not only the ACID properties of traditional transactions but also time constraints. By adopting cluster technology, real-time services can be highly available and can exploit inter-node parallelism. In this paper, we first develop an experiment model of an SD-based real-time database system (SD-RTDBS). Then we investigate the feasibility of real-time transaction processing in the SD cluster using the experiment model. We also evaluate the cross effect of real-time transaction processing algorithms and SD cluster algorithms under a wide variety of database workloads.

EXPERIMENTAL STUDIES ON THE SURFACE ROUGHNESS OF GLASS IONOMER CEMENT RESTORATIONS (Glass Ionomer Cement 수복물(修復物)의 표면거칠기에 관한 실험적 연구)

  • Kim, Kwang-Soon;Lee, Seung-Jong;Lee, Chung-Suck
    • Restorative Dentistry and Endodontics / v.17 no.1 / pp.166-180 / 1992
  • One disadvantage of Glass Ionomer Cement restorations is the difficulty in polishing. To find an appropriate polishing method, we polished the surface of Glass Ionomer Cement restorations by 11 combination methods, serially using disks shared with large-small particles, and evaluated the polishing process in terms of surface roughness, the surface roughness curve, and SEM findings. In addition, a visible light curing type bonding material was applied to evaluate the possible improvement in surface properties. The following results were obtained. 1. The disk surface of the Glass Ionomer Cement was polished serially by disks with superfine particles, but it did not become smooth. 2. The surface of the Microfilled Composite resin became smoother as disks with finer particles were used. 3. When a visible light curing type bonding material was applied in the finishing process, the surface of the Glass Ionomer Cement became as smooth as the applied matrix.


Affinity-based Dynamic Transaction Routing in a Shared Disk Cluster (공유 디스크 클러스터에서 친화도 기반 동적 트랜잭션 라우팅)

  • 온경오;조행래
    • Journal of KIISE:Databases / v.30 no.6 / pp.629-640 / 2003
  • A shared disk (SD) cluster couples multiple nodes for high-performance transaction processing, and all the coupled nodes share a common database at the disk level. In the SD cluster, transaction routing corresponds to selecting a node on which an incoming transaction will be executed. Affinity-based routing can increase the local buffer hit ratio of each node by clustering transactions that reference similar data to be executed on the same node. However, affinity-based routing is not adaptive to changes in the system load, and thus a specific node will be overloaded if transactions of some class are congested. In this paper, we propose a dynamic transaction routing scheme that can achieve an optimal balance between affinity-based routing and dynamic load balancing across all the nodes in the SD cluster. The proposed scheme is novel in the sense that it can improve system performance by increasing the local buffer hit ratio and reducing the buffer invalidation overhead.

Performance Evaluation of Buffer Replacement Algorithms in a Shared Disk Cluster (공유 디스크 클러스터에서 버퍼 교체 알고리즘의 성능 평가)

  • Cho, Haeng-Rae
    • Journal of KIISE:Databases / v.35 no.6 / pp.469-480 / 2008
  • A shared disk (SD) cluster couples multiple nodes for high-performance transaction processing, and all the coupled nodes share a common database at the disk level. To reduce the number of disk accesses, each node caches database pages in its memory buffer. Since a particular page may be cached simultaneously in different nodes, cache consistency should be maintained to ensure that nodes can always access the most recent version of database pages. Most cache consistency schemes proposed for the SD cluster adopt LRU as the buffer replacement algorithm. In this paper, we first present four buffer replacement algorithms that consider the characteristics of the SD cluster. Then we compare the performance of the buffer replacement algorithms. We perform experiments on a variety of cluster configurations and database workloads. The experimental results show that the proposed algorithms achieve a performance improvement of up to five times over the LRU algorithm.
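
The abstract names LRU as the baseline that the four proposed algorithms are compared against but does not describe those algorithms themselves. A minimal LRU page-buffer sketch for that baseline (names and sizes assumed) looks like this:

```python
# Minimal LRU page buffer, the baseline replacement policy the abstract compares
# against; the four SD-cluster-aware algorithms are not detailed in the abstract.
from collections import OrderedDict

class LRUBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> page contents, oldest first

    def get(self, page_id, read_from_disk):
        if page_id in self.pages:              # buffer hit: refresh recency
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        page = read_from_disk(page_id)         # buffer miss: fetch and cache
        self.pages[page_id] = page
        if len(self.pages) > self.capacity:    # evict the least recently used page
            self.pages.popitem(last=False)
        return page

buf = LRUBuffer(capacity=2)
buf.get("p1", read_from_disk=lambda p: f"<{p}>")
buf.get("p2", read_from_disk=lambda p: f"<{p}>")
buf.get("p3", read_from_disk=lambda p: f"<{p}>")   # evicts p1
print(list(buf.pages))                              # ['p2', 'p3']
```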

Disk Cache Manager based on Minix3 Microkernel : Design and Implementation (Minix3 마이크로커널 기반 디스크 캐쉬 관리자의 설계 및 구현)

  • Choi, Wookjin;Kang, Yongho;Kim, Seonjong;Kwon, Hyeogsoong;Kim, Jooman
    • Journal of Digital Convergence / v.11 no.11 / pp.421-427 / 2013
  • In this work, a Disk Cache Manager (DCM), a microkernel-based functional server that improves the I/O performance of shared disks, is designed and implemented. DCM runs as a multi-threaded system actor on the Minix3 microkernel and interfaces with other servers by message passing through ports. The proposed DCM uses the shared disk logically as a Seven Disk and a Sodd Disk to enable parallel I/O. DCM also enables efficient placement of disk data because it raises the disk cache hit ratio by increasing the cache size when the utilization of a particular disk is high. Experimental results show that DCM is quite efficient for shared disks with high utilization.
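
The abstract's point that DCM grows the cache quota of a heavily utilized disk can be illustrated roughly as below; the utilization thresholds, growth step, and data structures are assumptions, not details taken from the paper.

```python
# Rough illustration of growing a per-disk cache quota when that disk's
# utilization is high, as the abstract describes; all thresholds are assumed.
def rebalance_cache(quotas, utilization, total_frames, hot_threshold=0.8, step=64):
    """quotas: dict disk_id -> cache frames; utilization: dict disk_id -> value in [0, 1]."""
    for disk, util in utilization.items():
        if util > hot_threshold and sum(quotas.values()) + step <= total_frames:
            quotas[disk] += step          # give a busy disk more cache frames
        elif util < 0.2 and quotas[disk] > step:
            quotas[disk] -= step          # reclaim frames from an idle disk
    return quotas

print(rebalance_cache({"d0": 256, "d1": 256}, {"d0": 0.9, "d1": 0.1}, total_frames=768))
# -> {'d0': 320, 'd1': 192}: the hot disk gains cache, the idle disk gives some back
```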

A Load Balancing Method using Partition Tuning for Pipelined Multi-way Hash Join (다중 해시 조인의 파이프라인 처리에서 분할 조율을 통한 부하 균형 유지 방법)

  • Mun, Jin-Gyu;Jin, Seong-Il;Jo, Seong-Hyeon
    • Journal of KIISE:Databases / v.29 no.3 / pp.180-192 / 2002
  • We investigate the effect of data skew in the join attributes on the performance of a pipelined multi-way hash join method, and propose two new hash join methods for the shared-nothing multiprocessor environment. The first proposed method allocates buckets statically in round-robin fashion, and the second allocates buckets dynamically via a frequency distribution. Using hash-based joins, multiple joins can be pipelined so that the early results from a join, before the whole join is completed, are sent to the next join for processing without being staged on disk. The shared-nothing multiprocessor architecture is known to be more scalable for supporting very large databases. However, this hardware structure is very sensitive to data skew. Unless the pipelined execution of multiple hash joins includes some dynamic load balancing mechanism, the skew effect can severely deteriorate system performance. In this paper, we derive an execution model of the pipeline segment and a cost model, and develop a simulator for the study. Our simulation over a wide range of parameters, join selectivities, and relation sizes shows that system performance deteriorates as the degree of data skew grows, but that the proposed method, using a large number of buckets and a tuning technique, offers substantial robustness against a wide range of skew conditions.
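
The two allocation policies the abstract contrasts, static round-robin versus allocation driven by a frequency distribution of the skewed join attribute, can be sketched as follows. The data structures and names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the two bucket-allocation policies contrasted in the
# abstract: static round-robin versus frequency-driven allocation under skew.
def round_robin_allocation(num_buckets, processors):
    """Static policy: bucket i always goes to processor i mod P, ignoring skew."""
    return {b: processors[b % len(processors)] for b in range(num_buckets)}

def frequency_based_allocation(bucket_sizes, processors):
    """Dynamic policy: assign the largest buckets first, each to the currently
    least-loaded processor, to counter data skew."""
    load = {p: 0 for p in processors}
    allocation = {}
    for bucket, size in sorted(bucket_sizes.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)   # greedy, least-loaded-first assignment
        allocation[bucket] = target
        load[target] += size
    return allocation

skewed = {0: 900, 1: 50, 2: 40, 3: 10}                    # tuple counts per hash bucket
print(round_robin_allocation(4, ["P0", "P1"]))             # ignores the skew entirely
print(frequency_based_allocation(skewed, ["P0", "P1"]))    # spreads load by bucket size
```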