• Title/Summary/Keyword: Transaction Processing (트랜잭션처리)

Historical Data, Transaction and Database for Industrial Monitoring and Control Applications (산업감시 및 제어 응용을 위한 이력 데이터, 트랜잭션 그리고 데이터베이스)

  • Han, Sang-Hyuck;Kim, Young-Kuk
    • Annual Conference of KIPS
    • /
    • 2012.04a
    • /
    • pp.1051-1053
    • /
    • 2012
  • Industrial control systems such as SCADA, DCS, and PLC contribute to early risk prediction, rapid response, and improved process quality by monitoring and controlling national infrastructure such as electricity, water, transportation, gas, and oil. An industrial control system consists of an HMI (Human Machine Interface), a historical database, and the hardware and software technologies of each sensor; among these, the historical database is the key component for effectively processing the digital and analog historical data arriving in real time. At present, Korea relies mostly on foreign products such as commercial historians, so research on the underlying technology and related industrialization is required, and a study of the types and characteristics of historical databases must come first. This paper examines in detail the historical databases commonly applied to industrial control systems and compares general-purpose data with the historical data and transactions used in industrial control systems, in order to improve understanding of what a historical database for industrial control applications should look like.
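For readers unfamiliar with historian workloads, the following Python sketch illustrates, under our own assumptions rather than anything stated in the paper, the kind of append-only, time-ordered tag history such a database must handle; the names TagSample, TagHistory, and PUMP_01.PRESSURE are invented for illustration.

```python
# Minimal sketch (not from the paper) of the append-only, time-ordered
# tag history an industrial historian must handle.
from bisect import bisect_left, bisect_right
from collections import defaultdict
from dataclasses import dataclass
from time import time

@dataclass
class TagSample:
    tag_id: str      # hypothetical sensor point name
    timestamp: float # arrival time in seconds
    value: float     # analog value, or 0.0/1.0 for digital points
    quality: str     # "good"/"bad", as historians typically flag samples

class TagHistory:
    """Append-only per-tag history with time-range queries."""
    def __init__(self):
        self._by_tag = defaultdict(list)  # tag_id -> time-ordered samples

    def append(self, sample: TagSample) -> None:
        # Samples arrive in real time, so appending keeps each list ordered.
        self._by_tag[sample.tag_id].append(sample)

    def range_query(self, tag_id: str, t_start: float, t_end: float):
        samples = self._by_tag[tag_id]
        times = [s.timestamp for s in samples]
        return samples[bisect_left(times, t_start):bisect_right(times, t_end)]

if __name__ == "__main__":
    h = TagHistory()
    now = time()
    h.append(TagSample("PUMP_01.PRESSURE", now, 4.2, "good"))
    h.append(TagSample("PUMP_01.PRESSURE", now + 1, 4.3, "good"))
    print(len(h.range_query("PUMP_01.PRESSURE", now, now + 2)))  # -> 2
```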

Concurrent blockchain architecture with small node network (소규모 노드로 구성된 고속 병렬 블록체인 아키텍처)

  • Joi, YongJoon;Shin, DongMyung
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.2
    • /
    • pp.19-29
    • /
    • 2021
  • Blockchain technology satisfies the trust requirement and is now entering a new stage focused on performance. However, current blockchain technology has significant disadvantages in scalability and latency because of its architecture. Therefore, to adopt blockchain technology in real industry, the performance issue must be overcome by redesigning the blockchain architecture. This paper introduces several element technologies and a novel blockchain architecture, TPAC, which preserves blockchain's technical advantages while providing more stable and faster transaction processing and lower latency.

Metadata Management Method for Consistency and Recency in Digital Library (디지탈 도서관 환경에서 일관성과 최근성을 고려한 메타데이타 관리 방법)

  • Lee, Hai-Min;Park, Seog
    • Journal of KIISE:Databases
    • /
    • v.27 no.1
    • /
    • pp.22-32
    • /
    • 2000
  • The digital library is an integrated system of an Information Retrieval System (IRS) and a Database Management System (DBMS). In a digital library environment where dynamic query and update processing is required, however, existing transaction management methods cause the following problems. First, the traditional consistency criterion is too restrictive, so it increases query processing time and cannot guarantee that recency is reflected. Second, query results can be unreliable because no consistency criterion between source data and metadata is defined. This paper models access to Dublin Core-based metadata as query transactions and update transactions and presents an efficient method to manage them. In particular, it describes a metadata consistency criterion that takes into consideration the consistency between the result of a query transaction and the status of the source data in the digital library, which differs from the consistency criteria of traditional transaction management. It also analyzes query transactions from the viewpoint of recency and proposes a metadata management method that guarantees recency within metadata consistency.
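As a rough illustration of the recency idea discussed above, and not the paper's actual consistency criterion, the sketch below stamps each hypothetical metadata record with the source-data version it reflects and refuses to answer a query once the source has moved beyond an assumed recency bound (max_lag).

```python
# Rough illustration (not the paper's algorithm) of checking whether a
# Dublin Core-style metadata record is stale relative to its source data.
from dataclasses import dataclass

@dataclass
class SourceDocument:
    doc_id: str
    version: int = 0          # bumped by each update transaction

@dataclass
class MetadataRecord:
    doc_id: str
    fields: dict              # e.g. {"dc:title": ..., "dc:creator": ...}
    source_version: int = 0   # source version the metadata reflects

def query_metadata(meta: MetadataRecord, source: SourceDocument,
                   max_lag: int = 0) -> dict:
    """Return the metadata only if it is recent enough w.r.t. the source.

    max_lag is a hypothetical recency bound: 0 means the metadata must
    reflect the latest committed source version.
    """
    if source.version - meta.source_version > max_lag:
        raise RuntimeError("metadata is stale; refresh before answering")
    return meta.fields

if __name__ == "__main__":
    src = SourceDocument("doc-42", version=3)
    md = MetadataRecord("doc-42",
                        {"dc:title": "Sample", "dc:creator": "Lee"},
                        source_version=3)
    print(query_metadata(md, src))   # consistent and recent
    src.version += 1                 # an update transaction commits
    # query_metadata(md, src) would now raise until the metadata is refreshed
```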

Finding Frequent Itemsets based on Open Data Mining in Data Streams (데이터 스트림에서 개방 데이터 마이닝 기반의 빈발항목 탐색)

  • Chang, Joong-Hyuk;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.10D no.3
    • /
    • pp.447-458
    • /
    • 2003
  • The basic assumption of conventional data mining methodology is that the data set of a knowledge discovery process should be fixed and available before the process can proceed. Consequently, this assumption is valid only when the static knowledge embedded in a specific data set is the target of data mining. In addition, a conventional data mining method requires considerable computing time to produce the result of mining from a large data set. For these reasons, it is almost impossible to apply such a method to a real-time analysis task in a data stream, where new transactions are continuously generated and the up-to-date mining result including the newly generated transactions is needed as quickly as possible. In this paper, a new mining concept, open data mining in a data stream, is proposed for this purpose. In open data mining, whenever a transaction is newly generated, the updated mining result over all transactions including the new one is obtained instantly. In order to implement this mechanism efficiently, it is necessary to incorporate the delayed insertion of newly identified information in recent transactions as well as the pruning of insignificant information in the mining result of past transactions. The proposed algorithm is analyzed through a series of experiments in order to identify its various characteristics.
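The abstract above describes updating the mining result per incoming transaction while pruning insignificant information. As a generic, simplified illustration of that idea, and not the authors' algorithm, the sketch below keeps approximate per-item counts over a transaction stream and prunes entries that can no longer reach the minimum support; min_support and error are assumed parameters.

```python
# Generic single-pass sketch (not the paper's exact algorithm) of keeping
# an up-to-date set of frequent items over a transaction stream, with
# pruning of insignificant entries as new transactions arrive.
from collections import defaultdict

class StreamFrequentItems:
    def __init__(self, min_support: float, error: float):
        self.min_support = min_support   # e.g. 0.01 -> 1% of transactions
        self.error = error               # pruning slack, error < min_support
        self.n = 0                       # transactions seen so far
        self.counts = defaultdict(int)

    def add_transaction(self, items) -> None:
        self.n += 1
        for item in set(items):
            self.counts[item] += 1
        # Prune entries whose count is too small to matter within the
        # allowed error; this keeps the in-memory summary bounded.
        threshold = self.error * self.n
        for item in [i for i, c in self.counts.items() if c < threshold]:
            del self.counts[item]

    def frequent_items(self):
        cutoff = (self.min_support - self.error) * self.n
        return {i: c for i, c in self.counts.items() if c >= cutoff}

if __name__ == "__main__":
    miner = StreamFrequentItems(min_support=0.5, error=0.1)
    for tx in [["a", "b"], ["a", "c"], ["a"], ["b", "a"]]:
        miner.add_transaction(tx)        # result is updated per transaction
    print(miner.frequent_items())        # "a" and "b" survive the cutoff
```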

Column-aware Transaction Management Scheme for Column-Oriented Databases (컬럼-지향 데이터베이스를 위한 컬럼-인지 트랜잭션 관리 기법)

  • Byun, Si-Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.4
    • /
    • pp.125-133
    • /
    • 2014
  • Column-oriented database storage is a well-suited model for large-volume data analysis systems because of its superior I/O performance. Traditional data stores use row-oriented storage, in which the attributes of a record are placed contiguously on disk for fast write operations. However, for read-mostly data warehouse systems, column-oriented storage has become the more appropriate model because of its superior read performance. Recently, solid-state drives using MLC flash memory have become widely recognized as the preferred storage media for high-speed data analysis systems. Non-volatility, low power consumption, and fast read access times are sufficient grounds to adopt flash memory as a major storage component of modern database servers. However, the traditional transaction management scheme needs to be improved, because column compression and flash operations are relatively slow compared to RAM. In this research, we propose a new scheme called Column-aware Multi-Version Locking (CaMVL) for efficient transaction processing. CaMVL improves transaction performance by using compression locks and multi-version reads to efficiently handle slow flash write/erase operations in the lock management process. We also propose a simulation model to show the performance of CaMVL. Based on the results of the performance evaluation, we conclude that the CaMVL scheme outperforms the traditional scheme.
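The sketch below is a rough, invented illustration of the multi-version idea mentioned above, not the actual CaMVL protocol: a writer prepares a new compressed column version outside the lock (standing in for the slow compression and flash work), while readers keep reading the last committed version without blocking. Values come back as strings in this toy encoding.

```python
# Rough illustration (not the actual CaMVL protocol) of multi-version reads
# on a column store: a writer prepares a new compressed column version while
# readers continue to see the last committed version without blocking.
import threading
import zlib

class VersionedColumn:
    def __init__(self, values):
        self._lock = threading.Lock()              # taken only to commit
        self._versions = [self._compress(values)]  # committed versions

    @staticmethod
    def _compress(values):
        return zlib.compress(",".join(map(str, values)).encode())

    @staticmethod
    def _decompress(blob):
        return zlib.decompress(blob).decode().split(",")

    def read_latest(self):
        # Readers never wait for the (slow) compression/flash write:
        # they just take the most recently committed version.
        return self._decompress(self._versions[-1])

    def write(self, new_values):
        blob = self._compress(new_values)  # slow part done outside the lock
        with self._lock:                   # short critical section: commit
            self._versions.append(blob)

if __name__ == "__main__":
    col = VersionedColumn([1, 2, 3])
    t = threading.Thread(target=col.write, args=([1, 2, 3, 4],))
    t.start()
    print(col.read_latest())   # sees a committed version, old or new
    t.join()
    print(col.read_latest())   # now sees ['1', '2', '3', '4']
```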

Performance Analysis of High Technologies in Main Memory DBMS ALTIBASE (주기억 장치 DBMS ALTIBASE의 요소기술 성능평가)

  • Lee Kyu-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.3
    • /
    • pp.1-8
    • /
    • 2005
  • ALTIBASE is a relational main-memory DBMS that enables the development of high-performance, fault-tolerant applications. It guarantees short and predictable execution times as well as the basic functionality of a conventional disk-based DBMS. We present an overview of the system architecture and a performance analysis with respect to various design choices. Assorted experiments are performed under various environments, and the results of the TPC-H and Wisconsin benchmark tests are described. We illustrate performance comparisons under various index mechanisms, replication models, and transaction durability settings. The performance study shows that the ALTIBASE system can be applied to a wide range of industrial DBMS fields.

T-Tree Index Structures Utilizing Prefetch Methods (프리패치 기법을 적용한 T-트리 인덱스 구조)

  • Lee, Ig-Hoon;Shim, Jun-Ho
    • The Journal of Society for e-Business Studies
    • /
    • v.14 no.4
    • /
    • pp.119-131
    • /
    • 2009
  • Over the past decade, e-commerce environments supporting real-time transaction processing have grown larger. In telecommunication and financial environments, main-memory database systems have been researched and built to support real-time transaction processing. Research on indexing for fast transaction support focuses on reducing cache misses or hiding memory access latency when cache misses occur. In this paper, we propose a prefetch method for tree index structures to reduce memory access latency. We present a prefetch-efficient pCST-tree and show the superiority of the proposed tree through experiments.
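As background only, the sketch below shows a plain T-tree-style node, a sorted, bounded key array with left and right subtrees, which is the layout such cache-conscious and prefetch-based variants operate on; the prefetch itself is a hardware-level hint that cannot be expressed in Python, and the pCST-tree details are not reproduced here.

```python
# Minimal sketch of a T-tree node (not the paper's pCST-tree): each node
# holds a sorted, bounded array of keys, so a lookup touches only a few
# cache lines per node. The prefetch optimization would be issued while
# descending and is omitted here.
from bisect import bisect_left, insort

class TTreeNode:
    MAX_KEYS = 8                      # node sized to a few cache lines

    def __init__(self):
        self.keys = []                # sorted keys, bounded by MAX_KEYS
        self.left = None              # subtree with keys < min(self.keys)
        self.right = None             # subtree with keys > max(self.keys)

    def search(self, key):
        node = self
        while node is not None:
            if node.keys and key < node.keys[0]:
                node = node.left      # a real prefetch would hint node.left here
            elif node.keys and key > node.keys[-1]:
                node = node.right
            else:
                i = bisect_left(node.keys, key)
                return i < len(node.keys) and node.keys[i] == key
        return False

if __name__ == "__main__":
    root = TTreeNode()
    for k in [10, 20, 30]:
        insort(root.keys, k)          # toy insert into a single node
    print(root.search(20), root.search(25))   # True False
```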

A Study on Efficient Ethereum Smart Contract (효율적인 이더리움 스마트 콘트랙트에 관한 연구)

  • Kim, Dae Han;Choi, KwangHoon;Kim, Kangseok;Kim, Jai-Hoon
    • Annual Conference of KIPS
    • /
    • 2018.10a
    • /
    • pp.82-84
    • /
    • 2018
  • This paper studies how to structure smart contracts efficiently in order to reduce the load (cost) incurred when issuing transactions on the Ethereum network. To reduce the load on the Ethereum network, the number of transactions generated matters, but it is also important to deploy efficient smart contracts that keep transaction size small and to call functions with simple structures. We therefore evaluate performance according to the structure of Ethereum smart contracts and study how to construct a smart contract that shows optimal performance. Optimal performance is evaluated for situations in which the same data can be stored, and the evaluation is based on comparing the gas cost incurred when storing data on the blockchain. As evaluation items, we compare and analyze the gas cost according to the structure and number of data items during contract deployment and function calls, and study the structure that allows functions to be called and smart contracts to be created and deployed at the lowest gas cost.
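As a back-of-the-envelope illustration of why data structure and item count drive gas cost, the sketch below compares storing eight small counters in one 32-byte storage slot each versus packed into a single slot, using approximate, fork-dependent gas constants (21,000 base transaction gas and roughly 20,000 gas per fresh non-zero storage slot); it is not taken from the paper's measurements.

```python
# Back-of-the-envelope sketch (not the paper's measurements) of why data
# layout matters for gas cost: persistent storage is charged per 32-byte
# slot, so packing values into fewer slots is cheaper than one slot each.
# Constants are approximate; exact costs depend on the EVM fork.
TX_BASE_GAS = 21_000          # flat cost of any transaction
SSTORE_NEW_SLOT_GAS = 20_000  # writing a non-zero value into a fresh slot

def storage_gas(num_values: int, values_per_slot: int) -> int:
    """Rough gas for storing num_values, packing values_per_slot per slot."""
    slots = -(-num_values // values_per_slot)   # ceiling division
    return TX_BASE_GAS + slots * SSTORE_NEW_SLOT_GAS

if __name__ == "__main__":
    # Eight 32-bit counters: one uint256 slot each vs. packed into one slot.
    print(storage_gas(8, values_per_slot=1))   # 181000
    print(storage_gas(8, values_per_slot=8))   #  41000
```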