• Title/Summary/Keyword: Transaction Update

Search results: 62

Update Protocols for Web-Based GIS Applications (웹 기반 GIS 응용을 위한 변경 프로토콜)

  • An, Seong-U;Seo, Yeong-Deok;Kim, Jin-Deok;Hong, Bong-Hui
    • Journal of KIISE:Databases / v.29 no.4 / pp.321-333 / 2002
  • As web-based services become more and more popular, concurrent updates of spatial data must be supported in web environments. Web-based GIS applications are characterized by serving large quantities of data that must be continuously updated according to various user requirements. In such a large data-serving system, it is inefficient for the server to perform all of the spatial-data updates requested by clients. Moreover, the HTTP protocol used on the web is built on the assumptions of being 'connectionless' and 'stateless', so many problems arise if a transaction-processing scheme designed for the LAN environment is applied directly to the web. Especially for long transactions that update spatial data, it is very difficult to control concurrency among clients and to keep the server data consistent. This paper proposes a way to preserve consistency while spatial data are updated directly on the client side, by resolving the Dormancy Region Lock problem caused by the 'connectionless' and 'stateless' nature of HTTP. The RX (Region-eXclusive) lock, combined with the periodic sending of ALIVE_CLIENTi messages, solves this problem. The protocol is verified as effective through an implementation in CyberMap, a main-memory spatial database system.
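The crux of the protocol is keeping a region-exclusive lock alive across a stateless HTTP session: the client must keep announcing itself, or its lock can be reclaimed. Below is a minimal sketch of that idea, assuming a lease-based reading of the ALIVE_CLIENTi message; the class, method names, and timeout are invented, and the paper's actual protocol is richer than this.

```python
import time
import threading

LEASE_SECONDS = 30.0  # hypothetical lease length, not from the paper

class RegionLockManager:
    def __init__(self):
        self._locks = {}          # region_id -> (client_id, lease expiry)
        self._mutex = threading.Lock()

    def acquire_rx(self, region_id, client_id):
        """Grant the RX lock if the region is free or its lease has expired."""
        now = time.monotonic()
        with self._mutex:
            holder = self._locks.get(region_id)
            if holder is None or holder[1] < now:
                self._locks[region_id] = (client_id, now + LEASE_SECONDS)
                return True
            return holder[0] == client_id  # re-entrant for the same client

    def alive(self, region_id, client_id):
        """Handle an ALIVE message from a client: refresh the holder's lease."""
        now = time.monotonic()
        with self._mutex:
            holder = self._locks.get(region_id)
            if holder and holder[0] == client_id and holder[1] >= now:
                self._locks[region_id] = (client_id, now + LEASE_SECONDS)
                return True
            return False  # lease already expired; the client must restart

    def release(self, region_id, client_id):
        with self._mutex:
            if self._locks.get(region_id, (None,))[0] == client_id:
                del self._locks[region_id]
```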

The Consistency Management Using Trees of Replicated Data Items in Partially Replicated Database (부분 중복 데이터베이스에서 중복 데이터의 트리를 이용한 일관성 유지)

  • Bae, Mi-Sook;Hwang, Bu-Hyun
    • The KIPS Transactions:PartD / v.10D no.4 / pp.647-654 / 2003
  • Data replication is used to increase availability and to improve system performance. A distributed database system must maintain both database consistency and replica consistency. This paper proposes an algorithm that resolves operation conflicts using a structure in which the replicas of each data item are organized hierarchically. Each update is propagated along the tree, with the root of each data item's tree serving as the primary replica in the partially replicated database. Using a hierarchy of data can eliminate useless propagation, since updates are sent only to sites that hold replicas; as a consequence, the propagation delay of updates may be reduced. Using timestamps and compensating transactions, the algorithm resolves the non-serializability caused by operation conflicts that can arise during update propagation under lazy propagation. This resolution also guarantees data consistency.
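As a picture of the propagation rule: each data item's replicas form a tree rooted at the primary copy, and an update flows only down that tree. The sketch below uses invented class names and drops stale updates by timestamp, where the paper would apply timestamps together with a compensating transaction.

```python
class ReplicaNode:
    """One site holding a replica of a data item, in a tree rooted
    at the primary replica (per the abstract)."""
    def __init__(self, site_id):
        self.site_id = site_id
        self.children = []        # only sites that hold a replica appear here
        self.value = None
        self.timestamp = 0.0      # timestamp of the last applied update

    def apply_update(self, value, timestamp):
        # An older in-flight update arriving late is simply ignored here;
        # the paper resolves such conflicts with timestamps plus
        # compensating transactions.
        if timestamp > self.timestamp:
            self.value, self.timestamp = value, timestamp
        # Lazy propagation: forward down the tree to replica holders only,
        # so sites without a replica never see the update.
        for child in self.children:
            child.apply_update(value, timestamp)
```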

Design and Implementation of a Main-memory Storage System for Real-time Retrievals (실시간 검색을 위한 다중 사용자용 주기억장치 자료저장 시스템 개발)

  • Kwon, Oh-Su;Hong, Dong-Kweon
    • The KIPS Transactions:PartD / v.10D no.2 / pp.187-194 / 2003
  • A main-memory storage system can increase overall performance by giving real-time transactions enough slack time. Owing to the fast response time of main-memory devices, main-memory-resident data management systems have been used for location management of mobile clients, where location-related operations are urgent. In this paper we develop a multi-threaded main-memory storage system as the core component of a real-time retrieval system, handling large numbers of readers and writers of memory-resident data. The storage system is implemented as an embedded component that works alongside a disk-resident database system. It uses multi-threaded execution and relies on latches for concurrency control rather than a complex locking method. Only the most recent data are kept in main memory, and synchronization occurs only when the disk-resident database requests the update transactions. The system controls the number of read threads and update threads to guarantee the minimum requirements of real-time retrieval.
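A rough sketch of the storage engine's shape: latches (plain short-held mutexes) instead of a lock manager, plus semaphores that cap the number of concurrent reader and updater threads. The caps and names below are illustrative assumptions; the abstract does not give the actual policy.

```python
import threading

MAX_READERS = 64   # hypothetical caps, standing in for the abstract's
MAX_UPDATERS = 4   # "controls the number of read threads and update threads"

class MainMemoryStore:
    def __init__(self):
        self._data = {}
        self._latch = threading.Lock()          # short-held latch, no lock manager
        self._read_slots = threading.Semaphore(MAX_READERS)
        self._write_slots = threading.Semaphore(MAX_UPDATERS)

    def read(self, key):
        with self._read_slots:                  # bound reader concurrency
            with self._latch:                   # latch held only for the access
                return self._data.get(key)

    def update(self, key, value):
        with self._write_slots:                 # bound updater concurrency
            with self._latch:
                self._data[key] = value
            # Per the abstract, only the most recent value is kept in memory;
            # synchronization with the disk-resident DBMS happens only when
            # that DBMS asks for the update transactions (not modeled here).
```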

An Active Candidate Set Management Model on Association Rule Discovery using Database Trigger and Incremental Update Technique (트리거와 점진적 갱신기법을 이용한 연관규칙 탐사의 능동적 후보항목 관리 모델)

  • Hwang, Jeong-Hui;Sin, Ye-Ho;Ryu, Geun-Ho
    • Journal of KIISE:Databases / v.29 no.1 / pp.1-14 / 2002
  • Association rule discovery is a method of mining associated item sets from large databases based on support and confidence thresholds. The discovered rules can be applied to marketing-pattern analysis in e-commerce, large shopping malls, and so on. Because association rule discovery makes multiple scans over a database storing large volumes of transaction data, such a high-overhead algorithm is not suitable for real-time rule discovery in a dynamic environment. This paper therefore proposes an active candidate-set management model based on database triggers and an incremental update mechanism to overcome the non-real-time limitation of association rule discovery. To realize the proposed model, we describe an implementation of the incremental update operation and evaluate the performance characteristics of the model experimentally.
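The active part of the model is that a trigger fires once per inserted transaction and hands it to an incremental routine, so candidate counts are adjusted without rescanning history. A toy stand-in for that routine is sketched below; itemsets are capped at pairs for brevity, the trigger itself would be defined in the DBMS, and `on_insert` is a hypothetical hook, not the paper's interface.

```python
from itertools import combinations

class CandidateSetManager:
    """Keeps support counts for 1- and 2-itemsets incrementally, as a
    stand-in for the trigger-driven candidate management in the abstract."""
    def __init__(self, min_support):
        self.min_support = min_support
        self.counts = {}          # frozenset(itemset) -> count
        self.n_transactions = 0

    def on_insert(self, transaction):
        """Called by a DB trigger for each new transaction (hypothetical hook)."""
        self.n_transactions += 1
        items = sorted(set(transaction))
        for k in (1, 2):
            for itemset in combinations(items, k):
                key = frozenset(itemset)
                self.counts[key] = self.counts.get(key, 0) + 1

    def frequent_itemsets(self):
        """Current frequent itemsets, with no rescan of earlier transactions."""
        return {s: c for s, c in self.counts.items()
                if c / self.n_transactions >= self.min_support}

mgr = CandidateSetManager(min_support=0.5)
for t in (["a", "b"], ["a", "c"], ["a", "b", "d"]):
    mgr.on_insert(t)
print(mgr.frequent_itemsets())   # {'a'}, {'b'}, {'a','b'} pass the 0.5 threshold
```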

A Study about Performance Evaluation of Various NoSQL Databases (다양한 NoSQL 데이터베이스의 성능 평가 연구)

  • Park, Hong-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.3 / pp.298-305 / 2016
  • Various NoSQL databases process large volumes of big data better than existing relational databases such as MySQL, PostgreSQL, and Oracle. Among the widely used NoSQL databases, the performance of HBase, Cassandra, MongoDB, and Redis was comparatively assessed. For distributed processing of large data volumes, 12 servers were connected through a switching hub, with Ubuntu installed as the operating system and YCSB applied as the benchmark tool. Read/update ratios of 50%/50%, 95%/5%, and 100%/0% were tested, each starting at 200,000 operations and scaled up to 1,200,000 operations. Cassandra delivered the best throughput in transactions per second, while MongoDB completed the most operations per unit time.
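For reference, the workload mixes above map directly onto YCSB core-workload properties; a file along the following lines would reproduce the 95/5 case. The record count and request distribution below are placeholders, since the paper's exact values are not stated in the abstract.

```
# YCSB core workload: 95% reads / 5% updates (one of the three mixes tested)
workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=1000000        # placeholder; the paper's record count is not given
operationcount=200000      # scaled up to 1,200,000 in the paper's runs
readproportion=0.95
updateproportion=0.05
scanproportion=0
insertproportion=0
requestdistribution=zipfian
```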

A Recovery Scheme of SSD-based Databases using Snapshot Log (스냅샷 로그를 사용한 SSD 기반 데이터베이스 복구 기법)

  • Lim, Seong-Chae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.4 / pp.85-91 / 2019
  • In this paper, we propose a new logging and recovery scheme suited to high-performance transaction processing systems based on flash-memory storage. The proposed scheme is designed around flash's asymmetric I/O costs between page update and read operations. Specifically, we substitute the costly page update with the writing and real-time use of a snapshot log, which serves page-level physical redo. This avoids the costly rewriting of a dirty page when it is evicted from the buffer pool, while still supporting an efficient recovery procedure. The proposed scheme would not be profitable in an HDD-based system; on flash storage such as an SSD (solid state drive), however, it offers performance advantages such as a reduced number of page updates and a fast system recovery time. Because the scheme stores our snapshot records alongside ordinary log records, it can easily be applied to existing systems and can improve the performance of upcoming SSD-based database systems with only a tiny modification to existing redo algorithms.
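The key move is logging a physical page image (snapshot) when a dirty page would otherwise be rewritten on eviction, then replaying those images during redo. A schematic sketch follows, with all structure names invented for illustration; the paper's actual record formats and LSN handling are not described in the abstract.

```python
# Sketch: on eviction, append a page-level snapshot record instead of
# rewriting the dirty page in place (a cheap sequential write on SSD);
# recovery redoes each page from its latest snapshot. Illustrative only.

class BufferPool:
    def __init__(self, log):
        self.pages = {}        # page_id -> (page bytes, dirty flag)
        self.log = log         # append-only snapshot log (a list here)

    def evict(self, page_id):
        data, dirty = self.pages.pop(page_id)
        if dirty:
            # Instead of an in-place page rewrite, append a snapshot record.
            self.log.append(("SNAPSHOT", page_id, data))

def recover(log, stable_pages):
    """Page-level physical redo: the latest snapshot of each page wins."""
    for kind, page_id, data in log:
        if kind == "SNAPSHOT":
            stable_pages[page_id] = data   # later records overwrite earlier ones
    return stable_pages
```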

A Stability Verification of Backup System for Disaster Recovery (재해 복구를 위한 백업 시스템의 안정성 검증)

  • Lee, Moon-Goo
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.205-214 / 2012
  • The main concern of IT operations managers is protecting corporate assets from system failures and disasters. This research therefore proposes a backup system for disaster recovery. In the conventional backup method, each database update is recorded in the redo log, and once the log file exceeds its expected size, it is archived in order; data loss can therefore occur while backing up data that changes in real time as the database is modified. The proposed backup system backs the redo log up to a transaction-log database in real time and backs up, to the archive log, records that the conventional method could miss. During recovery, the redo log can be restored online in real time, minimizing data loss. Data recovery is also performed with multi-threaded processing, and the system is designed for improved performance. To verify the stability of the backup system, CPN (Coloured Petri Nets) is introduced: each step of the backup system is displayed in diagram form, and stability is verified based on the definitions and theorems of CPN.
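The real-time half of the proposal amounts to shipping each redo record to the backup as it is produced, instead of waiting for a size-triggered archive copy. A minimal queue-based sketch of that loop is given below with invented names; the CPN stability verification is a modeling exercise, not code.

```python
import queue
import threading

redo_queue = queue.Queue()   # redo records enqueued as the DB writes them

def backup_shipper(backup_store):
    """Ships each redo record to the backup in real time, so records a
    size-triggered archive copy could miss are captured immediately."""
    while True:
        record = redo_queue.get()
        if record is None:      # shutdown sentinel
            break
        backup_store.append(record)

store = []
t = threading.Thread(target=backup_shipper, args=(store,), daemon=True)
t.start()
redo_queue.put({"lsn": 1, "op": "UPDATE", "row": 42})   # illustrative record
redo_queue.put(None)
t.join()
```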

Scheduling of Concurrent Transactions in Broadcasting Environment

  • Al-Qerem, Ahmad;Hamarsheh, Ala;Al-Lahham, Yaser A.;Eleyat, Mujahed
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.4 / pp.1655-1673 / 2018
  • The mobile computing environment is subject to the constraints of limited network bandwidth, frequent disconnections, insufficient battery power, and system asymmetry. To meet these constraints and to gain high scalability, data broadcasting has been proposed as a data transmission technique. However, updates made to the database in any broadcast cycle are deferred to the next cycle, so mobile clients see data with lower currency. The main goal of this paper is to enhance transaction-processing performance and data currency. The approach decomposes the main broadcast cycle into a number of sub-cycles in which data items are broadcast in their original main-cycle sequence but in their most current versions. A concurrency control method, AOCCRBSC, is proposed to work with this cycle decomposition. The method exploits predeclaration and adapts the AOCCRB method by customizing its prefetching, back-off, and partial backward and forward validation techniques. As a result, more than one of the conflicting transactions may commit at the server within the same broadcast cycle, which improves the processing of both update and read-only transactions as well as data currency.
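The cycle decomposition can be read as the server replaying the main cycle's item sequence several times per cycle, each pass substituting the latest committed versions, so an update surfaces after at most one sub-cycle. A simplified server-side sketch under that reading follows; the AOCCRBSC validation logic itself is much more involved and is not shown.

```python
def broadcast_main_cycle(item_ids, current_version, num_subcycles, send):
    """item_ids: the item sequence of the main broadcast cycle;
    current_version: callable returning the latest committed value of an item;
    send: callable(item_id, value) that puts one item version on the channel."""
    for _ in range(num_subcycles):
        # Items keep their original main-cycle ordering within each sub-cycle,
        # but each is sent in its most current version (per the abstract),
        # so updates become visible after one sub-cycle, not one main cycle.
        for item_id in item_ids:
            send(item_id, current_version(item_id))
```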

An improvement of LEM2 algorithm

  • The, Anh-Pham;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference / 2011.06a / pp.302-304 / 2011
  • Rule-based machine learning techniques are very important in the real world today. Important applications of rule-based machine learning include medical data mining and business transaction mining. The difference between rule-based and model-based machine learning is that model-based methods output models that are often very difficult for experts or other people to understand, whereas rule-based techniques output rule sets in IF-THEN format, for example: IF blood pressure = 90 AND kidney problem = yes THEN take this drug. In this way, a medical doctor can easily modify and update usable rules, which is the typical scenario in a medical decision support system. Rough set theory is currently one of the best-known theories for producing such rules, and LEM2 is an algorithm based on it that can produce a small rule set from a database. In this paper, we present an improvement of the LEM2 algorithm that incorporates variable-precision techniques.
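For context, base LEM2 builds each rule greedily: it repeatedly adds the attribute-value pair covering the most still-uncovered positive examples until the rule's block falls entirely inside the concept, then prunes redundant conditions. A compact sketch of that base algorithm follows; it assumes a consistent decision table and omits the variable-precision relaxation that this paper adds.

```python
def lem2(rows, concept):
    """Toy LEM2 local covering. rows: list of dicts (attribute -> value);
    concept: set of row indices in the target decision class.
    Returns a list of rules, each a set of (attribute, value) conditions."""
    def block(pair):
        a, v = pair
        return {i for i, r in enumerate(rows) if r.get(a) == v}

    def covered(rule):
        s = set(range(len(rows)))
        for p in rule:
            s &= block(p)
        return s

    all_pairs = {(a, v) for r in rows for a, v in r.items()}
    covering, uncovered = [], set(concept)
    while uncovered:
        rule, goal = set(), set(uncovered)
        while not rule or not covered(rule) <= concept:
            cands = [p for p in all_pairs - rule if block(p) & goal]
            # LEM2 heuristic: most overlap with the goal, ties to smaller blocks
            best = max(cands, key=lambda p: (len(block(p) & goal), -len(block(p))))
            rule.add(best)
            goal &= block(best)
        for p in list(rule):                      # drop redundant conditions
            if len(rule) > 1 and covered(rule - {p}) <= concept:
                rule.discard(p)
        covering.append(rule)
        uncovered -= covered(rule)
    return covering

rows = [{"bp": "high", "kidney": "yes"},
        {"bp": "high", "kidney": "no"},
        {"bp": "low",  "kidney": "yes"}]
print(lem2(rows, {0}))   # one rule, e.g. [{('bp', 'high'), ('kidney', 'yes')}]
```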

A Study on the DB-IR Integration: Per-Document Basis Online Index Maintenance

  • Jin, Du-Seok;Jung, Hoe-Kyung
    • Journal of information and communication convergence engineering / v.7 no.3 / pp.275-280 / 2009
  • While databases (DB) and information retrieval (IR) have developed independently, requirements have emerged for information systems that support both data management and efficient text retrieval simultaneously, such as health care, customer support, XML data management, and digital libraries. The great divide between DB and IR has led to different approaches to index maintenance for newly arriving documents. DB has extended its SQL layer to cope with text fields, lacking an intact mechanism for building an IR-like index, while IR usually treats a block of new documents as the logical unit of index maintenance, since it has no concept of integrity constraints. In DB-IR integration, however, a transaction that adds or updates a document must also maintain the posting lists associated with that document. Although DB-IR integration has budded as a research field, it will remain a difficult and rewarding area for a while, and one of the primary reasons is the lack of efficient online transactional index maintenance. In this paper, the performance of several strategies for per-document transactional index maintenance - direct index update, pulsing auxiliary index, and posting-segmentation index - is evaluated. The results show that the pulsing auxiliary strategy and the posting-segmentation indexing scheme are promising candidates for text-field indexing in DB-IR integration.
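Among the three strategies, the pulsing auxiliary index is the easiest to sketch: postings for a new document enter a small in-memory auxiliary index within the document's transaction, and the auxiliary index is merged ("pulsed") into the main index once it crosses a size threshold. The sketch below uses invented names and an in-memory dict in place of the on-disk index; posting segmentation and the transactional wrapping are not shown.

```python
PULSE_THRESHOLD = 10000   # hypothetical cap on postings in the auxiliary index

class PulsingIndex:
    def __init__(self):
        self.main = {}        # term -> doc_id list (stands in for on-disk index)
        self.aux = {}         # small in-memory auxiliary index
        self.aux_size = 0

    def add_document(self, doc_id, terms):
        """Per-document index maintenance: one document, one 'transaction'."""
        for term in set(terms):
            self.aux.setdefault(term, []).append(doc_id)
            self.aux_size += 1
        if self.aux_size >= PULSE_THRESHOLD:
            self._pulse()

    def _pulse(self):
        # Merge the auxiliary postings into the main index in one pass,
        # amortizing the cost of updating long on-disk posting lists.
        for term, postings in self.aux.items():
            self.main.setdefault(term, []).extend(postings)
        self.aux, self.aux_size = {}, 0

    def search(self, term):
        # Queries must consult both indexes until the next pulse.
        return self.main.get(term, []) + self.aux.get(term, [])
```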