• Title/Summary/Keyword: Database Management Systems

Search Results: 1,021

Load Balancing Method for Query Processing Based on Cache Management in the Grid Database (그리드 데이터베이스에서 질의 처리를 위한 캐쉬 관리 기반의 부하분산 기법)

  • Shin, Soong-Sun;Back, Sung-Ha;Eo, Sang-Hun;Lee, Dong-Wook;Kim, Gyoung-Bae;Chung, Weon-Il;Bae, Hae-Young
    • Journal of Korea Multimedia Society / v.11 no.7 / pp.914-927 / 2008
  • Grid database management systems are used for large-scale data processing, high availability, and data integration in grid computing. They also distribute queries across nodes for efficient query processing. However, when query processing is concentrated on one node, the workload becomes imbalanced and query throughput drops. In this paper we propose a load balancing method for query processing based on cache management in grid databases. The proposed method manages the cache of each node through a cache manager: the cache manager attaches a node to an area group and maintains the cached meta-information of that node. Each node caches meta-information, which the cache manager propagates to the other nodes, and the workload of a node is distributed by using this cached meta-information. Experiments show a clear improvement over existing methods when the proposed algorithm is adopted.

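The abstract above does not spell out the routing rule, but the general idea, keeping each node's load as cached meta-information that a cache manager propagates within an area group and consults when distributing queries, can be sketched roughly as follows. All class and method names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: routing queries with cached per-node meta-information.
# Names (CacheManager, NodeMeta, route_query) are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class NodeMeta:
    node_id: str
    active_queries: int = 0      # load figure cached and propagated between nodes

@dataclass
class CacheManager:
    """Keeps the cached meta-information for one area group of nodes."""
    group: Dict[str, NodeMeta] = field(default_factory=dict)

    def register(self, node_id: str) -> None:
        self.group[node_id] = NodeMeta(node_id)

    def propagate(self, node_id: str, active_queries: int) -> None:
        # Refresh the cached meta-information that the other nodes consult.
        self.group[node_id].active_queries = active_queries

    def route_query(self) -> str:
        # Send the next query to the node whose cached load is lowest.
        target = min(self.group.values(), key=lambda m: m.active_queries)
        target.active_queries += 1
        return target.node_id

manager = CacheManager()
for n in ("node-a", "node-b", "node-c"):
    manager.register(n)
manager.propagate("node-a", 5)
print(manager.route_query())     # "node-b": the loaded node-a is avoided
```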

An Efficient Altruistic Locking Protocol for the Mobile Transaction Management System (이동 데이터베이스 시스템을 위한 효율적인 이타적 잠금기법)

  • 권혁신;김세윤;김응모
    • Journal of Information Technology Applications and Management / v.11 no.1 / pp.53-67 / 2004
  • We propose an advanced transaction scheduling protocol that improves concurrency and guarantees mobility for mobile database management systems. The mobility, portability, and wireless links of a mobile computing environment introduce drawbacks that make concurrency control harder. Nevertheless, a locking scheme should be used to guarantee data consistency and prevent data conflicts. Data consistency is commonly guaranteed by standard transaction scheduling schemes such as two-phase locking (2PL), which has two operations, lock and unlock, but 2PL does not provide a solution for mobile systems. Altruistic Locking (AL) combined with transaction classification, which we adopt, can address these problems. AL, as an advanced protocol, attempts to reduce the delay associated with the moment of lock release through the use of donation. In this paper, we extend these approaches and classify transactions to reduce the delays that long-lived transactions impose on short-lived ones. In addition, we present an efficient solution for the case where disconnection occurs. Our protocol, Mobile Altruistic Locking (MAL), is shown to reduce delay effects and to guarantee database consistency under unreliable connections in mobile database systems.

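As a rough illustration of the donation idea behind Altruistic Locking, the toy lock table below lets a long-lived transaction donate an object it has finished with, so a short-lived transaction can lock it before the donor commits instead of waiting as under plain 2PL. This is a minimal sketch of the concept only; it omits the wake-ordering rules of real AL and is not the paper's MAL protocol.

```python
# Toy altruistic-locking table: donated objects may be re-locked early.
# Names and semantics are simplified assumptions, not the paper's protocol.
class ALLockTable:
    def __init__(self):
        self.holder = {}       # obj -> transaction that holds the lock
        self.donated = set()   # objects the holder has donated (finished using)

    def lock(self, txn, obj):
        if obj not in self.holder:
            self.holder[obj] = txn
            return True
        if obj in self.donated:            # donation lets txn proceed early,
            self.donated.discard(obj)      # "in the wake" of the donor
            self.holder[obj] = txn
            return True
        return False                       # plain 2PL behaviour: wait or abort

    def donate(self, txn, obj):
        # Donor signals it no longer needs obj, although it has not committed yet.
        if self.holder.get(obj) == txn:
            self.donated.add(obj)

    def unlock(self, txn, obj):
        if self.holder.get(obj) == txn:
            del self.holder[obj]
            self.donated.discard(obj)

table = ALLockTable()
table.lock("T_long", "account_42")
table.donate("T_long", "account_42")        # long transaction is done with it
print(table.lock("T_short", "account_42"))  # True: no need to wait for T_long's commit
```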

Estimating The Number of Hierarchical Distinct Values using Arrays of Attribute Value Intervals (속성값 구간 배열을 이용한 계층 상이값 갯수의 계산 기법)

  • Song, Ha-Joo;Kim, Hyoung-Joo
    • Journal of KIISE: Computing Practices and Letters / v.6 no.2 / pp.265-273 / 2000
  • In relational database management systems (RDBMSs), a table consists of a set of records, each composed of a set of attributes. The number of distinct values (NDV) of an attribute denotes the number of distinct attribute values that actually appear in the database records, and it is widely used in query optimization and in supporting statistical queries. Object-relational database management systems (ORDBMSs), however, support inheritance between tables, which forces an attribute defined in a super-table to be inherited by its sub-tables automatically. Hence, in ORDBMSs, we need not only the NDV of an attribute in a single table but also the NDV of an attribute over multiple tables (HNDV). In this paper, we propose a method that calculates HNDV using arrays of attribute value intervals. In this method, an array of attribute value intervals is created for the attribute of interest in each table of a table hierarchy, and HNDV can be calculated or estimated by merging these arrays. The proposed method calculates HNDV accurately using little additional storage space and is efficient in environments where only some of the tables in a table hierarchy are frequently updated.

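The abstract does not give the exact interval construction, but the core step, merging per-table arrays of attribute value intervals and reading a distinct-value estimate off the merged result, might look like the following sketch. The integer intervals and the optimistic counting rule are assumptions for illustration.

```python
# Hypothetical sketch: estimating the distinct-value count over a table hierarchy
# by merging per-table arrays of attribute value intervals.
def merge_intervals(interval_arrays):
    """Merge [lo, hi] integer intervals collected from several tables."""
    intervals = sorted(iv for arr in interval_arrays for iv in arr)
    merged = []
    for lo, hi in intervals:
        if merged and lo <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], hi)   # overlaps or touches: extend
        else:
            merged.append([lo, hi])
    return merged

def estimate_hndv(interval_arrays):
    # Assume (optimistically) that every integer inside a kept interval occurs.
    return sum(hi - lo + 1 for lo, hi in merge_intervals(interval_arrays))

super_table = [[1, 10], [20, 25]]   # intervals for the attribute in the super-table
sub_table = [[8, 15]]               # intervals for the inherited attribute in a sub-table
print(estimate_hndv([super_table, sub_table]))   # 15 + 6 = 21
```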

Developing a Design Support System for Database Distribution : Relational Schema as an Input (데이터베이스 분산 설계 지원 시스템 개발: 관계형 스키마를 입력으로)

  • Lee, Hui-Seok;Ji, Jeong-Kang;Kim, Tae-Hun;Yeo, Ji-Yeong
    • Asia Pacific Journal of Information Systems / v.6 no.1 / pp.165-194 / 1996
  • Data distribution is one of the most important steps in distributed database design. This paper proposes a methodology for distributing a database using a relational schema as input. A design support system called DBDD (Database Distribution Design) has been implemented within the framework of this methodology. The methodology consists of three phases: (i) schema analysis, (ii) fragmentation, and (iii) allocation. In the schema analysis phase, all table names are acquired from a global relational schema. In the fragmentation phase, fragments are generated according to transaction retrieval patterns. DBDD then enhances design quality by allocating fragments in a progressive manner. A real-life case is presented to demonstrate the practical usefulness of DBDD.

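A minimal sketch of the three phases named above (schema analysis, fragmentation, allocation) is given below. The data structures and the simple allocate-where-retrieved rule are assumptions, not DBDD's actual algorithms.

```python
# Hypothetical sketch of the three DBDD phases; all details are illustrative.
def analyze_schema(schema):
    """Phase (i): collect the table names from a global relational schema."""
    return list(schema.keys())

def fragment(tables, access_patterns):
    """Phase (ii): one fragment per (table, site) pair that actually queries it."""
    return [(table, site)
            for table in tables
            for site, touched in access_patterns.items()
            if table in touched]

def allocate(fragments):
    """Phase (iii): place each fragment at the site that retrieves it."""
    placement = {}
    for table, site in fragments:
        placement.setdefault(site, []).append(table)
    return placement

schema = {"customer": ["id", "name"], "orders": ["id", "cust_id", "total"]}
access_patterns = {"seoul": {"customer", "orders"}, "busan": {"orders"}}
print(allocate(fragment(analyze_schema(schema), access_patterns)))
# {'seoul': ['customer', 'orders'], 'busan': ['orders']}
```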

An Intelligent Search Modeling using Avatar Agent

  • Kim, Dae Su
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.3 / pp.288-291 / 2004
  • This paper proposes an intelligent search model using an avatar agent. The system consists of modules such as the agent interface, agent management, the preprocessor, and the interface machine; the Core-Symbol Database and Spell Checker belong to the preprocessor module, and the Interface Machine is connected with the Best Aggregate Designer. The avatar agent performs indexing, converting the user's natural-language sentence into words suitable for retrieval in a specific domain. Indexing is a preprocessing step that preserves the specificity of the user's input and increases the reliability of the result; it consults a database consisting of a synonym dictionary and a domain-specific dictionary. The symbols produced by indexing are used for a draft search by an internet search engine, and the retrieved page positions and link information are stored in the database. We tested the system with the stock-market keywords SAMSUNG_SDI, IBM, and SONY and compared the results with those of the AltaVista and Google search engines; the system showed excellent results.

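A tiny sketch of the indexing step described above, reducing a natural-language query to domain symbols via a synonym dictionary and a domain-specific dictionary before the draft search, might look like this. The dictionaries and the example query are made-up illustrations, not the paper's data.

```python
# Hypothetical indexing sketch; dictionaries and query are illustrative only.
SYNONYMS = {"shares": "stock", "price": "quote"}
DOMAIN_TERMS = {"stock", "quote", "SAMSUNG_SDI", "IBM", "SONY"}

def index_query(sentence):
    """Map each word to its canonical symbol and keep only domain terms."""
    words = sentence.replace("?", "").split()
    canonical = [SYNONYMS.get(w.lower(), w) for w in words]
    return [w for w in canonical if w in DOMAIN_TERMS or w.upper() in DOMAIN_TERMS]

print(index_query("What is the price of SAMSUNG_SDI shares?"))
# ['quote', 'SAMSUNG_SDI', 'stock'] -- symbols passed on to the draft web search
```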
Implementation of Mobile Client-Server System for Real-Time Processing (실시간 처리 기반의 모바일 클라이언트-서버 시스템 구현)

  • Joo, Hae-Jong;Hong, Bong-Wha
    • The Journal of Information Technology / v.9 no.4 / pp.35-47 / 2006
  • Many studies address the issues and problems of mobile database systems that are caused by the weak connectivity of wireless networks and by the mobility and portability of mobile clients. Mobile computing satisfies users' demands for the convenience and performance of using information at any time and in any place, but it leaves many data-management problems to be solved. The purpose of our study is to implement a real-time Mobile Query Processing System (MQPS) that addresses database hoarding, the maintenance of shared data consistency, and the optimization of logging, problems caused by the weak connectivity and disconnection of wireless networks in mobile client-server environments. In addition, we demonstrate the superiority of the proposed MQPS by comparing its performance with that of the C-I-S (Client-Intercept-Server) model.

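Two of the problems the abstract names, database hoarding and logging under disconnection, can be illustrated with the following sketch: a client caches (hoards) records, logs writes while disconnected, and replays the log on reconnection. The class and field names are assumptions, not the MQPS design.

```python
# Hypothetical hoarding/logging sketch; not the paper's MQPS implementation.
class HoardingClient:
    def __init__(self, server):
        self.server = server
        self.hoard = {}       # locally cached (hoarded) records
        self.log = []         # updates made while disconnected
        self.connected = True

    def read(self, key):
        if self.connected:
            self.hoard[key] = self.server.get(key)   # refresh the hoard
        return self.hoard.get(key)

    def write(self, key, value):
        self.hoard[key] = value
        if self.connected:
            self.server[key] = value
        else:
            self.log.append((key, value))            # defer until reconnection

    def reconnect(self):
        self.connected = True
        for key, value in self.log:                  # replay the pending log
            self.server[key] = value
        self.log.clear()

server = {"item:1": "pen"}
client = HoardingClient(server)
client.read("item:1")
client.connected = False
client.write("item:1", "pencil")   # logged locally while disconnected
client.reconnect()
print(server["item:1"])            # "pencil"
```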

A Study on Disability Database and Applicable System to Provide Continuous and Comprehensive Rehabilitation Service (지속적·포괄적 재활서비스 제공을 위한 장애인 통합 데이터베이스 및 활용체계 연구)

  • Lee, Hee-Yeon;Ho, Seung-Hee;Kang, Hyun-Gyu;Lee, Seung-Young
    • Health Policy and Management / v.21 no.2 / pp.279-308 / 2011
  • Background: Demands have increased for a variety of welfare services and customized services for persons with disabilities (PWD), and a management system focused on PWD was needed to provide comprehensive services. The purpose of this study was to design a disability database and an application system in order to provide continuous and comprehensive rehabilitation services for PWD. Methods: We analyzed local and foreign disability-related policies and systems and, through a survey among rehabilitation specialists, derived the contents that should be included in the integrated database for PWD. Results: The integrated database for PWD was composed of seven categories: General Characteristics, Health & Medicine, Assistance, Education, Employment, Economics, and Daily & Social Life. An application system for the integrated database was proposed to support policies in the areas of welfare, education and culture, economic activity, social participation, and health. Conclusion: The main goals of disability policy and strategy should be established by systematically analyzing disability-related data and integrating the database for PWD. Accordingly, specific objectives and directions for disability policies should be set and efficiently managed and operated. The integrated database for PWD may be utilized for monitoring disability-related policies and services, for sustainable and integrated management, and for community participation and integration based on the rights of the disabled.

A Database System for High-Throughput Transposon Display Analyses of Rice

  • Inoue, Etsuko;Yoshihiro, Takuya;Kawaji, Hideya;Horibata, Akira;Nakagawa, Masaru
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.15-20 / 2005
  • We developed a database system to enable efficient, high-throughput transposon analyses in rice. We grow large-scale mutant series of rice by taking advantage of the active MITE transposon mPing and apply the transposon display method to them to study the correlation between genotypes and phenotypes. However, the analytical phase, in which mutation spots are identified from waveform data called fragment profiles, poses several problems in terms of labor, data management, and reliability of the results. As a solution, our database system manages all the analytical data throughout the experiments and provides several functions and well-designed web interfaces for performing the overall analysis reliably and efficiently.


Building A PDM/CE Environment and Validating Integrity Using STEP (STEP을 이용한 PDM/CE환경의 구축과 데이타 무결성 확인)

  • 유상봉;서효원;고굉욱
    • The Journal of Society for e-Business Studies / v.1 no.1 / pp.173-194 / 1996
  • In order to adapt to today's short product life cycles and rapid technology changes, integrated systems should be extended to support PDM (Product Data Management) or CE (Concurrent Engineering). A PDM/CE environment has been developed, and a prototype is presented in this paper. The features of the PDM/CE environment are: 1) the integrated product information model (IPIM) includes both a data model and integrity constraints; 2) the database systems are organized hierarchically, so that working data cannot be referenced by other application systems until they are released into the global database; and 3) integrity constraints written in EXPRESS are validated both in the local databases and in the global database. By keeping the integrity of the product data, undesirable propagation of illegal data to other application systems can be prevented. For efficient validation, the constraints are distributed over the local and global schemata, and separate triggering mechanisms are devised based on the dependency of the constraints on three data operations, i.e., insertion, deletion, and update.

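A rough sketch of the validation scheme described above, with constraints grouped by the data operation that triggers them and checked before working data is released from a local database into the global one, could look like this. The predicates here are stand-ins, not EXPRESS rules from the paper's IPIM.

```python
# Hypothetical sketch of per-operation constraint triggering and release-time
# validation; the constraint predicates are illustrative assumptions.
class ConstraintStore:
    def __init__(self):
        # operation name -> list of constraint predicates to check
        self.by_operation = {"insert": [], "delete": [], "update": []}

    def register(self, operation, predicate):
        self.by_operation[operation].append(predicate)

    def validate(self, operation, record, database):
        return all(p(record, database) for p in self.by_operation[operation])

def release(record, local_constraints, global_constraints, global_db):
    """Move locally finished data into the global database if both levels pass."""
    if not local_constraints.validate("insert", record, global_db):
        return False
    if not global_constraints.validate("insert", record, global_db):
        return False
    global_db[record["part_id"]] = record
    return True

local_rules, global_rules = ConstraintStore(), ConstraintStore()
local_rules.register("insert", lambda r, db: r["weight"] > 0)          # local sanity check
global_rules.register("insert", lambda r, db: r["part_id"] not in db)  # global uniqueness
global_db = {}
print(release({"part_id": "P-100", "weight": 2.5}, local_rules, global_rules, global_db))  # True
```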

Optimal Savepoint in a Loosely-Coupled Resilient Database System (느슨히 결합된 데이타베이스 시스템에서 최적의 저장점 유도)

  • Choe, Jae-Hwa;Kim, Seong-Eon
    • Asia Pacific Journal of Information Systems / v.6 no.1 / pp.21-38 / 1996
  • This paper investigates performance improvement opportunities offered by a resiliency mechanism in a distributed primary/backup database system. Recognizing that a distributed transaction executes at several servers during its lifetime, we propose a resiliency mechanism that allows continuous transaction processing in distributed database server systems in the presence of a server failure. To continue transaction processing despite failures, every state change of a transaction could be saved in the backup server; obviously, such pessimistic synchronization may burden the system more than it benefits it. Thus, the tracking need not be done synchronously with the transaction's progress; instead, the state of all transaction processing in the system is saved periodically. This activity is referred to as a savepoint, and the question is how often the savepoint should be taken. We derive the optimal savepoint interval and identify the optimization parameters for the resilient transaction processing system.

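The abstract does not state the cost model, but the trade-off behind an optimal savepoint interval, where frequent savepoints add overhead while infrequent ones mean more lost work must be redone after a failure, can be sketched numerically as below. The cost function and parameter values are assumptions, not the authors' derivation.

```python
# Hypothetical cost model: with savepoint interval T, savepoint cost C, and
# failure rate lam, overhead per unit time is roughly C/T (savepointing)
# plus lam * T/2 (expected lost work redone after a failure).
import math

def overhead(T, C, lam):
    return C / T + lam * T / 2.0

def best_interval(C, lam, candidates):
    return min(candidates, key=lambda T: overhead(T, C, lam))

C, lam = 5.0, 0.02                         # illustrative parameter values
candidates = [t / 10.0 for t in range(1, 1000)]
T_star = best_interval(C, lam, candidates)
print(round(T_star, 1), round(math.sqrt(2 * C / lam), 1))
# 22.4 22.4 -- the grid minimum matches the analytic sqrt(2*C/lam) of this toy model
```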