• Title/Summary/Keyword: Main-memory storage system


Real time Storage Manager to store very large data using block transaction (블록 단위 트랜잭션을 이용한 대용량 데이터의 실시간 저장관리기)

  • Baek, Sung-Ha;Lee, Dong-Wook;Eo, Sang-Hun;Chung, Warn-Ill;Kim, Gyoung-Bae;Oh, Young-Hwan;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.10 no.2
    • /
    • pp.1-12
    • /
    • 2008
  • Automated semiconductor manufacturing systems generate between 50,000 and 500,000 transactions per second and therefore need a storage management system that can process very large volumes of data at once. Many storage management systems have been studied for storing very large data, but the existing ones are typically disk-based DBMSs, and it is difficult for a disk-based DBMS to process 500,000 insert transactions per second. Main-memory DBMSs were introduced to exploit memory, but the limited amount of memory makes it difficult to store very large data in them. In this paper we propose a storage management system that uses block-unit insert transactions and can process more than 50,000 insert transactions per second while keeping storage cost low. Transforming tuple-unit transactions into block-unit transactions reduces the logging and indexing cost paid for each tuple. However, because blocks are stored compressed, field-level information inside a block is not directly accessible, which would otherwise force every block to be decompressed during a search. To solve this problem, the proposed system builds an index for each compressed block so that search speed is not reduced. The proposed system can store the very large data generated by semiconductor systems and reduce storage cost.

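The block-unit insert idea in the abstract above can be illustrated with a short sketch. The following Python snippet is only an illustration under assumed names (BlockStore, BLOCK_SIZE), not the paper's implementation: tuple inserts are buffered and flushed as one compressed block, so logging and indexing are paid once per block, and a per-block key-range index lets lookups decompress only the relevant block.

```python
import bisect
import pickle
import zlib

BLOCK_SIZE = 1024  # tuples per block-unit transaction (assumed value)

class BlockStore:
    def __init__(self):
        self.pending = []   # tuples waiting to be flushed as one block
        self.blocks = []    # (min_key, max_key, compressed block) per flushed block
        self.log = []       # one log record per block instead of one per tuple

    def insert(self, key, value):
        self.pending.append((key, value))
        if len(self.pending) >= BLOCK_SIZE:
            self._flush_block()

    def _flush_block(self):
        block = sorted(self.pending, key=lambda t: t[0])
        keys = [k for k, _ in block]
        # One compressed payload, one index entry and one log record per block.
        self.blocks.append((keys[0], keys[-1], zlib.compress(pickle.dumps(block))))
        self.log.append(("BLOCK_INSERT", keys[0], keys[-1]))
        self.pending.clear()

    def lookup(self, key):
        # Only blocks whose key range covers `key` are decompressed.
        for lo, hi, payload in self.blocks:
            if lo <= key <= hi:
                block = pickle.loads(zlib.decompress(payload))
                keys = [k for k, _ in block]
                i = bisect.bisect_left(keys, key)
                if i < len(keys) and keys[i] == key:
                    return block[i][1]
        return None
```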

A Flash Memory Swap System for Mobile Computers (모바일 컴퓨터를 위한 플래시 메모리 스왑 시스템)

  • Jeon, Seon-Su;Ryu, Yeon-Seung
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.9
    • /
    • pp.1272-1284
    • /
    • 2010
  • As mobile computers become more powerful and are used like general-purpose computers, their operating systems also require swap functionality that uses main memory efficiently. Flash memory is widely used as the storage device in mobile computers, but the current Linux swap system does not take flash memory into account. A swap system is tightly coupled with process execution, since it stores the contents of the processes being executed. Taking advantage of this characteristic, in this paper we study a new Linux swap system called PASS (Process-Aware Swap System), which allocates different flash memory blocks to each process. Trace-driven experimental results show that PASS outperforms the existing Linux swap system with existing garbage collection schemes in terms of garbage collection cost.
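
As a rough illustration of the process-aware allocation described above (the class and constants below are hypothetical, not the PASS code), each process receives its own flash blocks, so when a process exits its blocks hold no live pages of other processes and can be reclaimed without copying:

```python
from collections import defaultdict

PAGES_PER_BLOCK = 64  # assumed flash block geometry

class ProcessAwareSwap:
    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))
        self.blocks_of = defaultdict(list)   # pid -> flash blocks owned by that process
        self.fill = {}                       # block -> pages written so far

    def swap_out(self, pid, page):
        blocks = self.blocks_of[pid]
        if not blocks or self.fill[blocks[-1]] == PAGES_PER_BLOCK:
            blk = self.free_blocks.pop()     # allocate a fresh block to this pid only
            blocks.append(blk)
            self.fill[blk] = 0
        blk = blocks[-1]
        self.fill[blk] += 1
        return (blk, self.fill[blk] - 1)     # (block, page offset) for the page table

    def process_exit(self, pid):
        # Every block of the process holds only its own pages,
        # so reclamation needs no copying of live pages.
        for blk in self.blocks_of.pop(pid, []):
            self.fill.pop(blk, None)
            self.free_blocks.append(blk)
```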

Application-aware Design Parameter Exploration of NAND Flash Memory

  • Bang, Kwanhu;Kim, Dong-Gun;Park, Sang-Hoon;Chung, Eui-Young;Lee, Hyuk-Jun
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.13 no.4
    • /
    • pp.291-302
    • /
    • 2013
  • NAND flash memory (NFM) based storage devices, e.g. Solid State Drives (SSDs), are rapidly replacing conventional storage devices, e.g. Hard Disk Drives (HDDs). As NAND flash memory technology advances, its specification has evolved to support denser cells and larger pages and blocks. However, efforts to fully understand the impact of these changes on design objectives such as performance, power, and cost for various applications are often neglected. Our research shows that this recent trend can adversely affect the design objectives depending on the characteristics of applications. Past works mostly focused on improving specific design objectives of NFM-based systems via various architectural solutions when the specification of NFM is given. Several other works attempted to model and characterize NFM but did not assess the system-level impact of individual parameters. To the best of our knowledge, this paper is the first work that treats the specification of NFM as the design parameters of NAND flash storage devices (NFSDs) and analyzes the characteristics of various synthesized and real traces and their interaction with these design parameters. Our research shows that the optimal design parameters depend heavily on the characteristics of applications. The main contribution of this research is to understand the effects of low-level specifications of NFM, e.g. cell type, page size, and block size, on system-level metrics such as performance, cost, and power consumption in applications with different characteristics, e.g. request length, update ratio, and read-and-modify ratio. Experimental results show that optimized page and block sizes can achieve up to 15 times better performance than the conventional NFM configuration in various applications. The results can be used to optimize the system-level objectives of a system with specific applications, e.g. embedded systems with NFM chips, or to predict the future direction of NFM.
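
A design-parameter exploration of this kind can be sketched as a simple sweep. The cost model and the numbers below are assumptions made purely for illustration, not the paper's model; the point is only that the best page/block configuration depends on the request trace:

```python
from itertools import product

def trace_cost(trace, page_size, block_size):
    """Crude cost model (assumed): per-page transfer cost plus a small
    garbage-collection penalty for sub-page updates."""
    cost = 0.0
    for offset, length, is_write in trace:
        pages = -(-length // page_size)            # ceiling: pages touched
        cost += pages * (2.5 if is_write else 1.0)
        if is_write and length < page_size:
            cost += (block_size / page_size) * 0.1
    return cost

def explore(trace, page_sizes=(2048, 4096, 8192), block_pages=(64, 128, 256)):
    # Sweep all (page size, pages per block) combinations, keep the cheapest.
    return min(product(page_sizes, block_pages),
               key=lambda cfg: trace_cost(trace, cfg[0], cfg[0] * cfg[1]))

if __name__ == "__main__":
    # Tiny synthetic trace of (byte offset, length, is_write) requests.
    trace = [(0, 4096, False), (4096, 512, True), (8192, 65536, False)]
    print(explore(trace))
```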

Implementation of Main Memory DBMS for Efficient Transactions based on Embedded System (임베디드 시스템 상에서의 고속 트랜잭션을 위한 메인메모리 기반 데이터베이스 시스템 구현)

  • Kim, Young-Hwan;Son, Jae-Gi;Park, Chang-Won
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.769-770
    • /
    • 2008
  • A Main Memory DataBase (MMDB) system stores its data in physical main memory and provides very high-speed access. Conventional database systems are optimized for the particular characteristics of disk storage mechanisms. Memory-resident systems, on the other hand, use different optimizations to structure and organize data, as well as to make it reliable. This paper provides a brief overview of MMDBs and reports the results of evaluating the performance of our simple MMDB on an embedded system.

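A toy sketch of the memory-resident idea (not the authors' MMDB; the file name and log format are assumptions): reads are served entirely from an in-memory table, while a simple append-only log provides the reliability that disk-oriented systems get from their storage layer.

```python
import os

class TinyMMDB:
    def __init__(self, log_path="mmdb.log"):
        self.table = {}
        self.log_path = log_path
        if os.path.exists(log_path):              # recover state by replaying the log
            with open(log_path) as f:
                for line in f:
                    key, _, value = line.rstrip("\n").partition("=")
                    self.table[key] = value

    def put(self, key, value):
        with open(self.log_path, "a") as f:       # log before updating memory
            f.write(f"{key}={value}\n")
        self.table[key] = value

    def get(self, key):
        return self.table.get(key)                # pure in-memory read, no disk I/O
```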

Design of Main-Memory Database Prototype System using Fuzzy Checkpoint Technique in Real-Time Environment (실시간 시스템에서 퍼지 검사점을 이용한 주기억 데이터베이스 프로토타입 시스템의 설계)

  • Park, Yong-Mun;Lee, Chan-Seop;Choe, Ui-In
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.6
    • /
    • pp.1753-1765
    • /
    • 2000
  • As the areas of computer application expand, real-time application environments that must process as many transactions as possible within their deadlines, such as stock trading systems and ATM switching systems, have recently increased. Conventional database systems cannot serve soft real-time applications because they lack predictability and perform poorly at meeting transaction deadlines. When transactions access data stored on secondary storage, they cannot satisfy real-time requirements because of disk delay. This paper designs a main-memory database prototype system suited to real-time applications; it can produce results quickly without disk I/O because all information is loaded into the main-memory database. We propose improved techniques for logging, checkpointing, and recovery in this environment. To improve system performance, a) the frequency of log analysis and redo processing after a system failure is reduced by the proposed redo technique, and b) database consistency is maintained by improved fuzzy checkpointing. A performance model consisting of two parts is also proposed: the first part evaluates log processing time for recovery and compares it with other work, and the second part examines checkpointing behavior.

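The fuzzy-checkpoint and redo-reduction ideas can be sketched as follows. The structures below are hypothetical and greatly simplified relative to the prototype in the paper: the checkpointer copies pages without blocking writers and remembers the checkpoint LSN, so recovery replays only the tail of the log.

```python
class FuzzyCheckpointDB:
    def __init__(self):
        self.pages = {}            # in-memory database pages
        self.log = []              # redo log: (lsn, page_id, new_value)
        self.lsn = 0
        self.checkpoint = (0, {})  # (checkpoint LSN, page snapshot)

    def write(self, page_id, value):
        self.lsn += 1
        self.log.append((self.lsn, page_id, value))  # redo record written first
        self.pages[page_id] = value

    def fuzzy_checkpoint(self):
        # Pages are copied while transactions keep running; the snapshot may be
        # slightly stale, which is why the checkpoint LSN is remembered.
        self.checkpoint = (self.lsn, dict(self.pages))

    def recover(self):
        ckpt_lsn, snapshot = self.checkpoint
        self.pages = dict(snapshot)
        for lsn, page_id, value in self.log:
            if lsn > ckpt_lsn:                       # redo only the log tail
                self.pages[page_id] = value
```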

Two-Tier Storage DBMS for High-Performance Query Processing

  • Eo, Sang-Hun;Li, Yan;Kim, Ho-Seok;Bae, Hae-Young
    • Journal of Information Processing Systems
    • /
    • v.4 no.1
    • /
    • pp.9-16
    • /
    • 2008
  • This paper describes the design and implementation of a two-tier DBMS for handling massive data and providing faster response time. Today, the main requirements of a DBMS can be viewed from two aspects: handling large amounts of data and providing fast response time. Traditional DBMSs cannot fulfill both requirements. A disk-oriented DBMS can handle massive data, but its response time is slower than that of a memory-resident DBMS. A memory-resident DBMS, on the other hand, can provide fast response time but is inherently limited in database size. In this paper, to meet both requirements, a two-tier DBMS is proposed. Cold data, which does not require fast response time, is managed by the disk storage manager, while hot data, which requires fast response time, is handled by the memory storage manager as snapshots. As a result, the proposed system performs significantly better than a disk-oriented DBMS while still being able to manage massive data.
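
A minimal sketch of the hot/cold split, assuming a simple access-count heuristic (the policy and capacity below are illustrative choices, not specified in the abstract): hot keys are served from an in-memory snapshot and all other requests fall through to the disk-resident store.

```python
class TwoTierStore:
    def __init__(self, hot_capacity=1000):
        self.disk = {}                 # stands in for the disk storage manager
        self.memory = {}               # snapshot of hot data in main memory
        self.hits = {}                 # access counts used to pick hot data
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.disk[key] = value
        if key in self.memory:
            self.memory[key] = value   # keep the snapshot consistent

    def get(self, key):
        self.hits[key] = self.hits.get(key, 0) + 1
        if key in self.memory:
            return self.memory[key]    # fast path, no disk access
        return self.disk.get(key)

    def refresh_snapshot(self):
        # Periodically promote the most frequently read keys into memory.
        hot = sorted(self.hits, key=self.hits.get, reverse=True)[: self.hot_capacity]
        self.memory = {k: self.disk[k] for k in hot if k in self.disk}
```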

A Mobile Flash File System - MJFFS (모바일 플래시 파일 시스템 - MJFFS)

  • 김영관;박현주
    • Journal of Information Technology Applications and Management
    • /
    • v.11 no.2
    • /
    • pp.29-43
    • /
    • 2004
  • As information technology develops, mobile devices are becoming smaller and faster. Accordingly, the number of devices using flash memory as their storage medium is increasing. Flash memory consumes little power, is small, and has fast access times comparable to main memory, but it must be erased before data can be rewritten and the number of erase cycles is limited. JFFS is a representative file system that reflects these characteristics of flash memory. JFFS has a log-structured layout and writes new data to flash memory sequentially, regardless of file size; mounting the file system and recovering from errors are also performed by scanning the log sequentially, so the mount delay grows with the size of the file system. This paper proposes MJFFS, which uses multi-checkpoint information to manage a large flash file system efficiently. MJFFS improves on JFFS by dividing flash memory into blocks suited to the block device and storing file information in a checkpoint structure at fixed intervals. Mounting and error recovery therefore require far fewer file system accesses, because the checkpoint information collected is much smaller than the actual file contents. MJFFS is well suited to mobile devices, since it achieves fast mounting and error recovery while keeping the advantages of a log-structured file system and overcoming the shortcomings of JFFS.

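The multi-checkpoint idea can be illustrated with a small sketch (hypothetical layout and interval, not MJFFS itself): data nodes are appended sequentially, a checkpoint summary is written at a fixed interval, and mounting replays only the log tail after the latest checkpoint.

```python
CHECKPOINT_INTERVAL = 4  # assumed interval, in log entries

class TinyLogFS:
    def __init__(self):
        self.log = []           # sequential flash log: ("data", name, size) entries
        self.checkpoints = []   # (position in log, snapshot of file table)
        self.files = {}

    def write(self, name, size):
        self.log.append(("data", name, size))
        self.files[name] = size
        if len(self.log) % CHECKPOINT_INTERVAL == 0:
            self.checkpoints.append((len(self.log), dict(self.files)))

    def mount(self):
        # Start from the last checkpoint and replay only the tail of the log,
        # instead of scanning the whole log as a plain log-structured FS would.
        pos, table = (0, {}) if not self.checkpoints else self.checkpoints[-1]
        files = dict(table)
        for _kind, name, size in self.log[pos:]:
            files[name] = size
        return files
```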

A Technique for Improving the Performance of Cache Memories

  • Cho, Doosan
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.3
    • /
    • pp.104-108
    • /
    • 2021
  • In IoT and edge computing systems, memory is usually organized as a hierarchy to improve performance. Based on the distance from the CPU, access speed slows down in the order of registers, cache memory, main memory, and storage. Energy consumption likewise increases with distance from the CPU. It is therefore important to place frequently used data as high in the hierarchy as possible to improve both performance and energy consumption. Such a technique, however, must address the cache performance degradation caused by the loss of spatial locality that occurs when the data access stride is large. This study proposes a technique that selectively places data with large access strides into a software-controlled cache. With the proposed technique, spatial locality can be improved by reducing the effective data access interval, and consequently cache performance improves.
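
A sketch of stride-directed placement, assuming a simple threshold and a software-controlled buffer abstraction (both are illustrative choices, not the paper's exact scheme):

```python
STRIDE_THRESHOLD = 16   # elements; larger strides are routed to the software buffer

def classify_accesses(access_patterns):
    """access_patterns: dict mapping array name -> access stride in elements.
    Arrays with a large stride go to the software-controlled cache so that the
    hardware cache keeps only accesses with good spatial locality."""
    scratchpad, cached = [], []
    for array, stride in access_patterns.items():
        (scratchpad if stride > STRIDE_THRESHOLD else cached).append(array)
    return scratchpad, cached

if __name__ == "__main__":
    # Column-wise traversal of a 1024-wide matrix has stride 1024,
    # so it is routed to the software-controlled buffer.
    print(classify_accesses({"row_sums": 1, "col_sums": 1024}))
```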

A Study on the Communication Method for a Ship Main Engine Remote Control System (선박 주기관 원격제어시스템을 위한 통신방식에 관한 연구)

  • 류길수
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.22 no.6
    • /
    • pp.894-900
    • /
    • 1998
  • In this paper, a communication method is proposed for the development of a main engine remote control system. The main engine control system comprises three subsystems: the RCS (Remote Control System), the BCS (Bridge Control System), and the SS (Safety System), which must exchange data with one another. The communication method simplifies the hardware by minimizing communication components, employing the interrupt method for receiving and the polling method for transmitting. We discuss a scheme that uses one physical ring buffer for data storage, which acts as two virtual buffers for effective use of memory. This communication method shows good performance in systems that exchange a rather small amount of communication data.

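The ring-buffer mechanism mentioned above can be sketched as follows. This is a generic single-queue ring buffer under an assumed size, not the shipboard system's code, in which one physical buffer is shared between two virtual queues (one filled by the receive interrupt, one drained by the polling transmitter):

```python
class RingBuffer:
    def __init__(self, size=256):
        self.buf = [None] * size
        self.size = size
        self.head = 0   # next slot to write (producer, e.g. receive interrupt)
        self.tail = 0   # next slot to read (consumer, e.g. polling transmitter)

    def put(self, byte):
        nxt = (self.head + 1) % self.size
        if nxt == self.tail:
            return False          # buffer full: drop or signal an overrun
        self.buf[self.head] = byte
        self.head = nxt
        return True

    def get(self):
        if self.head == self.tail:
            return None           # buffer empty
        byte = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        return byte
```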

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon;Park, Jang-Woo;Kim, Sung-Tan;Lee, Sang-Won;Moon, Bong-Ki
    • Journal of KIISE:Databases
    • /
    • v.37 no.6
    • /
    • pp.308-314
    • /
    • 2010
  • As the price of flash memory continues to drop and flash SSD controller technology improves, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, it is hard to expect flash SSDs to replace hard disks completely as database storage. Instead, using a flash SSD as a cache for hard disks is more practical, and several hybrid storage architectures combining flash memory and hard disks have been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extension of the main buffer in database systems: it stores pages replaced out of the main buffer and returns pages that are re-referenced to the upper buffer layer, improving system performance drastically. In contrast to existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer in the upper host, provides fast random reads for warm pages being replaced out of the limited main buffer. In fact, for all pages missing from the main buffer in a real TPC-C trace, the hit ratio in the extended buffer was more than 60%, which supports our conjecture that this simple extended-buffer approach can be very effective as a cache. In terms of performance per price, our extended-buffer architecture outperforms two alternative approaches of the same cost: 1) a larger main buffer and 2) more hard disks.
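
A minimal sketch of the extended-buffer idea, assuming LRU in both tiers (the capacities and replacement policy are illustrative, not the authors' exact algorithm): pages evicted from the DRAM buffer are kept in a flash-resident buffer, and a later miss checks flash before going to the hard disk.

```python
from collections import OrderedDict

class ExtendedBuffer:
    def __init__(self, main_capacity, flash_capacity):
        self.main = OrderedDict()    # DRAM main buffer, kept in LRU order
        self.flash = OrderedDict()   # flash SSD extended buffer, LRU order
        self.main_cap = main_capacity
        self.flash_cap = flash_capacity

    def _read_from_disk(self, page_id):
        return f"page-{page_id}"     # stands in for a slow hard-disk read

    def get(self, page_id):
        if page_id in self.main:                      # DRAM hit
            self.main.move_to_end(page_id)
            return self.main[page_id]
        if page_id in self.flash:                     # flash hit: fast random read
            page = self.flash.pop(page_id)
        else:
            page = self._read_from_disk(page_id)      # miss in both tiers
        self._admit(page_id, page)
        return page

    def _admit(self, page_id, page):
        self.main[page_id] = page
        if len(self.main) > self.main_cap:
            victim, vpage = self.main.popitem(last=False)
            self.flash[victim] = vpage                # evict DRAM victim to flash
            if len(self.flash) > self.flash_cap:
                self.flash.popitem(last=False)        # drop the coldest flash page
```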