• Title/Summary/Keyword: Cache System

A Cache Buffer and Read Request-aware Request Scheduling Method for NAND Flash-based Solid-State Disks (캐시 버퍼와 읽기 요청을 고려한 낸드 플래시 기반 솔리드 스테이트 디스크의 요청 스케줄링 기법)

  • Bang, Kwanhu;Park, Sang-Hoon;Lee, Hyuk-Jun;Chung, Eui-Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.143-150
    • /
    • 2013
  • Solid-state disks (SSDs) are widely used in high-performance personal computers and servers because of their favorable characteristics and performance. NAND flash-based SSDs, which account for a large portion of the NAND flash market, are the dominant type of SSD. They usually integrate a DRAM cache buffer that uses a write-back policy for better performance. Unfortunately, this policy makes existing scheduling methods less effective at the interface (I/F) level of SSDs. In this paper, we therefore propose an I/F-level scheduling method that takes the cache buffer into account. The proposed method considers the hit/miss status of the cache buffer and gives higher priority to read requests. As a result, requests whose data hits in the cache buffer can be handled in advance, and read requests, which affect overall system performance more than write requests, experience shorter latency. Experimental results show that the proposed scheduling method improves read latency by 26%.
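
A minimal sketch of the idea, assuming illustrative request and queue structures rather than the paper's actual implementation: requests are classified by cache-buffer hit/miss and by read/write, and cache-hit reads are dispatched first.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Priority classes: cache-hit reads first, then cache-miss reads, then writes.
HIT_READ, MISS_READ, WRITE = 0, 1, 2

@dataclass(order=True)
class Request:
    priority: int
    seq: int                          # arrival order breaks ties (FIFO within a class)
    op: str = field(compare=False)    # "read" or "write"
    lba: int = field(compare=False)   # logical block address

class CacheAwareScheduler:
    def __init__(self, cache_buffer):
        self.cache = cache_buffer     # set of LBAs currently held in the DRAM cache buffer
        self.queue = []
        self._seq = count()

    def submit(self, op, lba):
        if op == "read":
            cls = HIT_READ if lba in self.cache else MISS_READ
        else:
            cls = WRITE
        heapq.heappush(self.queue, Request(cls, next(self._seq), op, lba))

    def next_request(self):
        return heapq.heappop(self.queue) if self.queue else None
```

Within each class the sequence number preserves FIFO order; how the actual design bounds write latency under a stream of reads is not shown here.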

Utilizing Channel Bonding-based M-VIA and Interval Cache on a Distributed VOD Server (효율적인 분산 VOD 서버를 위한 Channel Bonding 기반 M-VIA 및 인터벌 캐쉬의 활용)

  • Chung, Sang-Hwa;Oh, Soo-Cheol;Yoon, Won-Ju;Kim, Hyun-Pil;Choi, Young-In
    • The KIPS Transactions: Part A
    • /
    • v.12A no.7 s.97
    • /
    • pp.627-636
    • /
    • 2005
  • This paper presents a PC cluster-based distributed video-on-demand (VOD) server that minimizes the load of the interconnection network by adopting channel bonding-based M-VIA and the interval cache algorithm. Video data is distributed to the disks of each server node of the distributed VOD server, and each server node receives the data through the interconnection network and sends it to clients. The load of the interconnection network increases because of the large volume of video data transferred. We adopt two techniques to reduce this load. First, an M-VIA-supporting channel bonding technique is adopted for the interconnection network. M-VIA, a user-level communication protocol that reduces the overhead of the TCP/IP protocol in cluster systems, minimizes the time spent in communication. We increase the bandwidth of the interconnection network by combining the channel bonding technique with M-VIA; channel bonding expands the bandwidth by sending data concurrently through multiple network cards. Second, the interval cache reduces traffic on the interconnection network by caching, in main memory, the video data transferred from remote disks. Experiments using the distributed VOD server of this paper showed a maximum performance improvement of 30% compared with a distributed VOD server without channel bonding-based M-VIA and the interval cache, when used with a four-node PC cluster.
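
A conceptual sketch of interval caching (names and data layout are illustrative assumptions, not the authors' code): blocks fetched by a leading stream are kept in main memory just long enough for a following stream of the same video to consume them, so the follower avoids the interconnection network.

```python
from collections import defaultdict, deque

class IntervalCache:
    """Cache the gap between consecutive streams of the same video so the
    follower is served from main memory instead of the remote disk."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.used = 0
        self.buffers = defaultdict(deque)    # video_id -> cached block numbers

    def on_block_read(self, video_id, block):
        """Called when the leading stream fetches a block over the network."""
        if self.used < self.capacity:
            self.buffers[video_id].append(block)
            self.used += 1

    def serve(self, video_id, block):
        """Following stream: hit if the block is still buffered in memory."""
        buf = self.buffers[video_id]
        if buf and buf[0] == block:
            buf.popleft()                    # consumed by the follower
            self.used -= 1
            return "memory-hit"
        return "remote-disk"                 # fall back to the interconnection network
```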

Design and Implementation of a High-Performance Index Manager in a Main Memory DBMS (주기억장치 DBMS를 위한 고성능 인덱스 관리자의 설계 및 구현)

  • Kim, Sang-Wook;Lee, Kyung-Tae;Choi, Wan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.7B
    • /
    • pp.605-619
    • /
    • 2003
  • A main memory DBMS (MMDBMS) efficiently supports various database applications that require high performance, since it employs main memory rather than disk as primary storage. In this paper, we discuss the index manager of Tachyon, a next-generation MMDBMS. Recently, the gap between CPU processing and main memory access times has widened considerably due to the rapid advance of CPU technology. By devising data structures and algorithms that exploit the behavior of the CPU cache, we can enhance the overall performance of MMDBMSs considerably. In this paper, we address the practical implementation issues, and our solutions to them, obtained in developing the cache-conscious index manager of Tachyon. The main issues touched are (1) consideration of cache behavior, (2) compact representation of the index entry and the index node, (3) support of variable-length keys, (4) support of multiple-attribute keys, (5) support of duplicated keys, (6) definition of the system catalog for indexes, (7) definition of external APIs, (8) concurrency control, and (9) backup and recovery. We also show the effectiveness of our approach through extensive experiments.
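
A minimal sketch of a cache-conscious index node (the sizes and layout below are assumptions for illustration, not Tachyon's actual format): keys are packed into one contiguous, sorted array sized to a few cache lines, so a lookup touches adjacent memory instead of chasing per-entry pointers.

```python
import bisect

CACHE_LINE = 64                                  # bytes (assumed)
KEY_BYTES = 4                                    # fixed-size integer keys for the sketch
KEYS_PER_NODE = (8 * CACHE_LINE) // KEY_BYTES    # node spans a handful of cache lines

class CacheConsciousNode:
    """All keys of a node live in one contiguous, sorted array, so a binary
    search reads a few adjacent cache lines rather than scattered entries."""
    __slots__ = ("keys", "values")               # compact object layout, no __dict__

    def __init__(self):
        self.keys = []                           # sorted, at most KEYS_PER_NODE keys
        self.values = []                         # record ids aligned with keys

    def insert(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        self.keys.insert(i, key)
        self.values.insert(i, value)

    def search(self, key):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        return None
```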

Efficient Implementation of SVM-Based Speech/Music Classifier by Utilizing Temporal Locality (시간적 근접성 향상을 통한 효율적인 SVM 기반 음성/음악 분류기의 구현 방법)

  • Lim, Chung-Soo;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.2
    • /
    • pp.149-156
    • /
    • 2012
  • Support vector machines (SVMs) are well known for their pattern recognition capability, but proper care should be taken to alleviate their inherent implementation cost, which results from high computational intensity and memory requirements, especially in embedded systems where only limited resources are available. Since the memory requirement, determined by the dimensionality and the number of support vectors, is generally too high for a cache in embedded systems to accommodate, frequent accesses to main memory occur inevitably whenever the cache cannot provide the requested data to the processor. These frequent main memory accesses result in overall performance degradation and increased energy consumption, because a memory access typically takes longer and consumes more energy than a cache or register access. In this paper, we propose a technique that reduces the number of main memory accesses by optimizing the data access pattern of the SVM-based classifier so that the temporal locality of the accesses increases, fully utilizing data loaded into the processor chip. Experiments confirm the improvement achieved by the proposed technique in terms of the number of memory accesses, overall execution time, and energy consumption.
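
A sketch of the general idea (the blocking factor and linear-kernel form are illustrative assumptions, not the authors' exact scheme): support vectors are processed block by block, and each cache-resident block is reused across all input frames before the next block is loaded.

```python
import numpy as np

def classify_frames_blocked(frames, support_vectors, alphas, bias, block=64):
    """Score many input frames against an SVM, reusing each block of support
    vectors across all frames while it is still cache-resident (temporal
    locality), instead of streaming the whole support-vector set once per frame."""
    frames = np.asarray(frames, dtype=float)            # shape (num_frames, dim)
    support_vectors = np.asarray(support_vectors, dtype=float)
    scores = np.zeros(frames.shape[0])
    n_sv = support_vectors.shape[0]
    for start in range(0, n_sv, block):
        end = min(start + block, n_sv)
        sv_block = support_vectors[start:end]            # loaded once per block
        a_block = alphas[start:end]
        # Reuse the resident block for every frame before moving on.
        scores += (frames @ sv_block.T) @ a_block
    return scores + bias                                 # sign() decides speech vs. music
```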

Real-Time Object-Oriented Caching System (실시간 객체지향 캐싱 시스템)

  • Kim, Yeong-Jae;Seong, Ho-Cheol;Hong, Seong-Jun;Han, Seon-Yeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.11
    • /
    • pp.3077-3085
    • /
    • 1999
  • Conventional caching systems do not support real-time attributes or load balancing. To solve these problems, this paper describes the design and implementation of RIOP (Real-Time Inter-ORB Protocol), which provides a QoS-guarantee mechanism by integrating RSVP with the TAO ORB. Furthermore, it provides a fast XCSLS (Extended Caching System for Load Balance) that implements a main-memory cache in the Primary Server by exploiting the locality of objects. The key features presented are QoS enforcement, the PS (Primary Server), and the RS (Replicated Server).
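
A rough sketch of the caching side only (class and method names are hypothetical; the RSVP/ORB integration is omitted): a main-memory object cache on the Primary Server exploits object locality and falls back to a Replicated Server on a miss.

```python
from collections import OrderedDict

class PrimaryServerObjectCache:
    """LRU main-memory cache of recently used objects on the Primary Server;
    misses are fetched from a Replicated Server (modeled as a plain dict here)."""

    def __init__(self, capacity, replicated_server):
        self.capacity = capacity
        self.cache = OrderedDict()                # object_id -> object, in LRU order
        self.replicated_server = replicated_server

    def get(self, object_id):
        if object_id in self.cache:
            self.cache.move_to_end(object_id)     # recently used objects stay resident
            return self.cache[object_id]
        obj = self.replicated_server[object_id]   # miss: ask the Replicated Server
        self.cache[object_id] = obj
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict the least recently used object
        return obj
```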

Interchange Algorithm for VoD System (VOD 시스템에서의 Interchange Agent 운영 알고리즘)

  • Kang, Seok-Hoon;Park, Su-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.8
    • /
    • pp.1847-1854
    • /
    • 2005
  • This paper proposes an approach to configuring an efficient video-on-demand system by introducing the Multicast and Cache Video-on-Demand (MCVoD) system. As a key element of the MCVoD system, the interchange agent provides multicasting and switching functions. With multicasting, the MCVoD system reduces the load on the network as well as on the VoD servers by transmitting only one video request instead of sending multiple requests for the same video stream. Switching enables clients to receive the first stream of a requested video instantly, without waiting time, and also avoids undesirable duplication of video streams in the system. Through simulation results on waiting time and cache hit ratio, we show that the MCVoD system employing the interchange agent provides better performance than current uni-proxy-based systems.
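
A simplified sketch of the interchange agent's request-merging behavior (names are hypothetical; stream switching is not modeled): only the first request for a video is forwarded to the server, and later clients join the already active multicast stream.

```python
class InterchangeAgent:
    """Merges concurrent requests for the same video: the first request is
    forwarded to the VoD server, subsequent ones join the active stream."""

    def __init__(self, vod_server):
        self.vod_server = vod_server      # any object with start_stream(video_id)
        self.active_streams = {}          # video_id -> set of subscribed clients

    def request_video(self, client_id, video_id):
        if video_id in self.active_streams:
            # Another client already triggered this stream: just join it.
            self.active_streams[video_id].add(client_id)
            return "joined-existing-multicast"
        # First request for this video: forward a single request upstream.
        self.vod_server.start_stream(video_id)
        self.active_streams[video_id] = {client_id}
        return "forwarded-to-server"

    def stream_finished(self, video_id):
        self.active_streams.pop(video_id, None)
```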

Forecasting Load Balancing Method by Prediction Hot Spots in the Shared Web Caching System

  • Jung, Sung-C.;Chong, Kil-T.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.2137-2142
    • /
    • 2003
  • One of the important performance metrics of the World Wide Web is how quickly and reliably a user's request is serviced. Shared Web Caching (SWC) is one technique for improving network system performance. In shared web caching systems, the key issues are deciding when and where an item is cached and how to deliver correct, reliable information to users quickly. SWC distributes items to proxies that have sufficient capacity in terms of processing time and cache size. In this study, the Hot Spot Prediction Algorithm (HSPA) is suggested to improve on the consistent hashing algorithm in terms of load balancing and hit rate with a shorter response time. The method predicts popular hot spots using a prediction model, and the predicted hot spots are dispatched to the proper proxies according to the load-balancing algorithm. A simulator implementing the suggested algorithm was also developed in the Perl language. The simulation results demonstrate the performance of the suggested algorithm, which is evaluated against consistent hashing in terms of load balancing and hit rate.
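
A brief sketch of plain consistent hashing, the baseline that the HSPA augments (the ring below is illustrative; the hot-spot prediction model itself is not shown): each proxy owns points on a hash ring, and an item is cached on the first proxy clockwise from the item's hash.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Baseline consistent hashing: proxies own points on a ring, and an item
    maps to the first proxy clockwise from the item's hash value."""

    def __init__(self, proxies, replicas=100):
        self.ring = []                           # sorted list of (point, proxy)
        for proxy in proxies:
            for i in range(replicas):            # virtual nodes smooth the load
                self.ring.append((self._hash(f"{proxy}#{i}"), proxy))
        self.ring.sort()
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def proxy_for(self, url):
        idx = bisect.bisect(self.points, self._hash(url)) % len(self.ring)
        return self.ring[idx][1]

# Example: ConsistentHashRing(["proxy-a", "proxy-b", "proxy-c"]).proxy_for("http://example.com/")
```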

A Data-Consistency Scheme for the Distributed-Cache Storage of the Memcached System

  • Liao, Jianwei;Peng, Xiaoning
    • Journal of Computing Science and Engineering
    • /
    • v.11 no.3
    • /
    • pp.92-99
    • /
    • 2017
  • Memcached, commonly used to speed up data access in big-data and Internet web applications, is system software implementing a distributed-cache mechanism. However, it faces the severe challenge of losing recently uncommitted updates when Memcached servers crash for any reason. Although a replica scheme and a disk-log-based replay mechanism have been proposed to overcome this problem, they incur either replica-synchronization overhead or the persistent-storage overhead caused by flushing the related logs. This paper proposes a scheme of backing up the write requests (i.e., set and add) on the Memcached client side, to reduce the overhead of making disk-log records or maintaining replica consistency. If the Memcached server fails, a timestamp-based recovery mechanism replays the write requests buffered by the relevant clients, regaining the lost data updates on the rebooted Memcached server and thereby meeting the data-consistency requirement. More importantly, compared with logging the write requests to the persistent storage of the master server and with the server-replication scheme, the newly proposed approach of backing up the logs on the client side can decrease the time overhead by up to 116.8% when processing write workloads.
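
A conceptual sketch of the client-side backup idea (the client and server objects are stand-ins, not the real Memcached API): each set is recorded locally with a timestamp, and the buffered writes are replayed in timestamp order after a server reboot.

```python
import time

class BackupCacheClient:
    """Wraps a cache client: every write is also buffered locally with a
    timestamp so it can be replayed if the server loses its in-memory state."""

    def __init__(self, server):
        self.server = server          # any object exposing set(key, value)
        self.write_log = []           # (timestamp, key, value), kept on the client

    def set(self, key, value):
        ts = time.time()
        self.write_log.append((ts, key, value))
        self.server.set(key, value)

    def replay(self, rebooted_server):
        """After a crash, reapply buffered writes in timestamp order so the
        rebooted server converges to the last acknowledged state."""
        for ts, key, value in sorted(self.write_log):
            rebooted_server.set(key, value)
```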

Performance Improvement of the Payload Signature based Traffic Classification System Using Application Traffic Locality (응용 트래픽의 지역성을 이용한 페이로드 시그니쳐 기반 트래픽 분석 시스템의 성능 향상)

  • Park, Jun-Sang;Yoon, Sung-Ho;Kim, Myung-Sup
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38B no.7
    • /
    • pp.519-525
    • /
    • 2013
  • Traffic classification is a preliminary and essential step for stable network service provision and efficient network resource management. However, the payload signature-based method has a significant drawback in high-speed network environments: its processing speed is much slower than that of other methods such as header-based and statistical methods. In this paper, we propose a server IP/port cache-based traffic classification method that uses application traffic locality to improve the processing speed of traffic classification. The suggested method achieved about a 10-fold improvement in processing speed and a 10% improvement in completeness over the payload-based classification system.
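
A minimal sketch of the fast path (function and field names are assumptions): if a flow's server IP and port were already labeled by payload inspection, reuse that label and skip signature matching; otherwise fall back to the payload classifier and remember its answer.

```python
class ServerEndpointCache:
    """Maps (server_ip, server_port) -> application label learned from a
    previous payload-signature match, exploiting application traffic locality."""

    def __init__(self):
        self.cache = {}

    def classify(self, flow, payload_classifier):
        key = (flow["server_ip"], flow["server_port"])
        if key in self.cache:
            return self.cache[key]                 # fast path: no payload inspection
        app = payload_classifier(flow)             # slow path: signature matching
        if app is not None:
            self.cache[key] = app                  # later flows to this endpoint hit the cache
        return app

# Example:
# cache = ServerEndpointCache()
# label = cache.classify({"server_ip": "10.0.0.5", "server_port": 443, "payload": b"..."},
#                        payload_classifier=lambda flow: "TLS")
```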

A Cache-based Reconfigurable Accelerator in Die-stacked DRAM (3차원 구조 DRAM의 캐시 기반 재구성형 가속기)

  • Kim, Yongjoo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.4 no.2
    • /
    • pp.41-46
    • /
    • 2015
  • The demand for low-power, high-performance systems is soaring due to the expansion of the mobile and small electronic device markets. 3D die-stacking is being widely studied as a next-generation integration technology because of its high density and low access time. We propose a 3D die-stacked DRAM that includes a reconfigurable accelerator in the logic layer of the DRAM. We also discuss and suggest a cache-based local memory for the reconfigurable accelerator in the logic layer. Thanks to its location, the reconfigurable accelerator in the logic layer of the 3D die-stacked DRAM reduces the overhead of data management and transfer, which greatly increases performance. The proposed system achieves a maximum speedup of 24.8.
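
A toy model of the cache-based local memory (the parameters are arbitrary assumptions, not the paper's configuration): a small direct-mapped cache in the logic layer serves hits locally and falls through to the stacked DRAM on a miss.

```python
class DirectMappedLocalCache:
    """Toy direct-mapped cache for the accelerator's local memory: hits are
    served in the logic layer, misses fall through to the stacked DRAM."""

    def __init__(self, num_lines=256, line_bytes=64, dram=None):
        self.num_lines = num_lines
        self.line_bytes = line_bytes
        self.tags = [None] * num_lines       # one tag per cache line
        self.dram = dram if dram is not None else {}
        self.hits = self.misses = 0

    def read(self, addr):
        line_addr = addr // self.line_bytes
        index = line_addr % self.num_lines
        tag = line_addr // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1                   # served from the logic-layer cache
        else:
            self.misses += 1                 # fetch the line from stacked DRAM
            self.tags[index] = tag
        return self.dram.get(addr, 0)
```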