• Title/Summary/Keyword: Caching System (캐싱 시스템)


Flash Node Caching Scheme for Hybrid Hard Disk Systems (하이브리드 하드디스크 시스템을 위한 플래시 노드 캐싱 기법)

  • Byun, Si-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.9 no.6
    • /
    • pp.1696-1704
    • /
    • 2008
  • The conventional hard disk has been the dominant database storage device for over 25 years. Recently, hybrid systems that incorporate the advantages of flash memory into conventional hard disks are considered the next dominant storage systems. They satisfy requirements such as enhanced data I/O, lower energy consumption, and reduced boot time, making them suitable as major database storage. However, traditional B-Tree-based index management schemes need to be improved because hard disk operations are relatively slow compared to flash memory. To achieve this goal, we propose a new index management scheme called FNC-Tree. FNC-Tree-based index management enhances search and update performance by caching data objects in the unused free area of flash leaf nodes, reducing slow hard disk I/Os during index accesses. Based on the results of the performance evaluation, we conclude that our scheme outperforms traditional index management schemes.
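The central mechanism is using the unused free area of flash-resident leaf nodes as a small object cache. Below is a minimal Python sketch of that idea, assuming a simple slot-based leaf layout; the names (LeafNode, read_from_disk) and the capacity are illustrative assumptions, not the paper's actual data structure.

```python
class LeafNode:
    """Flash-resident B-Tree leaf whose free slots double as an object cache."""
    CAPACITY = 64                              # total entry slots in a flash page (assumed)

    def __init__(self):
        self.entries = {}                      # key -> record id (regular index entries)
        self.object_cache = {}                 # key -> data object cached in free slots

    def free_slots(self):
        return self.CAPACITY - len(self.entries) - len(self.object_cache)

    def lookup(self, key, read_from_disk):
        """Return the data object for `key`, using the in-node cache first."""
        if key in self.object_cache:           # served from flash: no hard disk I/O
            return self.object_cache[key]
        obj = read_from_disk(self.entries[key])  # slow hard disk access
        if self.free_slots() > 0:              # opportunistically cache the object
            self.object_cache[key] = obj
        return obj
```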

A Degraded Quality Service Policy for reducing the transcoding loads in a Transcoding Proxy (트랜스코딩 프록시에서 트랜스코딩 부하를 줄이기 위한 낮은 품질 서비스 정책)

  • Park, Yoo-Hyun
    • The KIPS Transactions:PartA
    • /
    • v.16A no.3
    • /
    • pp.181-188
    • /
    • 2009
  • Transcoding is one of the core techniques for implementing VoD services that adapt to QoS, but it consumes a large amount of CPU resources. A transcoding proxy transcodes multimedia objects to meet the requirements of various mobile devices and caches them for later reuse. In this paper, we propose a service policy that reduces the load of transcoding multimedia objects by degrading QoS in a transcoding proxy. Because of the tradeoff between QoS and proxy load, the transcoding proxy provides slightly lower QoS than a client's requirement so that it can accommodate more clients.
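As a rough illustration of the degraded-quality idea, the sketch below chooses a service quality based on the proxy's current CPU load, preferring an already cached lower-quality copy when the load is high. The quality ladder, load threshold, and function names are assumptions made for this sketch, not the paper's actual policy.

```python
QUALITY_LEVELS = [240, 360, 480, 720, 1080]   # assumed transcoding targets (vertical resolution)

def choose_quality(requested, cpu_load, cache):
    """Pick the quality to serve for `requested`, degrading it when CPU load is high."""
    if cpu_load < 0.7:                        # enough headroom: honor the request
        return requested
    lower = [q for q in QUALITY_LEVELS if q < requested]
    # Prefer an already-cached lower-quality copy, so no transcoding is needed at all.
    for q in sorted(lower, reverse=True):
        if q in cache:
            return q
    # Otherwise transcode one step below the request to cut CPU cost.
    return lower[-1] if lower else requested
```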

A Mapping Table Caching Scheme for NAND Flash-based Mobile Storage Devices (NAND 플래시 기반 모바일 저장장치를 위한 사상 테이블 캐싱 기법)

  • Yang, Soo-Hyeon;Ryu, Yeon-Seung
    • The Journal of Society for e-Business Studies
    • /
    • v.15 no.4
    • /
    • pp.21-31
    • /
    • 2010
  • Recently, e-business services such as online financial trading and online shopping using mobile computers have become widespread. Most mobile computers use NAND flash memory-based storage devices for storing data. Flash memory storage devices use software called the flash translation layer (FTL) to translate logical addresses from the file system into physical addresses of flash memory by means of mapping tables. Legacy FTLs have the problem that they must maintain very large mapping tables in RAM. To address this issue, we propose a new caching scheme for mapping tables. We show through trace-driven simulations that the proposed caching scheme reduces the space overhead dramatically without increasing the time overhead. In particular, for online transaction workloads in e-business environments, the proposed scheme shows even better performance in reducing the space overhead.
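A sketch of the general idea of keeping only part of the FTL mapping table in RAM: mapping-table pages are demand-loaded from flash and evicted LRU. The page size, class name, and callback are assumptions for illustration rather than the paper's exact scheme.

```python
from collections import OrderedDict

ENTRIES_PER_TABLE_PAGE = 512                    # assumed mapping entries per table page

class MappingTableCache:
    """Keeps only recently used mapping-table pages in RAM."""

    def __init__(self, max_pages, load_table_page):
        self.max_pages = max_pages              # RAM budget, in table pages
        self.load_table_page = load_table_page  # reads one table page {lpn: ppn} from flash
        self.pages = OrderedDict()              # table-page number -> mapping dict

    def translate(self, lpn):
        """Translate a logical page number (lpn) to a physical page number (ppn)."""
        page_no = lpn // ENTRIES_PER_TABLE_PAGE
        if page_no in self.pages:
            self.pages.move_to_end(page_no)     # cache hit: refresh LRU position
        else:
            if len(self.pages) >= self.max_pages:
                self.pages.popitem(last=False)  # evict the least recently used table page
            self.pages[page_no] = self.load_table_page(page_no)
        return self.pages[page_no][lpn]
```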

Partition and Caching Mechanism for GML Visualization on Mobile Device (모바일 디바이스에서 GML 가시화를 위한 분할 및 캐싱 기법)

  • Song, Eun-Ha;Park, Yong-Jin;Han, Won-Hee;Jeong, Young-Sik
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.7
    • /
    • pp.1025-1034
    • /
    • 2008
  • In this paper, we develop GridGML, which efficiently delivers GML and visualizes maps on a mobile device using map partitioning and caching. To overcome the large file size, the biggest weakness of GML, GridGML extracts only the attributes necessary for map visualization and lightens the file into class instances by applying offset values. GridGML manages partitions based on the visible area of the mobile device so that the map can be visualized in real time, and serializes each partition for efficient transmission. The received partitions are reassembled on the mobile device and re-partitioned into four visible areas based on the device display. Each area is then managed with a caching algorithm that takes the repetitiveness of received map data into account, so that resources are used efficiently. In addition, an adaptive map partition mechanism is proposed to keep the transmission time uniform and to prevent transmission delays in instance-dense areas of the map.
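The partition-and-cache behavior can be pictured as viewport-driven tile management. The sketch below keeps the four partitions covering the current display in an LRU cache and fetches only the missing ones; the tile size and fetch callback are assumptions, not GridGML's actual partitioning.

```python
from collections import OrderedDict

TILE = 256                                     # assumed partition edge length in map units

def visible_tiles(x, y):
    """The four partitions covering a viewport anchored at (x, y)."""
    tx, ty = x // TILE, y // TILE
    return [(tx, ty), (tx + 1, ty), (tx, ty + 1), (tx + 1, ty + 1)]

class TileCache:
    def __init__(self, capacity, fetch_tile):
        self.capacity = capacity               # how many partitions the device keeps
        self.fetch_tile = fetch_tile           # transfers one serialized partition
        self.tiles = OrderedDict()             # (tx, ty) -> partition data

    def get_view(self, x, y):
        """Return the partitions for the current viewport, reusing cached ones."""
        view = []
        for tid in visible_tiles(x, y):
            if tid not in self.tiles:
                if len(self.tiles) >= self.capacity:
                    self.tiles.popitem(last=False)   # drop the least recently used partition
                self.tiles[tid] = self.fetch_tile(tid)
            self.tiles.move_to_end(tid)
            view.append(self.tiles[tid])
        return view
```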


Game-Based Content Caching and Data Sponsor Scheme for the Content Network (콘텐츠 네트워크 환경에서 게임이론을 이용한 콘텐츠 캐싱 및 데이터 스폰서 기법)

  • Won, JoongSeop;Kim, SungWook
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.8 no.7
    • /
    • pp.167-176
    • /
    • 2019
  • Recently, as the variety of services available on mobile telecommunication networks, such as social networking and video streaming, has grown, mobile users (MUs) can easily access mobile content by consuming mobile data. Under a mobile telecommunication environment, however, MUs have to pay a high data fee to a network service provider (SP) in order to enjoy content. The 'data sponsor' technique, introduced as a way to solve this problem, has attracted attention as a breakthrough for enhancing the content accessibility of MUs. In this paper, we propose an algorithm that determines the optimal discount rate through a Stackelberg game in the data sponsor environment. We also propose an algorithm that designs edge caching, which caches highly popular content for MUs on edge servers, through a many-to-many matching game. Simulation results clearly indicate that, under the data sponsor environment, the CP's profit from content consumption is improved by about 6~11%, and the CP's profit according to the edge caching ratio is improved by about 12% compared with other existing schemes.
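To make the Stackelberg (leader-follower) structure concrete, the toy sketch below has the content provider pick a sponsorship discount rate while user demand grows with that discount, and the leader keeps the rate that maximizes its profit. The demand and profit functions and all parameter values are invented purely for illustration and are not the paper's model.

```python
def user_demand(discount, base_demand=100.0, sensitivity=2.0):
    """Followers' best response: content consumption grows with the sponsored discount."""
    return base_demand * (1.0 + sensitivity * discount)

def cp_profit(discount, revenue_per_unit=1.0, data_price_per_unit=0.8):
    """Leader's payoff: revenue from consumption minus the sponsored data fee."""
    demand = user_demand(discount)
    revenue = revenue_per_unit * demand
    sponsor_cost = discount * data_price_per_unit * demand   # paid to the network SP
    return revenue - sponsor_cost

# Leader's problem: choose the discount rate in [0, 1] that maximizes profit.
best_rate = max((r / 100 for r in range(101)), key=cp_profit)
print(f"discount rate {best_rate:.2f} gives profit {cp_profit(best_rate):.1f}")
```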

The Large Scale Workflow Model Data Process Mechanism Using Memory Cashing Repository (메모리 캐싱 저장소를 이용한 대규모 워크플로우 모델 데이터 처리 메커니즘)

  • 박민재;심성수;정재우;안형진;김민홍;김광훈
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04a
    • /
    • pp.686-688
    • /
    • 2003
  • To maximize the efficiency of the engine at the core of a workflow system, it is very important to manage the data handled inside the workflow engine. In this paper, after examining the characteristics of each kind of system data handled in a workflow system, we present a method that complements the common approach of managing data through database system calls. Specifically, in accordance with the characteristics of each kind of system data, we describe a method that manages system data by loading it into memory for shared use on top of the existing database-call approach.
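A minimal sketch of the described approach, assuming that workflow model data is read far more often than it changes: the repository loads each model from the database once, serves the in-memory copy afterwards, and is invalidated when a model is redeployed. Class and method names are illustrative.

```python
class ModelRepository:
    """Shared in-memory cache layered over the usual database calls."""

    def __init__(self, load_from_db):
        self.load_from_db = load_from_db       # wraps the existing DB system call
        self.models = {}                       # workflow model id -> parsed model data

    def get(self, model_id):
        if model_id not in self.models:        # first access: one database round trip
            self.models[model_id] = self.load_from_db(model_id)
        return self.models[model_id]           # later engine accesses are served from memory

    def invalidate(self, model_id):
        """Drop the cached copy when a model definition is updated or redeployed."""
        self.models.pop(model_id, None)
```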


Design and Implementation of A Distributed Information Integration System based on Metadata Registry (메타데이터 레지스트리 기반의 분산 정보 통합 시스템 설계 및 구현)

  • Kim, Jong-Hwan;Park, Hea-Sook;Moon, Chang-Joo;Baik, Doo-Kwon
    • The KIPS Transactions:PartD
    • /
    • v.10D no.2
    • /
    • pp.233-246
    • /
    • 2003
  • A mediator-based system integrates heterogeneous information systems in a flexible manner, but it pays little attention to query optimization issues, especially query reuse, and it does not use standardized metadata for schema matching. To improve these two issues, we propose a mediator-based Distributed Information Integration System (DIIS), which uses query caching for performance and an ISO/IEC 11179 metadata registry for standardization. The DIIS is designed to provide decision-making support by logically integrating distributed, heterogeneous business information systems in a Web environment. We designed the system as a three-layer architecture using the layered pattern to improve reusability and to facilitate maintenance. The functionality and flow of the core components of the three-layer architecture are expressed in terms of process line diagrams and assembly line diagrams of the Eriksson-Penker Extension Model (EPEM), an extension of UML. For the implementation, the Supply Chain Management (SCM) domain is used, with a Web-based user interface. The DIIS supports query caching and query reusability through the Query Function Manager (QFM) and the Query Function Repository (QFR), enhancing query processing speed and reusability by caching frequently used queries and optimizing query cost. The DIIS resolves diverse heterogeneity problems by mapping a MetaData Registry (MDR) based on ISO/IEC 11179 and a Schema Repository (SCR).
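The query-caching part (the QFM/QFR pair) can be pictured as keying results by a normalized query string so that a repeated query skips the distributed sources. The sketch below is a hedged illustration of that idea; the normalization and all names are assumptions, not the DIIS implementation.

```python
class QueryFunctionRepository:
    """Caches results of frequently issued integration queries."""

    def __init__(self, execute_distributed_query):
        self.execute = execute_distributed_query   # fans the query out to the source systems
        self.cache = {}                            # normalized query -> result set

    @staticmethod
    def normalize(sql):
        return " ".join(sql.lower().split())       # crude normalization so equivalent queries match

    def run(self, sql):
        key = self.normalize(sql)
        if key not in self.cache:                  # miss: contact the distributed systems
            self.cache[key] = self.execute(sql)
        return self.cache[key]                     # hit: reuse the cached result
```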

Server network architectures for VOD service (프록시 서버를 이용한 DAVIC VOD 시스템의 설계)

  • Ahn, Kyung-Ah;Choi, Hoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.5
    • /
    • pp.1229-1240
    • /
    • 1998
  • In this paper, we provide a design of a DAVIC VOD service system with proxy servers that cache video streams. Proxy servers are placed between a service provider system and service consumer systems. They provide video services to consumers on behalf of the service provider and thereby reduce the load on the service provider and the network. The operation of a proxy server depends on whether the requested program is in its storage. If it is, the proxy server takes over all controls; if it is not, the proxy forwards the service request to a service provider. While the service provider system delivers the program to the consumer, the proxy copies and caches the program, executing cache replacement if necessary. We show by simulation that LFU is the most efficient cache replacement algorithm among typical algorithms such as LRU, LFU, and FIFO.
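Since the abstract singles out LFU among the compared replacement policies, here is a minimal LFU sketch for the proxy's program storage: request counts are tracked per program, and the least frequently requested program is evicted when storage is full. Capacity handling and names are assumptions for illustration.

```python
class LFUVideoCache:
    """Proxy-side program cache with least-frequently-used replacement."""

    def __init__(self, capacity, fetch_from_provider):
        self.capacity = capacity
        self.fetch = fetch_from_provider      # pulls the program from the service provider
        self.store = {}                       # program id -> cached video data
        self.freq = {}                        # program id -> request count

    def request(self, program_id):
        self.freq[program_id] = self.freq.get(program_id, 0) + 1
        if program_id in self.store:          # hit: proxy serves the stream itself
            return self.store[program_id]
        data = self.fetch(program_id)         # miss: provider serves; proxy caches a copy
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=self.freq.get)   # least frequently used program
            del self.store[victim]
        self.store[program_id] = data
        return data
```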


Hot Spot Prediction Method for Improving the Performance of Consistent Hashing Shared Web Caching System (컨시스턴스 해슁을 이용한 분산 웹 캐싱 시스템의 성능 향상을 위한 Hot Spot 예측 방법)

  • 정성칠;정길도
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.5B
    • /
    • pp.498-507
    • /
    • 2004
  • Fast and precise service for user requests is of primary importance on the World Wide Web. However, providing such service has become difficult due to the recent rapid increase in Internet users. Shared Web Caching (SWC) is one method of solving this problem. The performance of SWC depends heavily on the hit rate, which is affected by memory size, server processing speed, load balancing, and so on. Conventional load balancing is usually based on the state history of the system, but predicting the system state can be used for load balancing to further improve the hit rate. In this study, a Hot Spot Prediction Method (HSPM) is suggested to improve the throughput of the proxy by predicting hot spots, i.e., the most frequently requested items, in advance. The results show that the suggested method is better than consistent hashing in terms of both load balancing and hit rate.
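For context, the sketch below shows plain consistent hashing over a ring of cache servers, plus a simple twist in the spirit of hot-spot handling: an object predicted to become hot is assigned to several successive servers instead of one, spreading its load. The prediction itself is taken as given, and the names and replica counts are assumptions, not the HSPM algorithm.

```python
import bisect
import hashlib

def _h(key):
    """Stable hash onto the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, virtual_nodes=100):
        # Each server gets many virtual points so keys spread evenly over the ring.
        self.ring = sorted((_h(f"{s}#{i}"), s) for s in servers for i in range(virtual_nodes))
        self.points = [p for p, _ in self.ring]

    def _successors(self, key, count):
        """The first `count` distinct servers clockwise from the key's ring position."""
        idx = bisect.bisect(self.points, _h(key))
        found = []
        for step in range(len(self.ring)):
            server = self.ring[(idx + step) % len(self.ring)][1]
            if server not in found:
                found.append(server)
                if len(found) == count:
                    break
        return found

    def servers_for(self, url, predicted_hot):
        # A predicted hot object is cached on 3 servers; a normal one on a single server.
        return self._successors(url, 3 if url in predicted_hot else 1)
```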

Proxy Caching Grouping by Partition and Mapping for Distributed Multimedia Streaming Service (분산 멀티미디어 스트리밍 서비스를 위한 분할과 사상에 의한 프록시 캐싱 그룹화)

  • Lee, Chong-Deuk
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.40-47
    • /
    • 2009
  • Recently, dynamic proxy caching has been proposed for distributed environments so that media objects requested by users can be served directly from the proxy without contacting the server. However, caching remains challenging because of the large size of multimedia objects and their low-latency, continuous streaming demands. To solve the problems caused by the streaming demands of media objects, this paper proposes a grouping scheme with fuzzy filtering based on partition and mapping. For partition and mapping, media block segments are divided into fixed partition reference blocks (R$_f$P) and variable partition reference blocks (R$_v$P). For semantic relationships, fuzzy relations are applied according to the fixed partition temporal synchronization (T$_f$) and the variable partition temporal synchronization (T$_v$). Simulation results show that the proposed scheme provides efficient streaming service, with a higher average request response rate and cache hit rate and a lower delayed startup ratio than other schemes.
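As a very loose illustration of the fixed/variable partition idea named in the abstract, the sketch below splits a media object into one fixed-size reference partition followed by variable-size partitions and caches the variable parts only for sufficiently popular objects (a plain popularity score stands in for the fuzzy relationship). All sizes, scores, and thresholds are assumptions.

```python
FIXED_PARTITION = 1_000_000        # bytes of the fixed reference partition (assumed)

def partition(media_size, variable_size=4_000_000):
    """Split a media object into one fixed block followed by variable-size blocks."""
    parts = [(0, min(FIXED_PARTITION, media_size))]
    offset = parts[0][1]
    while offset < media_size:
        end = min(offset + variable_size, media_size)
        parts.append((offset, end))
        offset = end
    return parts

def blocks_to_cache(media_size, popularity, threshold=0.5):
    """Always cache the fixed partition; cache variable partitions only for popular objects."""
    parts = partition(media_size)
    cached = [parts[0]]
    cached += [p for p in parts[1:] if popularity >= threshold]
    return cached
```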