• Title/Summary/Keyword: Cache File (캐시 파일)

Method of estimating the deleted time of applications using Amcache.hve (앰캐시(Amcache.hve) 파일을 활용한 응용 프로그램 삭제시간 추정방법)

  • Kim, Moon-Ho;Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.3 / pp.573-583 / 2015
  • Amcache.hve is a registry hive file related to the Program Compatibility Assistant that stores execution information about applications. From the Amcache.hve file, we can determine an application's execution path and first execution time, as well as its deletion time. Since it records both the first installation time and the deletion time, the Amcache.hve file can be used to draw up an overall timeline of applications when used together with Prefetch files and Iconcache.db files. The Amcache.hve file is also an important artifact that records traces of anti-forensic programs, portable programs, and external storage devices. This paper illustrates the features of the Amcache.hve file and methods for utilizing it in digital forensics, such as estimating the deletion time of applications.
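
To make the estimation concrete, here is a minimal sketch that walks an exported Amcache.hve hive with the third-party python-registry library and prints the key timestamps the paper correlates with install and delete events. The "Root\File" key path and the per-entry value layout vary across Windows versions, so treat them as assumptions to verify against a real hive.

```python
# A minimal sketch, assuming the third-party python-registry library
# (pip install python-registry). The "Root\File" key path and value
# layout are assumptions that vary across Windows versions.
from Registry import Registry

def walk_amcache(hive_path):
    reg = Registry.Registry(hive_path)
    try:
        root = reg.open("Root\\File")   # per-volume application entries (assumed layout)
    except Registry.RegistryKeyNotFoundException:
        return
    for volume in root.subkeys():
        for entry in volume.subkeys():
            # The last-write time of an entry key is the kind of evidence
            # the paper uses to estimate when a program was deleted.
            print(entry.name(), entry.timestamp())
            for value in entry.values():
                print("  ", value.name(), value.value())

walk_amcache("Amcache.hve")   # path to an exported hive
```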

Mapping Cache for High-Performance Memory Mapped File I/O in Memory File Systems (메모리 파일 시스템 기반 고성능 메모리 맵 파일 입출력을 위한 매핑 캐시)

  • Kim, Jiwon;Choi, Jungsik;Han, Hwansoo
    • Journal of KIISE / v.43 no.5 / pp.524-530 / 2016
  • The desire to access data faster and the growth of next-generation memories such as non-volatile memories contribute to the development of research on memory file systems. It is recommended that memory-mapped file I/O, which has less overhead than read-write I/O, be utilized in a high-performance memory file system. Memory-mapped file I/O, however, introduces a page-table overhead, which becomes one of the major overheads to be resolved for overall file I/O performance. We find that the same overhead is incurred unnecessarily, because the page table of a file is removed whenever the file is closed and must be rebuilt when the file is opened again. To remove this duplicated overhead, we propose the mapping cache, a technique that does not delete the page table of a file when its mapping is released, but saves it for reuse. We demonstrate that the mapping cache improves traditional file I/O performance by 2.8x and web server performance by 12%.
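
As a point of reference for the overhead discussed above, the sketch below contrasts read-based I/O with memory-mapped I/O using Python's mmap module. Every fresh mapping rebuilds page-table state; closing and reopening a file repeats that work unless the mapping is cached, which is the duplicated overhead the paper targets. The file name and size are arbitrary for the sketch.

```python
# A small, self-contained illustration of memory-mapped file I/O, the
# access path whose page-table setup cost the mapping cache avoids
# repeating. File name and size are arbitrary.
import mmap, os

path = "testfile.bin"
with open(path, "wb") as f:              # create a 1 MiB test file
    f.write(os.urandom(1 << 20))

with open(path, "rb") as f:              # read-write I/O: data is copied
    data = f.read()                      # between kernel and user buffers

with open(path, "rb") as f:              # memory-mapped I/O: the file is
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        first = m[0]                     # page fault on first touch, no copy
        print(len(m), first == data[0])  # 1048576 True

os.remove(path)
```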

Web Caching using File Type (파일 타입을 이용한 웹 캐싱)

  • Lim, Jae-Hyun;Lee, Jun-Yeon
    • The KIPS Transactions:PartC / v.9C no.6 / pp.961-968 / 2002
  • This paper proposes a new cache management method that considers the high variability of the World Wide Web. Instead of using a single cache, we divide the cache and store documents according to their file types. The proposed method is compared with current cache management policies based on the LFU, LRU, and SIZE algorithms. Using two different workloads, we show improvements in the hit ratio and the byte hit ratio through simulations of file-type caching.
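
A minimal sketch of the idea, assuming one LRU partition per file extension; the partition size and the per-partition eviction policy are illustrative choices, not the paper's exact configuration.

```python
# One LRU cache per file type instead of a single shared cache.
from collections import OrderedDict

class TypePartitionedCache:
    def __init__(self, capacity_per_type):
        self.capacity = capacity_per_type
        self.partitions = {}                      # file type -> LRU cache

    def _type_of(self, url):
        return url.rsplit(".", 1)[-1].lower() if "." in url else "other"

    def get(self, url):
        part = self.partitions.get(self._type_of(url))
        if part is None or url not in part:
            return None
        part.move_to_end(url)                     # refresh LRU position
        return part[url]

    def put(self, url, body):
        part = self.partitions.setdefault(self._type_of(url), OrderedDict())
        part[url] = body
        part.move_to_end(url)
        if len(part) > self.capacity:
            part.popitem(last=False)              # evict LRU within the type

cache = TypePartitionedCache(capacity_per_type=2)
cache.put("/a.html", "<html>...</html>")
cache.put("/b.jpg", b"...")
print(cache.get("/a.html") is not None)           # True: HTML partition hit
```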

An Efficient Global Buffer Management Algorithm for SAN Environments (SAN 환경을 위한 효율적인 전역버퍼 관리 알고리즘)

  • 이석재;박새미;송석일;유재수;이장선
    • The Journal of the Korea Contents Association / v.4 no.3 / pp.71-80 / 2004
  • In distributed file systems, cooperative caching, in which the data cached at each node is shared among nodes, is used to reduce the cost of disk accesses. Cooperative caching increases the cache hit ratio and decreases disk accesses by sharing cache information across the distributed system, effectively enlarging the cache. Recently, several cooperative caching algorithms have decreased message costs by using approximate cache information and increased the cache hit ratio by managing local and global cache regions dynamically. They also have the advantage of increasing the overall hit ratio by sending a replaced buffer to an idle node during buffer replacement, so that the replaced entry remains in the cooperative cache. However, incorrect approximate information degrades performance, maintaining consistency requires expensive message exchanges, and the cost of managing the age information of each node to choose an idle node increases. In this paper, we propose a cooperative caching algorithm that maintains accurate cache information while minimizing the consistency maintenance cost and the cost of managing buffer age information. We also show the superiority of our algorithm through a performance evaluation.
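
The forwarding step described above can be sketched as a simplified single-process simulation; the per-node age counters and capacity checks below are illustrative stand-ins for the age information the paper manages, not its actual protocol.

```python
# When a node evicts a buffer, hand the block to the idlest peer so it
# stays in the global cache field instead of being dropped.
class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.cache = {}            # block id -> data
        self.age = 0               # grows while the node is idle (assumption)

def evict_and_forward(node, nodes):
    block, data = node.cache.popitem()            # victim buffer
    idle = max((n for n in nodes if n is not node), key=lambda n: n.age)
    if len(idle.cache) < idle.capacity:
        idle.cache[block] = data                  # keep block in the global field
        return idle.name
    return None                                   # no idle space: block is dropped

a, b, c = Node("A", 2), Node("B", 2), Node("C", 2)
a.cache = {1: "x", 2: "y"}
b.age, c.age = 5, 9                               # C has been idle longest
print(evict_and_forward(a, [a, b, c]))            # forwards the victim to C
```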

Design and Implementation of an In-Memory File System Cache with Selective Compression (대용량 파일시스템을 위한 선택적 압축을 지원하는 인-메모리 캐시의 설계와 구현)

  • Choe, Hyeongwon;Seo, Euiseong
    • Journal of KIISE / v.44 no.7 / pp.658-667 / 2017
  • The demand for large-scale storage systems has continued to grow due to the emergence of multimedia, social-network, and big-data services. In order to improve the response time and reduce the load of such large-scale storage systems, DRAM-based in-memory cache systems are becoming popular. However, the high cost of DRAM severely restricts their capacity. While compressing cache entries has been proposed to deal with this capacity limitation, compression and decompression, which are technically difficult to parallelize, induce significant processing overhead and in turn lengthen the response time. This paper proposes a selective compression scheme for in-memory file system caches that rapidly estimates the compression ratio of incoming cache entries from their Shannon entropies and compresses only the cache entries expected to compress well. It also describes the design and implementation of an in-kernel in-memory file system cache with the proposed scheme. The evaluation showed that the proposed scheme reduced the execution time of benchmarks by approximately 18% compared to a conventional non-compressing in-memory cache. It also provided a cache hit ratio similar to the all-compressing counterpart while reducing the execution time by 7.5% through lower compression overhead, and it reduced the CPU time used for compression by 28% compared to the all-compressing scheme.
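
The selection step can be sketched as follows, assuming a sampled entropy estimate over the entry's first bytes and an illustrative threshold of 6.0 bits per byte; the paper's in-kernel implementation and its tuning are not reproduced here.

```python
# Estimate Shannon entropy of a cache entry and compress only if the
# entropy suggests it will compress well. Threshold and sample size are
# illustrative assumptions.
import math, zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def store_entry(data: bytes, threshold_bits=6.0, sample=4096):
    # Low entropy -> likely high compressibility -> worth the CPU cost.
    if shannon_entropy(data[:sample]) < threshold_bits:
        return ("zlib", zlib.compress(data))
    return ("raw", data)                          # skip compression entirely

kind, payload = store_entry(b"aaaa" * 1024)       # highly redundant: compressed
print(kind, len(payload))
```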

A Design of Double Cache Policy for File System Based on NAND Flash Memory (NAND 플래시 메모리 기반 파일시스템을 위한 더블 캐시 정책 설계)

  • Park, Myung-Kyu;Kim, Sung-Jo
    • Proceedings of the Korean Information Science Society Conference / 2008.06b / pp.366-370 / 2008
  • NAND flash memory has the drawback that the number of write operations is limited, so frequent writes shorten the lifetime of NAND flash memory. To address this problem, delayed-write techniques that consider the characteristics of NAND flash memory have been studied. However, although delayed writes reduce the number of write operations, they lower the cache hit ratio. To solve this problem, this paper proposes a double cache policy for NAND flash memory based file systems. The double cache consists of a Real Cache, which serves as the actual cache, and a Ghost Cache, which observes the access patterns of requested pages. Rather than delaying writes in the Real Cache, the policy is designed to perform efficient delayed writes by managing dirty and clean pages in the Ghost Cache, thereby reducing the number of writes and increasing the hit ratio.
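
A rough sketch of the double-cache structure, assuming LRU order in the Real Cache and batched flushing of the dirty pages gathered in the Ghost Cache; the sizes and the flush trigger are illustrative assumptions, not the paper's exact design.

```python
# Real Cache holds pages; Ghost Cache tracks evicted pages split into
# dirty and clean lists so dirty pages can be flushed to NAND in batches.
from collections import OrderedDict

class DoubleCache:
    def __init__(self, real_size, ghost_size, flush_batch=4):
        self.real = OrderedDict()          # page -> (data, dirty flag)
        self.ghost_dirty = OrderedDict()   # evicted dirty pages awaiting flush
        self.ghost_clean = OrderedDict()   # evicted clean pages (pattern info)
        self.real_size, self.ghost_size = real_size, ghost_size
        self.flush_batch, self.nand_writes = flush_batch, 0

    def access(self, page, data=None):
        dirty = data is not None           # a write makes the page dirty
        if page in self.real:
            old_data, old_dirty = self.real.pop(page)
            self.real[page] = (data if dirty else old_data, dirty or old_dirty)
            return "hit"
        self.real[page] = (data, dirty)
        if len(self.real) > self.real_size:
            victim, (vdata, vdirty) = self.real.popitem(last=False)
            if vdirty:
                self.ghost_dirty[victim] = vdata
                if len(self.ghost_dirty) >= self.flush_batch:
                    self.nand_writes += 1  # one batched NAND write
                    self.ghost_dirty.clear()
            else:
                self.ghost_clean[victim] = vdata
                while len(self.ghost_clean) > self.ghost_size:
                    self.ghost_clean.popitem(last=False)
        return "miss"

dc = DoubleCache(real_size=2, ghost_size=8)
for p in range(6):
    dc.access(p, data=b"page")                    # six dirty page writes
print("batched NAND writes:", dc.nand_writes)     # 1, instead of one per page
```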

Web Caching Strategy based on Documents Popularity (선호도 기반 웹 캐싱 전략)

  • Yoo, Hae-Young;Park, Chul
    • Journal of KIISE:Computer Systems and Theory / v.29 no.9 / pp.530-538 / 2002
  • In this paper, we propose a new caching strategy for web servers. The proposed algorithm collects only statistics about each requested file, such as its popularity, when a request arrives. Then, at certain points in time, only the files with the highest popularity are cached all together. Because the cache remains unchanged until it is rebuilt, the web server can use a very efficient data structure to determine whether a file is in the cache, which greatly increases the efficiency of cache manipulation. Furthermore, experiments performed with real log files produced by web servers show that the cache hit ratio and byte hit ratio are better than those produced by LRU. The proposed algorithm has a drawback: the cache hit ratio may decrease when the popularity of files that are not in the cache explodes instantaneously. In our opinion, however, such explosions happen infrequently, and it is easy to adapt web servers to such unusual cases.
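
A minimal sketch of the strategy, assuming the top-N files by observed popularity are installed into an immutable cache at each rebuild; N and the rebuild schedule are illustrative.

```python
# Requests only update popularity counters; at rebuild time the top-N
# files are cached all together into an immutable set, making lookups
# cheap between rebuilds.
from collections import Counter

class PopularityCache:
    def __init__(self, top_n):
        self.top_n = top_n
        self.stats = Counter()        # request statistics only
        self.cached = frozenset()     # immutable between rebuilds

    def request(self, name):
        self.stats[name] += 1
        return name in self.cached    # cheap membership test

    def rebuild(self):
        top = self.stats.most_common(self.top_n)
        self.cached = frozenset(name for name, _ in top)

cache = PopularityCache(top_n=2)
for name in ["a", "a", "b", "c", "a", "b"]:
    cache.request(name)
cache.rebuild()
print(cache.request("a"), cache.request("c"))     # True False
```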

Forecasted Popularity Based Lazy Caching Strategy (예측된 선호도 기반 게으른 캐싱 전략)

  • Park, Chul;Yoo, Hae-Young
    • The KIPS Transactions:PartA / v.10A no.3 / pp.261-268 / 2003
  • In this paper, we propose a new caching strategy for web servers. The proposed strategy collects only statistics about each requested file, such as its popularity, when a request arrives. At a given point in time, only the files with the highest forecasted popularity are cached all together. The forecasted popularity based lazy caching (FPLC) strategy uses the exponential smoothing method to forecast the popularity of web files, and it achieves a better cache hit ratio and cache transfer ratio than other caching strategies. Furthermore, experiments performed with real log files from web servers show that our forecasting method for web file popularity improves cache efficiency.
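
A sketch of the forecasting step, using the standard exponential smoothing recurrence s_t = α·x_t + (1−α)·s_{t−1} over per-period request counts; the smoothing factor and the period handling are illustrative assumptions.

```python
# Exponentially smoothed per-file popularity; the top forecasted files
# are cached together at the end of each period.
from collections import defaultdict

class FPLC:
    def __init__(self, top_n, alpha=0.5):
        self.top_n, self.alpha = top_n, alpha
        self.forecast = defaultdict(float)   # smoothed popularity per file
        self.window = defaultdict(int)       # raw counts in current period
        self.cached = frozenset()

    def request(self, name):
        self.window[name] += 1
        return name in self.cached

    def end_period(self):
        for name in set(self.forecast) | set(self.window):
            x_t = self.window.get(name, 0)   # s_t = alpha*x_t + (1-alpha)*s_{t-1}
            self.forecast[name] = (self.alpha * x_t
                                   + (1 - self.alpha) * self.forecast[name])
        self.window.clear()
        ranked = sorted(self.forecast, key=self.forecast.get, reverse=True)
        self.cached = frozenset(ranked[:self.top_n])

c = FPLC(top_n=1)
for name in ["a", "a", "b"]:
    c.request(name)
c.end_period()
print(c.request("a"))   # True: "a" has the highest forecasted popularity
```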

Cache Policies for Multimedia Web Server (멀티미디어 웹 서비스를 위한 캐시 정책)

  • 최영주;임재현;김치수;이준연
    • Proceedings of the Korea Multimedia Society Conference / 2003.11a / pp.379-382 / 2003
  • In this paper, we trace the traffic observed at a multimedia web server, analyze user access patterns, and build a caching model based on the analysis. We propose a file-type cache policy that improves the hit ratio and the byte hit ratio at the same time, and the performance evaluation shows that the policy improves the utility of the cache.

A Study on the Cache Consistency in Global Memory (전역적 메모리에서의 캐시 일관성에 관한 연구)

  • 진연호;김은경;정병수
    • Proceedings of the Korean Information Science Society Conference / 2000.10c / pp.9-11 / 2000
  • In recent network environments, the growth of multimedia services and applications that use very large files has created demand for storage devices that can satisfy them, and distributed network file systems built on such storage have become essential. Indeed, with the development of high-speed networks such as ATM, fast switched LANs, and Fibre Channel, accessing the memory of a remote machine over the network has become faster than accessing a local disk in a distributed network file system. Accordingly, as local disk caching techniques are applied to distributed network storage systems, managing global memory and the cache consistency problem between remote nodes must be considered. This paper surveys caching techniques for distributed environments, discusses the cache consistency problem in global memory, and presents a design approach along with directions for future research.
