• Title/Summary/Keyword: 캐시 서버 (cache server)


Virtual Machine Scheduling for Multicores Considering Effects of Shared On-chip Last Level Cache Interference (공유 말단 캐시에서의 간섭의 영향을 고려한 멀티코어 프로세서를 위한 가상 머신 스케줄링)

  • Kim, Shin-gyu;Choi, Chanho;Eom, Hyeonsang;Yeom, Heon Y.
    • Proceedings of the Korea Information Processing Society Conference / 2012.04a / pp.134-136 / 2012
  • As the cloud computing service market grows, service providers face problems such as reducing power consumption while still guaranteeing service levels. One cause of these problems is the virtual machine based server consolidation policy adopted to raise resource utilization. Because current virtualization technologies do not yet provide perfect isolation, virtual machines placed on the same node share resources and interfere with one another. This study proposes a method for maximizing performance by minimizing interference in the processor's shared last-level cache (LLC), one of the resources shared among co-located virtual machines; a simplified placement sketch follows below.
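
The abstract does not give the scheduler itself, so the following is only a minimal sketch of interference-aware placement: it assumes each VM comes with a measured LLC-pressure score (say, LLC misses per kilo-instruction, a common proxy) and greedily spreads the heaviest cache users across nodes. All names and the greedy strategy are illustrative, not the paper's algorithm.

```python
# Hypothetical sketch of interference-aware VM placement; the pressure
# metric and the greedy strategy are assumptions, not the paper's method.

def place_vms(vms, num_nodes):
    """vms: list of (name, llc_pressure) pairs; returns node -> VM names."""
    nodes = {n: {"vms": [], "pressure": 0.0} for n in range(num_nodes)}
    # Place the most cache-hungry VMs first, each onto the currently
    # least-pressured node, so heavy LLC users end up on different nodes.
    for name, pressure in sorted(vms, key=lambda v: -v[1]):
        target = min(nodes, key=lambda n: nodes[n]["pressure"])
        nodes[target]["vms"].append(name)
        nodes[target]["pressure"] += pressure
    return {n: info["vms"] for n, info in nodes.items()}

# The two cache-heavy VMs (A, B) land on different nodes:
print(place_vms([("vmA", 9.1), ("vmB", 8.7), ("vmC", 1.2), ("vmD", 0.8)], 2))
```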

A Study of Information Collection for Computer Forensics on Digital Contents Computing Environment (디지털 콘텐츠 컴퓨팅 환경에서의 컴퓨터 포렌식스 정보 수집에 관한 연구 기술에 관한 연구)

  • Lee, Jong-Sup;Jang, Eun-Gyeom;Choi, Yong-Rak
    • Proceedings of the Korea Contents Association Conference / 2008.05a / pp.507-513 / 2008
  • In a digital contents computing environment, volatile information such as registers, cache memory, and network state is hard to collect in real time, because such information is easily modified or lost. A reliable collection step is therefore essential for a computer forensics system in digital contents computing. In this paper, we propose an information collection module that collects volatile information from a server system in real time, based on memory mapping; a rough collection sketch follows below.
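
As a rough illustration of real-time volatile-data collection (not the paper's memory-mapping module), the sketch below periodically snapshots a few Linux /proc sources and tags each snapshot with a hash for integrity; the source list and interval are assumptions.

```python
import hashlib
import json
import time

# Stand-in volatile sources on Linux; the paper targets registers, cache
# memory, and network state, which these files only approximate.
SOURCES = ["/proc/meminfo", "/proc/net/dev", "/proc/stat"]

def snapshot():
    record = {"timestamp": time.time(), "data": {}}
    for path in SOURCES:
        with open(path) as f:
            record["data"][path] = f.read()
    blob = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(blob).hexdigest()  # integrity tag
    return record

if __name__ == "__main__":
    for _ in range(5):              # periodic collection (interval assumed)
        print(snapshot()["sha256"])
        time.sleep(1.0)
```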

Energy Conservation of RAID by Exploiting SSD Cache (SSD 캐시를 이용한 RAID의 에너지 절감 기법)

  • Lee, Hyo-J.;Kim, Eun-Sam;Noh, Sam-H.
    • Journal of KIISE: Computing Practices and Letters / v.16 no.2 / pp.237-241 / 2010
  • Energy conservation in server systems has become important. Although the storage subsystem is one of the biggest power consumers, developing energy conservation techniques for it is challenging due to striping techniques such as RAID and the physical characteristics of hard disks. According to our observation, the footprint of data accessed over a day, or even a few hours, is much smaller than the whole data set. In this paper, we describe the design of a novel RAID architecture that uses an SSD as a large cache to conserve energy by holding such a footprint; a toy model follows below. We incorporate this approach into a real implementation of a RAID 5 system consisting of four hard disks and an SSD in a Linux environment. Preliminary measurements using the cello99 and SPC traces show that energy consumption is reduced by up to 14%.
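
A toy model of the idea, assuming reads only and an arbitrary eviction policy for brevity: as long as the working footprint fits in the SSD cache, the RAID disks are never touched and can stay spun down. Class and method names are illustrative, not the paper's implementation.

```python
class SSDCachedRAID:
    """Toy energy model: count how often a read forces the RAID disks awake."""

    def __init__(self, ssd_capacity_blocks):
        self.ssd = set()                    # blocks currently cached on SSD
        self.capacity = ssd_capacity_blocks
        self.disk_accesses = 0              # each miss may force a spin-up

    def read(self, block):
        if block in self.ssd:
            return "ssd"                    # hit: disks stay spun down
        self.disk_accesses += 1             # miss: wake the RAID array
        if len(self.ssd) >= self.capacity:
            self.ssd.pop()                  # arbitrary eviction for brevity
        self.ssd.add(block)
        return "disk"
```

With a daily footprint smaller than the SSD, `disk_accesses` stays near zero after warm-up, which is where the energy saving comes from.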

A Cache buffer and Read Request-aware Request Scheduling Method for NAND flash-based Solid-state Disks (캐시 버퍼와 읽기 요청을 고려한 낸드 플래시 기반 솔리드 스테이트 디스크의 요청 스케줄링 기법)

  • Bang, Kwanhu;Park, Sang-Hoon;Lee, Hyuk-Jun;Chung, Eui-Young
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.8 / pp.143-150 / 2013
  • Solid-state disks (SSDs) are widely used in high-performance personal computers and servers thanks to their favorable characteristics and performance. NAND flash-based SSDs, which take the largest portion of the NAND flash market, are the dominant type of SSD. They usually integrate a DRAM cache buffer operated with a write-back policy for better performance. Unfortunately, this policy makes existing scheduling methods less effective at the interface level of SSDs. Therefore, in this paper, we propose a request scheduling method for the interface that takes the cache buffer into account. The proposed method considers the hit/miss status of the cache buffer and gives higher priority to read requests; a minimal sketch follows below. As a result, requests whose data hits in the cache buffer are handled in advance, and read requests, which affect whole-system performance more than write requests, experience shorter latency. Experimental results show that the proposed scheduling method improves read latency by 26%.
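
A minimal sketch of such a scheduler, assuming the controller can test cache-buffer residency per logical page; the four-level priority encoding (hit-read, hit-write, miss-read, miss-write) is an illustrative interpretation of "hits first, reads first", not the paper's exact ordering.

```python
import heapq
from itertools import count

# Lower value = served first; an illustrative encoding of "cache-buffer
# hits first, and reads ahead of writes".
PRIORITY = {("hit", "read"): 0, ("hit", "write"): 1,
            ("miss", "read"): 2, ("miss", "write"): 3}

class RequestScheduler:
    def __init__(self, cache_buffer):
        self.cache = cache_buffer   # set of logical pages now in the DRAM buffer
        self.queue = []
        self.seq = count()          # FIFO tie-breaker within a priority class

    def submit(self, op, lpn):
        status = "hit" if lpn in self.cache else "miss"
        heapq.heappush(self.queue, (PRIORITY[status, op], next(self.seq), op, lpn))

    def next_request(self):
        if not self.queue:
            return None
        _, _, op, lpn = heapq.heappop(self.queue)
        return op, lpn

sched = RequestScheduler(cache_buffer={100, 101})
sched.submit("write", 200)
sched.submit("read", 300)
sched.submit("read", 100)
# Service order: ('read', 100) as a hit, then ('read', 300), then ('write', 200).
```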

Mapping Cache for High-Performance Memory Mapped File I/O in Memory File Systems (메모리 파일 시스템 기반 고성능 메모리 맵 파일 입출력을 위한 매핑 캐시)

  • Kim, Jiwon;Choi, Jungsik;Han, Hwansoo
    • Journal of KIISE / v.43 no.5 / pp.524-530 / 2016
  • The desire to access data faster and the growth of next-generation memories such as non-volatile memory are driving research on memory file systems. Memory mapped file I/O, which has less overhead than read-write I/O, is the recommended interface for a high-performance memory file system. Memory mapped file I/O, however, incurs page table overhead, which becomes one of the major overheads to resolve in overall file I/O performance. We find that the same overhead recurs unnecessarily, because the page table of a file is discarded whenever the file is closed and must be rebuilt when the file is opened again. To remove this duplicated overhead, we propose the mapping cache, a technique that does not delete the page table of a file when its mapping is released, but saves it for reuse; a user-level analogue is sketched below. We demonstrate that the mapping cache improves traditional file I/O performance by 2.8x and web server performance by 12%.
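
The paper's technique lives at the page-table level inside the kernel; the sketch below is only a user-level analogue of the reuse idea, keeping mmap regions alive across logical close/reopen so the mapping need not be rebuilt. Names are illustrative.

```python
import mmap
import os

class MappingCache:
    """User-level analogue: keep a file's mapping after release so a later
    open of the same path reuses it instead of rebuilding it."""

    def __init__(self):
        self._cache = {}                    # path -> (fd, mmap object)

    def open_mapped(self, path):
        if path in self._cache:
            return self._cache[path][1]     # reuse: no mapping rebuild
        fd = os.open(path, os.O_RDWR)
        m = mmap.mmap(fd, os.path.getsize(path))
        self._cache[path] = (fd, m)
        return m

    def release(self, path):
        pass        # key idea: do not unmap; keep it cached for reuse

    def evict(self, path):
        fd, m = self._cache.pop(path)       # real reclamation happens here
        m.close()
        os.close(fd)
```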

Improving Search Performance of Tries Data Structures for Network Filtering by Using Cache (네트워크 필터링에서 캐시를 적용한 트라이 구조의 탐색 성능 개선)

  • Kim, Hoyeon;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems / v.3 no.6 / pp.179-188 / 2014
  • Due to the tremendous amount of network traffic and its rapid growth, the performance of network equipment has become an important issue. Network filtering is one of the primary functions affecting how fast network equipment such as a firewall or a load balancer can process packets. In this paper, we propose a cache-based trie method to improve the search performance of the existing trie method used for network filtering. When several packets are exchanged at a time between a server and a client, the plain trie method repeats the same search procedure for every packet. The proposed method avoids this unnecessary repetition by exploiting a cache, improving filtering performance; a minimal sketch follows below. We ran network filtering experiments for the existing method and the proposed method. The results show that the proposed method can process up to 790,000 more packets per second than the existing method. With a cache list size of 11, the proposed method showed the best performance improvement (18.08%) relative to the increase in memory usage (7.75%).
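
A minimal sketch of a trie front-ended by a small lookup cache, in the spirit of the proposal; the FIFO eviction and the key shape (any hashable sequence of symbols, e.g. a tuple of address bytes) are assumptions. The default cache size of 11 echoes the paper's best-performing setting.

```python
class TrieNode:
    __slots__ = ("children", "action")

    def __init__(self):
        self.children = {}
        self.action = None          # filtering decision at terminal nodes

class FilterTrie:
    def __init__(self, cache_size=11):      # 11 performed best in the paper
        self.root = TrieNode()
        self.cache = {}                     # key -> action; small front cache
        self.cache_size = cache_size

    def insert(self, key, action):
        node = self.root
        for sym in key:
            node = node.children.setdefault(sym, TrieNode())
        node.action = action

    def lookup(self, key):
        if key in self.cache:
            return self.cache[key]          # repeated flow: skip the trie walk
        node = self.root
        for sym in key:
            node = node.children.get(sym)
            if node is None:
                return None                 # no rule matches
        if len(self.cache) >= self.cache_size:
            self.cache.pop(next(iter(self.cache)))   # FIFO eviction
        self.cache[key] = node.action
        return node.action
```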

Performance Evaluation of SSD Cache Based on DM-Cache (DM-Cache를 이용해 구현한 SSD 캐시의 성능 평가)

  • Lee, Jaemyoun;Kang, Kyungtae
    • KIPS Transactions on Computer and Communication Systems / v.3 no.11 / pp.409-418 / 2014
  • The amount of data held by storage servers has increased dramatically with the growth of cloud and social networking services. Storage systems with very large capacities may suffer from poor reliability and long latency, problems which can be addressed with a hybrid disk, in which mechanical and flash memory storage are combined. On Linux, such SSD (solid-state disk) caching can be built with the DM-cache facility. We assess the limitations of DM-cache by evaluating its performance in diverse environments, and identify problems with the caching policy it applies in response to various commands; a crude latency probe in this spirit is sketched below. The policy is effective in reducing latency when Linux runs in native mode, but when Linux is installed as a guest operating system on a virtual machine, the overhead incurred by caching actually reduces performance.
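
In the spirit of that evaluation, a crude latency probe might look like the sketch below: time random block reads against a dm-cache-backed device, once natively and once inside a guest VM, and compare the averages. The device path is a placeholder, not a name from the paper, and root privileges would be required.

```python
import os
import random
import time

DEVICE = "/dev/mapper/cached-disk"   # placeholder name for a dm-cache target
BLOCK = 4096
N = 1000

def mean_read_latency(path, span_blocks):
    """Average latency of N random 4 KiB reads over the first span_blocks."""
    fd = os.open(path, os.O_RDONLY)
    total = 0.0
    for _ in range(N):
        os.lseek(fd, random.randrange(span_blocks) * BLOCK, os.SEEK_SET)
        t0 = time.perf_counter()
        os.read(fd, BLOCK)
        total += time.perf_counter() - t0
    os.close(fd)
    return total / N

# e.g. mean_read_latency(DEVICE, span_blocks=1 << 20)  # run natively vs. in a VM
```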

Web Traffic Data Analyze for Cache Server (캐시 서버를 위한 웹 트래픽 데이터 분석)

  • Seulki Jung;Yillbyung Lee
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.1303-1306 / 2008
  • We analyzed HTTP traffic, which accounts for the largest share of overall web traffic, and compared it with data from the past. We found that a current web page typically triggers requests for at least 10 to 20 additional objects, a pattern very different from the text-dominated objects of the past. By analyzing recent web trace logs, we identify problems with existing caching algorithms and propose the concept of a new caching algorithm; a sketch of this kind of trace analysis follows below.
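
A trace analysis of this kind can be approximated by grouping requests by their Referer header, so each page URL accumulates the embedded objects fetched for it; the combined-log-format regex below is an assumption about the trace layout, not the authors' tooling.

```python
import re
from collections import defaultdict

# Matches a combined-log-format request line (assumed trace layout).
LOG = re.compile(r'"(?:GET|POST) (?P<url>\S+) HTTP/[\d.]+" \d+ \S+ "(?P<referer>[^"]*)"')

def objects_per_page(lines):
    """Count embedded-object requests per referring page."""
    counts = defaultdict(int)
    for line in lines:
        m = LOG.search(line)
        if m and m.group("referer") != "-":
            counts[m.group("referer")] += 1
    return counts

# Pages pulling in 10-20+ extra objects show up as large counts here.
```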

A Cache Management Technique for an Efficient Video Proxy Server (효율적인 비디오 프록시 서버를 위한 캐시 관리 방법)

  • Lee, Jun-Pyo;Park, Sung-Han
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.82-88 / 2009
  • A video proxy server located near clients can store frequently requested video data in order to significantly reduce initial latency and network traffic. However, because the storage space of a video proxy server is limited, an appropriate selection method is needed to store only the videos that users request frequently. We therefore present a virtual caching technique for efficiently storing videos in a video proxy server. For this purpose, we employ a virtual memory area in the video proxy server: when a video is requested by a user, it is first loaded into virtual memory and then delivered to the user. A video loaded in virtual memory is later deleted or moved into the proxy server's storage space depending on how it is requested; a minimal sketch of this staging idea follows below. In addition, the virtual memory is divided into segment areas in order to store segments efficiently and avoid fragmentation. Simulation results show that the proposed method outperforms other methods in terms of block hit rate and the number of block deletions.
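
A minimal sketch of the staging idea, with assumed names and an assumed promotion threshold: a video first lands in the virtual-memory staging area and reaches proxy storage only if it keeps being requested, so one-off videos never pollute the disk.

```python
from collections import defaultdict

class VideoProxyCache:
    def __init__(self, promote_threshold=2):    # threshold is an assumption
        self.staging = {}                 # video_id -> data (virtual memory)
        self.storage = {}                 # video_id -> data (proxy storage)
        self.requests = defaultdict(int)
        self.promote_threshold = promote_threshold

    def serve(self, video_id, fetch_from_origin):
        self.requests[video_id] += 1
        if video_id in self.storage:
            return self.storage[video_id]         # proxy storage hit
        if video_id not in self.staging:
            self.staging[video_id] = fetch_from_origin(video_id)
        if self.requests[video_id] >= self.promote_threshold:
            # Popular enough: move from staging into proxy storage.
            self.storage[video_id] = self.staging.pop(video_id)
            return self.storage[video_id]
        return self.staging[video_id]             # served from staging only

    def sweep(self):
        # Cold videos are dropped from staging without ever touching storage.
        for vid in list(self.staging):
            if self.requests[vid] < self.promote_threshold:
                del self.staging[vid]
```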

Implementation and Performance Evaluation of Migration Agent for Seamless Virtual Environment System in Grid Computing Network (그리드 컴퓨팅 네트워크에서 Seamless 가상 환경 시스템 구축을 위한 마이그레이션 에이전트 구현 및 성능 평가)

  • Won, Dong Hyun;An, Dong-Un
    • KIPS Transactions on Computer and Communication Systems / v.7 no.11 / pp.269-274 / 2018
  • An MMORPG is a role-playing game in which tens of thousands of users are online in the same world at the same time. Users connect to a server through the game client and play with their own characters. When a user moves beyond the area managed by one server into the area managed by another, the user's information must be transmitted to the destination server, and in an actual game the established state and the transferred state must be synchronized. In this paper, we propose a migration agent server for seamless virtual environment systems. We implement a seamless virtual server using the grid method to experiment with a seamless server architecture, and we propose a method that minimizes delay and equalizes load when a user moves to another server region in the virtual environment. The migration agent acts as a cache server to reduce response time; a minimal sketch follows below. With 70,000 concurrent users, response time was reduced by 50%.
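
A minimal cache-server sketch of the handoff, with illustrative names: the source server pushes the character state to the migration agent, and the destination server pulls it from the agent's memory instead of making a database round trip.

```python
class MigrationAgent:
    def __init__(self, database):
        self.db = database          # authoritative store (slow path)
        self.cache = {}             # player_id -> state (fast path)

    def push(self, player_id, state):
        """Source server hands off state when a player crosses a boundary."""
        self.cache[player_id] = state
        self.db[player_id] = state  # write-through so state is never lost

    def pull(self, player_id):
        """Destination server fetches state; cache hits skip the database."""
        if player_id in self.cache:
            return self.cache.pop(player_id)
        return self.db[player_id]

agent = MigrationAgent(database={})
agent.push("hero42", {"zone": (12, 7), "hp": 300})
state = agent.pull("hero42")        # served from the agent's in-memory cache
```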