• Title/Summary/Keyword: Cache System

Search Results: 457

Multicast VOD System for Interactive Services in the Head-End-Network (Head-End-Network에서 대화형 서비스를 위한 멀티캐스트 VOD 시스템)

  • Kim, Back-Hyun;Hwang, Tae-June;Kim, Ik-Soo
    • The KIPS Transactions:PartB
    • /
    • v.11B no.3
    • /
    • pp.361-368
    • /
    • 2004
  • This paper proposes an interactive VOD system that provides truly interactive VCR services using multicast delivery, client buffering, and a web-caching technique that implements a distributed proxy in the Head-End Network (HNET). The technique places caches in the HNET, which consists of a Switching Agent (SA), several Head-End Nodes (HENs), and many clients. In this model, the HENs store requested videos in a distributed manner under the control of the SA, and the client buffer expands dynamically to support various VCR playback rates. Interactive services are thus provided by combining video streams transmitted from the network and the HENs with streams already stored in the buffer. As a result, the technique confines the network load to a limited area, minimizes additional channel allocation from the server, and restricts the transmission of duplicated video content.
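
As a rough illustration of the control flow this abstract describes, the sketch below shows a hypothetical Switching Agent that places each first-requested video on one Head-End Node and answers later requests from that cache, so duplicated transmissions and extra server channels are avoided. The class and method names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of SA-coordinated caching in a Head-End Network (HNET).
# Names (SwitchingAgent, HeadEndNode) are hypothetical, not from the paper.

class HeadEndNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # video_id -> stream data (stands in for cached segments)

    def store(self, video_id, stream):
        self.cache[video_id] = stream

    def serve(self, video_id):
        return self.cache.get(video_id)

class SwitchingAgent:
    """Directs which HEN caches a video so content is stored once per HNET."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.placement = {}      # video_id -> HeadEndNode holding it

    def request(self, video_id, fetch_from_server):
        hen = self.placement.get(video_id)
        if hen:                                  # already cached inside the HNET:
            return hen.serve(video_id)           # no extra server channel needed
        # First request: pull once from the VOD server, then place on one HEN.
        stream = fetch_from_server(video_id)
        hen = min(self.nodes, key=lambda n: len(n.cache))   # simple load balancing
        hen.store(video_id, stream)
        self.placement[video_id] = hen
        return stream

# Usage: two clients asking for the same title cause only one server transfer.
sa = SwitchingAgent([HeadEndNode("HEN-1"), HeadEndNode("HEN-2")])
fetch = lambda vid: f"<stream of {vid}>"
print(sa.request("movie-42", fetch))   # fetched from the server, cached on a HEN
print(sa.request("movie-42", fetch))   # served from the HEN cache
```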

Data De-duplication and Recycling Technique in SSD-based Storage System for Increasing De-duplication Rate and I/O Performance (SSD 기반 스토리지 시스템에서 중복률과 입출력 성능 향상을 위한 데이터 중복제거 및 재활용 기법)

  • Kim, Ju-Kyeong;Lee, Seung-Kyu;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.12
    • /
    • pp.149-155
    • /
    • 2012
  • An SSD is a storage device with a high-performance controller and a cache buffer, and it consists of many NAND flash memories. Because NAND flash memory does not support in-place updates, valid pages are invalidated when update and erase operations are issued by the file system, and the invalid pages are later removed by garbage collection. However, garbage collection performs many long-latency erase operations, which reduces I/O performance and increases wear in the SSD. In this paper, we propose a new method that de-duplicates valid data and recycles invalid data. By de-duplicating valid data and then recycling invalid data, the method improves the de-duplication ratio. Because it reduces the number of writes and garbage collections, the method increases I/O performance and decreases wear in the SSD. Experimental results show that it reduces the number of garbage collections by up to 20% and I/O latency by up to 9% compared with the conventional case.
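
A minimal sketch of the valid-data de-duplication plus invalid-data recycling idea, assuming a simple fingerprint index over pages; the data structures and page layout are illustrative, not the paper's actual FTL design.

```python
# Sketch: deduplicate writes against live pages, and "recycle" invalid pages
# whose content matches an incoming write instead of programming a new page.
import hashlib

valid = {}      # fingerprint -> (ppn, refcount) of live pages
invalid = {}    # fingerprint -> ppn of stale pages awaiting garbage collection
mapping = {}    # logical page number -> fingerprint currently stored there
next_ppn = 0    # next free physical page number

def release(lpn):
    """Drop the reference a logical page holds; keep the stale page indexed."""
    fp = mapping.pop(lpn, None)
    if fp is None:
        return
    ppn, ref = valid[fp]
    if ref > 1:
        valid[fp] = (ppn, ref - 1)
    else:                             # last reference gone: page becomes invalid,
        del valid[fp]                 # but its fingerprint stays indexed so an
        invalid[fp] = ppn             # identical later write can revalidate it

def write(lpn, data):
    global next_ppn
    fp = hashlib.sha1(data).hexdigest()
    release(lpn)
    if fp in valid:                   # duplicate of live data: share the page
        ppn, ref = valid[fp]
        valid[fp] = (ppn, ref + 1)
    elif fp in invalid:               # duplicate of stale data: recycle it
        valid[fp] = (invalid.pop(fp), 1)
    else:                             # new content: program a fresh flash page
        valid[fp] = (next_ppn, 1)
        next_ppn += 1
    mapping[lpn] = fp

write(0, b"A"); write(1, b"A")     # second write is a duplicate: no new page
write(0, b"B"); write(1, b"C")     # "A" loses its last reference, becomes invalid
write(2, b"A")                     # recycles the invalid "A" page, no new program
print("physical pages programmed:", next_ppn)   # 3 pages for 5 logical writes
```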

A Transaction Level Simulator for Performance Analysis of Solid-State Disk (SSD) in PC Environment (PC향 SSD의 성능 분석을 위한 트랜잭션 수준 시뮬레이터)

  • Kim, Dong;Bang, Kwan-Hu;Ha, Seung-Hwan;Chung, Sung-Woo;Chung, Eui-Young
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.12
    • /
    • pp.57-64
    • /
    • 2008
  • In this paper, we propose a system-level simulator for the performance analysis of a Solid-State Disk (SSD) in a PC environment, built with the Transaction-Level Modeling (TLM) method. Our simulator provides quantitative analysis of a variety of architectural choices for the PC system as well as the SSD, and it drastically reduces analysis time compared with conventional Register-Transfer-Level (RTL) modeling. To show the effectiveness of the proposed simulator, we performed several explorations of the PC architecture as well as the SSD. More specifically, we measured the performance impact of the hit rate of the cache buffer that temporarily stores data from the PC, and we analyzed the performance variation of the SSD across NAND flash memories with different response times. These experimental results show that our simulator can be effectively utilized for architecture exploration of both the SSD and the PC.
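
The kind of exploration mentioned here (measuring how the cache-buffer hit rate affects service time) can be illustrated with a toy transaction-level model; the latencies, buffer size, and access pattern below are made-up assumptions, not figures from the paper.

```python
# Toy transaction-level model: host writes hit a small cache buffer; evictions
# go to NAND. Latency figures and sizes are illustrative assumptions only.
from collections import OrderedDict
import random

BUFFER_PAGES = 64
CACHE_HIT_US = 2       # assumed buffer access time (microseconds)
NAND_PROG_US = 200     # assumed NAND program time (microseconds)

def simulate(num_requests=10000, working_set=256, seed=0):
    rng = random.Random(seed)
    buffer = OrderedDict()           # LRU-ordered cache buffer of page numbers
    hits = total_us = 0
    for _ in range(num_requests):
        page = rng.randrange(working_set)
        if page in buffer:
            buffer.move_to_end(page)
            hits += 1
            total_us += CACHE_HIT_US
        else:
            buffer[page] = True
            if len(buffer) > BUFFER_PAGES:
                buffer.popitem(last=False)   # evict LRU page to NAND
                total_us += NAND_PROG_US
            total_us += CACHE_HIT_US
    return hits / num_requests, total_us / num_requests

for ws in (64, 128, 256, 512):       # shrink/grow the working set to vary the hit rate
    hit_rate, avg_us = simulate(working_set=ws)
    print(f"working set {ws:4d} pages: hit rate {hit_rate:.2f}, avg {avg_us:.1f} us/request")
```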

A Processor Allocation Policy using Program Characteristics on Shared Bus (공유 버스상에서 프로그램 특성을 사용한 프로세서 할당 정책)

  • Jeong, In-Beom;Lee, Jun-Won
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.9
    • /
    • pp.1073-1082
    • /
    • 1999
  • In this paper, an adaptive processor allocation policy is proposed to make effective use of the processors in a system. To enhance parallelism, the number of processors used in parallel computing is generally increased. However, increasing the number of processors changes the grain size of the parallel program and therefore affects cache performance. In particular, on a shared bus with limited bandwidth, adding processors greatly increases contention for the bus, so the time spent waiting on the bus can offset the computing power gained from the additional processors. The proposed adaptive policy collects, over a sampling period during program execution, information about the distribution of processors waiting on the shared bus, and uses this information to change the number of processors participating in parallel processing while the program runs. Simulation results show that the adaptive policy finds a suitable, near-optimal number of processors based on the bus traffic characteristics of each program. Although it performs slightly worse than the best fixed number of processors, it raises processor utilization and thus contributes to effective use of the system.
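
A compact sketch of the feedback loop the abstract describes, assuming the runtime can sample how many processors are waiting on the shared bus in each window; the thresholds and window data are illustrative assumptions.

```python
# Sketch of an adaptive allocation loop that resizes the processor set from the
# observed distribution of bus-waiting processors. Thresholds are assumptions.

def adapt(allocated, wait_samples, max_procs, high=0.5, low=0.2):
    """wait_samples: number of processors seen waiting on the bus at each sample."""
    avg_waiting = sum(wait_samples) / len(wait_samples)
    waiting_fraction = avg_waiting / allocated
    if waiting_fraction > high and allocated > 1:
        return allocated - 1          # bus-bound: shrink to cut contention
    if waiting_fraction < low and allocated < max_procs:
        return allocated + 1          # bus has headroom: grow parallelism
    return allocated                  # contention stays in the acceptable band

procs = 16
for window in ([10] * 8, [9] * 8, [6] * 8, [2] * 8):   # made-up per-window samples
    procs = adapt(procs, window, max_procs=16)
    print("next window runs with", procs, "processors")
```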

Analysis of Performance Interference in a KVM-virtualized Environment in the Aspect of CPU Scheduling (KVM 기반 가상화 환경에서 CPU 스케줄링 관점으로 본 Network I/O 성능간섭 현상 분석)

  • Kang, Donghwa;Lee, Kyungwoon;Park, Hyunchan;Yoo, Chuck
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.9
    • /
    • pp.473-478
    • /
    • 2016
  • Server virtualization provides an abstraction of physical resources to users and thus achieves high resource utilization and flexibility. However, its characteristics, such as the limited number of physical resources shared among virtual machines, can cause problems, chiefly performance interference. The interference arises because the CPU scheduler running on the host operating system schedules virtual machines without considering the characteristics of the processes running inside them. A number of studies have tried to mitigate this interference, but they do not provide a fundamental analysis of its causes. In this paper, in order to analyze the cause of performance interference, we profile a variety of scenarios in a KVM-based virtualized environment. Based on the results, we analyze the interference from the perspective of CPU scheduling and propose an efficient scheduling solution.

Development of Communication Module for a Mobile Integrated SNS Gateway (모바일 통합 SNS 게이트웨이 통신 모듈 개발)

  • Lee, Shinho;Kwon, Dongwoo;Kim, Hyeonwoo;Ju, Hongtaek
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39B no.2
    • /
    • pp.75-85
    • /
    • 2014
  • Recently, mobile SNS traffic has increased tremendously due to the spread of smart devices such as smartphones and tablets. In this paper, a mobile integrated SNS gateway is proposed to cope with this massive SNS traffic. Most mobile SNS applications update their information through individual connections to their corresponding servers; the proposed gateway integrates these applications to reduce the SNS traffic caused by continuous data requests and to improve mobile communication performance. The key elements of the gateway are synchronization, caching, and integrated authentication. The proposed protocol and gateway system were implemented on a testbed deployed on a real network to evaluate their performance. Finally, we present the caching performance of the gateway system implementation.
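
The caching element of such a gateway can be sketched as a shared, TTL-bounded feed cache that coalesces repeated client polls into a single upstream request; the class name, TTL, and URL below are illustrative assumptions.

```python
# Sketch of the gateway-side cache: many clients polling the same SNS feed are
# served one shared copy until it expires, cutting upstream SNS traffic.
import time

class FeedCache:
    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.entries = {}                 # feed_url -> (fetched_at, payload)
        self.upstream_fetches = 0

    def get(self, feed_url, fetch_upstream):
        now = time.time()
        entry = self.entries.get(feed_url)
        if entry and now - entry[0] < self.ttl:
            return entry[1]               # served from the gateway, no SNS traffic
        payload = fetch_upstream(feed_url)
        self.upstream_fetches += 1
        self.entries[feed_url] = (now, payload)
        return payload

cache = FeedCache(ttl_seconds=30)
fetch = lambda url: f"<timeline from {url}>"
for _ in range(100):                      # 100 client polls, one upstream request
    cache.get("https://sns.example/feed/alice", fetch)
print("upstream fetches:", cache.upstream_fetches)
```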

An Associative Class Set Generation Method for supporting Location-based Services (위치 기반 서비스 지원을 위한 연관 클래스 집합 생성 기법)

  • 김호숙;용환승
    • Journal of KIISE:Databases
    • /
    • v.31 no.3
    • /
    • pp.287-296
    • /
    • 2004
  • Recently, various location-based services have become very popular in mobile environments. In this paper, we propose a new kind of frequent item set, called an “associative class set”, for supporting location-based services that use a large spatial database in mobile computing environments, and we present a new method for generating the associative class set efficiently. The associative class set is generated by considering the temporal relation between queries, the spatial distance between requested objects, and users' access patterns. The result of our research can play a fundamental role in supporting location-based services efficiently and in overcoming the limitations of mobile environments. The associative class set can be applied to recommendation systems for geographic information systems in mobile computing environments, mobile advertising, city development planning, and client cache policies for mobile users.
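
One plausible and heavily simplified reading of how an associative class set could be mined from a query log, using time-window, distance, and support thresholds; this is an illustration of the idea, not the paper's algorithm, and all thresholds and sample data are assumptions.

```python
# Rough sketch: object classes requested close together in time and space are
# grouped into an "associative class set" when their co-occurrence count meets
# a minimum support. Thresholds and the log contents are made up.
from itertools import combinations
from collections import Counter
from math import dist

# (user, time, location (x, y), object class) query log - made-up sample data
log = [
    ("u1", 10, (0, 0), "restaurant"), ("u1", 12, (1, 1), "cafe"),
    ("u2", 40, (5, 5), "restaurant"), ("u2", 43, (6, 5), "cafe"),
    ("u3", 80, (9, 0), "hotel"),      ("u3", 85, (9, 1), "parking"),
]

TIME_WINDOW, MAX_DIST, MIN_SUPPORT = 5, 3.0, 2

pair_counts = Counter()
for (u1, t1, p1, c1), (u2, t2, p2, c2) in combinations(log, 2):
    same_session = u1 == u2 and abs(t1 - t2) <= TIME_WINDOW   # temporal relation
    nearby = dist(p1, p2) <= MAX_DIST                          # spatial distance
    if same_session and nearby and c1 != c2:
        pair_counts[frozenset((c1, c2))] += 1

associative_sets = [set(p) for p, n in pair_counts.items() if n >= MIN_SUPPORT]
print(associative_sets)   # e.g. [{'restaurant', 'cafe'}]
```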

Resource Allocation for Heterogeneous Service in Green Mobile Edge Networks Using Deep Reinforcement Learning

  • Sun, Si-yuan;Zheng, Ying;Zhou, Jun-hua;Weng, Jiu-xing;Wei, Yi-fei;Wang, Xiao-jun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.7
    • /
    • pp.2496-2512
    • /
    • 2021
  • The requirements of emerging services for powerful computing capability, high capacity, low latency, and low energy consumption pose severe challenges to the fifth-generation (5G) network. As a promising paradigm, mobile edge networks can provide services in proximity to users by deploying computing components and caches at the edge, which effectively decreases service delay. However, the coexistence of heterogeneous services and the sharing of limited resources lead to competition among services for multiple resources. This paper considers two typical heterogeneous services, computing services and content delivery services; to configure resources properly, it is crucial to develop effective offloading and caching strategies. Given the high energy consumption of 5G base stations, this paper adopts a hybrid energy supply model combining the traditional power grid with green energy, which calls for an association mechanism that allocates more service load to base stations rich in green energy so as to improve green energy utilization. We formulate the joint optimization of computation offloading, caching, and resource allocation for heterogeneous services with the objective of minimizing on-grid power consumption under the constraints of limited resources and QoS guarantees. Since the joint optimization problem is a mixed-integer nonlinear program that is intractable to solve directly, we use a deep reinforcement learning method to learn the optimal strategy through extensive training. Extensive simulation experiments show that, compared with other schemes, the proposed scheme allocates resources to heterogeneous services according to the green energy distribution, which effectively reduces on-grid energy consumption.
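
The core of the optimization objective can be illustrated with a toy reward that penalizes only the power drawn from the grid after green energy is used, which is what steers an agent toward green-rich base stations; the coefficients and numbers are illustrative assumptions, and the full DRL training loop is omitted.

```python
# Toy version of the objective: reward = -(on-grid power), where green energy
# is consumed first at each base station. All quantities are assumptions.

def on_grid_power(load_per_bs, green_per_bs, watts_per_unit_load=2.0):
    """Power drawn from the grid after green energy is used first at each BS."""
    total = 0.0
    for load, green in zip(load_per_bs, green_per_bs):
        consumed = load * watts_per_unit_load
        total += max(0.0, consumed - green)
    return total

def reward(load_per_bs, green_per_bs):
    return -on_grid_power(load_per_bs, green_per_bs)

# The same total load of 30 units split across 3 base stations with green
# supply (40, 10, 0) W: pushing load toward the green-rich BS raises the reward.
print(reward([10, 10, 10], [40, 10, 0]))   # -30.0 (grid covers BS2 and BS3)
print(reward([20, 10, 0],  [40, 10, 0]))   # -10.0 (greener association)
```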

Optimizing LRU Lock Management in the Linux Kernel for Improving Parallel Write Throughput in Many-Core CPU Systems (매니코어 CPU 시스템의 병렬 쓰기 성능 향상을 위한 리눅스 커널의 LRU 관리 최적화 기법)

  • Eun-Kyu Byun;Gibeom Gu;Kwang-Jin Oh;Jiwoo Bang
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.7
    • /
    • pp.209-216
    • /
    • 2023
  • Modern HPC systems are equipped with many-core CPUs containing dozens of cores. When performing parallel I/O on such a system, scalability is limited by the LRU lock management policy of the Linux kernel. This study proposes an improved scheme, FinerLRU, to solve the problem. FinerLRU improves the parallel write performance of file systems that use the buffer cache through fine-grained lock management, increasing the number of LRU locks up to the number of cores. The proposed method was implemented in Linux 5.18.11, and its performance was measured on two CPUs with different characteristics, Intel Ice Lake Xeon and Intel Knights Landing; a performance improvement of about two times was obtained on both types of systems.
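
The idea of splitting one global LRU lock into many finer locks can be sketched in user space as a sharded LRU protected by per-shard locks sized to the core count; this is a conceptual illustration only, not the kernel patch itself.

```python
# User-space sketch of finer-grained LRU locking: one lock per shard instead of
# one global lock, so concurrent writers on different cores rarely collide.
import os
import threading
from collections import OrderedDict

class ShardedLRU:
    def __init__(self, num_shards, capacity_per_shard):
        self.shards = [OrderedDict() for _ in range(num_shards)]
        self.locks = [threading.Lock() for _ in range(num_shards)]
        self.capacity = capacity_per_shard

    def _shard(self, page_id):
        return hash(page_id) % len(self.shards)

    def touch(self, page_id, data):
        i = self._shard(page_id)
        with self.locks[i]:                      # only this shard is serialized
            lru = self.shards[i]
            lru[page_id] = data
            lru.move_to_end(page_id)
            if len(lru) > self.capacity:
                lru.popitem(last=False)          # evict the least recently used

# Sizing the shard count to the core count mirrors the "one lock per core"
# granularity described in the abstract.
cache = ShardedLRU(num_shards=os.cpu_count() or 1, capacity_per_shard=1024)
cache.touch("page-1", b"...")
```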

Implementation of a DB-Based Virtual File System for Lightweight IoT Clouds (경량 사물 인터넷 클라우드를 위한 DB 기반 가상 파일 시스템 구현)

  • Lee, Hyung-Bong;Kwon, Ki-Hyeon
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.3 no.10
    • /
    • pp.311-322
    • /
    • 2014
  • The Internet of Things (IoT) is a concept of a connected Internet that pursues direct access to devices and sensors in converged personal, industrial, and public environments. In an IoT environment, real-time data can be accessed, the data formats and topologies of devices are diverse, and communication between users and devices is bidirectional so that actuators can be controlled. In these respects, IoT differs from the conventional Internet, in which data are produced at human-operated desktops and gathered into server systems through one-way, simple communications. For an IoT cloud or portal service, there should be a file management framework that supports a systematic naming service and a unified data access interface encompassing the variety of IoT things. This paper implements a DB-based virtual file system that maintains the attributes of IoT things in a UNIX-style file system view. Users logged in to the virtual shell can explore IoT things by navigating the virtual file system and can access them directly via UNIX-style file I/O APIs. The implemented virtual file system is lightweight and flexible because it maintains only the directory structure and descriptors for the distributed IoT things. A test of the virtual shell primitives, such as mkdir() or chdir(), shows that the virtual file system functions smoothly. Moreover, when a simple directory cache mechanism is adopted, its exploration performance is better than that of the Windows file system.
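
The "simple directory cache mechanism" mentioned at the end can be sketched as an in-memory cache of directory listings placed in front of the descriptor database; the schema, paths, and names below are illustrative assumptions.

```python
# Sketch: directory listings normally come from the descriptor DB, but recently
# used listings are kept in memory so repeated navigation avoids DB round trips.

# Stand-in for the DB of IoT-thing descriptors: path -> child entries.
DB = {
    "/": ["sensors", "actuators"],
    "/sensors": ["temp01", "humid01"],
    "/actuators": ["valve01"],
}

class VirtualFS:
    def __init__(self, cache_size=2):
        self.cache = {}                    # path -> cached listing
        self.cache_size = cache_size
        self.db_queries = 0

    def list_dir(self, path):
        if path in self.cache:             # hit: no database access
            return self.cache[path]
        self.db_queries += 1               # miss: fetch descriptors from the DB
        listing = DB.get(path, [])
        if len(self.cache) >= self.cache_size:
            self.cache.pop(next(iter(self.cache)))   # drop the oldest entry
        self.cache[path] = listing
        return listing

vfs = VirtualFS()
for _ in range(3):
    vfs.list_dir("/sensors")               # repeated exploration of one directory
print("DB queries:", vfs.db_queries)       # 1: later lookups hit the cache
```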