• Title/Summary/Keyword: Level 2 Cache

Segment-based Buffer Management for Multi-level Streaming Service in the Proxy System (프록시 시스템에서 multi-level 스트리밍 서비스를 위한 세그먼트 기반의 버퍼관리)

  • Lee, Chong-Deuk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.11
    • /
    • pp.135-142
    • /
    • 2010
  • QoS in the proxy system is heavily affected by interference such as congestion, latency, and retransmission. Multi-level streaming services are also affected by temporal synchronization, which degrades service quality. This paper proposes a new segment-based buffer management mechanism that reduces the performance degradation of streaming services caused by these drawbacks of the proxy system and enhances streaming throughput. The proposed mechanism optimizes streaming services by: 1) using segment-based buffer management, 2) minimizing overhead due to congestion and interference, and 3) minimizing retransmission due to disconnection and delay. It uses a fuzzy value $\mu$ and a cost weight $\omega$ to compute its decisions. Simulation results show that, under a stream relevance metric, the proposed mechanism outperforms the existing fixed, pyramid, and skyscraper segmentation methods in buffer cache control rate, average packet loss rate, and delay saving rate.
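
As a rough illustration of how a fuzzy relevance value $\mu$ and a cost weight $\omega$ could drive segment-level buffer decisions, the sketch below scores each cached segment and evicts the lowest-scoring one. The score formula, field names, and values are assumptions for illustration only; the paper's exact combination rule is not given here.

```c
/* Minimal sketch of segment-scored eviction, assuming each cached segment
 * carries a fuzzy relevance value mu (0..1) and a cost weight omega.
 * The product score is a hypothetical illustration, not the paper's formula. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    int    id;     /* segment identifier                      */
    double mu;     /* fuzzy relevance value, 0.0 .. 1.0       */
    double omega;  /* cost weight (e.g., retransmission cost) */
} Segment;

/* Hypothetical score: less relevant, cheaper-to-refetch segments go first. */
static double score(const Segment *s) {
    return s->mu * s->omega;
}

/* Return the index of the eviction victim: the lowest-scoring segment. */
static size_t pick_victim(const Segment *buf, size_t n) {
    size_t victim = 0;
    for (size_t i = 1; i < n; i++)
        if (score(&buf[i]) < score(&buf[victim]))
            victim = i;
    return victim;
}

int main(void) {
    Segment buf[] = {
        { 1, 0.9, 1.5 },   /* popular, costly to re-fetch  */
        { 2, 0.3, 0.4 },   /* unpopular, cheap to re-fetch */
        { 3, 0.6, 1.0 },
    };
    size_t v = pick_victim(buf, sizeof buf / sizeof buf[0]);
    printf("evict segment %d\n", buf[v].id);
    return 0;
}
```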

Improving Flash Translation Layer for Hybrid Flash-Disk Storage through Sequential Pattern Mining based 2-Level Prefetching Technique (하이브리드 플래시-디스크 저장장치용 Flash Translation Layer의 성능 개선을 위한 순차패턴 마이닝 기반 2단계 프리패칭 기법)

  • Chang, Jae-Young;Yoon, Un-Keum;Kim, Han-Joon
    • The Journal of Society for e-Business Studies
    • /
    • v.15 no.4
    • /
    • pp.101-121
    • /
    • 2010
  • This paper presents an intelligent prefetching technique that significantly improves the performance of hybrid flash-disk storage, a combination of flash memory and hard disk. Since the flash memory embedded in a hybrid device is much faster than the hard disk in terms of I/O operations, it can be utilized as a 'cache' space to improve system performance. The basic strategy for prefetching is to utilize sequential pattern mining, with which we can extract the access patterns of objects from historical access sequences. We use two techniques to enhance the performance of hybrid storage with prefetching. One is to modify the FAST algorithm for mapping the flash memory. The other is to extend the unit of prefetching to the block level as well as the file level, so that flash memory space is used effectively. To evaluate the proposed technique, we perform experiments using synthetic data and real UCC data, and demonstrate its usability.
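
The sketch below illustrates the general shape of pattern-driven prefetching: once sequential pattern mining has produced "A is usually followed by B" rules, an access to A triggers a prefetch of B into the flash cache. The pattern table, object names, and prefetch stub are hypothetical; the paper's mining step and FAST-based mapping are not shown.

```c
/* Minimal sketch of prefetching driven by mined sequential patterns. */
#include <stdio.h>
#include <string.h>

/* A mined sequential pattern: "after `current` is accessed, `next` is
 * likely to be accessed". */
typedef struct {
    const char *current;  /* object (file or block) just accessed */
    const char *next;     /* object predicted to follow           */
} Pattern;

/* Hypothetical pattern table; in the paper these rules would come from
 * sequential pattern mining over historical access sequences. */
static const Pattern patterns[] = {
    { "video1.ucc", "video2.ucc" },
    { "video2.ucc", "video3.ucc" },
};

/* Placeholder: copy the object from disk into the flash cache area. */
static void prefetch_to_flash(const char *object) {
    printf("prefetching %s into flash cache\n", object);
}

/* On each access, consult the mined patterns and prefetch the successor. */
static void on_access(const char *object) {
    for (size_t i = 0; i < sizeof patterns / sizeof patterns[0]; i++) {
        if (strcmp(patterns[i].current, object) == 0) {
            prefetch_to_flash(patterns[i].next);
            return;
        }
    }
}

int main(void) {
    on_access("video1.ucc");  /* triggers a prefetch of video2.ucc */
    return 0;
}
```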

A Peer Load Balancing Method for P2P-assisted DASH Systems (P2P 통신 병용 DASH 시스템의 피어 부하 분산 방안 연구)

  • Seo, Ju Ho;Kim, Yong Han
    • Journal of Broadcast Engineering
    • /
    • v.25 no.1
    • /
    • pp.94-104
    • /
    • 2020
  • Currently, media consumption over the fixed/mobile Internet is mostly carried out with adaptive media streaming technologies such as DASH (Dynamic Adaptive Streaming over HTTP), an ISO/IEC MPEG (Moving Picture Experts Group) standard, or technologies similar to it. All of these depend heavily on the HTTP caches that ISPs (Internet Service Providers) are obliged to provide in sufficient quantity to ensure fast Web services. As a result, as the number of media streaming users increases, the ISPs' HTTP cache burden has grown much more than the CDN (Content Delivery Network) providers' server burden. Hence ISPs charge traffic costs to CDN providers to compensate for the increased cost of HTTP caches. Recently, to reduce the traffic cost of CDN providers, a P2P (Peer-to-Peer)-assisted DASH system was proposed, and a peer selection algorithm that maximally reduces the CDN providers' traffic cost was investigated for this system. This algorithm, however, tends to concentrate the burden on the selected peer. This paper proposes a new peer selection algorithm that distributes the burden among multiple peers while maintaining an appropriate level of reduction in the CDN providers' cost. Through an implementation of the new algorithm in a Web-based media streaming system using WebRTC (Web Real-Time Communication) standard APIs, we demonstrate its effectiveness with experimental results.
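
A minimal sketch of the load-distribution idea, assuming each candidate peer advertises whether it caches the requested segment and how much it has already uploaded: the client picks the least-loaded peer that can serve the segment and falls back to the CDN otherwise. The data structure and selection rule are illustrative assumptions, not the paper's exact algorithm.

```c
/* Minimal sketch of load-aware peer selection for a P2P-assisted DASH client. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    int  id;
    int  has_segment;    /* 1 if the peer caches the requested segment */
    long bytes_served;   /* running upload load of this peer           */
} Peer;

/* Pick the least-loaded peer that holds the segment; return -1 to fall
 * back to the CDN when no peer can serve it. */
static int select_peer(const Peer *peers, size_t n) {
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (!peers[i].has_segment) continue;
        if (best < 0 || peers[i].bytes_served < peers[best].bytes_served)
            best = (int)i;
    }
    return best;
}

int main(void) {
    Peer peers[] = {
        { 1, 1, 5000000 },
        { 2, 1, 1000000 },
        { 3, 0,       0 },
    };
    int idx = select_peer(peers, sizeof peers / sizeof peers[0]);
    if (idx >= 0) {
        peers[idx].bytes_served += 2000000;  /* account for the new segment */
        printf("serve segment from peer %d\n", peers[idx].id);
    } else {
        printf("fetch segment from the CDN\n");
    }
    return 0;
}
```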

Exploiting Hardware Events to Reduce Energy Consumption of HPC Systems

  • Lee, Yongho;Kwon, Osang;Byeon, Kwangeun;Kim, Yongjun;Hong, Seokin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.8
    • /
    • pp.1-11
    • /
    • 2021
  • This paper proposes a novel mechanism called the Event-driven Uncore Frequency Scaler (eUFS) to improve the energy efficiency of HPC systems. eUFS exploits hardware events such as LAPI (Last-level cache Accesses Per Instruction) and CPI (Clock cycles Per Instruction) to dynamically adjust the uncore frequency. Hardware events are collected over a reference time period, and the target uncore frequency is determined from the collected events and the previous uncore frequency. Experiments with the NPB benchmarks demonstrate that eUFS reduces energy consumption by 6% on average for the class C and D NPB benchmarks while increasing execution time by only 2% on average.
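
The sketch below shows the shape of an event-driven uncore frequency decision in the spirit of eUFS: per-interval LAPI and CPI samples plus the previous frequency determine the next target. The thresholds, step size, and frequency bounds are assumptions for illustration; the paper does not give these constants here.

```c
/* Minimal sketch of an event-driven uncore frequency decision. */
#include <stdio.h>

#define UNCORE_MIN_MHZ 1200
#define UNCORE_MAX_MHZ 2400
#define STEP_MHZ        100

/* Hypothetical policy: memory-bound phases (high LAPI, high CPI) keep the
 * uncore fast; compute-bound phases let it drop to save energy. */
static int next_uncore_freq(double lapi, double cpi, int prev_mhz) {
    int target;
    if (lapi > 0.02 || cpi > 1.5)      /* memory-bound: raise frequency  */
        target = prev_mhz + STEP_MHZ;
    else                               /* compute-bound: lower frequency */
        target = prev_mhz - STEP_MHZ;
    if (target < UNCORE_MIN_MHZ) target = UNCORE_MIN_MHZ;
    if (target > UNCORE_MAX_MHZ) target = UNCORE_MAX_MHZ;
    return target;
}

int main(void) {
    int freq = 2000;
    /* One sampling period with few LLC accesses per instruction. */
    freq = next_uncore_freq(0.005, 0.8, freq);
    printf("next uncore frequency: %d MHz\n", freq);
    return 0;
}
```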

Real-time Implementation of MPEG-4 HVXC Encoder and Decoder on Floating Point DSP (부동 소수점 DSP를 이용한 MPEG-4 HVXC 인코더 및 디코더의 실시간 구현)

  • Kang, Kyeong-ok;Na, Hoon;Hong, Jin-Woo;Jeong, Dae-Gwon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.4
    • /
    • pp.37-44
    • /
    • 2000
  • In this paper, we describe the real-time implementation of the MPEG-4 audio HVXC (Harmonic Vector eXcitation Coding) algorithm for very low bitrates, whose target applications range from mobile communications to Internet telephony, on a current high-performance floating-point TMS320C6701 DSP. We adopted a hardware structure suited to real-time operation. For software optimization, we applied C- and assembly-language-level optimizations to time-critical functions. By utilizing the internal program memory of the DSP as a program cache, together with an internal data memory overlap technique and DMA functionality, we achieved real-time operation of the HVXC codec at both 2 kbit/s and 4 kbit/s. For the encoder at 2 kbit/s, the optimization ratio relative to the original code is about 96%. Finally, we obtained a subjective quality of MOS 2.45 at 2 kbit/s in an informal quality test.
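
A minimal sketch of the double-buffering pattern that DMA plus on-chip data memory makes possible: while the CPU encodes the frame already resident in fast internal memory, the DMA engine fills the other buffer. The dma_start()/dma_wait() stubs and buffer sizes are placeholders, not the TMS320C6701 DMA API or the actual HVXC frame layout.

```c
/* Minimal sketch of ping-pong (double) buffering with a stubbed DMA engine. */
#include <stdio.h>
#include <string.h>

#define FRAME_SAMPLES 160

static short buf[2][FRAME_SAMPLES];   /* stand-in for on-chip data memory */

static void dma_start(short *dst, int frame_no) {
    /* Placeholder: pretend the DMA copies the next input frame here. */
    memset(dst, frame_no, sizeof(short) * FRAME_SAMPLES);
}
static void dma_wait(void) { /* placeholder: wait for DMA completion */ }

static void encode_frame(const short *pcm) {
    /* Placeholder for the HVXC encoder working on one frame. */
    printf("encoding frame, first sample = %d\n", pcm[0]);
}

int main(void) {
    int cur = 0;
    dma_start(buf[cur], 0);                 /* prime the first buffer       */
    for (int frame = 1; frame <= 4; frame++) {
        dma_wait();
        dma_start(buf[cur ^ 1], frame);     /* fetch next frame in parallel */
        encode_frame(buf[cur]);             /* encode the ready frame       */
        cur ^= 1;
    }
    return 0;
}
```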

Design and Implementation of the Multi-level Pre-fetch and Deferred-flush in BADA-III for GIS Applications (GIS 응용을 위한 바다-III의 다단계 사전인출과 지연쓰기의 설계 및 구현)

  • Park, Jun-Ho;Park, Sung-Chul;Shim, Kwang-Hoon;Seong, Jun-Hwa;Park, Young-Chul
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.1 no.2
    • /
    • pp.67-79
    • /
    • 1998
  • Most GIS applications are read-intensive over a large number of spatial objects, and when those objects are composite objects, the objects contained within them are accessed as well. In GIS applications, creation, deletion, and update operations on spatial objects occur rarely, but when they do occur they involve a large number of spatial objects. Considering these characteristics, this paper proposes the concept of a multi-level pre-fetch query for retrieving a large number of spatial objects efficiently, and a deferred-flush facility for writing newly created persistent objects into the database with optimal performance, and presents the design and implementation of both ideas in the object-oriented DBMS BADA-III. The multi-level pre-fetch query retrieves the objects that satisfy the query, together with the objects contained within them down to the level specified by the user, and registers the retrieved objects in the client cache. The deferred-flush writes a large number of composite objects created by the application with minimal server overhead and a minimal number of client-server communications. These two facilities are well suited to applications that search or create a large number of composite objects, such as GIS applications.
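
The sketch below illustrates the multi-level pre-fetch idea: a query result object is fetched together with its contained objects down to a user-specified level, and every fetched object is registered in the client cache. The types and function names are illustrative, not BADA-III's actual interfaces.

```c
/* Minimal sketch of a multi-level pre-fetch over composite objects. */
#include <stdio.h>
#include <stddef.h>

#define MAX_CHILDREN 4

typedef struct SpatialObject {
    int id;
    struct SpatialObject *children[MAX_CHILDREN];  /* contained objects */
    size_t nchildren;
} SpatialObject;

/* Placeholder for inserting an object into the client-side cache. */
static void register_in_client_cache(const SpatialObject *o) {
    printf("cached object %d\n", o->id);
}

/* Fetch an object and, recursively, its contained objects down to `level`. */
static void prefetch(const SpatialObject *o, int level) {
    register_in_client_cache(o);
    if (level <= 0) return;
    for (size_t i = 0; i < o->nchildren; i++)
        prefetch(o->children[i], level - 1);
}

int main(void) {
    SpatialObject road = { 3, { NULL }, 0 };
    SpatialObject park = { 2, { NULL }, 0 };
    SpatialObject map  = { 1, { &park, &road }, 2 };
    prefetch(&map, 1);   /* pre-fetch the map and its level-1 children */
    return 0;
}
```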

The Bit-Map Trie Structure for Giga-Bit Forwarding Lookup in High-Speed Routers (고속 라우터의 기가비트 포워딩 검색을 위한 비트-맵 트라이 구조)

  • Oh, Seung-Hyun;Ahn, Jong-Suk
    • Journal of KIISE:Information Networking
    • /
    • v.28 no.2
    • /
    • pp.262-276
    • /
    • 2001
  • Recently there has been much research on developing forwarding tables that support fast routers without employing either special hardware or new protocols. This article introduces a new software-based forwarding data structure that enables forwarding lookups to be performed at gigabit speed. The forwarding table is known to be a bottleneck of router performance because lookup complexity grows with the table size. Recent software-based approaches use a Patricia trie and its variants, or hash functions keyed on the prefix length, among others. The proposed structure builds the forwarding table as a bit stream array: it constructs a trie from the routing table prefix entries and represents each pointer to a child node and each pointer to the associated forwarding table entry with a single bit. A trie and its routing prefix pointers require a large amount of memory when represented with linked lists or arrays, but the proposed data structure needs far less memory since it encodes this information with single bits. Additionally, by using a lookup method that starts the search at a chosen middle level, the search path can be shortened. The proposed data structure, called a bit-map trie, shows that a fast forwarding engine can be implemented on a conventional Pentium processor by shrinking a backbone routing table until it fits into the Level 2 cache of a Pentium II processor and by shortening the search path. Our performance experiments show that the bit-map trie achieves 5.7 million lookups per second.
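
A minimal sketch of the one-bit-per-pointer idea: each node keeps a bitmap for child pointers and a bitmap for forwarding entries, and the popcount of the set bits below a position indexes into compact arrays of children and next hops. The 4-bit stride, field names, and use of the GCC/Clang __builtin_popcount are illustrative assumptions; the paper's exact bit stream layout differs.

```c
/* Minimal sketch of a bitmap-compressed trie node and its lookup. */
#include <stdio.h>
#include <stdint.h>

#define STRIDE 4   /* bits of the address consumed per trie level */

typedef struct BmtNode {
    uint16_t child_bmp;         /* bit i set: child exists for 4-bit value i  */
    uint16_t entry_bmp;         /* bit i set: forwarding entry for value i    */
    struct BmtNode **children;  /* compact array, one slot per set child bit  */
    int *next_hops;             /* compact array, one slot per set entry bit  */
} BmtNode;

/* Number of set bits in bmp strictly below position pos (GCC/Clang builtin). */
static int rank_below(uint16_t bmp, int pos) {
    return __builtin_popcount(bmp & ((1u << pos) - 1));
}

/* Walk the trie STRIDE bits at a time, remembering the last matching
 * next hop (longest match). Returns -1 if no prefix matches. */
static int lookup(const BmtNode *node, uint32_t addr) {
    int best = -1;
    int shift = 32 - STRIDE;
    while (node && shift >= 0) {
        int idx = (addr >> shift) & ((1u << STRIDE) - 1);
        if (node->entry_bmp & (1u << idx))
            best = node->next_hops[rank_below(node->entry_bmp, idx)];
        if (node->child_bmp & (1u << idx))
            node = node->children[rank_below(node->child_bmp, idx)];
        else
            break;
        shift -= STRIDE;
    }
    return best;
}

int main(void) {
    /* Root with a single forwarding entry for addresses starting 1010... */
    int hops[] = { 7 };
    BmtNode root = { 0, 1u << 0xA, NULL, hops };
    printf("next hop: %d\n", lookup(&root, 0xA0000000u));   /* prints 7 */
    return 0;
}
```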

A Bit-Map Trie for the High-Speed Longest Prefix Search of IP Addresses (고속의 최장 IP 주소 프리픽스 검색을 위한 비트-맵 트라이)

  • Oh, Seung-Hyun;Ahn, Jong-Suk
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.2
    • /
    • pp.282-292
    • /
    • 2003
  • This paper proposes an efficient data structure for forwarding IPv4 and IPv6 packets at gigabit speed in backbone routers. The LPM (Longest Prefix Matching) search becomes a bottleneck of router performance since LPM complexity grows in proportion to the forwarding table size and the address length. To speed up the forwarding process, this paper introduces a data structure named BMT (Bit-Map Trie) that minimizes frequent main memory accesses. All the necessary search computations in BMT are performed over a small index table stored in cache. To build this small index table from the trie representation of the forwarding table, BMT represents each link pointer to a child node and each node pointer to the corresponding forwarding table entry with one bit. To overcome the poor performance of conventional tries, whose height grows as the address length increases, BMT adopts a binary search algorithm to determine the appropriate trie level at which to start the search. Simulation experiments show that BMT compacts an IPv4 backbone router's forwarding table into less than 512 Kbytes and achieves an average speed of 250 ns/packet on Pentium II processors, which is almost the same performance as the fastest conventional lookup algorithms.
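
The sketch below illustrates binary search over trie levels to choose where to start the lookup: probe the middle level, go deeper on a hit and shallower on a miss. The probe function stands in for consulting BMT's cached index table, and the monotonicity it relies on (a hit at a level implies hits at shallower levels) is what marker entries provide in binary-search-on-prefix-length schemes; the paper's exact procedure may differ.

```c
/* Minimal sketch of binary search over trie levels to pick a starting depth. */
#include <stdio.h>
#include <stdint.h>

#define MAX_LEVEL 8   /* e.g., 8 levels of a 4-bit-stride trie for IPv4 */

/* Hypothetical probe: does any prefix spanning `level` trie levels match addr?
 * A toy rule stands in for the real index-table lookup. */
static int level_has_match(uint32_t addr, int level) {
    (void)addr;
    return level <= 5;            /* pretend the longest match spans 5 levels */
}

/* Return the deepest matching level, probing only O(log MAX_LEVEL) levels. */
static int find_start_level(uint32_t addr) {
    int lo = 1, hi = MAX_LEVEL, best = 0;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (level_has_match(addr, mid)) {
            best = mid;           /* match here: try deeper levels   */
            lo = mid + 1;
        } else {
            hi = mid - 1;         /* no match: try shallower levels  */
        }
    }
    return best;
}

int main(void) {
    printf("start lookup at level %d\n", find_start_level(0xC0A80101u));
    return 0;
}
```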