• Title/Summary/Keyword: cache cost (캐시 비용)

A Peer Load Balancing Method for P2P-assisted DASH Systems (P2P 통신 병용 DASH 시스템의 피어 부하 분산 방안 연구)

  • Seo, Ju Ho;Kim, Yong Han
• Journal of Broadcast Engineering / v.25 no.1 / pp.94-104 / 2020
  • Currently, media consumption over the fixed/mobile Internet is mostly conducted via adaptive media streaming technologies such as DASH (Dynamic Adaptive Streaming over HTTP), an ISO/IEC MPEG (Moving Picture Experts Group) standard, or other technologies similar to DASH. All of these depend heavily on the HTTP caches that ISPs (Internet Service Providers) must provision generously to guarantee sufficiently fast Web services. As a result, as the number of media streaming users increases, the ISPs' HTTP cache burden has grown far more than the CDN (Content Delivery Network) providers' server burden, so ISPs charge CDN providers traffic costs to compensate for the increased cost of HTTP caches. Recently, to reduce the traffic cost of CDN providers, a P2P (Peer-to-Peer)-assisted DASH system was proposed, along with a peer selection algorithm that maximally reduces the CDN providers' traffic cost. This algorithm, however, tends to concentrate the burden on the selected peer. This paper proposes a new peer selection algorithm that distributes the burden among multiple peers while maintaining a proper reduction of the CDN providers' cost. By implementing the new algorithm in a Web-based media streaming system using the WebRTC (Web Real-Time Communication) standard APIs, we demonstrate its effectiveness with experimental results.
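
As a rough illustration of the load-distribution idea in this abstract, the sketch below selects, for each requested segment, the least-loaded peer whose upload count stays under a cap, falling back to the CDN when no peer qualifies. The `Peer` fields, the `max_uploads` cap, and `select_peer` are hypothetical names, not the paper's actual algorithm or API.

```python
# A minimal sketch of load-distributing peer selection; not the paper's method.
from dataclasses import dataclass, field

@dataclass
class Peer:
    peer_id: str
    cached_segments: set = field(default_factory=set)
    active_uploads: int = 0  # current upload burden on this peer

def select_peer(peers, segment, max_uploads=3):
    """Pick the least-loaded peer that caches the segment, capping
    per-peer uploads so the burden spreads over multiple peers.
    Returns None to signal a fallback to the CDN server."""
    candidates = [p for p in peers
                  if segment in p.cached_segments and p.active_uploads < max_uploads]
    if not candidates:
        return None  # no eligible peer: fetch the segment from the CDN
    best = min(candidates, key=lambda p: p.active_uploads)
    best.active_uploads += 1
    return best
```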

The Adaptive Multimedia Contents Service Method to Reduce Delay of MN in HMIPv6 (HMIPv6에서 MN의 지연을 최소화하는 멀티미디어 컨텐츠 서비스 방법)

  • Park, Won-Gil;Kang, Eui-Sun
• The KIPS Transactions:PartB / v.15B no.6 / pp.585-594 / 2008
  • The issues to consider when providing mobile Web services with a mobile device are seamless service and QoS-guaranteed service. HMIPv6 introduces the MAP (Mobility Anchor Point) to reduce the packet loss and transmission delay caused by disconnection. However, load concentrates on the MAP because it receives and delivers all packets for the MN (Mobile Node). As a consequence, real-time data cannot be processed quickly, while adaptive mobile services are also required for QoS-guaranteed service. Such adaptation, however, adds response-time cost to the content service owing to the hardware differences among devices. Therefore, in this paper we improve the processing of real-time data by applying a queue at the MAP for seamless service. To decrease the response-time cost, we propose a mobile Web service method with a reusable content cache based on the elements of the contents. Results from an analytical model and simulation show that the proposed method is superior under various system conditions.
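
A minimal sketch of the two mechanisms the abstract combines, using hypothetical names: a packet queue at the MAP so buffered packets are delivered seamlessly to the MN, and a content cache keyed by content element and device profile so an adaptation is computed once and reused across devices with the same profile.

```python
# A sketch under stated assumptions; the class and method names are ours.
from collections import deque

class MAP:
    def __init__(self):
        self.packet_queue = deque()   # buffers packets destined for the MN
        self.content_cache = {}      # (content_id, device_profile) -> adapted content

    def enqueue_packet(self, packet):
        self.packet_queue.append(packet)

    def deliver_all(self, mn):
        while self.packet_queue:     # drain the queue in arrival order
            mn.receive(self.packet_queue.popleft())

    def get_content(self, content_id, device_profile, adapt_fn):
        key = (content_id, device_profile)
        if key not in self.content_cache:   # adapt once, reuse afterwards
            self.content_cache[key] = adapt_fn(content_id, device_profile)
        return self.content_cache[key]
```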

Improvement in Performance of ATM Network Interface Card and Performance Evaluation (ATM 망 접속 장치의 성능 향상 방법과 성능 평가)

  • Kim, Cheul-Young;Lee, Seung-Ha;Na, Yun-Joo;Nam, Ji-Seung
• Proceedings of the Korea Information Processing Society Conference / 2001.10b / pp.1383-1386 / 2001
  • With the rapid increase in Internet users and the spread of broadband (B-ISDN) deployments, strong demand for ATM (Asynchronous Transfer Mode) network interface devices is expected, along with demand for better performance from such devices. Previous studies have pursued efficient virtual-memory page replacement algorithms and caching schemes that exploit the locality of memory references in computer programs. In this paper, we design an ATM protocol processor that applies a cache memory structure exploiting the locality of reference in network traffic, enabling improved ATM cell reception. By caching the virtual path identifier/virtual channel identifier (VPI/VCI) of ATM cells, the lookup time of the related tables during segmentation and reassembly of packets can be reduced. To evaluate the performance improvement from the cache memory, we classify the time cost (in system clock cycles) of reading and writing cell reception information among the ATM NIC processor, the internal cache memory, and the external SRAM according to cache hits and misses, feed three kinds of ATM cell streams into a simulator based on this cost model, and measure and compare three metrics for each: average cell processing time, data bus traffic ratio, and hit ratio.
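
The lookup-acceleration idea can be sketched as a small direct-mapped cache in front of the VPI/VCI connection table, charging an assumed cycle cost on hits and misses. The cycle costs and cache size below are illustrative assumptions, not the paper's measured values.

```python
# A minimal sketch of direct-mapped VPI/VCI caching with hit/miss accounting.
HIT_CYCLES, MISS_CYCLES = 2, 20   # assumed internal-cache vs external-SRAM cost

class VpiVciCache:
    def __init__(self, num_lines=256):
        self.num_lines = num_lines
        self.lines = [None] * num_lines   # each line holds (tag, table_entry)
        self.cycles = self.hits = self.misses = 0

    def lookup(self, vpi, vci, table):
        key = (vpi << 16) | vci           # combine identifiers into one key
        idx, tag = key % self.num_lines, key // self.num_lines
        line = self.lines[idx]
        if line is not None and line[0] == tag:
            self.hits += 1
            self.cycles += HIT_CYCLES
            return line[1]
        self.misses += 1
        self.cycles += MISS_CYCLES        # fetch entry from the external SRAM table
        entry = table[(vpi, vci)]
        self.lines[idx] = (tag, entry)
        return entry
```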

A Cache-based Reconfigurable Accelerator in Die-stacked DRAM (3차원 구조 DRAM의 캐시 기반 재구성형 가속기)

  • Kim, Yongjoo
• KIPS Transactions on Computer and Communication Systems / v.4 no.2 / pp.41-46 / 2015
  • The demand for low-power, high-performance systems is soaring as the mobile and small electronic device markets expand. 3D die-stacking is being widely studied as a next-generation integration technology because of its high density and low access time. We propose a 3D die-stacked DRAM that includes a reconfigurable accelerator in the logic layer of the DRAM, and we discuss and suggest a cache-based local memory for that accelerator. Because of its location, the reconfigurable accelerator in the logic layer of the 3D die-stacked DRAM reduces the overhead of data management and transfer, and can therefore increase performance substantially. The proposed system achieves a maximum speedup of 24.8.
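
A back-of-envelope model of why the logic-layer placement helps: only the data-transfer term shrinks, so transfer-bound kernels approach the latency ratio while compute-bound kernels gain little. The latencies below are assumptions for illustration, not measurements from the paper.

```python
# A toy speedup model under assumed access latencies (nanoseconds).
def transfer_speedup(compute_ns, words, offchip_ns=100, logic_ns=20):
    """Whole-kernel speedup when only the data-transfer portion shrinks."""
    baseline = compute_ns + words * offchip_ns  # host accelerator over the memory bus
    stacked  = compute_ns + words * logic_ns    # accelerator in the DRAM logic layer
    return baseline / stacked

# A transfer-bound kernel approaches offchip_ns/logic_ns = 5x; a compute-bound
# kernel (large compute_ns, few words) stays near 1x.
print(transfer_speedup(compute_ns=1_000, words=10_000))
```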

Prefetching for Broadcasting Correlated Data (상호 연관 데이터(correlated data)의 브로드캐스트를 위한 prefetching)

  • 최정필;신성욱
• Proceedings of the Korea Society for Simulation Conference / 2004.05a / pp.30-35 / 2004
  • In mobile environments, broadcast is a very useful data delivery method because of its scalability. In push-based data delivery, a server repeatedly broadcasts various data items to clients over a wide bandwidth [1,2]. Research on the correlation among data items in broadcast-based information systems has been insufficient. When correlated data are broadcast, clients naturally request sets of correlated items, and existing scheduling and caching techniques must change once data correlation is taken into account. CBS [3] proposed computing the correlation between all data items, finding a minimum-cost path, and broadcasting the items in that order. The CBS technique has several problems: clients do not request correlated data simultaneously; the minimum-cost path, an NP-hard problem, must be computed in real time over many data items; and the correlation between data items is defined differently for each client. This paper therefore proposes a prefetching technique for broadcast correlated data that reduces response time. The proposed CT technique manages the cache by considering both the degree of correlation and the broadcast wait time. We also introduce ACT, an algorithm that applies CT practically, and examine the performance and characteristics of CT through simulation.
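
In the spirit of the proposed CT technique, the sketch below scores each cached item by its correlation to recently requested items weighted by the wait until its next broadcast, and evicts the lowest-scoring item. The exact scoring function is an assumption; the paper's own policy may differ.

```python
# A minimal sketch of correlation- and wait-time-aware cache eviction.
def rebroadcast_wait(item, schedule, now, cycle_len):
    """Slots until the item reappears on a cyclic broadcast schedule."""
    pos = schedule.index(item)
    return (pos - now) % cycle_len or cycle_len

def evict_victim(cache, recent_items, correlation, schedule, now):
    """Evict the cached item that is cheapest to lose: weakly correlated with
    recent requests and soon to reappear on the broadcast channel."""
    def keep_value(item):
        corr = max((correlation.get((item, r), 0.0) for r in recent_items),
                   default=0.0)
        return corr * rebroadcast_wait(item, schedule, now, len(schedule))
    return min(cache, key=keep_value)
```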

Compact Field Remapping for Dynamically Allocated Structures (동적으로 할당된 구조체를 위한 압축된 필드 재배치)

  • Kim, Jeong-Eun;Han, Hwan-Soo
• Journal of KIISE:Software and Applications / v.32 no.10 / pp.1003-1012 / 2005
  • The most significant difference between embedded systems and general-purpose systems is that embedded systems may use only limited resources, including battery and memory. In particular, the number of applications dealing with multimedia data is increasing. In such systems with heavy data computation, memory access delay is one of the major bottlenecks hurting system performance, so many researchers have investigated techniques to reduce the memory access cost. Most programs exhibit locality in their memory references. Temporal locality means that a resource accessed at one point will be used again in the near future; spatial locality means that a resource is more likely to be used if resources near it have just been accessed. The latest embedded processors usually adopt cache memory to exploit these two types of locality: accessing cache memory is faster than accessing off-chip memory, reducing the latency. In this paper we propose an enhanced dynamic allocation technique for structure-type data that eliminates unused memory space and reduces both the cache miss rate and application execution time. The proposed approach aggregates fields from multiple dynamically allocated records and remaps them consecutively onto the memory space. Experiments on the Olden benchmarks show a 13.9% drop in the L1 cache miss rate and a 15.9% drop in the L2 cache miss rate on average, compared to previously proposed techniques. We also find execution time reduced by 10.9% on average, compared to the previous work.
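
The remapping idea can be illustrated in miniature: rather than allocating whole records, the same field of many records is stored contiguously (a structure-of-arrays pool), so a traversal that touches one field enjoys dense spatial locality. The names below are hypothetical, and a real implementation would operate at the allocator level in C rather than in Python.

```python
# A minimal illustration of compact field remapping, assuming our own names.
class FieldPool:
    """Allocates 'records' as indices into per-field arrays (structure of
    arrays), so identical fields of many records sit contiguously."""
    def __init__(self, fields):
        self.arrays = {f: [] for f in fields}

    def alloc(self, **values):
        for f, arr in self.arrays.items():
            arr.append(values.get(f))
        return len(next(iter(self.arrays.values()))) - 1  # record handle

    def get(self, field, handle):
        return self.arrays[field][handle]

pool = FieldPool(["key", "next", "payload"])
h = pool.alloc(key=42, payload="data")
# Scanning pool.arrays["key"] now has the spatial locality of a dense array,
# instead of striding across whole heap-allocated records.
```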

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
• Journal of KIISE:Computing Practices and Letters / v.13 no.5 / pp.293-299 / 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvatures, and other potential causes of gas explosions. Two major data access patterns are apparent when an analyst accesses the pipeline signal data. The first is a sequential pattern, where an analyst reads the sensor data only once, in sequential fashion. The second is a repetitive pattern, where an analyst repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern, and the problem becomes serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, which treats the pipeline sensor data as multiple time-series and caches those time-series efficiently. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of the signal cache line as the caching unit: a set of time-series signal data for a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that uses no caching, indicating that the caching overhead in T-Cache is negligible.
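
A minimal sketch of the signal-cache-line concept: the client caches fixed-distance slices of each sensor's time series under LRU replacement, so repetitive reads within a range hit memory instead of the server. The line size, capacity, and `fetch` callback are assumptions, not T-Cache's actual parameters.

```python
# A sketch of client-side caching of fixed-distance signal slices (LRU).
from collections import OrderedDict

class SignalCache:
    def __init__(self, line_meters=100, capacity=1024, fetch=None):
        self.line_meters, self.capacity = line_meters, capacity
        self.fetch = fetch           # fetch(sensor, start, end) -> server read
        self.lines = OrderedDict()   # (sensor, line_no) -> signal slice

    def read(self, sensor, position):
        line_no = int(position // self.line_meters)
        key = (sensor, line_no)
        if key in self.lines:
            self.lines.move_to_end(key)    # LRU: mark as most recently used
            return self.lines[key]
        start = line_no * self.line_meters
        data = self.fetch(sensor, start, start + self.line_meters)
        self.lines[key] = data
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # evict least recently used line
        return data
```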

An Efficient MBR Compression Technique for Main Memory Multi-dimensional Indexes (메인 메모리 다차원 인덱스를 위한 효율적인 MBR 압축 기법)

  • Kim, Joung-Joon;Kang, Hong-Koo;Kim, Dong-Oh;Han, Ki-Joon
• Journal of Korea Spatial Information System Society / v.9 no.2 / pp.13-23 / 2007
  • Recently there is growing interest in LBS (Location-Based Services), which require real-time service, and in spatial main-memory DBMSs for efficient telematics services. To optimize the existing disk-based multi-dimensional indexes of a spatial main-memory DBMS for main memory, multi-dimensional index structures have been proposed that minimize cache misses by reducing the entry size. However, because reducing the entry size requires compression relative to the MBR of the parent node or removal of redundant MBRs, the cost of MBR reconstruction increases on index updates and search efficiency suffers. To reduce the cost of MBR reconstruction, this paper proposes the RSMBR (Relative-Sized MBR) compression technique, which applies the base point of compression differently for broad and narrow distributions. For broad distributions, compression is based on the left-bottom point of the extended MBR of the parent node; for narrow distributions, the whole MBR is divided into equal-sized cells and compression is based on the left-bottom point of each cell. In addition, MBRs are compressed using relative coordinates and sizes to reduce the cost of index search. Finally, we evaluate the performance of the proposed RSMBR compression technique using real data and demonstrate its superiority.
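
The relative-coordinate encoding can be sketched as follows: each MBR is stored as a small offset from the base point (the parent's left-bottom point, or a cell's left-bottom point for narrow distributions) plus a width and height, with offsets rounded down and sizes rounded up so the decoded MBR always contains the original and search never misses. This is one reading of the scheme under stated assumptions, not the paper's exact encoding.

```python
# A minimal sketch of conservative relative MBR compression.
import math

def compress_mbr(mbr, base_point):
    """Encode ((xmin, ymin), (xmax, ymax)) as small integers relative to the
    base point; flooring offsets and ceiling sizes keeps the decoded MBR a
    superset of the original, so queries may over-include but never miss."""
    bx, by = base_point
    (xmin, ymin), (xmax, ymax) = mbr
    ox, oy = math.floor(xmin - bx), math.floor(ymin - by)
    w = math.ceil(xmax - bx) - ox
    h = math.ceil(ymax - by) - oy
    return ox, oy, w, h

def decompress_mbr(rel, base_point):
    bx, by = base_point
    ox, oy, w, h = rel
    return (bx + ox, by + oy), (bx + ox + w, by + oy + h)

rel = compress_mbr(((1.2, 2.7), (3.4, 5.1)), base_point=(0, 0))
print(decompress_mbr(rel, (0, 0)))  # ((1, 2), (4, 6)) contains the original
```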

The Study of the Object Replication Management using Adaptive Duplication Object Algorithm (적응적 중복 객체 알고리즘을 이용한 객체 복제본 관리 연구)

  • 박종선;장용철;오수열
• Journal of the Korea Society of Computer and Information / v.8 no.1 / pp.51-59 / 2003
  • In distributed object replication systems, it is effective to place identical copies of a shared object on multiple nodes. Nodes store access information in their local caches as they access the system, then fetch and reuse it when needed. Over time, however, coherence problems arise because the data can be updated by other nodes, so to keep the system coherent we need a mechanism that manages replicas effectively and improves the performance and availability of the system. In this paper, to maintain coherence in a shared-memory setting, the proposed adaptive duplication object (ADO) algorithm keeps objects coherent and attains bounded parallel performance with no additional cost beyond the coherence cost. Furthermore, to minimize the coherence maintenance cost, which is the biggest overhead of replication, the number and placement of object replicas, the most important factors since they determine that cost, must be managed effectively. We therefore study an adaptive duplication object management mechanism that improves overall run time.
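
The replica-count trade-off the abstract describes can be made concrete with a toy cost model, using assumed per-access costs: more replicas reduce remote-read cost but multiply coherence (update) cost, so the best count adapts to the read/write mix. This is an illustration of the trade-off, not the ADO algorithm itself.

```python
# A toy cost model for choosing the number of replicas; costs are assumptions.
def total_cost(replicas, reads, writes, nodes,
               remote_read=10.0, local_read=1.0, update_per_copy=5.0):
    """Reads hit a local replica with probability replicas/nodes (assuming
    uniform placement); every write must update all replicas for coherence."""
    p_local = replicas / nodes
    read_cost = reads * (p_local * local_read + (1 - p_local) * remote_read)
    write_cost = writes * replicas * update_per_copy
    return read_cost + write_cost

def best_replica_count(reads, writes, nodes):
    return min(range(1, nodes + 1),
               key=lambda r: total_cost(r, reads, writes, nodes))

# Read-heavy workloads favor many replicas; write-heavy workloads favor few.
print(best_replica_count(reads=1000, writes=10, nodes=8))   # -> 8
print(best_replica_count(reads=10, writes=1000, nodes=8))   # -> 1
```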

File-System-Level SSD Caching for Improving Application Launch Time (응용프로그램의 기동시간 단축을 위한 파일 시스템 수준의 SSD 캐싱 기법)

  • Han, Changhee;Ryu, Junhee;Lee, Dongeun;Kang, Kyungtae;Shin, Heonshik
• Journal of KIISE / v.42 no.6 / pp.691-698 / 2015
    • 2015
  • Application launch time is an important performance metric to user experience in desktop and laptop environment, which mostly depends on the performance of secondary storage. Application launch times can be reduced by utilizing solid-state drive (SSD) instead of hard disk drive (HDD). However, considering a cost-performance trade-off, utilizing SSDs as caches for slow HDDs is a practicable alternative in reducing the application launch times. We propose a new SSD caching scheme which migrates data blocks from HDDs to SSDs. Our scheme operates entirely in the file system level and does not require an extra layer for mapping SSD-cached data that is essential in most other schemes. In particular, our scheme does not incur mapping overheads that cause significant burdens on the main memory, CPU, and SSD space for mapping table. Experimental results conducted with 8 popular applications demonstrate our scheme yields 56% of performance gain in application launch, when data blocks along with metadata are migrated.