• Title/Summary/Keyword: Cache Server


Impact Evaluation of DDoS Attacks on DNS Cache Server Using Queuing Model

  • Wang, Zheng;Tseng, Shian-Shyong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.4
    • /
    • pp.895-909
    • /
    • 2013
  • Distributed Denial-of-Service (DDoS) attacks towards name servers of the Domain Name System (DNS) threaten to disrupt this critical service. This paper studies the vulnerability of the cache server to flooding DNS query traffic. Because the cache server provides the resolution service, incoming DNS requests, including massive attack traffic, are held in its waiting queue. Each request remains queued until the corresponding response is returned from the authoritative server or the request times out. The victim cache server is thus overloaded by the pounding traffic and thereafter goes down. The impact of such attacks is analyzed via a queuing model of both the cache server and the authoritative server. Some specific limits hold for this practical dual queuing process, such as the limited sojourn time in the cache server's queue and the independence of the two queuing processes. Analytical results are presented to evaluate the impact of DDoS attacks on the cache server. Finally, numerical results are provided for further analysis.
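
As a rough illustration of the kind of queuing analysis described above, the sketch below assumes a single M/M/1 queue at the cache server with legitimate query rate λ, attack rate λ_a, service rate μ, and sojourn limit T; these symbols and the single-queue simplification are assumptions here, not the paper's actual dual-queue model.

```latex
% Minimal M/M/1 sketch (an assumed simplification of the paper's dual queuing process).
\[
  \rho = \frac{\lambda + \lambda_a}{\mu}, \qquad
  W = \frac{1}{\mu - (\lambda + \lambda_a)} \quad \text{for } \lambda + \lambda_a < \mu .
\]
% Once \lambda + \lambda_a \ge \mu the queue grows without bound; below that point the
% sojourn time is exponentially distributed, so with a sojourn limit T the timeout fraction is
\[
  P(W > T) = e^{-\left(\mu - \lambda - \lambda_a\right) T}.
\]
```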

A Study on the Data Retrieval By Using a Cache Forward/Backward Technique (캐쉬 Forward/Backward기법을 이용한 데이터 검색에 관한 연구)

  • Kim Soo-Jang
    • 한국정보통신설비학회:학술대회논문집
    • /
    • 2003.08a
    • /
    • pp.229-233
    • /
    • 2003
  • Recently, as the number of Internet users has grown rapidly, fast service has become a major concern. In particular, operations such as insertion, deletion, and modification in a database system may impose long waiting times on users and can therefore cause user complaints. This paper discusses the cache of the application server, which is widely used in the 3-tier architecture. A conventional application server stores data in its own cache and serves subsequent requests for the same data from that cache. In contrast, this paper proposes managing the connected clients, building a cache on each client, and, when the application server or the database server cannot provide the service, locating the client that holds the most recent data and serving the data from that client's cache.
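
The fallback idea above can be sketched as follows; this is a hypothetical illustration (class names, timestamps, and data structures are assumptions), not the paper's actual implementation.

```python
# Hypothetical sketch: each connected client keeps its own cache, and when the
# application server or database server cannot serve a request, the application
# server answers from the client cache holding the most recent copy of the data.
import time
from typing import Any, Dict, Optional, Tuple

class ClientCache:
    def __init__(self, client_id: str):
        self.client_id = client_id
        self.entries: Dict[str, Tuple[float, Any]] = {}   # key -> (timestamp, value)

    def put(self, key: str, value: Any) -> None:
        self.entries[key] = (time.time(), value)

    def get(self, key: str) -> Optional[Tuple[float, Any]]:
        return self.entries.get(key)

class ApplicationServer:
    def __init__(self) -> None:
        self.clients: Dict[str, ClientCache] = {}   # connected clients managed by the server
        self.server_cache: Dict[str, Any] = {}
        self.available = True                       # whether the server/database can serve data

    def fetch(self, key: str) -> Optional[Any]:
        if self.available and key in self.server_cache:
            return self.server_cache[key]            # normal path: application server cache
        # Fallback path: find the client whose cached copy of `key` is the most recent.
        newest: Optional[Tuple[float, Any]] = None
        for cache in self.clients.values():
            entry = cache.get(key)
            if entry is not None and (newest is None or entry[0] > newest[0]):
                newest = entry
        return newest[1] if newest is not None else None
```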


WWW Cache Replacement Algorithm Based on the Network-distance

  • Kamizato, Masaru;Nagata, Tomokazu;Taniguchi, Yuji;Tamaki, Shiro
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.238-241
    • /
    • 2002
  • With the growing popularity of the Internet, the amount of data in the network has rapidly increased. As a result, the fall in response time from WWW servers, which is caused by network traffic and the burden on the WWW servers, has become more of an issue. The problem is aggravated by the redundancy of many people requesting the same pages, even though they browse the same ones. To reduce this redundancy, a WWW cache server is commonly used to store WWW page data and reuse it. However, WWW caching, unlike CPU and disk caching, is known for the difficulty of improving the cache hit rate. Consequently, it is difficult to choose effective WWW data to be stored from all the data flowing through the WWW cache server. On the other hand, there is room for improvement in the cache replacement algorithms commonly used by WWW cache servers. In our study, we try to realize a WWW cache server that stresses the improvement of response time. To this end, we propose a new cache replacement algorithm that focuses on the network distance from the WWW cache server to the WWW server holding the page data requested by the user.
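
A minimal sketch of a distance-aware replacement policy in the spirit described above: when the cache is full, evict the object whose origin WWW server is closest, since it is the cheapest to re-fetch. The scoring rule and field names are illustrative assumptions, not the algorithm from the paper.

```python
# Distance-aware cache replacement sketch: keep objects from distant servers,
# evict objects from nearby servers first (ties broken by evicting larger objects).
from dataclasses import dataclass
from typing import Dict

@dataclass
class CachedObject:
    url: str
    size: int          # bytes
    distance: float    # measured network distance to the origin server (e.g., RTT in ms)

class DistanceAwareCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
        self.objects: Dict[str, CachedObject] = {}

    def _evict_one(self) -> None:
        victim = min(self.objects.values(), key=lambda o: (o.distance, -o.size))
        self.used -= victim.size
        del self.objects[victim.url]

    def insert(self, obj: CachedObject) -> None:
        while self.objects and self.used + obj.size > self.capacity:
            self._evict_one()
        self.objects[obj.url] = obj
        self.used += obj.size
```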


Hashing Method with Dynamic Server Information for Load Balancing on a Scalable Cluster of Cache Servers (확장성 있는 캐시 서버 클러스터에서의 부하 분산을 위한 동적 서버 정보 기반의 해싱 기법)

  • Kwak, Hu-Keun;Chung, Kyu-Sik
    • The KIPS Transactions:PartA
    • /
    • v.14A no.5
    • /
    • pp.269-278
    • /
    • 2007
  • Caching in a cache server cluster environment has the advantage of minimizing the request and response time of internet traffic and web users. One of the methods that increases the cache hit ratio is to use a hash function with cooperative caching, which keeps the total cache memory at a fixed size regardless of the number of cache servers. On the contrary, if there is no cooperative caching, the total size of cache memory increases in proportion to the number of cache servers, since each cache server must keep all the cache data. The disadvantage of the hashing method is that clients' requests stress a few servers among all the cache servers due to the characteristics of hashing, and the overall performance of a cache server cluster therefore depends on a few servers. In this paper, we propose a method that distributes client requests uniformly among cache servers using dynamic server information. We performed experiments using 16 PCs. Experimental results show the uniform distribution of client requests among the cache servers.
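
A rough sketch of the combination described above, with the load threshold and field names as assumptions rather than the paper's exact scheme: a hash of the URL gives each object a home cache server (so cooperative caching keeps the total cache memory fixed), and dynamically reported load information diverts requests away from servers that become hot spots.

```python
# Hash-based server selection adjusted by dynamic load information (illustrative only).
import hashlib
from typing import List

class CacheServerInfo:
    def __init__(self, name: str):
        self.name = name
        self.load = 0.0          # dynamically reported load, e.g. requests per second

def pick_server(url: str, servers: List[CacheServerInfo],
                overload_factor: float = 1.5) -> CacheServerInfo:
    # Hashing step: deterministic mapping of the URL to its home server.
    h = int(hashlib.sha1(url.encode()).hexdigest(), 16)
    primary = servers[h % len(servers)]
    # Dynamic step: if the home server is far busier than average, fall back
    # to the least-loaded server to keep the request distribution uniform.
    avg_load = sum(s.load for s in servers) / len(servers)
    if avg_load > 0 and primary.load > overload_factor * avg_load:
        return min(servers, key=lambda s: s.load)
    return primary
```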

A RTSP/RTP Stream Control Mechanism for Streaming Cache Server (스트리밍 미디어 캐쉬 서버를 위한 RTSP/RTP 스트림 제어 기법)

  • 오재학;차호정;최영근
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.3
    • /
    • pp.254-265
    • /
    • 2003
  • This paper presents the design and implementation of the stream control mechanisms that are necessary for developing an efficient streaming cache server. The streaming protocols used in our implementation are the RTSP/RTP/RTCP standards. The mechanisms support both on-demand media caching and real-time media splitting applications. The core of the stream control includes the session management, which handles the RTSP/RTCP control session and the RTP transport session, and the cache block management, which efficiently manages the RTP-based cache blocks stored in the cache server. The streaming cache server with the proposed stream control mechanism has been successfully implemented on a Linux platform, and it works well with Apple's QTSS server and the QuickTime player for both on-demand and splitting media services.
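
The sketch below illustrates, under assumed names and a made-up block size, the two components named above: an RTSP session table and a cache block store that indexes RTP packets by media URL and sequence-number range, so blocks can be replayed for on-demand requests or filled while splitting a live relay. It is not the paper's implementation.

```python
# Illustrative session table and RTP cache block management for a streaming cache server.
from typing import Dict, List, Tuple

class RtpCacheBlock:
    def __init__(self, first_seq: int):
        self.first_seq = first_seq
        self.packets: List[bytes] = []     # raw RTP packets in sequence order

class StreamingCache:
    BLOCK_SIZE = 256                       # packets per cache block (an assumed constant)

    def __init__(self) -> None:
        self.sessions: Dict[str, dict] = {}                      # RTSP session id -> session state
        self.blocks: Dict[Tuple[str, int], RtpCacheBlock] = {}   # (media url, block number) -> block

    def store_packet(self, url: str, seq: int, packet: bytes) -> None:
        # Called while relaying (splitting) a stream: packets are appended to their block.
        block_no = seq // self.BLOCK_SIZE
        block = self.blocks.setdefault((url, block_no),
                                       RtpCacheBlock(block_no * self.BLOCK_SIZE))
        block.packets.append(packet)

    def read_block(self, url: str, seq: int) -> List[bytes]:
        # Called for an on-demand request served from the cache.
        block = self.blocks.get((url, seq // self.BLOCK_SIZE))
        return block.packets if block is not None else []
```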

Enhanced Client Polling with Multilevel Pre-Fetching Algorithm for Wireless Networks

  • Ahmad Nazrul Muhaimin;Geok Tan Kim
    • Journal of Communications and Networks
    • /
    • v.9 no.1
    • /
    • pp.43-49
    • /
    • 2007
  • The implementation of client polling as a weak cache coherence mechanism has two major drawbacks: first, the cache may return a stale copy if the object has changed on the origin server while the cached copy is still considered valid; second, the cache can invalidate a cached copy that is still valid on the server. Therefore, we propose multilevel pre-fetching (MLP) in conjunction with client polling to address these drawbacks. MLP is introduced to improve the level of freshness among the cached objects. The simulation results presented in this paper show that the proposed MLP significantly minimizes the number of stale objects and reduces the invalidation messages sent out to the server, i.e., it increases the cache hit rate.
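
A minimal sketch of client polling combined with a multilevel pre-fetch, assuming hypothetical `fetch` and `links_of` helpers (they are not from the paper): when an object is revalidated against the origin server, the objects it links to are fetched ahead of time up to a configurable number of levels, so more cached copies stay fresh.

```python
# Client polling plus multilevel pre-fetching (illustrative sketch).
from typing import Callable, Dict, List

class PollingCache:
    def __init__(self, fetch: Callable[[str], bytes], links_of: Callable[[bytes], List[str]]):
        self.fetch = fetch          # hypothetical helper: retrieve an object from the origin server
        self.links_of = links_of    # hypothetical helper: extract linked object URLs from a body
        self.store: Dict[str, bytes] = {}

    def poll_and_prefetch(self, url: str, levels: int = 2) -> bytes:
        body = self.fetch(url)      # client polling: revalidate the object with the origin server
        self.store[url] = body
        frontier = [url]
        for _ in range(levels):     # multilevel pre-fetching: follow links level by level
            next_frontier: List[str] = []
            for u in frontier:
                for link in self.links_of(self.store[u]):
                    if link not in self.store:
                        self.store[link] = self.fetch(link)
                        next_frontier.append(link)
            frontier = next_frontier
        return body
```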

DOC: A Distributed Object Caching System for Information Infrastructure (분산 환경에서의 객체 캐슁)

  • 이태희;심준호;이상구
    • Proceedings of the CALSEC Conference
    • /
    • 2003.09a
    • /
    • pp.249-254
    • /
    • 2003
  • Object caching is a desirable feature for improving both the scalability and the performance of distributed application systems for the information infrastructure, the information management system leveraging the power of network computing. However, in order to exploit such benefits, we claim that the following problems should be considered when designing any object cache system: cache server placement, cache replacement, and cache synchronization. We are developing DOC, a Distributed Object Caching system, as part of building our information infrastructure. In this paper, we show how these problems are interrelated and focus on how we handle the cache server placement problem.
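
As a generic illustration of the cache server placement problem mentioned above (not DOC's actual algorithm), a simple greedy heuristic picks, one at a time, the candidate location that most reduces the total request-weighted distance from clients to their nearest cache server; all names and inputs are assumptions.

```python
# Greedy cache server placement sketch.
from typing import Dict, List

def place_cache_servers(distance: Dict[str, Dict[str, float]],   # client -> candidate -> distance
                        request_rate: Dict[str, float],          # client -> request rate
                        candidates: List[str],
                        k: int) -> List[str]:
    chosen: List[str] = []
    for _ in range(min(k, len(candidates))):
        best, best_cost = None, float("inf")
        for c in candidates:
            if c in chosen:
                continue
            trial = chosen + [c]
            # Each client is served by its nearest chosen cache server.
            cost = sum(rate * min(distance[client][s] for s in trial)
                       for client, rate in request_rate.items())
            if cost < best_cost:
                best, best_cost = c, cost
        chosen.append(best)
    return chosen
```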


Wireless Caching Algorithm Based on User's Context in Smallcell Environments (소형셀 환경에서 사용자 컨텍스트 기반 무선 캐시 알고리즘)

  • Jung, Hyun Ki;Jung, Soyi;Lee, Dong Hak;Lee, Seung Que;Kim, Jae-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.7
    • /
    • pp.789-798
    • /
    • 2016
  • In this paper, we propose a cache algorithm based on the user's context for enterprise/urban smallcell environments. Smallcell caching stores mobile users' data traffic in a storage unit located in the smallcell base station, which has the effect of reducing the traffic volume of the core network. In our algorithm, contrary to existing smallcell cache algorithms, the cache storage resides in an edge server, following the concept of Mobile Edge Computing. In order to reflect users' characteristics, the edge server classifies users into several groups based on the user's context. The edge server also adjusts the storage size and the cache replacement frequency of each group to improve cache efficiency. As a result of the performance evaluation, the proposed algorithm improves the cache hit ratio by about 11% and the cache efficiency by about 5.5% compared to the existing cache algorithm.
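
The sketch below illustrates the two knobs described above with assumed parameter names and proportions (not the paper's exact formulas): the edge server groups users by context, then resizes each group's cache partition and adjusts how often each partition runs replacement according to that group's share of the observed requests.

```python
# Per-group cache partitioning on the edge server (illustrative heuristic).
from typing import Dict

def rebalance_partitions(total_cache_size: int,
                         requests_per_group: Dict[str, int],
                         base_period_s: float = 60.0) -> Dict[str, dict]:
    total_requests = sum(requests_per_group.values()) or 1
    plan: Dict[str, dict] = {}
    for group, reqs in requests_per_group.items():
        share = reqs / total_requests
        plan[group] = {
            # Busier groups get a larger share of the cache storage...
            "partition_size": int(total_cache_size * share),
            # ...and run cache replacement more frequently (shorter period).
            "replacement_period_s": base_period_s * (1.0 - share) + 5.0,
        }
    return plan
```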

A study on incrementally expandable online game server architecture (서비스 단계별 확장 가능한 온라인 게임 서버 구조에 대한 연구)

  • Kim Jeong-Hoon
    • Journal of the Korea Computer Industry Society
    • /
    • v.7 no.3
    • /
    • pp.237-244
    • /
    • 2006
  • The purpose of this study is to propose an online game server architecture that can expand as the number of users increases. In most online game servers, there is a server group composed of a login server, a cache server, a database server, a game server, and an NPC server, and when the number of users increases, an additional server group with the same structure is installed. The server architecture proposed in this study does not install a server group composed of a login server, a cache server, a database server, a game server, an NPC server, and so on, but installs a game server only. When a cache server or a database server becomes necessary, the required servers are installed additionally, thus reducing costs.
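
A configuration-style sketch of the incremental expansion described above, with thresholds and component names as illustrative assumptions: the game server starts on its own, and a cache server or database server is attached only once the concurrent-user count makes it necessary.

```python
# Incrementally expandable game server sketch: attach cache/DB servers on demand.
class GameServer:
    CACHE_THRESHOLD = 2_000     # assumed concurrent users before a cache server is added
    DB_THRESHOLD = 10_000       # assumed concurrent users before a database server is added

    def __init__(self) -> None:
        self.local_state = {}          # in-process state used in the initial single-server stage
        self.cache_server = None       # attached later, only when needed
        self.db_server = None

    def on_user_count_changed(self, users: int) -> None:
        if users >= self.CACHE_THRESHOLD and self.cache_server is None:
            self.cache_server = self.provision("cache")
        if users >= self.DB_THRESHOLD and self.db_server is None:
            self.db_server = self.provision("database")

    def provision(self, role: str) -> dict:
        # Placeholder for whatever deployment mechanism actually starts the new server.
        return {"role": role, "status": "running"}
```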


Increasing a Mobile Client's Cache Reusability in Wireless Client - Server Environments (무선 클라이언트-서버 환경에서 이동 클라이언트의 캐시 데이타 재사용율 향상기법)

  • Yi Song-Yi
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.5
    • /
    • pp.282-296
    • /
    • 2006
  • In a wireless client-server environment, data broadcasting is an efficient data dissemination method: a server broadcasts data, and some of the broadcast data are cached in a mobile client's cache to save the narrow communication bandwidth, the limited resources, and data access time. A server also broadcasts invalidation reports (IRs) to maintain consistency between the server data and a client's cached data. Most existing work on this cache consistency problem simply purges the entire cache when the disconnection time is long enough to miss a certain number (the window size) of IRs. This paper presents a cache invalidation method to increase mobile clients' cache reusability in the case of a long disconnection. Instead of simply dropping the entire cache regardless of its consistency, a client compares the cost of purging all the data with the cost of a selective purge. If the cost of dropping the entire cache is higher, the client maintains the cache and selectively purges inconsistent data, using uplink bandwidth for validation requests. The simulation results show that this scheme increases cache reusability because it effectively considers the update rates and the broadcast frequencies of cached data when estimating the cost of cache maintenance.
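
A rough sketch of the cost comparison described above; the per-item probabilities and unit costs are illustrative assumptions, and the paper's cost model (which uses update rates and broadcast frequencies) is more detailed. After a long disconnection, the client keeps its cache only if validating everything and refetching the items that actually changed is cheaper than refetching everything after a full purge.

```python
# Full purge vs. selective purge cost comparison (illustrative sketch).
from typing import List

def should_keep_cache(update_prob: List[float],   # per-item probability it changed while disconnected
                      fetch_cost: float,          # cost to re-acquire one item from the broadcast
                      validate_cost: float) -> bool:
    n = len(update_prob)
    cost_full_purge = n * fetch_cost
    cost_selective = n * validate_cost + sum(p * fetch_cost for p in update_prob)
    return cost_selective < cost_full_purge       # keep the cache and purge selectively if cheaper

# Example: 100 items, each 10% likely to have changed, validation much cheaper than refetching:
# should_keep_cache([0.1] * 100, fetch_cost=10.0, validate_cost=0.5)  ->  True
```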