• Title/Summary/Keyword: caching performance


Design and Implementation of an Embedded Spatial MMDBMS for Spatial Mobile Devices (공간 모바일 장치를 위한 내장형 공간 MMDBMS의 설계 및 구현)

  • Park, Ji-Woong;Kim, Joung-Joon;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society / v.7 no.1 s.13 / pp.25-37 / 2005
  • Recently, with the development of wireless communications and mobile computing, interest in mobile computing has been rising. Mobile computing can be regarded as an environment in which a user carries mobile devices, such as a PDA or a notebook, and shares resources with a server computer via wireless communications. A mobile database is a database used on these mobile devices; it can be applied in fields such as insurance, banking, and medical treatment. In particular, LBS (Location Based Service), which utilizes the location information of users, has become an essential field of mobile computing. To support LBS in the mobile environment, there must be an Embedded Spatial MMDBMS (Main-Memory Database Management System) that can efficiently manage large spatial data on spatial mobile devices. Therefore, in this paper, we designed and implemented an Embedded Spatial MMDBMS, extended from HSQLDB, an existing MMDBMS for PCs, to manage spatial data efficiently on spatial mobile devices. The Embedded Spatial MMDBMS adopts the spatial data model proposed by the ISO (International Organization for Standardization), provides an arithmetic coding method suitable for spatial data, and supports an efficient spatial index that uses MBR compression and a hashing method suited to spatial mobile devices. In addition, the system offers spatial data display capability on the low-performance processors of spatial mobile devices and supports data caching and synchronization to improve the performance of spatial data import/export between the Embedded Spatial MMDBMS and the GIS server.
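
To make the index idea concrete, here is a minimal Python sketch of MBR compression combined with a hash (grid) based spatial lookup. It is an illustration under assumed parameters (grid resolution, cell size, class names); the paper's actual structures may differ.

```python
# Minimal sketch of MBR compression plus a hash (grid) based spatial index.
# Resolution, cell size and class names are illustrative assumptions.

GRID = 256  # quantization resolution per axis (assumption)

def compress_mbr(mbr, world):
    """Quantize an MBR (xmin, ymin, xmax, ymax) into small integer offsets
    relative to the world extent, shrinking storage per index entry."""
    wx0, wy0, wx1, wy1 = world
    sx = (GRID - 1) / (wx1 - wx0)
    sy = (GRID - 1) / (wy1 - wy0)
    x0, y0, x1, y1 = mbr
    # Each quantized corner fits in the range 0..GRID-1 (one byte).
    return (int((x0 - wx0) * sx), int((y0 - wy0) * sy),
            int((x1 - wx0) * sx), int((y1 - wy0) * sy))

class GridHashIndex:
    """Hash each object into the grid cells its compressed MBR overlaps."""
    def __init__(self):
        self.cells = {}            # (cx, cy) -> set of object ids

    def insert(self, oid, cmbr, cell=16):
        x0, y0, x1, y1 = cmbr
        for cx in range(x0 // cell, x1 // cell + 1):
            for cy in range(y0 // cell, y1 // cell + 1):
                self.cells.setdefault((cx, cy), set()).add(oid)

    def search(self, cmbr, cell=16):
        x0, y0, x1, y1 = cmbr
        hits = set()
        for cx in range(x0 // cell, x1 // cell + 1):
            for cy in range(y0 // cell, y1 // cell + 1):
                hits |= self.cells.get((cx, cy), set())
        return hits                # candidate set; exact MBR test follows
```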


Design and Implementation of An I/O System for Irregular Application under Parallel System Environments (병렬 시스템 환경하에서 비정형 응용 프로그램을 위한 입출력 시스템의 설계 및 구현)

  • No, Jae-Chun;Park, Seong-Sun;Gwon, O-Yeong
    • Journal of KIISE:Computer Systems and Theory / v.26 no.11 / pp.1318-1332 / 1999
  • In this paper, we present the design, implementation, and evaluation of a runtime system based on collective I/O techniques for irregular applications. We present two designs, namely "Collective I/O" and "Pipelined Collective I/O". In the first scheme, all processors participate in the I/O simultaneously, making scheduling of I/O requests simpler but creating a possibility of contention at the I/O nodes. In the second approach, processors are grouped into several groups, so that only one group performs I/O at a time while the next group performs communication to rearrange data, and this entire process is pipelined to reduce I/O node contention dynamically. In other words, the design provides support for dynamic contention management. We then present a software caching method using collective I/O to reduce I/O cost by reusing data already present in the memory of other nodes. Finally, chunking and on-line compression mechanisms are included in both models. We demonstrate that these schemes deliver significantly higher I/O performance than has been possible so far. The performance results are presented on an Intel Paragon and on the ASCI/Red teraflops machine; application-level I/O bandwidth of up to 55% of the peak is observed.
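
The pipelined collective I/O schedule can be illustrated with a short sketch: processor groups alternate between a communication (data rearrangement) phase and an I/O phase, so at most one group hits the I/O nodes at a time. The group size and phase bodies below are assumptions for illustration, not the paper's runtime system.

```python
# Sketch of the pipelined collective I/O schedule: processors are split into
# groups; while one group performs I/O, the next group rearranges (exchanges)
# its data, so I/O node contention is bounded by one group at a time.
# Group size and the phase bodies are illustrative assumptions.

def rearrange(group):
    print(f"group {group}: communication phase (rearranging data)")

def do_io(group):
    print(f"group {group}: I/O phase (writing its contiguous file region)")

def pipelined_collective_io(num_procs, group_size):
    groups = [list(range(i, min(i + group_size, num_procs)))
              for i in range(0, num_procs, group_size)]
    # Software pipeline: while group g performs I/O, group g+1 rearranges.
    # A sequential simulation can only show the ordering, not the overlap.
    rearrange(groups[0])
    for g in range(len(groups)):
        if g + 1 < len(groups):
            rearrange(groups[g + 1])
        do_io(groups[g])

pipelined_collective_io(num_procs=8, group_size=2)
```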

Prefetching Mechanism using the User's File Access Pattern Profile in Mobile Computing Environment (이동 컴퓨팅 환경에서 사용자의 FAP 프로파일을 이용한 선인출 메커니즘)

  • Choi, Chang-Ho;Kim, Myung-Il;Kim, Sung-Jo
    • Journal of KIISE:Information Networking / v.27 no.2 / pp.138-148 / 2000
  • In the mobile computing environment, in order to keep copies of important files available during disconnection, the mobile host (client) must store them in its local cache while the connection is maintained. In this paper, we propose a prefetching mechanism that lets the client save files which may be accessed in the near future. Our mechanism consists of an analyzer, a prefetch-list producer, and a prefetch manager. The analyzer records the user's file access patterns in a FAP (File Access Patterns) profile. Using the profile, the prefetch-list producer creates the prefetch list, and the prefetch manager asks a file server to return the files on this list. We set the parameter TRP (Threshold of Reference Probability) to ensure that only reasonably related files are prefetched: the prefetch-list producer adds files to the prefetch list only if their reference probability is greater than the TRP. We also use the parameter TACP (Threshold of Access Counter Probability) to reduce the hoarding size required to store a prefetch list. Finally, we measure metrics such as the cache hit ratio, the number of files referenced by the client after disconnection, and the hoarding size. The simulation results show that the performance of our mechanism is superior to that of the LRU caching mechanism, and that prefetching with the TACP can reduce the hoarding size while maintaining performance similar to prefetching without the TACP.
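
A minimal sketch of how a prefetch list could be derived from a FAP profile using the TRP and TACP thresholds follows. The profile format and the exact use of TACP are assumptions based on the abstract.

```python
# Sketch: build a prefetch list from a File Access Pattern (FAP) profile.
# access_counts maps file -> number of references recorded by the analyzer.
# TRP filters files by reference probability; TACP is used here to bound the
# hoarding size by keeping only the most frequently accessed files whose
# cumulative access-count probability stays within the threshold.
# The paper's exact TACP semantics may differ; this is an assumption.

def build_prefetch_list(access_counts, trp=0.05, tacp=0.9):
    total = sum(access_counts.values())
    if total == 0:
        return []
    # 1) Keep files whose reference probability exceeds TRP.
    candidates = [(f, c / total) for f, c in access_counts.items()
                  if c / total > trp]
    # 2) Sort by probability and cut off once the cumulative probability
    #    reaches TACP, reducing the hoarding size.
    candidates.sort(key=lambda fc: fc[1], reverse=True)
    prefetch, cumulative = [], 0.0
    for f, p in candidates:
        if cumulative >= tacp:
            break
        prefetch.append(f)
        cumulative += p
    return prefetch

profile = {"a.txt": 40, "b.doc": 30, "c.cfg": 20, "d.log": 5, "e.tmp": 5}
print(build_prefetch_list(profile))   # -> ['a.txt', 'b.doc', 'c.cfg']
```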


A Distributed VOD Server Based on Virtual Interface Architecture and Interval Cache (버추얼 인터페이스 아키텍처 및 인터벌 캐쉬에 기반한 분산 VOD 서버)

  • Oh, Soo-Cheol;Chung, Sang-Hwa
    • Journal of KIISE:Computer Systems and Theory / v.33 no.10 / pp.734-745 / 2006
  • This paper presents a PC cluster-based distributed VOD server that minimizes the load of the interconnection network by adopting the VIA communication protocol and the interval cache algorithm. Video data is distributed to the disks of the distributed VOD server, and each server node receives the data through the interconnection network and sends it to clients. The load of the interconnection network increases because of the large amount of video data transferred. We developed a VIA-based distributed VOD file system to minimize the cost of accessing remote disks over the interconnection network; VIA is a user-level communication protocol that removes the overhead of TCP/IP. We also improved the performance of the interconnection network by expanding the maximum transfer size of VIA. In addition, the interval cache reduces traffic on the interconnection network by caching, in main memory, the video data transferred from the disks of remote server nodes. Experiments using this distributed VOD server showed a maximum performance improvement of 21.3% compared with a distributed VOD server without VIA and the interval cache, when run on a four-node PC cluster.
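
The interval cache idea can be sketched briefly: blocks fetched for a leading stream are kept in main memory so a following stream of the same video can be served without another remote-disk transfer. The block granularity and eviction handling below are assumptions, not the paper's implementation.

```python
# Sketch of an interval cache on a VOD node: when two streams play the same
# video with a small offset, the blocks read for the leading stream are
# cached so the following stream reads them from memory instead of from a
# remote disk. Capacity handling here is simple FIFO; a real interval cache
# would favor keeping the smallest intervals. Illustrative only.

from collections import OrderedDict

class IntervalCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # (video, block_no) -> data

    def read_block(self, video, block_no, fetch_remote):
        key = (video, block_no)
        if key in self.blocks:               # following stream hits the cache
            return self.blocks[key]
        data = fetch_remote(video, block_no) # leading stream goes to remote disk
        self.blocks[key] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the oldest cached block
        return data

def fetch_remote(video, block_no):
    return f"<{video}:{block_no} from remote disk>"

cache = IntervalCache(capacity_blocks=100)
cache.read_block("movie.mpg", 10, fetch_remote)   # leading stream: remote read
cache.read_block("movie.mpg", 10, fetch_remote)   # following stream: cache hit
```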

An Energy Efficient Transmission Scheme based on Cross-Layer for Wired and Wireless Networks (유.무선 혼합망에서 Cross-Layer기반의 에너지 효율적인 전송 기법)

  • Kim, Jae-Hoon;Chung, Kwang-Sue
    • Journal of KIISE:Information Networking / v.34 no.6 / pp.435-445 / 2007
  • The Snoop protocol is one of the more efficient schemes for compensating for TCP packet loss and enhancing TCP throughput in wired-cum-wireless networks. However, the Snoop protocol has a problem: it cannot perform local retransmission efficiently over a wireless link prone to bursty errors. To solve this problem, the SACK-Aware-Snoop and SNACK mechanisms have been proposed. These approaches improve performance by using the SACK option field between the base station and the mobile host. In a wireless channel with a high packet loss rate, however, SACK-Aware-Snoop and SNACK do not work well for two reasons: (a) end-to-end performance is degraded because duplicate ACKs themselves can be lost in the presence of bursty errors, and (b) the energy of the mobile device and the bandwidth of the wireless link are wasted unnecessarily because of the SACK option field carried over the wireless link. In this paper, we propose a new local retransmission scheme based on a cross-layer approach, called the Cross-layer Snoop (C-Snoop) protocol, to overcome the limitations of previous localized link-layer schemes. The C-Snoop protocol caches lost TCP data and performs local retransmission according to policies based on the MAC-layer timeout and a local retransmission timeout. Simulation results show improved TCP throughput and energy efficiency compared with the previous mechanisms.
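
For orientation, here is a minimal sketch of the snoop-style local retransmission that C-Snoop builds on: the base station caches unacknowledged TCP segments and resends them over the wireless hop when a local timer expires. The timer values and the MAC-layer interaction are assumptions, not the C-Snoop specification.

```python
# Sketch of snoop-style local retransmission at a base station: TCP segments
# forwarded to the wireless link are cached until acknowledged; if a local
# retransmission timer (shorter than the sender's end-to-end RTO) expires,
# the cached segment is resent over the wireless hop only.
# Timer values and MAC-layer interaction are illustrative assumptions.

import time

class SnoopCache:
    def __init__(self, local_rto=0.2):
        self.local_rto = local_rto
        self.unacked = {}                     # seq -> (segment, time_sent)

    def on_data_from_sender(self, seq, segment, send_wireless):
        self.unacked[seq] = (segment, time.monotonic())
        send_wireless(segment)

    def on_ack_from_mobile(self, ack_seq):
        # Drop every cached segment the cumulative ACK covers.
        for seq in [s for s in self.unacked if s < ack_seq]:
            del self.unacked[seq]

    def on_timer(self, send_wireless):
        # Local retransmission: resend timed-out segments on the wireless
        # link without involving the fixed sender.
        now = time.monotonic()
        for seq, (segment, sent) in list(self.unacked.items()):
            if now - sent > self.local_rto:
                send_wireless(segment)
                self.unacked[seq] = (segment, now)
```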

Cache Replacement Strategies considering Location and Region Properties of Data in Mobile Database Systems (이동 데이타베이스 시스템에서 데이타의 위치와 영역 특성을 고려한 캐쉬 교체 기법)

  • Kim, Ho-Sook;Yong, Hwan-Seung
    • Journal of KIISE:Databases / v.27 no.1 / pp.53-63 / 2000
  • The mobile computing service market is growing rapidly due to the development of low-cost wireless network technology and high-performance mobile computing devices. In recent years, several methods have been proposed to deal effectively with the restrictions of the mobile computing environment, such as limited bandwidth, frequent disconnection, and short-lived batteries. Among them, much study has been devoted to caching: among the data transmitted from a mobile support station, a mobile host selects those that are likely to be accessed in the near future and stores them in its local cache. Existing cache replacement methods are limited in efficiency because they do not take into consideration the characteristics of user mobility and the spatial attributes of geographical data. In this paper, we show that the value and the semantics of the data stored in the cache of a mobile host change according to the movement of the mobile host, because data that are geographically near are better suited to answer a user's query in the mobile environment. Using the spatial attributes of the data, we also define the location and the meaningful region over which geographical data has an effect. Finally, we propose two new cache replacement methods that efficiently support user mobility and the spatial attributes of data: one is based on the location of data and the other on the meaningful region of data. A comparative analysis with previous methods shows that the proposed methods improve the cache hit ratio. We also show that performance varies according to data density and, based on this, argue that different cache replacement methods are required for regions with different data densities.
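
A minimal sketch of a location-aware replacement policy in the spirit of the first proposed method: on eviction, the cached item farthest from the mobile host's current position is dropped. The distance metric and cache layout are assumptions.

```python
# Sketch of location-based cache replacement: each cached item carries the
# coordinates of the geographical object it describes; when the cache is
# full, the item farthest from the client's current position is evicted, on
# the premise that nearby data is more likely to answer the next query.
# The distance metric and cache structure are illustrative assumptions.

import math

class LocationAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}                      # key -> (value, (x, y))

    def put(self, key, value, location, client_pos):
        if len(self.items) >= self.capacity and key not in self.items:
            self._evict_farthest(client_pos)
        self.items[key] = (value, location)

    def _evict_farthest(self, client_pos):
        victim = max(self.items,
                     key=lambda k: math.dist(client_pos, self.items[k][1]))
        del self.items[victim]

cache = LocationAwareCache(capacity=2)
cache.put("hotel", "...", (1, 1), client_pos=(0, 0))
cache.put("bank", "...", (5, 5), client_pos=(0, 0))
cache.put("cafe", "...", (0, 1), client_pos=(0, 0))   # evicts "bank"
```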


Dynamic Query Processing Using Description-Based Semantic Prefetching Scheme in Location-Based Services (위치 기반 서비스에서 서술 기반의 시멘틱 프리페칭 기법을 이용한 동적 질의 처리)

  • Kang, Sang-Won;Song, Ui-Sung
    • Journal of KIISE:Databases / v.34 no.5 / pp.448-464 / 2007
  • Location-Based Services (LBSs) provide results to queries according to the location of the client issuing the query. In LBS, techniques such as caching and prefetching are effective approaches to reducing data transmission from the server and query response time. However, they can lead to cache inefficiency and network overload due to the client's mobility and query pattern. To overcome these drawbacks, we propose a semantic prefetching (SP) scheme using the concept of a prefetching segment and improved cache replacement policies. When a mobile client enters a new service area, called a semantic prefetching area, the proposed scheme fetches the necessary semantic information from the server in advance. The mobile client maintains this information in its own cache for query processing of location-dependent data (LDD) in the mobile computing environment. The performance of the proposed scheme is investigated in relation to various environmental variables, such as the user's mobility and query pattern, the distribution of LDD, and the applied cache replacement strategies. Simulation results show that the proposed scheme is more efficient than the well-known existing scheme for range queries and nearest neighbor queries. In addition, applying the two query types dynamically during query processing further improves the performance of the proposed scheme.
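
The prefetch trigger can be sketched as follows: when the client's position crosses into a new prefetching area, the location-dependent data described for that area is fetched ahead of time and kept in the client cache. The area shape, server interface, and cache handling are assumptions, not the SP scheme's actual design.

```python
# Sketch of semantic prefetching for location-dependent data (LDD): the
# service region is divided into prefetching areas; when the client's
# position enters a new area, the data described for that area is fetched
# from the server ahead of the queries that will need it.
# Area shape, server API and cache policy are illustrative assumptions.

class SemanticPrefetcher:
    def __init__(self, area_size, fetch_area_from_server):
        self.area_size = area_size
        self.fetch = fetch_area_from_server
        self.current_area = None
        self.cache = {}                       # area id -> list of LDD items

    def _area_of(self, x, y):
        return (int(x // self.area_size), int(y // self.area_size))

    def on_position_update(self, x, y):
        area = self._area_of(x, y)
        if area != self.current_area:         # entered a new prefetching area
            self.current_area = area
            if area not in self.cache:
                self.cache[area] = self.fetch(area)   # prefetch its LDD

    def answer_range_query(self, x, y):
        # Range and nearest-neighbor queries are answered from the cached
        # items of the current area when possible.
        return self.cache.get(self._area_of(x, y), [])

prefetcher = SemanticPrefetcher(100, lambda area: [f"LDD items of {area}"])
prefetcher.on_position_update(120, 40)
print(prefetcher.answer_range_query(130, 50))
```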

A Hashing Scheme using Round Robin in a Wireless Internet Proxy Server Cluster System (무선 인터넷 프록시 서버 클러스터 시스템에서 라운드 로빈을 이용한 해싱 기법)

  • Kwak, Huk-Eun;Chung, Kyu-Sik
    • The KIPS Transactions:PartA / v.13A no.7 s.104 / pp.615-622 / 2006
  • Caching in a wireless Internet proxy server cluster environment minimizes the request and response time of Internet traffic for Web users. As a way to increase the cache hit ratio, we can use a hash function so that identical request URLs are assigned to the same cache server. The disadvantage of this hashing scheme is that client requests cannot be well distributed to all cache servers, so the performance of the whole system can depend on only a few busy servers. In this paper, we propose an improved load balancing scheme using hashing and round robin that distributes client requests evenly across the cache servers. In the existing hashing scheme, once a hash value for a request URL is calculated, the server number is statically fixed at compile time, while in the proposed scheme it is dynamically fixed at run time using the round robin method. We implemented the proposed scheme in a wireless Internet proxy server cluster environment and performed experiments using 16 PCs. Experimental results show an even distribution of client requests and a 52% to 112% performance improvement compared with the existing hashing method.
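
The contrast between static hashing and the proposed hashing-plus-round-robin assignment can be shown in a few lines: a URL's hash is bound to a cache server the first time it is seen, taking servers in round-robin order, so identical URLs keep hitting the same cache while new URLs spread evenly. The hash function and bookkeeping below are illustrative assumptions.

```python
# Sketch: static hashing maps a URL's hash directly to a server number
# (hash mod N), fixed once and for all, so hot URLs can pile onto a few
# servers. The hashing + round robin scheme instead binds each URL hash to
# a server the first time that hash is seen, taking servers in round-robin
# order, so requests stay cache-affine per URL but spread evenly.
# Bucket handling is an illustrative assumption.

import hashlib

class HashRoundRobinBalancer:
    def __init__(self, num_servers):
        self.num_servers = num_servers
        self.next_server = 0
        self.assignment = {}                  # URL hash -> server id

    @staticmethod
    def _hash(url):
        return hashlib.md5(url.encode()).hexdigest()

    def server_for(self, url):
        h = self._hash(url)
        if h not in self.assignment:          # assigned dynamically at run time
            self.assignment[h] = self.next_server
            self.next_server = (self.next_server + 1) % self.num_servers
        return self.assignment[h]

lb = HashRoundRobinBalancer(num_servers=4)
print(lb.server_for("http://example.com/a"))  # 0
print(lb.server_for("http://example.com/b"))  # 1
print(lb.server_for("http://example.com/a"))  # 0 again: same cache server
```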

XML View Indexing Using an RDBMS based XML Storage System (관계 DBMS 기반 XML 저장시스템 상에서의 XML 뷰 인덱싱)

  • Park, Dae-Sung;Kim, Young-Sung;Kang, Hyunchul
    • Journal of Internet Computing and Services / v.6 no.4 / pp.59-73 / 2005
  • Caching query results and reusing them in the processing of subsequent queries is an important query optimization technique. Materialized views and view indexing are representative examples of such techniques. Both schemes have received much attention for relational databases and have been investigated for XML data since XML emerged as the standard for data exchange on the Web. In XML view indexing, an XML view xv, which is the result of an XML query, is represented as an XML view index (XVI), a structure containing the identifiers of xv's underlying XML elements as well as information on xv. Since the XVI for xv stores just the identifiers of the XML elements, not the elements themselves, when xv is requested its XVI must be materialized against xv's underlying XML documents. In this paper, we address the problem of integrating an XML view index management system with an RDBMS-based XML storage system. The proposed system was implemented in Java on Windows 2000 Server with each of two different commercial RDBMSs and was used to evaluate the performance improvement of XML view indexing as well as its overheads. The experimental results revealed that XML view indexing was very effective with an RDBMS-based XML storage system, while its overhead was negligible.
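
A minimal sketch of the XVI structure: the index keeps the query description and only the identifiers of the view's underlying elements, and materialization looks those identifiers up in the underlying element store. The storage interface here is an assumption standing in for the RDBMS-based system.

```python
# Sketch of XML view indexing (XVI): instead of materializing the XML view,
# the index keeps the query description and the identifiers of the view's
# underlying elements; on a request, the elements are fetched by id from
# the underlying storage and the view is rebuilt.
# The element-store interface is an illustrative assumption.

class XMLViewIndex:
    def __init__(self, query_info, element_ids):
        self.query_info = query_info          # description of the XML query
        self.element_ids = element_ids        # ids of underlying elements

    def materialize(self, element_store):
        # element_store: element id -> serialized XML element
        return [element_store[eid] for eid in self.element_ids]

element_store = {
    101: "<book><title>XML Caching</title></book>",
    205: "<book><title>View Indexing</title></book>",
}
xvi = XMLViewIndex(query_info="//book[contains(title,'XML')]",
                   element_ids=[101, 205])
print(xvi.materialize(element_store))
```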


Utilizing Channel Bonding-based M-VIA and Interval Cache on a Distributed VOD Server (효율적인 분산 VOD 서버를 위한 Channel Bonding 기반 M-VIA 및 인터벌 캐쉬의 활용)

  • Chung, Sang-Hwa;Oh, Soo-Cheol;Yoon, Won-Ju;Kim, Hyun-Pil;Choi, Young-In
    • The KIPS Transactions:PartA / v.12A no.7 s.97 / pp.627-636 / 2005
  • This paper presents a PC cluster-based distributed video-on-demand (VOD) server that minimizes the load of the interconnection network by adopting channel bonding-based M-VIA and the interval cache algorithm. Video data is distributed to the disks of each server node of the distributed VOD server, and each server node receives the data through the interconnection network and sends it to clients. The load of the interconnection network increases because of the large volume of video data transferred. We adopt two techniques to reduce this load. First, a channel bonding technique supporting M-VIA is adopted for the interconnection network. M-VIA, a user-level communication protocol that reduces the overhead of the TCP/IP protocol in cluster systems, minimizes the time spent in communication. We increase the bandwidth of the interconnection network by using the channel bonding technique with M-VIA, which expands the bandwidth by sending data concurrently through multiple network cards. Second, the interval cache reduces traffic on the interconnection network by caching, in main memory, the video data transferred from remote disks. Experiments using this distributed VOD server showed a maximum performance improvement of 30% compared with a distributed VOD server without channel bonding-based M-VIA and the interval cache, when run on a four-node PC cluster.
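
The channel bonding idea can be illustrated with a short sketch: a message is striped into chunks that are sent concurrently over several network channels (one per card) and reassembled in order at the receiver. This is a conceptual illustration only, not M-VIA's mechanism.

```python
# Sketch of channel bonding: a large video-data message is striped across
# several network channels (one per NIC) so the aggregate bandwidth of the
# interconnection network grows with the number of cards. Chunk size and
# the channel abstraction are illustrative assumptions.

def stripe(data, num_channels, chunk_size):
    """Assign consecutive chunks to channels in round-robin order."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    per_channel = [[] for _ in range(num_channels)]
    for i, chunk in enumerate(chunks):
        per_channel[i % num_channels].append((i, chunk))   # keep sequence no.
    return per_channel

def reassemble(per_channel):
    """Receiver merges the per-channel streams back into order."""
    numbered = [pair for channel in per_channel for pair in channel]
    return b"".join(chunk for _, chunk in sorted(numbered))

data = b"0123456789" * 8
sent = stripe(data, num_channels=2, chunk_size=16)
assert reassemble(sent) == data
```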