• Title/Summary/Keyword: Cache data


A Multimedia Contents Recommendation System for Mobile Devices using Push Technology (모바일 환경에서 푸쉬 기술을 이용한 개인화된 멀티미디어 콘텐츠 추천 시스템)

  • Kim, Ryong;Kang, Ji-Heon;Kim, Young-Kuk
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.745-749 / 2006
  • The appearance of wireless Internet service has made access easier than with existing mobile devices. Owing to the properties of mobile devices, a user's profile information can be obtained more easily than with a wired Internet service, which enables personalized services on mobile devices. In this paper, we propose a recommendation service based on a collaborative filtering method together with a content push service. Using users' profile information, we recommend a target user's favorite content. The recommended contents are stored on the mobile device through the push service. When the device connects to the wireless Internet service, our mobile push service starts to cache the user's favorite contents. In particular, when a user selects a large mobile content item, our system can reduce the download time by using the recommendation service. Also, when no connection is available, the cached data from pushed content on the mobile device can still be used.
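The collaborative filtering step described above can be sketched in a few lines. Everything here is illustrative, not the paper's data or code: the user profiles, item names, and the `recommend` helper are invented to show how similarity-weighted scores pick the content to push.

```python
from math import sqrt

# Hypothetical user -> {content_id: rating} profiles (illustration only).
ratings = {
    "u1": {"news": 5, "music": 3, "sports": 1},
    "u2": {"news": 4, "music": 2, "movie": 5},
    "u3": {"music": 5, "movie": 4, "sports": 2},
}

def cosine(a, b):
    # Similarity over the items both users have rated.
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den

def recommend(target, ratings):
    # Score unseen items, weighted by how similar each other user is to the target.
    scores = {}
    for user, profile in ratings.items():
        if user == target:
            continue
        sim = cosine(ratings[target], profile)
        for item, r in profile.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return max(scores, key=scores.get) if scores else None
```

The item returned by `recommend` is what the push service would pre-cache on the device.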


A Caching Mechanism for Knowledge Maps (지식 맵을 위한 캐슁 기법)

  • 정준원;민경섭;김형주
    • Journal of KIISE:Computing Practices and Letters / v.10 no.3 / pp.282-291 / 2004
  • There has been much research on Topic Maps and RDF, which are approaches to handling data efficiently with metadata. However, little work has gone beyond representation and description into actual services and implementations. In this paper, we suggest a caching mechanism to support efficient access to knowledge maps and a practical knowledge-map service, together with an implementation of a Topic Map system. First, we propose a method for navigating a knowledge map efficiently that retains the advantages of earlier methods. Then, to transmit Topic Maps efficiently, we suggest a caching mechanism for knowledge maps. With this method, users can navigate a knowledge map efficiently from a human viewpoint rather than an application viewpoint. Therefore, the mechanism does not cache topics by logical or physical locality but clusters them by the information and characteristic values of the Topic Map. Lastly, we suggest a replacement mechanism that uses the graph structure of the Topic Map for transmission efficiency.

HIPSS : A RAID System for SPAX (HIPSS : SPAX(주전산기 IV) RAID시스템)

  • 이상민;안대영;김중배;김진표;이해동
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.6 / pp.9-19 / 1998
  • RAID technology, which provides a disk I/O system with high performance and high availability, is essential for an OLTP server. This paper describes the design and implementation of the HIPSS RAID system that has been developed for the SPAX OLTP server. HIPSS has the following design objectives: high performance, high availability, standardization and modularization of the external interface, and ease of maintenance. It guarantees high performance by providing 10 independent I/O channels, a large data cache, and a parity calculation engine. Hardware modularization of the host interface makes it easy to replace the host interface hardware module. By providing a dual power supply, a dual array controller, and disk hot swapping, it gives the system high availability. Implementation of HIPSS and integration testing on SPAX have been completed, and performance measurement on HIPSS is now under way. In this paper, we provide a detailed description of the HIPSS system architecture and the implementation results.
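The parity calculation the abstract refers to is, in RAID-4/5-style arrays, a byte-wise XOR across the data blocks of a stripe. A minimal sketch of the idea (the sample blocks are invented for illustration; the real engine is hardware):

```python
def xor_parity(blocks):
    # Byte-wise XOR across all blocks of a stripe (RAID-4/5-style parity).
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"\x0f\xf0", b"\xaa\x55", b"\x11\x22"]  # three data blocks of one stripe
parity = xor_parity(data)

# If one block is lost, XORing the parity with the survivors reconstructs it.
recovered = xor_parity([data[0], data[2], parity])
```

This is why a dedicated parity engine matters for performance: every full-stripe write touches every byte of the stripe.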


A Transaction Level Simulator for Performance Analysis of Solid-State Disk (SSD) in PC Environment (PC향 SSD의 성능 분석을 위한 트랜잭션 수준 시뮬레이터)

  • Kim, Dong;Bang, Kwan-Hu;Ha, Seung-Hwan;Chung, Sung-Woo;Chung, Eui-Young
    • Journal of the Institute of Electronics Engineers of Korea SD / v.45 no.12 / pp.57-64 / 2008
  • In this paper, we propose a system-level simulator for the performance analysis of a Solid-State Disk (SSD) in a PC environment using the TLM (Transaction Level Modeling) method. Our method provides quantitative analysis for a variety of architectural choices of the PC system as well as the SSD. It also drastically reduces the analysis time compared to the conventional RTL (Register Transfer Level) modeling method. To show the effectiveness of the proposed simulator, we performed several explorations of the PC architecture as well as the SSD. More specifically, we measured the performance impact of the hit rate of a cache buffer which temporarily stores the data from the PC. We also analyzed the performance variation of the SSD for various NAND flash memories with different response times using our simulator. These experimental results show that our simulator can be effectively utilized for the architecture exploration of SSDs as well as PCs.
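The hit-rate metric the abstract measures can be illustrated at toy scale. This is not the paper's TLM simulator; it is a minimal LRU cache-buffer simulation (trace and capacity invented) showing what "hit rate of a cache buffer" quantifies.

```python
from collections import OrderedDict

def hit_rate(trace, capacity):
    # Simulate an LRU-managed cache buffer and report the fraction of hits.
    cache, hits = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)          # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict the least recently used entry
            cache[addr] = True
    return hits / len(trace)

rate = hit_rate([1, 2, 1, 3, 1, 2], capacity=2)  # 2 hits out of 6 accesses
```

Sweeping `capacity` against a real access trace is the software analogue of the buffer-size exploration described above.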

Real-time Implementation of MPEG-4 HVXC Encoder and Decoder on Floating Point DSP (부동 소수점 DSP를 이용한 MPEG-4 HVXC 인코더 및 디코더의 실시간 구현)

  • Kang, Kyeong-ok;Na, Hoon;Hong, Jin-Woo;Jeong, Dae-Gwon
    • The Journal of the Acoustical Society of Korea / v.19 no.4 / pp.37-44 / 2000
  • In this paper, we describe the real-time implementation of the MPEG-4 audio HVXC (Harmonic Vector eXcitation Coding) algorithm for very low bitrates, whose target applications range from mobile communications to Internet telephony, on the high-performance floating-point TMS320C6701 DSP. We adopted a hardware structure suited to real-time operation. For software optimization, we applied C- and assembly-language level optimizations to time-critical functions. By utilizing the internal program memory of the DSP as the program cache, together with the internal data memory overlap technique and DMA functionality, we achieved real-time operation of the HVXC codec at both 2 kbit/s and 4 kbit/s. For the encoder at 2 kbit/s, the optimization ratio relative to the original code is about 96%. Finally, we obtained a subjective quality of MOS 2.45 at 2 kbit/s in an informal quality test.


Performance Comparison of Synchronization Methods for CC-NUMA Systems (CC-NUMA 시스템에서의 동기화 기법에 대한 성능 비교)

  • Moon, Eui-Sun;Jhang, Seong-Tae;Jhon, Chu-Shik
    • Journal of KIISE:Computer Systems and Theory / v.27 no.4 / pp.394-400 / 2000
  • The main goal of synchronization is to guarantee exclusive access to shared data and critical sections, which makes parallel programs work correctly and reliably. Exclusive access restricts the parallelism of parallel programs, so efficient synchronization is essential to achieving high performance in shared-memory parallel programs. Many techniques have been devised for efficient synchronization, exploiting features of systems and applications. This paper shows through simulation that existing synchronization methods are inefficient on a CC-NUMA (Cache Coherent Non-Uniform Memory Access) system, and then compares their performance with that of Freeze&Melt synchronization, which can remove this inefficiency. The simulation results show that Test-and-Test&Set synchronization suffers from inefficiency caused by its broadcast operation, and that the pre-defined order in which Queue-On-Lock-Bit (QOLB) synchronization executes critical sections also causes inefficiency. Freeze&Melt synchronization, which removes these inefficiencies, gains performance by decreasing both the waiting time to enter a critical section and the execution time of a critical section, and by reducing the traffic between clusters.
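The Test-and-Test&Set scheme compared above can be sketched as follows. This is an illustrative emulation, not real hardware synchronization: Python's `Lock.acquire(blocking=False)` stands in for the atomic test-and-set instruction, and the plain read of `_held` models the cheap "test" spin that stays in a processor's cache instead of generating coherence traffic.

```python
import threading

class TTASLock:
    """Test-and-Test&Set sketch: spin on an ordinary read of the lock state,
    and attempt the (emulated) atomic test-and-set only when the lock looks
    free, so waiting processors mostly spin locally."""
    def __init__(self):
        self._flag = threading.Lock()  # acquire(blocking=False) emulates atomic test-and-set
        self._held = False             # plainly readable state for the cheap "test" loop

    def acquire(self):
        while True:
            while self._held:                       # test: spin on a plain read
                pass
            if self._flag.acquire(blocking=False):  # test-and-set: one atomic attempt
                self._held = True
                return

    def release(self):
        self._held = False
        self._flag.release()

# Two threads incrementing a shared counter under the lock.
lock = TTASLock()
counter = 0

def worker():
    global counter
    for _ in range(50):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

On a CC-NUMA machine the drawback the paper measures appears at `release`: every waiting cache's copy of the lock word is invalidated at once, triggering a broadcast-like burst of refetches.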


Design and implementation of a cache manager for pipeline time-series data (배관 시계열 데이터를 위한 캐시 관리자의 설계 및 구현)

  • Kim, Seon-Hyo;Kim, Won-Sik;Shin, Je-Yong;Han, Wook-Shin
    • Proceedings of the Korea Information Processing Society Conference / 2005.11a / pp.109-112 / 2005
  • A hole or crack in a pipeline can cause a serious accident. To find such pipeline defects, an inspection device equipped with sensors is first sent through the pipeline, and the readings taken by the sensors during the pass are stored on the device's hard disk. After the pass is complete, an analyst uses an analysis program to manually search the acquired data for defects. Two access patterns are typical during analysis: the first sequentially examines the sensor data over a fixed interval, and the second repeatedly returns from the current interval to a previous interval to re-examine it. Since automatic analysis has not yet reached a satisfactory level, analysts often work manually, so the repeated pattern of accessing recently read regions back and forth is very common. To improve system performance for this repeated pattern, previously read pipeline sensor data needs to be cached. Existing analysis software, however, has no caching facility, so under the repeated pattern the same data is read from the database again and again. In this paper, we design and implement a cache manager that manages pipeline sensor data efficiently. Specifically, we regard pipeline sensor data as time-series data and propose a cache manager for time-series data. This work is meaningful in that it treats the data acquired by the pipeline inspection device as time-series data and approaches these problems from a database perspective, and we expect much follow-up research in this area.
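The access patterns described above suggest why a block-granular cache helps; the sketch below is a hypothetical illustration (the `BLOCK` size, `SeriesCache` class, and `fetch_block` callback are all invented), not the paper's cache manager. A sequential scan fills the cache block by block, and the back-and-forth re-reads are then served without touching the database.

```python
BLOCK = 100  # hypothetical number of sensor samples per cached block

class SeriesCache:
    """Block-granular cache for pipeline sensor time series: sequential scans
    fill the cache, and repeated back-and-forth reads avoid database access."""
    def __init__(self, fetch_block):
        self._fetch = fetch_block  # callback that reads one block from the database
        self._blocks = {}

    def read(self, start, end):
        # Gather all samples in [start, end), caching each block on first touch.
        out = []
        for bid in range(start // BLOCK, (end - 1) // BLOCK + 1):
            if bid not in self._blocks:
                self._blocks[bid] = self._fetch(bid)
            lo, hi = bid * BLOCK, (bid + 1) * BLOCK
            out.extend(self._blocks[bid][max(start, lo) - lo:min(end, hi) - lo])
        return out

db_reads = []
def fetch_block(bid):  # stand-in for one block read from the database
    db_reads.append(bid)
    return list(range(bid * BLOCK, (bid + 1) * BLOCK))

cache = SeriesCache(fetch_block)
scan = cache.read(50, 150)    # sequential scan: fetches blocks 0 and 1
again = cache.read(120, 130)  # repeated pattern: served entirely from cache
```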


Development of Communication Module for a Mobile Integrated SNS Gateway (모바일 통합 SNS 게이트웨이 통신 모듈 개발)

  • Lee, Shinho;Kwon, Dongwoo;Kim, Hyeonwoo;Ju, Hongtaek
    • The Journal of Korean Institute of Communications and Information Sciences / v.39B no.2 / pp.75-85 / 2014
  • Recently, mobile SNS traffic has increased tremendously due to the spread of smart devices such as smartphones and tablets. In this paper, a mobile integrated SNS gateway is proposed to cope with this massive SNS traffic. Most mobile SNS applications update their information through individual connections to their corresponding servers; the proposed gateway integrates these applications to reduce the SNS traffic caused by continuous data requests and to improve mobile communication performance. The key elements of the mobile integrated SNS gateway are synchronization, caching, and integrated authentication. The proposed protocol and gateway system have been implemented on a testbed deployed on a real network to evaluate the performance of the proposed gateway. Finally, we present the caching performance of the gateway system implementation.
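The caching element of such a gateway can be sketched as a simple TTL cache; everything here (the `TTLCache` class, the 60-second TTL, and the `fetch_feed` stand-in) is a hypothetical illustration of the idea, not the paper's protocol. Repeated feed requests within the TTL are answered by the gateway instead of re-contacting each SNS server.

```python
import time

class TTLCache:
    """Gateway-side cache: repeated requests for the same feed within `ttl`
    seconds are answered locally instead of generating upstream traffic."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}  # key -> (timestamp, value)

    def get(self, key, fetch):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and now - entry[0] < self.ttl:
            return entry[1]             # cache hit: no upstream request
        value = fetch(key)              # cache miss: contact the SNS server
        self.store[key] = (now, value)
        return value

calls = []
def fetch_feed(key):  # stand-in for a request to the corresponding SNS server
    calls.append(key)
    return "feed for " + key

cache = TTLCache(ttl=60.0)
first = cache.get("news", fetch_feed)
second = cache.get("news", fetch_feed)  # within the TTL: served from the cache
```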

Enhancing LRU Buffer Replacement Policy with Delayed Write of Not-cold-dirty-pages for Flash Memory (플래시 메모리를 위한 Not-cold-Page 쓰기지연을 통한 LRU 버퍼교체 정책 개선)

  • Jung Ho-Young;Park Sung-Min;Cha Jae-Hyuk;Kang Soo-Yong
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.9
    • /
    • pp.634-641
    • /
    • 2006
  • Flash memory has many advantages, such as non-volatility and fast I/O speed, but it also has disadvantages, such as no in-place update of data and asymmetric read/write/erase speeds. For the performance of flash memory storage, it is essential for buffer replacement algorithms to reduce the number of write operations, which in turn affects the number of erase operations. A new buffer replacement algorithm is proposed in this paper that delays the writes of not-cold dirty pages in the buffer cache of flash storage. We show that this algorithm effectively decreases the number of write and erase operations without much degradation of the hit ratio. As a result, the overall I/O performance of flash memory is improved.
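The abstract does not spell out the exact replacement policy, so the sketch below is only a CFLRU-style approximation of the general idea of delaying dirty-page writes: on eviction, prefer the least-recently-used clean page, and fall back to a dirty page (which costs a flash write) only when no clean victim exists. Class and counter names are invented.

```python
from collections import OrderedDict

class DelayedWriteLRU:
    """LRU buffer that prefers evicting clean pages: writing a dirty page back
    to flash is expensive, so dirty pages are kept (their write is delayed)
    as long as any clean victim exists (CFLRU-style sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page -> dirty flag, in LRU order
        self.flash_writes = 0

    def access(self, page, write=False):
        if page in self.pages:
            dirty = self.pages.pop(page) or write  # once dirty, stays dirty
        else:
            dirty = write
            if len(self.pages) >= self.capacity:
                self._evict()
        self.pages[page] = dirty  # reinsert at the MRU end

    def _evict(self):
        # Prefer the least-recently-used *clean* page.
        for victim, dirty in self.pages.items():
            if not dirty:
                del self.pages[victim]
                return
        # All pages dirty: evict the LRU page and pay for the flash write.
        self.pages.popitem(last=False)
        self.flash_writes += 1

buf = DelayedWriteLRU(capacity=2)
buf.access(1, write=True)  # dirty page
buf.access(2)              # clean page
buf.access(3)              # full: clean page 2 is evicted, dirty page 1 stays
```

Because erases follow writes in flash, every avoided dirty-page eviction also postpones erase work, which is the lever the paper's algorithm pulls.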

Improving the Availability of Scalable on-demand Streams by Dynamic Buffering on P2P Networks

  • Lin, Chow-Sing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.4
    • /
    • pp.491-508
    • /
    • 2010
  • In peer-to-peer (P2P) on-demand streaming networks, the alleviation of server load depends on reciprocal stream sharing among peers. In general, on-demand video services enable clients to watch videos from beginning to end. As long as clients are able to buffer the initial part of the video they are watching, the on-demand service can provide access to the video to the next clients who request to watch it. Therefore, the key challenge is how to keep the initial part of a video in a peer's buffer for as long as possible, and thus maximize the availability of a video for stream relay. In addition, to address the issues of delivering data over a lossy network and providing scalable quality of service for clients, the adoption of multiple description coding (MDC) has been shown by much research to be a feasible solution. In this paper, we propose a novel caching scheme for P2P on-demand streaming, called Dynamic Buffering. The proposed Dynamic Buffering relies on the features of MDC to gradually reduce the number of cached descriptions held in a client's buffer once the buffer is full. Preserving as many initial parts of descriptions in the buffer as possible, instead of losing them all at one time, effectively extends peers' service time. In addition, this study proposes a description distribution balancing scheme to further improve the use of resources. Simulation experiments show that Dynamic Buffering can make efficient use of cache space, reduce server bandwidth consumption, and increase the number of peers being served.
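The core intuition of gradual reduction can be sketched as follows; this is a loose illustration of the buffer-full behavior only (the `DynamicBuffer` class, segment numbering, and trimming rule are invented), not the paper's algorithm. When the buffer fills, the tail of one cached description is dropped while every description's initial part, which new peers need for relay, survives.

```python
class DynamicBuffer:
    """When the buffer is full, drop recent segments of one cached description
    instead of evicting everything, so initial parts stay available to relay."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.descriptions = {}  # description id -> list of buffered segment ids

    def add(self, desc, segment):
        while self._size() >= self.capacity:
            self._shrink()
        self.descriptions.setdefault(desc, []).append(segment)

    def _size(self):
        return sum(len(segs) for segs in self.descriptions.values())

    def _shrink(self):
        # Trim from the tail of the longest description: the head (initial
        # part) is what lets the peer serve newly arriving clients.
        victim = max(self.descriptions, key=lambda d: len(self.descriptions[d]))
        self.descriptions[victim].pop()
        if not self.descriptions[victim]:
            del self.descriptions[victim]

buf = DynamicBuffer(capacity=4)
for seg in range(3):
    buf.add("d1", seg)
buf.add("d2", 0)
buf.add("d2", 1)  # buffer full: the tail segment of d1 is dropped, heads survive
```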