• Title/Summary/Keyword: data cache

User Identification and Session Completion in Input Data Preprocessing for Web Mining (웹 마이닝을 위한 입력 데이타의 전처리과정에서 사용자구분과 세션보정)

  • 최영환;이상용
    • Journal of KIISE: Software and Applications, v.30 no.9, pp.843-849, 2003
  • Web usage mining is a data mining technique that analyzes web users' usage patterns from large web logs. To apply web usage mining, users and user sessions must be correctly identified during preprocessing, but they cannot be identified completely from log files in the standard web log format alone. Identifying users and user sessions is complicated by local caches, firewalls, ISPs, user privacy, cookies, and so on, and there is currently no definitive method that solves these problems. The local cache problem in particular is the most difficult obstacle to identifying the user sessions that serve as input to web mining systems. In this paper we propose a heuristic method that solves the local cache problem using only server-side clickstream data such as the referrer, agent, and access logs, identifies user sessions, and completes sessions.
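Since the abstract only names the heuristic, the following is a minimal sketch (not taken from the paper) of the usual baseline for user and session identification from server-side logs: users are approximated by the (IP, user-agent) pair, and a session is cut when the idle time between consecutive requests exceeds a timeout. The record fields and the 30-minute threshold are illustrative assumptions.

```python
from collections import defaultdict

SESSION_TIMEOUT = 30 * 60  # assumed 30-minute idle threshold

def split_sessions(log_records):
    """Group access-log records into per-user sessions.

    Each record is assumed to be a dict with 'ip', 'agent', and 'time'
    (a UNIX timestamp); users are approximated by the (ip, agent) pair,
    and a new session starts after an idle gap longer than the timeout.
    """
    by_user = defaultdict(list)
    for rec in sorted(log_records, key=lambda r: r["time"]):
        by_user[(rec["ip"], rec["agent"])].append(rec)

    sessions = []
    for user, records in by_user.items():
        current = [records[0]]
        for prev, rec in zip(records, records[1:]):
            if rec["time"] - prev["time"] > SESSION_TIMEOUT:
                sessions.append((user, current))
                current = []
            current.append(rec)
        sessions.append((user, current))
    return sessions
```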

Efficient Cache Management Scheme with Maintaining Strong Data Consistency in a VANET (VANET에서 효율적이며 엄격한 데이터 일관성을 유지하는 캐쉬 관리 기법)

  • Moon, Sung-Hoon;Park, Kwang-Jin
    • Journal of the Korea Society of Computer and Information, v.17 no.5, pp.41-48, 2012
  • A Vehicular Ad-hoc Network (VANET) is a vehicle-specific type of mobile ad-hoc network that provides temporary communication among nearby vehicles. A mobile node in a VANET consumes energy and resources by participating as a member of the network. In a VANET, data replication and cooperative caching have been used as promising solutions to improve system performance. Existing cooperative caching schemes in a VANET mostly focus on weak consistency, which is not always satisfactory. In this paper, we propose an efficient cache management scheme that maintains strong data consistency in a VANET. We design an adaptive scheduling scheme that broadcasts Invalidation Reports (IRs) in order to reduce query delay and communication overhead while maintaining strong data consistency. Simulation results show that the proposed method performs well in terms of query delay and communication overhead.
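As a rough illustration of IR-based invalidation in general (not the paper's adaptive scheduling scheme), the sketch below shows a client cache that drops any entry named in a broadcast invalidation report; the report format, a mapping from item id to server update time, is an assumption.

```python
import time

class IRCache:
    """A toy cache that applies broadcast invalidation reports (IRs).

    An IR is assumed to be a dict {item_id: update_timestamp} listing items
    updated at the server since the previous report.
    """
    def __init__(self):
        self.store = {}  # item_id -> (value, cached_at)

    def put(self, item_id, value):
        self.store[item_id] = (value, time.time())

    def apply_ir(self, report):
        # Drop any cached copy older than the server-side update it names.
        for item_id, updated_at in report.items():
            entry = self.store.get(item_id)
            if entry and entry[1] < updated_at:
                del self.store[item_id]

    def get(self, item_id):
        entry = self.store.get(item_id)
        return entry[0] if entry else None  # a miss would trigger a query to the server
```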

The Effect of Absorbing Hot Write References on FTLs for Flash Storage Supporting High Data Integrity (데이터 무결성을 보장하는 플래시 저장 장치에서 잦은 쓰기 참조 흡수가 플래시 변환 계층에 미치는 영향)

  • Shim, Myoung-Sub;Doh, In-Hwan;Moon, Young-Je;Lee, Hyo-J.;Choi, Jong-Moo;Lee, Dong-Hee;Noh, Sam-H.
    • Journal of KIISE: Computing Practices and Letters, v.16 no.3, pp.336-340, 2010
  • Flash storage devices are prevalent as portable storage in computing systems. When we consider the detachability of Flash storage devices, data integrity becomes an important issue. To assure strict data integrity, file systems synchronously write all file data to storage, which is accompanied by hot write references. In this study, we concentrate on the effect of hot write references on Flash storage, and we consider how absorbing the hot write references via a nonvolatile write cache affects the performance of the FTL schemes in Flash storage. In so doing, we quantify the performance of typical FTL schemes for workloads that contain hot write references through a wide range of experiments in a real system environment. From the results, we conclude that the impact of the underlying FTL scheme on the performance of Flash storage is dramatically reduced by absorbing the hot write references via a nonvolatile write cache.
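To make "absorbing hot write references" concrete, here is a minimal sketch of the general mechanism assumed here (not the authors' implementation): a nonvolatile write cache in front of the FTL coalesces repeated writes to the same logical block, so only the final version reaches flash on eviction.

```python
from collections import OrderedDict

class NVWriteCache:
    """Write-back cache in front of an FTL: repeated (hot) writes to the
    same logical block are absorbed in the cache and reach flash only once,
    at eviction time."""
    def __init__(self, ftl_write, capacity=64):
        self.ftl_write = ftl_write      # callback that performs the real flash write
        self.capacity = capacity
        self.blocks = OrderedDict()     # lba -> data, kept in LRU order

    def write(self, lba, data):
        if lba in self.blocks:
            self.blocks.pop(lba)        # hot write absorbed: just refresh the entry
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            victim, vdata = self.blocks.popitem(last=False)  # evict the LRU block
            self.ftl_write(victim, vdata)

    def flush(self):
        for lba, data in self.blocks.items():
            self.ftl_write(lba, data)
        self.blocks.clear()
```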

A Study of the Improvement of Execution Speed and Loading of Java Card Program by applying prefetching LRU-OBL Buffer Technique (선반입 LRU-OBL 버퍼 기법을 적용한 자바 카드 프로그램 적재 및 실행 속도 개선에 관한 연구)

  • Oh, Se-Won;Choi, Won-Ho;Jung, Min-Soo
    • Journal of Korea Multimedia Society, v.10 no.9, pp.1197-1208, 2007
  • These days, the Java Card platform has become the standard for most smart cards. Java Card technology provides smart cards with platform portability and strong security functions. Compared to a conventional smart card, however, a Java Card has the drawback of low execution speed, caused by characteristics of the Java programming language. Factors that affect Java Card execution speed include how data are stored and how applets are installed by the Java Card installation tool. In this paper, we offer a plan to improve Java Card program loading and execution speed. In a Java Card program, the speed of writing, updating, and deleting data in EEPROM can be improved by using high-speed RAM. To this end, we present a prefetching LRU-OBL buffer cache technique, implemented in RAM, that is suited to the Java Card environment. By managing all data created by the Java Card in the buffer cache according to their characteristics, the number of EEPROM write operations is minimized, so that Java Card program loading and execution speed are improved.
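The following sketch illustrates one plausible reading of the technique (OBL taken as one-block lookahead), not the paper's actual Java Card code: an LRU buffer in RAM that prefetches the next EEPROM page on a read and delays EEPROM writes until eviction. The page-level API and capacity are assumptions.

```python
from collections import OrderedDict

class OBLBufferCache:
    """LRU buffer cache with one-block-lookahead (OBL) prefetching.

    A read of EEPROM page i also prefetches page i+1; writes are buffered in
    RAM and flushed to EEPROM only on eviction, reducing slow EEPROM writes.
    """
    def __init__(self, eeprom_read, eeprom_write, capacity=8):
        self.eeprom_read = eeprom_read
        self.eeprom_write = eeprom_write
        self.capacity = capacity
        self.pages = OrderedDict()   # page_no -> (data, dirty), LRU order

    def _insert(self, page_no, data, dirty):
        if page_no in self.pages:
            self.pages.pop(page_no)
        self.pages[page_no] = (data, dirty)
        if len(self.pages) > self.capacity:
            victim, (vdata, vdirty) = self.pages.popitem(last=False)
            if vdirty:
                self.eeprom_write(victim, vdata)   # flush only dirty pages

    def read(self, page_no):
        if page_no + 1 not in self.pages:          # OBL prefetch of the next page
            self._insert(page_no + 1, self.eeprom_read(page_no + 1), dirty=False)
        if page_no not in self.pages:
            self._insert(page_no, self.eeprom_read(page_no), dirty=False)
        data, dirty = self.pages.pop(page_no)      # refresh LRU position
        self.pages[page_no] = (data, dirty)
        return data

    def write(self, page_no, data):
        self._insert(page_no, data, dirty=True)    # buffered in RAM, not yet in EEPROM
```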

A Cache Managing Strategy for Fast Media Data Access (미디어 데이터의 빠른 참조를 위한 캐시 운영 전략)

  • Moon, Hyun-Ju;Kim, Suk-il
    • The KIPS Transactions: Part A, v.11A no.1, pp.11-20, 2004
  • Multimedia data processed in a streaming pattern exhibit high spatial locality and low temporal locality. This paper proposes a dynamic data prefetching scheme that fully exploits the regularity among consecutively referenced memory addresses. Compared to existing data prefetching schemes, the proposed scheme can reduce prefetching errors when an application divides an array into smaller blocks and processes them block by block. Experimental results on various media benchmark programs show that the proposed scheme predicts memory addresses more accurately and yields better performance than existing prefetching schemes.
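As a generic example of the kind of address regularity such prefetchers exploit (not the paper's exact scheme), the sketch below keeps a per-instruction stride table and predicts the next address once the same stride has been observed twice; the table layout and confirmation rule are assumptions.

```python
class StridePrefetcher:
    """Per-instruction stride predictor: remembers the last address and stride
    for each load PC and predicts last_addr + stride once the stride repeats."""
    def __init__(self):
        self.table = {}   # pc -> (last_addr, stride, confirmed)

    def access(self, pc, addr):
        """Record a reference and return a predicted prefetch address or None."""
        last_addr, stride, confirmed = self.table.get(pc, (None, 0, False))
        prediction = None
        if last_addr is not None:
            new_stride = addr - last_addr
            confirmed = (new_stride == stride and new_stride != 0)
            stride = new_stride
            if confirmed:
                prediction = addr + stride
        self.table[pc] = (addr, stride, confirmed)
        return prediction


# Example: a unit-stride scan of 8-byte elements is predicted from the third access on.
pf = StridePrefetcher()
for a in range(0x1000, 0x1040, 8):
    hint = pf.access(pc=0x400123, addr=a)   # hint is a + 8 once the stride is confirmed
```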

A Data-Consistency Scheme for the Distributed-Cache Storage of the Memcached System

  • Liao, Jianwei;Peng, Xiaoning
    • Journal of Computing Science and Engineering, v.11 no.3, pp.92-99, 2017
  • Memcached, commonly used to speed up data access in big-data and Internet-web applications, is system software implementing a distributed-cache mechanism. However, it faces the severe challenge of losing recently uncommitted updates when Memcached servers crash for some reason. Although the replica scheme and the disk-log-based replay mechanism have been proposed to overcome this problem, they incur either the overhead of replica synchronization or the persistent-storage overhead caused by flushing the related logs. This paper proposes a scheme that backs up write requests (i.e., set and add) on the Memcached client side, to reduce the overhead of making disk-log records or enforcing replica consistency. If a Memcached server fails, a timestamp-based recovery mechanism replays the write requests buffered by the relevant clients, regaining the lost data updates on the rebooted Memcached server and thereby meeting the data-consistency requirement. More importantly, compared with the mechanism of logging the write requests to the persistent storage of the master server and the server-replication scheme, the newly proposed approach of backing up the logs on the client side can greatly decrease the time overhead, by up to 116.8%, when processing write workloads.
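A minimal sketch of the client-side idea, under the assumption of a simple set-only client API (this is not the Memcached protocol library or the authors' code): each write is timestamped and appended to a local log, and after the server reboots the client replays writes newer than the state the server recovered.

```python
import time

class BackupClient:
    """Wraps a memcached-like client, logging write requests locally so they
    can be replayed after a server crash."""
    def __init__(self, server):
        self.server = server      # assumed to expose set(key, value)
        self.log = []             # (timestamp, key, value)

    def set(self, key, value):
        self.log.append((time.time(), key, value))
        self.server.set(key, value)

    def replay(self, last_persisted_ts):
        """After the server reboots, re-send writes newer than the state it
        recovered (identified here by a timestamp), restoring lost updates."""
        for ts, key, value in self.log:
            if ts > last_persisted_ts:
                self.server.set(key, value)
```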

A Strategy for Efficiently Maintaining Cache Consistency in Mobile Computing Environments with Asynchronous Broadcasting (비동기적 방송을 하는 이동 컴퓨팅 환경에서 효율적인 캐쉬 일관성 유지 정책)

  • 김대옹;박성배;김길삼;황부현
    • Journal of the Korea Society of Computer and Information, v.4 no.3, pp.78-92, 1999
  • In mobile computing environments, a mobile host caches frequently accessed data to use the narrow bandwidth of wireless networks efficiently. To guarantee the correctness of mobile transactions, the data cached in a mobile host must be consistent with the data in the server. This paper proposes a new strategy that maintains cache consistency efficiently when the data cached in a mobile host become inconsistent with the data in the server due to the mobility of the host in an asynchronous mobile environment. In this strategy, the size of the invalidation message is relatively small and is independent of the number of data items to be invalidated under variable update rates and patterns. Thus, the strategy uses the narrow bandwidth of wireless networks efficiently and reduces the communication cost.
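The abstract does not describe how the fixed-size invalidation message is built, so the sketch below shows one generic way to get a message whose size is independent of the number of invalidated items (a fixed bitmap over hashed groups of data items); this is only an illustrative assumption, not the paper's strategy.

```python
import zlib

NUM_GROUPS = 64   # fixed group count -> fixed-size invalidation message

def group_of(item_id):
    # Deterministic hash so client and server agree on the grouping.
    return zlib.crc32(str(item_id).encode()) % NUM_GROUPS

def build_invalidation_message(updated_ids, broadcast_ts):
    """Server side: one timestamp plus a 64-bit group bitmap, regardless of
    how many items were actually updated."""
    dirty_groups = 0
    for item_id in updated_ids:
        dirty_groups |= 1 << group_of(item_id)
    return {"ts": broadcast_ts, "dirty_groups": dirty_groups}

def apply_invalidation(cache, message):
    """Client side: drop every cached item whose group is marked dirty."""
    for item_id in [k for k in cache if (message["dirty_groups"] >> group_of(k)) & 1]:
        del cache[item_id]
```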

4-Deap✽: A Fast 4-ary Deap Using Cache (4-딥✽: 캐쉬를 이용한 빠른 4-원 딥)

  • Jung Haejae
    • The KIPS Transactions: Part A, v.11A no.7 s.91, pp.577-582, 2004
  • Double-ended priority queues (DEPQs) can be used in applications such as scheduling or sorting. The data structures for a DEPQ can be constructed with or without pointers. The implicit representation without pointers uses less memory space than the pointer-based representation. This paper presents a novel fast implicit heap called the 4-deap*, which utilizes cache memory efficiently. Experimental results show that the 4-deap* is faster than the symmetric min-max heap as well as the deap.
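The 4-deap* itself is double-ended, but the cache benefit comes from the 4-ary implicit layout. The sketch below shows only a plain 4-ary min-heap with that layout (an illustration, not the paper's structure): the four children of node i sit at contiguous indices 4i+1..4i+4, so they tend to fall in the same cache line.

```python
class FourAryMinHeap:
    """A plain 4-ary implicit min-heap (not the double-ended 4-deap* itself):
    each node's four children are contiguous in the array, which is the
    cache-friendly property the paper exploits."""
    def __init__(self):
        self.a = []

    def push(self, x):
        self.a.append(x)
        i = len(self.a) - 1
        while i > 0:
            p = (i - 1) // 4                 # parent index in a 4-ary heap
            if self.a[p] <= self.a[i]:
                break
            self.a[p], self.a[i] = self.a[i], self.a[p]
            i = p

    def pop_min(self):
        a = self.a
        a[0], a[-1] = a[-1], a[0]
        m = a.pop()
        i = 0
        while True:
            first = 4 * i + 1                # the four children occupy 4i+1 .. 4i+4
            if first >= len(a):
                break
            c = min(range(first, min(first + 4, len(a))), key=a.__getitem__)
            if a[i] <= a[c]:
                break
            a[i], a[c] = a[c], a[i]
            i = c
        return m
```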

An Efficient Caching Scheme for Replacing a Dirty Block in Software RAID File Systems (소프트웨어 RAID 파일 시스템에서 오손 블록 교체시에 효율적인 캐슁 기법)

  • 김종훈;노삼혁;원유헌
    • The Journal of Korean Institute of Communications and Information Sciences, v.22 no.7, pp.1599-1606, 1997
  • The software RAID file system is defined as a system that distributes data redundantly across an array of disks attached to workstations connected by a high-speed network. This provides high throughput as well as higher availability. In this paper, we present an efficient caching scheme for the software RAID file system. The performance of this scheme is compared to two other schemes previously proposed for conventional file systems and adapted to the software RAID file system. As in hardware RAID systems, small writes turn out to be the performance bottleneck in software RAID file systems. To tackle this problem, we logically divide the cache into two levels. By keeping old data and parity values in the second-level cache, we were able to eliminate much of the extra disk reads and writes necessary for the write-back of dirty blocks. Using trace-driven simulations, we show that the proposed scheme improves performance in terms of both the average response time and the average system busy time.
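The saving from caching old data and parity comes from the standard RAID small-write parity update, sketched below under the usual XOR-parity assumption (block names and sizes are illustrative): with the old data and old parity already in the second-level cache, the two extra disk reads of a small write disappear.

```python
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def small_write_parity(new_data, old_data, old_parity):
    """RAID small-write update: new_parity = old_parity XOR old_data XOR new_data.
    If old_data and old_parity are already cached, no extra disk reads are needed
    before writing the new data and parity blocks."""
    return xor_blocks(xor_blocks(old_parity, old_data), new_data)


# Tiny check with 4-byte blocks: the result matches recomputing parity from scratch.
old_data, new_data = b"\x0f\x00\xff\x10", b"\xf0\x0f\x0f\x01"
other_block = b"\x01\x02\x03\x04"
old_parity = xor_blocks(old_data, other_block)
assert small_write_parity(new_data, old_data, old_parity) == xor_blocks(new_data, other_block)
```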

An On-chip Multiprocessor Microprocessor with Shared MMU and Cache

  • Lee, Yong-Hwan;Jeong, Woo-Kyeong;An, Sang-Jun;Lee, Yong-Surk
    • Journal of Electrical Engineering and Information Science, v.2 no.4, pp.1-7, 1997
  • A multiprocessor microprocessor named the SMPC (scalable multiprocessor chip), which contains two IUs (integer units), is presented in this paper. It can execute multiple instructions from several tasks by exploiting task-level parallelism, which is free from instruction dependencies, and provides high performance and throughput in both single-program and multiprogramming environments. The IU is a 32-bit scalar processor especially designed to boost the performance of string manipulations, which are frequently used in RDBMS (relational database management system) applications. A memory management unit and a data cache shared by the two IUs improve performance and reduce the required chip area. The SMPC is implemented as a VLSI circuit using custom design and automated design tools.
