• Title/Summary/Keyword: temporal locality

STP-FTL: An Efficient Caching Structure for Demand-based Flash Translation Layer

  • Choi, Hwan-Pil;Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.22 no.7 / pp.1-7 / 2017
  • As the capacity of NAND flash modules increases, so does the amount of RAM needed for caching and maintaining the FTL mapping information. To reduce the amount of mapping information managed in RAM, a demand-based address mapping method stores the entire mapping information in flash and keeps only some valid mapping information in RAM as a cache, so that RAM can be used efficiently. However, when a cache miss occurs, the mapping information recorded in flash must be read, which adds address-translation overhead. If RAM space is insufficient, the cache hit ratio decreases, resulting in even greater overhead. In this paper, we propose a method using two tables, called TPMT (Translation Page Mapping Table) and SMT (Segmented Translation Page Mapping Table), to utilize both temporal locality and spatial locality more efficiently. A performance evaluation shows that this method improves the cache hit ratio by up to 30% and reduces the extra translation operations by up to 72%, compared to the TPM scheme.
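
The abstract does not spell out the TPMT/SMT organization, so the following is only a minimal sketch of a demand-based mapping cache of the general kind described: mapping entries are fetched from flash at translation-page granularity on a miss (spatial locality) and kept in RAM under LRU replacement (temporal locality). All names, sizes, and the LRU policy are assumptions for illustration, not the paper's design.

```python
from collections import OrderedDict

ENTRIES_PER_TP = 512   # hypothetical: mapping entries per translation page
CACHE_CAPACITY = 4     # hypothetical: translation pages kept in RAM

class DemandMappingCache:
    """Minimal demand-based FTL mapping cache (not the paper's TPMT/SMT)."""
    def __init__(self):
        self.cache = OrderedDict()   # translation-page number -> list of PPNs
        self.flash_reads = 0         # extra translation reads caused by misses

    def _load_translation_page(self, tpn):
        # Stand-in for reading one translation page from flash.
        self.flash_reads += 1
        return [tpn * ENTRIES_PER_TP + off for off in range(ENTRIES_PER_TP)]

    def translate(self, lpn):
        tpn, off = divmod(lpn, ENTRIES_PER_TP)
        if tpn in self.cache:                   # hit: temporal locality pays off
            self.cache.move_to_end(tpn)
        else:                                   # miss: fetch whole page (spatial locality)
            if len(self.cache) >= CACHE_CAPACITY:
                self.cache.popitem(last=False)  # evict least recently used page
            self.cache[tpn] = self._load_translation_page(tpn)
        return self.cache[tpn][off]

ftl = DemandMappingCache()
for lpn in [0, 1, 2, 513, 0, 514, 1024, 3]:
    ftl.translate(lpn)
print("extra translation reads:", ftl.flash_reads)
```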

An Efficient Cache Management Scheme of Flash Translation Layer for Large Size Flash Memory Drives

  • Choi, Hwan-Pil;Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.20 no.11 / pp.31-38 / 2015
  • Nowadays, large flash memory drives of several hundred gigabytes are common. This paper presents an efficient cache management scheme for the flash translation layer, called TPC-FTL, for large flash memory drives. Since large flash drives usually contain a large amount of RAM, the performance of the page mapping cache can be enhanced by using more RAM for the cache. But if the cache size exceeds a threshold, the existing schemes become impractical for real devices, because the time for cache manipulation grows too long. TPC-FTL manages the cache in translation-page units, not in the logical-page-number units used in existing schemes. Since a translation page covers a large number of logical page numbers (for example, 512 for a 2KB page), the number of cache elements can be reduced to a practical level. A performance evaluation shows that the average response time, an important performance measure, is better than that of existing schemes, owing to the effect of exploiting spatial locality in addition to temporal locality.
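
To illustrate why caching in translation-page units shrinks the cache, the sketch below (not the authors' TPC-FTL code; the working set and entry size are invented) counts how many cache elements are needed to cover the same logical pages at LPN granularity versus translation-page granularity.

```python
PAGE_SIZE = 2048                           # bytes, as in the abstract's 2KB example
ENTRY_SIZE = 4                             # hypothetical bytes per mapping entry
ENTRIES_PER_TP = PAGE_SIZE // ENTRY_SIZE   # 512 logical pages per translation page

# Hypothetical working set of logical page numbers touched by a workload.
accessed_lpns = set(range(0, 4096)) | set(range(100000, 100512))

lpn_granularity_elements = len(accessed_lpns)
tp_granularity_elements = len({lpn // ENTRIES_PER_TP for lpn in accessed_lpns})

print("cache elements at LPN granularity:", lpn_granularity_elements)              # 4608
print("cache elements at translation-page granularity:", tp_granularity_elements)  # 10
```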

The intersection of online/offline spaces and the remediation of the city : a case study of a workshop on locality education (온라인/오프라인 공간의 교차와 도시의 재매개 - 지역 교육 연수를 사례로 -)

  • Lee, Heesang
    • Journal of the Korean association of regional geographers / v.19 no.2 / pp.362-374 / 2013
  • ICTs (Information and Communication Technologies) have changed the ways of social activities and communications, and in the process, online and offline spaces have been thought of as binary spaces in which online spaces substitute for and erode offline spaces. The aim of this study is to explore how urban space, where local social activities and communications are performed, is constructed in terms of timespace through the intersection of online and offline communications, and how that urban space is 'remediated' through online spaces. For this, the study looks at the case of a workshop on 'locality education' held at the Yeungnam University Museum in January 2013. Criticizing the dichotomous viewpoint that an increase in communication through online spaces results in the expansion of 'absent presence' or 'placelessness' in offline spaces, the study argues that online spaces remediating offline spaces do not transcend the timespace constraints of the offline spaces but rather reflect the spatial, temporal, material, social, and cultural environments of urban space and place.

A New Flash Memory Package Structure with Intelligent Buffer System and Performance Evaluation (버퍼 시스템을 내장한 새로운 플래쉬 메모리 패키지 구조 및 성능 평가)

  • Lee Jung-Hoon;Kim Shin-Dug
    • Journal of KIISE:Computer Systems and Theory / v.32 no.2 / pp.75-84 / 2005
  • This research designs a high-performance NAND-type flash memory package with a smart buffer cache that enhances the exploitation of spatial and temporal locality. The proposed buffer structure in a NAND flash memory package, called a smart buffer cache, consists of three parts: a fully-associative victim buffer with a small block size, a fully-associative spatial buffer with a large block size, and a dynamic fetching unit. This new NAND-type flash memory package can achieve dramatically higher performance and lower power consumption compared with conventional NAND-type flash memory. Our results show that the NAND flash memory package with a smart buffer cache can reduce the miss ratio by around 70% and the average memory access time by around 67% over the conventional NAND flash memory configuration. Also, for a given buffer space (e.g., 3KB), the package module with the smart buffer achieves a better average miss ratio and average memory access time than package modules with a conventional direct-mapped buffer with eight times as much space (e.g., 32KB) and with a fully-associative configuration with twice as much space (e.g., 8KB).
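
The victim-buffer/spatial-buffer split can be pictured as two small fully-associative stores searched together, with an evicted large block donating a small sub-block to the victim buffer. The sizes, block widths, and fetch policy below are illustrative assumptions, not the paper's exact design.

```python
from collections import OrderedDict

SMALL_BLOCK = 8      # hypothetical words per victim-buffer block
LARGE_BLOCK = 64     # hypothetical words per spatial-buffer block
VICTIM_SLOTS = 16
SPATIAL_SLOTS = 8

victim_buf = OrderedDict()    # small-block tag -> True, fully associative, LRU
spatial_buf = OrderedDict()   # large-block tag -> True, fully associative, LRU
hits = misses = 0

def access(addr):
    """Look up both buffers; on a miss, fetch a large block into the spatial buffer."""
    global hits, misses
    small_tag = addr // SMALL_BLOCK
    large_tag = addr // LARGE_BLOCK
    if small_tag in victim_buf:
        victim_buf.move_to_end(small_tag)
        hits += 1
    elif large_tag in spatial_buf:
        spatial_buf.move_to_end(large_tag)
        hits += 1
    else:
        misses += 1
        if len(spatial_buf) >= SPATIAL_SLOTS:
            # The evicted large block donates one small sub-block to the victim
            # buffer, preserving some temporal locality after eviction.
            evicted_tag, _ = spatial_buf.popitem(last=False)
            if len(victim_buf) >= VICTIM_SLOTS:
                victim_buf.popitem(last=False)
            victim_buf[evicted_tag * (LARGE_BLOCK // SMALL_BLOCK)] = True
        spatial_buf[large_tag] = True

for a in list(range(0, 256)) + [0, 1, 2, 300, 0]:
    access(a)
print(f"hits={hits} misses={misses}")
```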

Mutual Interference on Mobile Pulsed Scanning LIDAR

  • Kim, Gunzung;Eom, Jeongsook;Choi, Jeonghee;Park, Yongwan
    • IEMEK Journal of Embedded Systems and Applications / v.12 no.1 / pp.43-62 / 2017
  • Mobile pulsed scanning Light Detection And Ranging (LIDAR) sensors are essential components of intelligent vehicles capable of autonomous travel. The obstacle detection functions of autonomous vehicles require very low failure rates. With the increasing number of autonomous vehicles equipped with scanning LIDARs to detect and avoid obstacles and navigate safely through the environment, the probability of mutual interference becomes an important issue. The reception of foreign laser pulses can lead to problems such as ghost targets or a reduced signal-to-noise ratio. This paper shows the probability that any two scanning LIDARs will interfere mutually by considering spatial and temporal overlaps. We have conducted four experiments to investigate the occurrence of mutual interference between scanning LIDARs. The experimental results illustrate the effects of mutual interference and indicate that the interference has spatial and temporal locality. Consecutive mutual interference on the same scan line or at the same angle is hard to ignore, because it may correspond to a real object rather than noise or error. It can cause serious faults, since the obstacle detection functions of an autonomous vehicle rely heavily on the scanning LIDAR.
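
A crude way to picture the temporal side of this overlap analysis (not the authors' model; all timing parameters are invented) is a Monte Carlo estimate of how often a foreign pulse from a second LIDAR, running on a slightly different clock, lands inside the victim LIDAR's receive window.

```python
import random

PERIOD_A = 10_000   # ns, victim LIDAR pulse repetition period (hypothetical)
PERIOD_B = 10_007   # ns, interferer period with a slightly different clock
WINDOW = 200        # ns, receive window opened after each of A's pulses
TRIALS = 100_000

hits = 0
for _ in range(TRIALS):
    phase = random.uniform(0, PERIOD_B)          # random start offset of B
    t_a = random.randrange(0, 1000) * PERIOD_A   # one of A's pulse times
    since_b = (t_a - phase) % PERIOD_B           # time since B's most recent pulse
    if PERIOD_B - since_b <= WINDOW:             # B's next pulse falls in A's window
        hits += 1
print("estimated interference probability per window:", hits / TRIALS)
```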

A Data-Centric Clustering Algorithm for Reducing Network Traffic in Wireless Sensor Networks (무선 센서 네트워크에서 네트워크 트래픽 감소를 위한 데이타 중심 클러스터링 알고리즘)

  • Yeo, Myung-Ho;Lee, Mi-Sook;Park, Jong-Guk;Lee, Seok-Jae;Yoo, Jae-Soo
    • Journal of KIISE:Information Networking / v.35 no.2 / pp.139-148 / 2008
  • Many types of sensor data exhibit strong correlation in both space and time. Suppression, both temporal and spatial, provides opportunities for reducing the energy cost of sensor data collection. Unfortunately, existing clustering algorithms have difficulty exploiting these spatial or temporal opportunities, because they organize clusters based only on the distribution of sensor nodes or the network topology, not on the correlation of sensor data. In this paper, we propose a novel clustering algorithm with suppression techniques. To guarantee independent communication among clusters, we allocate multiple channels based on sensor data. Also, we propose a spatio-temporal suppression technique to reduce the network traffic. In order to show the superiority of our clustering algorithm, we compare it with existing suppression algorithms in terms of the lifetime of the sensor network and the size of the data collected at the base station. Our experimental results show that the size of the data was reduced by 4~40%, and the whole network lifetime was prolonged by 20~30%.
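
To make the spatio-temporal suppression idea concrete, here is a small sketch under invented thresholds: nodes whose readings agree within EPS join the same data-centric cluster, and a cluster head reports only when its value drifts by more than DELTA since its last report. This is an illustration of the general technique, not the paper's algorithm.

```python
EPS = 0.5      # hypothetical data-similarity threshold for clustering
DELTA = 1.0    # hypothetical temporal-suppression threshold

def cluster_by_data(readings):
    """Greedy data-centric clustering: nodes with similar values share a cluster."""
    clusters = []   # each cluster: {'head_value': float, 'members': [node, ...]}
    for node, value in readings.items():
        for c in clusters:
            if abs(value - c["head_value"]) <= EPS:
                c["members"].append(node)
                break
        else:
            clusters.append({"head_value": value, "members": [node]})
    return clusters

def suppressed_reports(values_over_time):
    """Temporal suppression: transmit only when the value changes significantly."""
    sent, last = [], None
    for t, v in enumerate(values_over_time):
        if last is None or abs(v - last) > DELTA:
            sent.append((t, v))
            last = v
    return sent

readings = {"n1": 20.1, "n2": 20.3, "n3": 25.0, "n4": 20.2}
print(cluster_by_data(readings))                            # two clusters: ~20.x and 25.0
print(suppressed_reports([20.0, 20.2, 20.4, 22.0, 22.1]))   # reports at t=0 and t=3
```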

Enhancing GPU Performance by Efficient Hardware-Based and Hybrid L1 Data Cache Bypassing

  • Huangfu, Yijie;Zhang, Wei
    • Journal of Computing Science and Engineering / v.11 no.2 / pp.69-77 / 2017
  • Recent GPUs have adopted cache memory to benefit general-purpose GPU (GPGPU) programs. However, unlike CPU programs, GPGPU programs typically have considerably less temporal/spatial locality. Moreover, the L1 data cache is shared by many threads that access a data set typically much larger than the L1 cache, making it critical to bypass the L1 data cache intelligently to enhance GPU cache performance. In this paper, we examine GPU cache access behavior and propose a simple hardware-based GPU cache bypassing method that can be applied to GPU applications without recompiling programs. Moreover, we introduce a hybrid method that integrates static profiling information and hardware-based bypassing to further enhance performance. Our experimental results reveal that hardware-based cache bypassing can boost performance for most benchmarks, and the hybrid method can achieve performance comparable to state-of-the-art compiler-based bypassing with considerably less profiling cost.
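
A simple way to picture hardware-based bypassing is a per-instruction reuse counter that decides whether a load's data should be cached in L1 or should bypass it. The heuristic below is an illustrative stand-in, not the paper's mechanism, and the threshold is an assumption.

```python
from collections import defaultdict

REUSE_THRESHOLD = 2    # hypothetical: loads with fewer observed reuses bypass L1

class BypassPredictor:
    """Per-PC reuse counters drive the bypass decision."""
    def __init__(self):
        self.reuse = defaultdict(int)       # pc -> observed cache-line reuses
        self.lines_seen = defaultdict(set)  # pc -> cache lines it has touched

    def observe(self, pc, line):
        if line in self.lines_seen[pc]:
            self.reuse[pc] += 1             # the same line was touched again
        self.lines_seen[pc].add(line)

    def should_bypass(self, pc):
        # Streaming loads (little reuse) go straight to L2/DRAM, keeping L1
        # for the few loads that actually show temporal locality.
        return self.reuse[pc] < REUSE_THRESHOLD

pred = BypassPredictor()
trace = [("ld_stream", i) for i in range(64)] + [("ld_hot", 0)] * 8
for pc, line in trace:
    pred.observe(pc, line)
print("bypass ld_stream:", pred.should_bypass("ld_stream"))  # True
print("bypass ld_hot:   ", pred.should_bypass("ld_hot"))     # False
```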

Prefetching Methods of User's Moving Pattern with Spacial and Temporal Locality in Mobile Information Service Environment (이동 정보 서비스 환경에서 공간.시간 지역성을 가진 사용자의 이동 패턴을 고려한 프리페칭)

  • Choi, In-Seon;Kim, Hyung-Jin
    • Proceedings of the KAIS Fall Conference / 2011.05a / pp.433-436 / 2011
  • Due to user mobility, there are many limitations on providing users with the information they want at a stable quality of service (QoS) in mobile information service environments. To partially compensate for this mobility, together with the inherent characteristics of wireless networks such as low bandwidth and high transmission delay, the application of caching or prefetching of valid data has been studied in depth. This paper proposes a prefetching scheme that considers user movement patterns with spatial and temporal locality. The proposed scheme judges information to be of high importance according to how frequently the user visits a particular region and how long the user stays there beyond a certain time, and by applying this it provides a basis for increasing the effectiveness of prefetching.
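
The importance criterion described in the abstract (visit frequency plus dwell time above a threshold) can be sketched as a simple score over visited regions; the weighting, threshold, and movement log below are illustrative assumptions, not the paper's scheme.

```python
from collections import defaultdict

MIN_DWELL = 60.0   # seconds; visits shorter than this are (hypothetically) ignored
TOP_K = 2          # number of regions whose data we would prefetch

# Hypothetical movement log: (region_id, dwell_time_in_seconds)
visits = [("cafe", 1200), ("bus_stop", 30), ("office", 28800),
          ("cafe", 900), ("bus_stop", 45), ("park", 400)]

freq = defaultdict(int)
dwell = defaultdict(float)
for region, t in visits:
    if t >= MIN_DWELL:             # only stays above the threshold count as important
        freq[region] += 1
        dwell[region] += t

# Importance grows with both how often and how long the user stays in a region.
score = {r: freq[r] * dwell[r] for r in freq}
prefetch_targets = sorted(score, key=score.get, reverse=True)[:TOP_K]
print("prefetch data for regions:", prefetch_targets)   # ['office', 'cafe']
```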

An Area Efficient Low Power Data Cache for Multimedia Embedded Systems (멀티미디어 내장형 시스템을 위한 저전력 데이터 캐쉬 설계)

  • Kim Cheong-Ghil;Kim Shin-Dug
    • The KIPS Transactions:PartA / v.13A no.2 s.99 / pp.101-110 / 2006
  • One of the most effective ways to improve cache performance is to exploit both the temporal and spatial locality given by a program's execution characteristics. This paper proposes a data cache with a small footprint for low power but high performance on multimedia applications. The basic architecture is a split cache consisting of a direct-mapped cache with a small block size and a fully-associative buffer with a large block size. To overcome the disadvantage of the small cache space, two mechanisms are enhanced by considering the operational behaviors of multimedia applications: an adaptive multi-block prefetching that initiates various fetch sizes and an efficient block filtering that removes rarely reused data. Simulations on MediaBench show that the proposed 5KB cache can provide equivalent performance and reduce energy consumption by up to 40% compared with a 16KB 4-way set-associative cache.
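
The adaptive multi-block prefetching idea can be illustrated by varying the fetch size with how sequential recent misses have been: widen the fetch while misses are consecutive, shrink back on a jump. The policy and sizes below are a guess at the flavor of the mechanism, not the paper's design.

```python
MAX_PREFETCH = 4   # hypothetical upper bound on blocks fetched per miss

class AdaptivePrefetcher:
    """Fetch more blocks when misses look sequential, fewer when they look random."""
    def __init__(self):
        self.last_miss_block = None
        self.depth = 1                 # current prefetch depth in blocks

    def on_miss(self, block):
        if self.last_miss_block is not None and block == self.last_miss_block + 1:
            self.depth = min(self.depth * 2, MAX_PREFETCH)   # sequential: widen fetch
        else:
            self.depth = 1                                   # random access: shrink back
        self.last_miss_block = block
        return list(range(block, block + self.depth))        # blocks to fetch

pf = AdaptivePrefetcher()
for b in [10, 11, 12, 200, 201]:
    print(b, "->", pf.on_miss(b))
```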

Efficient Implementation of SVM-Based Speech/Music Classifier by Utilizing Temporal Locality (시간적 근접성 향상을 통한 효율적인 SVM 기반 음성/음악 분류기의 구현 방법)

  • Lim, Chung-Soo;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.149-156 / 2012
  • Support vector machines (SVMs) are well known for their pattern recognition capability, but proper care should be taken to alleviate their inherent implementation cost, which results from high computational intensity and memory requirements, especially in embedded systems where only limited resources are available. Since the memory requirement, determined by the dimensionality and the number of support vectors, is generally too high for a cache in embedded systems to accommodate, frequent accesses to main memory occur inevitably whenever the cache cannot provide the requested data to the processor. These frequent main memory accesses result in overall performance degradation and increased energy consumption, because a memory access typically takes longer and consumes more energy than a cache access or a register access. In this paper, we propose a technique that reduces the number of main memory accesses by optimizing the data access pattern of the SVM-based classifier so that the temporal locality of the accesses increases, fully utilizing the data loaded into the processor chip. Experiments confirm the enhancement made by the proposed technique in terms of the number of memory accesses, overall execution time, and energy consumption.
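
One common way to raise the temporal locality of SVM evaluation is to block the computation over support vectors so that each block is reused across several input frames while it is still cache-resident. The blocking below is a generic sketch of that idea under assumed sizes and a linear kernel, not necessarily the authors' exact access-pattern optimization.

```python
import numpy as np

np.random.seed(0)
N_SV, DIM, N_FRAMES = 1024, 64, 32   # hypothetical problem sizes
BLOCK = 128                          # support vectors processed per block

support_vectors = np.random.randn(N_SV, DIM)
alphas = np.random.randn(N_SV)
frames = np.random.randn(N_FRAMES, DIM)

# Naive order: for each frame, stream through all support vectors,
# so every support vector is fetched from memory once per frame.
scores_naive = np.zeros(N_FRAMES)
for f in range(N_FRAMES):
    for s in range(N_SV):
        scores_naive[f] += alphas[s] * np.dot(support_vectors[s], frames[f])

# Blocked order: reuse a block of support vectors across all frames while it
# is (hopefully) still in cache -- better temporal locality on the SV data.
scores_blocked = np.zeros(N_FRAMES)
for start in range(0, N_SV, BLOCK):
    block_sv = support_vectors[start:start + BLOCK]
    block_a = alphas[start:start + BLOCK]
    for f in range(N_FRAMES):
        scores_blocked[f] += block_a @ (block_sv @ frames[f])

print("results match:", np.allclose(scores_naive, scores_blocked))
```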