• Title/Summary/Keyword: page size


An Efficient Cache Management Scheme of Flash Translation Layer for Large Size Flash Memory Drives

  • Choi, Hwan-Pil;Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.20 no.11 / pp.31-38 / 2015
  • Nowadays, flash memory drives with capacities of several hundred gigabytes are common. This paper presents an efficient cache management scheme for the flash translation layer, called TPC-FTL, for large flash memory drives. Since large flash drives usually contain a large RAM, we can enhance the performance of the page mapping cache by devoting more RAM to the cache. But once the cache size exceeds a threshold, the existing schemes become impractical for real devices, because the time for cache manipulation grows too long. TPC-FTL manages the cache in translation-page units, not in the logical-page-number units used by existing schemes. Since a translation page covers a large number of logical page numbers (for example, 512 for a 2 KB page), the number of cache elements can be reduced to a practical level. A performance evaluation shows that the average response time, an important performance measure, is better than that of existing schemes, owing to the exploitation of spatial locality in addition to temporal locality.
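
The indexing arithmetic behind translation-page-granularity caching can be illustrated with a minimal sketch; the constants and helper names below (ENTRIES_PER_TPAGE, tpage_index) are illustrative assumptions, not taken from the paper.

    # Minimal sketch of translation-page indexing for a page-mapping FTL.
    # Assumed (not from the paper): 2 KB flash pages and 4-byte mapping
    # entries, so one translation page holds 512 logical-to-physical mappings.
    PAGE_SIZE = 2048                              # bytes per flash page
    ENTRY_SIZE = 4                                # bytes per mapping entry
    ENTRIES_PER_TPAGE = PAGE_SIZE // ENTRY_SIZE   # = 512

    def tpage_index(lpn: int) -> int:
        """Translation page that holds the mapping for this logical page number."""
        return lpn // ENTRIES_PER_TPAGE

    def tpage_offset(lpn: int) -> int:
        """Entry slot of the LPN inside its translation page."""
        return lpn % ENTRIES_PER_TPAGE

    # Caching whole translation pages (the TPC-FTL idea) means one cache
    # element covers 512 consecutive LPNs, so far fewer cache elements are
    # needed and sequential accesses keep hitting the same element.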

Impact Analysis for Page Size of Desktop and Smartphone Environments under Fast Storage Media (고속 스토리지 탑재에 따른 데스크탑과 스마트폰 환경의 페이지 크기 영향력 분석)

  • Park, Yunjoo;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.2 / pp.77-82 / 2022
  • Due to the recent advent of fast storage media, the memory management system needs to reconsider the configuration of the page unit. In this paper, we analyze the effect of the page size on memory performance as fast storage is adopted. Specifically, we analyze the TLB hit ratio and the page fault ratio as the workload and the page size are varied in desktop and smartphone environments. Our analysis shows that in desktop systems the influence of the page size depends on the system and workload conditions. In smartphone systems, however, the effect of the page size on memory performance is not large and is not sensitive to workloads. We expect that this analysis will be helpful in configuring the page size for given workloads on systems with fast storage media.
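
A rough illustration of the trade-off these two metrics capture is sketched below; the TLB size and physical memory size are assumptions for illustration, not the paper's experimental setup. A larger page enlarges the memory range the TLB can cover, but for a fixed physical memory it leaves fewer resident page frames.

    # Rough illustration with assumed numbers (not the paper's setup).
    TLB_ENTRIES = 64                  # assumed TLB size
    PHYS_MEM = 8 * 1024**3            # assumed 8 GB of physical memory

    for page_size in (4 * 1024, 64 * 1024, 2 * 1024**2):
        tlb_reach = TLB_ENTRIES * page_size       # memory covered without a TLB miss
        resident_frames = PHYS_MEM // page_size   # frames available to hold pages
        print(f"{page_size >> 10:>5} KB page: TLB reach {tlb_reach >> 10} KB, "
              f"{resident_frames} resident frames")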

Implementation of Web-page & Development of Size Informational Model on Fashion Electronic Commerce (패션전자상거래 치수정보모델 개발 및 웹페이지 구현)

  • Kang, Myoung-Hui;Nam, Yun-Ja;Choi, Young-Lim
    • Fashion & Textile Research Journal / v.13 no.2 / pp.205-214 / 2011
  • The purpose of this study is to develop a size-information model that customers can easily recognize and use. This study also implemented a web page that applies the size-information model. The web page was implemented using the Apache Web Server and Java-based client-side scripting. A survey of actual conditions in fashion electronic commerce found that most firms still use the old size designations, the same as in the 1980s. Under the same named code, different sizing systems are used depending on the firm or the item. Size intervals of 2~5 cm are used, differing by firm. The size information provides only a named code (55, 66, etc.) or a garment size, and it is often unclear whether the marked size is a body size or a garment size. Much of the marked size information was wrong. The sizing system of KS K 5001 (2009) is not well used. These problems increase losses for both customers and firms through returns, exchanges, mending costs, stock, and so on. Therefore, the problems should be addressed by providing correct and detailed body and garment size information, as well as by standardizing sizing systems based on KS K 5001.

Workload-Aware Page Size Modeling for Fast Storage in Virtualized Environments (가상화 환경에서 고속 스토리지를 위한 워크로드 맞춤형 페이지 크기 모델링)

  • Bahn, Hyokyung;Park, Yunjoo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.3 / pp.93-98 / 2022
  • Recently, fast storage media such as Optane have emerged, and memory system configurations designed for disk storage should be reconsidered. In this paper, we analyze the effect of the page size on memory system performance when fast storage is adopted. Based on this, we design a page size model that can guide an appropriate page size for given workloads in virtualized environments. Configuring different page sizes for various workloads is not easy in traditional systems, but with the widespread adoption of cloud systems, the page sizing performed by our model is feasible for virtual machines that are created to execute specific workloads. Simulation experiments under various virtual machine scenarios show that the proposed model improves memory access time significantly by configuring page sizes to suit the given workloads.
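
A minimal sketch of the kind of per-VM page-size selection such a model enables is given below; the cost parameters and the miss/fault profiles are placeholders, not the paper's fitted model.

    # Placeholder sketch: choose the page size that minimizes an estimated
    # memory access time for one virtual machine's workload profile.
    T_MEM = 100e-9        # memory access time (assumed)
    T_WALK = 400e-9       # page-table walk cost on a TLB miss (assumed)
    T_FAULT = 10e-6       # page fault service time with fast storage (assumed)

    def estimated_access_time(tlb_miss_rate, fault_rate):
        return T_MEM + tlb_miss_rate * T_WALK + fault_rate * T_FAULT

    def pick_page_size(workload_profile):
        """workload_profile maps a candidate page size to its measured
        (tlb_miss_rate, page_fault_rate) pair for this virtual machine."""
        return min(workload_profile,
                   key=lambda p: estimated_access_time(*workload_profile[p]))

    # Example profile gathered while the VM runs its target workload.
    profile = {4096: (0.05, 1e-4), 65536: (0.01, 3e-4), 2 * 1024**2: (0.002, 1e-3)}
    print(pick_page_size(profile))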

Modeling of TLB Miss Rate and Page Fault Rate for Memory Management in Fast Storage Environments (고속 스토리지 환경의 메모리 관리를 위한 TLB 미스율 및 페이지 폴트율 모델링)

  • Park, Yunjoo;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.1 / pp.65-70 / 2022
  • As fast storage has become popular, the memory management system designed for hard disks needs to be reconsidered. In this paper, we observe that memory access latency is sensitive to the page size when fast storage is adopted. We trace the reason to the TLB miss rate, whose impact on memory access latency has grown relative to that of the page fault rate, and to the trade-off between the TLB miss rate and the page fault rate as the page size is varied. To handle such situations, we model the page fault rate and the TLB miss rate accurately as functions of the page size. Specifically, we show that a power fit and a two-term exponential fit are appropriate for fitting the TLB miss rate and the page fault rate, respectively. We validate the effectiveness of our model by comparing values estimated from the model with measured values.
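
The two functional forms named in the abstract, a power law for the TLB miss rate and a two-term exponential for the page fault rate, can be fitted with standard curve fitting as sketched below; the data points are synthetic placeholders, and only the model forms follow the abstract.

    # Fitting the two model forms named in the abstract with synthetic data.
    import numpy as np
    from scipy.optimize import curve_fit

    def power_fit(x, a, b):                   # TLB miss rate ~ a * x**b
        return a * np.power(x, b)

    def exp2_fit(x, a, b, c, d):              # page fault rate ~ a*e^(bx) + c*e^(dx)
        return a * np.exp(b * x) + c * np.exp(d * x)

    # Synthetic placeholder data over candidate page sizes in KB.
    page_sizes = np.array([4, 8, 16, 32, 64, 128], dtype=float)
    tlb_miss = 0.25 * page_sizes ** -0.8
    page_fault = 8e-4 * np.exp(0.012 * page_sizes) + 2e-4 * np.exp(0.025 * page_sizes)

    (pa, pb), _ = curve_fit(power_fit, page_sizes, tlb_miss, p0=(0.1, -1.0))
    (ea, eb, ec, ed), _ = curve_fit(exp2_fit, page_sizes, page_fault,
                                    p0=(1e-3, 0.01, 1e-4, 0.03), maxfev=20000)
    print(pa, pb)          # fitted power-law coefficients for the TLB miss rate
    print(ea, eb, ec, ed)  # fitted two-term exponential for the page fault rate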

A dual TLB supporting two pages without operating system aid (운영체제의 지원 없이 이중 페이지를 지원하는 TLB)

  • 이정훈;이장수;김신덕
    • Proceedings of the Korean Information Science Society Conference / 2000.04a / pp.42-44 / 2000
  • Three main research directions have been pursued to improve TLB performance: increasing the number of TLB entries as much as possible, greatly increasing the page size, and supporting multiple page sizes. Among these, supporting multiple page sizes provides the best performance, but as yet no operating system supports multiple page sizes all the way to the user level. Therefore, to obtain the benefits of multiple pages, we propose a new dual TLB structure and its operating method, which supports two page sizes without operating system aid and achieves high performance at low cost. The proposed dual TLB consists of a fully associative TLB supporting the small page size and a fully associative TLB supporting the large page size. A comparative performance analysis against conventional TLBs with large numbers of entries shows that the proposed structure achieves nearly the same performance with a small number of entries. In addition, with the same TLB area, it reduces the miss rate of the conventional scheme by about 90%.
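
A minimal functional sketch of the dual-TLB lookup idea follows; the entry counts, page sizes, and LRU replacement are illustrative assumptions, not details from the paper.

    # Sketch of a dual TLB: one fully associative TLB for small pages and one
    # for large pages (sizes and replacement policy are assumed).
    from collections import OrderedDict

    SMALL_PAGE = 4 * 1024          # assumed small page size
    LARGE_PAGE = 64 * 1024         # assumed large page size

    class FullyAssocTLB:
        def __init__(self, entries, page_size):
            self.entries, self.page_size = entries, page_size
            self.map = OrderedDict()           # VPN -> PFN, kept in LRU order

        def lookup(self, vaddr):
            vpn = vaddr // self.page_size
            if vpn in self.map:
                self.map.move_to_end(vpn)      # refresh LRU position
                return self.map[vpn]
            return None

        def insert(self, vaddr, pfn):
            vpn = vaddr // self.page_size
            if len(self.map) >= self.entries:
                self.map.popitem(last=False)   # evict the LRU entry
            self.map[vpn] = pfn

    small_tlb = FullyAssocTLB(entries=32, page_size=SMALL_PAGE)
    large_tlb = FullyAssocTLB(entries=8, page_size=LARGE_PAGE)

    def translate(vaddr):
        # Probe both TLBs; a hit in either avoids a page-table walk, so no OS
        # support for multiple page sizes is required.
        pfn = large_tlb.lookup(vaddr)
        if pfn is None:
            pfn = small_tlb.lookup(vaddr)
        return pfn                             # None means a miss in both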


Efficient Management of PCM-based Swap Systems with a Small Page Size

  • Park, Yunjoo;Bahn, Hyokyung
    • JSTS: Journal of Semiconductor Technology and Science / v.15 no.5 / pp.476-484 / 2015
  • Due to the recent advances in non-volatile memory technologies such as PCM, a new memory hierarchy of computer systems is expected to appear. In this paper, we explore the performance of PCM-based swap systems and discuss how such a system can be managed efficiently. Specifically, we introduce three management techniques. First, we show that the page fault handling time can be reduced by attaching PCM to DIMM slots, thereby eliminating the software stack overhead of block I/O and the context switch time. Second, we show that it is effective to reduce the page size and turn off the read-ahead option under the PCM swap system, where the page fault handling time is sufficiently small. Third, we show that performance is not degraded even with a small DRAM memory under a PCM swap device; this reduces DRAM's energy consumption significantly compared to HDD-based swap systems. We expect that the result of this paper will lead to the transition of the legacy swap system structure of "large memory - slow swap" to a new paradigm of "small memory - fast swap."
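
On Linux, the second technique (turning off swap read-ahead) can be illustrated with the vm.page-cluster knob; this is an illustration of the idea, not the authors' exact setup.

    # Illustration only (requires root): Linux swap read-ahead fetches
    # 2**vm.page-cluster pages per fault (default 3, i.e. 8 pages).
    # Writing 0 disables swap read-ahead, which the paper argues is effective
    # when each fault is already cheap on a PCM swap device.
    with open("/proc/sys/vm/page-cluster", "w") as f:
        f.write("0\n")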

A Compact Representation of Translation Pages for Flash Translation Layers of Solid State Drives

  • Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.24 no.2 / pp.1-7 / 2019
  • This paper presents CTP (Compact Translation Page), a compact representation of translation pages for page-mapping-based flash translation layers, to improve RAM utilization and reduce the response time of solid state drives. CTP can store twice as much translation information in a translation page, so the total number of translation pages stored in flash is halved. Therefore, CTP halves the RAM size of the translation-page directory and uses the saved RAM space for the translation cache. CTP shows the best response time when compared with existing page-mapping-based flash translation layers.
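
The directory-size effect described above amounts to simple arithmetic, sketched below; the drive capacity, page size, and entry sizes are assumed values, and the abstract does not describe CTP's actual encoding.

    # Illustrative arithmetic only; CTP's encoding is not given in the abstract.
    PAGE_SIZE = 4096                            # bytes per flash page (assumed)
    PLAIN_ENTRY = 4                             # bytes per mapping entry (assumed)
    TOTAL_LPNS = (256 * 1024**3) // PAGE_SIZE   # 256 GB drive with 4 KB pages (assumed)

    def directory_entries(entry_bytes):
        entries_per_tpage = PAGE_SIZE // entry_bytes
        return -(-TOTAL_LPNS // entries_per_tpage)   # ceiling division

    plain = directory_entries(PLAIN_ENTRY)           # 65536 translation pages
    compact = directory_entries(PLAIN_ENTRY // 2)    # 32768 if twice as much fits per page
    print(plain, compact)                            # compact directory is half the size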

Effect of ASLR on Memory Duplicate Ratio in Cache-based Virtual Machine Live Migration

  • Piao, Guangyong;Oh, Youngsup;Sung, Baegjae;Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications / v.9 no.4 / pp.205-210 / 2014
  • The cache-based live migration method utilizes a cache accessible to both sides (remote and local) to reduce virtual machine migration time by transferring only non-duplicated data. However, address space layout randomization (ASLR) has been shown to reduce the memory duplicate ratio between the memory targeted for migration and the migration cache. In this paper, we analyzed the behavior of ASLR to find out how it changes the physical memory contents of virtual machines. We found that among the six virtual memory regions, only modification of the stack influences the page-level memory duplicate ratio. Experiments showed that: (1) ASLR does not shift the heap region at sub-page granularity; (2) with ASLR enabled, the stack reduces the size of duplicate pages among VMs performing input replay by around 40 MB; (3) the size of memory pages that can be reconstructed from the freshly booted state is also reduced by about 60 MB due to ASLR. With these observations, when applying the cache-based migration method we can omit the stack region, while for the other five regions even a coarse page-level redundancy detection method can identify most of the duplicate memory contents.
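
A minimal sketch of coarse page-level duplicate detection is given below; hashing 4 KB pages and the SHA-1 choice are illustrative assumptions, not the authors' implementation.

    # Sketch: hash each 4 KB page of a VM memory image and measure how many
    # pages are already present in the migration cache.
    import hashlib

    PAGE = 4096

    def page_hashes(image: bytes):
        return {hashlib.sha1(image[i:i + PAGE]).digest()
                for i in range(0, len(image), PAGE)}

    def duplicate_ratio(vm_image: bytes, cache_image: bytes) -> float:
        vm, cache = page_hashes(vm_image), page_hashes(cache_image)
        return len(vm & cache) / max(len(vm), 1)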

The Development of Editor for Web Authoring Tool (웹 저작도구를 위한 에디터 개발)

  • 박헌정;김치수
    • Journal of Internet Computing and Services / v.3 no.4 / pp.27-36 / 2002
  • The purpose of this study is to develop a vector-image editor for the distance learning system (FVU), which enables teachers to effectively construct their own pages on the screen, to reduce the size of files used for teaching, and to edit the various events that were created previously. The editor was designed using UML and is named VUEditor. The first page needed in class can be constructed using VUEditor. Content created with VUEditor is exported to VUAuthor through vector transformation. Through this procedure, the size of the image file is reduced, so that it requires low bandwidth. In conclusion, VUEditor enables users to construct the first page without using application programs such as an image tool or PowerPoint, and it relieves the network traffic problem by reducing file size.
