• Title/Summary/Keyword: Memory allocation

Performance Analysis of Block Allocation of File Systems on Linux Environment (리눅스 환경에서 파일 시스템들의 블록 할당 성능 분석)

  • Choi, Jin-oh
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.355-357 / 2014
  • The Linux environment commonly used in embedded systems supports various file systems such as Ext2, FAT, and NTFS. The file system deployed on an embedded system is usually implemented on a mini hard disk or flash memory, and the choice of file system affects the performance of application programs. On the same media, the main file system performance factors are block allocation time and block free time; of these, block free time does not differ much across file system types. This paper benchmarks the block allocation performance of the Ext2, FAT, and NTFS file systems and discusses which file system is better in which case.
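As an illustration of what such a block allocation benchmark measures, the sketch below times sequential block allocation on a file created on the target file system. The block size, block count, and use of O_SYNC are illustrative assumptions, not the parameters used in the paper.

```c
/* block_alloc_bench.c - time sequential block allocation on a mounted file system.
 * A minimal sketch; block size, block count, and O_SYNC usage are illustrative
 * assumptions, not the parameters used in the paper. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define BLOCK_SIZE 4096
#define NUM_BLOCKS 4096          /* 16 MiB of newly allocated blocks */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <path-on-target-fs>\n", argv[0]);
        return 1;
    }

    char buf[BLOCK_SIZE];
    memset(buf, 0xA5, sizeof(buf));

    /* O_SYNC forces each write to reach the media, so the timing reflects
     * the file system's allocation path rather than the page cache. */
    int fd = open(argv[1], O_CREAT | O_TRUNC | O_WRONLY | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (write(fd, buf, BLOCK_SIZE) != BLOCK_SIZE) {
            perror("write");
            close(fd);
            return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("allocated %d blocks of %d B in %.3f s (%.1f blocks/s)\n",
           NUM_BLOCKS, BLOCK_SIZE, sec, NUM_BLOCKS / sec);
    return 0;
}
```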

A Simulator for Performance Evaluation of Multithreaded Memory Allocation Operation in Multi-Core Environment (멀티코어 환경에서의 멀티스레드 기법을 이용한 메모리 할당 연산의 성능 평가를 위한 시뮬레이터)

  • Kim, Ho-Young;Huang, Dada;Han, Sang-Hyuck;Kim, Young-Kuk
    • Proceedings of the Korean Information Science Society Conference / 2012.06a / pp.245-247 / 2012
  • Recently, the use of multi-core processors has become widespread. On a multi-core system, software gains performance when it runs on several cores at once, that is, when it uses multithreaded programming techniques that let a single program use multiple cores simultaneously. In such an environment, efficient memory allocation is very important for desktop, server, and scientific applications. However, dynamic memory allocation incurs overhead from allocation and deallocation operations and from the synchronization needed when one thread accesses another thread's heap area, and this overhead affects performance. A tool is therefore needed to measure how much these factors actually affect performance in such environments. This paper designs and implements MAES (Memory Allocation Evaluation Simulator), a simulator that measures and evaluates how memory allocation operations affect performance when multithreading is used on a multi-core system.
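In the spirit of the simulator described above, the following sketch measures multithreaded malloc/free throughput with POSIX threads. The thread count, allocation-size distribution, and iteration count are assumptions for illustration; this is not the MAES implementation.

```c
/* malloc_bench.c - measure multithreaded malloc/free throughput.
 * A simplified sketch, not the MAES simulator; thread count, allocation
 * sizes, and iteration count are assumptions.
 * Build: gcc -O2 -pthread malloc_bench.c -o malloc_bench */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NUM_THREADS 4
#define ITERATIONS  1000000

static void *worker(void *arg)
{
    unsigned int seed = (unsigned int)(size_t)arg;
    for (long i = 0; i < ITERATIONS; i++) {
        size_t size = 16 + rand_r(&seed) % 4096;   /* mixed small/medium requests */
        void *p = malloc(size);
        if (!p) { perror("malloc"); exit(1); }
        ((char *)p)[0] = 1;                         /* touch the block */
        free(p);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)(i + 1));
    for (size_t i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d threads x %d malloc/free pairs: %.3f s (%.0f ops/s)\n",
           NUM_THREADS, ITERATIONS, sec, NUM_THREADS * (double)ITERATIONS / sec);
    return 0;
}
```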

Granular Bidirectional and Multidirectional Associative Memories: Towards a Collaborative Buildup of Granular Mappings

  • Pedrycz, Witold
    • Journal of Information Processing Systems / v.13 no.3 / pp.435-447 / 2017
  • Associative and bidirectional associative memories are examples of associative structures studied intensively in the literature. The underlying idea is to realize an associative mapping so that the recall processes (one-directional and bidirectional) are realized with minimal recall errors. Associative and fuzzy associative memories have been studied in numerous areas, yielding efficient applications in image recall and enhancement and in fuzzy controllers, which can be regarded as one-directional associative memories. In this study, we revisit and augment the concept of associative memories by offering some new design insights, in which the corresponding mappings are realized on the basis of a related collection of landmarks (prototypes) over which an associative mapping becomes spanned. In light of the bidirectional character of the mappings, we develop an augmentation of existing fuzzy clustering (fuzzy c-means, FCM) in the form of so-called collaborative fuzzy clustering, in which the interaction in the formation of prototypes is optimized so that the bidirectional recall errors can be minimized. Furthermore, we generalize the mapping into its granular version, in which the numeric prototypes formed through the clustering process are made granular so that the quality of the recall can be quantified. We propose several scenarios in which the allocation of information granularity is aimed at optimizing the characteristics of the recalled results (information granules), quantified in terms of coverage and specificity. We also introduce various architectural augmentations of the associative structures.
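The mapping above is built on fuzzy c-means prototypes. The sketch below runs the standard FCM update (membership update followed by prototype update), not the collaborative or granular extensions proposed in the paper; the toy data, the number of clusters, and the fuzzifier m = 2 are illustrative assumptions.

```c
/* fcm_step.c - standard fuzzy c-means (FCM) iterations on toy data.
 * Baseline FCM only, not the collaborative or granular variants.
 * Build: gcc fcm_step.c -lm -o fcm_step */
#include <math.h>
#include <stdio.h>

#define N 6     /* data points */
#define D 2     /* dimensions  */
#define C 2     /* clusters    */
#define M 2.0   /* fuzzifier   */

static double dist(const double *x, const double *v)
{
    double s = 0.0;
    for (int d = 0; d < D; d++) s += (x[d] - v[d]) * (x[d] - v[d]);
    return sqrt(s) + 1e-12;                      /* avoid division by zero */
}

/* u[i][k] = 1 / sum_j (||x_k - v_i|| / ||x_k - v_j||)^(2/(m-1)) */
static void update_memberships(double x[N][D], double v[C][D], double u[C][N])
{
    for (int k = 0; k < N; k++)
        for (int i = 0; i < C; i++) {
            double sum = 0.0;
            for (int j = 0; j < C; j++)
                sum += pow(dist(x[k], v[i]) / dist(x[k], v[j]), 2.0 / (M - 1.0));
            u[i][k] = 1.0 / sum;
        }
}

/* v_i = sum_k u[i][k]^m * x_k / sum_k u[i][k]^m */
static void update_prototypes(double x[N][D], double v[C][D], double u[C][N])
{
    for (int i = 0; i < C; i++) {
        double num[D] = {0}, den = 0.0;
        for (int k = 0; k < N; k++) {
            double w = pow(u[i][k], M);
            den += w;
            for (int d = 0; d < D; d++) num[d] += w * x[k][d];
        }
        for (int d = 0; d < D; d++) v[i][d] = num[d] / den;
    }
}

int main(void)
{
    double x[N][D] = {{0,0},{0,1},{1,0},{5,5},{5,6},{6,5}};
    double v[C][D] = {{0.5,0.5},{5.5,5.5}};      /* initial prototypes */
    double u[C][N];

    for (int it = 0; it < 20; it++) {            /* a few iterations suffice here */
        update_memberships(x, v, u);
        update_prototypes(x, v, u);
    }
    for (int i = 0; i < C; i++)
        printf("prototype %d: (%.2f, %.2f)\n", i, v[i][0], v[i][1]);
    return 0;
}
```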

Efficient Memory Update Module for Video Object Segmentation (동영상 물체 분할을 위한 효율적인 메모리 업데이트 모듈)

  • Jo, Junho;Cho, Nam Ik
    • Journal of Broadcast Engineering / v.27 no.4 / pp.561-568 / 2022
  • Most deep learning-based video object segmentation methods perform segmentation using past prediction information stored in an external memory. In general, the more past information is stored in the memory, the better the results, since evidence for various changes in the objects of interest accumulates. However, not all information can be stored in the memory due to hardware limitations, which leads to performance degradation. In this paper, we propose a method of storing new information in the external memory without additional memory allocation. Specifically, after calculating the attention scores between the existing memory and the information to be newly stored, the new information is added to the corresponding memory entries according to each score. The method works robustly because the attention mechanism reflects object changes well without using additional memory. In addition, the update rate is determined adaptively according to the accumulated number of matches in the memory, so that frequently updated samples store more information and remain reliable.
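As a rough illustration of the update rule sketched in the abstract, the following toy example blends a new feature vector into a fixed set of memory slots in proportion to softmax attention scores, so the memory does not grow. The dot-product attention, the blending formula, and the slot dimensions are assumptions for illustration, not the authors' exact method (which also adapts the update rate by match counts).

```c
/* memory_update.c - blend a new feature into fixed-size external memory slots
 * according to attention scores, instead of appending a new slot.
 * Simplified numeric sketch; not the authors' exact formulation.
 * Build: gcc memory_update.c -lm -o memory_update */
#include <math.h>
#include <stdio.h>

#define SLOTS 4     /* memory entries */
#define DIM   3     /* feature length */

int main(void)
{
    double memory[SLOTS][DIM] = {
        {1.0, 0.0, 0.0}, {0.0, 1.0, 0.0}, {0.0, 0.0, 1.0}, {0.5, 0.5, 0.0}
    };
    double new_feat[DIM] = {0.9, 0.1, 0.0};   /* feature from the current frame */

    /* 1. attention score of each slot = softmax of dot products */
    double score[SLOTS], max = -1e30, sum = 0.0;
    for (int s = 0; s < SLOTS; s++) {
        score[s] = 0.0;
        for (int d = 0; d < DIM; d++) score[s] += memory[s][d] * new_feat[d];
        if (score[s] > max) max = score[s];
    }
    for (int s = 0; s < SLOTS; s++) { score[s] = exp(score[s] - max); sum += score[s]; }
    for (int s = 0; s < SLOTS; s++) score[s] /= sum;

    /* 2. distribute the new information over the existing slots in proportion
     *    to their scores, so the memory size stays constant. */
    for (int s = 0; s < SLOTS; s++)
        for (int d = 0; d < DIM; d++)
            memory[s][d] = (1.0 - score[s]) * memory[s][d] + score[s] * new_feat[d];

    for (int s = 0; s < SLOTS; s++)
        printf("slot %d: %.3f %.3f %.3f (score %.3f)\n",
               s, memory[s][0], memory[s][1], memory[s][2], score[s]);
    return 0;
}
```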

Development of Full Coverage Test Framework for NVMe Based Storage

  • Park, Jung Kyu;Kim, Jaeho
    • Journal of the Korea Society of Computer and Information / v.22 no.4 / pp.17-24 / 2017
  • In this paper, we propose an efficient dynamic workload balancing strategy which improves the performance of high-performance computing systems. The key idea is to minimize the execution time of each job and to maximize system throughput by effectively using system resources such as CPU and memory. The strategy dynamically allocates jobs by considering the memory demand of each job and the workload status of each node. If a node becomes overloaded by the allocated jobs, the proposed scheme migrates jobs running on the overloaded node to free nodes, reducing job waiting and execution times by balancing the workload across nodes. Through simulation, we show that the proposed CPU- and memory-based dynamic workload balancing strategy improves the performance of a high-performance computing system compared to previous strategies.
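A minimal sketch of the allocation rule described here: place a job on the least-loaded node with enough free memory, and flag overloaded nodes for migration. The node fields, the overload threshold, and the selection metric are illustrative assumptions rather than details from the paper.

```c
/* balance.c - greedy job placement by free memory and CPU load.
 * Minimal sketch; node fields, the 0.8 overload threshold, and the
 * selection metric are illustrative assumptions. */
#include <stdio.h>

#define NUM_NODES 4

struct node {
    double cpu_load;      /* 0.0 .. 1.0       */
    long   free_mem_mb;   /* available memory */
};

/* Pick the least-loaded node that can hold the job's memory demand. */
static int pick_node(struct node nodes[], long demand_mb)
{
    int best = -1;
    for (int i = 0; i < NUM_NODES; i++) {
        if (nodes[i].free_mem_mb < demand_mb) continue;
        if (best < 0 || nodes[i].cpu_load < nodes[best].cpu_load) best = i;
    }
    return best;   /* -1 means no node can take the job right now */
}

int main(void)
{
    struct node nodes[NUM_NODES] = {
        {0.90, 8192}, {0.35, 2048}, {0.20, 16384}, {0.85, 4096}
    };
    long demand_mb = 4096;

    int target = pick_node(nodes, demand_mb);
    if (target >= 0) {
        nodes[target].free_mem_mb -= demand_mb;
        printf("job (%ld MB) placed on node %d\n", demand_mb, target);
    }

    /* Migration check: any node above the overload threshold would hand a
     * running job to the currently least-loaded node with enough memory. */
    for (int i = 0; i < NUM_NODES; i++)
        if (nodes[i].cpu_load > 0.8)
            printf("node %d is overloaded; candidate for migration to node %d\n",
                   i, pick_node(nodes, demand_mb));
    return 0;
}
```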

Study on the Relationship between Adolescents' Self-esteem and their Sociality -Focusing on the Moderating Effect of Gender -

  • Kim, Kyung-Sook;Lee, Duk-Nam
    • Journal of the Korea Society of Computer and Information / v.21 no.1 / pp.147-153 / 2016

Bayesian Regression Modeling for Patent Keyword Analysis

  • Choi, JunHyeog;Jun, SungHae
    • Journal of the Korea Society of Computer and Information / v.21 no.1 / pp.125-129 / 2016

The Effects of Cache Memory on the System Bus Traffic (캐쉬 메모리가 버스 트래픽에 끼치는 영향)

  • 조용훈;김정선
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.1 / pp.224-240 / 1996
  • It is now standard for computer systems to use at least one level of cache memory. In this paper, the impact of the internal cache memory organization on computer performance is investigated using a simulator program, written by the authors and run on a SUN SPARC workstation, with several real execution trace files. 280 cache organizations have been simulated using n-way set-associative mapping and the LRU (Least Recently Used) replacement algorithm with a write-allocate policy. As a result, a 16-way set-associative cache is the best configuration: with a 256 KB cache and a 64-byte line size, the bus traffic ratio decreases so much compared to a non-cache system that a single bus can support almost 7 processors without delay or degradation (the hit ratio was 99.21%). The smaller the line size, the slightly lower the hit ratio, but the more processors a single bus can support (a maximum of 18 processors). Therefore, a proper cache memory organization can allow a single-bus structure to support multiple processors without performance degradation.
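For reference, the kind of model such a simulator evaluates can be sketched as an n-way set-associative cache with LRU replacement and write-allocate. The configuration below uses the 256 KB / 64-byte / 16-way setting reported as best above; the synthetic access stream stands in for the SPARC execution traces, and no bus-traffic accounting is included.

```c
/* cache_sim.c - n-way set-associative cache lookup with LRU replacement.
 * Minimal sketch of the simulated structure; the trace handling and
 * bus-traffic accounting of the paper are not reproduced. */
#include <stdio.h>
#include <stdint.h>

#define CACHE_SIZE (256 * 1024)
#define LINE_SIZE  64
#define WAYS       16
#define SETS       (CACHE_SIZE / (LINE_SIZE * WAYS))    /* 256 sets */

struct line { uint64_t tag; int valid; unsigned lru; }; /* lru: 0 = most recent */
static struct line cache[SETS][WAYS];
static unsigned long hits, misses;

/* Returns 1 on hit, 0 on miss; misses allocate a line (write-allocate). */
static int access_cache(uint64_t addr)
{
    uint64_t block = addr / LINE_SIZE;
    unsigned set   = (unsigned)(block % SETS);
    uint64_t tag   = block / SETS;

    /* hit check */
    for (int w = 0; w < WAYS; w++) {
        if (cache[set][w].valid && cache[set][w].tag == tag) {
            unsigned old = cache[set][w].lru;
            for (int k = 0; k < WAYS; k++)          /* age lines used more recently */
                if (cache[set][k].valid && cache[set][k].lru < old) cache[set][k].lru++;
            cache[set][w].lru = 0;
            hits++;
            return 1;
        }
    }

    /* miss: pick an invalid way, otherwise the least recently used one */
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (!cache[set][w].valid) { victim = w; break; }
        if (cache[set][w].lru > cache[set][victim].lru) victim = w;
    }
    for (int k = 0; k < WAYS; k++)
        if (cache[set][k].valid) cache[set][k].lru++;
    cache[set][victim].valid = 1;
    cache[set][victim].tag   = tag;
    cache[set][victim].lru   = 0;
    misses++;
    return 0;
}

int main(void)
{
    /* Synthetic access stream: a hot working set plus occasional streaming
     * accesses, standing in for the SPARC execution traces used in the paper. */
    for (uint64_t i = 0; i < 1000000; i++) {
        uint64_t addr = (i % 10 == 0) ? i * 4096 : (i % 512) * LINE_SIZE;
        access_cache(addr);
    }
    printf("hits: %lu  misses: %lu  hit ratio: %.2f%%\n",
           hits, misses, 100.0 * hits / (hits + misses));
    return 0;
}
```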

Garbage Collection Technique for Balanced Wear-out and Durability Enhancement with Solid State Drive on Storage Systems

  • Kim, Sungho;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.22 no.4 / pp.25-32 / 2017
  • Recently, NAND flash memory has been increasingly used as a secondary storage device displacing the conventional magnetic disk. As a non-volatile memory, NAND flash memory has many advantages such as low power consumption, high reliability, and low access latency. However, unlike a magnetic disk, it also has drawbacks such as erase-before-write, asymmetric operation speeds, and limited P/E cycles. To handle these characteristics, NAND flash memory mainly adopts an FTL (Flash Translation Layer), and in particular the garbage collection technique in the FTL tries to improve system lifetime. However, with previous garbage collection techniques the system lifetime is sensitive to the write pattern. To solve this problem, we propose the BSGC (Balanced Selection-based Garbage Collection) technique. BSGC efficiently selects a victim block using all intervals from past information to current information. In addition, SFL (Search First linked List), the proposed block allocation policy, further prolongs the system lifetime. In our experiments, SFL and BSGC prolonged the system lifetime by about 12.85% on average, reduced page migrations by about 22.12% on average, and reduced the average response time by 16.88%.
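For context, the conventional greedy policy that wear-aware schemes such as BSGC improve upon picks the block with the fewest valid pages, minimizing copy cost but ignoring wear. The sketch below shows that baseline only; it is not BSGC or SFL, whose interval-based scoring and allocation policy are only summarized in the abstract, and the block structure is an assumption.

```c
/* gc_victim.c - greedy victim-block selection for flash garbage collection.
 * Conventional greedy policy (fewest valid pages, i.e. least copy cost),
 * shown for contrast; NOT the BSGC technique of the paper. Block fields
 * and values are illustrative assumptions. */
#include <stdio.h>

#define NUM_BLOCKS      8
#define PAGES_PER_BLOCK 64

struct block {
    int valid_pages;    /* pages that must be copied before erase */
    int erase_count;    /* wear level of this block               */
};

/* Greedy: minimize the number of valid pages that must be migrated. */
static int select_victim(struct block blocks[], int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (blocks[i].valid_pages < blocks[victim].valid_pages)
            victim = i;
    return victim;
}

int main(void)
{
    struct block blocks[NUM_BLOCKS] = {
        {60, 10}, {12, 55}, {30, 20}, {5, 70},
        {64,  5}, {22, 33}, {48, 15}, {7, 62}
    };

    int v = select_victim(blocks, NUM_BLOCKS);
    printf("victim block %d: copy %d valid pages, then erase (erase count %d)\n",
           v, blocks[v].valid_pages, blocks[v].erase_count);

    /* Note how the greedy choice (block 3) is also one of the most worn
     * blocks; wear-aware policies trade copy cost against balanced wear
     * instead of minimizing copy cost alone. */
    return 0;
}
```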

Recency and Frequency based Page Management on Hybrid Main Memory

  • Kim, Sungho;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.23 no.3 / pp.1-8 / 2018
  • In this paper, we propose a new page replacement policy that uses recency and frequency on a hybrid main memory. The proposal has two features. First, when a page fault occurs, the faulting page is allocated to DRAM regardless of the operation type (read or write), since a page brought in by a fault has a high probability of being re-referenced in the near future; this allocation reduces the number of write operations in PCM. Second, pages in PCM that are written frequently are migrated from PCM to DRAM, while the remaining pages stay in PCM to avoid unnecessary migrations. In our experiments, the proposal reduced the number of page migrations from PCM by about 32.12% on average and the number of write operations in PCM by about 44.64% on average, compared to CLOCK-DWF. Moreover, it reduced energy consumption by about 15.61% and 3.04% compared to other page replacement policies.
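The two placement rules described above can be sketched as follows: a faulting page always lands in DRAM, and a PCM page migrates to DRAM once it has been written often enough. The write-hot threshold and the page bookkeeping are illustrative assumptions, not values from the paper.

```c
/* hybrid_page.c - page-fault placement and write-driven migration on a
 * DRAM/PCM hybrid memory. Minimal sketch of the two rules in the abstract;
 * WRITE_HOT_THRESHOLD and the page bookkeeping are assumptions. */
#include <stdio.h>

#define WRITE_HOT_THRESHOLD 4   /* writes in PCM before a page is migrated */

enum medium { DRAM, PCM };

struct page {
    int          id;
    enum medium  where;
    int          pcm_writes;    /* writes observed while resident in PCM */
};

/* Rule 1: a faulting page is always placed in DRAM, for reads and writes alike. */
static void handle_fault(struct page *p)
{
    p->where = DRAM;
    p->pcm_writes = 0;
    printf("page %d: fault -> placed in DRAM\n", p->id);
}

/* Rule 2: pages that keep being written while in PCM migrate to DRAM;
 * rarely written pages stay in PCM to avoid useless migrations. */
static void handle_write(struct page *p)
{
    if (p->where == PCM && ++p->pcm_writes >= WRITE_HOT_THRESHOLD) {
        p->where = DRAM;
        p->pcm_writes = 0;
        printf("page %d: write-hot in PCM -> migrated to DRAM\n", p->id);
    }
}

int main(void)
{
    struct page a = {1, PCM, 0};   /* e.g. a page previously resident in PCM */
    struct page b = {2, PCM, 0};

    handle_fault(&a);              /* a returns to DRAM on its next fault   */

    for (int i = 0; i < 5; i++)    /* b becomes write-hot and is migrated   */
        handle_write(&b);

    return 0;
}
```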