• Title/Summary/Keyword: garbage


Experiments on the Effectiveness of an Automatic Insertion of Safe Memory Reuses into ML-like Programs (메모리 재사용 명령어 자동 삽입 변환기의 효과)

  • 이욱세;이광근
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.853-855 / 2004
  • We present experimental results on the effectiveness of a transformer that automatically inserts memory-reuse commands into ML programs. The analysis and transformation processed 1,582 to 29,000 lines per second. By transforming the programs to reuse 3.8% to 88.6% of the total allocated memory, the memory peak was reduced by 0.0% to 71.9%. As a result of reuse, program execution time ranged from 25.4% shorter to 42.9% longer; execution became faster only when garbage collection accounted for a large share of the program's running time.


Design and Implementation of Garbage Collection Based On Embedded Java Virtual Machine (임베디드 자바가상머신을 위한 가비지 콜렉션 설계 및 구현)

  • 백대현;박희상;양희권;이철훈
    • Proceedings of the Korean Information Science Society Conference / 2002.10c / pp.406-408 / 2002
  • One of the most important characteristics of Java is platform independence: a program written in Java can run on any platform equipped with a Java Virtual Machine (JVM), regardless of the operating system, which requires a JVM ported to each platform. Garbage collection, which this paper implements, is a key factor that determines JVM performance, and several algorithms can be used to implement it. This paper explains the stop-and-copy and mark-and-sweep algorithms, and describes the design and implementation of a garbage collector that uses a mark-sweep-compact algorithm, an improvement over mark-and-sweep. (A minimal mark-compact sketch follows this entry.)

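The mark-sweep-compact family of collectors referenced in this abstract can be illustrated with a toy model. The following Java sketch is not the paper's implementation; it is a minimal, hypothetical heap of cells showing a mark phase followed by a sliding compaction that remaps references.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Deque;
import java.util.List;

/** Toy heap illustrating mark-compact collection (illustrative only, not the paper's design). */
class ToyMarkCompactHeap {
    static class Cell {
        boolean marked;
        List<Integer> refs = new ArrayList<>();   // indices of cells this cell references
    }

    Cell[] heap;                                  // a null slot means the slot is free

    ToyMarkCompactHeap(int size) { heap = new Cell[size]; }

    /** Mark phase: depth-first traversal of everything reachable from the root set. */
    void mark(Collection<Integer> roots) {
        Deque<Integer> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Cell c = heap[stack.pop()];
            if (c == null || c.marked) continue;
            c.marked = true;
            stack.addAll(c.refs);
        }
    }

    /** Compact phase: slide live cells to the front of the heap and remap their references. */
    void compact() {
        int[] forward = new int[heap.length];     // forwarding address for each live cell
        int next = 0;
        for (int i = 0; i < heap.length; i++) {
            if (heap[i] != null && heap[i].marked) forward[i] = next++;
        }
        for (Cell c : heap) {                     // live cells only reference live cells
            if (c != null && c.marked) c.refs.replaceAll(r -> forward[r]);
        }
        Cell[] compacted = new Cell[heap.length];
        for (int i = 0; i < heap.length; i++) {
            if (heap[i] != null && heap[i].marked) {
                heap[i].marked = false;           // reset the mark bit for the next cycle
                compacted[forward[i]] = heap[i];
            }
        }
        heap = compacted;                         // unmarked cells are dropped (collected)
    }
}
```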

An Efficient Garbage Collector on Java Platform (자바 플랫폼에서 효율적인 쓰레기 수집기)

  • Lee, Eun-Hwa;Youn, Sung-Dae
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.195-198 / 2004
  • On a Java platform using a generational garbage collection algorithm, we measure garbage collection performance while adjusting the heap size separately for applications with short-lived objects and for applications with long-lived objects, and, with the heap size held constant, we adjust the young-generation size so as to improve both the number of garbage collections and the execution time. (A small heap-tuning measurement sketch follows this entry.)

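As a rough companion to this entry, the sketch below is a hypothetical micro-benchmark (not the paper's workload) that allocates many short-lived objects; running it under different standard HotSpot heap options such as -Xmx and -Xmn makes the effect of young-generation sizing on collection counts and elapsed time visible.

```java
/**
 * Hypothetical probe for young-generation sizing. Example runs (standard HotSpot flags):
 *   java -Xms64m -Xmx64m -Xmn8m  YoungGenProbe
 *   java -Xms64m -Xmx64m -Xmn32m YoungGenProbe
 * Compare elapsed times and, on recent JDKs, the GC log produced with -Xlog:gc.
 */
public class YoungGenProbe {
    public static void main(String[] args) {
        long start = System.nanoTime();
        long checksum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            byte[] shortLived = new byte[128];    // dies immediately: pure young-generation garbage
            checksum += shortLived[0] + shortLived.length;
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("elapsed = " + elapsedMs + " ms, checksum = " + checksum);
    }
}
```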

A wear-leveling improving method by periodic exchanging of cold block areas and hot block areas (Cold 블록 영역과 hot 블록 영역의 주기적 교환을 통한 wear-leveling 향상 기법)

  • Jang, Si-Woong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.05a / pp.175-178 / 2008
  • While read operations on flash memory are fast and unconstrained, flash memory cannot be overwritten in place: when data are updated, the new data must be written to a fresh area. If data are updated frequently, garbage collection, which reclaims space by erasing blocks, must be performed to supply fresh areas. Because the number of erase operations per block is limited by the characteristics of flash memory, every block should be written and erased evenly. However, if data with access locality are handled by a cost-benefit algorithm that separates hot blocks from cold blocks, processing performance is high but wear-leveling is uneven. In this paper, we propose the CB-MG (Cost Benefit between Multi Group) algorithm, in which hot data are allocated to one group and cold data to another, and the roles of the hot group and the cold group are exchanged every period. Experimental results show that CB-MG provides better performance and wear-leveling than CB-S. (A minimal sketch of the periodic group exchange follows this entry.)

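To make the idea of exchanging the roles of the hot and cold groups concrete, here is a minimal Java sketch. All names and the grouping policy are hypothetical simplifications of the CB-MG idea described above, not the authors' code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical sketch of periodic hot/cold group exchange for wear-leveling. */
class HotColdGroupPlacer {
    private Deque<Integer> hotGroup = new ArrayDeque<>();   // blocks currently used for hot data
    private Deque<Integer> coldGroup = new ArrayDeque<>();  // blocks currently used for cold data
    private final long swapPeriod;                          // writes between role exchanges
    private long writesSinceSwap = 0;

    HotColdGroupPlacer(int totalBlocks, long swapPeriod) {
        this.swapPeriod = swapPeriod;
        for (int b = 0; b < totalBlocks; b++) {
            (b < totalBlocks / 2 ? hotGroup : coldGroup).add(b);
        }
    }

    /** Choose a block for a write; swap group roles every swapPeriod writes to even out wear. */
    int placeWrite(boolean dataIsHot) {
        if (++writesSinceSwap >= swapPeriod) {
            Deque<Integer> tmp = hotGroup;                  // the heavily-written group now holds cold data
            hotGroup = coldGroup;
            coldGroup = tmp;
            writesSinceSwap = 0;
        }
        Deque<Integer> group = dataIsHot ? hotGroup : coldGroup;
        int block = group.pollFirst();                      // rotate within the group as well
        group.addLast(block);
        return block;
    }
}
```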

NVM-based Write Amplification Reduction to Avoid Performance Fluctuation of Flash Storage (플래시 스토리지의 성능 지연 방지를 위한 비휘발성램 기반 쓰기 증폭 감소 기법)

  • Lee, Eunji;Jeong, Minseong;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.4 / pp.15-20 / 2016
  • Write amplification is a critical factor that limits the stable performance of flash-based storage systems. To reduce write amplification, this paper presents a new technique that cooperatively manages data in flash storage and nonvolatile memory (NVM). Our scheme basically treats NVM as a cache of flash storage, but allows the original data in flash storage to be invalidated if there is a cached copy in NVM, which can temporarily serve as the original data. This eliminates the copy-out operation for a substantial amount of cached data, thereby enhancing garbage collection efficiency. Experimental results show that the proposed scheme reduces the copy-out overhead of garbage collection by 51.4% and decreases the standard deviation of response time by 35.4% on average.
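
A minimal sketch of the cooperative idea described above, assuming a page-level mapping and hypothetical names: a page cached in NVM can stand in for the flash original, so the flash copy is marked invalid and no longer needs to be copied out during garbage collection.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Hypothetical model of NVM acting as a cache whose copies replace flash originals. */
class NvmCooperativeCache {
    private final Map<Long, byte[]> nvmCache = new HashMap<>();   // page number -> copy held in NVM
    private final Set<Long> invalidInFlash = new HashSet<>();     // flash pages marked invalid

    /** Cache a page in NVM; because NVM is non-volatile, the flash original can be invalidated. */
    void cacheInNvm(long pageNo, byte[] data) {
        nvmCache.put(pageNo, data);
        invalidInFlash.add(pageNo);       // GC can skip this page instead of copying it out
    }

    /** A victim page must be copied out during GC only if flash still holds the valid copy. */
    boolean needsCopyOut(long pageNo) {
        return !invalidInFlash.contains(pageNo);
    }

    /** Read path: prefer the NVM copy when present. */
    byte[] read(long pageNo, java.util.function.LongFunction<byte[]> readFromFlash) {
        byte[] cached = nvmCache.get(pageNo);
        return (cached != null) ? cached : readFromFlash.apply(pageNo);
    }
}
```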

An Efficient Flash Memory B-Tree Supporting Very Cheap Node Updates (플래시 메모리 B-트리를 위한 저비용 노드 갱신 기법)

  • Lim, Seong-Chae
    • The Journal of the Korea Contents Association / v.16 no.8 / pp.706-716 / 2016
  • Because of their efficient space utilization and fast key search times, B-trees have been widely adopted as indexes in HDD-based DBMSs. However, when a B-tree is stored in flash memory, its costly node updates may impair DBMS performance, because random updates to the B-tree's leaf nodes can greatly enlarge the I/O cost of the garbage collection performed by flash storage. To solve this problem, we make all parents of leaf nodes virtual nodes that are not stored physically; instead, they are dynamically generated and buffered, by referring to their child nodes, at the time they are accessed during key searches. By confining node updates and tree reconstruction to a single flash block, the proposed B-tree reduces the I/O costs of garbage collection and update operations in flash. Moreover, it provides better key-search performance than earlier flash-based B-trees. We verify these performance advantages through a mathematical performance model.
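
The key idea, parents of leaf nodes that are never written to flash but rebuilt from their children when needed, can be sketched as follows. This is a hypothetical simplification in Java, not the paper's B-tree.

```java
/** Hypothetical sketch: a parent of leaf nodes reconstructed from its children instead of stored. */
class VirtualParentSketch {
    static class Leaf {
        final int[] keys;                 // sorted keys; leaves are the only physically stored nodes
        Leaf(int... keys) { this.keys = keys; }
        int minKey() { return keys[0]; }
    }

    private final Leaf[] children;
    private final int[] separators;       // derived at access time, never written to flash

    /** Build the virtual parent by scanning the smallest key of each child leaf. */
    VirtualParentSketch(Leaf[] children) {
        this.children = children;
        this.separators = new int[children.length - 1];
        for (int i = 1; i < children.length; i++) {
            separators[i - 1] = children[i].minKey();
        }
    }

    /** Route a key search to the child leaf whose key range may contain it. */
    Leaf childFor(int key) {
        int i = 0;
        while (i < separators.length && key >= separators[i]) i++;
        return children[i];
    }
}
```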

Tuning the Performance of Haskell Parallel Programs Using GC-Tune (GC-Tune을 이용한 Haskell 병렬 프로그램의 성능 조정)

  • Kim, Hwamok;An, Hyungjun;Byun, Sugwoo;Woo, Gyun
    • KIISE Transactions on Computing Practices / v.23 no.8 / pp.459-465 / 2017
  • Although the performance of computer hardware is increasing with the development of manycore technologies, software throughput has not increased proportionally. Functional languages can be a viable alternative for improving the performance of parallel programs, since such languages have inherent parallelism in evaluating pure expressions without side effects. Specifically, Haskell is notably popular for parallel programming because it provides easy-to-use parallel constructs based on monads. However, the scalability of parallel Haskell programs tends to fluctuate as the number of cores increases, and the garbage collector is suspected to be the source of these fluctuations because it affects both the space and the time needed to execute a program. This paper uses the tuning tool GC-Tune to improve scalability. Our experiment was conducted with a parallel plagiarism-detection program, and its scalability improved: the fluctuation range of the speedup was narrowed by 39% compared with the original, untuned execution of the program.

A Monitoring System for Working Environments Using Wireless Sensor Networks (무선 센서 네트워크를 이용한 작업환경 모니터링 시스템)

  • Jung, Sang-Joon;Chung, Youn-Ky
    • Journal of Korea Multimedia Society / v.12 no.10 / pp.1478-1485 / 2009
  • A sensor network composed of a large number of sensors that perform various kinds of sensing is applied in a variety of fields, such as home automation, fire detection, and security. The development of new sensors with appropriate functions and the deployment of networks for suitable applications are being actively pursued. In this paper, we design and implement a system that monitors various factory facilities by deploying a sensor network at a workplace that threatens worker safety. Each sensor node reports sensing data, such as temperature and humidity, to a sink node in order to monitor the facilities, and a server connected to the sink node gathers the data and provides the information through a user interface. In addition, digital data generated at the workplace can be transferred over the sensor network to increase work efficiency. The proposed sensor network improves working conditions: it is deployed at a garbage-collection company to monitor the temperature and humidity of the garbage and to transmit the weights of trucks entering the company. (A minimal node-to-sink reporting sketch follows this entry.)

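As a rough illustration of the node-to-sink reporting described above (the paper's radio stack and message format are not specified here), the following hypothetical Java sketch sends periodic temperature and humidity readings to a sink over UDP.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Hypothetical sensor node that periodically reports readings to a sink node over UDP. */
public class SensorNodeSketch {
    public static void main(String[] args) throws Exception {
        InetAddress sink = InetAddress.getByName("192.168.0.10");   // hypothetical sink address
        int sinkPort = 9000;                                        // hypothetical sink port
        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                String report = String.format("temp=%.1f,hum=%.1f", readTemperature(), readHumidity());
                byte[] payload = report.getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(payload, payload.length, sink, sinkPort));
                Thread.sleep(5_000);                                // report every 5 seconds
            }
        }
    }

    static double readTemperature() { return 23.5; }   // placeholder for a real sensor driver
    static double readHumidity() { return 41.0; }      // placeholder for a real sensor driver
}
```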

General Web Cache Implementation Using NIO (NIO를 이용한 범용 웹 캐시 구현)

  • Lee, Chul-Hui;Shin, Yong-Hyeon
    • Journal of Advanced Navigation Technology / v.20 no.1 / pp.79-85 / 2016
  • In the recent web environment, network traffic has increased rapidly due to mobile devices and social networks such as smartphones and Facebook. In this paper, we improve the web response time of an existing system by using NIO direct buffers and DMA, which mitigates drawbacks of Java such as CPU overhead caused by blocking I/O and garbage collection of buffers. Keys whose data circulate frequently because of priority changes are kept in a hash map for easy handling, and a priority-modification algorithm is applied. Large response data are split and stored in fast direct buffers, which improves performance. Our tests show that the proposed NIO-based method performs much better across many cache-hit and cache-miss situations.
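
The central mechanism here, holding cached response bodies in NIO direct buffers so they live outside the garbage-collected heap and can be written to a channel without extra copies, can be sketched as below. Class and path names are hypothetical, not the paper's code.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Hypothetical cache entry kept in an NIO direct buffer, outside the garbage-collected heap. */
public class DirectBufferCacheEntry {
    private final ByteBuffer cached;

    /** Load a response body from disk into a direct buffer. */
    public DirectBufferCacheEntry(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocateDirect((int) ch.size());
            while (buf.hasRemaining() && ch.read(buf) >= 0) { /* keep filling */ }
            buf.flip();
            cached = buf;
        }
    }

    /** Write the cached bytes to a client channel without copying them back into heap arrays. */
    public void writeTo(WritableByteChannel client) throws IOException {
        ByteBuffer view = cached.duplicate();   // independent position, so one entry can serve many clients
        while (view.hasRemaining()) {
            client.write(view);
        }
    }
}
```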

I/O Scheduler Scheme for User Responsiveness in Mobile Systems (모바일 시스템에서 사용자 반응성을 고려한 입출력 스케줄링 기법)

  • Park, Jong Woo;Yoon, Jun Young;Seo, Dae-Wha
    • KIPS Transactions on Computer and Communication Systems / v.5 no.11 / pp.379-384 / 2016
  • NAND flash storage is widely used in computer systems because it has faster response times, lower power consumption, and larger capacity per unit area than a hard disk. However, the I/O schedulers currently used in operating systems are optimized for the characteristics of hard disks, so a conventional I/O scheduler incurs unnecessary overhead when applied to NAND flash storage. In particular, when write requests arrive intensively, garbage collection is also performed intensively, delaying the processing of subsequent I/O requests. In this paper, we propose a new I/O scheduler that alleviates this intensive garbage collection and is optimized for NAND flash storage. In the performance evaluation, the proposed scheme improves user responsiveness, reducing the average read response time by 1% and the maximum read response time by 78%.
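
To illustrate the general approach of favoring read requests so that user-visible I/O is not delayed behind write bursts and the garbage collection they trigger, here is a minimal, hypothetical Java sketch; the paper's scheduler operates inside the operating system's block layer and is not this data structure.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Hypothetical read-preferring request queue, with a bound that keeps writes from starving. */
class ReadPreferringScheduler {
    static final class Request {
        final boolean isRead;
        final long sector;
        Request(boolean isRead, long sector) { this.isRead = isRead; this.sector = sector; }
    }

    private final Queue<Request> reads = new ArrayDeque<>();
    private final Queue<Request> writes = new ArrayDeque<>();
    private static final int MAX_READ_BATCH = 16;   // dispatch a write after this many reads in a row
    private int readsInARow = 0;

    void submit(Request r) {
        (r.isRead ? reads : writes).add(r);
    }

    /** Dispatch reads first so user-visible requests are not stuck behind write bursts. */
    Request next() {
        if (reads.isEmpty() && writes.isEmpty()) return null;
        boolean takeRead = !reads.isEmpty()
                && (writes.isEmpty() || readsInARow < MAX_READ_BATCH);
        if (takeRead) {
            readsInARow++;
            return reads.poll();
        }
        readsInARow = 0;
        return writes.poll();
    }
}
```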