• Title/Summary/Keyword: 메모리 할당 (Memory Allocation)


Block Allocation Method for Efficiently Managing Temporary Files of Hash Joins on SSDs (SSD상에서 해시조인 임시 파일의 효과적인 관리를 위한 블록 할당 방법)

  • Joontae, Kim;Sangwon, Lee
    • KIPS Transactions on Computer and Communication Systems / v.11 no.12 / pp.429-436 / 2022
  • Temporary files are generated when a hash join is performed on tables larger than available memory. During the join process, each temporary file is deleted sequentially after its I/O operations complete. This paper reveals that the fallocate system call and the trim options related to file deletion significantly impact hash join performance when temporary files are managed on SSDs rather than hard disks. The experiments were conducted on various commercial and research SSDs using PostgreSQL, a representative open-source database. We find that join performance can be improved by up to 3 to 5 times over the default combination, depending on whether the fallocate and trim options are used for temporary files. In addition, we investigate the write amplification and trim command overhead inside the SSD for each combination of the two options.
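
As a rough illustration of the preallocation knob this abstract refers to, the sketch below preallocates a hash-join temporary file with the fallocate system call (via Python's os.posix_fallocate) and then deletes it; whether deletion actually issues trim commands to the SSD depends on filesystem mount options (e.g. discard) rather than the program. The file name, size, and write pattern are illustrative assumptions, not the paper's PostgreSQL setup.

```python
import os
import tempfile

TEMP_SIZE = 256 * 1024 * 1024  # illustrative temp-file size: 256 MiB

def write_temp_batch(data: bytes, preallocate: bool = True) -> str:
    """Write one hash-join spill batch to a temporary file.

    When `preallocate` is True, the file's blocks are reserved up front with
    fallocate(), so the filesystem does not allocate blocks on every append;
    this is the option whose effect on SSDs the paper measures.
    """
    fd, path = tempfile.mkstemp(prefix="hashjoin_batch_")
    try:
        if preallocate and hasattr(os, "posix_fallocate"):  # Linux only
            os.posix_fallocate(fd, 0, TEMP_SIZE)
        os.write(fd, data)   # spill the batch
        os.fsync(fd)
    finally:
        os.close(fd)
    return path

if __name__ == "__main__":
    p = write_temp_batch(b"tuple-bytes" * 1024)
    os.unlink(p)  # deletion may trigger trim, depending on mount options
```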

Low Power Security Architecture for the Internet of Things (사물인터넷을 위한 저전력 보안 아키텍쳐)

  • Yun, Sun-woo;Park, Na-eun;Lee, Il-gu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.199-201 / 2021
  • The Internet of Things (IoT) is a technology that organically connects people and things without constraints of time and space, using communication network technology and sensors, and transmits and receives data in real time. IoT devices used across industrial fields are constrained in storage allocation, such as device size, memory capacity, and data transmission performance, so managing power consumption is important for making effective use of limited battery capacity. Prior research improves power efficiency by lightening the security algorithm of the encryption module, but at the cost of weakened security. In this study, we propose a low-power security architecture that can utilize high-performance security algorithms in the IoT environment. It provides both high security and power efficiency by executing relatively complex security modules in low-power environments only when threat detection, based on inspection results, requires it.
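
A minimal sketch of the gating idea described above: a cheap inspection step runs on every message, and a heavier verification module runs only when the inspection flags a possible threat. The inspection rule, the HMAC-based stand-in for the "heavy" security module, and the key are illustrative assumptions, not the paper's actual detection logic or algorithms.

```python
import hmac
import hashlib

SECRET_KEY = b"device-shared-key"  # illustrative key, not from the paper

def quick_inspect(packet: bytes) -> bool:
    """Lightweight, always-on check: flag packets that look anomalous.
    The size/prefix heuristic here is an assumption for the example."""
    return len(packet) > 512 or not packet.startswith(b"SENSOR")

def heavy_verify(packet: bytes, tag: bytes) -> bool:
    """Relatively expensive security module (HMAC-SHA256 stands in for it),
    executed only when quick_inspect() reports a possible threat."""
    expected = hmac.new(SECRET_KEY, packet, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def handle(packet: bytes, tag: bytes) -> str:
    if not quick_inspect(packet):
        return "accepted (no inspection hit, heavy module skipped to save power)"
    return "accepted" if heavy_verify(packet, tag) else "rejected"

if __name__ == "__main__":
    msg = b"SENSOR:temp=21.5"
    good_tag = hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()
    print(handle(msg, good_tag))
```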

The Item Distribution Method for the Party System in the MMORPG Using the Observer Pattern (Observer 패턴을 적용한 MMORPG의 파티 시스템 아이템 배분 방법)

  • Kim, Tai-Suk;Kim, Shin-Hwan;Kim, Jong-Soo
    • Journal of Korea Multimedia Society / v.10 no.8 / pp.1060-1067 / 2007
  • Developing an MMORPG, one of the most widely played genres of Internet games, requires a variety of design methods. In particular, object-oriented languages such as C++ are used to improve development efficiency, and large-scale games call for design techniques that take full advantage of object-oriented concepts. GoF's design patterns provide various patterns that can be applied when decomposing a software design. Applying the Observer pattern to the design of a party system, which forms communities among game users, makes it easy to add new classes and to maintain the system later. Party play is one of the important systems used to build user communities in MMORPG games, and its main concern is to distribute the experience points and items obtained during party play fairly among users according to each user's level. To implement a party play system with maintainability in mind, this paper proposes a method based on GoF's Observer pattern and shows that the proposed method, which benefits from dynamic memory allocation and virtual method calls, is useful for changing objects at run time, adding new classes, and maintaining the system.
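
The paper applies GoF's Observer pattern in C++; the sketch below restates the idea in Python. The party acts as the subject, party members are observers that attach and detach dynamically, and an acquired item's value is split in proportion to member level. The level-proportional split and class names are illustrative assumptions, not the paper's exact distribution rule.

```python
class PartyMember:
    """Observer: receives a share whenever the party (subject) notifies."""
    def __init__(self, name: str, level: int):
        self.name = name
        self.level = level
        self.reward = 0.0

    def update(self, total_value: float, level_sum: int) -> None:
        # Share is proportional to this member's level (illustrative rule).
        self.reward += total_value * self.level / level_sum


class Party:
    """Subject: maintains observers and notifies them on each item drop."""
    def __init__(self):
        self._members: list[PartyMember] = []

    def attach(self, member: PartyMember) -> None:
        self._members.append(member)

    def detach(self, member: PartyMember) -> None:
        self._members.remove(member)

    def notify_item_drop(self, total_value: float) -> None:
        level_sum = sum(m.level for m in self._members)
        for m in self._members:
            m.update(total_value, level_sum)


if __name__ == "__main__":
    party = Party()
    for name, level in [("tank", 40), ("healer", 30), ("dps", 30)]:
        party.attach(PartyMember(name, level))
    party.notify_item_drop(100.0)  # distribute one drop worth 100
```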

Fast Multi-GPU based 3D Backprojection Method (다중 GPU 기반의 고속 삼차원 역전사 기법)

  • Lee, Byeong-Hun;Lee, Ho;Kye, Hee-Won;Shin, Yeong-Gil
    • Journal of Korea Multimedia Society / v.12 no.2 / pp.209-218 / 2009
  • 3D backprojection is a reconstruction algorithm that generates volume data consisting of tomographic images, recovering spatial information about the original 3D data from hundreds of 2D projections. The computational time of backprojection increases in proportion to the size of the volume data and the number of projection images, since the value of every voxel is calculated from the corresponding pixels of hundreds of projections. To reduce computational time, fast GPU-based 3D backprojection methods have been studied recently and their performance has improved significantly. This paper presents two multi-GPU methods that maximize GPU parallelism and compares their efficiency with respect to both the number of projections and the size of the volume data. The first method allocates half of the volume data to each GPU and generates partial volume data independently using all projections. The second method allocates the full volume data to each GPU, generates incomplete volume data using half of the projections, and acquires the entire volume by merging the incomplete volumes on the CPU. In the experiments, the first method performed better when the entire volume data could be allocated on each GPU; otherwise, the second method was more efficient than the first.
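
To make the difference between the two partitioning strategies concrete, the sketch below emulates them with NumPy on the CPU (no GPU code): it only illustrates how the workload is split and merged, relying on the fact that backprojection accumulates contributions per projection. The two-device count, volume shape, and toy kernel are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

N_GPUS = 2  # illustrative device count

def toy_backproject(volume_slab, projections):
    """Stand-in kernel: accumulate each projection into every slice of the slab."""
    for proj in projections:
        volume_slab += proj  # a real kernel would weight by acquisition geometry
    return volume_slab

def method1_split_volume(projections, vol_shape):
    """Each 'GPU' holds half the volume and sees all projections."""
    slabs = [np.zeros((vol_shape[0] // N_GPUS,) + vol_shape[1:]) for _ in range(N_GPUS)]
    parts = [toy_backproject(s, projections) for s in slabs]
    return np.concatenate(parts, axis=0)  # the halves are independent

def method2_split_projections(projections, vol_shape):
    """Each 'GPU' holds the full volume but only half the projections;
    the incomplete volumes are summed on the 'CPU' side."""
    chunks = np.array_split(projections, N_GPUS)
    partials = [toy_backproject(np.zeros(vol_shape), c) for c in chunks]
    return sum(partials)  # backprojection is additive over projections

if __name__ == "__main__":
    vol_shape = (8, 16, 16)
    projs = [np.ones(vol_shape[1:]) for _ in range(6)]
    v1 = method1_split_volume(projs, vol_shape)
    v2 = method2_split_projections(projs, vol_shape)
    assert np.allclose(v1, v2)  # both strategies reconstruct the same volume
```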

An Efficient TCP Buffer Tuning Algorithm based on Packet Loss Ratio(TBT-PLR) (패킷 손실률에 기반한 효율적인 TCP Buffer Tuning 알고리즘)

  • Yoo Gi-Chul;Kim Dong-kyun
    • The KIPS Transactions:PartC / v.12C no.1 s.97 / pp.121-128 / 2005
  • The existing TCP (Transmission Control Protocol) is known to be unsuitable for networks with a high BDP (Bandwidth-Delay Product) because of fixed, small or large, buffer sizes at the TCP sender and receiver. Thus, several schemes that adjust buffer sizes automatically according to network conditions have been proposed to improve end-to-end TCP throughput. ATBT (Automatic TCP Buffer Tuning) sets the TCP sender's buffer size according to its current congestion window size, but it assumes that the TCP receiver's buffer size is the maximum value defined by the operating system. In DRS (Dynamic Right-Sizing), the TCP receiver estimates the next arrival as twice the amount of TCP data received previously and reserves its buffer size accordingly. However, it is not necessary to reserve exactly twice the buffer size, because of the possibility of TCP segment loss. We propose an efficient TCP buffer tuning technique (TBT-PLR: TCP buffer tuning based on packet loss ratio) that adopts the ATBT mechanism at the TCP sender and the TBT-PLR mechanism at the TCP receiver. To test actual TCP performance, we implemented TBT-PLR by modifying Linux kernel version 2.4.18 and evaluated it against TCP schemes with fixed buffer sizes. As a result, more balanced usage among TCP connections was obtained.
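
The sketch below shows, in user-space Python, one plausible reading of the receiver-side idea: size the receive buffer from the previous RTT's arrival volume, but shrink the DRS-style 2x reservation when the measured packet loss ratio is high. This is an illustrative heuristic under that assumption, not the paper's exact TBT-PLR formula or its kernel implementation.

```python
import socket

def tune_receive_buffer(sock: socket.socket,
                        bytes_last_rtt: int,
                        packet_loss_ratio: float,
                        max_buf: int = 4 * 1024 * 1024) -> int:
    """Set SO_RCVBUF from the previous RTT's arrival volume.

    DRS reserves 2x the previously received amount; here the reservation is
    reduced in proportion to the loss ratio, since lost segments do not
    arrive and need no buffer space (illustrative heuristic).
    """
    target = int(2 * bytes_last_rtt * (1.0 - packet_loss_ratio))
    target = max(64 * 1024, min(target, max_buf))  # clamp to sane bounds
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, target)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print(tune_receive_buffer(s, bytes_last_rtt=500_000, packet_loss_ratio=0.02))
    s.close()
```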

SWOSpark : Spatial Web Object Retrieval System based on Distributed Processing (SWOSpark : 분산 처리 기반 공간 웹 객체 검색 시스템)

  • Yang, Pyoung Woo;Nam, Kwang Woo
    • Journal of KIISE / v.45 no.1 / pp.53-60 / 2018
  • This study describes a spatial web object retrieval system built on Spark, an in-memory distributed processing system. The growth of social networks has created massive amounts of spatial web objects, and retrieving and analyzing such data is difficult with existing spatial web object retrieval systems. Recent distributed processing systems, however, make it possible to analyze and retrieve large amounts of data quickly, which motivates searching large collections of spatial web objects with a distributed processing system. Data is processed in block units, and each block is converted to an RDD and processed in Spark. We propose a system in which the entire spatial region is divided into non-overlapping subregions, one subregion is allocated to one RDD, and each RDD holds a spatial web object index for the data it contains; by partitioning space in this way, the distributed processing system is used efficiently and each subregion can be searched efficiently. In addition, by comparing the QP-tree with the R-tree, we confirm that the proposed system is better suited to searching spatial web objects: the QP-tree builds its index with both spatial and keyword information, while the R-tree builds its index with spatial information only.
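
The plain-Python sketch below illustrates only the partitioning idea: space is divided into non-overlapping grid cells, each cell keeps its own index over the objects it contains, and a query touches only the cells overlapping the query region. In the actual system each cell's data would be one Spark RDD and the per-partition index a QP-tree; the dict-based keyword index, grid size, and coordinate range here are illustrative assumptions.

```python
from collections import defaultdict

GRID = 4  # split [0,1)x[0,1) into a 4x4 set of non-overlapping cells (illustrative)

def cell_of(x: float, y: float) -> int:
    """Map a coordinate to one grid cell; in the proposed system each
    cell's data would become one Spark RDD."""
    return int(y * GRID) * GRID + int(x * GRID)

def build_partition_indexes(objects):
    """objects: iterable of (obj_id, x, y, text).  A dict-based keyword index
    per cell stands in for the paper's per-RDD QP-tree (spatial + textual)."""
    partitions = defaultdict(lambda: defaultdict(list))
    for obj_id, x, y, text in objects:
        cell = cell_of(x, y)
        for word in text.lower().split():
            partitions[cell][word].append((obj_id, x, y))  # keyword -> postings
    return partitions

def search(partitions, x0, y0, x1, y1, keyword):
    """Search only the cells that overlap the query rectangle."""
    hits = []
    for cell, index in partitions.items():
        cx, cy = (cell % GRID) / GRID, (cell // GRID) / GRID
        if cx + 1 / GRID <= x0 or cx >= x1 or cy + 1 / GRID <= y0 or cy >= y1:
            continue  # cell does not overlap the query region
        hits += [(i, x, y) for i, x, y in index.get(keyword, [])
                 if x0 <= x < x1 and y0 <= y < y1]
    return hits

if __name__ == "__main__":
    objs = [(1, 0.1, 0.1, "coffee shop"), (2, 0.8, 0.9, "coffee roastery")]
    parts = build_partition_indexes(objs)
    print(search(parts, 0.0, 0.0, 0.5, 0.5, "coffee"))  # -> [(1, 0.1, 0.1)]
```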

A Real-Time Multiple Circular Buffer Model for Streaming MPEG-4 Media (MPEG-4 미디어 스트리밍에 적합한 실시간형 다중원형버퍼 모델)

  • 신용경;김상욱
    • Journal of KIISE:Computing Practices and Letters / v.9 no.1 / pp.13-24 / 2003
  • MPEG-4 is a standard for multimedia applications that provides a set of technologies to satisfy the needs of authors, service providers, and end users alike. In this paper, we propose a Real-time Multiple Circular Buffer (M4RM buffer) model suitable for streaming MPEG-4 content efficiently. The M4RM buffer creates a buffer structure for each object composing an MPEG-4 content, according to the transferred information, and handles multiple read/write operations by reference only. It divides the decoder buffer and the composition buffer described in the standard into frame-sized units to minimize the range of access, and these frame units are allocated according to the object description. It also handles object synchronization within the buffer and provides APIs for efficient buffer management and for processing real-time user events. The performance evaluation shows that the M4RM buffer model decreases the waiting time in a buffer frame and thus allows real-time streaming of MPEG-4 content with a smaller memory block than IM1-2D and Windows Media Player.
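
A compact sketch of the per-object circular-buffer idea: one fixed ring of frame-sized slots per MPEG-4 object, with the writer (delivery/decoder side) and the reader (composition side) advancing independently. The slot counts, object names, and blocking policy are illustrative assumptions, not the M4RM buffer's actual APIs.

```python
import threading

class FrameRingBuffer:
    """One circular buffer per MPEG-4 object, allocated in frame units."""
    def __init__(self, frames: int):
        self._slots = [None] * frames
        self._read = 0
        self._write = 0
        self._count = 0
        self._cond = threading.Condition()

    def put(self, frame: bytes) -> None:   # producer: a decoded frame arrives
        with self._cond:
            while self._count == len(self._slots):
                self._cond.wait()           # ring full: wait for composition
            self._slots[self._write] = frame
            self._write = (self._write + 1) % len(self._slots)
            self._count += 1
            self._cond.notify_all()

    def get(self) -> bytes:                 # consumer: composition reads a frame
        with self._cond:
            while self._count == 0:
                self._cond.wait()           # ring empty: wait for the producer
            frame = self._slots[self._read]
            self._read = (self._read + 1) % len(self._slots)
            self._count -= 1
            self._cond.notify_all()
            return frame

# one ring per object, sized from the object description (illustrative sizes)
object_buffers = {"video_es": FrameRingBuffer(frames=8),
                  "audio_es": FrameRingBuffer(frames=16)}
```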

A Design of SPI-4.2 Interface Core (SPI-4.2 인터페이스 코어의 설계)

  • 손승일
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.6 / pp.1107-1114 / 2004
  • System Packet Interface Level 4 Phase 2 (SPI-4.2) is an interface for packet and cell transfer between a physical layer (PHY) device and a link layer device, for aggregate bandwidths of OC-192 ATM and Packet Over SONET/SDH (POS) as well as 10 Gbps Ethernet applications. The SPI-4.2 core consists of Tx and Rx modules and supports full-duplex communication. The Tx module writes 64-bit data words and 14-bit header information from the user interface into an asynchronous FIFO and transmits DDR (Double Data Rate) data over the PL4 interface; the Rx module operates in the reverse direction. The Tx and Rx modules are designed to support up to 256 channels and control bandwidth allocation by configuring the calendar memory. Automatic DIP-4 and DIP-2 parity generation and checking are implemented within the core. The core is described in VHDL, designed with the Xilinx ISE 5.1i tool, and simulated with ModelSim 5.6a. It operates at a data rate of 720 Mbps per line, which provides an aggregate bandwidth of 11.52 Gbps. The SPI-4.2 interface core is suited for line cards in gigabit/terabit routers, optical cross-connect switches, and SONET/SDH-based transmission systems.
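
The calendar memory mentioned above is, in essence, a repeating schedule of channel numbers: a channel that appears more often in the calendar receives a larger share of the line bandwidth. The Python sketch below shows only that scheduling idea in software; the weights, burst handling, and absence of status/credit signalling are illustrative simplifications, not the core's VHDL implementation.

```python
def build_calendar(channel_weights: dict) -> list:
    """Expand per-channel weights into a calendar sequence.
    A channel listed twice as often gets roughly twice the bandwidth."""
    calendar = []
    for channel, weight in sorted(channel_weights.items()):
        calendar.extend([channel] * weight)
    return calendar

def schedule_bursts(calendar: list, pending: dict, rounds: int) -> dict:
    """Walk the calendar repeatedly; at each slot, let that channel send one
    burst if it has pending data (a simplified stand-in for the status and
    credit handling of the real PL4 interface)."""
    sent = {ch: 0 for ch in pending}
    for _ in range(rounds):
        for channel in calendar:
            if pending.get(channel, 0) > 0:
                pending[channel] -= 1
                sent[channel] += 1
    return sent

if __name__ == "__main__":
    cal = build_calendar({0: 2, 1: 1, 2: 1})   # channel 0 gets ~half the slots
    print(schedule_bursts(cal, {0: 100, 1: 100, 2: 100}, rounds=10))
```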

Implementation of Multicore-Aware Load Balancing on Clusters through Data Distribution in Chapel (클러스터 상에서 다중 코어 인지 부하 균등화를 위한 Chapel 데이터 분산 구현)

  • Gu, Bon-Gen;Carpenter, Patrick;Yu, Weikuan
    • The KIPS Transactions:PartA / v.19A no.3 / pp.129-138 / 2012
  • In distributed memory architectures like clusters, each node stores a portion of the data, and how data is distributed across nodes influences the performance of such systems. A data distribution scheme is the strategy for distributing data across nodes to realize parallel data processing. For various reasons such as maintenance, scale-up, and upgrades, the performance of the nodes in a cluster often becomes non-identical, and in such clusters a distribution scheme that ignores node performance cannot place data efficiently. In this paper, we propose a new data distribution scheme based on the number of cores in each node, using the core count as the performance factor: each node is allocated an amount of data proportional to its number of cores. We implement our data distribution scheme in the Chapel language. To show that it is effective in reducing the execution time of parallel applications, we implement Mandelbrot set and $\pi$-calculation programs with our data distribution scheme and compare execution times on a cluster. Based on experimental results on clusters of 8-core and 16-core nodes, we demonstrate that data distribution based on the number of cores can reduce the execution times of parallel programs on clusters.
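
The paper's implementation is in Chapel; the short Python sketch below only illustrates the core-proportional arithmetic, splitting an index range across nodes in proportion to their core counts. The node count and core counts in the example are made up for illustration.

```python
def core_proportional_ranges(n_items: int, cores_per_node: list):
    """Return one (start, end) index range per node, with each node's share
    proportional to its core count; any remainder goes to the last node."""
    total_cores = sum(cores_per_node)
    ranges, start = [], 0
    for i, cores in enumerate(cores_per_node):
        share = n_items * cores // total_cores
        end = n_items if i == len(cores_per_node) - 1 else start + share
        ranges.append((start, end))
        start = end
    return ranges

if __name__ == "__main__":
    # e.g. a cluster mixing 8-core and 16-core nodes, as in the experiments
    print(core_proportional_ranges(1000, [8, 8, 16, 16]))
    # -> [(0, 166), (166, 332), (332, 665), (665, 1000)]
```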

Access Frequency Based Selective Buffer Cache Management Strategy For Multimedia News Data (접근 요청 빈도에 기반한 멀티미디어 뉴스 데이터의 선별적 버퍼 캐쉬 관리 전략)

  • Park, Yong-Un;Seo, Won-Il;Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society / v.6 no.9 / pp.2524-2532 / 1999
  • In this paper, we present a new buffer pool management scheme designed for video-type news objects, aimed at building a cost-effective News On Demand storage server that can serve user requests beyond the limits of disk bandwidth. In a News On Demand server, where many user requests for video-type news objects must be serviced within their playback deadlines, the maximum number of concurrent users is limited by the maximum disk bandwidth the server provides. With the proposed buffer cache management scheme, requested data is checked to see whether it is worth caching, based on its average arrival interval and the current disk traffic density. Only admitted news objects are then allowed into the buffer pool, where buffers are allocated on an object basis rather than a block basis. We evaluated the proposed caching algorithm through simulation and show that, by using this caching scheme to support user requests for real-time news data, 30% more requests are served than when serving requests from disks alone, without additional cost.
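
A minimal sketch of the admission test described above: each news object's average inter-arrival time is tracked, and an object is admitted to the object-granularity buffer pool only when it is requested frequently enough and the disks are busy enough for caching to pay off. The thresholds and the exponentially weighted average used here are illustrative assumptions, not the paper's exact criteria.

```python
import time

class SelectiveNewsCache:
    """Admit whole news objects to the buffer pool based on request frequency
    and current disk traffic density (thresholds are illustrative)."""
    def __init__(self, interval_threshold=30.0, disk_busy_threshold=0.7):
        self.interval_threshold = interval_threshold    # seconds between requests
        self.disk_busy_threshold = disk_busy_threshold  # fraction of disk bandwidth in use
        self.last_seen = {}      # object id -> last request time
        self.avg_interval = {}   # object id -> smoothed inter-arrival time
        self.cache = {}          # object id -> cached object data

    def request(self, obj_id, load_from_disk, disk_busy: float):
        now = time.monotonic()
        if obj_id in self.last_seen:
            gap = now - self.last_seen[obj_id]
            prev = self.avg_interval.get(obj_id, gap)
            self.avg_interval[obj_id] = 0.8 * prev + 0.2 * gap  # smooth the interval
        self.last_seen[obj_id] = now

        if obj_id in self.cache:             # buffer hit: no disk bandwidth used
            return self.cache[obj_id]

        data = load_from_disk(obj_id)
        frequent = self.avg_interval.get(obj_id, float("inf")) < self.interval_threshold
        if frequent and disk_busy > self.disk_busy_threshold:
            self.cache[obj_id] = data        # admit the whole object, not blocks
        return data
```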
