• Title/Summary/Keyword: Heterogeneous storage


Energy Efficient and Low-Cost Server Architecture for Hadoop Storage Appliance

  • Choi, Do Young;Oh, Jung Hwan;Kim, Ji Kwang;Lee, Seung Eun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4648-4663
    • /
    • 2020
  • This paper proposes a Lempel-Ziv 4 (LZ4) compression accelerator optimized for scale-out servers in data centers. In order to reduce the CPU load caused by compression, we propose an accelerator solution and implement the accelerator on a Field Programmable Gate Array (FPGA) as heterogeneous computing. The LZ4 compression hardware accelerator is a fully pipelined architecture and applies 16 dictionaries to enhance parallelism for a high-throughput compressor. Our hardware accelerator is based on a 20-stage pipeline and a dictionary architecture highly customized to the LZ4 compression algorithm and to parallel hardware implementation. The proposed dictionary architecture achieves high throughput by comparing input sequences against multiple dictionaries simultaneously rather than against a single dictionary. The experimental results show high throughput from the intensively optimized FPGA implementation. Additionally, we compare our implementation with CPU implementations of LZ4 to provide insights on FPGA-based data centers. The proposed accelerator achieves a compression throughput of 639 MB/s with fine-grained parallelism, suitable for deployment in scale-out servers. This approach enables a low-power Intel Atom processor to realize Hadoop storage along with the compression accelerator.
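
To make the multi-dictionary idea concrete, the following is a minimal Python sketch of LZ4-style match finding in which several hash dictionaries are probed for the current 4-byte sequence and the longest match wins; it models in software what the FPGA probes in parallel. The constants, hash function, and `find_match` helper are illustrative assumptions, not the paper's RTL design.

```python
NUM_DICTS = 16   # the paper's design uses 16 dictionaries in a 20-stage pipeline
MIN_MATCH = 4    # LZ4 encodes matches of at least 4 bytes

def _hash(word: bytes) -> int:
    # Multiplicative hash of a 4-byte sequence (illustrative stand-in for the HW hash).
    return ((int.from_bytes(word, "little") * 2654435761) >> 20) & 0xFFF

def find_match(data: bytes, pos: int, dicts: list[dict]) -> tuple[int, int]:
    """Return (offset, length) of the best previous match, or (0, 0) if none."""
    if pos + MIN_MATCH > len(data):
        return 0, 0
    seq = data[pos:pos + MIN_MATCH]
    h = _hash(seq)
    best_off, best_len = 0, 0
    # Each dictionary holds a slice of previously seen positions; the FPGA
    # probes all of them in the same cycle, here we simply loop over them.
    for d in dicts:
        cand = d.get(h)
        if cand is None or data[cand:cand + MIN_MATCH] != seq:
            continue
        length = MIN_MATCH
        while pos + length < len(data) and data[cand + length] == data[pos + length]:
            length += 1
        if length > best_len:
            best_off, best_len = pos - cand, length
    dicts[pos % NUM_DICTS][h] = pos   # spread history round-robin across dictionaries
    return best_off, best_len

dicts = [dict() for _ in range(NUM_DICTS)]
print(find_match(b"abcdefghabcdefgh", 8, dicts))
```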

Asymmetric data storage management scheme to ensure the safety of big data in multi-cloud environments based on deep learning (딥러닝 기반의 다중 클라우드 환경에서 빅 데이터의 안전성을 보장하기 위한 비대칭 데이터 저장 관리 기법)

  • Jeong, Yoon-Su
    • Journal of Digital Convergence
    • /
    • v.19 no.3
    • /
    • pp.211-216
    • /
    • 2021
  • Information from various heterogeneous devices is steadily increasing in distributed cloud environments, because high-speed networks and high-capacity multimedia data are in use. However, research is still underway on how to minimize information errors in the big data sent and received by heterogeneous devices. In this paper, we propose a deep learning-based asymmetric storage management technique for minimizing the network bandwidth and data errors caused by information sent and received in cloud environments. The proposed technique applies deep learning to optimize the load balance after asymmetrically hashing the big data generated by each device. The proposed technique is characterized by tolerating errors in the big data collected from each device while ensuring the connectivity of the big data by grouping it into a number of cluster groups. In particular, the proposed technique minimizes information errors when storing and managing big data asymmetrically because it uses a loss function that extracts similar values between big data as seeds.
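
The abstract describes the mechanism only at a high level; purely as an illustrative sketch (not the authors' model), the Python below shows one way to asymmetrically hash device data into cluster groups and then balance load within a group. The group count, node layout, and `place_block` helper are hypothetical, and the least-loaded choice stands in for the paper's deep-learning load optimizer.

```python
import hashlib

# Hypothetical sketch: hash-based placement of device data into cluster groups,
# followed by a simple load-balance step inside the chosen group.

NUM_GROUPS = 4   # assumed number of cluster groups
clusters = {g: {"nodes": [f"g{g}-n{i}" for i in range(3)], "load": [0, 0, 0]}
            for g in range(NUM_GROUPS)}

def place_block(device_id: str, block: bytes) -> str:
    """Choose a storage node: group by hash of (device, data), node by load."""
    digest = hashlib.sha256(device_id.encode() + block[:64]).digest()
    group = digest[0] % NUM_GROUPS        # grouping keeps related data connected
    loads = clusters[group]["load"]
    node_idx = loads.index(min(loads))    # stand-in for the learned load balancer
    loads[node_idx] += len(block)
    return clusters[group]["nodes"][node_idx]

print(place_block("sensor-17", b"example payload" * 100))
```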

A Priority Based Transmission Control Scheme Considering Remaining Energy for Body Sensor Network

  • Encarnacion, Nico;Yang, Hyunho
    • Smart Media Journal
    • /
    • v.4 no.1
    • /
    • pp.25-32
    • /
    • 2015
  • Powering wireless sensors with energy harvested from the environment is coming of age, owing to the increasing power densities of both storage and harvesting devices and of the electronics performing energy-efficient energy conversion. In order to maximize the functionality of the wireless sensor network, minimize missing packets, minimize latency, and prevent the waste of energy, problems like congestion and inefficient energy usage must be addressed. Many sleep-wake protocols and efficient message priority techniques have been developed to properly manage the energy of the nodes and to minimize congestion. For a WSN operating in a strictly energy-constrained environment, an energy-efficient transmission strategy is necessary. In this paper, we present a novel transmission priority decision scheme for a heterogeneous body sensor network composed of normal nodes and an energy-harvesting node that acts as a cluster head. The energy-harvesting node's decision whether or not to clear a normal node for sending is based on a set of metrics that includes the energy-harvesting node's remaining energy, the total harvested energy, the type of message in a normal node's queue, and finally the implementation context of the wireless sensor network.
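
As a hedged illustration of the clearance decision described above (not the authors' exact rule), the sketch below combines the cluster head's remaining energy, its harvested energy, and the queued message's type into a simple threshold test; the thresholds, priority levels, and field names are assumptions.

```python
from dataclasses import dataclass

# Illustrative decision rule: the energy-harvesting cluster head decides whether
# to clear a normal node for sending. Thresholds and priorities are assumed.

@dataclass
class ClusterHeadState:
    remaining_energy_j: float   # energy left in the storage element
    harvested_energy_j: float   # total energy harvested so far

PRIORITY = {"emergency": 3, "periodic_vitals": 2, "routine": 1}

def clear_to_send(head: ClusterHeadState, msg_type: str,
                  low_energy_j: float = 0.5) -> bool:
    """Grant transmission when the energy budget allows it or the message is critical."""
    budget = head.remaining_energy_j + 0.1 * head.harvested_energy_j
    if PRIORITY.get(msg_type, 1) >= 3:   # emergency messages are always cleared
        return True
    return budget > low_energy_j         # otherwise require an energy margin

head = ClusterHeadState(remaining_energy_j=0.4, harvested_energy_j=2.0)
print(clear_to_send(head, "routine"), clear_to_send(head, "emergency"))
```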

Improving the Quality of Response Surface Analysis of an Experiment for Coffee-supplemented Milk Beverage: II. Heterogeneous Third-order Models and Multi-response Optimization

  • Rheem, Sungsue;Rheem, Insoo;Oh, Sejong
    • Food Science of Animal Resources
    • /
    • v.39 no.2
    • /
    • pp.222-228
    • /
    • 2019
  • This research was motivated by our encounter with a situation in which an optimization was performed based on statistically non-significant models having poor fits. Such a situation took place in research to optimize manufacturing conditions for improving the storage stability of a coffee-supplemented milk beverage by using response surface methodology, where the two responses are $Y_1$=particle size and $Y_2$=zeta-potential, the two factors are $F_1$=speed of primary homogenization (rpm) and $F_2$=concentration of emulsifier (%), and the optimization objective is to simultaneously minimize $Y_1$ and maximize $Y_2$. In practice, the second-order polynomial model is used almost exclusively for response surface analysis. However, there exist cases in which the second-order model fails to provide a good fit, and remedies for such cases are seldom known to researchers. Thus, as an alternative to a failed second-order model, we present the heterogeneous third-order model, which can be used when the experimental plan is a two-factor central composite design having -1, 0, and 1 as the coded levels of the factors. For multi-response optimization, we suggest a modified desirability function technique. Using these two methods, we have obtained statistical models with improved fits and multi-response optimization results with predictions better than those in the previous research. Our predicted optimum combination of conditions is ($F_1$, $F_2$)=(5,000, 0.295), which is different from the previous combination. This research is expected to help improve the quality of response surface analysis in experimental sciences, including food science of animal resources.
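
A hedged sketch of the model class follows (the paper's exact parameterization may differ). Because the coded factor levels are -1, 0, and 1, the pure cubic terms $F_i^3$ are aliased with the linear terms $F_i$, so a heterogeneous third-order model augments the second-order model with only the mixed third-order terms: $Y = \beta_0 + \beta_1F_1 + \beta_2F_2 + \beta_{11}F_1^2 + \beta_{22}F_2^2 + \beta_{12}F_1F_2 + \beta_{112}F_1^2F_2 + \beta_{122}F_1F_2^2 + \varepsilon$. For the multi-response step, a Derringer-Suich-style overall desirability $D = \left(d_1(\hat{Y}_1)\,d_2(\hat{Y}_2)\right)^{1/2}$ with $0 \le d_i \le 1$ is maximized, where $d_1$ decreases in $Y_1$ (particle size, to be minimized) and $d_2$ increases in $Y_2$ (zeta-potential, to be maximized); the paper's modified desirability technique refines this baseline form.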

Real-time Task Aware Memory Allocation Techniques for Heterogeneous Mobile Multitasking Environments (이종 모바일 멀티태스킹 환경을 위한 실시간 작업 인지형 메모리 할당 기술 연구)

  • Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.3
    • /
    • pp.43-48
    • /
    • 2022
  • Recently, due to the rapid performance improvement of smartphones and the increase in background execution of mobile apps, multitasking has become common on mobile platforms. Unlike traditional desktop and server apps, response time is important in most mobile apps because they are interactive tasks, and some apps are classified as real-time tasks with deadlines. In this paper, we discuss how to meet the requirements of heterogeneous multitasking when managing the memory of real-time and interactive tasks executed together on a smartphone. To do so, we analyze the memory requirements of real-time tasks and propose a model that allocates memory to multitasking workloads on a smartphone. Trace-driven simulations with real-world storage access traces captured from heterogeneous apps show that the proposed model provides reasonable performance for interactive tasks while guaranteeing the requirements of real-time tasks.
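
As a hedged sketch of the allocation idea (not the paper's model), the Python below first reserves the analyzed memory requirement of each real-time task and then shares the remaining memory among interactive tasks in proportion to their demands; the task fields and the proportional policy are assumptions.

```python
from dataclasses import dataclass

# Hypothetical allocation sketch: guarantee real-time tasks their analyzed
# memory requirement, then split what is left among interactive tasks.

@dataclass
class Task:
    name: str
    required_mb: int   # analyzed requirement (deadline-driven for real-time tasks)
    realtime: bool

def allocate(total_mb: int, tasks: list[Task]) -> dict[str, int]:
    alloc: dict[str, int] = {}
    rt = [t for t in tasks if t.realtime]
    it = [t for t in tasks if not t.realtime]
    for t in rt:                                   # real-time tasks are served first
        alloc[t.name] = t.required_mb
    remaining = total_mb - sum(alloc.values())
    if remaining < 0:
        raise RuntimeError("cannot guarantee real-time memory requirements")
    demand = sum(t.required_mb for t in it) or 1
    for t in it:                                   # share the rest proportionally
        alloc[t.name] = remaining * t.required_mb // demand
    return alloc

tasks = [Task("navigation", 512, True), Task("browser", 900, False),
         Task("messenger", 300, False)]
print(allocate(2048, tasks))
```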

A Study on The Conversion Factor between Heterogeneous DBMS for Cloud Migration

  • Joonyoung Ahn;Kijung Ryu;Changik Oh;Taekryong Han;Heewon Kim;Dongho Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2450-2463
    • /
    • 2024
  • Many legacy information systems are currently being migrated to the cloud. This is due to the advantage of being able to respond flexibly to changes in user needs and the system environment while reducing the initial investment cost of IT infrastructure such as servers and storage. The infrastructure of an information system migrated to the cloud is integrated through API connections, while internally it is subdivided using MSA (Micro Service Architecture). The DBMS (Database Management System) also grows larger after cloud migration. The scale of most layers of the application architecture can be measured and calculated from an auto-scaling perspective, but no standardized methodology has been established for hardware scale calculation of the DBMS. If there is an error in the hardware scale calculation of the DBMS, problems such as poor performance of the information system or excessive auto-scaling may occur. In addition, evaluating hardware size is all the more crucial because it also affects the financial cost of the migration. The CPU is the factor that has the greatest influence on the hardware scale calculation of a DBMS. Therefore, this paper aims to calculate a conversion factor for CPU scale calculation that facilitates cloud migration between heterogeneous DBMSs. To do so, we utilize the concepts and definitions of hardware capacity planning and scale calculation in on-premise information systems. Methods to calculate the conversion factor using TPC-H tests are proposed and verified. In the future, further research and testing should be conducted on segmented CPU sizes and on more heterogeneous DBMSs to demonstrate the effectiveness of the proposed test model.
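
To make the conversion-factor idea concrete, here is a hedged sketch rather than the paper's formula: if the source and target DBMS run the same TPC-H workload on comparable hardware, the ratio of their measured throughputs can serve as a CPU conversion factor that scales the source system's core count. The variable names, the simple ratio, and the headroom multiplier are assumptions.

```python
import math

# Hypothetical CPU conversion-factor sketch based on TPC-H measurements.
# qphh = Queries-per-Hour result for the same scale factor on comparable hardware.

def conversion_factor(source_qphh: float, target_qphh: float) -> float:
    """How many target-DBMS CPU units are needed per source-DBMS CPU unit."""
    return source_qphh / target_qphh

def target_cpu_cores(source_cores: int, factor: float, headroom: float = 1.2) -> int:
    """Scale the on-premise core count by the factor plus a sizing headroom."""
    return math.ceil(source_cores * factor * headroom)

f = conversion_factor(source_qphh=120_000, target_qphh=90_000)
print(f, target_cpu_cores(source_cores=16, factor=f))
```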

A Novel Memory Hierarchy for Flash Memory Based Storage Systems

  • Yim, Keun-Soo
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.5 no.4
    • /
    • pp.262-269
    • /
    • 2005
  • Semiconductor scientists and engineers ideally desire non-volatile memory devices that are both faster and cheaper. In practice, no single device satisfies this desire because a faster device is expensive and a cheaper one is slow. Therefore, in this paper, we use heterogeneous non-volatile memories and construct an efficient hierarchy for them. First, a small RAM device (e.g., MRAM, FRAM, or PRAM) is used as a write buffer for flash memory devices. Since the buffer is faster and does not require an erase operation, writes can be completed quickly in the buffer, keeping write latency short. Also, if a write is requested to data already stored in the buffer, the write is processed directly in the buffer, saving one write operation to flash storage. Second, we use several types of flash memory (e.g., SLC and MLC flash memories) in order to reduce the overall storage cost. Specifically, write requests are classified into two types, hot and cold, where hot data is likely to be modified in the near future. Only hot data is stored in the faster SLC flash, while cold data is kept in slower MLC flash or NOR flash. The evaluation results show that the proposed hierarchy is effective at improving the access time of flash memory storage in a cost-effective manner, thanks to the locality of memory accesses.
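
A minimal software model of the hierarchy described above is sketched below; the device names, buffer size, and the rewrite-count hot/cold test are illustrative assumptions. Writes land first in a small non-volatile RAM buffer, buffer hits absorb rewrites, and evicted pages are routed to SLC flash if classified hot and to MLC flash otherwise.

```python
from collections import OrderedDict

# Illustrative model: NVRAM write buffer in front of SLC (fast, expensive) and
# MLC (slow, cheap) flash. The hot/cold classifier is a simple rewrite counter.

BUFFER_PAGES = 64
HOT_THRESHOLD = 2

buffer = OrderedDict()   # page -> data, kept in LRU order
rewrite_count = {}       # page -> number of rewrites observed
slc, mlc = {}, {}        # stand-ins for the two flash devices

def write(page: int, data: bytes) -> None:
    if page in buffer:                  # buffer hit: absorb rewrite, no flash write
        buffer.move_to_end(page)
    rewrite_count[page] = rewrite_count.get(page, 0) + 1
    buffer[page] = data
    if len(buffer) > BUFFER_PAGES:
        victim, vdata = buffer.popitem(last=False)      # evict least recently used
        if rewrite_count.get(victim, 0) >= HOT_THRESHOLD:
            slc[victim] = vdata                         # hot data -> fast SLC flash
        else:
            mlc[victim] = vdata                         # cold data -> cheap MLC flash

for i in range(200):
    write(i % 80, b"x")   # repeatedly rewritten pages tend to end up classified hot
print(len(slc), len(mlc))
```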

Robust and Auditable Secure Data Access Control in Clouds

  • KARPAGADEEPA.S;VIJAYAKUMAR.P
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.5
    • /
    • pp.95-102
    • /
    • 2024
  • In cloud computing, searchable encryption over auditable data is an active research field. However, most existing systems for encrypted search and auditing over outsourced cloud data disregard personalized search intent. Cloud storage access control is essential for the security of the stored information, where data security is enforced only for the encrypted content. This is less secure because an intruder may attempt to extract the encrypted records or information. To address this issue we implement CBC (Cipher Block Chaining), which XORs each plaintext block with the ciphertext block that was previously produced. We propose a novel heterogeneous framework to address the single-point performance bottleneck and provide a more efficient access control scheme with an auditing mechanism. Meanwhile, in our scheme a CA (Central Authority) is introduced to generate secret keys for legitimacy-verified users. Unlike other multi-authority access control schemes, each of the authorities in our scheme manages the entire attribute set independently. Keywords: Cloud storage, Access control, Auditing, CBC.
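
Because the abstract's description of CBC is dense, here is a hedged Python sketch of the chaining step it refers to: each plaintext block is XORed with the previously produced ciphertext block before encryption, and the first block uses an initialization vector. The `toy_block_cipher` is a deliberately weak stand-in for a real block cipher such as AES, and the zero-byte padding is for illustration only.

```python
import os

BLOCK = 16  # bytes per block

def toy_block_cipher(block: bytes, key: bytes) -> bytes:
    """Placeholder keyed transform; a real system would use AES here."""
    return bytes(b ^ k for b, k in zip(block, key))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    # Pad with zero bytes to a multiple of the block size (illustration only).
    if len(plaintext) % BLOCK:
        plaintext += b"\x00" * (BLOCK - len(plaintext) % BLOCK)
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = xor(plaintext[i:i + BLOCK], prev)   # chain with previous ciphertext
        prev = toy_block_cipher(block, key)
        out += prev
    return out

key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
print(cbc_encrypt(b"auditable cloud record", key, iv).hex())
```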

Impact Source Location on Composite CNG Storage Tank Using Acoustic Emission Energy Based Signal Mapping Method (음향방출 에너지 기반 손상 위치표정 기법을 이용한 복합재 CNG 탱크의 충격 신호 위치표정)

  • Han, Byeong-Hee;Yoon, Dong-Jin;Park, Chun-Soo;Lee, Young-Shin
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.36 no.5
    • /
    • pp.391-398
    • /
    • 2016
  • Acoustic emission (AE) is one of the most powerful techniques for detecting damage and identifying damage locations during operation. However, the conventional source location technique has limitations, because it depends strongly on the wave speed in the structure, which is problematic for heterogeneous composite materials. A compressed natural gas (CNG) pressure vessel is usually overwrapped with carbon fiber composite on the outside of the vessel for strengthening. In this type of composite material, locating impact damage sources exactly using the conventional time-of-arrival method is difficult. To overcome this limitation, this study applied the previously developed contour D/B map technique to four types of CNG storage tanks to identify the source location of damage caused by external impact. The source location results for the different tank types were compared.
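
The contour D/B map technique is only named in the abstract; purely as an illustrative sketch (not the authors' algorithm), the Python below pre-stores per-sensor AE energy signatures measured at known grid points on the tank during calibration and locates a new impact at the grid point whose stored signature is closest to the measured one. The grid, sensor count, and energy values are made up.

```python
import math

# Hypothetical energy-signature lookup. The database maps grid coordinates on
# the tank surface to the relative AE energy seen by each sensor when that
# point was excited during calibration; the values below are invented.

contour_db = {
    (0, 0):   [1.00, 0.42, 0.18, 0.30],
    (0, 50):  [0.55, 0.95, 0.25, 0.20],
    (50, 0):  [0.35, 0.22, 0.98, 0.40],
    (50, 50): [0.20, 0.30, 0.45, 0.97],
}

def locate(measured: list[float]) -> tuple[int, int]:
    """Return the calibrated grid point whose energy pattern best matches."""
    total = sum(measured) or 1.0
    norm = [e / total for e in measured]
    def distance(signature: list[float]) -> float:
        s = sum(signature) or 1.0
        return math.dist(norm, [e / s for e in signature])
    return min(contour_db, key=lambda pt: distance(contour_db[pt]))

print(locate([0.48, 0.90, 0.22, 0.19]))   # expected to map near (0, 50)
```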

WWCLOCK: Page Replacement Algorithm Considering Asymmetric I/O Cost of Flash Memory (WWCLOCK: 플래시 메모리의 비대칭적 입출력 비용을 고려한 페이지 교체 알고리즘)

  • Park, Jun-Seok;Lee, Eun-Ji;Seo, Hyun-Min;Koh, Kern
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.12
    • /
    • pp.913-917
    • /
    • 2009
  • Flash memories have asymmetric I/O costs for reads and writes in terms of latency and energy consumption, and the ratio of these costs depends on the type of storage. Moreover, it is becoming more common to use two flash memories in one system, as an internal memory and an external memory card. For these reasons, buffer cache replacement algorithms should consider the I/O costs of each device as well as the likelihood of reference. This paper presents the WWCLOCK (Write-Weighted CLOCK) algorithm, which directly uses the I/O costs of devices along with the recency and frequency of cache blocks to select a victim to evict from the buffer cache. WWCLOCK can be used for a wide range of storage devices with different I/O costs and for systems that use two or more memory devices at the same time. In addition, it has low time and space complexity, comparable to the CLOCK algorithm. Trace-driven simulations show that the proposed algorithm reduces the total I/O time compared with LRU by 36.2% on average.
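
As a hedged sketch of the idea rather than the published algorithm, the CLOCK variant below gives each cached block a recency bit plus extra clock passes proportional to the write cost of its backing device when the block is dirty, so expensive-to-flush blocks survive longer; frequency is omitted for brevity, and the credit rule is an assumption.

```python
from dataclasses import dataclass

# Illustrative CLOCK variant that factors device write cost into victim selection.

@dataclass
class Block:
    key: str
    ref: bool = True    # recency bit, set on every access
    dirty: bool = False
    chances: int = 0    # extra clock passes granted by the device's write cost

class WWClockLike:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks: list[Block] = []
        self.hand = 0

    def access(self, key: str, write: bool = False, write_cost: int = 1) -> None:
        for b in self.blocks:
            if b.key == key:                      # cache hit: refresh recency
                b.ref = True
                if write:
                    b.dirty = True
                    b.chances = max(b.chances, write_cost)
                return
        if len(self.blocks) >= self.capacity:     # cache full: run the clock hand
            self._evict()
        self.blocks.append(Block(key, dirty=write,
                                 chances=write_cost if write else 0))

    def _evict(self) -> None:
        while True:
            i = self.hand % len(self.blocks)
            b = self.blocks[i]
            if not b.ref and b.chances == 0:
                # A dirty victim would be written back to its device here.
                self.blocks.pop(i)
                return
            if b.ref:
                b.ref = False       # first pass: clear the recency bit
            else:
                b.chances -= 1      # later passes: spend a write-cost credit
            self.hand += 1

cache = WWClockLike(capacity=3)
for key, wr in [("a", False), ("b", True), ("c", False), ("a", False), ("d", False)]:
    cache.access(key, write=wr, write_cost=4)   # writes cost ~4x reads on this device
```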