• Title/Summary/Keyword: in-memory computing

Analysis of Linear Time-Invariant Sparse Network and its Computer Programming (sparse 행렬을 이용한 저항 회로망의 해석과 전산프로그래밍)

  • 차균현
    • Journal of the Korean Institute of Telematics and Electronics / v.11 no.2 / pp.1-4 / 1974
  • Matrix inversion is very inefficient for computing direct solutions of the large sparse systems of linear equations that arise in many network problems. This paper describes computer programming techniques for taking advantage of the sparsity of the admittance matrix. With this method, direct solutions are computed from the sparse matrix, and it is possible to gain a significant reduction in computing time, memory, and round-off error. Details of the method, numerical examples, and programming are given. (A short sparse-solver sketch follows this entry.)

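The core idea above, solving the sparse admittance system directly rather than inverting it, can be illustrated with a minimal modern sketch. The example below is not the paper's 1974 program; it assumes SciPy's sparse LU factorization and a small, made-up 4-node admittance matrix purely for illustration.

```python
# A minimal sketch (not the 1974 paper's code): solve the sparse nodal
# admittance system Y*v = i directly with a sparse LU factorization,
# instead of forming the dense inverse of Y.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Hypothetical 4-node resistive-network admittance matrix.
Y = csc_matrix(np.array([
    [ 3.0, -1.0, -2.0,  0.0],
    [-1.0,  4.0,  0.0, -3.0],
    [-2.0,  0.0,  5.0, -1.0],
    [ 0.0, -3.0, -1.0,  6.0],
]))
i = np.array([1.0, 0.0, 0.0, -1.0])   # injected node currents

lu = splu(Y)          # sparse LU keeps fill-in (and memory) low
v = lu.solve(i)       # node voltages; no explicit inverse is ever formed
print(v)
```

An explicit inverse of Y would generally be dense even when Y is sparse, which is exactly the waste of time, memory, and numerical accuracy the paper's programming techniques avoid.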

Finite Element Stress Analysis of Coil Springs using a Multi-level Substructuring Method I : Spring Super Element (다단계 부분구조법을 이용한 코일스프링의 유한 요소 응력해석 I : 스프링 슈퍼요소)

  • Kim, Jin-Young;Huh, Hoon
    • Transactions of the Korean Society of Automotive Engineers / v.8 no.2 / pp.138-150 / 2000
  • This study is concerned with computerized multi-level substructuring methods and stress analysis of coil springs. The purpose of substructuring methods is to reduce computing time and the required computer memory by multi-level reduction of the degrees of freedom in large problems modeled with three-dimensional continuum finite elements. In this paper, a super element has been developed for stress analysis of coil springs. The developed spring super element has been verified with tension and torsion simulations of cylindrical bars. The results show that the super element enhances computing efficiency without affecting the accuracy of the results, and that it is ready for application to coil spring analysis. (A static-condensation sketch follows this entry.)

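The "super element" in this entry rests on condensing interior degrees of freedom onto boundary ones. The sketch below shows generic single-level static condensation with made-up numbers; it is only assumed to reflect the basic algebra behind the paper's multi-level reduction, not the authors' actual formulation.

```python
# Generic static-condensation sketch (assumed to mirror the basic algebra
# behind a "super element"; values and partitioning are illustrative).
import numpy as np

# Full stiffness system partitioned into boundary (b) and interior (i) DOFs:
# [K_bb K_bi] [u_b]   [f_b]
# [K_ib K_ii] [u_i] = [f_i]
K = np.array([[10.0, -2.0, -1.0],
              [-2.0,  8.0, -3.0],
              [-1.0, -3.0,  6.0]])
f = np.array([1.0, 0.0, 0.5])
b = [0, 1]          # boundary DOFs kept in the super element
i = [2]             # interior DOFs condensed out

K_bb, K_bi = K[np.ix_(b, b)], K[np.ix_(b, i)]
K_ib, K_ii = K[np.ix_(i, b)], K[np.ix_(i, i)]

# Condensed (super element) stiffness and load:
K_super = K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)
f_super = f[b] - K_bi @ np.linalg.solve(K_ii, f[i])

u_b = np.linalg.solve(K_super, f_super)          # boundary displacements
u_i = np.linalg.solve(K_ii, f[i] - K_ib @ u_b)   # recovered interior DOFs
print(u_b, u_i)
```

Applying the same condensation recursively to groups of super elements gives the kind of multi-level reduction the paper describes.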

FlaSim: A FTL Emulator using Linux Kernel Modules (FlaSim: 리눅스 커널 모듈을 이용한 FTL 에뮬레이터)

  • Choe, Hwa-Young;Kim, Sang-Hyun;Lee, Seoung-Won;Park, Sang-Won
    • Journal of KIISE:Computing Practices and Letters / v.15 no.11 / pp.836-840 / 2009
  • Many researchers have studied flash memory as a replacement for hard disk storage. Many FTL algorithms have been proposed to overcome physical constraints of flash memory such as erase-before-write, wear leveling, and poor write performance. Therefore, these constraints should be considered when testing FTL algorithms and evaluating the performance of flash memory. In doing such experiments, we suffer from several problems with cost and experimental configuration. When we, for example, replay Oracle traces to evaluate I/O performance with flash memory, it is hard to extract exact traces of I/O operations in Oracle: since there are only write operations in the log, it is impossible to gather read operations. In MySQL and SQLite, we can gather the read operations by changing I/O functions in the source code, but it is not easy to find the exact I/O points, and even if we find them, we might get wrong results depending on how we modify the source code to collect the traces. FlaSim, proposed in this paper, removes these difficulties in evaluating the performance of FTL algorithms and flash memory. Our Linux drivers emulate flash memory as a hard disk, and we can easily obtain usage statistics of the flash memory such as the number of write, read, and erase operations. FlaSim can be gracefully extended with additional modules implementing novel algorithms and ideas. In this paper, we describe the structure of the FTL emulator, development tools, and operating methods. We expect this emulator to be helpful for many experiments and research with flash memory.
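
FlaSim itself is a Linux kernel module; the toy model below is only a user-space illustration of the statistics such an emulator gathers (page reads and writes, block erases) and of the erase-before-write constraint. All names and sizes are hypothetical.

```python
# Toy user-space model (not FlaSim's kernel module) of the statistics an
# FTL emulator would gather: page reads/writes and block erases, with the
# erase-before-write constraint made explicit. Names are illustrative.
PAGES_PER_BLOCK = 4

class ToyFTL:
    def __init__(self, num_blocks):
        self.free = [(b, p) for b in range(num_blocks)
                            for p in range(PAGES_PER_BLOCK)]
        self.mapping = {}                       # logical page -> physical page
        self.stats = {"read": 0, "write": 0, "erase": 0}

    def write(self, lpn):
        # Flash pages cannot be overwritten in place: every update goes to a
        # fresh page, and the old page stays invalid until garbage collection.
        if not self.free:
            self._erase_one_block()
        self.mapping[lpn] = self.free.pop(0)
        self.stats["write"] += 1

    def read(self, lpn):
        self.stats["read"] += 1
        return self.mapping.get(lpn)

    def _erase_one_block(self):
        # Grossly simplified garbage collection: reclaim block 0's pages.
        self.stats["erase"] += 1
        self.free.extend((0, p) for p in range(PAGES_PER_BLOCK))

ftl = ToyFTL(num_blocks=2)
for lpn in [0, 1, 0, 2, 3, 0, 1, 4, 5]:
    ftl.write(lpn)
print(ftl.stats)        # e.g. {'read': 0, 'write': 9, 'erase': 1}
```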

Advanced Resource Management with Access Control for Multitenant Hadoop

  • Won, Heesun;Nguyen, Minh Chau;Gil, Myeong-Seon;Moon, Yang-Sae
    • Journal of Communications and Networks / v.17 no.6 / pp.592-601 / 2015
  • Multitenancy has gained growing importance with the development and evolution of cloud computing technology. In a multitenant environment, multiple tenants with different demands can share a variety of computing resources (e.g., CPU, memory, storage, network, and data) within a single system, while each tenant remains logically isolated. This useful multitenancy concept offers highly efficient and cost-effective systems, without wasting computing resources, to enterprises requiring similar environments for data processing and management. In this paper, we propose a novel approach supporting multitenancy features for Apache Hadoop, a large-scale distributed system commonly used for processing big data. We first analyze the Hadoop framework focusing on "yet another resource negotiator (YARN)", which is responsible for managing resources, application runtime, and access control in the latest version of Hadoop. We then define the problems for supporting multitenancy and formally derive the requirements to solve these problems. Based on these requirements, we design the details of multitenant Hadoop. We also present experimental results to validate the data access control and to evaluate the performance enhancement of multitenant Hadoop.
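
As a rough illustration of the two concerns the paper combines, per-tenant resource isolation and access control, the sketch below models tenant queues with fixed capacity shares. It is not the paper's design and not YARN's API; the tenant names, shares, and users are hypothetical.

```python
# Illustrative multitenancy sketch only: each tenant gets a capacity share
# of the cluster plus a simple user-level access check before admission.
CLUSTER_MEMORY_MB = 64_000

tenants = {
    # tenant -> capacity share of the cluster, users allowed to submit
    "sales":    {"share": 0.50, "users": {"alice"}, "used_mb": 0},
    "research": {"share": 0.50, "users": {"bob"},   "used_mb": 0},
}

def submit(tenant, user, requested_mb):
    t = tenants[tenant]
    if user not in t["users"]:                    # access control / isolation
        return f"denied: {user} may not use tenant '{tenant}'"
    limit = t["share"] * CLUSTER_MEMORY_MB        # resource isolation
    if t["used_mb"] + requested_mb > limit:
        return f"rejected: tenant '{tenant}' over its {limit:.0f} MB share"
    t["used_mb"] += requested_mb
    return f"accepted: {requested_mb} MB for {user}@{tenant}"

print(submit("sales", "alice", 20_000))
print(submit("sales", "mallory", 1_000))
print(submit("research", "bob", 40_000))
```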

Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing (병렬 연산을 이용한 방출 단층 영상의 재구성 속도향상 기초연구)

  • Park, Min-Jae;Lee, Jae-Sung;Kim, Soo-Mee;Kang, Ji-Yeon;Lee, Dong-Soo;Park, Kwang-Suk
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.443-450 / 2009
  • Purpose: Conventional image reconstruction uses simplified physical models of projection. However, realistic physics, for example in 3D reconstruction, takes too long to process all the data in clinical practice and is infeasible on a common reconstruction machine because of the large memory required by complex physical models. We suggest a realistic distributed-memory model of fast reconstruction using parallel processing on personal computers to enable such large-scale techniques. Materials and Methods: Preliminary feasibility tests on virtual machines and various performance tests on the commercial supercomputer Tachyon were performed. The expectation maximization algorithm was tested with a common 2D projector and a realistic 3D line-of-response projector. Since processing slowed down (by up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Results: Parallel processing of a program on multiple computers was available on Linux with MPICH and NFS. We verified that differences between the parallel-processed image and the single-processed image at the same iterations were below the significant digits of floating-point numbers, about 6 bits. Two processors showed good parallel-computing efficiency (1.96 times). The delay phenomenon was solved by vectorization using SSE. Conclusion: Through this study, a realistic parallel computing system for clinical use was established, able to reconstruct with plenty of memory using realistic physical models that could not be simplified.
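
The reconstruction algorithm named in the abstract is expectation maximization (MLEM). The sketch below shows the multiplicative MLEM update on a tiny dense toy problem; the authors' implementation distributes the projection data across nodes with MPICH, which is omitted here, and a random matrix stands in for a real 2D/3D projector.

```python
# Minimal MLEM sketch (illustrative; not the authors' MPICH implementation).
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((32, 16))          # system matrix: 32 projection bins x 16 voxels
x_true = rng.random(16)
y = A @ x_true                    # noiseless "measured" projections

x = np.ones(16)                   # uniform initial image
sens = A.T @ np.ones(32)          # sensitivity image A^T 1
for _ in range(50):
    ratio = y / (A @ x + 1e-12)   # forward project, compare with data
    x *= (A.T @ ratio) / sens     # back project and apply multiplicative update

print(np.max(np.abs(x - x_true)))  # reconstruction error shrinks with iterations
```

Each iteration forward-projects the current image, compares it with the measured projections, and back-projects the ratio; that per-projection work is what the paper parallelizes across processors.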

An Analysis of Utilization on Virtualized Computing Resource for Hadoop and HBase based Big Data Processing Applications (Hadoop과 HBase 기반의 빅 데이터 처리 응용을 위한 가상 컴퓨팅 자원 이용률 분석)

  • Cho, Nayun;Ku, Mino;Kim, Baul;Xuhua, Rui;Min, Dugki
    • Journal of Information Technology and Architecture / v.11 no.4 / pp.449-462 / 2014
  • In the big data era, there are a number of important components in processing systems for capturing, storing, and analyzing stored or streaming data. Unlike traditional data handling systems, a big data processing system needs to consider the characteristics (format, velocity, and volume) of the data being handled. In this situation, a virtualized computing platform is an emerging platform for handling big data effectively, since virtualization technology enables computing resources to be managed dynamically and elastically with minimum effort. In this paper, we analyze the utilization of virtualized computing resources to discover suitable deployment models in an Apache Hadoop and HBase-based big data processing environment. Consequently, the Task Tracker service shows high CPU utilization and high disk I/O overhead during MapReduce phases. Moreover, the HRegion service indicates high network resource consumption for transferring data from DataNode to Task Tracker. DataNode shows high memory utilization and disk I/O overhead for reading stored data.
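
The analysis above is driven by per-VM utilization measurements. A sketch of that kind of sampling is shown below; it assumes the third-party psutil package, which the paper does not mention, and is only meant to indicate which counters (CPU, memory, disk I/O, network I/O) the reported observations correspond to.

```python
# Sketch of per-VM utilization sampling (assumed tooling, not the authors').
import psutil

def sample(interval_s=1.0):
    disk0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=interval_s)      # blocks for interval_s
    disk1, net1 = psutil.disk_io_counters(), psutil.net_io_counters()
    return {
        "cpu_percent": cpu,
        "mem_percent": psutil.virtual_memory().percent,
        "disk_read_bytes": disk1.read_bytes - disk0.read_bytes,
        "disk_write_bytes": disk1.write_bytes - disk0.write_bytes,
        "net_bytes": (net1.bytes_sent - net0.bytes_sent)
                   + (net1.bytes_recv - net0.bytes_recv),
    }

for _ in range(3):                 # e.g. sample while a MapReduce phase runs
    print(sample())
```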

Scalable RDFS Reasoning Using the Graph Structure of In-Memory based Parallel Computing (인메모리 기반 병렬 컴퓨팅 그래프 구조를 이용한 대용량 RDFS 추론)

  • Jeon, MyungJoong;So, ChiSeoung;Jagvaral, Batselem;Kim, KangPil;Kim, Jin;Hong, JinYoung;Park, YoungTack
    • Journal of KIISE / v.42 no.8 / pp.998-1009 / 2015
  • In recent years, there has been growing interest in RDFS inference to build rich knowledge bases. However, it is difficult to improve inference performance on large data using a single machine. Therefore, researchers are investigating RDFS inference engines for distributed computing environments. However, the existing inference engines cannot process data in real time, are difficult to implement, and are vulnerable to repetitive tasks. In order to overcome these problems, we propose a method to construct an in-memory distributed inference engine that uses a parallel graph structure. In general, an ontology based on a triple structure possesses a graph structure, so it is intuitive to design a graph-structure-based inference engine. Moreover, the RDFS inference rules can be implemented with the operators of the graph structure, and we can thus design the inference engine according to the graph structure rather than the structure of a data table. In this study, we evaluate the proposed inference engine using the LUBM1000 and LUBM3000 data to test inference speed. The results of our experiment indicate that the proposed in-memory distributed inference engine achieved about 10 times faster performance than an in-storage inference engine.
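
The RDFS rules such an engine implements can be written as simple joins over triples. The single-machine sketch below computes two representative rules (rdfs11, transitivity of subClassOf, and rdfs9, type inheritance) to a fixed point; the paper runs this kind of rule on an in-memory parallel graph structure, which this toy example does not attempt to reproduce, and the example triples are made up.

```python
# Fixed-point evaluation of two RDFS rules over a tiny set of triples
# (single-machine sketch only; illustrates the rule logic, not the engine).
SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

triples = {
    ("ex:GraduateStudent", SUBCLASS, "ex:Student"),
    ("ex:Student", SUBCLASS, "ex:Person"),
    ("ex:alice", TYPE, "ex:GraduateStudent"),
}

def rdfs_closure(triples):
    triples = set(triples)
    while True:
        sub = {(s, o) for s, p, o in triples if p == SUBCLASS}
        typ = {(s, o) for s, p, o in triples if p == TYPE}
        new = set()
        # rdfs11: subClassOf is transitive
        new |= {(a, SUBCLASS, c) for a, b in sub for b2, c in sub if b == b2}
        # rdfs9: instances inherit the types of superclasses
        new |= {(x, TYPE, d) for x, c in typ for c2, d in sub if c == c2}
        if new <= triples:          # nothing new was derived: fixed point
            return triples
        triples |= new

for t in sorted(rdfs_closure(triples)):
    print(t)
```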

A Dynamic Reconfiguration Method using Application-level Checkpointing in a Grid Computing Environment with Cactus and Globus (Cactus와 Globus에 기반한 그리드 컴퓨팅 환경에서의 응용프로그램 수준의 체크포인팅을 사용한 동적 재구성 기법)

  • Kim Young Gyun;Oh Gil-ho;Cho Kum Won;Na Jeoung-Su
    • Journal of KIISE:Computing Practices and Letters / v.11 no.6 / pp.465-476 / 2005
  • In this paper, we propose a new dynamic reconfiguration method using application-level checkpointing in a grid computing environment with Cactus and Globus. Existing dynamic reconfiguration methods have depended on specific hardware and operating systems. The proposed method performs dynamic reconfiguration without requiring specific hardware or operating systems, and the application is programmed without considering dynamic reconfiguration. In the proposed method, the job starts with an initial configuration of computing resources and restarts to include new resources found dynamically at run time. The method decides whether to include newly found idle sites by considering their processor performance and available memory. Our method writes the intermediate results of the job to disk using system-independent application-level checkpointing, which also enables real-time visualization while the job runs. After reconfiguring with the newly found idle sites and processors, the job resumes from the checkpoint files. The proposed dynamic reconfiguration method is shown to be valid by decreasing the total execution time in K*Grid.
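
Application-level checkpointing, as opposed to system-level checkpointing, means the application writes its own state in a machine-independent form so the job can resume on a reconfigured set of resources. The sketch below illustrates only that pattern; the file name, state layout, and JSON format are assumptions, not Cactus's checkpoint mechanism.

```python
# Application-level checkpointing sketch (hypothetical names and format):
# the solver periodically writes its own state so a restart can resume on a
# reconfigured set of nodes instead of starting over.
import json
import os

CKPT = "checkpoint.json"          # hypothetical checkpoint file name

def run(total_steps=100, ckpt_every=10):
    # Resume from the last checkpoint if one exists (e.g. after the job was
    # stopped to add newly discovered idle sites), otherwise start fresh.
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            state = json.load(f)
    else:
        state = {"step": 0, "value": 0.0}

    while state["step"] < total_steps:
        state["value"] += 0.5     # stand-in for one iteration of real work
        state["step"] += 1
        if state["step"] % ckpt_every == 0:
            tmp = CKPT + ".tmp"   # write-then-rename keeps the file consistent
            with open(tmp, "w") as f:
                json.dump(state, f)
            os.replace(tmp, CKPT)
    return state

print(run())
```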

Practical Offloading Methods and Cost Models for Mobile Cloud Computing (모바일 클라우드 컴퓨팅을 위한 실용적인 오프로딩 기법 및 비용 모델)

  • Park, Min Gyun;Zhe, Piao Zhen;La, Hyun Jung;Kim, Soo Dong
    • Journal of Internet Computing and Services / v.14 no.2 / pp.73-85 / 2013
  • As a way of augmenting the constrained resources of mobile devices such as CPU and memory, many works on mobile cloud computing (MCC), where mobile devices utilize the remote resources of cloud services or PCs, have been proposed. A typical approach to resolving resource problems of mobile nodes in MCC is to offload functional components to other resource-rich nodes. However, most current works do not consider the dynamically changing characteristics of MCC environments and propose offloading mechanisms only at a conceptual level. In this paper, in order to ensure the performance of highly complex mobile applications, we propose four different types of offloading mechanisms that can be applied to diverse MCC situations. The proposed offloading mechanisms are designed practically so that they can be implemented with current technologies. Moreover, we define cost models to derive the most suitable situation for applying each offloading mechanism and prove the performance enhancement through offloading in a quantitative manner.
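
The basic trade-off behind any offloading cost model is whether shipping the state and computing remotely beats computing locally. The paper defines its own four mechanism-specific models; the sketch below is only the generic form of that comparison, with all parameter values hypothetical.

```python
# Generic offloading cost model sketch (not the paper's four specific models).
def should_offload(cycles, data_bytes, mobile_hz, cloud_hz, bandwidth_bps):
    t_local = cycles / mobile_hz                   # run on the device
    t_remote = (data_bytes * 8 / bandwidth_bps     # ship the state up
                + cycles / cloud_hz)               # then run remotely
    return t_remote < t_local, t_local, t_remote

# Example: 2 Gcycles of work, 1 MB of state, 1 GHz phone, 8 GHz cloud share,
# 20 Mbit/s uplink (all numbers hypothetical).
offload, t_l, t_r = should_offload(2e9, 1e6, 1e9, 8e9, 20e6)
print(offload, round(t_l, 3), round(t_r, 3))
```

With these numbers offloading wins (0.65 s remotely versus 2.0 s locally); a slower link or a smaller computation would flip the decision, which is why such models are evaluated per situation.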

The study on a threat countermeasure of mobile cloud services (모바일 클라우드 서비스의 보안위협 대응 방안 연구)

  • Jang, Eun-Young;Kim, Hyung-Jong;Park, Choon-Sik;Kim, Joo-Young;Lee, Jae-Il
    • Journal of the Korea Institute of Information Security & Cryptology / v.21 no.1 / pp.177-186 / 2011
  • Mobile services that combine PC-level performance with mobile characteristics have increased with the spread of smartphones. Recently, mobile cloud services are getting the spotlight as a solution to the problems of mobile services: mobile devices lack memory, computing power, and storage, and mobile services are tied to a particular mobile device platform. However, mobile cloud services face more potential security threats because they inherit the threats of mobile services, wireless networks, and cloud computing services. Therefore, the security threats of mobile cloud services have to be removed in order to deploy secure mobile cloud services, and users and managers should be able to respond appropriately in the event of a threat. In this paper, we define mobile cloud service threats through a threat analysis of mobile devices, wireless networks, and cloud computing, and we propose countermeasures and threat scenarios in order to respond to and predict potential mobile cloud service threats.