• Title/Summary/Keyword: computing speed


Trends in High Speed Fabric-Interconnect-Based Memory Centric Computing Architecture (고속 패브릭 연결망 기반 메모리 중심 컴퓨팅 기술 동향)

  • S.-J. Cha;S.-W. Sok;H.J. Kwon;Y.W. Kim;J. Kim;H.Y. Kim;K.-W. Koh;K.-H. Kim
    • Electronics and Telecommunications Trends / v.39 no.5 / pp.98-107 / 2024
  • Applications such as artificial intelligence continue to grow in complexity and scale, so the demand for scalable computing that delivers faster data processing and improved efficiency is increasing. This requirement has led to the development of memory-centric computing and high-speed fabric interconnect technologies. Memory-centric computing reduces latency and enhances system performance by shifting the focus from the central processing unit to the memory, whereas high-speed fabric interconnects enable efficient data transfer across various computing resources. Technologies such as Gen-Z, OpenCAPI, and CCIX have been integrated into the CXL (Compute Express Link) standard since 2019 to improve communication and cache coherence. Network interconnects such as RoCE (RDMA over Converged Ethernet), InfiniBand, and OmniXtend also play a crucial role in providing high-speed data transfer and low latency. We explore the latest trends and prospects of these technologies, highlighting their benefits and applications.

Hybrid in-memory storage for cloud infrastructure

  • Kim, Dae Won;Kim, Sun Wook;Oh, Soo Cheol
    • Journal of Internet Computing and Services / v.22 no.5 / pp.57-67 / 2021
  • Modern cloud computing is rapidly shifting from traditional hypervisor-based virtual machines to container-based cloud-native environments. Because of the I/O performance demands of both virtual machines and containers, the use of high-speed storage (SSD, NVMe, etc.) is increasing, and in-memory computing using main memory is also emerging. Running a virtual environment in main memory gives better performance than other storage arrays. However, the RAM used as main memory is expensive, and because it is volatile, data is lost when the system goes down. Additional work is therefore required to run a virtual environment safely in main memory. In this paper, we propose a hybrid in-memory storage that combines block storage, such as a high-speed SSD, with main memory to safely operate virtual machines and containers in main memory. The proposed storage achieved 6 times faster writes and 42 times faster reads than regular disks for virtual machines, and improved container performance by an average of 12% in our tests.
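
The write-back idea this abstract describes can be sketched in a few lines. This is a toy illustration only, not the paper's storage driver; the `HybridStore` class and its JSON backing file are invented for demonstration. Reads are served from main memory, while every write also lands on block storage so data survives a crash:

```python
import json
import os

class HybridStore:
    """Toy hybrid store: reads hit an in-memory dict (the RAM tier), and
    every write is also persisted to a backing file (the SSD tier) so the
    data survives a restart."""

    def __init__(self, backing_path):
        self.backing_path = backing_path
        self.cache = {}
        if os.path.exists(backing_path):
            with open(backing_path) as f:
                self.cache = json.load(f)  # warm the RAM tier on startup

    def put(self, key, value):
        self.cache[key] = value            # fast path: main memory
        with open(self.backing_path, "w") as f:
            json.dump(self.cache, f)       # durability: block storage

    def get(self, key):
        return self.cache.get(key)         # always served from memory
```

A real implementation would flush asynchronously and at block granularity rather than rewriting the whole file on every `put`.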

Analysis of Open Source Edge Computing Platforms: Architecture, Features, and Comparison (오픈 소스 엣지 컴퓨팅 플랫폼 분석: 구조, 특징, 비교)

  • Lim, Huhnkuk;Lee, Heejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.8 / pp.985-992 / 2020
  • Edge computing is a technology that prepares for a new era of cloud computing. Rather than processing and computing data in a remote data center, edge computing achieves low-latency, high-speed computing by adding computing and data-processing power at the edge, close to an access point such as a terminal device or gateway. Types of edge computing include mobile edge computing, fog computing, and cloudlet computing. In this article, we describe existing open source platforms for implementing edge computing nodes. By presenting and comparing the structure and features of open source edge platforms, we provide the knowledge that industrial engineers need to select the best edge platform when building an edge node with an actual open source edge computing platform.

Real time simulation using multiple DSPs for fossil power plants (병렬처리를 이용한 화력발전소의 실시간 시뮬레이션)

  • 박희준;김병국
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회:학술대회논문집) / 1997.10a / pp.480-483 / 1997
  • A fossil power plant can be modeled by many algebraic and differential equations. When a large, complicated fossil power plant is simulated on a computer such as a workstation or PC, solving the full set of equations takes considerable time, so new processing systems with high computing speed are needed to develop real-time simulators. The vital requirements of a real-time simulator are accuracy, computing speed, and deadline observance. In this paper, we present an enhanced strategy that provides powerful computing capability through parallel processing on DSP processors with communication links. We designed general-purpose DSP modules and a VME interface module. Because the DSP modules are general purpose, the parallel system can be expanded simply by connecting new DSP modules. Additionally, we propose methods for downloading programs and initial data to each DSP module via the VME bus and DPRAM, and describe the processing sequences for computing and updating values between the DSP modules and the CPU30 board while the simulator is running.

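
The compute-and-exchange cycle described in the abstract above — each DSP module integrating its own block of equations, then publishing results through the DPRAM — can be sketched schematically. Everything below (the forward-Euler integration, the round-robin partition, all names) is an assumption for illustration, not the paper's simulator code:

```python
def step_module(state, indices, deriv, dt):
    # each module advances only the states assigned to it (forward Euler)
    return {i: state[i] + dt * deriv(i, state) for i in indices}

def simulate(n_states, n_modules, deriv, dt, steps):
    state = {i: 1.0 for i in range(n_states)}
    # round-robin partition of the state equations over the DSP modules
    partitions = [range(m, n_states, n_modules) for m in range(n_modules)]
    for _ in range(steps):
        updates = {}
        for part in partitions:        # would run concurrently on the DSPs
            updates.update(step_module(state, part, deriv, dt))
        state = updates                # exchange phase: publish all results
    return state
```

On real hardware the loop over partitions runs concurrently on the DSP modules, with the `updates` table playing the role of the shared DPRAM; deadline observance then depends on the slowest module per step.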

Analysis of Applying the Mobile BIM Application based on Cloud Computing (클라우드 컴퓨팅 기반의 모바일 BIM 애플리케이션 적용성 분석)

  • Jun, Jin-Woo;Lee, Sang-Heon;Eom, Shin-Jo
    • Korean Journal of Computational Design and Engineering / v.17 no.5 / pp.342-352 / 2012
  • As a futuristic construction model, building information model (BIM) based project management systems (PMIS) and mobile BIM simulator apps have shown visible progress. However, research on BIM-based 3D simulators for mobile devices is hard to find because of the limitations of mobile devices (slow handling of huge 3D files, display size, etc.) and the lack of standard business processes. Therefore, this research studies the applicability of mobile BIM apps based on cloud computing. Eight BIM cloud apps were selected and analyzed on five applicability criteria (speed, view, inquiry, markup, and usability). This research is an essential step toward building a BIM-based mobile project management system using cloud computing in the future.

Optimal Design of a Direct-Driven PM Wind Generator Aimed at Maximum AEP using Coupled FEA and Parallel Computing GA

  • Jung, Ho-Chang;Lee, Cheol-Gyun;Hahn, Sung-Chin;Jung, Sang-Yong
    • Journal of Electrical Engineering and Technology / v.3 no.4 / pp.552-558 / 2008
  • Optimal design of a direct-driven permanent magnet (PM) wind generator, combining finite element analysis (FEA) and a genetic algorithm (GA), has been performed to maximize the annual energy production (AEP) over the entire wind-speed range characterized by a statistical model of the wind-speed distribution. In particular, the proposed parallel computing via an Internet web service has contributed to reducing the excessive computing time of the optimization.
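
The optimization loop can be sketched as follows. A Weibull wind-speed model is a standard choice for AEP weighting, but the power-curve stand-in, the GA parameters, and all names below are invented for illustration; the expensive `fitness` calls correspond to the FEA evaluations that the paper farms out to parallel workers:

```python
import math
import random

def weibull_pdf(v, k=2.0, c=8.0):
    # statistical wind-speed model (Weibull) used to weight the AEP sum
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def aep(design, speeds):
    # toy stand-in for the FEA-computed power curve: output falls off as
    # the design parameter moves away from a wind speed (illustrative only)
    return sum(weibull_pdf(v) * max(0.0, 1.0 - (design - v) ** 2 / 100.0)
               for v in speeds)

def ga_maximize(fitness, lo, hi, pop=30, gens=40, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        # the fitness calls are the costly FEA evaluations; these are what
        # the paper evaluates in parallel over an Internet web service
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop // 2]
        children = [min(hi, max(lo, rng.choice(parents) + rng.gauss(0.0, 0.5)))
                    for _ in range(pop - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```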

A Design Database for High Speed IC Package Interconnection (고속 집적회로 패키지 인터커넥션을 위한 설계 데이타베이스)

  • ;;;F. Szidarovszki;O.A.Palusinski
    • Journal of the Korean Institute of Telematics and Electronics A / v.32A no.12 / pp.184-197 / 1995
  • In this paper, high-speed IC package-to-package interconnections are modeled as lossless multiconductor transmission lines operating in the TEM mode, and three mathematical algorithms for computing their electrical parameters are described. A semi-analytic Green's function method is used to compute the per-unit-length capacitance and inductance matrices, a matrix square root algorithm based on the QR algorithm is used to compute the characteristic impedance matrix, and a matrix algorithm based on the theory of M-matrices is used to compute a diagonally matched load impedance matrix. These algorithms are implemented in a computer program, DIME (DIagonally Matched load Impedance Extractor), which computes the electrical parameters of lossless multiconductor transmission lines. To illustrate the concept of a design database for high-speed IC package-to-package interconnection, a database for a multiconductor strip transmission line system is also constructed. The database is built from a sufficiently small number of nodes using a multi-dimensional cubic spline interpolation algorithm; the maximum interpolation error for diagonally matched load impedance matrix extraction from the database is 1.3%.

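
The matrix-square-root step in the abstract above can be illustrated numerically. The paper uses a QR-based algorithm; the Denman-Beavers iteration below is a different, compact stand-in, and the homogeneous-medium example (where L = mu*eps*C^-1, so Zc = sqrt(L C^-1) reduces to (1/v) C^-1) is an assumption chosen so the result can be checked in closed form:

```python
import numpy as np

MU0 = 4e-7 * np.pi        # permeability of free space (H/m)
EPS0 = 8.854187817e-12    # permittivity of free space (F/m)

def sqrtm_db(a, iters=50):
    """Matrix square root via the Denman-Beavers iteration (the paper uses
    a QR-based algorithm; this is just a compact stand-in)."""
    y, z = a.copy(), np.eye(len(a))
    for _ in range(iters):
        y, z = 0.5 * (y + np.linalg.inv(z)), 0.5 * (z + np.linalg.inv(y))
    return y

def char_impedance(L, C):
    # lossless TEM line in a homogeneous medium: Zc = sqrt(L C^-1), with
    # L and C the per-unit-length inductance/capacitance matrices
    return sqrtm_db(L @ np.linalg.inv(C))
```

In the homogeneous TEM case L and C^-1 commute, so the ordering in `L @ inv(C)` is unambiguous; for inhomogeneous lines the formulation is more delicate.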

A Study on a Compensation of Decoded Video Quality and an Enhancement of Encoding Speed

  • Sir, Jaechul;Yoon, Sungkyu;Lim, Younghwan
    • Journal of the Korea Computer Graphics Society / v.6 no.3 / pp.35-40 / 2000
  • There are two problems in H.26x compression: the time taken by the encoding process, and the degradation of decoded video quality caused by high compression rates. Transferring moving pictures in real time requires very high compression, which loses much of the original video data and degrades quality; in particular, blocking artifacts may appear. Blocking artifacts arise because DCT-based coding techniques operate without considering correlation between pixels across block boundaries, producing discontinuities between adjacent blocks. This paper describes methods for compensating the quality of H.26x decoded data and for increasing encoding speed for real-time operation. The goal of the quality compensation is not to make the decoded video identical to the original but to make it look better to the human eye. We suggest an algorithm that reduces blocking artifacts and sharpens the decoded video in the decoder. To increase encoding speed, we adopt a new four-step search algorithm. As the experimental results show, the quality compensation yields better video quality by reducing blocking artifacts, and the new four-step search algorithm with an $MMX^{TM}$ implementation improves encoding speed from 2.5 fps to 17 fps.

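
The coarse-to-fine block matching behind fast search algorithms like the four-step search above can be sketched as follows. This is a simplified step search, not the paper's exact four-step pattern (which fixes a 5x5 checking window in its first steps); the frame construction and block size are invented for illustration:

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block of `cur` at
    (bx, by) and the block of `ref` displaced by (dx, dy)."""
    h, w = ref.shape
    if not (0 <= by + dy <= h - n and 0 <= bx + dx <= w - n):
        return float("inf")            # candidate falls outside the frame
    a = cur[by:by + n, bx:bx + n]
    b = ref[by + dy:by + dy + n, bx + dx:bx + dx + n]
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def step_search(cur, ref, bx, by, n=8, step=4):
    """Coarse-to-fine block matching: test a 3 x 3 pattern of candidates
    around the current best vector, re-centre on the winner, halve step."""
    best = (0, 0)
    while step >= 1:
        cands = [(best[0] + sx, best[1] + sy)
                 for sx in (-step, 0, step) for sy in (-step, 0, step)]
        best = min(cands, key=lambda v: sad(cur, ref, bx, by, v[0], v[1], n))
        step //= 2
    return best
```

The speedup over full search comes from evaluating only a handful of candidates per step instead of every displacement in the search window.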

A Clustered Dwarf Structure to Speed up Queries on Data Cubes

  • Bao, Yubin;Leng, Fangling;Wang, Daling;Yu, Ge
    • Journal of Computing Science and Engineering / v.1 no.2 / pp.195-210 / 2007
  • Dwarf is a highly compressed structure that eliminates semantic redundancies while computing a data cube. Although it achieves a high compression ratio, Dwarf is slower to query and harder to update because of its structural characteristics. Since the original purpose of a data cube is to speed up queries, we propose two novel clustering methods for query optimization: a recursion clustering method, which clusters nodes recursively to speed up point queries, and a hierarchical clustering method, which clusters nodes of the same dimension to speed up range queries. To facilitate implementation, we design a partition strategy and a logical clustering mechanism. Experimental results show that our methods effectively improve query performance on data cubes, and that the recursion clustering method suits both point queries and range queries.
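
The query workload being optimized — point and range queries over a materialized cube — can be sketched with a tiny group-by cube. This shows only the cube semantics, not the Dwarf structure or the proposed clustering; the `'*'` ALL-value convention and the function names are assumptions for demonstration:

```python
from itertools import product

def build_cube(rows):
    """Fully materialized group-by cube: aggregate the measure over every
    combination of dimension values, with '*' as the ALL value."""
    cube = {}
    for *dims, measure in rows:
        for key in product(*[(d, "*") for d in dims]):
            cube[key] = cube.get(key, 0) + measure
    return cube

def point_query(cube, key):
    # a point query is a single cell lookup in the materialized cube
    return cube.get(key, 0)

def range_query(cube, keys):
    # a range query touches many cells, which is why clustering related
    # cells together on disk pays off
    return [point_query(cube, k) for k in keys]
```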

Genome Scale Protein Secondary Structure Prediction Using a Data Distribution on a Grid Computing

  • Cho, Min-Kyu;Lee, Soojin;Jung, Jin-Won;Kim, Jai-Hoon;Lee, Weontae
    • Proceedings of the Korean Biophysical Society Conference / 2003.06a / pp.65-65 / 2003
  • After many genome projects, algorithms and software to process explosively growing biological information have been developed, and high-performance computing equipment is essential for processing such volumes of data. If remote resources such as computing power and storage are shared through a Grid in the Internet environment, data can be processed efficiently at low cost. Here we present the performance improvement of protein secondary structure prediction (PSIPRED) on a Grid platform: protein sequence data are distributed over the Grid, and each compute node analyzes its own part of the sequence data to speed up the structure prediction. On the Grid, genome-scale secondary structure prediction for Mycoplasma genitalium, Escherichia coli, Helicobacter pylori, Saccharomyces cerevisiae, and Caenorhabditis elegans was performed, and the predictions were analyzed statistically to show structural deviation and comparisons between the genomes. The experimental results show that the Grid is a viable platform for speeding up protein structure prediction.

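
The data-distribution scheme in the abstract above — splitting the sequence set across Grid nodes and merging the per-node predictions — can be sketched as follows. The thread pool stands in for Grid nodes, and `predict_stub` is a placeholder for a real PSIPRED run; both are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(seqs, n_nodes):
    # round-robin split so each Grid node receives a similar share
    return [seqs[i::n_nodes] for i in range(n_nodes)]

def predict_stub(seq):
    # placeholder for one PSIPRED run: label every residue as coil ('C')
    return "C" * len(seq)

def run_distributed(seqs, n_nodes=3):
    parts = chunk(seqs, n_nodes)
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        node_results = list(pool.map(
            lambda part: [predict_stub(s) for s in part], parts))
    # interleave the per-node results back into the original sequence order
    merged = [None] * len(seqs)
    for node, part in enumerate(node_results):
        for j, pred in enumerate(part):
            merged[node + j * n_nodes] = pred
    return merged
```

Because each sequence is predicted independently, the problem is embarrassingly parallel and the speedup scales with the number of nodes, minus data-transfer overhead.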