• Title/Summary/Keyword: in-memory computing

Search Results: 766

Assessment of maximum liquefaction distance using soft computing approaches

  • Kishan Kumar; Pijush Samui; Shiva S. Choudhary
    • Geomechanics and Engineering, v.37 no.4, pp.395-418, 2024
  • The epicentral region of an earthquake is typically where liquefaction-related damage takes place. To determine the maximum distance at which an earthquake of a given magnitude can cause liquefaction damage, expressed as the maximum epicentral distance (Re), maximum fault distance (Rf), or maximum hypocentral distance (Rh), this study builds multiple machine learning (ML) models on a recently updated global liquefaction database. Four ML models, LSTM (Long Short-Term Memory), BiLSTM (Bidirectional Long Short-Term Memory), CNN (Convolutional Neural Network), and XGB (Extreme Gradient Boosting), are developed in Python. All four proposed ML models performed better than empirical models for limiting-distance assessment, and among them the XGB model performed best. A number of statistical parameters were studied to determine how well the proposed models predict the limiting distances, and rank analysis, an error matrix, and a Taylor diagram were used to compare their accuracy. The ML models proposed in this paper are more robust than other current models and may be used to assess the minimum energy an earthquake needs to trigger a liquefaction disaster, or to estimate the maximum distance of a liquefied site from a given earthquake for rapid disaster mapping.
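
As an illustration of the kind of regression setup the abstract describes (not the authors' code or data), a minimal sketch of fitting an XGB model to predict a limiting distance from earthquake magnitude could look as follows; the file name and column names ("Mw", "Re") are hypothetical.

```python
# Minimal sketch, assuming a hypothetical CSV with magnitude "Mw" and maximum
# epicentral distance "Re"; not the paper's actual dataset or model settings.
import numpy as np
import pandas as pd
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("liquefaction_database.csv")        # hypothetical file
X = df[["Mw"]].values                                 # earthquake magnitude
y = np.log10(df["Re"].values)                         # log of limiting distance (km)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("Test R2:", r2_score(y_te, model.predict(X_te)))
```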

Load Shedding for Temporal Queries over Data Streams

  • Al-Kateb, Mohammed; Lee, Byung-Suk
    • Journal of Computing Science and Engineering, v.5 no.4, pp.294-304, 2011
  • Enhancing continuous queries over data streams with temporal functions and predicates enriches the expressive power of those queries. While traditional continuous queries retrieve only the values of attributes, temporal continuous queries also retrieve the valid time intervals of those values. Correctly evaluating such queries requires coalescing adjacent timestamps of value-equivalent tuples before evaluating temporal functions and predicates. For many stream applications, the available computing resources may be too limited to produce exact query results; these limitations are commonly addressed through load shedding, which produces approximate query results. Many load shedding mechanisms have been proposed, but for temporal continuous queries the presence of coalescing makes these existing methods unsuitable. In this paper, we propose a new accuracy metric and load shedding algorithm suitable for temporal query processing when memory is insufficient. The accuracy metric combines the Jaccard coefficient, which measures the accuracy of attribute values, with $\mathcal{PQI}$ interval orders, which measure the accuracy of the valid time intervals in the approximate query result. The algorithm employs a greedy strategy combining two objectives that reflect the two accuracy measures (value and interval). In the performance study, the proposed greedy algorithm outperforms a conventional random load shedding algorithm by up to an order of magnitude in achieved accuracy.
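
For reference, the Jaccard coefficient mentioned above is the size of the intersection divided by the size of the union of two sets, so 1.0 means the approximate result matches the exact one. A minimal sketch (not the paper's implementation) comparing an exact and an approximate result set:

```python
# Jaccard coefficient between an exact and an approximate (load-shed) result set.
def jaccard(exact: set, approx: set) -> float:
    if not exact and not approx:
        return 1.0
    return len(exact & approx) / len(exact | approx)

exact_values = {("sensor1", 20), ("sensor2", 21)}    # hypothetical tuples
approx_values = {("sensor1", 20)}                    # one tuple shed
print(jaccard(exact_values, approx_values))          # 0.5
```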

Computation of Wave Propagation over Multi-Step Topography by Partition Matrix Method (분할행렬법에 의한 다중 계단지형에서의 파랑변형 계산)

  • Seo, Seung-Nam
    • KSCE Journal of Civil and Environmental Engineering Research, v.29 no.4B, pp.377-384, 2009
  • In order to significantly reduce the computing time for the large matrix arising in the EFEM for linear wave propagation over ripple beds, each approximated by a multi-step topography, a partition method for calculating reflection coefficients is presented. Using 10 evanescent modes in the model, the most accurate numerical solutions to date have been obtained, and in some cases the computed reflection coefficients behave differently from existing results. However, the computing time and memory that the present partition model requires to solve a large matrix are still demanding, so a more efficient method remains to be developed.
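
The abstract does not spell out the partition method itself; purely as general background on why partitioning a large matrix helps, the following sketch solves a block-partitioned linear system through the Schur complement, so that only smaller blocks are factorized (this is standard linear algebra, not the paper's algorithm):

```python
# Background sketch: solve [[A, B], [C, D]] [x1; x2] = [f1; f2] blockwise
# via the Schur complement S = D - C A^{-1} B, factorizing only small blocks.
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 4, 3
A = rng.normal(size=(n1, n1)) + n1 * np.eye(n1)
B = rng.normal(size=(n1, n2))
C = rng.normal(size=(n2, n1))
D = rng.normal(size=(n2, n2)) + n2 * np.eye(n2)
f1, f2 = rng.normal(size=n1), rng.normal(size=n2)

S = D - C @ np.linalg.solve(A, B)                        # Schur complement
x2 = np.linalg.solve(S, f2 - C @ np.linalg.solve(A, f1))
x1 = np.linalg.solve(A, f1 - B @ x2)

full = np.block([[A, B], [C, D]])
assert np.allclose(full @ np.concatenate([x1, x2]), np.concatenate([f1, f2]))
```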

Column-aware Polarization Scheme for High-Speed Database Systems (고속 데이터베이스 시스템을 위한 컬럼-인지 양분화 기법)

  • Byun, Si-Woo
    • Journal of Internet Computing and Services, v.13 no.3, pp.83-91, 2012
  • Recently, column-oriented storage has become a progressive model for high-speed database systems because of its superior I/O performance. In this paper, we analyze the traditional row-oriented storage model and then propose a new column-aware storage management model that uses a flash memory drive and an assist drive to improve the effective performance of high-speed column-oriented database systems. Our storage management scheme, called column-aware polarization, improves update performance by dividing and compressing table columns into active and inactive columns, and by balancing congested update operations onto an assist drive during high-workload periods. The results of our experimental tests show that the scheme improves the update throughput of column-oriented storage by 19 percent and the response time by up to 49 percent.
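
The abstract does not give the exact polarization rule; purely as an illustration, columns might be split into active and inactive sets by their observed update frequency against a threshold (the column names and threshold below are hypothetical):

```python
# Illustrative sketch only: classify table columns as update-hot ("active") or
# update-cold ("inactive") so hot columns can be routed to the assist drive.
update_counts = {"order_id": 12, "status": 950, "last_modified": 1020, "comment": 3}
THRESHOLD = 100   # hypothetical cutoff on updates per monitoring period

active = {c for c, n in update_counts.items() if n >= THRESHOLD}
inactive = set(update_counts) - active
print("active columns:", sorted(active))      # candidates for the assist drive
print("inactive columns:", sorted(inactive))  # kept compressed on the main drive
```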

Efficient Hardware Implementation of Real-time Rectification using Adaptively Compressed LUT

  • Kim, Jong-hak; Kim, Jae-gon; Oh, Jung-kyun; Kang, Seong-muk; Cho, Jun-Dong
    • JSTS: Journal of Semiconductor Technology and Science, v.16 no.1, pp.44-57, 2016
  • Rectification is used as a preprocessing step to reduce the computational complexity of disparity estimation, but rectification itself also requires complex computation. To minimize this complexity, rectification using a lookup table (R-LUT) has been introduced; however, since the R-LUT consumes a large amount of memory, rectification with a compressed LUT (R-CLUT) was proposed. Yet the more the memory consumption is reduced, the higher the decoding overhead becomes, so an acceptable trade-off between LUT size and decoding overhead must be attained. In this paper, we present such a trade-off by adaptively combining simple coding methods: differential coding, modified run-length coding (MRLE), and Huffman coding. Differential coding transforms the coordinate data into a differential form to further improve coding efficiency, combined with Huffman coding for better stability and MRLE for better performance. Our experimental results verified that the proposed coding scheme yields high performance while maintaining robustness, achieving an average inverse compression ratio about 1% to 16% lower than existing methods. Moreover, it maintains low latency with tolerable hardware overhead for real-time implementation.
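
As background on the first coding stage named above (a sketch under our own assumptions, not the authors' hardware design), differential coding stores only the offsets between consecutive LUT coordinates, which shrinks the values that later entropy-coding stages such as MRLE or Huffman coding must encode:

```python
# Sketch: differential (delta) coding of LUT coordinates before entropy coding.
def delta_encode(coords):
    prev, out = 0, []
    for c in coords:
        out.append(c - prev)   # small deltas compress better than raw coordinates
        prev = c
    return out

def delta_decode(deltas):
    total, out = 0, []
    for d in deltas:
        total += d
        out.append(total)
    return out

xs = [1024, 1026, 1029, 1033, 1038]   # hypothetical rectified x-coordinates
deltas = delta_encode(xs)             # [1024, 2, 3, 4, 5]
assert delta_decode(deltas) == xs
```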

A Novel Method for Virtual Machine Placement Based on Euclidean Distance

  • Liu, Shukun; Jia, Weijia
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.7, pp.2914-2935, 2016
  • With the increasing popularity of cloud computing, how to reduce physical energy consumption and increase resource utilization while maintaining system performance has become a research hotspot for virtual machine deployment on cloud platforms. Although related research has addressed this problem, most of it relied on traditional greedy heuristics and considered the effect of only a single resource dimension (CPU or memory) on energy consumption. Taking multi-dimensional resource utilization into account, this paper analyzes the impact of multi-dimensional resources on the energy consumption of cloud computing and proposes a multi-dimensional resource constraint that maintains normal system operation. A novel virtual machine deployment method (NVMDM) based on improved particle swarm optimization (IPSO) and Euclidean distance is then put forward. It addresses how to generate the initial particle swarm with an improved first-fit algorithm based on resource constraints (IFFABRC), how to measure the credibility of the particles' individual and global optimal solutions using a Bayesian transform, and how to define the fitness function of the particle swarm according to the multi-dimensional resource constraint relationship. The proposed NVMDM is shown to outperform existing heuristic algorithms in exploiting the capacity of physical machines: it effectively improves the utilization of CPU, memory, disk, and bandwidth while keeping users' task execution times within the resource constraints.
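
The exact fitness function is not given in the abstract; as a simple illustration of the Euclidean-distance idea, one can measure how closely a VM's multi-dimensional demand matches a host's remaining capacity (the dimensions and numbers below are hypothetical):

```python
# Illustration only: Euclidean distance between a VM's normalized resource demand
# and a host's normalized free capacity across CPU, memory, disk, and bandwidth.
import numpy as np

def placement_distance(vm_demand, host_free, host_capacity):
    d = np.asarray(vm_demand, dtype=float) / np.asarray(host_capacity, dtype=float)
    f = np.asarray(host_free, dtype=float) / np.asarray(host_capacity, dtype=float)
    return float(np.linalg.norm(f - d))   # smaller = better fit for this host

vm = (2, 4, 50, 100)                      # (cores, GB RAM, GB disk, Mbps)
host_capacity = (16, 64, 1000, 1000)
host_free = (4, 8, 200, 300)
print(placement_distance(vm, host_free, host_capacity))
```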

A 95% accurate EEG-connectome Processor for a Mental Health Monitoring System

  • Kim, Hyunki; Song, Kiseok; Roh, Taehwan; Yoo, Hoi-Jun
    • JSTS: Journal of Semiconductor Technology and Science, v.16 no.4, pp.436-442, 2016
  • An electroencephalogram (EEG)-connectome processor to monitor and diagnose mental health is proposed. From 19-channel EEG signals, the proposed processor determines whether the mental state is healthy or unhealthy by extracting significant features from the EEG signals and classifying them. A connectome approach is adopted for the best diagnosis accuracy, with synchronization likelihood (SL) chosen as the connectome feature. Before computing SL, a reconstruction optimizer (ReOpt) block compensates for some parameters, improving accuracy. During SL calculation, a sparse matrix inscription (SMI) scheme is proposed to reduce the memory size to 1/24. From the calculated SL information, a small-world feature extractor (SWFE) reduces the memory size to 1/29. Finally, using the SLs or small-world features, a radial basis function (RBF) kernel-based support vector machine (SVM) diagnoses the user's mental health condition. For the RBF kernels, look-up tables (LUTs) replace the floating-point operations, decreasing the required operations by 54%. Consequently, the EEG-connectome processor improves the diagnosis accuracy from 89% to 95% in the Alzheimer's disease case. The proposed processor occupies $3.8mm^2$ and consumes 1.71 mW in $0.18{\mu}m$ CMOS technology.
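
The LUT idea for the RBF kernel can be illustrated in software (this is only an analogue of the hardware scheme, not its actual design): the exponential in $K(x,y)=\exp(-\gamma\|x-y\|^2)$ is replaced by a precomputed table lookup.

```python
# Software analogue of a LUT-based RBF kernel: tabulate exp(-t) once and index it.
import numpy as np

GAMMA = 0.5
STEPS, T_MAX = 256, 8.0                      # hypothetical table resolution/range
LUT = np.exp(-np.linspace(0.0, T_MAX, STEPS, endpoint=False))

def rbf_exact(x, y):
    return np.exp(-GAMMA * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def rbf_lut(x, y):
    t = GAMMA * np.sum((np.asarray(x) - np.asarray(y)) ** 2)
    idx = min(int(t / T_MAX * STEPS), STEPS - 1)
    return LUT[idx]                          # table lookup instead of exp()

x, y = [0.2, 0.4, 0.1], [0.3, 0.1, 0.0]
print(rbf_exact(x, y), rbf_lut(x, y))        # LUT value approximates the exact kernel
```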

Extending Caffe for Machine Learning of Large Neural Networks Distributed on GPUs (대규모 신경회로망 분산 GPU 기계 학습을 위한 Caffe 확장)

  • Oh, Jong-soo; Lee, Dongho
    • KIPS Transactions on Computer and Communication Systems, v.7 no.4, pp.99-102, 2018
  • Caffe is neural network training software that is widely used in academic research. GPU memory capacity is one of the most important constraints when designing neural network architectures; for example, many object detection systems must fit in less than 12 GB to run on a single GPU. In this paper, we extended Caffe to allow the use of more than 12 GB of GPU memory. To verify the effectiveness of the extended software, we ran training experiments measuring the learning efficiency of object detection networks on a PC with three GPUs.
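
The abstract does not describe how the memory is distributed across GPUs; as a rough, hypothetical illustration of the 12 GB constraint itself, one can estimate a network's training memory footprint and the number of GPUs it would need (all sizes below are made up):

```python
# Back-of-the-envelope sketch (hypothetical layer sizes, not Caffe's bookkeeping):
# estimate float storage for parameters and activations and the GPUs required.
BYTES_PER_FLOAT = 4
GPU_CAPACITY_GB = 12

blobs = [                                         # (name, number of float values)
    ("conv1_params", 64 * 3 * 7 * 7),
    ("conv1_activations", 32 * 64 * 112 * 112),   # batch size 32
    ("fc_params", 4096 * 4096),
    ("fc_activations", 32 * 4096),
]

total_gb = sum(n for _, n in blobs) * BYTES_PER_FLOAT / 1024**3
train_gb = 2 * total_gb                           # crude factor for gradients/diffs
gpus_needed = int(-(-train_gb // GPU_CAPACITY_GB))   # ceiling division
print(f"~{train_gb:.2f} GB for training; about {gpus_needed} GPU(s) of {GPU_CAPACITY_GB} GB")
```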

Prediction of Significant Wave Height in Korea Strait Using Machine Learning

  • Park, Sung Boo; Shin, Seong Yun; Jung, Kwang Hyo; Lee, Byung Gook
    • Journal of Ocean Engineering and Technology, v.35 no.5, pp.336-346, 2021
  • The prediction of wave conditions is crucial in the field of marine and ocean engineering, so this study aims to predict the significant wave height through machine learning (ML), a soft computing method. The adopted metocean data, collected from 2012 to 2020, were obtained from the Korea Institute of Ocean Science and Technology. We adopted feedforward neural network (FNN) and long short-term memory (LSTM) models to predict the significant wave height, selecting the input parameters for the input layer by Pearson correlation coefficients. To obtain optimized hyperparameters, we conducted a sensitivity study on the window size, number of nodes and layers, and activation function. Finally, the significant wave height was predicted using the FNN and LSTM models while varying three input parameters and three window sizes. FNN (W48) and LSTM (W48), i.e., the models with window size 48, produced the best results, and FNN (W48) was the most suitable model for predicting the significant wave height owing to its accuracy and calculation time. If more metocean data are accumulated, the accuracy of the ML models is expected to improve, which will be beneficial for predicting the added resistance due to waves during sea trial tests.
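
As a minimal sketch of the windowed LSTM setup (not the paper's architecture, data, or hyperparameters), sliding windows of length 48 can be cut from a time series and fed to a small Keras LSTM regressor:

```python
# Sketch: window size 48 over a synthetic series standing in for metocean data.
import numpy as np
import tensorflow as tf

WINDOW = 48
series = np.sin(np.linspace(0, 100, 2000)).astype("float32")   # stand-in signal

X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]                                            # next-step target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print("Training MSE:", model.evaluate(X, y, verbose=0))
```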

Dynamic Subspace Clustering for Online Data Streams (온라인 데이터 스트림에서의 동적 부분 공간 클러스터링 기법)

  • Park, Nam Hun
    • Journal of Digital Convergence, v.20 no.2, pp.217-223, 2022
  • Subspace clustering for online data streams requires a large amount of memory because all subsets of the data dimensions must be examined. In order to track the continuous change of clusters in a data stream within a finite memory space, this paper proposes a grid-based subspace clustering algorithm that uses memory resources effectively. Given an n-dimensional data stream, the distribution of data items in the data space is monitored with a grid-cell list. When the frequency of data items in a cell of the first-level grid-cell list becomes high enough for it to become a unit grid-cell, a grid-cell list for the next level is created as its child node so that clusters in all possible subspaces can be found from that grid-cell. In this way, a grid-cell subspace tree of at most n levels is constructed, and a k-dimensional subspace cluster can be found at the kth level of the tree. Experiments confirmed that the proposed method uses computing resources more efficiently by expanding only the dense regions of the space while maintaining the same accuracy as existing methods.
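
Purely as an illustration of the tree idea (simplified to refine one fixed dimension per level, whereas the paper tracks all possible subspaces), a grid-cell node can count items and spawn a child grid-cell list only once it becomes dense, keeping memory bounded; the threshold and grid resolution below are hypothetical:

```python
# Illustrative sketch only: a grid-cell that expands a child level when dense.
DENSITY_THRESHOLD = 50    # hypothetical frequency needed to become a unit grid-cell
GRID_DIVISIONS = 10       # cells per dimension on [0, 1)

class GridCell:
    def __init__(self, level):
        self.level = level        # depth in the subspace grid-cell tree
        self.count = 0
        self.children = {}        # cell index in the next dimension -> GridCell

    def insert(self, point):
        self.count += 1
        if self.count < DENSITY_THRESHOLD or self.level >= len(point):
            return                # only dense cells are expanded further
        idx = int(point[self.level] * GRID_DIVISIONS)
        self.children.setdefault(idx, GridCell(self.level + 1)).insert(point)

root = GridCell(level=0)
for _ in range(200):
    root.insert((0.15, 0.42, 0.87))    # stand-in for streaming 3-dimensional items
print(len(root.children), "dense cell(s) expanded at the first level")
```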