• Title/Summary/Keyword: Sparse linear system

46 search results

SPLITTING METHOD OF DENSE COLUMNS IN SPARSE LINEAR SYSTEMS AND ITS IMPLEMENTATION

  • Oh, Seyoung; Kwon, Sun Joo
    • Journal of the Chungcheong Mathematical Society, v.10 no.1, pp.147-159, 1997
  • It is important to solve efficiently the large sparse linear systems, such as $AA^Ty=\beta$, that appear in many application fields. In solving this linear system, a sparse solver that uses the splitting method for the relatively dense columns is experimentally better than a direct solver using the Cholesky method.

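A minimal sketch of the dense-column splitting idea (an illustration, not the paper's code; the function name and the use of SciPy are assumptions): when A = [A_s | d] contains one relatively dense column d, the matrix AA^T = A_s A_s^T + dd^T can be solved by factorizing only the sparse part and treating the dense rank-one term with the Sherman-Morrison formula.

```python
import numpy as np
import scipy.sparse.linalg as spla

def solve_with_split_dense_column(A_sparse, d, beta):
    """Solve (A A^T) y = beta for A = [A_sparse | d], with d the single dense column.

    Only the sparse part A_sparse A_sparse^T is factorized; the dense rank-one
    term d d^T is handled with the Sherman-Morrison formula. Assumes the sparse
    part is nonsingular.
    """
    M = (A_sparse @ A_sparse.T).tocsc()      # stays sparse without the dense column
    solve_M = spla.factorized(M)             # sparse LU factorization of M
    d = np.asarray(d).ravel()
    Mi_beta = solve_M(beta)
    Mi_d = solve_M(d)
    return Mi_beta - Mi_d * (d @ Mi_beta) / (1.0 + d @ Mi_d)
```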

A Study on the Sparse Matrix Method Useful to the Solution of a Large Power System (전력계통 해석에 유용한 "스파스"행렬법에 관한 연구)

  • 한만춘; 신명철
    • 전기의세계, v.23 no.3, pp.43-52, 1974
  • Matrix inversion is very inefficient for computing direct solutions of the large sparse systems of linear equations that arise in many network problems, such as a large electrical power system. Optimally ordered triangular factorization of sparse matrices is more efficient and offers other important computational advantages in some applications. The direct solutions are computed from sparse matrix factors instead of a full inverse matrix, thereby gaining a significant advantage in speed and computer memory requirements. In this paper, it is shown that the sparse matrix method is superior to the inverse matrix method for solving the linear equations of large sparse networks. In addition, it is shown that the solutions may be applied directly to solve the load flow in an electrical power system. The results of this study should lead to many applications including short circuit, transient stability, network reduction, reactive optimization and others.

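An illustrative comparison of the two approaches discussed in this entry (a synthetic matrix, not the paper's power-system data): a fill-reducing ordering plus sparse LU factors gives the direct solution without ever forming the full inverse.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
# Synthetic stand-in for a sparse network matrix (diagonally dominated, nonsingular).
Y = (sp.random(n, n, density=0.002, format='csc') + n * sp.eye(n)).tocsc()
i = np.random.rand(n)

lu = spla.splu(Y, permc_spec='COLAMD')   # optimally ordered triangular factorization
v = lu.solve(i)                          # direct solution from the sparse factors

# The inverse-matrix alternative, np.linalg.inv(Y.toarray()) @ i, builds a full
# n x n inverse and is far slower and more memory-hungry for large sparse networks.
```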

DATA MINING AND PREDICTION OF SAI TYPE MATRIX PRECONDITIONER

  • Kim, Sang-Bae; Xu, Shuting; Zhang, Jun
    • Journal of Applied Mathematics & Informatics, v.28 no.1_2, pp.351-361, 2010
  • The solution of large sparse linear systems is one of the most important problems in large scale scientific computing. Among the many methods developed, the preconditioned Krylov subspace methods are considered the preferred methods. Selecting a suitable preconditioner with appropriate parameters for a specific sparse linear system presents a challenging task for many application scientists and engineers who have little knowledge of preconditioned iterative methods. The prediction of ILU type preconditioners was considered in [27], where the support vector machine (SVM), as a data mining technique, is used to classify large sparse linear systems and predict the best preconditioners. In this paper, we apply the data mining approach to the sparse approximate inverse (SAI) type preconditioners to find parameters with which the preconditioned Krylov subspace method shows the best performance on the given linear systems.
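
A hedged sketch of the data-mining step (the feature list and workflow below are illustrative assumptions, not the paper's features): compute a few cheap characteristics per matrix and let an SVM predict which preconditioner parameter setting is likely to perform best.

```python
import numpy as np
from sklearn.svm import SVC

def matrix_features(A):
    """A few cheap structural/numerical features of a scipy.sparse matrix (illustrative)."""
    A = A.tocsr()
    nnz_per_row = np.diff(A.indptr)
    return [A.shape[0],                  # dimension
            A.nnz / A.shape[0],          # average nonzeros per row
            nnz_per_row.max(),           # densest row
            np.abs(A.diagonal()).min(),  # smallest diagonal magnitude
            abs(A).max()]                # largest entry magnitude

# X: feature vectors of previously solved systems; y: label of the SAI parameter
# setting that performed best on each system (training data not shown here).
# clf = SVC(kernel='rbf').fit(X, y)
# predicted_setting = clf.predict([matrix_features(A_new)])[0]
```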

An experimental study on parallel implementation of an iterative method for large scale, sparse linear system (반복기법을 이용한 대규모, 소선형시스템의 병렬처리에 관한 연구)

  • 김상원; 장수영
    • Proceedings of the Korean Operations and Management Science Society Conference, 1991.10a, pp.6-22, 1991
  • This thesis presents a parallel implementation of an iterative method for large scale, sparse linear systems and gives the results of computational experiments performed on both single-transputer and multi-transputer parallel computers. To solve the linear system, we use the conjugate gradient method and develop a data storage technique and a data communication scheme. In addition to an explanation of the conjugate gradient method, the results of the computational experiments are summarized.

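The iteration the thesis parallelizes is the conjugate gradient method; a serial reference version is sketched below (the transputer data storage and communication schemes are not reproduced). In a parallel version it is the matrix-vector product and the two inner products per iteration that get distributed.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Textbook CG for a symmetric positive definite A (dense or scipy.sparse)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # the dominant, parallelizable kernel
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```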

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC POSITIVE DEFINITE MATRICES

  • Salkuyeh, Davod Khojasteh
    • Journal of Applied Mathematics & Informatics, v.28 no.5_6, pp.1131-1141, 2010
  • We develop an algorithm for computing a sparse approximate inverse of a nonsymmetric positive definite matrix, based upon the FFAPINV algorithm. The sparse approximate inverse is computed in factored form and used as a preconditioner for Krylov subspace methods. The preconditioner is breakdown-free and, when used in conjunction with Krylov-subspace-based iterative solvers such as the GMRES algorithm, results in reliable solvers. Some numerical experiments are given to show the efficiency of the preconditioner.
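
The FFAPINV-based construction itself is not reproduced here; the sketch below (W, D, Z are placeholders for the computed factors) only shows how a factored approximate inverse M = Z D W^T, with M roughly equal to A^{-1}, would typically be supplied to GMRES as a preconditioner in SciPy.

```python
import scipy.sparse.linalg as spla

def solve_with_factored_sai(A, b, W, D, Z):
    """Precondition GMRES with a factored sparse approximate inverse M = Z D W^T."""
    M = spla.LinearOperator(A.shape, matvec=lambda v: Z @ (D @ (W.T @ v)))
    x, info = spla.gmres(A, b, M=M)   # info == 0 indicates convergence
    return x, info
```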

Improving Performance of Large Sparse Linear System Solvers On Distributed Memory Systems By Asynchronous Algorithms (비동기 알고리즘을 이용한 분산 메모리 시스템에서의 초대형 선형 시스템 해법의 성능 향상)

  • Park, Pil-Seong; Sin, Sun-Cheol
    • The KIPS Transactions: Part A, v.8A no.4, pp.439-446, 2001
  • The mainstream of parallel programming today uses synchronous algorithms, where processor synchronization for correct computation and workload balance are essential. If the workload is not well balanced or heterogeneous clusters are used, the overall performance of the whole system depends on the performance of the slowest processor. Asynchronous iteration is a way to mitigate such problems, but most of the work done so far is for shared-memory systems. In this paper, we propose and implement a parallel large sparse linear system solver that improves performance on distributed-memory systems such as clusters by using asynchronous iterations to reduce processor idle time as much as possible.

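A minimal dense sketch (an illustration, not the paper's solver) of the block update that asynchronous iteration relaxes: the synchronous sweep below uses only values of x from the previous iteration, whereas an asynchronous scheme lets each processor use whatever remote block values have most recently arrived, removing the barrier between sweeps.

```python
import numpy as np

def block_jacobi_sweep(A, b, x, blocks):
    """One synchronous block-Jacobi sweep; `blocks` lists the rows owned by each processor."""
    x_new = x.copy()
    for idx in blocks:                               # conceptually one processor per block
        D = A[np.ix_(idx, idx)]                      # local diagonal block
        r = b[idx] - A[idx, :] @ x + D @ x[idx]      # uses (possibly remote) values of x
        x_new[idx] = np.linalg.solve(D, r)
    return x_new
```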

Parallel Algorithm of Conjugate Gradient Solver using OpenGL Compute Shader

  • Va, Hongly; Lee, Do-keyong; Hong, Min
    • Journal of the Korea Society of Computer and Information, v.26 no.1, pp.1-9, 2021
  • The OpenGL compute shader is a shader stage that operates differently from the other shader stages and can be used for general-purpose parallel computation on arbitrary data. This paper proposes a GPU-based parallel algorithm for solving sparse linear systems with the conjugate gradient iterative method, where the computation is performed on OpenGL compute shaders. Such a sparse linear solver is typically used to solve large linear systems with a symmetric positive definite matrix. Four well-known matrix formats (Dense, COO, ELL and CSR) were used for matrix storage. The performance comparison from our experimental tests on eight sparse matrices shows that the GPU-based linear solver is much faster than the CPU-based one, with a best average computing time of 0.64 ms on the GPU versus 15.37 ms on the CPU.
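
The kernel that dominates a GPU conjugate gradient solver is the sparse matrix-vector product. A plain CSR version is sketched below in Python for reference; conceptually, the per-row work inside the loop is what each compute-shader invocation would evaluate in parallel (the shader code itself is not shown).

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """Sparse matrix-vector product y = A x with A stored in CSR format."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):                        # one invocation/thread per row
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y
```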

A NOVEL UNSUPERVISED DECONVOLUTION NETWORK: EFFICIENT FOR A SPARSE SOURCE

  • Choi, Seung-Jin
    • Proceedings of the Korean Information Science Society Conference, 1998.10c, pp.336-338, 1998
  • This paper presents a novel neural network structure for the blind deconvolution task, where the input (source) to a system is not available and the source may have any type of distribution, including a sparse distribution. We employ multiple sensors so that spatial information plays an important role. The resulting learning algorithm is linear, so it works for both sub- and super-Gaussian sources. Moreover, we can successfully deconvolve the mixture of a sparse source, while most existing algorithms [5] have difficulties with this task. Computer simulations confirm the validity and high performance of the proposed algorithm.

A Systolic Array to Effectively Solve Large Sparse Matrix Linear Systems of Equations (대형 스파스 메트릭스 선형방정식을 효율적으로 해석하는 씨스톨릭 어레이)

  • 이병홍; 채수환; 김정선
    • The Journal of Korean Institute of Communications and Information Sciences, v.17 no.7, pp.739-748, 1992
  • A CGM iterative systolic algorithm to solve large sparse linear systems of equations is presented. For implementation of the algorithm, a systolic array using the stripe structure is proposed. The matrix A is decomposed into a strictly lower triangular matrix, a diagonal matrix, and a strictly upper triangular matrix, and the first two and the latter are computed concurrently by different linear arrays. Hence, the execution time of this approach is reduced to half of the execution time when a single linear array is used. Computation on an irregularly distributed sparse matrix can be carried out effectively by using the stripe structure.

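The splitting the stripe-structured array operates on can be written down directly; the following SciPy sketch of A = L + D + U (illustration only, the systolic scheduling itself is a hardware-level concern) shows the three parts that the separate linear arrays process.

```python
import scipy.sparse as sp

def split_ldu(A):
    """Split A into strictly lower, diagonal, and strictly upper parts: A = L + D + U."""
    A = A.tocsr()
    L = sp.tril(A, k=-1)         # strictly lower triangular part
    D = sp.diags(A.diagonal())   # diagonal part
    U = sp.triu(A, k=1)          # strictly upper triangular part
    return L, D, U
```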

A Noisy Videos Background Subtraction Algorithm Based on Dictionary Learning

  • Xiao, Huaxin; Liu, Yu; Tan, Shuren; Duan, Jiang; Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.6, pp.1946-1963, 2014
  • Most background subtraction methods focus on dynamic and complex scenes without considering robustness against noise. This paper proposes a background subtraction algorithm based on dictionary learning and sparse coding for handling low-light conditions. The proposed method formulates background modeling as a linear and sparse combination of atoms in the dictionary. Background subtraction is considered as the difference between the sparse representations of the current frame and the background model. Assuming that the projection of the noise onto the dictionary is irregular and random guarantees the adaptability of the approach in large noisy scenes. Experimental results, divided into simulated large-noise and realistic low-light conditions, show the promising robustness of the proposed approach compared with other competing methods.
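
A hedged sketch of the difference-of-sparse-representations idea (dictionary learning itself is omitted, and the function name and OMP sparsity level are illustrative assumptions, not the paper's method): code the background model and the current frame over the same dictionary and take the reconstructed difference as the candidate foreground.

```python
from sklearn.linear_model import orthogonal_mp

def sparse_foreground(dictionary, background_vec, frame_vec, n_nonzero=5):
    """Code both vectors over the dictionary and return the reconstructed difference."""
    a_bg = orthogonal_mp(dictionary, background_vec, n_nonzero_coefs=n_nonzero)
    a_fr = orthogonal_mp(dictionary, frame_vec, n_nonzero_coefs=n_nonzero)
    return dictionary @ (a_fr - a_bg)   # nonzero residual marks candidate foreground
```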