PERFORMANCE COMPARISON OF PRECONDITIONED ITERATIVE METHODS WITH DIRECT PRECONDITIONERS

  • Yun, Jae Heon (Department of Mathematics, Chungbuk National University) ;
  • Lim, Hyo Jin (Department of Mathematics, Chungbuk National University) ;
  • Kim, Kyoum Sun (Department of Mathematics, Chungbuk National University)
  • Received : 2013.11.15
  • Accepted : 2014.02.25
  • Published : 2014.05.30

Abstract

In this paper, we first provide comparison results of preconditioned AOR methods with direct preconditioners $I+{\beta}L$, $I+{\beta}U$ and $I+{\beta}(L+U)$ for solving a linear system whose coefficient matrix is a large sparse irreducible L-matrix, where ${\beta}$ > 0. Next we propose how to find a near optimal parameter ${\beta}$ for which Krylov subspace method with these direct preconditioners performs nearly best. Lastly, numerical experiments are provided to compare the performance of preconditioned iterative methods and to illustrate the theoretical results.


1. Introduction

In this paper, we consider the following nonsingular linear system

$$Ax = b, \tag{1}$$

where A = (aij) ∈ ℝn×n is a large sparse irreducible L-matrix. Throughout the paper, we assume that A = I − L − U, where I is the identity matrix, and L and U are strictly lower triangular and strictly upper triangular matrices, respectively. Then the AOR iterative method [3] for solving the linear system (1) can be expressed as

$$x_{k+1} = T_{r,\omega}\, x_k + M_{r,\omega}\, b, \qquad k = 0, 1, 2, \ldots, \tag{2}$$

where x0 is an initial vector, ω and r are real parameters with ω ≠ 0,

$$T_{r,\omega} = (I - rL)^{-1}\big[(1-\omega)I + (\omega - r)L + \omega U\big], \tag{3}$$

and Mr,ω = ω(I − rL)−1. The matrix Tr,ω is called the iteration matrix of the AOR method.
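The AOR iteration matrix can be formed and examined numerically. The following sketch (function names are ours; A is assumed already scaled to unit diagonal so that A = I − L − U) builds T with the standard AOR splitting and evaluates its spectral radius:

```python
import numpy as np

def aor_iteration_matrix(A, r, omega):
    """Standard AOR iteration matrix T = (I - rL)^{-1}[(1-w)I + (w-r)L + wU],
    assuming A = I - L - U (unit diagonal)."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)   # strictly lower part of A carries -L
    U = -np.triu(A, 1)    # strictly upper part of A carries -U
    return np.linalg.solve(I - r * L, (1 - omega) * I + (omega - r) * L + omega * U)

def spectral_radius(T):
    """Largest eigenvalue modulus."""
    return max(abs(np.linalg.eigvals(T)))
```

The familiar special cases fall out of the parameters: r = ω gives SOR, r = ω = 1 gives Gauss-Seidel, and r = 0, ω = 1 gives Jacobi.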

In order to accelerate the convergence of an iterative method for solving the linear system (1), the original linear system (1) is transformed into the following preconditioned linear system

$$PAx = Pb, \tag{4}$$

where P, called a preconditioner, is a nonsingular matrix. Various types of preconditioners P, which are nonnegative matrices with unit diagonal entries, have been proposed by many researchers [2,4,5,6,7,9,11,14,15,17]. In this paper, we study comparison results of preconditioned iterative methods corresponding to the direct preconditioners P = Pl = I + βL, P = Pu = I + βU and P = Pb = I + β(L + U), where β is a positive real number. Here, a direct preconditioner means one that can be constructed without any computational step. The preconditioner Pu was first introduced by Kotakemori et al. [4], and it has been studied further in [9,11,15]. The preconditioner Pl for β = 1 was first proposed by Usui et al. [11], so it is worth studying Pl further for β > 0 in this paper.

Let Au = PuA and UL = Γ + E + F, where Γ is a diagonal matrix, E is a strictly lower triangular matrix, and F is a strictly upper triangular matrix. Then, one obtains

$$A_u = D_u - L_u - U_u, \tag{5}$$

where Du = I − βΓ, Lu = L + βE, and Uu = (1 − β)U + βU2 + βF.

Let Al = PlA and LU = Γ1 + E1 + F1, where Γ1 is a diagonal matrix, E1 is a strictly lower triangular matrix, and F1 is a strictly upper triangular matrix. Then, one obtains

$$A_l = D_l - L_l - U_l, \tag{6}$$

where Dl = I − βΓ1, Ll = (1 − β)L + βL2 + βE1, and Ul = U + βF1.

Recently, Wang and Song [14] studied convergence of the preconditioned AOR method with preconditioner Pb = I+β(L+U), where 0 < β ≤ 1. Let Ab = PbA.

Then, one obtains

$$A_b = D_b - L_b - U_b, \tag{7}$$

where Db = I − βΓ − βΓ1, Lb = (1 − β)L + βL2 + βE + βE1, and Ub = (1 − β)U + βU2 + βF + βF1.

If we apply the AOR iterative method to the preconditioned linear system (4), then we get the preconditioned AOR iterative method whose iteration matrix is

$$T_{*,r,\omega} = (D_* - rL_*)^{-1}\big[(1-\omega)D_* + (\omega - r)L_* + \omega U_*\big], \qquad * \in \{u, l, b\}. \tag{8}$$

Notice that the computational costs for constructing Au, Al and Ab will not be expensive since A is assumed to be a large sparse matrix.
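Forming the preconditioned matrices amounts to a few matrix products. A sketch in NumPy (dense arrays for clarity; function names are ours), which can also be checked against the splitting Au = Du − Lu − Uu of (5):

```python
import numpy as np

def split(A):
    """Split A = I - L - U into its strictly lower/upper parts (unit diagonal assumed)."""
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return L, U

def preconditioned_matrices(A, beta):
    """Form A_u = (I + beta*U)A, A_l = (I + beta*L)A and A_b = (I + beta*(L+U))A."""
    n = A.shape[0]
    I = np.eye(n)
    L, U = split(A)
    return (I + beta * U) @ A, (I + beta * L) @ A, (I + beta * (L + U)) @ A
```

In practice A is sparse, so these products are cheap; the dense form above is only for exposition.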

The purpose of this paper is to provide performance comparison of preconditioned iterative methods with direct preconditioners Pl, Pu and Pb for solving a linear system whose coefficient matrix is a large sparse irreducible L-matrix satisfying some weaker conditions than those used in the existing literature. This paper is organized as follows. In Section 2, we present some notation, definitions and preliminary results. In Section 3, we provide comparison results of preconditioned AOR methods with preconditioner Pu. In Section 4, we provide comparison results of preconditioned AOR methods with preconditioners Pl and Pb. In Section 5, we propose how to find a near optimal parameter β for which Krylov subspace method with the direct preconditioners Pl, Pu and Pb performs nearly best. In Section 6, numerical experiments are provided to compare the performance of preconditioned iterative methods and to illustrate the theoretical results in Sections 3 to 5. Lastly, some conclusions are drawn.

 

2. Preliminaries

A matrix A = (aij) ∈ ℝn×n is called a Z-matrix if aij ≤ 0 for i ≠ j, an L-matrix if A is a Z-matrix and aii > 0 for i = 1, 2, . . . , n, and an M-matrix if A is a Z-matrix and A−1 ≥ 0. For a vector x ∈ ℝn, x ≥ 0 (x > 0) denotes that all components of x are nonnegative (positive). For two vectors x, y ∈ ℝn, x ≥ y (x > y) means that x − y ≥ 0 (x − y > 0). These definitions carry over immediately to matrices. For a square matrix A, ρ(A) denotes the spectral radius of A, and A is called irreducible if the directed graph of A is strongly connected [13]. Some useful results to which we refer later are provided below.
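These definitions translate directly into a small numerical check (a sketch for small matrices; for large sparse matrices one would of course avoid forming the explicit inverse):

```python
import numpy as np

def is_M_matrix(A, tol=1e-12):
    """Check the Section 2 definition: a Z-matrix (nonpositive off-diagonals)
    whose inverse exists and is entrywise nonnegative."""
    off = A - np.diag(np.diag(A))
    if off.max() > tol:
        return False              # not a Z-matrix
    try:
        Ainv = np.linalg.inv(A)
    except np.linalg.LinAlgError:
        return False              # singular, so not an M-matrix
    return Ainv.min() >= -tol
```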

Theorem 2.1 (Varga [13]). Let A ≥ 0 be an irreducible matrix. Then A has a positive real eigenvalue equal to its spectral radius ρ(A) > 0, and to ρ(A) there corresponds an eigenvector x > 0.

Theorem 2.2 (Berman and Plemmons [1]). Let A ≥ 0 be a matrix and let x > 0 be a vector. Then the following hold.
(a) If Ax ≤ λx, then ρ(A) ≤ λ; if Ax < λx, then ρ(A) < λ.
(b) If Ax ≥ λx, then ρ(A) ≥ λ; if Ax > λx, then ρ(A) > λ.
(c) If A is irreducible and Ax ≤ λx with Ax ≠ λx, then ρ(A) < λ; similarly, if Ax ≥ λx with Ax ≠ λx, then ρ(A) > λ.

Theorem 2.3 (Li and Sun [6]). Let A be an irreducible matrix, and let A = M − N be an M-splitting of A with N ≠ 0. Then there exists a vector x > 0 such that M−1Nx = ρ(M−1N)x and ρ(M−1N) > 0.

 

3. Comparison results for preconditioner Pu

We first provide a comparison result for the preconditioned Gauss-Seidel method with the preconditioner Pu.

Theorem 3.1. Let A = (aij) ∈ ℝn×n be an irreducible L-matrix. Suppose that 0 < (UL)ii < 1 for all 1 ≤ i ≤ n − 1, where (UL)ii denotes the i-th diagonal entry of UL. Let T = (I − L)−1U and Tu = (Du − Lu)−1Uu, where Du, Lu and Uu are defined by (5). If 0 < β ≤ 1, then ρ(Tu) < ρ(T) if ρ(T) < 1; ρ(Tu) = ρ(T) if ρ(T) = 1; and ρ(Tu) > ρ(T) if ρ(T) > 1.

Proof. Since A is an L-matrix, L and U are nonnegative. Since 0 < β ≤ 1 and (UL)ii < 1, Du, Lu and Uu are nonnegative and thus Tu ≥ 0. Since A = (I − L) − U is an M-splitting of A and U ≠ 0, from Theorem 2.3 there exists a vector x > 0 such that Tx = λx, where λ = ρ(T). From Tx = λx, one easily obtains

$$Ux = \lambda(I - L)x. \tag{9}$$

Using (5) and (9),

Let y = (Γ + E + λ−1U2)x. Since (UL)ii > 0 for all 1 ≤ i ≤ n − 1, y ≥ 0 is a nonzero vector whose first (n − 1) components are positive and last component is zero. Since A is irreducible, Lu = L + βE ≥ 0 is a strictly lower triangular matrix whose last row vector is nonzero and thus (Du − Lu)−1y is a positive vector. It follows that Tux < λx from (10) if λ < 1. From Theorem 2.2, ρ(Tu) < ρ(T) < 1. For the cases of λ = 1 and λ > 1, Tux = λx and Tux > λx are obtained from (10), respectively. Hence, the theorem follows from Theorem 2.2.
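Theorem 3.1 can be illustrated on a small hypothetical test matrix (our choice: constant off-diagonal entries −0.2 make A an irreducible L-matrix with 0 < (UL)ii < 1 for i < n, and the Jacobi radius 0.6 guarantees ρ(T) < 1):

```python
import numpy as np

n = 4
A = np.eye(n) - 0.2 * (np.ones((n, n)) - np.eye(n))   # irreducible L-matrix
I = np.eye(n)
L, U = -np.tril(A, -1), -np.triu(A, 1)
rho = lambda M: max(abs(np.linalg.eigvals(M)))

T = np.linalg.solve(I - L, U)                # Gauss-Seidel iteration matrix

beta = 0.8
UL = U @ L
Gam = np.diag(np.diag(UL))
E, F = np.tril(UL, -1), np.triu(UL, 1)
Du, Lu = I - beta * Gam, L + beta * E
Uu = (1 - beta) * U + beta * U @ U + beta * F
Tu = np.linalg.solve(Du - Lu, Uu)            # preconditioned Gauss-Seidel

rho_T, rho_Tu = rho(T), rho(Tu)              # Theorem 3.1 predicts rho_Tu < rho_T < 1
```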

We now provide a comparison result for the preconditioned AOR method with the preconditioner Pu.

Theorem 3.2. Let A = (aij) ∈ ℝn×n be an irreducible L-matrix. Suppose that 0 < r ≤ ω ≤ 1 (r ≠ 1) and 0 < (UL)ii < 1 for all 1 ≤ i ≤ n − 1, where (UL)ii denotes the i-th diagonal entry of UL. Let Tr,ω and Tu,r,ω be defined by (3) and (8), respectively. If 0 < β ≤ 1, then ρ(Tu,r,ω) < ρ(Tr,ω) if ρ(Tr,ω) < 1; ρ(Tu,r,ω) = ρ(Tr,ω) if ρ(Tr,ω) = 1; and ρ(Tu,r,ω) > ρ(Tr,ω) if ρ(Tr,ω) > 1.

Proof. Notice that Tr,ω can be expressed as Tr,ω = (1 − ω)I + ω(1 − r)L + ωU + H, where H is a nonnegative matrix. By assumptions, it can be seen that Tr,ω ≥ 0 is irreducible and Tu,r,ω ≥ 0. From Theorem 2.1, there exists a vector x > 0 such that Tr,ωx = λx, where λ = ρ(Tr,ω). From Tr,ωx = λx, one easily obtains

$$\omega Ux = (\lambda + \omega - 1)x - (\omega - r + \lambda r)Lx. \tag{11}$$

Using (5) and (11),

Let y = (Γ + rE + λ−1((1 − ω)U + (ω − r)UL + ωU2))x. Since 0 < r ≤ ω ≤ 1 and (UL)ii > 0 for all 1 ≤ i ≤ n − 1, y ≥ 0 is a nonzero vector whose first (n − 1) components are positive and last component is zero. Since A is irreducible and r ≠ 0, Lu = L + βE ≥ 0 is a strictly lower triangular matrix whose last row vector is nonzero and thus (Du − rLu)−1y is a positive vector. It follows that Tu,r,ωx < λx from (12) if λ < 1. From Theorem 2.2, ρ(Tu,r,ω) < ρ(Tr,ω) < 1. For the cases of λ = 1 and λ > 1, Tu,r,ωx = λx and Tu,r,ωx > λx are obtained from (12), respectively. Hence, the theorem follows from Theorem 2.2.

If 0 < β < 1 in Theorem 3.2, the assumptions for (UL)ii can be weakened.

Theorem 3.3. Let A = (aij) ∈ ℝn×n be an irreducible L-matrix. Suppose that 0 < r ≤ ω ≤ 1 (r ≠ 1), (UL)ii < 1 (1 ≤ i ≤ n − 1) and (UL)ii > 0 for at least one i, where (UL)ii denotes the i-th diagonal entry of UL. Let Tr,ω and Tu,r,ω be defined by (3) and (8), respectively. If 0 < β < 1, then ρ(Tu,r,ω) < ρ(Tr,ω) if ρ(Tr,ω) < 1; ρ(Tu,r,ω) = ρ(Tr,ω) if ρ(Tr,ω) = 1; and ρ(Tu,r,ω) > ρ(Tr,ω) if ρ(Tr,ω) > 1.

Proof. Since A is irreducible and β < 1, Au is also irreducible. Hence it can be easily shown that both Tr,ω and Tu,r,ω are nonnegative and irreducible. From Theorem 2.1, there exists a vector x > 0 such that Tr,ωx = λx, where λ = ρ(Tr,ω). From equation (12), one obtains

If λ < 1, then from (13) Tu,r,ωx ≤ λx and Tu,r,ωx ≠ λx. Since Tu,r,ω is irreducible, Theorem 2.2 implies that ρ(Tu,r,ω) < ρ(Tr,ω) < 1. For the cases of λ = 1 and λ > 1, Tu,r,ωx = λx and Tu,r,ωx ≥ λx (with Tu,r,ωx ≠ λx) are obtained from (13), respectively. Hence, the theorem follows from Theorem 2.2.

By combining Theorems 3.1 and 3.2, we can obtain the following theorem.

Theorem 3.4. Let A = (aij) ∈ ℝn×n be an irreducible L-matrix. Suppose that 0 < r ≤ ω ≤ 1 and 0 < (UL)ii < 1 for all 1 ≤ i ≤ n − 1, where (UL)ii denotes the i-th diagonal entry of UL. Let Tr,ω and Tu,r,ω be defined by (3) and (8), respectively. If 0 < β ≤ 1, then ρ(Tu,r,ω) < ρ(Tr,ω) if ρ(Tr,ω) < 1; ρ(Tu,r,ω) = ρ(Tr,ω) if ρ(Tr,ω) = 1; and ρ(Tu,r,ω) > ρ(Tr,ω) if ρ(Tr,ω) > 1.

Proof. If r = 1, then ω = 1 and thus the theorem follows from Theorem 3.1. If r ≠ 1, then the theorem follows from Theorem 3.2.

 

4. Comparison results for preconditioners Pl and Pb

We first provide a comparison result for the preconditioned Gauss-Seidel method with the preconditioner Pl.

Theorem 4.1. Let A = (aij) ∈ ℝn×n be an irreducible L-matrix. Suppose that 0 < (LU)ii < 1 for all 2 ≤ i ≤ n, where (LU)ii denotes the i-th diagonal entry of LU. Let T = (I − L)−1U and Tl = (Dl − Ll)−1Ul, where Dl, Ll and Ul are defined by (6). If 0 < β ≤ 1, then ρ(Tl) < ρ(T) if ρ(T) < 1; ρ(Tl) = ρ(T) if ρ(T) = 1; and ρ(Tl) > ρ(T) if ρ(T) > 1.

Proof. Since A is an L-matrix, L and U are nonnegative. Since 0 < β ≤ 1 and (LU)ii < 1, Dl, Ll and Ul are nonnegative and thus Tl ≥ 0. Since A = (I − L) − U is an M-splitting of A and U ≠ 0, from Theorem 2.3 there exists a vector x > 0 such that Tx = λx, where λ = ρ(T). From Tx = λx, one easily obtains

$$Ux = \lambda(I - L)x. \tag{14}$$

Using (6) and (14),

Let y = (Γ1 + E1)x. Since (LU)ii > 0 for all 2 ≤ i ≤ n, y ≥ 0 is a nonzero vector whose first component is zero and last (n − 1) components are positive. Thus z = (Dl − Ll)−1y ≥ 0 is also a nonzero vector whose first component is zero and last (n − 1) components are positive. Let

where z2 > 0, T12 ∈ ℝ1×(n−1) and T22 ∈ ℝ(n−1)×(n−1). From (15) and (16),

Since z2 > 0, T22x2 < λx2 from (17) if λ < 1. From Theorem 2.2, ρ(T22) < ρ(T) < 1. Since ρ(Tl) = ρ(T22), ρ(Tl) < ρ(T) is obtained. For the cases of λ = 1 and λ > 1, T22x2 = λx2 and T22x2 > λx2 are obtained from (17), respectively. Hence, the theorem follows from Theorem 2.2.

We now provide a comparison result for the preconditioned AOR method with the preconditioner Pl.

Theorem 4.2. Let A = (aij) ∈ ℝn×n be an irreducible L-matrix. Suppose that 0 ≤ r, ω ≤ 1 (ω ≠ 0, r ≠ 1) and (LU)ii < 1 for all 2 ≤ i ≤ n, where (LU)ii denotes the i-th diagonal entry of LU. Let Tr,ω and Tl,r,ω be defined by (3) and (8), respectively. If 0 < β < 1, then ρ(Tl,r,ω) < ρ(Tr,ω) if ρ(Tr,ω) < 1; ρ(Tl,r,ω) = ρ(Tr,ω) if ρ(Tr,ω) = 1; and ρ(Tl,r,ω) > ρ(Tr,ω) if ρ(Tr,ω) > 1.

Proof. By simple calculation, one can obtain

where H and Hl are nonnegative matrices. Since 0 ≤ r, ω ≤ 1 (ω ≠ 0, r ≠ 1) and 0 < β < 1, it can be easily shown from (18) that both Tr,ω and Tl,r,ω are nonnegative and irreducible. Hence, there exists a vector x > 0 such that Tr,ωx = λx, where λ = ρ(Tr,ω). Using (6) and Tr,ωx = λx, one obtains

Let y = (Γ1 + rE1 + (1 − r)L)x. Since A is irreducible and r ≠ 1, y ≥ 0 is a nonzero vector whose first component is zero. If λ < 1, then from (19) Tl,r,ωx ≤ λx and Tl,r,ωx ≠ λx. Since Tl,r,ω is irreducible, Theorem 2.2 implies that ρ(Tl,r,ω) < ρ(Tr,ω) < 1. For the cases of λ = 1 and λ > 1, Tl,r,ωx = λx and Tl,r,ωx ≥ λx (with Tl,r,ωx ≠ λx) are obtained from (19), respectively. Hence, the theorem follows from Theorem 2.2.

Notice that Theorem 4.2 for the preconditioner Pl does not require the assumptions r ≤ ω and (LU)ii > 0, in contrast to Theorem 3.2 for the preconditioner Pu. If β = 1, the strict inequalities in Theorem 4.2 may fail to hold, and only the corresponding weak inequalities are guaranteed.

Lemma 4.3. Let A = (aij) ∈ ℝn×n be an L-matrix. If A = I − L − U is an M-matrix, then (LU)ii < 1 and (UL)ii < 1 for all i = 1, 2, . . . , n.

Proof. It is easy to show that (I + U)A and (I + L)A are Z-matrices. Since A is an M-matrix, there exists a vector x > 0 such that Ax > 0. Thus (I + L)Ax > 0 and (I + U)Ax > 0. It follows that (I + L)A and (I + U)A are M-matrices and so all diagonal components of (I + L)A and (I + U)A are positive. Hence 1 − (LU)ii > 0 and 1 − (UL)ii > 0 for all i = 1, 2, . . . , n, which completes the proof.

From Lemma 4.3, it can be seen that if A is an M-matrix, none of the theorems in Sections 3 and 4 requires the assumption (LU)ii < 1 or (UL)ii < 1. Lastly, we provide a comparison result of the preconditioned AOR method for the preconditioner Pb.

Lemma 4.4 ([8]). Suppose that A1 = M1 − N1 and A2 = M2 − N2 are weak regular splittings of the monotone matrices A1 and A2, respectively, such that M2−1 ≥ M1−1. If there exists a positive vector x such that 0 ≤ A1x ≤ A2x, then for the monotonic norm associated with x,

$$\|M_2^{-1}N_2\|_x \le \|M_1^{-1}N_1\|_x.$$

In particular, if M1−1N1 has a positive Perron vector, then

$$\rho(M_2^{-1}N_2) \le \rho(M_1^{-1}N_1).$$

Theorem 4.5. Let A be an irreducible M-matrix. Suppose that 0 ≤ r ≤ ω ≤ 1 (ω ≠ 0). Let Tr,ω, Tl,r,ω and Tb,r,ω be defined by (3) and (8), respectively. If 0 < β ≤ 1, then ρ(Tb,r,ω) ≤ ρ(Tl,r,ω) ≤ ρ(Tr,ω) < 1.

Proof. Since ρ(Tl,r,ω) ≤ ρ(Tr,ω) < 1 was shown in [14], it suffices to show ρ(Tb,r,ω) ≤ ρ(Tl,r,ω). Since A is an M-matrix and 0 < β ≤ 1, there exists a vector x > 0 such that Ax > 0, and PlA and PbA are also Z-matrices. It follows that PlA and PbA are M-matrices. Notice that PbAx ≥ PlAx > 0 and (Db − rLb)−1 ≥ (Dl − rLl)−1. Since A is irreducible and 0 < β < 1, PlA is also irreducible. Since the splitting of PlA induced by Tl,r,ω is an M-splitting with Ul ≠ 0, Tl,r,ω has a positive Perron vector from Theorem 2.3. Using Lemma 4.4, ρ(Tb,r,ω) ≤ ρ(Tl,r,ω) for 0 < β < 1. By continuity of the spectral radius, ρ(Tb,r,ω) ≤ ρ(Tl,r,ω) is also true for β = 1. Therefore, the proof is complete.

 

5. A near optimal parameter β for Krylov subspace method

In this section, we propose how to find a near optimal parameter β for which Krylov subspace method with the preconditioners Pl, Pu and Pb performs nearly best. Before proceeding to the analysis, we define the following notation. For X and Y in ℝn×n, we define the inner product ⟨X, Y⟩F = tr(XTY), where tr(Z) denotes the trace of the square matrix Z and XT denotes the transpose of the matrix X. The associated norm is the well-known Frobenius norm, denoted by ∥ · ∥F.

If P is a good preconditioner for Krylov subspace method, then PA is close to the identity matrix I. Thus it would be a good idea to determine a value β such that ∥PA − I∥F is minimized, where P is Pl, Pu or Pb. We call such a value β a near optimal parameter for the preconditioned Krylov subspace method. We first provide a near optimal parameter β for the preconditioner P = Pl.

Theorem 5.1. Let A = I − L − U be a large sparse nonsingular matrix, and let Pl = I + βL be a preconditioner for Krylov subspace method. Then, a near optimal parameter β is given by

$$\beta = \frac{\langle LA,\, L + U \rangle_F}{\|LA\|_F^2}.$$

Proof. Note that (I + βL)A − I = βLA − (L + U). By definition of the Frobenius norm, one obtains

$$\|(I+\beta L)A - I\|_F^2 = \beta^2 \|LA\|_F^2 - 2\beta \langle LA,\, L+U \rangle_F + \|L+U\|_F^2. \tag{20}$$

The equation (20) is a quadratic in β with positive leading coefficient, so the minimum occurs when

$$\beta = \frac{\langle LA,\, L + U \rangle_F}{\|LA\|_F^2}.$$

Hence the proof is complete.

Similarly to the proof of Theorem 5.1, one can obtain a near optimal parameter β for the preconditioner P = Pu.

Theorem 5.2. Let A = I − L − U be a large sparse nonsingular matrix, and let Pu = I + βU be a preconditioner for Krylov subspace method. Then, a near optimal parameter β is given by

$$\beta = \frac{\langle UA,\, L + U \rangle_F}{\|UA\|_F^2}.$$

Lastly, we provide a near optimal parameter β for the preconditioner P = Pb.

Theorem 5.3. Let A = I − L − U ∈ ℝn×n be a large sparse nonsingular matrix, and let Pb = I + β(L + U) be a preconditioner for Krylov subspace method. Then, a near optimal parameter β is given by

$$\beta = \frac{n}{n + \|L + U\|_F^2}.$$

Proof. Note that (I + β(L + U))A − I = (β − 1)(L + U) − β(L + U)2 = (L + U)[(β − 1)I − β(L + U)]. By the submultiplicative property of the Frobenius norm, one obtains

$$\|(I + \beta(L+U))A - I\|_F \le \|L+U\|_F\, \|(\beta - 1)I - \beta(L+U)\|_F. \tag{21}$$

From (21), it is sufficient to minimize ∥(β − 1)I − β(L + U)∥F in order to determine a near optimal parameter β. Since tr(L + U) = 0, one obtains

$$\|(\beta - 1)I - \beta(L+U)\|_F^2 = n(\beta - 1)^2 + \beta^2\|L+U\|_F^2. \tag{22}$$

The equation (22) is a quadratic in β with positive leading coefficient, so the minimum occurs when

$$\beta = \frac{n}{n + \|L + U\|_F^2}.$$

Hence the proof is complete.
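The three minimizers take only a few lines to compute. The sketch below restates the quadratic minimizations from the proofs above in code (function name is ours; A is assumed scaled so that A = I − L − U):

```python
import numpy as np

def near_optimal_betas(A):
    """Near-optimal beta for P_l, P_u and P_b, obtained by minimizing the
    quadratics in the proofs of Theorems 5.1-5.3."""
    n = A.shape[0]
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    B = L + U
    LA, UA = L @ A, U @ A
    beta_l = np.trace(LA.T @ B) / np.linalg.norm(LA, 'fro') ** 2
    beta_u = np.trace(UA.T @ B) / np.linalg.norm(UA, 'fro') ** 2
    beta_b = n / (n + np.linalg.norm(B, 'fro') ** 2)
    return beta_l, beta_u, beta_b
```

Since each objective is a quadratic with positive leading coefficient, the returned values are global minimizers of the respective Frobenius-norm objectives.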

 

6. Numerical experiments

In this section, we provide numerical experiments to compare the performance of preconditioned iterative methods and to illustrate the theoretical results obtained in Sections 3 to 5. All numerical experiments are carried out on a PC equipped with Intel Core i5-4570 3.2GHz CPU and 8GB RAM using MATLAB R2013a. The preconditioned iterative methods used for numerical experiments are the preconditioned AOR method and the right preconditioned BiCGSTAB method [10,12].

In Table 3, Iter denotes the number of iteration steps, CPU denotes the elapsed CPU time in seconds, P denotes the preconditioner used, ILU(0) denotes the incomplete factorization preconditioner without fill-in, and β denotes a near optimal value computed by the formulas given in Section 5. For the numerical tests using the right preconditioned BiCGSTAB, all nonzero elements of the sparse matrices are stored in a sparse storage format, which saves a lot of CPU time; the initial vectors are set to the zero vector; and the iterations are terminated once the current approximation xk satisfies a relative residual stopping criterion in the L2-norm ∥·∥2.

Example 6.1. Consider the two dimensional convection-diffusion equation [15]

with the Dirichlet boundary condition on ∂Ω, where ∂Ω denotes the boundary of Ω. When the central difference scheme on a uniform grid with m × m interior nodes is applied to the discretization of equation (23), we obtain a linear system Ax = b whose coefficient matrix A ∈ ℝn×n is of the form

where n = m2, Im denotes the identity matrix of order m, ⊗ denotes the Kronecker product, and the blocks are m × m tridiagonal matrices depending on the step size h = 1/(m + 1). It is clear that this matrix A is an irreducible nonsymmetric L-matrix. Numerical results of the preconditioned AOR method for n = 502 = 2500 are provided in Table 1, and numerical results of the right preconditioned BiCGSTAB method for various values of n are listed in Table 3.

Example 6.2. Consider the two dimensional Poisson’s equation

with the Dirichlet boundary condition on ∂Ω. When the central difference scheme on a uniform grid with m × m interior nodes is applied to the discretization of equation (24), we obtain a linear system Ax = b whose coefficient matrix A ∈ ℝn×n is given by

where n = m2 and the blocks are m × m tridiagonal matrices. Note that this matrix A is an irreducible symmetric L-matrix. Numerical results of the preconditioned AOR method for n = 502 = 2500 are provided in Table 2.
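As a sketch (the printed block formula was lost in extraction, so the entries below are our guess at a consistent Jacobi-scaled form: unit diagonal with −1/4 for grid neighbours, so that A = I − L − U):

```python
import numpy as np
import scipy.sparse as sp

def poisson_matrix(m):
    """Jacobi-scaled 5-point Poisson matrix of order n = m^2 in the form
    A = I - L - U: ones on the diagonal, -1/4 for each grid neighbour."""
    C = sp.diags([1.0, 1.0], [-1, 1], shape=(m, m))   # 1-D neighbour pattern
    Im = sp.identity(m)
    K = sp.kron(Im, C) + sp.kron(C, Im)               # 2-D grid adjacency
    return (sp.identity(m * m) - 0.25 * K).tocsr()
```

The resulting matrix is symmetric with nonpositive off-diagonal entries and unit diagonal, matching the irreducible symmetric L-matrix structure described above.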

In Figure 1, we depict the eigenvalue distributions of the preconditioned matrices corresponding to six different preconditioners for Example 6.1 when n = 302. From Figure 1, it can be seen that the eigenvalues of PbA are more clustered than those of the other preconditioned matrices. From Tables 1 and 2, it can be seen that ρ(Tl,r,ω) < ρ(Tr,ω) does not hold for ω > 1 and r > 1, and ρ(Tb,r,ω) ≤ ρ(Tl,r,ω) does not hold for β > 1. For the test problems used in this paper, the preconditioner Pu yields better optimal performance than the preconditioners Pl and Pb, and the optimal values of β, ω and r for the preconditioned AOR method with Pu are greater than or equal to 1 (see Tables 1 and 2). In other words, ω = r is around 1.3 and β = 1 for Examples 6.1 and 6.2. Further research is required to study how to find optimal or near optimal values of β, ω and r for the preconditioned AOR method.

From Table 3, it can be seen that the preconditioner Pb with a near optimal parameter β performs much better than the ILU(0) preconditioner, which is one of the most powerful preconditioners in common use. It can also be seen that the preconditioners Pl and Pu with near optimal parameters β perform better than the preconditioner (I − L)−1. The performance of BiCGSTAB with the preconditioner (I − U)−1 is not reported here since it is similar to that with the preconditioner (I − L)−1. Notice that a near optimal parameter β proposed in Section 5 can be easily computed in MATLAB.
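In SciPy, the direct preconditioner Pb plugs into BiCGSTAB as a LinearOperator whose action is a single sparse matrix-vector product. Note one caveat: SciPy applies M as an approximate inverse of A and preconditions on the left, whereas the paper uses right preconditioning, so iteration counts need not match the tables; the function name and setup below are ours:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, LinearOperator

def solve_with_Pb(A, b, beta):
    """Solve Ax = b by BiCGSTAB preconditioned with P_b = I + beta*(L + U),
    where A = I - L - U.  Applying P_b costs one sparse mat-vec."""
    n = A.shape[0]
    B = (sp.tril(A, -1) + sp.triu(A, 1)) * (-1.0)        # B = L + U
    M = LinearOperator((n, n), matvec=lambda v: v + beta * (B @ v), dtype=float)
    return bicgstab(A, b, M=M)                           # (x, info); info == 0 on success
```

The same pattern works for Pl and Pu by replacing B with the strictly lower or strictly upper part alone.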

TABLE 1. Numerical results for ρ(Tr,ω), ρ(Tu,r,ω), ρ(Tl,r,ω) and ρ(Tb,r,ω) with various values of β, r and ω for Example 6.1.

TABLE 2. Numerical results for ρ(Tr,ω), ρ(Tu,r,ω), ρ(Tl,r,ω) and ρ(Tb,r,ω) with various values of β, r and ω for Example 6.2.

FIGURE 1. Spectra of the preconditioned matrices corresponding to several preconditioners for Example 6.1 when n = 302.

TABLE 3. Numerical results of preconditioned BiCGSTAB for Example 6.1.

 

7. Conclusions

In this paper, we provided comparison results for preconditioned AOR methods with the direct preconditioners Pl, Pu and Pb for solving a linear system whose coefficient matrix is a large sparse irreducible L-matrix; these results hold under weaker conditions than those used in the existing literature. The theoretical results are in good agreement with the numerical results (see Tables 1 and 2). However, further research is required to study how to find optimal or near optimal values of β, ω and r for the preconditioned AOR method.

We also proposed how to find a near optimal parameter β for which Krylov subspace method with the preconditioners Pl, Pu and Pb performs nearly best. Numerical experiments showed that BiCGSTAB with the preconditioner Pb and a near optimal parameter β performs much better than with the ILU(0) preconditioner, which is one of the most powerful preconditioners in common use. It was also seen that BiCGSTAB with the preconditioners Pl and Pu and near optimal parameters β performs better than with the preconditioner (I − L)−1 (see Table 3). Notice that a near optimal parameter β proposed in Section 5 can be easily computed in MATLAB.

References

  1. A. Berman and R.J. Plemmons, Nonnegative matrices in the mathematical sciences, SIAM, Philadelphia, PA, 1994.
  2. A. Gunawardena, S. Jain, and L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154/156 (1991) 123-143. https://doi.org/10.1016/0024-3795(91)90376-8
  3. A. Hadjidimos, Accelerated overrelaxation method, Math. Comp. 32 (1978) 149-157. https://doi.org/10.1090/S0025-5718-1978-0483340-6
  4. H. Kotakemori, H. Niki, and N. Okamoto, Accelerated iterative method for Z-matrices, J. Comput. Appl. Math. 75 (1996) 87-97. https://doi.org/10.1016/S0377-0427(96)00061-1
  5. Y. Li, C. Li and S. Wu, Improving AOR method for consistent linear systems, Appl. Math. Comput. 186 (2007) 379-388. https://doi.org/10.1016/j.amc.2006.07.097
  6. W. Li and W. Sun, Modified Gauss-Seidel methods and Jacobi type methods for Z-matrices, Linear Algebra Appl. 317 (2000) 223-240.
  7. J.P. Milaszewicz, Improving Jacobi and Gauss-Seidel iterations, Linear Algebra Appl. 93 (1987) 161-170. https://doi.org/10.1016/S0024-3795(87)90321-1
  8. M. Neumann and R.J. Plemmons, Convergence of parallel multisplitting iterative methods for M-matrices, Linear Algebra Appl. 88/89 (1987) 559-573. https://doi.org/10.1016/0024-3795(87)90125-X
  9. A.Y. Ozban, On accelerated iterative methods for the solution of systems of linear equations, Intern. J. Computer Math. 79 (2002) 765-773. https://doi.org/10.1080/00207160211290
  10. Y. Saad, Iterative methods for sparse linear systems, PWS Publishing Company, Boston, 1996.
  11. M. Usui, H. Niki and T. Kohno, Adaptive Gauss-Seidel method for linear systems, Intern. J. Computer Math. 51 (1994) 119-125. https://doi.org/10.1080/00207169408804271
  12. H.A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 13, 631-644 (1992). https://doi.org/10.1137/0913035
  13. R.S. Varga, Matrix iterative analysis, Springer, Berlin, 2000.
  14. L. Wang and Y. Song, Preconditioned AOR iterative method for M-matrices, J. Comput. Appl. Math. 226 (2009) 114-124. https://doi.org/10.1016/j.cam.2008.05.022
  15. M. Wu, L. Wang and Y. Song, Preconditioned AOR iterative method for linear systems, Applied Numerical Mathematics 57 (2007) 672-685. https://doi.org/10.1016/j.apnum.2006.07.029
  16. D.M. Young, Iterative solution of large linear systems, Academic Press, New York, 1971.
  17. J.H. Yun, Comparison results of the preconditioned AOR methods for L-matrices, Appl. Math. Comput. 218 (2011) 3399-3413. https://doi.org/10.1016/j.amc.2011.08.085
