ON THE PURE IMAGINARY QUATERNIONIC LEAST SQUARES SOLUTIONS OF MATRIX EQUATION

  • WANG, MINGHUI (Department of Mathematics, Qingdao University of Science and Technology)
  • ZHANG, JUNTAO (Department of Mathematics, Qingdao University of Science and Technology)
  • Received : 2015.03.23
  • Accepted : 2015.06.29
  • Published : 2016.01.30

Abstract

In this paper, based on the classical LSQR algorithm for solving the least squares (LS) problem, an iterative method is proposed for finding the minimum-norm pure imaginary solution of the quaternionic least squares (QLS) problem. By means of the real representation of quaternion matrices, the vector-form algorithm for the QLS problem is rewritten as a matrix-form algorithm that avoids the Kronecker product and long vectors. Finally, numerical examples are reported that show the favorable numerical properties of the method.

1. Introduction

Let R, Q = R + Ri + Rj + Rk and IQm×n denote the real number field, the quaternion field and the set of all m × n pure imaginary quaternion matrices, respectively, where i² = j² = k² = −1 and ij = −ji = k. For any x = x1 + x2i + x3j + x4k ∈ Q, the conjugate of the quaternion x is x̄ = x1 − x2i − x3j − x4k.

Let Fm×n denote the set of m × n matrices over F. For any A ∈ Fm×n, AT, Ā and AH denote the transpose, conjugate and conjugate transpose of A, respectively; A(i : j, k : l) denotes the submatrix of A formed by the intersection of rows i to j and columns k to l.

For any A = (a1, . . . , an) ∈ Fm×n, define vec(A) = (a1T, a2T, . . . , anT)T. The inverse mapping of vec(·) from Rmn to Rm×n, denoted by mat(·), satisfies mat(vec(A)) = A.
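As an illustration (in Python/NumPy rather than the Matlab used in Section 4), the pair vec(·) and mat(·) amounts to column-major reshaping; the function names below simply mirror the notation above:

```python
import numpy as np

def vec(A):
    # stack the columns of A into one long vector (column-major order)
    return A.ravel(order="F")

def mat(v, m, n):
    # inverse mapping of vec: mat(vec(A)) == A for A of size m x n
    return v.reshape((m, n), order="F")
```

The column-major order matches the convention vec(A) = (a1T, . . . , anT)T.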

Quaternions and quaternion matrices have many applications in quaternionic quantum mechanics and field theory. Based on the study of [5], we also discuss the quaternion matrix equation

AX = B,  (1)

where A and B are given matrices of suitable size over the quaternion field. In this paper, we derive an operable iterative method for finding the minimum-norm pure imaginary solution of the QLS problem, which is better suited to large-scale systems.

Many authors have studied the matrix equation (1) and other constrained matrix equations; see [1,2,12,13,14], etc. For real, complex and quaternion matrix equations there are many results; see [3,4,5,6,7,8,9,10], etc.

In [5], the least squares pure imaginary solution with the least norm of the quaternion matrix equation (1) was given by using the complex representation of quaternion matrices and the Moore-Penrose inverse. For A = A1 + A2j ∈ Qs×m and B = B1 + B2j ∈ Qs×n, let Q1 = Re(Q), Q2 = Im(Q) and E1 = (Re(B1) Re(B2) Im(B1) Im(B2))T,

The set of solutions JL is then expressed as

where Y is an arbitrary matrix of appropriate size. However, this method is not easy to realize for large-scale systems, which motivated us to find an operable iterative method. Au-Yeung and Cheng [6] also studied the pure imaginary quaternionic solutions of the Hurwitz matrix equations.

Firstly, let us review the real least squares problem. In the LS problem, given A ∈ Rm×n and B ∈ Rm×p, one seeks a real matrix X ∈ Rn×p such that

║AX − B║F = min,  (2)

where ║·║F denotes the Frobenius norm. The unique minimum-norm solution of the LS problem is given by

X = A†B,

where A† denotes the Moore-Penrose inverse of A.
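A minimal NumPy check of this fact (the matrices A and B below are hypothetical example data): for a rank-deficient A, both the pseudoinverse formula X = A†B and np.linalg.lstsq return the same minimum-norm LS solution.

```python
import numpy as np

# a rank-1 matrix, so the LS problem has infinitely many solutions
A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])
B = np.array([[1.],
              [2.],
              [3.]])

X_pinv = np.linalg.pinv(A) @ B                  # X = A†B, minimum-norm solution
X_lstsq = np.linalg.lstsq(A, B, rcond=None)[0]  # LAPACK gelsd, also minimum-norm
```

Both computations pick, among all minimizers of ║AX − B║F, the one with the smallest Frobenius norm.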

 

2. Preliminary

For any A = A1 + A2i + A3j + A4k ∈ Qm×n with Al ∈ Rm×n (l = 1, 2, 3, 4), define

        [ A1  −A2  −A3  −A4 ]
  AR =  [ A2   A1  −A4   A3 ]        (3)
        [ A3   A4   A1  −A2 ]
        [ A4  −A3   A2   A1 ].
The real matrix AR is known as the real representation of the quaternion matrix A. The set of all matrices of the form (3) will be denoted accordingly. Obviously, the correspondence between Qm×n and this set is one-to-one.
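The following NumPy sketch builds a real representation and numerically checks the multiplicative property (AC)R = ARCR of Proposition 2.1 below; the block sign pattern is one common convention and may differ from (3) by a signed permutation of the blocks, and qmul is just the componentwise quaternion matrix product written out:

```python
import numpy as np

def real_rep(A1, A2, A3, A4):
    """Real representation of A = A1 + A2 i + A3 j + A4 k (assumed sign convention)."""
    return np.block([[A1, -A2, -A3, -A4],
                     [A2,  A1, -A4,  A3],
                     [A3,  A4,  A1, -A2],
                     [A4, -A3,  A2,  A1]])

def qmul(a, b):
    """Quaternion matrix product, componentwise, using ij = -ji = k etc."""
    A1, A2, A3, A4 = a
    B1, B2, B3, B4 = b
    return (A1 @ B1 - A2 @ B2 - A3 @ B3 - A4 @ B4,
            A1 @ B2 + A2 @ B1 + A3 @ B4 - A4 @ B3,
            A1 @ B3 - A2 @ B4 + A3 @ B1 + A4 @ B2,
            A1 @ B4 + A2 @ B3 - A3 @ B2 + A4 @ B1)
```

With this convention, real_rep(*qmul(a, b)) coincides with real_rep(*a) @ real_rep(*b), i.e. the mapping is multiplicative.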

Let

Then Pt, Qt, Rt, St are unitary matrices, and from the definition of the real representation we can obtain the following results, which were given by T. Jang [4] and M. Wang [8].

Proposition 2.1. Let A,B ∈ Qm×n, C ∈ Qn×s, α ∈ R. Then

Remark 2.1. From property (a) above, we know that the real representation mapping on Qm×n is an isomorphism.

Theorem 2.2. For any V ∈ R4m×n, (V,QmV,RmV,SmV) is a real representation matrix of some quaternion matrix.

The quaternion matrix norm defined in [8], denoted by ║·║(F), can be shown to be a natural generalization of the Frobenius norm for complex matrices; it has the following properties:

Then we review the LSQR algorithm proposed in [11] for solving the following LS problem:

minx ║Mx − f║2,  (4)

with given M ∈ Rm×n and vector f ∈ Rm, whose normal equation is

MTMx = MTf.  (5)
The algorithm is summarized as follows.

Algorithm LSQR

We can choose

║MT(f − Mxk)║2 ≤ τ

as the convergence criterion, where τ > 0 is a small tolerance. Obviously, there is no storage requirement for the vectors vi and ui.

We can easily obtain the following result: if the normal equation (5) has a solution x∗ ∈ R(MTM) = R(MT), then the x∗ generated by Algorithm LSQR is the minimum-norm solution of (4). Hence the solution generated by Algorithm LSQR is the minimum-norm solution of problem (4). Moreover, it was shown in [11] that this method is numerically reliable even if M is ill-conditioned.
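As a hedged sketch of the method (a bare-bones transcription of the Paige-Saunders recurrences, not the paper's exact pseudocode), LSQR in NumPy reads:

```python
import numpy as np

def lsqr_minimal(M, f, tol=1e-12, maxit=1000):
    """Minimal LSQR sketch for min ||Mx - f||_2 (f assumed nonzero).

    Started from x = 0, it converges to the minimum-norm LS solution.
    """
    n = M.shape[1]
    x = np.zeros(n)
    # Golub-Kahan bidiagonalization initialization
    beta = np.linalg.norm(f)
    u = f / beta
    v = M.T @ u
    alpha = np.linalg.norm(v)
    v = v / alpha
    w = v.copy()
    phi_bar, rho_bar = beta, alpha
    for _ in range(maxit):
        # next bidiagonalization step
        u = M @ v - alpha * u
        beta = np.linalg.norm(u)
        if beta > 0:
            u = u / beta
        v_next = M.T @ u - beta * v
        alpha = np.linalg.norm(v_next)
        if alpha > 0:
            v = v_next / alpha
        # Givens rotation eliminating beta from the bidiagonal factor
        rho = np.hypot(rho_bar, beta)
        c, s = rho_bar / rho, beta / rho
        theta = s * alpha
        rho_bar = -c * alpha
        phi = c * phi_bar
        phi_bar = s * phi_bar
        # update the iterate and the search direction
        x = x + (phi / rho) * w
        w = v - (theta / rho) * w
        # stopping rule ||M^T(f - M x_k)|| <= tol, as in the text
        if np.linalg.norm(M.T @ (f - M @ x)) <= tol:
            break
    return x
```

In practice one would use a production implementation (e.g. scipy.sparse.linalg.lsqr); the sketch only exposes the recurrences referred to above.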

 

3. The matrix-form LSQR method for QLS problem

In this section, based on the quaternion matrix norm introduced in Section 2, we give the quaternionic least squares (QLS) problem:

minX∈IQn×p ║AX − B║(F),  (6)

with given matrices A ∈ Qm×n and B ∈ Qm×p. Then we find that problem (6) is equivalent to

min ║ARXR − BR║F,  (7)

which is a constrained LS problem with the given matrices AR ∈ R4m×4n and BR ∈ R4m×4p.

Next, we will deduce an iterative method to find the pure imaginary quaternionic solution of the QLS problem (6). For any

X = X2i + X3j + X4k ∈ IQn×p, Xl ∈ Rn×p (l = 2, 3, 4),

define

veci(XR) = (vec(X2)T vec(X3)T vec(X4)T)T ∈ R3np.

Obviously, there is a one-to-one linear mapping from the long-vector space vec(R4n×4p) to the independent parameter space veci(R3n×p). Let Ƒ denote the pure imaginary quaternionic constrained matrix which defines the linear mapping from veci(R3n×p) to vec(R4n×4p), that is,

vec(XR) = Ƒ veci(XR).

Theorem 3.1. Suppose Ƒ is a pure imaginary quaternionic constrained matrix, then

where

Proof. First, we know

and

Hence, we have

Therefore, let

and from the above we have

Then because of

we know that Ƒ is of full column rank and

that is

Because

vec(ARXR − BR) = (I4p ⊗ AR)Ƒ veci(XR) − vec(BR),

where M ⊗ N denotes the Kronecker product of the matrices M and N, the QLS problem (6) is equivalent to

minx ║Mx − f║2  (8)

with

M = (I4p ⊗ AR)Ƒ,  f = vec(BR).  (9)

Now we apply Algorithm LSQR to problem (8); its vector iteration will be transformed into matrix form so that the Kronecker product and Ƒ can be avoided. That is, we transform the matrix-vector products Mv and MTu back into matrix-matrix form, so that the vectors v and u required in Algorithm LSQR become matrices V and U, respectively.

Let mat(α) represent the matrix form of a vector α. For any v ∈ R3np and u = vec(U) ∈ R16mp with U ∈ R4m×4p, let

Then we have

where

Therefore, we can get the following algorithm.

Algorithm LSQR-P.

Algorithm LSQR-P can compute the minimum-norm solution x = veci(XR) of (8), that is

Again,

so we have the following result.

Theorem 3.2. The solution generated by Algorithm LSQR-P is the minimum-norm solution of problem (6).

 

4. Numerical examples

In this section, we give three examples to illustrate the efficiency and investigate the performance of Algorithm LSQR-P, which is shown to be numerically reliable in various circumstances. All computations are performed in Matlab 7.0.

Example 4.1. Given [m, n, p] = N, A = A1 + A2i + A3j + A4k, X = X1 + X2i + X3j + X4k, B = AX, with A1,A2,A3,A4 defined by rand(m, n) respectively. Given X1 = zeros(n, p) and X2,X3,X4 defined by rand(n, p) respectively. Then Fig. 4.1 plots the relation between error εk = log10(║AX − B║(F)) and iteration number K.

Fig. 4.1 The relation between the error εk and the iteration number K with different N

Notice that in the above case the equation AX = B is consistent and has a unique solution. From Fig. 4.1 we find that our algorithm is effective.
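The construction of this consistent test problem can be sketched in NumPy (using a real representation with an assumed block sign convention, and an unconstrained lstsq solve as a stand-in for Algorithm LSQR-P, since the pure-imaginary constraint is automatically satisfied when the consistent system has a unique solution):

```python
import numpy as np

def real_rep(A1, A2, A3, A4):
    # real representation with an assumed block sign convention (cf. Section 2)
    return np.block([[A1, -A2, -A3, -A4],
                     [A2,  A1, -A4,  A3],
                     [A3,  A4,  A1, -A2],
                     [A4, -A3,  A2,  A1]])

rng = np.random.default_rng(0)
m = n = p = 5
A = [rng.random((m, n)) for _ in range(4)]                        # A = A1+A2i+A3j+A4k
X = [np.zeros((n, p))] + [rng.random((n, p)) for _ in range(3)]   # pure imaginary X
AR, XR = real_rep(*A), real_rep(*X)
BR = AR @ XR                                  # B = AX, so AX = B is consistent

# surrogate solve: min ||AR*Y - BR||_F recovers XR when AR is nonsingular
Y = np.linalg.lstsq(AR, BR, rcond=None)[0]
eps = np.log10(np.linalg.norm(AR @ Y - BR))   # analogue of the plotted error εk
```

This only checks that the real-representation formulation recovers the constructed solution; the iteration history of εk in Fig. 4.1 comes from Algorithm LSQR-P itself.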

Example 4.2. Given [m, n, p] = N, A = A1 + A2i + A3j + A4k, B = B1 + B2i + B3j + B4k, with A1, A2, A3, A4 defined by rand(m, n) and B1, B2, B3, B4 defined by rand(m, p), respectively. Let ηk = log10(║MT(Mxk − f)║2), where M and f are defined by (9). Then Fig. 4.2 plots the relation between the error ηk and the iteration number K.

Fig. 4.2 The relation between the error ηk and the iteration number K with different N

Notice that in the above case the equation AX = B is not consistent, and we use ηk = ║MT(f − Mxk)║2 ≤ τ = 10−12 as the convergence criterion. From Fig. 4.2, we also find that our algorithm works well.

Example 4.3. Given m = n = p = 10, A = A1 + A2i + A3j + A4k, X = X1 + X2i + X3j + X4k, B = AX, with A1 = hilb(m), A2 = pascal(m), A3 = ones(m, n), A4 = pascal(m). Given X1 = zeros(n, p) and X2, X3, X4 defined by rand(n, p) respectively. In this case, the condition number of M is 3.9927 × 109, therefore this system is ill-conditioned. Then Fig. 4.3 plots the relation between error εk = log10(║AX − B║(F) ), ηk = log10(║X − Xk║F/║X║F) and iteration number K.

Fig. 4.3 The relation between the errors ηk, εk and the iteration number K

Notice that equation (1) is consistent and has a unique solution. The algorithm's performance degrades when the system is very ill-conditioned, but from Fig. 4.3 we find that it is still effective.

References

  1. L. Wu and B. Cain, The re-nonnegative definite solutions to the matrix inverse problem AX = B, Linear Algebra Appl. 236 (1996), 137-146. https://doi.org/10.1016/0024-3795(94)00142-1
  2. C.J. Meng, X.Y. Hu and L. Zhang, The skew symmetric orthogonal solution of the matrix equation AX = B, Linear Algebra Appl. 402 (2005), 303-318. https://doi.org/10.1016/j.laa.2005.01.022
  3. Zhigang Jia, Musheng Wei and Sitao Ling, A new structure-preserving method for quaternion Hermitian eigenvalue problems, J. Comput. Appl. Math. 239 (2013), 12-24. https://doi.org/10.1016/j.cam.2012.09.018
  4. T. Jang and L. Chen, Algebraic algorithm for least squares problem in quaternionic quantum theory, Comput. Phys. Comm. 176 (2007), 481-485. https://doi.org/10.1016/j.cpc.2006.12.005
  5. Shifang Yuan, Qingwen Wang and Xuefeng Duan, On solutions of the quaternion matrix equation AX = B and their application in color image restoration, Appl. Math. Comput. 221 (2013), 10-20. https://doi.org/10.1016/j.amc.2013.05.069
  6. Y.H. Au-Yeung and C.M. Cheng, On the pure imaginary quaternionic solutions of the Hurwitz matrix equations, Linear Algebra Appl. 419 (2006), 630-642. https://doi.org/10.1016/j.laa.2006.06.005
  7. Q.W. Wang and C.K. Li, Ranks and the least-norm of the general solution to a system of quaternion matrix equation, Linear Algebra Appl. 430 (2009), 1626-1640. https://doi.org/10.1016/j.laa.2008.05.031
  8. M.H. Wang, M.S. Wei and Y. Feng, An iterative algorithm for least squares problem in quaternionic quantum theory, Comput. Phys. Comm. 179 (2008), 203-207. https://doi.org/10.1016/j.cpc.2008.02.016
  9. T. Jang, Algebraic methods for diagonalization of a quaternion matrix in quaternionic quantum theory, J. Math. Phys. 46 (2005), 052106. https://doi.org/10.1063/1.1896386
  10. M. Wang, On positive definite solutions of quaternionic matrix equations, World Academy of Science Engineering and Technology, 37 (2010), 535-537.
  11. C.C. Paige and M.A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Software 8 (1982), 43-71.
  12. F. Toutounian and S. Karimi, Global least squares method (Gl-LSQR) for solving general linear systems with several right-hand sides, Appl. Math. Comput. 178 (2006), 452-460. https://doi.org/10.1016/j.amc.2005.11.065
  13. Sitao Ling, Hermitian tridiagonal solution with the least norm to quaternionic least squares problem, Comput. Phys. Comm. 181 (2010), 481-488. https://doi.org/10.1016/j.cpc.2009.10.019
  14. Z.Y. Peng, A matrix LSQR iterative method to solve matrix equation AXB = C, International Journal of Computer Mathematics 87 (2010), 1820-1830. https://doi.org/10.1080/00207160802516875