1. Introduction
In current multiuser wireless networks, where resources such as frequency, time, and space are shared among different users, interference arises and becomes the bottleneck of system throughput. Many researchers have focused on interference management techniques to eliminate interference [1] and to maximize the sum rate [2], and interference alignment (IA) is a novel scheme that can mitigate interference and thus increase the total throughput significantly [3]. Cadambe and Jafar showed in [4] that the degrees of freedom (DoFs) of the multiple-input multiple-output (MIMO) communication system scale proportionally with the number of users when IA is employed in the high signal-to-noise ratio (SNR) regime. The essential idea of IA is to separate the desired signal and the interference into different signal subspaces by using proper precoding matrices, and to reconstruct the former while suppressing the latter by using proper decoding matrices [5]. Owing to its promising performance in eliminating interference and increasing the sum rate, IA has become an intensive research topic in recent years [6-8].
The feasibility condition for the K-user IA system with M transmitting antennas, N receiving antennas, and d DoFs was provided by Yetis et al. in [9] as K≤(M+N)/d-1. However, closed-form IA solutions can be obtained only in a few very special situations [10], and finding the optimal IA solution remains an open problem. Schmidt et al. analyzed these challenges in [11] and noted that the number of IA solutions increases dramatically with the number of users and antennas. In addition, the sum rate achieved by different IA solutions varies considerably, and a brute-force search is obviously infeasible due to the large computational cost. Furthermore, this problem was proved to be non-convex and NP-hard in [12]. Therefore, many researchers have developed different strategies to determine sub-optimal IA solutions. Gomadam et al. developed the minimizing interference leakage (MinIL) algorithm in [13] to seek perfect IA points, and its convergence was proved. However, the MinIL algorithm does not consider the direct channel and might suffer from a loss of desired signal power. Therefore, a maximal signal-to-interference-plus-noise ratio (Max-SINR) algorithm was proposed in [14], which aims at maximizing the SINR of each data stream by designing proper precoding and decoding matrices. The minimum mean square error (MMSE) criterion was introduced in [15] and [16] to optimize the sum rate in the broadcast and interference channels, respectively. Shrestha et al. developed the maximal signal-to-leakage-and-noise ratio (Max-SLNR) algorithm in [17], which offers the best sum capacity in the low and moderate SNR regimes, stable performance with respect to antenna order, and no requirement of channel reciprocity. The gradient descent method was leveraged in [18] to maximize the sum rate. An excellent survey of current IA techniques was conducted by Schmidt et al. in [11], where the objectives, convergence performance, and sum rate of different beamforming strategies are compared in great detail.
Among the current IA algorithms, the Max-SINR method can achieve a high sum rate and has received considerable attention [11,19]. As noted in [11], the Max-SINR algorithm can achieve the maximum DoFs and converges faster than many other algorithms such as MMSE. In addition, the throughput achieved by the Max-SINR algorithm is considerably higher than that of the MinIL algorithm in the low and medium SNR regimes. However, the number of iterations and the computational time required by the Max-SINR algorithm might increase dramatically when the number of users K approaches the IA feasibility bound (M+N)/d-1, especially for larger M, N, and K. In practical systems, the precoding and decoding matrices must be obtained before the channel state information (CSI) changes, and longer computation times might cause misalignment and a loss of sum rate when outdated precoding and decoding matrices are applied.
In this paper, we focus on developing a low-complexity Max-SINR algorithm with rapid convergence. The main contributions of this paper are summarized as follows:
1. The behavior of the Max-SINR algorithm in the neighborhood of the convergence solution is analyzed thoroughly, and it is found that the relationship between consecutive iteration deviations can be approximately expressed by linear transformations.
2. The principle direction of the transformations is introduced, which approximately points to the convergence solution.
3. A novel principle direction search (PDS) algorithm is proposed, which can increase the convergence rate and reduce the overall computational cost significantly.
The remaining parts of the paper are organized as follows. In Section 2, the system model is presented. The properties of the traditional Max-SINR algorithm are investigated in Section 3. The proposed PDS algorithm is developed in Section 4. In Section 5, simulation results are provided, followed by the conclusions in Section 6.
Notation: Re{A}, Im{A}, AT, A*, AH, Tr[A], ||A||, and A(l) represent the real part, imaginary part, transpose, conjugate, conjugate transpose, trace, Frobenius norm, and the lth column of matrix A, respectively. Ak^[j] denotes the value of matrix Ak at the jth iteration. In particular, Ak^[j](l) is the lth column of matrix Ak obtained at the jth iteration, and Ak^H(l) is the conjugate transpose of Ak(l). We use ℝ and ℂ to represent the sets of real and complex numbers, respectively. I, ℂM×N, and ℂℕ(μ, Δ) denote the identity matrix, M×N complex matrix, and complex Gaussian vector with mean μ and covariance matrix Δ, respectively. The notation (M×N, d)K is employed to represent a MIMO interference channel with K users, M transmitting antennas, N receiving antennas, and d data streams.
2. System Model
In this paper, we mainly concentrate on the (M×N, d)K channels. For the kth user, the transmitted symbol vector sk∈ℂd×1 is first precoded by the transmitting beamforming matrix Pk∈ℂM×d, and the transmitted signal vector xk∈ℂM×1 can be obtained as:
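In the standard linear precoding model described here (up to a possible transmit-power normalization that is omitted), this relation reads

\[ \mathbf{x}_k = \mathbf{P}_k \mathbf{s}_k , \qquad k = 1, \ldots, K . \]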
The transmitted signal vectors of all users will be sent to the receivers, and the received signal vector of the kth user yk∈ℂN×1 can be expressed as:
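With the channel matrices and the noise defined in the following sentence, this takes the usual interference-channel form

\[ \mathbf{y}_k = \sum_{j=1}^{K} \mathbf{H}_{kj} \mathbf{x}_j + \mathbf{z}_k = \mathbf{H}_{kk} \mathbf{P}_k \mathbf{s}_k + \sum_{j \neq k} \mathbf{H}_{kj} \mathbf{P}_j \mathbf{s}_j + \mathbf{z}_k . \]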
where zk∈ℂN×1 represents the received noise vector with zk~ℂℕ(0, σ2I) and HkjℂN×M denotes the channel coefficient matrix from transmitter j to receiver k. In this paper, perfect CSI is assumed to be known by each user, and the reader can refer to [20] and [21] for more information on CSI. The receiver k employs the decoding matrix Dk∈ℂN×d to extract the desired symbol vector and to suppress the interference from other users. The final reconstructed symbol vector of the kth user can be attained as:
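Applying the decoder to yk and grouping the terms as in the following sentence, a standard form of this expression is

\[ \hat{\mathbf{s}}_k = \mathbf{D}_k^H \mathbf{y}_k = \mathbf{D}_k^H \mathbf{H}_{kk} \mathbf{P}_k \mathbf{s}_k + \sum_{j \neq k} \mathbf{D}_k^H \mathbf{H}_{kj} \mathbf{P}_j \mathbf{s}_j + \mathbf{D}_k^H \mathbf{z}_k . \]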
where the three terms on the right-hand side are the desired symbol vector, the residual interference, and the noise, respectively.
The interference in the network can be eliminated completely if the precoding and decoding matrices satisfy the following conditions:
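In the notation of this paper, these are the standard perfect-alignment conditions of [4] and [13]:

\[ \mathbf{D}_k^H \mathbf{H}_{kj} \mathbf{P}_j = \mathbf{0}, \quad \forall j \neq k, \qquad \operatorname{rank}\!\left( \mathbf{D}_k^H \mathbf{H}_{kk} \mathbf{P}_k \right) = d . \]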
However, when perfect IA cannot be achieved, interference leakage appears, and it can be given by:
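A common form of the total leakage (up to the per-stream power normalization, which is omitted here) is

\[ L = \sum_{k=1}^{K} \sum_{j \neq k} \left\| \mathbf{D}_k^H \mathbf{H}_{kj} \mathbf{P}_j \right\|^2 . \]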
In addition, the desired signal and the interference-plus-noise covariance matrices of the lth data stream of receiver k are defined, respectively, as [14]:
Thus the SINR of the lth data stream of receiver k can be calculated as:
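Following the per-stream formulation of [14] (with the per-stream transmit power absorbed into the precoders for brevity), these quantities can be written as

\[ \mathbf{A}_{kl} = \mathbf{H}_{kk} \mathbf{P}_k(l) \mathbf{P}_k^H(l) \mathbf{H}_{kk}^H , \qquad \mathbf{B}_{kl} = \sum_{j=1}^{K} \sum_{m=1}^{d} \mathbf{H}_{kj} \mathbf{P}_j(m) \mathbf{P}_j^H(m) \mathbf{H}_{kj}^H - \mathbf{A}_{kl} + \sigma^2 \mathbf{I} , \]

\[ \mathrm{SINR}_{kl} = \frac{ \mathbf{D}_k^H(l) \, \mathbf{A}_{kl} \, \mathbf{D}_k(l) }{ \mathbf{D}_k^H(l) \, \mathbf{B}_{kl} \, \mathbf{D}_k(l) } . \]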
According to the Max-SINR algorithm [14], the decoding vector that maximizes (10) can be given by:
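As shown in [14], this unit-norm maximizer is

\[ \mathbf{D}_k(l) = \frac{ \mathbf{B}_{kl}^{-1} \mathbf{H}_{kk} \mathbf{P}_k(l) }{ \left\| \mathbf{B}_{kl}^{-1} \mathbf{H}_{kk} \mathbf{P}_k(l) \right\| } . \]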
Using channel reciprocity, the channel coefficient matrices, precoding matrices, and decoding matrices in the reciprocal channel are defined, respectively, as:
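One common convention, following [13] and [14] (the exact index convention in the original equations may differ), is

\[ \overline{\mathbf{H}}_{jk} = \mathbf{H}_{kj}^H , \qquad \overline{\mathbf{P}}_k = \mathbf{D}_k , \qquad \overline{\mathbf{D}}_k = \mathbf{P}_k , \]

i.e., the reciprocal channel reverses the roles of transmitters and receivers, and the forward decoders serve as the reciprocal precoders.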
Similarly, the desired signal and interference-plus-noise covariance matrices in the reciprocal channel can be attained accordingly.
In this paper, we focus on increasing the convergence rate and reducing the computational complexity of the conventional Max-SINR algorithm in the high SNR regime. The properties of the Max-SINR algorithm are studied, and a rapid convergent IA algorithm is proposed.
3. Properties of the Traditional Max-SINR Algorithm
The Max-SINR algorithm is a promising method to approach the channel capacity, and the sum rate it achieves exceeds that of many other algorithms. In the low to medium SNR regimes, where the sum rate is dominated by noise instead of interference, the Max-SINR algorithm can achieve a significantly higher sum rate than many other algorithms. However, in the high SNR regime, the throughput is determined by interference rather than noise, and the Max-SINR algorithm has to align the interference properly at the cost of considerably more iterations and computational time. The large computational cost and long iteration time might constitute a major impediment to its application in practical systems. Therefore, the properties of the Max-SINR algorithm in the high SNR regime are studied in this section to provide the theoretical foundation of the rapid convergent algorithm proposed later.
3.1 The Mechanism of the Max-SINR Algorithm
The Max-SINR algorithm, as shown in Algorithm 1, leverages channel reciprocity and maximizes the SINR of each data stream alternately. It is mainly composed of two procedures, i.e., the forward and backward iterations. The most important point of the algorithm lies in the design of the precoding and decoding matrices. Gomadam et al. [14] did not provide the design mechanism of these matrices, and we present Theorem 1 to explain that the maximal SINR problem is essentially the optimization of the generalized Rayleigh quotient of matrices Akl and Bkl.
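For concreteness, a minimal numerical sketch of the forward sweep is given below (assuming unit per-stream transmit power and numpy conventions; forward_maxsinr_sweep, H[k][j] for Hkj, and P[k] for the precoder of user k are hypothetical names introduced only for this sketch). The backward sweep repeats the same update in the reciprocal channel, i.e., with the conjugate-transposed channels and with the current decoders playing the role of precoders.

import numpy as np

def forward_maxsinr_sweep(H, P, sigma2):
    # One forward Max-SINR sweep (sketch): for every receiver k and stream l,
    # build the interference-plus-noise covariance B_kl and set the decoding
    # column to the normalized B_kl^{-1} H_kk P_k(l), as in (11).
    K, d, N = len(P), P[0].shape[1], H[0][0].shape[0]
    D = []
    for k in range(K):
        Dk = np.zeros((N, d), dtype=complex)
        for l in range(d):
            B = sigma2 * np.eye(N, dtype=complex)
            for j in range(K):
                for m in range(d):
                    if j == k and m == l:
                        continue  # exclude the desired stream itself
                    v = H[k][j] @ P[j][:, [m]]
                    B += v @ v.conj().T
            u = np.linalg.solve(B, H[k][k] @ P[k][:, [l]])
            Dk[:, [l]] = u / np.linalg.norm(u)
        D.append(Dk)
    return D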
Theorem 1: The unit-norm vector Dk(l) in (11) is the generalized eigenvector associated with the maximal generalized eigenvalue of matrices Akl and Bkl. In particular, SINRkl in (10) is the maximal generalized eigenvalue. Similarly, this theorem holds for the reciprocal channel.
Proof: From (8) and (11), we have
Define
and (17) can be rewritten as:
Substituting (11) into (19), the following equation is satisfied:
Therefore, θkl is the generalized eigenvalue of matrices Akl and Bkl with the corresponding generalized eigenvector Dk(l). In particular, θkl is the generalized Rayleigh quotient and equals SINRkl.
Left-multiplying both sides of (20) by the inverse of Bkl, the generalized eigenvalue problem becomes equivalent to the ordinary eigenvalue problem, i.e.,
Thus, θkl and Dk(l) are the eigenvalue and the associated eigenvector of the matrix Bkl^-1Akl, respectively. As HkkPk(l)∈ℂN×1, we have rank(HkkPk(l)) = rank((HkkPk(l))H) = 1. Then,
Therefore, rank(Akl) = 1. Because rank(Bkl^-1Akl) = rank(Akl) = 1, Bkl^-1Akl has only one nonzero eigenvalue. Equivalently, there is only one nonzero generalized eigenvalue of matrices Akl and Bkl. As θkl equals SINRkl, it follows that θkl>0. Therefore, θkl is the only nonzero generalized eigenvalue of Akl and Bkl, and it is obviously the maximal Rayleigh quotient and the maximal SINR. Similarly, the proof above can be extended to the reciprocal channel to show that Pk(l) serves as the generalized eigenvector associated with the maximal generalized eigenvalue of the corresponding covariance matrices in the reciprocal channel.
Theorem 1 converts the SINR optimization into a generalized eigenvalue problem, and some interesting properties can be explored based on this theorem.
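As a numerical illustration of Theorem 1 (a self-contained sketch with randomly generated stand-ins for Hkk, Pk(l), and Bkl, not taken from the paper's simulations), the decoder of (11) can be checked against the dominant generalized eigenvector of (Akl, Bkl):

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N, M = 4, 4
Hkk = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
pkl = rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))
pkl /= np.linalg.norm(pkl)
G = rng.standard_normal((N, 3)) + 1j * rng.standard_normal((N, 3))
B = G @ G.conj().T + 0.1 * np.eye(N)   # Hermitian positive definite stand-in for B_kl
q = Hkk @ pkl
A = q @ q.conj().T                     # rank-one desired-signal covariance A_kl

w, V = eigh(A, B)                      # generalized eigenproblem A v = lambda B v
v_max = V[:, [-1]] / np.linalg.norm(V[:, [-1]])
d_11 = np.linalg.solve(B, q)           # decoder of (11), before normalization
d_11 /= np.linalg.norm(d_11)

# The decoder matches the dominant generalized eigenvector up to a phase factor,
# and the resulting Rayleigh quotient equals the largest generalized eigenvalue.
print(np.isclose(np.abs((v_max.conj().T @ d_11).item()), 1.0))   # True
sinr = ((d_11.conj().T @ A @ d_11) / (d_11.conj().T @ B @ d_11)).item().real
print(np.isclose(w[-1], sinr))                                    # True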
3.2 The Principle Direction of the Max-SINR Algorithm
In this section, we develop the principle direction of the Max-SINR method, which will serve as the foundation of the rapid convergent algorithm proposed later. Based on Theorem 1, we continue to develop Theorem 2 to show that the relationship between consecutive iteration deviations of the Max-SINR algorithm can be approximately expressed by iteration-independent linear transformations. Properties of the transformations are then studied in Theorem 3 and Property 4. From these theorems, the principle direction of the transformations is introduced in Theorem 5, and its mechanism for increasing the convergence rate is verified. Finally, because the principle direction cannot be attained directly, a convenient method to approximate it is developed in Theorem 6.
Before introducing Theorem 2, the following notation has to be defined. Define the convergence values of the precoding and decoding matrices of each user. The deviations of the ith-iteration precoding and decoding matrices from these convergence values satisfy
Next, several column vectors are constructed to represent the overall precoding and decoding vectors as follows:
In addition, we can separate the real and imaginary parts of the vectors above to construct the associated real vectors, i.e.,
It is obvious that the following relations can be satisfied:
The change in the overall deviations is studied, and the following theorem can be given:
Theorem 2: In the high SNR regime, if the Max-SINR algorithm can converge to a near-IA point, there exist iteration-independent transformations TP∈ℝ2MKd×2MKd and TD∈ℝ2NKd×2NKd that act on the precoding and decoding deviation vectors, respectively. When the iteration results are located within small neighborhoods of the convergence solutions, the following approximation can be obtained:
Proof: See Appendix A.
It is obvious that TD and TP determine the behavior of the Max-SINR algorithm in the neighborhoods of the convergence solutions. Therefore, a thorough analysis of the transformations is carried out in Theorem 3 and Property 4 to provide theoretical insight into the algorithm and to explore the potential for increasing the convergence rate of the Max-SINR algorithm.
Theorem 3: The transformations TD and TP of the Max-SINR algorithm have the same eigenvalues.
Proof: For matrices E∈ℂm×n and F∈ℂn×m, EF has the same eigenvalues as FE [22]. Therefore, TD=TPD TDP and TP=TDP TPD have the same eigenvalues.
In the (M×M, d)K case, define the eigenvalues of TD and TP as {λ1, λ2,…, λ2MKd} and the associated eigenvectors as {e1,e2,…,e2MKd} and {f1,f2,…,f2MKd}, respectively. Owing to the high complexity of TD and TP, their properties are mainly studied by simulations, and one important feature can be observed as follows:
Property 4: In the (M×M, d)K channel, the eigenvalues and eigenvectors of TD and TP are real and satisfy:
In particular, the last several eigenvalues tend to be zeros.
Here, we refer to the eigenvectors associated with the maximal eigenvalue, i.e., e1 and f1, as the principle directions. When TD and TP are successively applied to the decoding and precoding deviation vectors, respectively, the following theorem can be obtained.
Theorem 5: For the (M×M, d)K channel in the high SNR regime, if the Max-SINR algorithm can converge to a near-IA point, then when the decoding and precoding vectors are located within a small region of their convergence values, they will converge approximately along the principle directions of TD and TP, respectively.
Proof: From Property 4, {e1,e2,…,e2MKd} and {f1,f2,…,f2MKd} can serve as bases of the 2MKd-dimensional real space. Therefore, the decoding and precoding deviation vectors can be expanded as:
At the (i+j)th iteration, from (30), (31), (35), and (36), the deviation vectors can be given by:
From (32), when j is sufficiently large, λ1 will take the dominant position, i.e.,
Then, the deviation vectors can be approximated along the principle directions as follows:
From (28), (29), (40), and (41), the decoding and precoding vectors can be written as:
Thus, the decoding and precoding vectors will converge to their convergence values approximately along the principle directions e1 and f1, respectively.
If the principle directions can be obtained, we can search along these directions and go to the destination directly. However, the principle directions require knowledge of the convergence values, which is impossible to attain beforehand. Nevertheless, the following theorem provides a way for us to approximate the principle directions.
Theorem 6: In the high SNR regime, if the Max-SINR algorithm can converge to a near-IA point, then when the iteration results are located within a small region of the convergence values, the principle directions can be approximated by the difference between consecutive iteration results of the Max-SINR algorithm. In particular, the convergence solutions can be approximated as:
Proof: At the (i+j+1)th iteration, from (42) and (43), the decoding and precoding vectors can be expressed as:
Subtracting (42) from (46) and (43) from (47), we have
Therefore, the principle directions can be approximated by the difference between consecutive iteration results. Substituting (48) and (49) into (46) and (47), respectively, the convergence solutions can be approximated as:
Replacing i+j+1 with i and denoting s = -λ1/(λ1-1), (50) and (51) can be expressed as:
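In generic form (writing d^[i] and p^[i] for the stacked real decoding and precoding vectors at the ith iteration, and d∞ and p∞ for their convergence values; these stand-in symbols are used here only for readability), this is a one-step extrapolation along the most recent iteration difference:

\[ \mathbf{d}_{\infty} \approx \mathbf{d}^{[i]} + s \left( \mathbf{d}^{[i]} - \mathbf{d}^{[i-1]} \right), \qquad \mathbf{p}_{\infty} \approx \mathbf{p}^{[i]} + s \left( \mathbf{p}^{[i]} - \mathbf{p}^{[i-1]} \right), \qquad s = -\frac{\lambda_1}{\lambda_1 - 1} . \]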
Substituting (25) through (27) into (52) and (53), (44) and (45) can be formulated.
Theorem 6 provides a direct way to approach the convergence solutions. It is expected that the iterations and computational time can be reduced by searching along the principle directions instead of the circuitous route of the Max-SINR algorithm. Based on the analysis in this section, the rapid convergent maximal SINR algorithm is straightforward.
4. The Principle Direction Search Algorithm
The original Max-SINR algorithm achieves a satisfactorily high sum rate. However, in the high SNR regime, where the throughput is mainly limited by interference instead of noise, the Max-SINR algorithm has to align the interference properly at the expense of a large number of iterations and a long computational time. In particular, the complexity becomes significantly larger when the number of users is close to the IA feasibility bound (M+N)/d-1. Moreover, the complexity of the Max-SINR algorithm increases with the number of users and antennas. These disadvantages might serve as a bottleneck for the application of the Max-SINR algorithm. In this section, a highly efficient maximal SINR algorithm, namely, the principle direction search algorithm, is proposed based on the theoretical analysis in Section 3, with considerably lower complexity than the traditional Max-SINR algorithm.
4.1 The Procedure of the PDS Algorithm
The procedure of the PDS algorithm is stated in Algorithm 2. As shown in Theorem 5, the principle direction search can be employed to increase the convergence rate of the Max-SINR algorithm when the current iteration results are located within a small neighborhood of the convergence solutions. In the proposed algorithm, PDS is started after the intermediate sum rate becomes higher than the product of the parameter η and the initial sum rate R1. The principle directions are obtained from the difference between intermediate results, as shown in (44) and (45). In addition, PDS is executed every ω iterations, because several iterations have to be performed so that (39) is satisfied and the principle direction points to the convergence solution more accurately.
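A compact sketch of this control flow is given below (a sketch only, not the full Algorithm 2: the function and variable names are hypothetical, the Max-SINR sweep, the sum-rate evaluation, and the leakage-minimizing step-size rule of Section 4.2 are passed in as caller-supplied callables, and the column-wise re-normalization after the extrapolation is an assumption of this sketch):

import numpy as np

def normalize_columns(A):
    # Restore unit-norm columns after the extrapolation step (assumed here).
    return A / np.linalg.norm(A, axis=0, keepdims=True)

def pds_loop(H, P, D, sigma2, max_sinr_sweep, sum_rate, best_step,
             eta=1.5, omega=5, n_iter=100):
    # Sketch of the PDS outer loop: run Max-SINR updates, and once the
    # intermediate sum rate exceeds eta * R1, perform a principle-direction
    # extrapolation every omega iterations, cf. (44) and (45).
    R1 = sum_rate(H, P, D, sigma2)
    started, since_pds = False, 0
    for _ in range(n_iter):
        P_prev = [p.copy() for p in P]
        D_prev = [d.copy() for d in D]
        P, D = max_sinr_sweep(H, P, D, sigma2)       # one forward/backward sweep
        since_pds += 1
        if not started and sum_rate(H, P, D, sigma2) > eta * R1:
            started, since_pds = True, 0             # close enough to start PDS
        if started and since_pds >= omega:
            dP = [p - q for p, q in zip(P, P_prev)]  # approximate principle
            dD = [d - q for d, q in zip(D, D_prev)]  # directions, cf. (48)-(49)
            s = best_step(H, P, D, dP, dD)           # step size, Section 4.2
            P = [normalize_columns(p + s * dp) for p, dp in zip(P, dP)]
            D = [normalize_columns(d + s * dd) for d, dd in zip(D, dD)]
            since_pds = 0
    return P, D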
4.2 The Step Size Calculation
The step size calculation is one of the crucial procedures in the PDS algorithm and plays an important role in increasing the convergence rate. It is desirable that the optimal step size in the sense of maximal SINR or sum rate could be provided analytically. However, the closed-form solution of the optimal step size under this criterion remains unknown. Nevertheless, in the high SNR regime, the sum rate is mainly limited by the interference, and minimal interference leakage can be employed as the optimization objective. In addition, the optimal step size along the principle direction under the minimal interference leakage criterion can be formulated analytically. Therefore, instead of maximizing the sum rate, we minimize the interference leakage by choosing the proper step size.
As shown in (7) and (54) through (57), when the step-updated precoding and decoding matrices are applied, the interference leakage can be calculated as:
where
It is obvious that β4>0, and the quartic function (58) must attain a global minimum. Setting the derivative of (58) to zero, i.e.,
The cubic equation (68) has at most three real solutions, and the optimal step size is chosen as the real solution with the smallest value of (58).
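Because (58) is an exact quartic in the step size, its coefficients can also be recovered numerically from five samples, and the stationary points of (68) can then be found as the roots of the derivative. A minimal sketch is given below (optimal_step and make_leak are hypothetical names; the leakage evaluator does not re-normalize the columns inside the objective, which keeps it an exact quartic, consistent with the form of (58)):

import numpy as np

def optimal_step(leak, s_samples=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    # leak(s) returns the interference leakage when moving by step s along the
    # principle direction; by (58) it is a quartic in s with a positive leading
    # coefficient, so five samples determine it exactly.
    s_samples = np.asarray(s_samples, dtype=float)
    coeffs = np.polyfit(s_samples, [leak(s) for s in s_samples], 4)
    deriv = np.polyder(coeffs)                     # the cubic corresponding to (68)
    roots = np.roots(deriv)
    real_roots = roots[np.abs(roots.imag) < 1e-8].real
    # Choose the real stationary point with the smallest leakage, as described above.
    return min(real_roots, key=lambda s: np.polyval(coeffs, s))

def make_leak(H, P, D, dP, dD):
    # Hypothetical leakage evaluator along the search direction (dP, dD).
    K = len(P)
    def leak(s):
        Ps = [p + s * dp for p, dp in zip(P, dP)]
        Ds = [d + s * dd for d, dd in zip(D, dD)]
        return sum(np.linalg.norm(Ds[k].conj().T @ H[k][j] @ Ps[j]) ** 2
                   for k in range(K) for j in range(K) if j != k)
    return leak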
5. Simulation Results
In this section, we evaluate the performance of the proposed PDS algorithm. We assume narrowband frequency non-selective fading channels. Except for the performance evaluation in Fig. 2, all of the other experiments are averaged over 250 channel realizations. Each channel coefficient is independently drawn from the ℂℕ(0,1) distribution. The parameters of the PDS algorithm are first analyzed and selected through simulations in Section 5.1. Then, a comparison of the PDS, MinIL, and Max-SINR algorithms is carried out in Section 5.2.
5.1 Parameter Analysis
As shown in Algorithm 2, the parameters η and ω can affect the convergence rate of the PDS algorithm. The closed-form solutions of the optimal parameters remain unknown, and simulations are employed to determine them in Fig. 1.
Fig. 1. Convergence of the average sum rate of the PDS algorithm in (5×5, 1)9 under different parameters with SNR=30 dB
The parameter η determines when the PDS procedure starts. As shown in Theorem 5, the principle direction will approximately point to the convergence solution only when the intermediate iteration results are within a small region of the convergence point. Because the convergence solution is not available beforehand, it is impossible to measure the distance between the intermediate and the convergence points directly. Nevertheless, the sum rate can be leveraged as one type of measure of the distance because it increases as the algorithm approaches the destination. Therefore, we start PDS when the current sum rate is larger than the product of η and R1. In this experiment, η is selected as 1, 1.5, 2, 2.5, and 3, and ω is preset to be 10. The convergence of the average sum rate with SNR=30 dB in (5×5, 1)9 is depicted in Fig. 1 (a). As shown in the figure, the convergence curves with different η nearly overlap, indicating that the algorithm is insensitive to this parameter. In the simulations afterward, the parameter is chosen as η=1.5.
The parameter ω controls the interval for executing PDS. As shown in (39), (40), and (41), the maximal eigenvalue and the principle direction take the dominating position only after several iterations have been executed. Therefore, PDS is performed every ω iterations. In this experiment, ω is selected as 1, 5, 10, and 20, and η is preset to be 1.5. The convergence of the average sum rate with SNR=30 dB in (5×5, 1)9 is depicted in Fig. 1 (b). As shown in the figure, ω=1 has the worst performance because, when only one iteration is performed, the non-principle eigenvectors in (37) and (38) cannot be ignored and the principle direction cannot point to the convergence solution properly. On the other hand, if ω is set too large, as in the ω=20 case, the PDS algorithm fails to increase the convergence rate in time, and the performance of ω=20 is inferior to that of ω=5 and ω=10. Because ω=5 gives the best convergence rate, it is chosen as the interval in the subsequent simulations.
5.2 Comparison of Different Algorithms
In this section, the PDS, MinIL, and Max-SINR algorithms are compared. To gain an overview of the efficiency of the PDS algorithm, the convergence curves of the interference leakage and the sum rate during the first several iterations are provided in Fig. 2 for a randomly generated channel realization. In contrast to the smooth and slow variation of the interference leakage and the sum rate of the MinIL and Max-SINR algorithms, there is an instant decrease in interference leakage and an instant increase in sum rate after PDS is applied. Therefore, the PDS algorithm can suppress the interference leakage and increase the sum rate more effectively than the MinIL and Max-SINR algorithms.
Fig. 2. Convergence of the interference leakage and sum rate of the PDS, MinIL, and Max-SINR algorithms for a randomly generated channel realization in (10×10, 1)19 with SNR=30 dB
To further compare the convergence rate of the algorithms, the average interference leakage and the sum rate with respect to the computational time for (5×5, 1)9 are depicted in Fig. 3. As shown in the figures, the average interference leakage of the MinIL and Max-SINR algorithms decreases in the same pattern during the first several seconds and reaches the level of 3×10^-3 at 2 s. In contrast, the PDS algorithm has a higher convergence rate and can suppress the interference leakage to the level of 2×10^-4 at 2 s. Both the PDS and Max-SINR algorithms converge at the same level of interference leakage, while the former algorithm takes only 3 s compared with the 20 s required by the latter algorithm. Although the MinIL algorithm can achieve a lower level of interference leakage after 8.5 s, its sum rate is considerably inferior to those of the PDS and Max-SINR algorithms. The average sum rate of the PDS algorithm converges at approximately 2 s with a convergence value of 104 bps/Hz, while the Max-SINR algorithm takes as long as 9 s to reach the same sum rate.
Fig. 3. Convergence of the average interference leakage and sum rate of the PDS, MinIL, and Max-SINR algorithms in (5×5, 1)9 with SNR=30 dB
The proposed algorithm can be applied not only to the single data stream channel but also to multi-data stream scenarios. The convergence of the average sum rate of the algorithms is compared in the (10×10, 2)9 and (10×10, 4)4 channels in Fig. 4. As shown in Fig. 4 (a), the Max-SINR algorithm requires as much as 19.5 s to reach the sum rate of 190 bps/Hz, while the PDS algorithm takes only 6.5 s to reach the same sum rate. For the scenario with even more data streams, shown in Fig. 4 (b), the PDS algorithm achieves the highest sum rate and exhibits a considerably faster convergence rate than the Max-SINR algorithm.
Fig. 4. Convergence of the average sum rate of the PDS, MinIL, and Max-SINR algorithms in (10×10, 2)9 and (10×10, 4)4 with SNR=30 dB
Although we are mainly concerned with the high SNR regime, it is desirable to evaluate the sum rate under different SNRs. In Fig. 5 (a), the average sum rates of the PDS, MinIL, and Max-SINR algorithms with respect to SNR are compared under a sufficient number of iterations (10,000). It can be seen that the PDS and Max-SINR algorithms achieve the same sum rate, which is higher than that of the MinIL algorithm from the low to high SNR regimes. However, such a large number of iterations is nearly impossible in practical applications, and a limited number of iterations is often required. As revealed in Fig. 5 (b), under 100 iterations, both the PDS and Max-SINR algorithms achieve a higher sum rate than the MinIL algorithm for all SNRs. In the low SNR regime, the PDS and Max-SINR algorithms share the same sum rate; however, the former achieves a higher sum rate in the high SNR regime. In addition, the difference in sum rate between the PDS and Max-SINR algorithms increases with SNR, from 2 bps/Hz at SNR=20 dB to 23 bps/Hz at SNR=40 dB. When the number of allowed iterations is even smaller, as shown in Fig. 5 (c), the advantage of the PDS algorithm over the MinIL and Max-SINR algorithms is even more significant. Therefore, the proposed PDS algorithm has practical significance in that it can achieve a higher sum rate than the Max-SINR algorithm under limited iterations.
Fig. 5. Average sum rate versus SNR under 10,000, 100, and 20 iterations for (5×5, 1)9
To further evaluate the performance of the algorithms, the average iterations and computational time under different numbers of users, data streams, and antennas are listed in Table 1. We consider the (M×M, d)K channel with K=2M/d-1, the maximal number of users in the IA network [9]. All three algorithms are run for a sufficient number of iterations (10,000), and the final sum rate R is obtained. Then, we denote the numbers of iterations required to reach the sum rates of 0.9R and 0.99R by I0.9 and I0.99, respectively. By averaging I0.9, I0.99, and R over 250 realizations of the channel coefficients, we obtain IAV0.9, IAV0.99, and RAV for each algorithm. Similarly, we obtain TAV0.9 and TAV0.99 for the computational time. In addition, the ratios of the average iterations and computational time between the PDS and Max-SINR algorithms are provided in parentheses under the corresponding values of the PDS algorithm in Table 1. For the single data stream situation, the PDS and Max-SINR algorithms achieve the same sum rate, which is considerably higher than that of the MinIL algorithm. In terms of average iterations, the PDS algorithm reaches the sum rates of 0.9RAV and 0.99RAV with only 29~30% and 25~33% of the iterations required by the Max-SINR algorithm. Furthermore, in terms of computational time, the PDS algorithm reaches the sum rates of 0.9RAV and 0.99RAV with only 51~55% and 44~61% of the time required by the Max-SINR algorithm. For the multi-data stream situation, the PDS algorithm achieves a higher sum rate than the Max-SINR algorithm with lower computational complexity and faster convergence.
Table 1. Statistics on the average iterations, computational time, and sum rate of different algorithms for (M×M, d)K with SNR=30 dB
6. Conclusions
In this paper, we have focused on reducing the execution time and computational complexity of the traditional Max-SINR algorithm for IA. The properties of the Max-SINR method have been investigated, and the principle direction has been introduced, which is found to approximately point to the convergence solution. A rapidly convergent, low-complexity PDS algorithm has been proposed that approaches the convergence point directly along the principle direction. Furthermore, the analytical form of the optimal step size along this direction has been formulated under the minimal interference leakage criterion. Numerical results have verified that the proposed PDS algorithm achieves a significantly higher convergence rate and lower overall computational complexity than the conventional MinIL and Max-SINR algorithms while maintaining a high sum rate. This paper has thus developed a rapid convergent algorithm and provided theoretical insight into the Max-SINR algorithm.