
An Efficient Model Based on Smoothed ℓ0 Norm for Sparse Signal Reconstruction

  • Li, Yangyang (College of Electronic Information and Optical Engineering, Nankai University) ;
  • Sun, Guiling (College of Electronic Information and Optical Engineering, Nankai University) ;
  • Li, Zhouzhou (College of Electronic Information and Optical Engineering, Nankai University) ;
  • Geng, Tianyu (College of Electronic Information and Optical Engineering, Nankai University)
  • Received : 2018.01.21
  • Accepted : 2018.10.07
  • Published : 2019.04.30

Abstract

Compressed sensing (CS) is a relatively new theory: given sufficiently many CS measurements, a sparse signal can be reconstructed exactly. In practical applications, however, the transform coefficients of many signals are only weakly sparse and suffer from a variety of noise disturbances, and most existing classical algorithms cannot handle this situation effectively. We therefore propose an efficient algorithm based on the smoothed ℓ0 norm for sparse signal reconstruction. The direct ℓ0 norm problem is NP-hard, so it is unrealistic to solve it directly when reconstructing a sparse signal. To select a suitable sequence of smoothed functions and solve the ℓ0 norm optimization problem effectively, we propose a generalized approximate function model as the objective function for recovering the original signal. The proposed model preserves sharper edges than existing norm-based algorithms. Extensive simulations show that the resulting algorithm is superior to similar algorithms used for solving the same problem.


1. Introduction

In recent years, research on compressed sensing (CS) [1-2] has received increasing attention as a means of processing sparse signals (i.e., vectors with few nonzero elements). CS is a signal processing technique that effectively acquires and reconstructs a signal by solving an underdetermined linear system. It was proposed by Donoho, Candès, and Terence Tao in 2006. Remarkably, CS theory can exactly recover sparse or compressible signals at a sampling rate that does not satisfy the Nyquist-Shannon sampling theorem, and many papers have demonstrated that CS can effectively extract the key information from relatively few non-correlated measurements. The key advantage of CS is that it allows compression and sampling to run simultaneously. The CS technique can reduce hardware requirements, further reduce the sampling rate, improve signal quality, and save on signal processing and transmission costs. Currently, CS is widely used in wireless sensor networks, information theory, signal processing, medical imaging, optical/microwave imaging, SAR imaging, wireless communications, atmosphere, geology, and other fields [3-5].

Research on CS theory is mainly divided into three aspects: 1) the sparse representation of signals; 2) uncorrelated sampling [6]; 3) sparse reconstruction [7]. The design of the sparse reconstruction algorithm is the most important of these, and proposing an efficient reconstruction algorithm with reliable accuracy is a major challenge for researchers.

Theoretically, under the sparsity assumption, one hopes to reconstruct the signal \(\boldsymbol{x} \in \boldsymbol{R}^{N}\) from a known measurement vector \(\boldsymbol{y} \in \boldsymbol{R}^{M}\). The reconstruction of the sparse signal x can be formulated as the following non-convex problem:

\(\min \|x\|_{0} \quad \text { s.t. } y=\Phi x\)       (1)

where \(\|x\|_{0}\) is the ℓ0 norm (the number of nonzero entries) of x and \(\Phi \in R^{M \times N}\) is the measurement matrix. Formula (1) is NP-hard: solving it means finding the solution with the fewest nonzero elements among all feasible solutions, which is not practical to do directly. It has been proved [8-10] that if the measurement matrix obeys a constraint known as the Restricted Isometry Property (RIP), an equivalent solution of the optimization problem (1) can be obtained via the ℓ1 norm. For the measurement matrix Φ and a sparse signal \(\boldsymbol{x}\left(\|\boldsymbol{x}\|_{0}=k\right)\), the RIP requires a constant δk that satisfies:

\(\left(1-\delta_{k}\right)\|x\|_{2}^{2} \leq\|\Phi x\|_{2}^{2} \leq\left(1+\delta_{k}\right)\|x\|_{2}^{2}\)       (2)

In (2), δk satisfies 0 < δk ≤ 1. Generally speaking, if δk is very close to 1, then \(\|\Phi x\|_{2}^{2} \approx 0\) is possible, in which case the measurement y may not preserve any information about x. As a result, it is then nearly impossible to reconstruct the sparse signal x using greedy algorithms.

If the RIP is satisfied, problem (1) can be solved through its convex relaxation, the ℓ1 norm problem:

\(\min \|x\|_{1} \quad \text { s.t. } y=\Phi x\)       (3)

Many existing methods can solve (3). Equation (3) is a convex problem, and the methods used to solve it are called convex optimization algorithms [11], such as the basis pursuit (BP) algorithm [12] and linear programming. However, this kind of algorithm has high computational complexity.

A series of greedy algorithms has received great interest due to their low complexity and simple geometric interpretation, such as Orthogonal Matching Pursuit (OMP) [13], Stagewise OMP (StOMP) [14], and stagewise weak gradient pursuits (SWOMP) [15]. These algorithms seek the sparse support of the unknown signal step by step. Smoothed ℓ0 norm algorithms have also received significant attention, such as the SL0 algorithm and Thresholded SL0 (TSL0) [16]. However, the anti-noise performance of the greedy algorithms is poor; even small additive noise is likely to degrade the recovered signal badly.

In a sense, the ℓ0 norm is robust to noise, and it gives the highest possibility of sparse reconstruction with fewer measurements. This motivates the use of a continuous approximate function to solve (1).

In this paper, we introduce an efficient algorithm based on the smoothed ℓ0 norm for sparse signal reconstruction. To design a suitable iterative sequence of smoothed functions and obtain an optimized solution of the ℓ0 norm problem, we propose a generalized approximate function model as the objective function for recovering the original signal. The proposed model preserves sharper edges than existing norm-regularized algorithms. The experimental results verify that the new algorithm based on the generalized approximate function model outperforms similar algorithms used for solving the same problem.

The rest of this paper is arranged as follows. Part 2 introduces the basic ideas of the proposed algorithm. Part 3 discusses the procedure of the new algorithm. Part 4 presents the simulation results and analysis, and the last part concludes.

2. Main Idea

The fundamental idea of CS theory is to extract the vector x from y. To deal with the discontinuity of the ℓ0 norm, the existing idea is to approximate this discontinuous function by a suitable continuous one, and many kinds of smooth approximation functions exist, the most classic being the Gaussian function. This paper proposes a generalized approximate function model with a more accurate approximation effect:

\(f_{\sigma}(x)=\frac{e^{\beta x^{2} / \sigma^{2}}-e^{-\beta x^{2} / \sigma^{2}}}{e^{\beta x^{2} / \sigma^{2}}+e^{-\beta x^{2} / \sigma^{2}}}\)       (4)

In (4), β is a positive number. The parameter σ determines the quality with which the smooth approximation function reconstructs the sparse signal \(x\): the smaller σ, the better the approximation, and the larger σ, the smoother the approximation. Note that:

\(\lim _{\sigma \rightarrow 0} f_{\sigma}(x)=\left\{\begin{array}{ll} {1 ;} & {\text { if } x \neq 0} \\ {0 ;} & {\text { if } x=0} \end{array}\right.\)       (5)

Or approximately that:

\(f_{\sigma}(x) \approx\left\{\begin{array}{l} {1 ; \text { if }|x| \gg \sigma} \\ {0 ; \text { if }|x| \ll \sigma} \end{array}\right.\)       (6)

Then we can define:

\(F_{\sigma}(x)=\sum_{i=1}^{N} f_{\sigma}\left(x_{i}\right)\)       (7)

It follows from (5) and (7) that \(\|x\|_{0}=\lim _{\sigma \rightarrow 0} F_{\sigma}(x)\), so \(F_{\sigma}(x)\) approximates the ℓ0 norm for small σ. The paper [17] considers the continuous Gaussian function for the smoothed approximation:

\(g_{\sigma}(x)=e^{-x^{2} / 2 \sigma^{2}}\)       (8)
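As a quick numerical check, both surrogates approach the true ℓ0 norm as σ decreases. The following is a minimal sketch (β = 2, the test signal, and the σ values are assumed for illustration; the Gaussian surrogate is counted as N − Σᵢ g_σ(xᵢ), following [17]):

```python
import numpy as np

def F_sigma(x, sigma, beta=2.0):
    """Generalized surrogate of Eqs. (4) and (7): sum of tanh(beta * x_i^2 / sigma^2)."""
    return np.sum(np.tanh(beta * x**2 / sigma**2))

def F_gauss(x, sigma):
    """Gaussian-based surrogate: N - sum(g_sigma(x_i)) with g_sigma from Eq. (8), as in [17]."""
    return x.size - np.sum(np.exp(-x**2 / (2 * sigma**2)))

rng = np.random.default_rng(0)
x = np.zeros(256)
x[rng.choice(256, 20, replace=False)] = rng.standard_normal(20)  # K = 20 sparse test signal
for sigma in (1.0, 0.1, 0.01):
    print(f"sigma={sigma}: generalized {F_sigma(x, sigma):.2f}, "
          f"Gaussian {F_gauss(x, sigma):.2f}, true l0 {np.count_nonzero(x)}")
```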

 

Fig. 1. Comparison of the approximate ℓ0 norm functions

The ℓ0 norm approximation performance of (4) and (8), however, differs. To further demonstrate the superiority of the proposed generalized approximation model as a smooth continuous approximation function, we experimentally compared the distribution of the proposed generalized approximate function with that of the standard Gaussian function for different parameters β on the interval [-1, 1] with σ = 0.1. The comparison results are shown in Fig. 1.

As can be seen from Fig. 1, the proposed generalized approximate function model has steeper properties and therefore estimates the ℓ0 norm more precisely.

Remark 1. When β > 0.5, the generalized approximation function model proposed in this paper approximates better than other models, as can be proved by comparison with the standard Gaussian function.

Proof of Remark 1: Let \(u(x)=f_{\sigma}(x)-\left(1-g_{\sigma}(x)\right)\); we show that when β > 0.5, u(x) ≥ 0. To simplify the proof, let β = α/2:

\(\begin{aligned} u(x) &=\frac{e^{\alpha x^{2} / 2 \sigma^{2}}-e^{-\alpha x^{2} / 2 \sigma^{2}}}{e^{\alpha x^{2} / 2 \sigma^{2}}+e^{-\alpha x^{2} / 2 \sigma^{2}}}-1+e^{-x^{2} / 2 \sigma^{2}} \\ &=e^{-x^{2} / 2 \sigma^{2}}-\frac{2 e^{-\alpha x^{2} / 2 \sigma^{2}}}{e^{\alpha x^{2} / 2 \sigma^{2}}+e^{-\alpha x^{2} / 2 \sigma^{2}}} \\ &=\frac{e^{-\alpha x^{2} / 2 \sigma^{2}}\left(e^{(2 \alpha-1) x^{2} / 2 \sigma^{2}}+e^{-x^{2} / 2 \sigma^{2}}-2\right)}{e^{\alpha x^{2} / 2 \sigma^{2}}+e^{-\alpha x^{2} / 2 \sigma^{2}}} \end{aligned}\)       (9)

Introduce auxiliary function h(x) :

\(\begin{aligned} h(x) &=e^{(2 \alpha-1) x^{2} / 2 \sigma^{2}}+e^{-x^{2} / 2 \sigma^{2}}-2 \\ & \geq 2 \sqrt{e^{(\alpha-1) x^{2} / \sigma^{2}}}-2=2\left(e^{(\alpha-1) x^{2} / 2 \sigma^{2}}-1\right) \end{aligned}\)       (10)

From (10), when α > 1 (i.e., β > 0.5), h(x) ≥ 0 and hence \(u(x) \geq 0\), which completes the proof. In summary, if \(\beta\) > 0.5, then the proposed generalized approximate function model is steeper between -0.2 and 0.2, so the approximation of the ℓ0 norm is more efficient. Furthermore, it is easy to see that the bigger β is, the better the function approximation. But β cannot be made infinitely large; in practice it usually achieves excellent results between 1 and 10.
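Remark 1 can also be checked numerically. A minimal sketch (σ = 0.1 as in Fig. 1; the β values are chosen for illustration):

```python
import numpy as np

# Check that u(x) = f(x) - (1 - g(x)) stays nonnegative once beta exceeds 0.5.
sigma = 0.1
x = np.linspace(-1, 1, 2001)
g = np.exp(-x**2 / (2 * sigma**2))          # Gaussian function g_sigma of Eq. (8)
for beta in (0.25, 0.5, 1.0, 5.0):
    f = np.tanh(beta * x**2 / sigma**2)      # generalized model f_sigma of Eq. (4)
    u = f - (1.0 - g)
    print(f"beta={beta}: min u(x) = {u.min():.4f}")
# The minimum dips below zero for beta = 0.25 but is nonnegative from beta = 0.5 on,
# consistent with the proof above.
```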

3. The Proposed Algorithm

In this part, we introduce the proposed generalized approximations model to solve the sparsesignal reconstruction problem and give the mathematical analysis. At the same time, the improved quasi-newton method is used as the target search direction to strongly accelerate the convergence speed. Finally, we design a novel reconstruction algorithm to recover the original signal.

From (7), minimizing the ℓ0 norm is equivalent to minimizing \(\boldsymbol{F}_{\boldsymbol{\sigma}}(\boldsymbol{x})\) for sufficiently small σ:

\(\min F_{\sigma}(x), \quad \text { s.t. } y=\Phi x\)       (11)

Many algorithms can be used to solve equation (11), the most representative of which is the steepest descent method. Steepest descent, however, suffers from a severe zigzag effect, which can seriously slow the convergence of the algorithm. Therefore, we use an improved Newton method to solve this problem. First, the Newton direction is calculated according to the generalized approximation function model:

\(d=-\nabla^{2} F_{\sigma}(x)^{-1} \nabla F_{\sigma}(x)\)       (12)

where

\(\nabla F_{\sigma}(x)=\left[\frac{8 \beta x_{1}}{\sigma^{2}\left(e^{\beta x_{1}^{2} / \sigma^{2}}+e^{-\beta x_{1}^{2} / \sigma^{2}}\right)^{2}}, \frac{8 \beta x_{2}}{\sigma^{2}\left(e^{\beta x_{2}^{2} / \sigma^{2}}+e^{-\beta x_{2}^{2} / \sigma^{2}}\right)^{2}}, \cdots, \frac{8 \beta x_{N}}{\sigma^{2}\left(e^{\beta x_{N}^{2} / \sigma^{2}}+e^{-\beta x_{N}^{2} / \sigma^{2}}\right)^{2}}\right]^{T}\)       (13)

and, for the diagonal entries of the Hessian (shown here for x1),

\(\begin{aligned} \nabla^{2} F_{\sigma}(x)_{11} &=\frac{8 \beta \sigma^{2}\left(e^{\beta x_{1}^{2} / \sigma^{2}}+e^{-\beta x_{1}^{2} / \sigma^{2}}\right)-32 \beta^{2} x_{1}^{2}\left(e^{\beta x_{1}^{2} / \sigma^{2}}-e^{-\beta x_{1}^{2} / \sigma^{2}}\right)}{\sigma^{4}\left(e^{\beta x_{1}^{2} / \sigma^{2}}+e^{-\beta x_{1}^{2} / \sigma^{2}}\right)^{3}} \\ &=\frac{\frac{8 \beta}{\sigma^{2}}\left[\left(1-\frac{4 \beta x_{1}^{2}}{\sigma^{2}}\right) e^{\beta x_{1}^{2} / \sigma^{2}}+\left(1+\frac{4 \beta x_{1}^{2}}{\sigma^{2}}\right) e^{-\beta x_{1}^{2} / \sigma^{2}}\right]}{\left(e^{\beta x_{1}^{2} / \sigma^{2}}+e^{-\beta x_{1}^{2} / \sigma^{2}}\right)^{3}} \end{aligned}\)       (14)

To make sure that the Newton direction is a descent direction, the matrix \(\nabla^{2} \boldsymbol{F}_{\boldsymbol{\sigma}}(\boldsymbol{x})\) must be positive definite, so it should be modified. We can set up a new matrix:

\(\boldsymbol{G}=\nabla^{2} \boldsymbol{F}_{\boldsymbol{\sigma}}(\boldsymbol{x})+\boldsymbol{\varepsilon} \boldsymbol{I}\)       (15)

where I is the identity matrix and ε is a suitable set of modification coefficients such that the diagonal elements of matrix G are positive. For example, from (14), we can choose

\(\varepsilon=\frac{8 \beta\left[\frac{5 \beta x^{2}}{\sigma^{2}} e^{\beta x^{2} / \sigma^{2}}-\frac{3 \beta x^{2}}{\sigma^{2}} e^{-\beta x^{2} / \sigma^{2}}\right]}{\sigma^{2}\left(e^{\beta x^{2} / \sigma^{2}}+e^{-\beta x^{2} / \sigma^{2}}\right)^{3}}\)       (16)

as the modification coefficients. The matrix G can then be written as

\(\boldsymbol{G}=\left[\begin{array}{cccc} {\boldsymbol{G}\left(x_{1}\right)} & {0} & {\cdots} & {0} \\ {0} & {G\left(x_{2}\right)} & {\cdots} & {0} \\ {\vdots} & {\vdots} & {} & {\vdots} \\ {0} & {0} & {\cdots} & {G\left(x_{N}\right)} \end{array}\right]\)       (17)

\(G\left(x_{i}\right)=\frac{8 \beta\left(1+\frac{\beta x_{i}^{2}}{\sigma^{2}}\right)}{\sigma^{2}\left(e^{\beta x_{i}^{2} / \sigma^{2}}+e^{-\beta x_{i}^{2} / \sigma^{2}}\right)^{2}}\)       (18)

So, it can be obtained that

\(\begin{aligned} \boldsymbol{d} &=-\boldsymbol{G}^{-1} \nabla \boldsymbol{F}_{\boldsymbol{\sigma}}(\boldsymbol{x}) \\ &=\left[\frac{-\boldsymbol{\sigma}^{2} \boldsymbol{x}_{1}}{\boldsymbol{\sigma}^{2}+\boldsymbol{\beta} \boldsymbol{x}_{1}^{2}}, \frac{-\boldsymbol{\sigma}^{2} \boldsymbol{x}_{2}}{\boldsymbol{\sigma}^{2}+\boldsymbol{\beta} \boldsymbol{x}_{2}^{2}}, \cdots, \frac{-\boldsymbol{\sigma}^{2} \boldsymbol{x}_{N}}{\boldsymbol{\sigma}^{2}+\boldsymbol{\beta} \boldsymbol{x}_{N}^{2}}\right]^{T} \end{aligned}\)       (19)
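In code, the direction of Eq. (19) reduces to one vectorized expression; a minimal sketch:

```python
import numpy as np

def newton_direction(x, sigma, beta):
    """Modified Newton direction of Eq. (19): d_i = -sigma^2 * x_i / (sigma^2 + beta * x_i^2)."""
    return -sigma**2 * x / (sigma**2 + beta * x**2)
```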

In general, the parameter σ is chosen as \(\sigma_{k}=\varphi \sigma_{k-1}, k=2,3, \ldots, K\), with \(\varphi \in(0.4,1)\), and \(\sigma_{1}=\max \left(\left|\Phi^{\dagger} \boldsymbol{y}\right|\right)\), where \(\Phi^{\dagger}\) is the Moore-Penrose pseudoinverse [18] of Φ:

\(\Phi^{\dagger}=\Phi^{T}\left(\Phi \Phi^{T}\right)^{-1}\)       (20)

Using the above derivation, the main steps of reconstructing sparse signals with the generalized approximate function model proposed in this paper are shown in Table 1; the corresponding algorithm is called gSL0. In the following section, we give a detailed comparison between our new algorithm and existing state-of-the-art algorithms.

We first initialize the following parameters: σmin (the minimum value of σ, which should be a very small positive number), L (the number of inner iterations for each σk), and \(\hat{\boldsymbol{x}}\) (the initial solution, obtained using the pseudoinverse; it has the minimum ℓ2 norm and corresponds to σ→∞).

Table 1. The proposed gSL0 Algorithm
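The steps of Table 1 follow the derivation above. As a minimal sketch in the SL0 framework of [17], using newton_direction from the sketch above (the step size μ, the feasibility projection, and the default parameter values are assumptions consistent with that framework, not the paper's tuned settings):

```python
import numpy as np

def gsl0(Phi, y, sigma_min=1e-4, phi=0.6, L=3, beta=2.0, mu=1.0):
    """Sketch of the proposed gSL0 iteration (Table 1); parameter defaults are illustrative."""
    Phi_pinv = Phi.T @ np.linalg.inv(Phi @ Phi.T)   # Moore-Penrose pseudoinverse, Eq. (20)
    x = Phi_pinv @ y                                # minimum-l2-norm initial solution (sigma -> infinity)
    sigma = np.max(np.abs(x))                       # sigma_1 = max(|pinv(Phi) @ y|)
    while sigma > sigma_min:
        for _ in range(L):                          # L inner iterations for each sigma_k
            x = x + mu * newton_direction(x, sigma, beta)   # step along the direction of Eq. (19)
            x = x - Phi_pinv @ (Phi @ x - y)                # project back onto the constraint y = Phi x
        sigma *= phi                                # sigma_k = phi * sigma_{k-1}, phi in (0.4, 1)
    return x
```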

 

4. Experimental Results and Analysis

In this part, some experiments are carried out to illustrate the performance of the proposed gSL0 algorithm. The measurement matrix Φ is acquired by randomly extracting M rows of an N×N random matrix. The proposed gSL0 model is compared with the latest greedy algorithms, such as OMP, StOMP, and SWOMP, and with smoothed ℓ0 norm algorithms, such as SL0, ASL0, and TSL0, on both signal reconstruction and image reconstruction. All experiments are run in Matlab 2014a on a Windows 7 PC with a 3.2 GHz Intel Core i5 processor and 8.0 GB of RAM.

Experiment 1: One-dimensional Signal.

In this experiment, the length of the Gaussian random sparse signal is set to N=256 and the noise disturbance is white Gaussian noise. A large number of simulations are performed to compare the reconstruction capabilities of the different reconstruction algorithms. In view of the randomness of the proposed model, all simulation results are averaged over 1,000 independent tests. To check the proposed algorithm's signal reconstruction performance, the reconstruction effects are estimated by the exact reconstruction probability, the averaged running time, and the reconstructed relative error (Re).
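One trial of this setup can be sketched as follows (M = 80 and the noise level 0.01 are assumed illustrative values; gsl0 is the sketch from Part 3):

```python
import numpy as np

N, M, K = 256, 80, 20
rng = np.random.default_rng(1)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # Gaussian random sparse signal
Phi = rng.standard_normal((M, N)) / np.sqrt(M)                # random measurement matrix
y = Phi @ x + 0.01 * rng.standard_normal(M)                   # white Gaussian measurement noise
x_rec = gsl0(Phi, y)
re = np.linalg.norm(x - x_rec) / np.linalg.norm(x)            # relative error Re, Eq. (21)
print(f"Re = {re:.4f}")
```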

In the first simulation, we fix the sparsity K=20 and the signal length N=256, and the measurement number M varies between 40 and 100. The exact reconstruction rates of the 7 different algorithms are presented in Fig. 2. We can see that the proposed gSL0 algorithm achieves a higher exact reconstruction rate across the different measurement numbers, and clearly outperforms the state-of-the-art greedy algorithms when the measurement number is greater than 60.

 

Fig. 2. Reconstruction performance of gSL0, OMP, StOMP, SWOMP, SL0, TSL0, and ASL0 in Gaussian noise, with the measurement number varying from 40 to 100.

In the second simulation, we fix N=256 and M=80, and the sparsity level K varies between 10 and 45. Fig. 3 shows the recovery probability of the different algorithms under different sparsity levels. We can see that the proposed algorithm reconstructs the sparse signal with higher precision across sparsity levels, and that gSL0 performs much better than the other algorithms in exact reconstruction rate.

In the third simulation, we set N=256 and sparsity K=20, and the measurement number M varies between 40 and 100. Fig. 4 shows that the average running time of gSL0 changes slowly as the amount of sampled data increases. Considering the average running time of the different algorithms, gSL0 is relatively fast for large measurement numbers. Since TSL0 introduces a threshold-selection mechanism to accelerate the inner iteration, it is faster than the proposed gSL0 in Fig. 4; however, Fig. 2 and Fig. 3 show that the reconstruction performance of TSL0 under noise is very poor.

 

Fig. 3. Simulations for Gaussian sparse signals with Gaussian noise: the probability of exact reconstruction at different sparsity levels.

 

Fig. 4. Average running time of gSL0, OMP, StOMP, SWOMP, SL0, TSL0, and ASL0 in Gaussian noise, with the measurement number varying between 40 and 100 for fixed N=256, K=20.

Finally, the relative error of one-dimensional signal reconstruction is defined as follows:

\(\begin{equation} \mathbf{Re}=\frac{\|x-\tilde{x}\|_{2}}{\|x\|_{2}} \end{equation}\)       (21)

Obviously, the lower the Re, the better the signal reconstruction.

 

Fig. 5. Reconstruction relative error of gSL0 and SWOMP.

We can see from Fig. 5 that the Re of gSL0 is lower; that is, the proposed algorithm reconstructs the original signal more accurately.

Experiment 2: Algorithm Performance Comparison for Image Reconstruction.

In this part, to verify the effectiveness of the proposed algorithm, three standard images, i.e., the Lena, Camera, and Boat images, are adopted as inputs for the comparison of the different algorithms. Furthermore, the peak signal-to-noise ratio (PSNR) is employed to evaluate the reconstruction performance of each algorithm; it measures the quality of the reconstruction and is defined as

\(P S N R=10 \log _{10}\left(\frac{M \times N \times 255^{2}}{\|x-\tilde{x}\|_{2}^{2}}\right)\)       (22)
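A minimal sketch of this metric, assuming 8-bit grayscale images stored as numpy arrays:

```python
import numpy as np

def psnr(x, x_rec):
    """PSNR of Eq. (22) for an M x N 8-bit image; the denominator is the squared l2 error."""
    err = np.sum((x.astype(float) - x_rec.astype(float)) ** 2)   # ||x - x~||_2^2
    return 10.0 * np.log10(x.size * 255.0**2 / err)
```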

The measurement matrix Φ is acquired by randomly extracting M rows of an N×N random matrix, the sampling ratio is M/N = 0.42, and the size of each image is 256×256. The reconstruction effects are estimated by PSNR, Re, averaged running time, and the visual quality of the reconstruction. The mean values of PSNR, Re, and time over 20 independent tests are given in Table 2, and the reconstruction effects of the different algorithms on two-dimensional images are shown in Fig. 6.

 

Fig. 6. The quality of the reconstruction with different algorithms on different images.

From Fig. 6, we can see that the proposed gSL0 achieves a higher PSNR of the reconstructed image than the other algorithms (SL0, ASL0, OMP) and has a better reconstruction effect for different kinds of images. So the gSL0 algorithm can accurately recover the original signal.

The reconstruction relative error of two-dimensional signal is defined as follows:

\(\mathbf{R e}=\frac{\|\boldsymbol{x}-\tilde{\boldsymbol{x}}\|_{2}}{\|\boldsymbol{x}\|_{2}}\)       (23)

Table 2. Reconstruction effect of SL0, ASL0, OMP, SWOMP and the proposed gSL0 for different images, each image with the fixed measurement rate 0.4 (M / N).

 

Table 2 shows that the proposed gSL0 algorithm achieves the best reconstruction performance in PSNR and Re for all test images, although its reconstruction speed is a little slower than SWOMP. As we can see from Fig. 6 and Table 2, the proposed gSL0 outperforms the most advanced algorithms in image reconstruction.

5. Conclusion

In this paper, we proposed a generalized approximate function model to approximate the ℓ0 norm and designed the gSL0 algorithm based on this generalized model. To accelerate convergence, we use the improved Newton direction as the search direction. The proof and simulations show that the generalized approximate model has a better "steep nature", so its estimate of the ℓ0 norm is more precise. The extensive test results show that the proposed model has a good recovery effect not only for one-dimensional signals but also for two-dimensional images. Compared with known algorithms based on a smoothed ℓ0 norm and with existing excellent greedy algorithms, the proposed gSL0 achieves a high probability of reconstruction and fast reconstruction, even under white Gaussian noise, and achieves state-of-the-art reconstruction quality for different images.

References

  1. D. L. Donoho, "Compressed Sensing," IEEE Trans. Information Theory, vol.52, no.4, pp.1289-1306, Apr, 2006. https://doi.org/10.1109/TIT.2006.871582
  2. E. J. Candes, M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol.25, no.2, pp.21-30, Mar, 2008. https://doi.org/10.1109/MSP.2007.914731
  3. H.Q. Gao, R.F. Song, "Distributed compressive sensing based channel feedback scheme for massive antenna arrays with spatial correlation," KSII Transactions on Internet and Information Systems, vol.8, no.1, pp.108-122, Jan, 2014. https://doi.org/10.3837/tiis.2014.01.007
  4. Austin ACM, Neve MJ, "Efficient Field Reconstruction Using Compressive Sensing," IEEE Transactions on Antennas and Propagation, vol.66, no.3, pp.1624-1627, Mar, 2018. https://doi.org/10.1109/TAP.2018.2794371
  5. H. Anh, I. Koo, "Primary user localization using Bayesian compressive sensing and path-loss exponent estimation for cognitive radio networks," KSII Transactions on Internet and Information Systems, vol.7, no.10, pp.2338-2356, Oct, 2013. https://doi.org/10.3837/tiis.2013.10.001
  6. Haupt J, Bajwa W U, Raz G, et al, "Toeplitz compressed sensing matrices with applications to sparse channel estimation," IEEE Transactions on Information Theory, vol.56, no.11, pp.5862-5875, Nov, 2010. https://doi.org/10.1109/TIT.2010.2070191
  7. Wang Q, Qu G, "A new greedy algorithm for sparse recovery," Neurocomputing, vol.275, pp.137-143, Jan., 2018. https://doi.org/10.1016/j.neucom.2017.05.022
  8. Wen, JM , Wang, J , Zhang, QY, "Nearly Optimal Bounds for Orthogonal Least Squares," IEEE Transactions on Signal Processing, vol. 65, no.20, pp. 5347-5356, Oct, 2017. https://doi.org/10.1109/TSP.2017.2728502
  9. Cohen, Albert, W. Dahmen, and R. Devore, "Orthogonal Matching Pursuit Under the Restricted Isometry Property," Constructive Approximation, vol.45, no.1, pp.113-127, Feb., 2017. https://doi.org/10.1007/s00365-016-9338-2
  10. Voroninski V, Xu Z, "A strong restricted isometry property, with an application to phaseless compressed sensing," Applied and computational harmonic analysis, vol.40, no.2, pp.386-395, Mar, 2016. https://doi.org/10.1016/j.acha.2015.06.004
  11. P.-Y. Chen, I.W. Selesnick, "Group-sparse signal denoising: non-convex regularization, convex optimization," IEEE Trans. Signal Process, vol.62, no.13, pp.3464-3478, Jul., 2014. https://doi.org/10.1109/TSP.2014.2329274
  12. Donoho D L, Elad M, "On the stability of the basis pursuit in the presence of noise," Signal Processing, vol.86, no.3, pp.511-532, Mar., 2006. https://doi.org/10.1016/j.sigpro.2005.05.027
  13. Cai T T, Wang L, "Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise," IEEE Transactions on Information Theory, vol.57, no.7, pp.4680-4688, Jul, 2011. https://doi.org/10.1109/TIT.2011.2146090
  14. Donoho D L, Tsaig Y, Drori I, et al, "Sparse Solution of Underdetermined Systems of Linear Equations by Stagewise Orthogonal Matching Pursuit," IEEE Transactions on Information Theory, vol.58, no.2, pp.1094-1121, Feb, 2012. https://doi.org/10.1109/TIT.2011.2173241
  15. Blumensath T, Davies M E, "Stagewise Weak Gradient Pursuits," IEEE Transactions on Signal Processing, vol.57, no.11, pp.4333-4346, Nov, 2009. https://doi.org/10.1109/TSP.2009.2025088
  16. Wang H, Guo Q, Zhang G, et al, "Thresholded Smoothed ℓ0 Norm for Accelerated Sparse Recovery," IEEE Communications Letters, vol.19, no.6, pp.953-956, Jun., 2015. https://doi.org/10.1109/LCOMM.2015.2416711
  17. Mohimani H, Babaie-Zadeh M, Jutten C, "A Fast Approach for Overcomplete Sparse Decomposition Based on Smoothed ℓ0 Norm," IEEE Transactions on Signal Processing, vol.57, no.1, pp.289-301, Jan, 2009. https://doi.org/10.1109/TSP.2008.2007606
  18. Barata JCA, Hussein MS, "The Moore-Penrose Pseudoinverse: A Tutorial Review of the Theory," Brazilian Journal of Physics, vol.42, no.1-2, pp.146-165, Apr. 2012. https://doi.org/10.1007/s13538-011-0052-z