
Adaptive Selective Compressive Sensing based Signal Acquisition Oriented toward Strong Signal Noise Scene

  • Wen, Fangqing (Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education) ;
  • Zhang, Gong (Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education) ;
  • Ben, De (Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education)
  • Received : 2014.10.27
  • Accepted : 2015.07.20
  • Published : 2015.09.30

Abstract

This paper addresses the problem of acquiring a signal that has a sparse representation in a given orthonormal basis from fewer noisy measurements. The authors formulate the problem of random measurement under strong signal noise. The impact of white Gaussian signal noise on the recovery performance is analyzed to provide a theoretical basis for the reasonable design of the measurement matrix. Building on the idea that the measurement matrix can be adapted for noise suppression in an adaptive CS system, an adaptive selective compressive sensing (ASCS) scheme is proposed in which the measurement matrix is updated according to the noise information fed back by the processing center. In terms of objective recovery quality, failure rate and mean-square error (MSE), the scheme is compared with nonadaptive methods and existing CS measurement approaches. Extensive numerical experiments show that the proposed scheme has better noise suppression performance and improves the support recovery of sparse signals. The proposed scheme holds great potential for broadband signal applications such as biological signal measurement and radar signal detection.


1. Introduction

As an alternative paradigm to the Shannon-Nyquist sampling theorem, compressive sensing (CS) enables sparse signals to be acquired by sub-Nyquist analog-to-digital converters (ADCs), launching a revolution in signal collection, transmission and processing. CS theory states that if a signal is compressible or sparse in a transform domain, it can be recovered exactly with high probability from far fewer measurements via l1 -norm optimization [1]. Unlike the classical Shannon-Nyquist sampling theorem, which requires sampling signals at twice the bandwidth, CS promises to reduce the sampling rate, which depends only on the sparsity of the signal. Compared with the traditional radio frequency (RF) signal acquisition system, the sampling front-end of CS operates at a lower speed, lowering the cost of the front-end sensor (in size, weight and power consumption). The computation-intensive parts of the acquisition process are removed from the front-end sensor and transferred to a central processing back-end. Due to its potential in signal processing applications, CS has attracted vast interest in signal acquisition [2], radar detection [3], cognitive radio [4] and massive antenna arrays [5].

CS has been considered from an adaptive perspective in [6]-[10]. The parameterized Bayesian model proposed in [6] dynamically determines whether a sufficient number of CS measurements have been performed. In [7], an empirical Bayesian multitask learning algorithm is developed to improve the performance of the inversion. Analogous work addresses localization in wireless LANs [8]. These Bayesian methods have been demonstrated to achieve better recovery performance, and in practice they often require fewer noisy observations to recover sparse signals than nonadaptive competitors. In [9], an adaptive optimal measurement matrix design is studied in CS-based multiple-input multiple-output (MIMO) radar to improve the detection accuracy. In [10], an adaptive CS radar scheme is proposed in which the transmission waveform and measurement matrix are updated with the target scene information fed back by the recovery algorithm, achieving better detection performance than the traditional CS radar system.

Generally speaking, the measurement noise in CS can be classified into two categories by generation mechanism [11]. The first category is signal noise, i.e., jammers and interference in the transmission channel. The second category is processing noise caused by the processing and acquisition hardware, e.g., the quantization error of the acquisition system. Most of the previous literature focuses on CS acquisition and recovery under processing noise. Recent work in CS shows that the measurement process causes a noise folding phenomenon [12], which implies that the noise in the signal is eventually amplified by the measuring process. This finding has raised concerns among scholars [13][14]. In [13], the authors evaluate the performance of a CS-based wideband radio receiver in both signal noise and processing noise environments, and give effective suggestions for CS receiver evaluation. In [14], an enhanced l1 minimization recovery algorithm is developed for signal noise suppression, and the algorithm has been proven to provide relatively simple and precise theoretical guarantees. All of the above studies can be summarized as optimization methods applied after the sparse signal and noise have been acquired. When the acquisition system faces a strong signal noise scene, the benefit of these optimization methods may be diminished. Nonetheless, signal noise suppression has not been addressed from the perspective of measurement matrix optimization in current signal acquisition systems.

In this paper, we provide a new insight into sparse signal acquisition oriented toward strong signal noise scenes. The mechanism by which signal noise degrades the recovery performance is investigated. An adaptive selective compressive sensing (ASCS) scheme is proposed for signal noise suppression in the acquisition system. The measurement matrix is adapted according to the noise strength so as to measure the signal selectively, thus providing less noisy measurements. For robust noise priori estimation, the multiple measurement vectors (MMV) [15] model is used, and a method joining projection filtering in the compressive domain with subspace estimation is proposed. We evaluate the performance via simulations and compare the proposed scheme with a non-adaptive implementation.

The rest of the paper is organized as follows. In Section 2, we present the signal model and analyze the impact of the measurement process on signal noise. Section 3 presents the proposed ASCS scheme. Simulation results are given in Section 4. Finally, conclusions are drawn in Section 5.

Notation: Lower case and capital letters in bold denote vectors and matrices, respectively. The superscripts (•)T , (•)H , (•)-1 and (•)† represent the transpose, Hermitian transpose, inverse and pseudo-inverse operators, respectively. The subscripts (•)i• and (•)•j denote the i -th row and j -th column of a matrix; ∥•∥1 , ∥•∥2 and ∥•∥F denote the l1 -norm, l2 -norm and Frobenius norm, respectively.

 

2. Signal Model

2.1 Compressive Sensing

The CS theory states that if a signal is compressible or sparse in a transform domain, it can be recovered exactly with high probability from far fewer samples than required by the traditional Shannon-Nyquist sampling theorem. Without loss of generality, for any x ∈ ℝN×1 , there exist unique coefficients such that

where Ψ denotes an N × N orthogonal transform basis with the n -th column given by φn ∈ ℝN×1 , and s = [s1,s2,…,sN]T is a complex-valued vector of length N . The signal x is called K -sparse if no more than K elements of its sparse representation s are nonzero, i.e. ∥s∥0 = K with K ≪ N . The support of x is

In order to recover x one must identify supp (x) . Therefore, a natural strategy for signal recovery is support identification.

Now we consider a linear projection operator that computes M ( K < M < N ) inner products between x and a set of vectors

We collect the measurements to form a vector y = [y1,y2,…,yM]T . By arranging the projection operators as rows of an M×N measurement matrix Φ , the noisy measurement process in (3) can be represented as

where e = [e1,e2,…,eM]T represents the noisy environment effects, with each entry em being a zero-mean Gaussian random variable with variance σe² . As M is typically much smaller than N , the matrix Θ = ΦΨ represents a dimensionality reduction since it maps ℝN into ℝM , and (4) is an underdetermined system. The sparse solution to the linear inverse problem in (4) can be formulated as the following optimization problem

In general, this problem is NP-hard. [16] states that the l0 -norm optimization in (5) can be approximated by the l1 -norm relaxation with a bounded error under certain conditions

To ensure stable recovery of the sparse vector s by l1 -norm minimization, the matrix Θ needs to satisfy the restricted isometry property (RIP) [17] of order K with a very small constant δK , so that

In other words, Θ acts as an approximate isometry on the set of K -sparse vectors. Note that Gaussian matrices, Bernoulli matrices and uniformly random partial Fourier matrices provide reasonable RIP constants. A typical means of solving (6) is through an unconstrained l1 -norm regularized formulation

where η is a tradeoff parameter balancing the estimation quality and sparsity. The basic framework in (8) can be solved by techniques such as greedy algorithms [18] and Bayesian algorithms.
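To make the pipeline of (1)-(8) concrete, the sketch below builds a K -sparse signal in a unitary DFT basis, takes M random measurements as in (4), and identifies the support with a minimal OMP in the spirit of [18]. The dimensions follow the paper's later simulation setup (N = 150, M = 50, K = 3); the DFT basis and the noise-free measurement are simplifying assumptions, not the paper's exact configuration.

```python
import numpy as np

def omp(Theta, y, K):
    """Orthogonal Matching Pursuit: greedy support identification for (8)."""
    N = Theta.shape[1]
    residual, support = y.astype(complex), []
    for _ in range(K):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Theta.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected columns, then update the residual
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef
    s_hat = np.zeros(N, dtype=complex)
    s_hat[support] = coef
    return s_hat, sorted(support)

rng = np.random.default_rng(0)
N, M, K = 150, 50, 3
Psi = np.fft.fft(np.eye(N), norm="ortho")       # unitary DFT basis
s = np.zeros(N, dtype=complex)
true_support = sorted(rng.choice(N, size=K, replace=False).tolist())
s[true_support] = rng.uniform(1.0, 2.0, K)      # nonzero coefficients
Theta = rng.standard_normal((M, N)) @ Psi       # Theta = Phi @ Psi
Theta /= np.linalg.norm(Theta, axis=0)          # unit-norm columns, assumption 1)
y = Theta @ s                                   # noise-free measurements (e = 0)
s_hat, est_support = omp(Theta, y, K)
```

With M = 50 noiseless measurements of a 3-sparse vector, OMP recovers the exact support and coefficients with very high probability.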

2.2 Noise Folding in CS

The basic CS model in (4) is adequate when the only disturbance is measurement error or processing noise. However, in many practical scenarios the signal itself is contaminated by signal noise, which is not accounted for in (4). In [11], the authors present a generalized CS model

where n stands for the white signal noise with variance σn² , and e represents the processing noise. Basically, this is equivalent to stating that x is only approximately sparse. The noise situation in (9) differs subtly from the basic setting because the signal noise is acted upon by the matrix Θ , and Θn could potentially be rather large. Our chief interest here is to understand how n impacts the recovery performance.

Before establishing our main result concerning white signal noise, some useful assumptions are made. We suppose that the measurement matrix Θ ∈ ℂM×N fulfills the RIP of order K with constant δK . Furthermore, we suppose that:

1). each row Θm• (m = 1,2,…,M) of Θ is orthogonal to the others, i.e., Θm•Θk•H = 0 for m ≠ k , and each column Θ•n (n = 1,2,…,N) is normalized, namely ∥Θ•n∥2 = 1 .

2). each row has the same norm. Since ∥Θ∥F² = Σn ∥Θ•n∥2² = N , with the hypothesis in 1) we have ∥Θm•∥2² = N/M .

3). the acquisition noise e is ignored in our discussion, i.e. e = 0 .

In our formulation, we use λj(Θ) to denote the j -th largest eigenvalue of Θ and sj(Θ) to denote the j -th largest singular value of Θ , so that sj(Θ) = √λj(ΘHΘ) . To establish our main result concerning white signal noise, a useful lemma is first cited, which has been proven in [19].

Lemma (Lemma 7.1 of [19]). Suppose that Θ is an M×N matrix and let Λ be a set of indices with |Λ| ≤ K . If Θ satisfies the RIP of order K with constant δK , then for k = 1,2,…,K we have

We begin by noting that E{nnH} = σn²I , so the expectation of the measured noise power is

which establishes E{∥Θn∥2²} = Nσn² . From the RIP, we have (1−δK)∥s∥2² ≤ ∥Θs∥2² ≤ (1+δK)∥s∥2² , which implies that the sparse signal power is hardly changed by the measurement process. In order to quantify the impact of signal noise on the random measurement process, we define the impact factor Gainnoise as the ratio of the recovered noise power to the power of the noise component attached to the sparse signal. Let Λ be the index set whose elements are the indexes of the nonzero elements of s , i.e. Λ = supp(x) . The least-squares optimal recovery of s restricted to the index set Λ is given by

Since Θn is a white Gaussian process, we have

Combining (13) with (10) yields the expected Gainnoise . In the event that the noise n is a white random vector, we thus have

from which we observe that the noise added to the signal itself can be highly amplified by the measurement process when M ≪ N . In the literature, this effect is known as noise folding.
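The N/M amplification can be checked numerically. The sketch below draws a Θ satisfying assumptions 1)-2) (orthogonal rows of equal squared norm N/M) and estimates the per-measurement power of Θn by Monte Carlo; the construction via a random orthogonal matrix is an illustrative choice, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, sigma2, trials = 150, 50, 1.0, 2000

# Rows orthogonal with equal squared norm N/M (assumptions 1)-2)): take M rows
# of a random orthogonal matrix and rescale by sqrt(N/M).
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
Theta = np.sqrt(N / M) * Q[:M, :]

noise_power = 0.0
for _ in range(trials):
    n = np.sqrt(sigma2) * rng.standard_normal(N)   # white signal noise
    noise_power += np.mean((Theta @ n) ** 2)       # per-measurement power of Theta @ n
noise_power /= trials
# noise_power is close to sigma2 * N / M: the noise variance is folded by N/M
```

With N = 150 and M = 50 the per-measurement noise variance comes out near 3·σn², matching the N/M folding factor.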

 

3. Adaptive Compressive Sensing

Although prior research has validated the benefits of exploiting the RIP in measurement design [9][10], such as improving the recovery probability and decreasing the recovery error, these benefits diminish when faced with a strong signal noise scene. From the above analysis, the expected Gainnoise is closely related to the parameters M and N , the numbers of rows and columns of Θ . Generally, M is related to the RIP condition (which is bounded by K , N and δK ), and N in Gainnoise is related to the measured support of noise in Θ . However, only ΘΛ contributes to the sparse vector s , while the whole matrix measures the signal noise. In the traditional Shannon-Nyquist sampling system, to keep noise out of the passband, an antialiasing filter is applied before the sampling process. Inspired by the necessity of antialiasing filtering in bandpass signal sampling, a selective measuring scheme is proposed in this paper: the measurement matrix only senses the spectrum of interest, where the sparse spectrum most likely lies.

The measurement matrix in our scheme is modified into

where A ∈ ℝM×N is a random matrix and Ω is an index set. ℑΩ(A) is defined as a selective operation which sets the n -th (n ∈ Ω) columns of A to zero, acting as an antialiasing filter in our scheme.
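The selective operation ℑΩ(•) in (15) amounts to zeroing the columns indexed by Ω; a minimal sketch:

```python
import numpy as np

def selective_measure(A, Omega):
    """Selective operation of (15): zero the columns of A indexed by Omega
    so the corresponding (noise-dominated) components are never measured."""
    Phi = A.astype(float)          # astype returns a copy; A is left untouched
    Phi[:, sorted(Omega)] = 0.0
    return Phi
```

Because the zeroed columns contribute nothing to y = Φx, the components of x at those positions never enter the measurements, which is exactly the antialiasing role described above.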

3.1 Projection Filtering in the Compressed Domain

The core of the proposed scheme is the estimation of the index set Ω where the noise spectrum most likely lies. It is necessary to extract the information that each vector Θ•n hides in y . The simplest way is to project y onto each vector Θ•n . However, due to the nonorthogonality of the columns of Θ , the projection results would interfere with each other, and a low-SNR scene increases the difficulty of information extraction. To minimize the projection interference, a set of projection filters is applied. The output of the n -th (n = 1,2,…,N) filter is formed as

The output energy is defined as E{|zn|²} = hnHRyhn , with the correlation matrix of the measured signal Ry = E{yyH} . Our objective is to minimize the output energy. To avoid trivial solutions such as hn = 0 , a set of linear constraints is added to the objective function, which can be expressed as

The minimum output energy can be achieved with a proper choice of hnopt . We can solve the constrained minimization problem (17) for hnopt by the Lagrange multiplier method, resulting in the following unconstrained objective function

Forcing the gradient of the objective function to zero, the optimal value of hnopt that minimizes the objective function can be evaluated as follows

The optimal output energy of zn follows by substituting hnopt into the objective, and the desired outputs of the filter banks z = [z1,z2,…,zN] are

with Q collecting the optimal filters and D = diag(d1,d2,…,dN) denoting a diagonal matrix with principal diagonal elements d1,d2,…,dN in turn. Note that in the ideal case the matrix Q can be estimated precisely. Therefore, the Lagrange method converges to the optimal solution in a single iteration, as expected for a quadratic objective function.
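Under the linear constraint hnHΘ•n = 1, the Lagrange solution above is the classical minimum-output-energy (Capon-type) filter hnopt = Ry-1Θ•n / (Θ•nHRy-1Θ•n); a sketch, with Ry assumed known:

```python
import numpy as np

def moe_filter(Ry, theta_n):
    """Minimum-output-energy filter: minimize h^H Ry h subject to h^H theta_n = 1."""
    w = np.linalg.solve(Ry, theta_n)   # Ry^{-1} theta_n
    return w / (theta_n.conj() @ w)    # scale to satisfy the unit-response constraint

# usage: z_n = h.conj() @ y for each column Theta[:, n], forming the filtered vector z
```

For Ry = I the filter reduces to a matched filter in the direction of Θ•n; in general it passes that component undistorted while suppressing energy from the other, interfering columns.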

3.2 Noise Information Estimation Using Subspace Method

The model in (9) is a typical single measurement vector (SMV) model. When a sequence of measurement vectors is available, (9) can be extended to the multiple measurement vectors (MMV) model, which provides informative coupling between the vectors. The noisy MMV problem can be stated as solving the following underdetermined system of equations

where L is the number of measurement vectors. Since the matrix Θ is common to each of the representation problem, (21) can be rewritten as

where Y = [y1,y2,…,yL] , S = [s1,s2,…,sL] and E = [e1,e2,…,eL] . An additional assumption is that the solution vectors s1,s2,…,sL are sparse and share the same sparsity profile; equivalently, S is an unknown source matrix whose nonzero rows represent the targets. In many applications, such as wireless communication and radar detection, the spectrum that signals occupy is slowly time-varying, hence the common sparsity assumption is valid.

The presence of multiple measurements is helpful in estimating the set Ω. With multiple measurements, the desired filtered output in (20) can be represented as

where N = QHE and H = QHΘ . The covariance matrix of the filtered signal is Rz = E{ZZH} . The eigenvalue decomposition of Rz is

where Σ = diag(λ1,λ2,…,λN) and the eigenvalues satisfy λ1 ≥ … ≥ λK > λK+1 = … = λN = σ² . The eigenvectors u1,u2,…,uK corresponding to the K largest eigenvalues λ1,λ2,…,λK construct the signal subspace Us = [u1,u2,…,uK] , with Σs = [λ1,λ2,…,λK] . Similarly, the last N − K eigenvalues depend only on the noise. The eigenvectors uK+1,uK+2,…,uN corresponding to λK+1,λK+2,…,λN construct the noise subspace Un = [uK+1,uK+2,…,uN] , with Σn = [λK+1,λK+2,…,λN] . Let Λ stand for the index set corresponding to the K nonzero rows of S ; then we have

where RSΛ denotes the covariance of the nonzero source rows. It can be seen from (25) that the signal subspace is spanned by the columns of HΛ . Since RSΛ is a non-singular matrix, we get UnHHΛ = 0 , which indicates that the column vectors of HΛ are orthogonal to the noise subspace. The spectrum function of the sparse locations can then be deduced

As n varies, there are K large values in (26), which correspond to the sparse positions. The peaks are obvious at high SNR, and although this superiority dwindles at low SNR, the orthogonality between HΛ and Un is barely affected. Hence the indexes corresponding to the smallest N − K values could be treated as the positions of the noise, which should be ignored by the measurement process for noise suppression. To avoid any confusion at strong signal noise levels, we only treat the indexes corresponding to the smallest P (2K < P < N) values in (26) as highly likely to be noise. (26) can also be expressed as

where Pn = H•n(H•nHH•n)-1H•nH represents the projection matrix onto H•n .
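The steps of this subsection can be sketched as a MUSIC-style routine. The inverse noise-subspace projection used below is a standard stand-in for the spectrum function (26), since the printed equation is not reproduced here; dimensions are illustrative.

```python
import numpy as np

def estimate_noise_set(Rz, H, K, P):
    """Estimate the index set Omega of likely noise positions from the noise
    subspace of the filtered covariance Rz (a MUSIC-style sketch)."""
    _, vecs = np.linalg.eigh(Rz)            # eigenvalues in ascending order
    Un = vecs[:, : Rz.shape[0] - K]         # N-K noise-subspace eigenvectors
    # columns of H on the sparse support are (near-)orthogonal to Un,
    # so their projection onto the noise subspace is small
    proj = np.linalg.norm(Un.conj().T @ H, axis=0) ** 2
    spectrum = 1.0 / (proj + 1e-12)         # peaks at the K sparse positions
    # the indexes of the P smallest spectrum values are treated as noise positions
    return set(np.argsort(spectrum)[:P].tolist()), spectrum
```

On synthetic data whose covariance is built from K columns of H plus weak white noise, the returned set Ω excludes the sparse support, as intended.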

3.3 Signal Reconstruction

Recovery of the signal from the linear projections can be accomplished by solving (8). A variety of optimization algorithms are available for the recovery problem, such as Orthogonal Matching Pursuit (OMP), Compressive Sampling Matching Pursuit (CoSaMP), the FOCal Underdetermined System Solver (FOCUSS) and Sparse Bayesian Learning (SBL). The regularized M-FOCUSS [15] is chosen for its good compromise between computational complexity and reconstruction accuracy, and can be summarized as the following iteration steps

where β stands for the regularization parameter, and the p -norm is set to p = 0.8 , as suggested by the authors, for a robust solution.

The regularized M-FOCUSS algorithm can be treated as solving a weighted least-squares problem at each iteration. The weight matrix is initialized to a nonzero value and tends to become stable as the algorithm iterates. The algorithm is terminated once the maximum iteration number is reached or the following convergence criterion has been satisfied

where ε is a user-selected parameter. The proposed adaptive CS scheme can be summarized as follows.

(1). Initialize the measurement matrix Φ as in (15) with Ω = Ø . Collect the compressed data Y and calculate Z using (23).

(2). Estimate the compressed signal covariance matrix RZ , perform an EVD of RZ , and isolate the noise subspace Un .

(3). Compute the spectrum function in (26) or (27), and select the indexes corresponding to the P smallest values in (26) or the P largest values in (27) as Ω .

(4). Update the measurement matrix Φ using (15).

(5). Measure the signal x using the updated measurement matrix Φ , and recover the sparse information using the iterations of (28) until (29) is satisfied.
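The M-FOCUSS iteration used in step (5) can be sketched compactly, following the reweighted least-squares form of Cotter et al. [15]; p, β and the convergence threshold below are illustrative values.

```python
import numpy as np

def m_focuss(Theta, Y, p=0.8, beta=1e-8, max_iter=100, eps=1e-6):
    """Regularized M-FOCUSS (after Cotter et al. [15]): iteratively
    reweighted least squares promoting row sparsity of S in Y = Theta @ S."""
    M, N = Theta.shape
    S = np.ones((N, Y.shape[1]))                      # nonzero initialization
    for _ in range(max_iter):
        S_old = S
        w = np.linalg.norm(S, axis=1) ** (1 - p / 2)  # row-norm weights
        A = Theta * w                                 # Theta @ diag(w)
        G = A @ A.conj().T + beta * np.eye(M)         # regularized Gram matrix
        S = w[:, None] * (A.conj().T @ np.linalg.solve(G, Y))
        # convergence criterion: small relative change of the solution
        if np.linalg.norm(S - S_old) < eps * max(np.linalg.norm(S), 1e-12):
            break
    return S
```

On a noiseless MMV instance with a common row-sparse support, the iteration concentrates the energy of S on the true rows.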

 

4. Experimental Results and Analysis

Extensive computer experiments have been conducted, and a few representative and informative results are presented. We consider a signal sparse in the Fourier domain. Unless stated otherwise, the following conditions apply. We set N = 150 and K = 3 , the compressive measurement dimension is M = 50 , the number of measurement vectors is L = 10 , and the selective parameter is set to P = 50 . In our simulations, the SNR is defined as SNR = 20log10(∥S∥2/∥N∥2) , where N stands for the signal noise matrix. The proposed adaptive method is compared with the adaptive compressive sensing (ACS) method in [10] (using the M-FOCUSS algorithm for sparse reconstruction) and with the traditional nonadaptive scheme using the recovery algorithms OMP, M-FOCUSS and MSBL. To assess the optimization performance of the proposed scheme, 1000 Monte Carlo trials are conducted. In each trial the initial measurement matrix is created with columns uniformly drawn from the surface of a unit hypersphere, and the source matrix S ∈ ℝN×L is randomly generated with K nonzero rows (i.e., sources) whose indexes are randomly chosen. Two measures are applied for performance assessment. The first is the failure rate defined in [20]: a trial fails if the indexes of the K estimated sources with the largest norms differ from the true indexes. The second is the mean-square error (MSE) defined as MSE = ∥Ŝ − S∥F²/∥S∥F² , where Ŝ represents the reconstructed source matrix.
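The two assessment measures can be written down directly; the normalized Frobenius form of the MSE is an assumption here, since the printed formula is not reproduced.

```python
import numpy as np

def mse(S_hat, S):
    """Normalized mean-square error between reconstruction and truth
    (Frobenius norms; the normalization is an assumed convention)."""
    return np.linalg.norm(S_hat - S) ** 2 / np.linalg.norm(S) ** 2

def is_failure(S_hat, true_support, K):
    """Failure rule of [20]: a trial fails if the K largest-norm rows
    of S_hat do not coincide with the true source indexes."""
    est = set(np.argsort(np.linalg.norm(S_hat, axis=1))[-K:].tolist())
    return est != set(true_support)
```

Averaging `is_failure` and `mse` over the 1000 Monte Carlo trials yields the failure rate and MSE curves reported in the figures.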

We explore the recovery performance at different signal noise levels. Fig. 1 depicts the performance curves, from which we conclude that the adaptive scheme outperforms the ACS method and the nonadaptive one in the same noise environment. With increasing SNR, as expected, all schemes achieve better performance. Meanwhile, we notice that when SNR ≤ −4dB the benefit of multiple measurement vectors diminishes and the failure rate deteriorates sharply. One obvious observation is that the proposed adaptive scheme achieves a lower failure rate under extreme noise conditions. According to the RIP in CS, once M ≥ Cμ(Θ)KlogN ( C is a constant and μ(Θ) is defined as the maximum absolute value of the normalized inner product between all columns in Θ ), one can accurately recover the sparse vector with high probability. In our setup the RIP is satisfied, so further optimization of the measurement matrix cannot improve the performance significantly. However, our ASCS scheme suppresses the signal noise and hence provides high-precision recovery performance.

Fig. 1. Performance comparison at various SNRs

Fig. 2 depicts simulation results for different signal sparsities, with the SNR set to 0dB. As the figure shows, increasing the sparsity K decreases the recovery performance, but the proposed adaptive method still achieves a better failure rate and MSE. This phenomenon can be explained as follows. The configured parameters in our simulation are robust only for K ≤ 6 according to the RIP [15]. For K ≥ 7, the RIP no longer holds, and the recovery algorithms fail to recover S with high probability. The proposed adaptive method lets less signal noise be measured through the measurement process, so the adaptive method performs better than the nonadaptive ones with the same configuration.

Fig. 2. Performance comparison with different sparsity K

Fig. 3 shows that the failure rate decreases exponentially with the number of measurement vectors L, and that increasing L narrows the performance gap between the adaptive scheme and the nonadaptive one. In practical applications, under the common sparsity assumption on the source S, we cannot obtain many measurement vectors, as the sparsity profile of practical signals is time-varying, for example in frequency hopping systems. So the common sparsity assumption is valid only for a small L in the MMV model. Future research will address this problem.

Fig. 3. Performance under different numbers of measurement vectors L

Finally, we investigate the application of the proposed method to direction-of-arrival (DOA) estimation in a spatial CS based multiple-input multiple-output (MIMO) radar system [21]. In this application, the MIMO radar system is configured with 10 transmit antennas and 10 receive antennas, the number of snapshots is 5, and 3 targets are located in the far field with DOAs θ = [15,40,65] . Unlike the Fourier basis used in the above simulations, the sparse dictionary in this application consists of a series of steering vectors with angles ranging from 0˚ to 90˚ at a resolution of 0.25˚ . Fig. 4 depicts the performance comparison for different SNRs and different numbers of measurements. As shown in the figures, the OMP method suffers a high failure rate, caused by the severe mutual coherence between the atoms of the dictionary. The greedy OMP algorithm forces the residual to be orthogonal to the atoms chosen in the last iteration, which may destroy information hidden in the residual when updating it. Thanks to its noise suppression function, the proposed scheme provides nearly exact estimation results.

Fig. 4. Performance comparison when applied in spatial compressive sensing MIMO radar
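A steering-vector dictionary of the kind described above can be sketched as follows; the uniform linear array with half-wavelength spacing is a hypothetical geometry for illustration, since the exact spatial-CS array configuration is specified in [21].

```python
import numpy as np

def steering_dictionary(n_elements, angles_deg, d_over_lambda=0.5):
    """Dictionary of uniform-linear-array steering vectors over an angle grid
    (half-wavelength element spacing assumed)."""
    angles = np.deg2rad(np.asarray(angles_deg))
    m = np.arange(n_elements)[:, None]   # element index
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(angles)[None, :])

grid = np.linspace(0.0, 90.0, 361)       # 0 to 90 deg at 0.25 deg resolution
A = steering_dictionary(10, grid)        # 10-element array, 361 atoms
```

Neighboring atoms of such a grid are strongly correlated, which is the mutual-coherence difficulty that degrades greedy OMP in this application.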

 

5. Conclusion

In this paper, we proposed an ASCS scheme for signal noise suppression in a CS based signal acquisition system. A computational framework for measurement matrix design is investigated, which transforms the measurement matrix design into noise priori estimation. A two-step process is developed to locate the noise spectrum precisely: a set of projection filter banks is first used to minimize the projection interferences, and a subspace method is then applied for noise information estimation. Simulation results demonstrate the effectiveness of the proposed scheme. From the viewpoint of future implementation, measurement noise should be taken into consideration, and more efficient algorithms have to be developed for source pre-estimation at low SNR. On the other hand, how to deal with real-world signals (e.g., image, video, or audio) is a problem needing further study.

References

  1. D.L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, April 2006. https://doi.org/10.1109/TIT.2006.871582
  2. F. Wen, Y. Tao, G. Zhang, “Analog to information conversion using multi-comparator based integrate-and-fire sampler,” Electronics Letters, vol. 51, no. 3, pp. 246-247, February 2015. https://doi.org/10.1049/el.2014.1950
  3. Y.J. Li, R.F. Song, “A new compressive feedback scheme based on distributed compressed sensing for time-correlated MIMO channel,” KSII Transactions on Internet and Information Systems, vol. 6, no. 2, pp. 580-592, Feb. 2012. https://doi.org/10.3837/tiis.2012.02.008
  4. H. Anh, I. Koo, “Primary user localization using Bayesian compressive sensing and path-loss exponent estimation for cognitive radio networks,” KSII Transactions on Internet and Information Systems, vol. 7, no. 10, pp. 2338-2356, Oct. 2013. https://doi.org/10.3837/tiis.2013.10.001
  5. H.Q. Gao, R.F. Song, “Distributed compressive sensing based channel feedback scheme for massive antenna arrays with spatial correlation,” KSII Transactions on Internet and Information Systems, vol. 8, no. 1, pp. 108-122, Jan. 2014. https://doi.org/10.3837/tiis.2014.01.007
  6. S.H. Ji, Y. Xue, L. Carin, “Bayesian compressive sensing,” IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2346-2356, June 2008. https://doi.org/10.1109/TSP.2007.914345
  7. S.H. Ji, D. Dunson, L. Carin, “Multitask compressive sensing,” IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 92-106, Jan. 2009. https://doi.org/10.1109/TSP.2008.2005866
  8. R.P. Li, Z.F. Zhao, Y. Zhang, J. Palicot, H.G. Zhang, “Adaptive multi-task compressive sensing for localization in wireless local area networks,” IET Communications, vol. 8, no. 10, pp. 1736-1744, July 2014. https://doi.org/10.1049/iet-com.2013.1019
  9. Y. Yao, A.P. Petropulu, H.V. Poor, “Measurement matrix design for compressive sensing-based MIMO radar,” IEEE Transactions on Signal Processing, vol. 59, no. 11, pp. 5338-5352, Nov. 2011. https://doi.org/10.1109/TSP.2011.2162328
  10. J.D. Zhang, D.Y. Zhu, G. Zhang, “Adaptive compressed sensing radar oriented toward cognitive detection in dynamic sparse target scene,” IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1718-1729, April 2012. https://doi.org/10.1109/TSP.2012.2183127
  11. M.A. Iwen, A.H. Tewfik, “Adaptive strategies for target detection and localization in noisy environments,” IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2344-2353, May 2012. https://doi.org/10.1109/TSP.2012.2187201
  12. E. Arias-Castro, Y.C. Eldar, “Noise folding in compressed sensing,” IEEE Signal Processing Letters, vol. 18, no. 8, pp. 478-481, Aug. 2011. https://doi.org/10.1109/LSP.2011.2159837
  13. M.A. Davenport, J.N. Laska, J. Treichler, R.G. Baraniuk, “The pros and cons of compressive sensing for wideband signal acquisition: noise folding versus dynamic range,” IEEE Transactions on Signal Processing, vol. 60, no. 9, pp. 4628-4642, Sept. 2012. https://doi.org/10.1109/TSP.2012.2201149
  14. M. Artina, M. Fornasier, S. Peter, “Damping noise-folding and enhanced support recovery in compressed sensing,” arXiv preprint arXiv:1307.5725, 2013.
  15. S.F. Cotter, B.D. Rao, K. Engan, K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Transactions on Signal Processing, vol. 53, no. 7, pp. 2477-2488, July 2005. https://doi.org/10.1109/TSP.2005.849172
  16. E.J. Candes, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus Mathematique, vol. 346, no. 9, pp. 589-592, 2008. https://doi.org/10.1016/j.crma.2008.03.014
  17. E.J. Candes, T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, Dec. 2005. https://doi.org/10.1109/TIT.2005.858979
  18. J.A. Tropp, A.C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007. https://doi.org/10.1109/TIT.2007.909108
  19. M.A. Davenport, “Random observations on random observations: sparse signal acquisition and processing,” PhD thesis, Rice University, 2010.
  20. D.P. Wipf, B.D. Rao, “An empirical Bayesian strategy for solving the simultaneous sparse approximation problem,” IEEE Transactions on Signal Processing, vol. 55, no. 7, pp. 3704-3716, July 2007. https://doi.org/10.1109/TSP.2007.894265
  21. M. Rossi, A.M. Haimovich, Y.C. Eldar, “Spatial compressive sensing for MIMO radar,” IEEE Transactions on Signal Processing, vol. 62, no. 2, pp. 419-430, Jan. 2014. https://doi.org/10.1109/TSP.2013.2289875