Binary Sequence Family for Chaotic Compressed Sensing

  • Lu, Cunbo (Institute of Applied Physics and Computational Mathematics) ;
  • Chen, Wengu (Institute of Applied Physics and Computational Mathematics) ;
  • Xu, Haibo (Institute of Applied Physics and Computational Mathematics)
  • Received : 2019.01.09
  • Accepted : 2019.03.30
  • Published : 2019.09.30

Abstract

It is significant to construct deterministic measurement matrices with easy hardware implementation, good sensing performance and good cryptographic properties for practical compressed sensing (CS) applications. In this paper, a deterministic construction method for bipolar chaotic measurement matrices is presented based on a binary sequence family (BSF) and the Chebyshev chaotic sequence. The column vectors of these matrices are the sequences of the BSF, with each 1 substituted by -1 and each 0 by 1. The proposed matrices, which exploit the pseudo-randomness of the Chebyshev sequence, are sensitive to the initial state. The performance of the proposed matrices is analyzed from the perspective of coherence. Theoretical analysis and simulation experiments show that the initial state has limited influence on the recovery accuracy of the proposed matrices and that they outperform their Gaussian and Bernoulli counterparts in recovery accuracy. The proposed matrices admit easy hardware implementation by means of linear feedback shift register (LFSR) structures and a numeric converter, which is conducive to practical CS.

1. Introduction

 Different from the Nyquist sampling theorem, compressed sensing (CS) is a revolutionary signal sampling framework proposed by Candès, Romberg, Tao and Donoho in 2006 [1, 2]. It improves sampling efficiency by sampling sparse signals at a rate far lower than the Nyquist rate. By exploiting the sparsity property, the original high-dimensional sparse signal can be recovered exactly from the lower-dimensional measurement vector with high probability by solving an optimization problem. The new idea of CS has attracted extensive attention in academic circles and has been applied to various research fields, such as image processing, information theory, wireless communication, encryption and radar imaging. CS also has potential applications in areas such as big video data [3] and object tracking [4, 5]. The process of CS can be viewed as having two stages: data sampling and signal recovery. Let \(\mathbf{x}=\left\{x_{i}\right\}_{i=1}^{N} \in \mathbf{R}^{N}\) be a \(k\)-sparse original signal, where \(\|\mathbf{x}\|_{0}=\left|\left\{i \mid x_{i} \neq 0\right\}\right| \leq k\). The lower-dimensional observation signal \(\mathbf{y} \in \mathbf{R}^{M}\) can be obtained from its linear measurements with a measurement matrix \(\mathbf{A} \in \mathbf{R}^{M \times N}\), where \(M<N\). In matrix representation, \(\mathbf{y}=\mathbf{A} \mathbf{x}\). This linear process is the data sampling process of CS. As for the signal recovery stage, the original high-dimensional sparse signal \(\mathbf{x}\) can be reconstructed exactly from the lower-dimensional measurement vector \(\mathbf{y}\) by solving the following \(l_0\) minimization optimization problem

\(\min _{x}\|\mathbf{x}\|_{0} \text { subject to } \mathbf{y}=\mathbf{A x}\)        (1)

 Solving the above problem is NP-hard [6]. The CS theory proves that with a proper measurement matrix \(\mathbf{A}\), solving problem (1) can be replaced with solving the following \(l_1\) minimization optimization problem

\(\min _{x}\|\mathbf{x}\|_{1} \text { subject to } \mathbf{y}=\mathbf{A x}\)        (2)

where \(\|\mathbf{x}\|_{1}=\sum_{i=1}^{N}\left|x_{i}\right|\). In this problem, the sparsest estimate of \(\mathbf{x}\) can be obtained by the basis pursuit (BP) algorithm [7]. Besides, there are greedy algorithms for solving problem (1) directly, such as orthogonal matching pursuit (OMP) [8].
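Since OMP plays a central role in the experiments later in the paper, a minimal sketch may be helpful. The version below is a standard textbook form of OMP (greedy column selection followed by least-squares re-projection onto the chosen support), not necessarily the exact implementation used in the paper's experiments.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Recover a k-sparse x from y = A @ x via orthogonal matching pursuit."""
    M, N = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-project y onto the span of all chosen columns (least squares).
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat
```

With a matrix of low coherence (e.g., orthonormal columns), OMP recovers a k-sparse signal exactly in k iterations.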

 In CS theory, the measurement matrix plays a vital role. In the data sampling stage, a better measurement matrix needs a smaller number of measurements to achieve the same reconstruction accuracy. In the signal recovery stage, a better measurement matrix achieves a higher reconstruction accuracy with the same number of measurements. Overall, a good measurement matrix \(\mathbf{A} \in \mathbf{R}^{M \times N}\) should ensure that the projected measurements \(\mathbf{y} \in \mathbf{R}^{M}\) preserve all the significant information of the original signal \(\mathbf{x} \in \mathbf{R}^{N}\), so that \(\mathbf{x}\) can be reconstructed exactly from the lower-dimensional measurements \(\mathbf{y}\) with high probability. Candès and Tao [6] proposed a criterion, named the Restricted Isometry Property (RIP), which the measurement matrix should satisfy.

Definition 1.1 For a matrix \(\mathbf{A} \in \mathbf{R}^{M \times N}\), suppose there exists a number \(\delta_{k} \in[0,1)\) such that

\(\left(1-\delta_{k}\right)\|\mathbf{x}\|_{2}^{2} \leq\|\mathbf{A x}\|_{2}^{2} \leq\left(1+\delta_{k}\right)\|\mathbf{x}\|_{2}^{2}\)        (3)

holds for any \(k\)-sparse signal \(\mathbf{x} \in \mathbf{R}^{N}\). Then the matrix \(\mathbf{A}\) is said to satisfy the RIP of order \(k\). The smallest such \(\delta_{k}\) is called the restricted isometry constant (RIC) of order \(k\).

 Under some conditions on \(\delta_{k}\), RIP implies that the solution of problem (1) coincides with that of problem (2) [9, 10], provided that a solution of problem (1) exists.

 Coherence is another important criterion for constructing RIP matrices.

 Definition 1.2 Let \(\mathbf{a}_{1}, \mathbf{a}_{2}, \cdots, \mathbf{a}_{N}\) be the column vectors of the matrix \(\mathbf{A}\). Then its coherence \(\mu(\mathbf{A})\) is defined as

\(\mu(\mathbf{A})=\max _{1 \leq i \neq j \leq N} \frac{\left|\left\langle\mathbf{a}_{i}, \mathbf{a}_{j}\right\rangle\right|}{\left\|\mathbf{a}_{i}\right\|_{2} \cdot\left\|\mathbf{a}_{j}\right\|_{2}}\)        (4)

where \(\left\langle\mathbf{a}_{i}, \mathbf{a}_{j}\right\rangle=\mathbf{a}_{i}^{T} \mathbf{a}_{j}\) is the inner product of vectors \(\mathbf{a}_i\) and \(\mathbf{a}_j\) .

 The following lemma [10-12] relates the RIC \(\delta_{k}\) and the coherence \(\mu\) .

 Lemma 1.1 For a matrix \(\mathbf{A}\) , the relationship between the coherence \(\mu(\mathbf{A})\) and the \(k\) order RIC \(\delta_{k}\) is \(\delta_{k} \leq \mu(\mathbf{A})(k-1)\) , where \(k<\frac{1}{\mu(\mathbf{A})}+1\) .

 From the above lemma, it can be seen that matrices with low coherence satisfy the RIP and are natural candidates for CS matrices.
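Both quantities can be computed exactly for small matrices, which makes the relation in Lemma 1.1 easy to check numerically. The sketch below is illustrative only; the exhaustive RIC search enumerates all \(\binom{N}{k}\) supports and is feasible only for small \(N\).

```python
import itertools
import numpy as np

def coherence(A):
    """Coherence (4): largest |normalized inner product| over distinct columns."""
    G = A / np.linalg.norm(A, axis=0)      # unit-norm columns
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)               # ignore self inner products
    return C.max()

def ric(A, k):
    """Exact RIC of order k via brute force over all column supports."""
    delta = 0.0
    for S in itertools.combinations(range(A.shape[1]), k):
        eigs = np.linalg.eigvalsh(A[:, S].T @ A[:, S])
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta
```

For unit-norm columns and \(k=2\), the RIC equals the coherence, so the bound \(\delta_{k} \leq \mu(\mathbf{A})(k-1)\) holds with equality at \(k=2\).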

 As seen in [13], if \(k\) satisfies \(k<\frac{1}{2}\left[1+\frac{1}{\mu(\mathbf{A})}\right]\), any \(k\)-sparse signal \(\mathbf{x}\) can be reconstructed accurately from its undersampled linear measurements \(\mathbf{y}=\mathbf{A} \mathbf{x}\) via the BP or OMP algorithm. Therefore, when designing a measurement matrix \(\mathbf{A}\), the upper bound on the reconstructable signal sparsity \(k\) can be increased by reducing the coherence \(\mu(\mathbf{A})\), which means an increase in reconstruction performance. To reconstruct the original signal with high accuracy, the coherence \(\mu(\mathbf{A})\) should therefore be reduced as far as possible.

 The OMP algorithm involves lower complexity than the BP algorithm and requires a shorter running time. Therefore, it is relatively simple and fast in hardware implementation and becomes a widely used algorithm in hardware design [14, 15]. In this paper, considering the practical CS applications, the OMP algorithm is applied for signal recovery to benefit from the low coherence of the CS matrices.

 Both RIP [16-19] and coherence [9-13, 20-27] are important tools to analyze the property of measurement matrices. In this paper, coherence will be adopted to analyze and illustrate the property of constructed measurement matrices, because it is easier to compute.

 Existing measurement matrices can be divided into two categories: random and deterministic. Among the former, the most widely used are Gaussian and Bernoulli matrices. Because random matrices satisfy the RIP with overwhelming probability, they are widely used in scientific research. However, every element of a random matrix obeys a certain probability distribution. To realize a random matrix, all elements must be stored, and the process must be repeated whenever a new realization is needed, which costs considerable storage resources. Random number generation also places high demands on the hardware, which hinders hardware implementation and limits the practical applications of CS. These deficiencies can be overcome by deterministic measurement matrices, which get rid of the randomness. Although deterministic matrices may require complex mathematical operations during their construction, all of their elements can be computed and generated on the fly only once, thus providing storage efficiency. Recently, many researchers have exploited existing theories and techniques to construct deterministic measurement matrices, such as Euler squares [9], extremal set theory [10], near orthogonal systems [12], chaotic systems [20-22], Legendre sequences [23], optimal codebooks [24], bipartite graphs [25], low-density parity-check (LDPC) codes [13, 14, 26, 28], equiangular tight frame theory [27], Reed-Muller sequences [29] and the sparse fast Fourier transform [30]. In particular, Sasmal et al. [11] proposed optimal deterministic binary CS matrices by using a specialized composition rule that exploits the properties of existing binary matrices. The above-mentioned deterministic measurement matrices show good sensing performance.

 An m-sequence is a type of pseudorandom binary sequence, also called a maximum-length LFSR sequence. The generation of an m-sequence depends on the feedback coefficients of an LFSR associated with a feedback polynomial; different feedback polynomials generate distinct m-sequences [31]. The balance, run distribution and auto-correlation properties of m-sequences are similar to the basic properties of random sequences [32]. Therefore, the m-sequence is the most widely used pseudorandom sequence. In [31], a BSF is constructed based on linear combinations of m-sequences or their shifts such that the resulting sequences have low correlation. The implementation of the BSF is extremely easy by summing LFSR outputs. This paper relates the notion of the BSF to the design of deterministic measurement matrices.

 In this paper, inspired by the BSF in [31] and the Chebyshev chaotic sequence in [21], we construct a class of deterministic bipolar chaotic measurement matrices, named BSFDBC, with elements +1 and -1. First, we choose the trace representative function given in [31] to generate the set of binary pseudo-random sequences that constitutes the BSF. Then, by numeric conversion, the BSF is converted to the corresponding bipolar sequence family. By selecting some sequences from the bipolar sequence family and using the chaotic-based permutation algorithm [21] to arrange them in a designed order as column vectors, the proposed BSFDBC matrix is obtained. The BSFDBC matrices have good potential cryptographic properties because a brute-force search of the permutation operator is of high complexity.

 The coherence of BSFDBC matrices is investigated and compared with that of their Gaussian and Bernoulli counterparts. Theoretical analysis and simulation experiments show that the initial state has limited influence on the recovery accuracy of the proposed BSFDBC matrices and that they outperform their Gaussian and Bernoulli counterparts in recovery accuracy. Simulation experiments also show that the BSFDBC matrix is sensitive to its initial state.

 The remainder of this paper is organized as follows. Section 2 introduces some preliminaries about finite field. Section 3 presents the deterministic construction procedure of BSFDBC matrices and a related example. Section 4 uses the coherence to analyze the proposed BSFDBC matrices and compares the coherence of the BSFDBC matrices with their Gaussian and Bernoulli counterparts. Simulation experiments are given to investigate the performance of proposed BSFDBC matrices in Section 5. Finally, Section 6 concludes this paper.

2. Preliminaries

 Definition 2.1 Suppose \(\beta\) is a primitive field element of finite field \(GF(q)\) with \(q\) elements, then all the field elements of \(GF(q)\) can be generated with 0 and the powers of \(\beta\) , that is \(G F(q)=\left\{0, \beta^{0}=1, \beta, \cdots, \beta^{q-2}\right\}\) .

 Among \(\left\{0,1, \beta, \cdots, \beta^{q-2}\right\}\) , the last \(q-1\) nonzero elements constitute the multiplicative group \(G F(q) \backslash\{0\}\) , which is also denoted as \(G F(q)^{*}\) . For describing convenience, all elements of \(GF(q)\) can also be expressed as \(\{0,1, \cdots, q-1\}\).

 Definition 2.2 Let \(m\), \(n\) be positive integers and \(m\) be a factor of \(n\). The trace function from \(G F\left(2^{n}\right)\) to \(G F\left(2^{m}\right)\), denoted \(T r_{m}^{n}(x)\), is

\(T r_{m}^{n}(x)=x+x^{2^{m}}+\ldots+x^{2^{m\left(\frac{n}{m}-1\right)}}, x \in G F\left(2^{n}\right).\)        (5)

 When \(m=1\), \(G F\left(2^{m}\right)=G F(2)=\{0,1\}\) . For describing convenience, \(T r_{1}^{n}(x)\) can also be simply expressed as \(\operatorname{Tr}(x)\) .
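For \(m=1\), the absolute trace (5) can be computed by repeated Frobenius squaring once arithmetic in \(G F\left(2^{n}\right)\) is available. The sketch below (illustrative only) represents field elements as integers modulo the primitive polynomial \(x^{5}+x^{2}+1\) of \(G F\left(2^{5}\right)\); this particular polynomial is an assumed choice, not one fixed by the paper.

```python
N_BITS, POLY = 5, 0b100101   # GF(2^5) with assumed primitive polynomial x^5 + x^2 + 1

def gf_mul(a, b):
    """Carry-free (polynomial) multiplication modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N_BITS:      # degree reached N_BITS: reduce by the modulus
            a ^= POLY
    return r

def trace(x):
    """Absolute trace Tr(x) = x + x^2 + x^4 + ... + x^(2^(n-1)), an element of GF(2)."""
    t, c = 0, x
    for _ in range(N_BITS):
        t ^= c               # accumulate the current conjugate
        c = gf_mul(c, c)     # Frobenius map: next conjugate
    return t
```

The trace is GF(2)-linear and balanced: exactly \(2^{n-1}\) of the \(2^{n}\) field elements have trace 1.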

 

3. Construction and Example of BSFDBC

3.1 Construction of BSFDBC

 The proposed BSFDBC matrices are a class of \(\left(2^{n}-1\right) \times 2^{n+1}\) deterministic bipolar chaotic matrices with initial state \(r_{0} \in[-1,1]\) , where \(n \geq 5\) . The concrete realization steps of BSFDBC matrices are as follows:

 Step-1: For a given signal length \(N=2^{n+1}\), \(n\) is judged to be odd or even. For odd \(n\), choose the trace representative function (6) given in [31]; for even \(n\), choose the trace representative function (7) given in [31], where \(x \in G F\left(2^{n}\right)^{*}\) and \(\lambda_{0}, \lambda_{1} \in G F\left(2^{n}\right)\).

\(S_{\lambda_{0}, \lambda_{1}}(x)=\operatorname{Tr}\left(\lambda_{0} x\right)+\operatorname{Tr}\left(\lambda_{1} x^{3}\right)+\sum_{i=2}^{(n-1) / 2} \operatorname{Tr}\left(x^{1+2^{i}}\right)\)        (6)

\(s_{\lambda_{0}, \lambda_{1}}(x)=\operatorname{Tr}\left(\lambda_{0} x\right)+\operatorname{Tr}\left(\lambda_{1} x^{3}\right)+\sum_{i=2}^{n / 2-1} \operatorname{Tr}\left(x^{1+2^{i}}\right)+\operatorname{Tr}_{1}^{n / 2}\left(x^{1+2^{n / 2}}\right)\)        (7)

 Step-2: For \(G F\left(2^{n}\right)\), select a primitive field element \(\beta\). Let \(b_{t}^{\lambda_{0}, \lambda_{1}}=S_{\lambda_{0}, \lambda_{1}}\left(\beta^{t}\right)\), where \(t \in\left\{0,1, \cdots, 2^{n}-2\right\}\) and \(\lambda_{0}, \lambda_{1} \in G F\left(2^{n}\right)\). The sequence \(\left\{b_{t}^{\lambda_{0}, \lambda_{1}}\right\}_{t=0}^{2^{n}-2}=\left\{S_{\lambda_{0}, \lambda_{1}}\left(\beta^{t}\right)\right\}_{t=0}^{2^{n}-2}\), denoted \(\mathbf{b}^{\lambda_{0}, \lambda_{1}}\), is a binary pseudo-random sequence of period \(2^{n}-1\). The set of binary sequences \(\left\{\mathbf{b}^{\lambda_{0}, \lambda_{1}} | \lambda_{0}, \lambda_{1} \in G F\left(2^{n}\right)\right\}\) constitutes the BSF in [31]. By inputting all the elements of the binary sequence \(\mathbf{b}^{\lambda_{0}, \lambda_{1}}=\left\{b_{t}^{\lambda_{0}, \lambda_{1}}\right\}_{t=0}^{2^{n}-2}\) into the numeric conversion function (8) one by one, we obtain the corresponding bipolar pseudo-random sequence \(\mathbf{c}^{\lambda_{0}, \lambda_{1}}=\left\{c_{t}^{\lambda_{0}, \lambda_{1}}\right\}_{t=0}^{2^{n}-2}\).

\(c_{t}^{\lambda_{0}, \lambda_{1}}=\left\{\begin{array}{l} 1, \quad b_{t}^{\lambda_{0}, \lambda_{1}}=0 \\ -1, b_{t}^{\lambda_{0}, \lambda_{1}}=1 \end{array}\right.\)        (8)

 Order the pairs \(\left(\lambda_{0}, \lambda_{1}\right)\) lexicographically as \((0,0),(0,1), \cdots,\left(0,2^{n}-1\right),(1,0),(1,1), \cdots,\left(1,2^{n}-1\right),(2,0), \cdots,\left(2^{n}-1,2^{n}-1\right)\).

 When the parameter pair \(\left(\lambda_{0}, \lambda_{1}\right)\) is given, the bipolar sequence \(\mathbf{c}^{\lambda_{0}, \lambda_{1}}=\left\{c_{t}^{\lambda_{0}, \lambda_{1}}\right\}_{t=0}^{2^{n}-2}\) is deterministic. All sequences of \(\left\{\mathbf{c}^{\lambda_{0}, \lambda_{1}} | \lambda_{0} \in G F(2), \lambda_{1} \in G F\left(2^{n}\right)\right\}\) are put together, indexed by \(\left(\lambda_{0}, \lambda_{1}\right)\) in order, as column vectors to form a \(\left(2^{n}-1\right) \times 2^{n+1}\) matrix \(\mathbf{A}\), which has the following form

\(\begin{aligned} &\mathbf{A}=\left[\mathbf{c}^{0,0}, \mathbf{c}^{0,1}, \cdots, \mathbf{c}^{0,2^{n}-1} | \mathbf{c}^{1,0}, \mathbf{c}^{1,1}, \cdots, \mathbf{c}^{1,2^{n}-1}\right]\\ &=\left[\begin{array}{cccc|cccc} c_{0}^{0,0} & c_{0}^{0,1} & \dots & c_{0}^{0,2^{n}-1} & c_{0}^{1,0} & c_{0}^{1,1} & \dots & c_{0}^{1,2^{n}-1} \\ c_{1}^{0,0} & c_{1}^{0,1} & \dots & c_{1}^{0,2^{n}-1} & c_{1}^{1,0} & c_{1}^{1,1} & \dots & c_{1}^{1,2^{n}-1} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ c_{2^{n}-2}^{0,0} & c_{2^{n}-2}^{0,1} & \cdots & c_{2^{n}-2}^{0,2^{n}-1} & c_{2^{n}-2}^{1,0} & c_{2^{n}-2}^{1,1} & \cdots & c_{2^{n}-2}^{1,2^{n}-1} \end{array}\right] \end{aligned}\)        (9)
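As a concrete check of Steps 1-2, the sketch below builds the \(31 \times 64\) matrix \(\mathbf{A}\) for the smallest odd case \(n=5\), where (6) reduces to \(S_{\lambda_{0}, \lambda_{1}}(x)=\operatorname{Tr}\left(\lambda_{0} x\right)+\operatorname{Tr}\left(\lambda_{1} x^{3}\right)+\operatorname{Tr}\left(x^{5}\right)\). The primitive polynomial \(x^{5}+x^{2}+1\) and the choice \(\beta=x\) are assumptions made for illustration.

```python
import numpy as np

N_BITS, POLY = 5, 0b100101      # GF(2^5) with assumed primitive polynomial x^5 + x^2 + 1

def gf_mul(a, b):
    """Carry-free multiplication modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N_BITS:
            a ^= POLY
    return r

def gf_pow(x, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, x)
    return r

def trace(x):
    t, c = 0, x
    for _ in range(N_BITS):
        t ^= c
        c = gf_mul(c, c)        # Frobenius squaring
    return t

def bsf_column(l0, l1):
    """Bipolar column c^{l0,l1}: (6) for n = 5 evaluated at x = beta^t, then (8)."""
    beta = 2                    # the field element x, primitive for this modulus
    b = [trace(gf_mul(l0, gf_pow(beta, t)))
         ^ trace(gf_mul(l1, gf_pow(beta, 3 * t)))
         ^ trace(gf_pow(beta, 5 * t))
         for t in range(2 ** N_BITS - 1)]
    return np.array([1 - 2 * bit for bit in b])   # (8): 0 -> +1, 1 -> -1

# Matrix A of (9): columns indexed by (l0, l1) with l0 in GF(2), l1 in GF(2^5).
A = np.column_stack([bsf_column(l0, l1) for l0 in range(2) for l1 in range(32)])
```

By the linearity of the trace, the bitwise sum of \(\mathbf{b}^{\lambda_{0}, \lambda_{1}}\), \(\mathbf{b}^{\lambda_{0}^{\prime}, \lambda_{1}^{\prime}}\) and \(\mathbf{b}^{\lambda_{0} \oplus \lambda_{0}^{\prime}, \lambda_{1} \oplus \lambda_{1}^{\prime}}\) equals \(\mathbf{b}^{0,0}\), which gives a convenient self-check of the field arithmetic.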

 Step-3: Let \(R\left(r_{0}, s, l\right)\) be the sampled Chebyshev sequence \(\left\{r_{0}, r_{s}, r_{2 s}, \cdots, r_{(l-1) s}\right\}\) generated by the Chebyshev map \(r_{j+1}=\cos \left(w \cdot \arccos \left(r_{j}\right)\right)\) given in [21], where \(j=0,1,2, \cdots\), \(r_{0} \in[-1,1]\) is the initial state and \(w\) is an integer larger than 1, called the degree of the map. For a given \(r_0\), record each value of \(R\left(r_{0}, s, l\right)\) with \(w=5\), \(s=5\) and \(l=N=2^{n+1}\). Then, sort \(\left\{r_{0}, r_{s}, r_{2 s}, \cdots, r_{(l-1) s}\right\}\) in descending order and obtain the corresponding index set \(\chi\), which is a chaotic set because of the pseudo-randomness of \(R\left(r_{0}, s, l\right)\) [21].

 Step-4: Permute the column vectors of \(\mathbf{A}\) in designed order of set \(\chi\) to obtain the proposed BSFDBC matrix \(\mathbf{A}_{r_{0}}\) . In matrix representation, \(\mathbf{A}_{r_{0}}=\mathbf{A} \mathbf{D}_{r_{0}}\) , where the chaotic-based permutation operator \(\mathbf{D}_{r_{0}}\) is a deterministic column permutation of an identity matrix \(\mathbf{I} \in \mathbf{R}^{2^{n+1} \times 2^{n+1}}\) in the designed order of set \(\chi\) .
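Steps 3 and 4 can be sketched as follows, with \(w=5\) and \(s=5\) as fixed in Step-3; sorting in descending order to obtain \(\chi\) is implemented with argsort. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def chaotic_permutation(r0, n_cols, w=5, s=5):
    """Index set chi: sample the Chebyshev map every s steps, then sort descending."""
    r, samples = r0, []
    for _ in range(n_cols):
        samples.append(r)
        for _ in range(s):                    # advance s iterations of r -> cos(w*arccos(r))
            r = np.cos(w * np.arccos(r))
    return np.argsort(samples)[::-1]          # indices of the descending order give chi

def bsfdbc(A, r0):
    """Step-4: A_{r0} = A D_{r0}, i.e. permute the columns of A in the order chi."""
    return A[:, chaotic_permutation(r0, A.shape[1])]
```

For a fixed \(r_0\) the permutation is fully deterministic, which is exactly what lets \(r_0\) act as a key.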

 From the above construction, it can be seen that the sampling rate of BSFDBC matrices is \(\left(2^{n}-1\right) / 2^{n+1} \approx 0.5\). For odd \(n=2 l+1\), a column vector of \(\mathbf{A}_{r_{0}} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\) can be obtained by first adding \((l+1)\) m-sequences with different feedback polynomials and then converting the resulting sum sequence via the element substitution in (8). In addition, the related Chebyshev chaotic sequence \(R\left(r_{0}, s, l\right)\) is deterministic for a fixed initial state \(r_0\), and the corresponding permutation operator is reflected in the permutation of the index order \(\left(\lambda_{0}, \lambda_{1}\right)\). Hence, \(\mathbf{A}_{r_{0}} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\) is easy to implement by summing LFSR outputs and using a numeric converter, which is conducive to practical CS.

 Remark 1 Suppose an adversary knows \(\mathbf{A}\) and wants to recover \(\mathbf{A}_{r_{0}}=\mathbf{A} \mathbf{D}_{r_{0}}\). The permutation operator \(\mathbf{D}_{r_{0}}\) is required, which corresponds to a permutation of the integers \(\left[1,2, \cdots, 2^{n+1}\right]\). A brute-force search over the permutations has complexity \(\left(2^{n+1}\right) !\). Note that since \(n \geq 5\), \(\left(2^{n+1}\right) ! \geq 64 !\). The cost of guessing the permutation operator \(\mathbf{D}_{r_{0}}\) is therefore too high for the adversary to be practical. Hence, the BSFDBC matrices have good potential cryptographic properties.

 Remark 2 The initial state \(r_0\) of the BSFDBC determines the permutation order according to the construction steps. A different \(r_0\) will lead to a different permutation order, which will further generate a different BSFDBC matrix. Therefore, \(r_0\) can be considered as the secret key to construct the BSFDBC matrix, which would be favorable in practical CS applications.

 Remark 3 Since \(r_0\) can take any value in the interval \([-1,1]\), a large number of BSFDBC matrices can be obtained. These matrices can be used as encryption keys for cryptography, which implies that encryption occurs implicitly in the data sampling stage.

 

3.2 An Example of BSFDBC

 In the following, we give an example of a column vector of the BSFDBC matrix \(\mathbf{A}_{r_{0}}\) of size \(127 \times 256\). Let \(G F\left(2^{7}\right)\) be the finite field with primitive field element \(\beta\) satisfying \(\beta^{7}+\beta+1=0\). In matrix \(\mathbf{A}\), the binary sequence \(\mathbf{b}^{k}=\left\{b_{t}^{k}\right\}\), which corresponds to the \(k\)th column vector, is given by (6) with \(n = 7\) and \(x=\beta^{t}\) for \(0 \leq t \leq 126\).

 For \(k=128 i_{0}+i_{1}+1\) with \(i_{0}=0,1\) and \(0 \leq i_{1} \leq 127\) , if \(i_{1} \neq 127\) , \(b_{t}^{k}=\operatorname{Tr}\left(\beta^{i_{0}} \beta^{t}+\beta^{i_{1}} \beta^{3 t}+\beta^{5 t}+\beta^{9 t}\right)\) ; if \(i_{1}=127\) , \(b_{t}^{k}=\operatorname{Tr}\left(\beta^{i_{0}} \beta^{t}+\beta^{5 t}+\beta^{9 t}\right)\) . For different values of \(\left(i_{0}, i_{1}\right)\) , 256 cyclically distinct binary sequences \(\mathbf{b}^{k}=\left\{b_{t}^{k}\right\}\) are obtained, which correspond to all column vectors of BSFDBC. Let \(\mathbf{g}=\left\{g_{t}\right\}\) , \(g_{t}=\operatorname{Tr}\left(\beta^{t}\right)\) and \(\mathbf{g}^{(j)}=\left\{g_{j t}\right\}\) . Then \(\mathbf{g}\) is given by

\(\begin{aligned} \left\{g_{t}\right\}=& 10000001000001100001010001111001 \\ & 00010110011101010011111010000111 \\ & 00010010011011010110111101100011 \\ & 0100101110111001100101010111111. \end{aligned}\)       (10)
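The printed sequence can be reproduced with a 7-stage LFSR: since \(\beta^{7}=\beta+1\), the sequence \(g_{t}=\operatorname{Tr}\left(\beta^{t}\right)\) satisfies the linear recurrence \(g_{t+7}=g_{t+1} \oplus g_{t}\). In the sketch below the seed is simply read off the first seven bits of (10).

```python
def msequence_g(length=127):
    """g_t from the recurrence g[t+7] = g[t+1] ^ g[t] (feedback polynomial x^7 + x + 1)."""
    g = [1, 0, 0, 0, 0, 0, 0]       # seed: the first seven bits of (10)
    while len(g) < length:
        t = len(g) - 7
        g.append(g[t + 1] ^ g[t])   # XOR feedback of taps t and t+1
    return g
```

Each column \(\mathbf{b}^{k}\) is then an XOR of shifted decimations \(\mathbf{g}, \mathbf{g}^{(3)}, \mathbf{g}^{(5)}, \mathbf{g}^{(9)}\) of this single sequence, which is what makes the LFSR implementation cheap.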

 From the above, it can be seen that \(\mathbf{b}^{k}=\left\{b_{t}^{k}\right\}\) is a linear combination of the m-sequences \(\mathbf{g}, \mathbf{g}^{(3)}, \mathbf{g}^{(5)}, \mathbf{g}^{(9)}\) or their shifts. For \(k=128 i_{0}+i_{1}+1\) with \(i_{0}=0,1\) and \(0 \leq i_{1} \leq 127\), if \(i_{1} \neq 127\), \(b_{t}^{k}=g_{t+i_{0}}+g_{3 t+i_{1}}+g_{5 t}+g_{9 t}\); otherwise, if \(i_{1}=127\), \(b_{t}^{k}=g_{t+i_{0}}+g_{5 t}+g_{9 t}\). By inputting all the elements of the binary sequence \(\mathbf{b}^{k}=\left\{b_{t}^{k}\right\}\) into the numeric conversion function (8) one by one, we obtain the corresponding bipolar sequence \(\mathbf{c}^{k}=\left\{c_{t}^{k}\right\}\), which is the \(k\)th column vector of \(\mathbf{A}\). After the permutation operator \(\mathbf{D}_{r_{0}}\), the column vector \(\mathbf{c}^{k}=\left\{c_{t}^{k}\right\}\) is mapped to the corresponding position of \(\mathbf{A}_{r_{0}}=\mathbf{A} \mathbf{D}_{r_{0}}\).

 

4. Coherence Analysis

 In this section, the coherence \(\mu\left(\mathbf{A}_{r_{0}}\right)\) is used to analyze the proposed BSFDBC matrix \(\mathbf{A}_{r_{0}}=\mathbf{A} \mathbf{D}_{r_{0}}\) constructed in Section 3 and to compare its performance with that of its Gaussian and Bernoulli counterparts.

 In order to derive the coherence of the proposed BSFDBC matrix \(\mathbf{A}_{r_{0}}\), the following definition and lemma are first introduced [31].

 Definition 4.1 Let \(\mathbf{a}=\left(a_{0}, a_{1}, \cdots, a_{v-1}\right)\) and \(\mathbf{b}=\left(b_{0}, b_{1}, \cdots, b_{v-1}\right)\) be two binary sequences of period \(v\). The cross-correlation of \(\mathbf{a}\) and \(\mathbf{b}\) is defined as \(C_{\mathbf{a}, \mathbf{b}}(\tau)=\sum_{i=0}^{v-1}(-1)^{a_{i}+b_{i+\tau}}\) for \(0 \leq \tau \leq v-1\), where \(i+\tau\) is computed modulo \(v\). If \(\mathbf{a}\) and \(\mathbf{b}\) are cyclically equivalent, i.e., \(\mathbf{b}\) is a cyclic shift of \(\mathbf{a}\), then \(C_{\mathbf{a}, \mathbf{b}}(\tau)\) is the auto-correlation of the sequence \(\mathbf{a}\).

 Lemma 4.1 For odd \(n\) , the cross-correlation of any two binary sequences  \(\mathbf{a}\) and \(\mathbf{b}\) given by (6) is \(C_{\mathrm{a}, \mathrm{b}}(\tau) \in\left\{-1,-1 \pm 2^{(n+1) / 2},-1 \pm 2^{(n+3) / 2}\right\}\) . For even n , the cross-correlation of any two binary sequences  \(\mathbf{a}\) and \(\mathbf{b}\) given by (7) is \(C_{\mathrm{a}, \mathrm{b}}(\tau) \in\left\{-1,-1 \pm 2^{n / 2},-1 \pm 2^{n / 2+1},-1 \pm 2^{n / 2+2}\right\}\) .

 Theorem 4.1 Let \(\mathbf{A}_{r_{0}}\) be a \(\left(2^{n}-1\right) \times 2^{n+1}(n \geq 5)\) BSFDBC matrix constructed in Section 3, where \(\mathbf{A}_{r_{0}}=\mathbf{A} \mathbf{D}_{r_{0}}\) , and \(r_{0} \in[-1,1]\) . If \(n\) is odd, \(\mu\left(\mathbf{A}_{r_{0}}\right) \leq \frac{1+2^{(n+3) / 2}}{2^{n}-1}\) ; if \(n\) is even, \(\mu\left(\mathbf{A}_{r_{0}}\right) \leq \frac{1+2^{n / 2+2}}{2^{n}-1}\) .
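The bound of Theorem 4.1 is easy to tabulate. For instance, at \(n=7\) it gives \(\left(1+2^{5}\right) / 127=33 / 127 \approx 0.26\) for the \(127 \times 256\) matrices of Section 3.2. A small helper (illustrative only):

```python
def coherence_bound(n):
    """Upper bound on mu(A_{r0}) from Theorem 4.1 for a (2^n - 1) x 2^(n+1) matrix."""
    if n % 2:                                        # odd n
        return (1 + 2 ** ((n + 3) // 2)) / (2 ** n - 1)
    return (1 + 2 ** (n // 2 + 2)) / (2 ** n - 1)    # even n
```

Combined with the recovery guarantee \(k<\frac{1}{2}\left[1+\frac{1}{\mu(\mathbf{A})}\right]\) from Section 1, the bound directly certifies a sparsity level up to which BP/OMP recovery is guaranteed; the bound also decays roughly like \(2^{-n / 2}\), so larger matrices are more incoherent.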

 Proof: For the matrix \(\mathbf{A}_{r_{0}} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\), we have \(\mu\left(\mathbf{A}_{r_{0}}\right)=\mu\left(\mathbf{A} \mathbf{D}_{r_{0}}\right)=\mu(\mathbf{A})\) according to the definition of coherence in (4), because \(\mathbf{D}_{r_{0}}\) only permutes the column vectors of \(\mathbf{A}\) in the designed order. Hence, to compute \(\mu\left(\mathbf{A}_{r_{0}}\right)\), it suffices to compute \(\mu(\mathbf{A})\). Let \(\mathbf{A}^{i}\) be the \(i\)th column of \(\mathbf{A}\). Then

\(\mu(\mathbf{A})=\max _{1 \leq i \neq j \leq 2^{n+1}} \frac{\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right|}{\left\|\mathbf{A}^{i}\right\|_{2} \cdot\left\|\mathbf{A}^{j}\right\|_{2}}\) .       (11)

 Note that \(\mathbf{A}^{i}\) and \(\mathbf{A}^{j}\) are bipolar sequences of period \(2^n-1\) with elements +1 and -1. We have

\(\left\|\mathbf{A}^{i}\right\|_{2}=\left\|\mathbf{A}^{j}\right\|_{2}=\left(2^{n}-1\right)^{1 / 2}\) .        (12)

 It can be seen from the construction in Section 3 that the matrix \(\mathbf{A} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\) has a corresponding BSF \(\left\{\mathbf{b}^{\lambda_{0}, \lambda_{1}} | \lambda_{0}, \lambda_{1} \in G F\left(2^{n}\right)\right\}\) and bipolar sequence family \(\left\{\mathbf{c}^{\lambda_{0}, \lambda_{1}} | \lambda_{0}, \lambda_{1} \in G F\left(2^{n}\right)\right\}\). The column vector \(\mathbf{A}^{i}\) of \(\mathbf{A}\) is the bipolar sequence \(\mathbf{c}^{i}\) in \(\left\{\mathbf{c}^{\lambda_{0}, \lambda_{1}} | \lambda_{0} \in G F(2), \lambda_{1} \in G F\left(2^{n}\right)\right\}\).

 Let \(\mathbf{b}^{i}=\left\{b_{t}^{i}\right\}_{t=0}^{2^{n}-2}\) and \(\mathbf{b}^{j}=\left\{b_{t}^{j}\right\}_{t=0}^{2^{n}-2}\) be any two binary sequences of \(\left\{\mathbf{b}^{\lambda_{0}, \lambda_{1}} | \lambda_{0} \in G F(2), \lambda_{1} \in G F\left(2^{n}\right)\right\}\). From (8), the corresponding bipolar sequences \(\mathbf{c}^{i}=\left\{c_{t}^{i}\right\}_{t=0}^{2^{n}-2}\) and \(\mathbf{c}^{j}=\left\{c_{t}^{j}\right\}_{t=0}^{2^{n}-2}\) are obtained, both of which belong to \(\left\{\mathbf{c}^{\lambda_{0}, \lambda_{1}} | \lambda_{0} \in G F(2), \lambda_{1} \in G F\left(2^{n}\right)\right\}\). We have

\(\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle=\left\langle\mathbf{c}^{i}, \mathbf{c}^{j}\right\rangle=\sum_{t=0}^{2^{n}-2} c_{t}^{i} c_{t}^{j}=\sum_{t=0}^{2^{n}-2}(-1)^{b_{t}^{i}+b_{t}^{j}}=C_{\mathbf{b}^{i}, \mathbf{b}^{j}}(0).\)        (13)

 Notice that

\(\begin{aligned} \mathbf{b}^{i}, \mathbf{b}^{j} & \in\left\{\mathbf{b}^{\lambda_{0}, \lambda_{1}} | \lambda_{0} \in G F(2), \lambda_{1} \in G F\left(2^{n}\right)\right\} \\ & \subset\left\{\mathbf{b}^{\lambda_{0}, \lambda_{1}} | \lambda_{0}, \lambda_{1} \in G F\left(2^{n}\right)\right\}, \end{aligned}\)

where \(\left\{\mathbf{b}^{\lambda_{0}, \lambda_{1}} | \lambda_{0}, \lambda_{1} \in G F\left(2^{n}\right)\right\}\) is the BSF in [31].

 Using Lemma 4.1, we can obtain that if \(n\) is odd,

\(\max _{1 \leq i \neq j \leq 2^{n+1}}\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right|=\max _{1 \leq i \neq j \leq 2^{n+1}}\left|C_{\mathbf{b}^{i}, \mathbf{b}^{j}}(0)\right| \leq \max \left\{|-1|,\left|-1 \pm 2^{(n+1) / 2}\right|,\left|-1 \pm 2^{(n+3) / 2}\right|\right\}=1+2^{(n+3) / 2}.\)

 Similar to the derivation for odd \(n\), for even \(n\) we have \(\max _{1 \leq i \neq j \leq 2^{n+1}}\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right| \leq 1+2^{n / 2+2}\). Theorem 4.1 is proved by substituting the above bounds and (12) into (11).

 Remark of Theorem 4.1 Theorem 4.1 demonstrates that the initial state \(r_0\) of the BSFDBC matrix \(\mathbf{A}_{r_{0}}\) has no influence on the upper bound of the coherence \(\mu\left(\mathbf{A}_{r_{0}}\right)\). From the proof, we can see that for any \(r_{0}, r_{1} \in[-1,1]\) with \(r_{0} \neq r_{1}\), \(\mu\left(\mathbf{A}_{r_{0}}\right)=\mu\left(\mathbf{A}_{r_{1}}\right)=\mu(\mathbf{A})\). This means that the coherence \(\mu\left(\mathbf{A}_{r_{0}}\right)\) of the BSFDBC matrix \(\mathbf{A}_{r_{0}}\) does not depend on its initial state \(r_0\).
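The invariance \(\mu\left(\mathbf{A} \mathbf{D}_{r_{0}}\right)=\mu(\mathbf{A})\) underlying this remark is easy to confirm numerically on any matrix, since a column permutation merely reorders the inner products in (4). The sketch below uses a random bipolar stand-in for \(\mathbf{A}\) and a random permutation as a stand-in for \(\mathbf{D}_{r_{0}}\), both assumptions for illustration.

```python
import numpy as np

def coherence(A):
    """Coherence (4) of a matrix with nonzero columns."""
    G = A / np.linalg.norm(A, axis=0)
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)
    return C.max()

rng = np.random.default_rng(42)
A = rng.choice([-1.0, 1.0], size=(31, 64))   # random bipolar stand-in for A
perm = rng.permutation(64)                   # stand-in for the chaotic operator D_{r0}
mu_before = coherence(A)
mu_after = coherence(A[:, perm])             # same value: only the pair order changes
```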

 In order to compare the coherence of proposed BSFDBC matrices with their Gaussian and Bernoulli counterparts, the following two lemmas are introduced [33].

 Lemma 4.2 Let \(\left\{x_{i}\right\}_{i=1}^{p}\) and \(\left\{y_{i}\right\}_{i=1}^{p}\) be sequences of independent and identically distributed zero-mean Gaussian random variables with variance \(\sigma^2\). Then

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{p} x_{i} y_{i}\right| \geq t\right) \leq 2 \exp \left(-\frac{t^{2}}{4 \sigma^{2}\left(p \sigma^{2}+t / 2\right)}\right).\)        (14)

 Lemma 4.3 Let \(\left\{x_{i}\right\}_{i=1}^{p}\) and \(\left\{y_{i}\right\}_{i=1}^{p}\) be sequences of independent and identically distributed zero-mean bounded random variables satisfying \(\left|x_{i}\right| \leq a\) and \(\left|y_{i}\right| \leq a\), so that \(\left|x_{i} y_{i}\right| \leq a^{2}\). Then

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{p} x_{i} y_{i}\right| \geq t\right) \leq 2 \exp \left(-\frac{t^{2}}{2 p a^{4}}\right).\)       (15)

 Theorem 4.2 For a \(\left(2^{n}-1\right) \times 2^{n+1}(n \geq 5)\) BSFDBC matrix \(\mathbf{A}_{r_{0}}\), its coherence \(\mu\left(\mathbf{A}_{r_{0}}\right)\) is smaller than that of the corresponding Gaussian matrix \(\mathbf{B}\) and that of the corresponding Bernoulli matrix \(\mathbf{D}\) with entries +1 and -1.

 Proof: Suppose \(\mathbf{B}=\left[\mathbf{b}_{1}, \mathbf{b}_{2}, \dots, \mathbf{b}_{2^{n+1}}\right] \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\) and \(\mathbf{D}=\left[\mathbf{d}_{1}, \mathbf{d}_{2}, \ldots, \mathbf{d}_{2^{n+1}}\right] \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\) , where \(\mathbf{b}_{i}\) and \(\mathbf{d}_{i}\) are column vectors of the matrices \(\mathbf{B}\) and \(\mathbf{D}\) for \(1 \leq i \leq 2^{n+1}\) , respectively.

Without loss of generality, we prove the theorem in case of even \(n\) .

 Let \(\left\{x_{i}\right\}_{i=1}^{2^{n}-1}\) and \(\left\{y_{i}\right\}_{i=1}^{2^{n}-1}\) be any two column vectors of the Gaussian matrix \(\mathbf{B}\). Based on Lemma 4.2 with \(p=2^{n}-1\), \(t>\frac{1+2^{n / 2+2}}{2^{n}-1}\) and \(\sigma^{2}=\frac{1}{2^{n}-1}\), we have

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{2^n-1} x_{i} y_{i}\right| \geq t\right) \leq 2 \exp \left\{-\frac{\left(2^{n}-1\right) t^{2}}{4+2 t}\right\}\)        (16)

 Let \(z(n, t)=2 \exp \left\{-\frac{\left(2^{n}-1\right) t^{2}}{4+2 t}\right\}\). It is easy to see that \(z(n, t)\) decreases as \(t\) increases. Thus, since \(t>\frac{1+2^{n / 2+2}}{2^{n}-1}\), we have

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \geq t\right) \leq z(n, t)<z\left(n, \frac{1+2^{n / 2+2}}{2^{n}-1}\right)\)        (17)

 Let \(z_{1}(n)=z\left(n, \frac{1+2^{n / 2+2}}{2^{n}-1}\right)\). It can also be observed that \(z_{1}(n)\) increases as \(n\) decreases; thus, for even \(n \geq 6\), \(z_{1}(n) \leq z_{1}(6)\). Further, we have

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \geq t\right)<z_{1}(6) \approx 2 \exp (-3.4245) \approx 0.065\)      (18)

 For matrix \(\mathbf{B}, \left | \sum_{i=1}^{2^{n}-1} x_{i} y_{i} \right | \) can characterize its coherence \(\mu(\mathbf{B})\) according to the definition of coherence in (4). Let \(S=\left\{\mathbf{b}_{1}, \mathbf{b}_{2}, \dots, \mathbf{b}_{2^{n+1}}\right\}\) . We have

\(\mu(\mathbf{B})=\max _{\left\{x_{i}\right\},\left\{y_{i}\right\}}\left\{\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| :\left\{x_{i}\right\} \subset S,\left\{y_{i}\right\} \subset S \backslash\left\{x_{i}\right\}\right\}\)       (19)

 Further, we have \(\operatorname{Pr}\left(\min _{\left\{x_{i}\right\},\left\{y_{i}\right\}}\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \geq t\right)<\left\{z_{1}(6)\right\}^{\frac{|S|(|S|-1)}{2}} \approx 0.065^{2^{n}\left(2^{n+1}-1\right)}\) . Let \(\delta_{b}(n)=0.065^{2^{n}\left(2^{n+1}-1\right)}\) with \(n \geq 6\) . Obviously, we can obtain that

\(\operatorname{Pr}\left(\min _{\left\{x_{i}\right\},\left\{y_{i}\right\}}\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \leq t\right) \geq 1-\delta_{b}(n) \approx 1.\)        (20)

 Hence, \(\mu(\mathbf{B})=\max _{\left\{x_{i}\right\},\left\{y_{i}\right\}}\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \geq t\) holds with probability close to 1. Since \(t>\frac{1+2^{n / 2+2}}{2^{n}-1}\) and \(\mu\left(\mathbf{A}_{r_{0}}\right) \leq \frac{1+2^{n / 2+2}}{2^{n}-1}\) , we have \(\mu(\mathbf{B})>\mu\left(\mathbf{A}_{r_{0}}\right)\) .

 Let \(\left\{l_{i}\right\}_{i=1}^{2^{n}-1}\) and \(\left\{h_{i}\right\}_{i=1}^{2^{n}-1}\) be any two column vectors of the Bernoulli matrix \(\mathbf{D}\). Similar to the above derivation process, we have \(\operatorname{Pr}\left(\left|\sum_{i=1}^{2^n-1} l_{i} h_{i}\right| \geq t\right)<2 \exp \left\{-\frac{\left(1+2^{n / 2+2}\right)^{2}}{2\left(2^{n}-1\right)}\right\}\) based on Lemma 4.3 with \(p=2^{n}-1, t>\frac{1+2^{n / 2+2}}{2^{n}-1}\) , and \(a=\frac{1}{\sqrt{2^{n}-1}}\) .

 Let \(w_{1}(n)=2 \exp \left\{-\frac{\left(1+2^{n / 2+2}\right)^{2}}{2\left(2^{n}-1\right)}\right\}\) . Since \(\left(1+2^{n / 2+2}\right)^{2}>2^{n+4}>16\left(2^{n}-1\right)\) , the exponent satisfies \(\frac{\left(1+2^{n / 2+2}\right)^{2}}{2\left(2^{n}-1\right)}>8\) , so \(w_{1}(n)<2 \exp (-8)\) for all \(n \geq 6\) . Further, we have

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{2^{n}-1} l_{i} h_{i}\right| \geq t\right)<2 \exp (-8)\)       (21)

 and

\(\operatorname{Pr}\left(\min _{\left\{l_{i}\right\},\left\{h_{i}\right\}}\left|\sum_{i=1}^{2^{n}-1} l_{i} h_{i}\right| \geq t\right)<\{2 \exp (-8)\}^{2^{n}\left(2^{n+1}-1\right)}\) .       (22)

 Let \(\delta_{d}(n)=\{2 \exp (-8)\}^{2^{n}\left(2^{n+1}-1\right)}\) with \(n \geq 6\) . Obviously, we can obtain that

\(\operatorname{Pr}\left(\min _{\left\{l_{i}\right\},\left\{h_{i}\right\}}\left|\sum_{i=1}^{2^{n}-1} l_{i} h_{i}\right| \leq t\right) \geq 1-\delta_{d}(n) \approx 1\) .        (23)

 Hence, \(\mu(\mathbf{D})=\max _{\left\{l_{i}\right\},\left\{h_{i}\right\}}\left|\sum_{i=1}^{2^{n}-1} l_{i} h_{i}\right| \geq t\) holds with probability close to 1. Since \(t>\frac{1+2^{n / 2+2}}{2^{n}-1}\) and \(\mu\left(\mathbf{A}_{r_{0}}\right) \leq \frac{1+2^{n / 2+2}}{2^{n}-1}\) , we have \(\mu(\mathbf{D})>\mu\left(\mathbf{A}_{r_{0}}\right)\) .

 Therefore, the theorem is proved in the case of even \(n\) . The case of odd \(n\) is analogous, which completes the proof of Theorem 4.2.

 Remark on Theorem 4.2 For a CS matrix, smaller coherence allows the original signal to be reconstructed with higher accuracy. Theorem 4.2 thus indicates that the reconstruction performance of the BSFDBC matrix is superior to that of its Gaussian and Bernoulli counterparts.
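The coherence comparison behind Theorem 4.2 can also be checked empirically. The sketch below computes the coherence of (4) (largest absolute normalized inner product between distinct columns) for random Gaussian and Bernoulli matrices of the sizes used in the theorem; the BSFDBC construction itself is given in Section 3 and is not reproduced here.

```python
import numpy as np

def coherence(A):
    """Coherence of A: largest |<a_i, a_j>| over distinct normalized columns."""
    An = A / np.linalg.norm(A, axis=0)   # normalize each column
    G = np.abs(An.T @ An)                # absolute Gram matrix
    np.fill_diagonal(G, 0.0)             # ignore the trivial diagonal
    return G.max()

rng = np.random.default_rng(0)
M, N = 255, 512                               # (2^8 - 1) x 2^9, the n = 8 case
B = rng.standard_normal((M, N))               # Gaussian comparison matrix
D = rng.choice([-1.0, 1.0], size=(M, N))      # Bernoulli (+1/-1) comparison matrix
print(coherence(B), coherence(D))
```

Both values are typically around 0.2-0.3 at this size, which can be compared with the theoretical BSFDBC bound \((1+2^{n/2+2})/(2^n-1)\).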

 

5. Simulation and Results

 In this section, simulation experiments with sparse signals and image signals are given to investigate the performance of the proposed BSFDBC matrices. Gaussian and Bernoulli random matrices of the same size are used for comparison. In the Gaussian matrix construction, each element obeys the standard normal distribution \(N(0,1)\) ; in the Bernoulli matrix construction, each element is +1 or -1 with equal probability.

 For sparse signals, two types of BSFDBC matrices of size \(\left(2^{n}-1\right) \times 2^{n+1}\) are generated with initial state \(r_0\) : (i) BSFDBC matrices of size 255×512 for even \(n\) ( \(n=8\) ); (ii) BSFDBC matrices of size 127×256 for odd \(n\) ( \(n=7\) ). The \(k\) -sparse \(2^{n+1} \times 1\) original signal \(\mathbf{x}\) is generated by first selecting \(k\) nonzero locations uniformly at random among the total \(2^{n+1}\) locations and then drawing the \(k\) nonzero values independently from the standard normal distribution \(N(0,1)\) . For each sparsity level \(k\) , 1000 experiments are averaged to obtain the corresponding result. Let \(\mathbf{x}_{R}\) be the signal reconstructed by OMP. For noiseless signal recovery, if \(\left\|\mathbf{x}-\mathbf{x}_{R}\right\|_{2}<10^{-6}\) holds in one experiment, the reconstruction is declared successful. The successful reconstruction probability equals the number of successful reconstructions divided by 1000. For noisy signal recovery, additive Gaussian noise \(\mathbf{e}\) with a prescribed signal-to-noise ratio (SNR) is added to the original sparse signal \(\mathbf{x}\) . Therefore, given a sensing matrix \(\mathbf{A}\) , the measurement vector is \(\mathbf{y}=\mathbf{A}(\mathbf{x}+\mathbf{e})=\mathbf{A} \mathbf{x}+\mathbf{A} \mathbf{e}\) , where \(\mathbf{A} \mathbf{e}\) is the noise term. The reconstruction SNR is defined as

\(S N R(\mathbf{x})=20 \cdot \log _{10}\left(\frac{\|\mathbf{x}\|_{2}}{\left\|\mathbf{x}-\mathbf{x}_{R}\right\|_{2}}\right) \mathrm{dB}\) .        (24)
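The sample-and-recover experiment described above can be sketched as follows, with a textbook OMP implementation and a Gaussian matrix standing in for the BSFDBC matrix (whose construction appears in Section 3); a small sparsity level is used purely for illustration. The success criterion and the SNR of (24) follow the description in the text.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x."""
    M, N = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update the residual.
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ xs
        if np.linalg.norm(residual) < 1e-12:
            break
    x = np.zeros(N)
    x[support] = xs
    return x

rng = np.random.default_rng(1)
M, N, k = 255, 512, 10
A = rng.standard_normal((M, N)) / np.sqrt(M)   # stand-in for the BSFDBC matrix
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

x_rec = omp(A, A @ x, k)
snr = 20 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_rec))
print(f"recovery error: {np.linalg.norm(x - x_rec):.2e}")
```

At this sparsity the recovery error falls well below the \(10^{-6}\) success threshold used in the experiments.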

 For an image signal \(\mathbf{x}\) of size \(m×n\) , the performance of the BSFDBC matrix \(\mathbf{A}_{r_0}\) is investigated in image reconstruction using the block CS algorithm. The image \(\mathbf{I}\) is divided into a set of non-overlapping subimages \(\left\{\mathrm{I}_{l} | l=1,2, \cdots, N\right\}\) of equal size. For each subimage \(\mathbf{I}_l\) , the sparse vector \(\mathbf{d}_{l}\) is obtained as the vectorized version of \(\mathbf{S}_{l}\) , the two-dimensional Daubechies 9/7 discrete wavelet transform (DWT) of \(\mathbf{I}_l\) . Since all the wavelet coefficients of \(\mathbf{d}_{l}\) are used, the dimensionality of the reconstruction problem is determined by the block size. A down-sampling of \(\mathbf{d}_{l}\) is implemented to get the compressed measurements \(\mathbf{y}_{l}=\mathbf{A}_{r_{0}} \mathbf{d}_{l}\) . For image reconstruction, the OMP algorithm is used to recover \(\mathbf{d}_{l}\) (and consequently \(\mathbf{I}_l\) ) from the reduced vector \(\mathbf{y}_{l}\) . Considering the tradeoff among reconstruction quality, hardware implementation and recovery time, the block sizes are selected to be 32×16 and 32×32, which correspond to the two types of BSFDBC matrices. Let \(\mathbf{x}_R\) be the reconstructed image. The peak signal-to-noise ratio (PSNR) is defined as

\(P S N R(\mathbf{x})=10 \cdot \log _{10}\left(\frac{255^{2}}{\left\|\mathbf{x}-\mathbf{x}_{R}\right\|_{2}^{2} /(m \cdot n)}\right) \mathrm{dB}\) .       (25)
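The block-based pipeline can be sketched as below. For brevity the Daubechies 9/7 DWT sparsifying step is omitted and a random bipolar matrix stands in for \(\mathbf{A}_{r_0}\), so only the blocking, the per-block measurement \(\mathbf{y}_l=\mathbf{A}_{r_0}\mathbf{d}_l\), and the PSNR of (25) are illustrated.

```python
import numpy as np

def blocks(img, bh, bw):
    """Split an image into non-overlapping bh-by-bw blocks (row-major order)."""
    H, W = img.shape
    return [img[i:i + bh, j:j + bw]
            for i in range(0, H, bh) for j in range(0, W, bw)]

def psnr(x, x_rec):
    """PSNR of (25) for 8-bit images."""
    mse = np.mean((x - x_rec) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)   # toy 8-bit image
A = rng.choice([-1.0, 1.0], size=(255, 512))  # stand-in bipolar matrix for 32x16 blocks
ys = [A @ b.reshape(-1) for b in blocks(img, 32, 16)]     # y_l = A d_l per block
print(len(ys), ys[0].shape)
```

Each 32×16 block vectorizes to a 512-entry vector, matching the 255×512 matrix, so a 64×64 image yields 8 measurement vectors of length 255.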

 Note that if the original image \(\mathbf{x}\) is a three-dimensional color image, it is first converted to a two-dimensional grayscale signal \(\mathbf{x}^F\) by concatenating its R, G, B components in column extension form; the resulting signal \(\mathbf{x}^F\) and the corresponding reconstruction \(\mathbf{x}^F_R\) are then used to calculate \(P S N R(\mathbf{x})\) .

 

5.1 BSFDBC in Different Initial States

 For matrices of size 255×512 , Fig. 1(a) presents the successful reconstruction probability of noiseless \(k\) -sparse 512×1 signals under different initial states \(r_0\) , where \(k \in\{60,95,105,115\}\), and \(-1 \leq r_{0} \leq 1\). For matrices of size 127× 256 , Fig. 1(b) presents the successful reconstruction probability of noiseless \(k\) -sparse 256×1 signals under different initial states \(r_0\) , where \(k \in\{20,45,50,55\}\), and \(-1 \leq r_{0} \leq 1\).

 Fig. 1 shows that for all sparsity levels, the initial state of the BSFDBC matrix has limited influence on the recovery accuracy. For instance, for matrices of size 255×512 , the successful reconstruction probabilities at sparsity 105 vary only within the range [0.710, 0.757]. This result is due to the insensitivity of the coherence of the BSFDBC matrix to its initial state.

Fig. 1. The successful reconstruction probability versus initial state for noiseless sparse signals where sparsity level varies. (a) The matrices of size 255×512 , (b) The matrices of size 127 × 256

 

5.2 Key Sensitivity of BSFDBC

 As described in Section 3, the BSFDBC matrix \(\mathbf{A}_{r_{0}}\) is constructed based on the BSF and the Chebyshev chaotic sequence with secret key \({r_{0}}=0.8\) . The matrix \(\mathbf{A}_{r_{0}}\) can be used as an encryption key for cryptography, which implies that encryption occurs implicitly in the data sampling stage. As for the signal recovery, consider the matrix \(\mathbf{A}_{r_{0}}\) generated by the right key \({r_{0}}=0.8\) and \(\mathbf{A}_{r_{1}}\) generated by a wrong key \({r_{1}}\) . The test image is the “liftingbody” of size 512×512 shown in Fig. 2(a), where the block size is selected to be 32×16. Fig. 2(b) and Fig. 2(c) are the images decrypted with the wrong keys \(​​​​r_1=0.3\) and \(​​​​r_1=-0.8\) , respectively. Fig. 2(d) is the image decrypted with the right key \({r_{0}}=0.8\) . The corresponding reconstruction PSNRs for Fig. 2(b), Fig. 2(c) and Fig. 2(d) are 2.01 dB, 2.14 dB and 36.48 dB, respectively. Obviously, the encrypted image cannot be decrypted correctly with a wrong key \({r_{1}}\) . Fig. 3 presents the reconstruction PSNR for the “liftingbody” decrypted with different keys \({r_{1}}\) , where \(-1 \leq r_{1} \leq 1\) .
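The key sensitivity stems from the chaotic divergence of nearby Chebyshev trajectories. A minimal sketch follows; the map order \(w=4\) is an assumption for illustration, as the exact order used in Section 3 is not restated in this section.

```python
import math

def chebyshev_seq(r0, n, w=4):
    """Iterate the order-w Chebyshev map x_{k+1} = cos(w * arccos(x_k)).
    The order w = 4 is an assumed value for illustration."""
    x, seq = r0, []
    for _ in range(n):
        x = math.cos(w * math.acos(x))   # values always stay in [-1, 1]
        seq.append(x)
    return seq

a = chebyshev_seq(0.8, 20)            # right key r0 = 0.8
b = chebyshev_seq(0.8 + 1e-10, 20)    # key perturbed by 1e-10
# A tiny key perturbation makes the trajectories diverge within a few iterations.
print(max(abs(u - v) for u, v in zip(a, b)))
```

This exponential divergence is why decryption with any \(r_1 \neq r_0\), even one differing in a late decimal place, yields an unrecognizable image.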

Fig. 2. Performance of BSFDBC for “liftingbody”. (a) Original image, (b) Decrypted image with wrong key 0.3, (c) Decrypted image with wrong key -0.8, (d) Decrypted image with right key 0.8

Fig. 3. The reconstruction PSNR for the “liftingbody” decrypted with different key

 Fig. 3 shows that the image signal cannot be decrypted correctly with any wrong key \(r_{1} \neq r_{0}\) . Therefore, the BSFDBC matrix \(\mathbf{A}_{r_{0}}\) is sensitive to the secret key \({r_{0}}\) , and data security can be ensured effectively.

 

5.3 BSFDBC for Sparse Signals

 Without loss of generality, the initial state of the BSFDBC matrix is set to 0.8 in this subsection and the following one.

 Example 1: For matrices of size 255×512 , Fig. 4(a) presents the successful reconstruction probability of noiseless \(k\) -sparse 512×1 signals under different sparsity levels, where \(30 \leq k \leq 150\) . For matrices of size 127×256 , Fig. 4(b) presents the successful reconstruction probability of noiseless \(k\) -sparse 256×1 signals under different sparsity levels, where \(10 \leq k \leq 80\) .

Fig. 4. The successful reconstruction probability versus sparsity level for noiseless sparse signals. (a) The matrices of size 255×512 , (b) The matrices of size 127 × 256

 Fig. 4 shows that the reconstruction performance of the BSFDBC matrix is superior to that of its Gaussian and Bernoulli counterparts. For instance, for the BSFDBC, Gaussian and Bernoulli matrices of size 255×512 , the associated successful reconstruction probabilities at sparsity 70 are 0.994, 0.69 and 0.718, respectively. This result is due to the smaller coherence of the BSFDBC matrix compared with the other two.

 Example 2: In this example, noise at an SNR of 30 dB is added to the original sparse signal. For matrices of size 255×512 , Fig. 5(a) presents the reconstruction SNR of noisy \(k\) -sparse 512×1 signals under different sparsity levels, where \(30 \leq k \leq 150\) . For matrices of size 127×256 , Fig. 5(b) presents the reconstruction SNR of noisy \(k\) -sparse 256×1 signals under different sparsity levels, where \(10 \leq k \leq 80\) .

 Fig. 5 shows that for all sparsity levels, the BSFDBC matrix achieves a higher reconstruction SNR than the Gaussian and Bernoulli matrices. For instance, for the BSFDBC, Gaussian and Bernoulli matrices of size 255×512 , the associated reconstruction SNRs at sparsity 70 are 32.98 dB, 31.34 dB and 31.34 dB, respectively. This is because the BSFDBC matrix has smaller coherence than the other two, which is more conducive to signal recovery.

Fig. 5. The reconstruction SNR versus sparsity level for noisy sparse signals with SNR of 30 dB. (a) The matrices of size 255×512 , (b) The matrices of size 127 × 256

Fig. 6. The reconstruction SNR versus input SNR for noisy sparse signals. (a) The matrices of size 255×512 , (b) The matrices of size 127 × 256

 Example 3: In this example, the sparsity level of original signal is fixed and its noise level varies. For matrices of size 255×512 , Fig. 6(a) presents the reconstruction SNR of noisy 70-sparse 512×1 signals under different noise levels. For matrices of size 127× 256 , Fig. 6(b) presents the reconstruction SNR of noisy 35-sparse 256×1 signals under different noise levels.

 Fig. 6 shows that the BSFDBC matrix gives a higher reconstruction SNR than the corresponding Gaussian and Bernoulli matrices at all noise levels. Here, we provide some experimental results for the BSFDBC, Gaussian and Bernoulli matrices of size 255×512 : when the input SNR is 50 dB, the associated reconstruction SNRs are 54.62 dB, 51.25 dB and 51.41 dB, respectively.

 From the above three examples, it can be seen that the BSFDBC matrices give better recovery performance than their Gaussian and Bernoulli counterparts in both noiseless and noisy scenarios.

 

 5.4 BSFDBC for Image Signals

 As shown in Fig. 7, the test images include three grayscale images and three color images. The three grayscale images are “lena” of size 256× 256 , “peppers” of size 256× 256 and “airport” of size 1024×1024, while the three color images are “Earth” of size 512×512×3, “airplane” of size 512×512×3 and “bone” of size 675×653×3 . Table 1 presents the reconstruction PSNR for different test images with block size 32×16 and 32×32.

Fig. 7. Test images. (a) Lena, (b) Peppers, (c) Airport, (d) Earth, (e) Airplane, (f) Bone

Table 1. The reconstruction PSNR (dB) for different test images with block size 32×16 and 32×32

 From Table 1, it is observed that for all test images, the BSFDBC matrix achieves a higher reconstruction PSNR than the Gaussian and Bernoulli matrices. In addition, the reconstruction PSNR increases with the block size.

 Simulation experiments with sparse signals and image signals show that the reconstruction performance of the BSFDBC matrices is superior to that of their Gaussian and Bernoulli counterparts, which coincides with the conclusion of Theorem 4.2. Consequently, inspired by the BSF and the Chebyshev chaotic sequence, the designed BSFDBC matrices possess the characteristics of easy hardware implementation, good sensing performance and good cryptographic properties. These characteristics make the proposed matrices applicable to practical CS scenarios, such as sparse signal recovery, block image CS and image encryption.

 

6. Conclusion

 On the basis of the BSF and the Chebyshev chaotic sequence, this paper constructs a class of deterministic bipolar measurement matrices named BSFDBC and gives a related example. The coherence of the proposed BSFDBC matrices is investigated and shown theoretically to be smaller than that of the corresponding Gaussian and Bernoulli random matrices. Simulation experiments with sparse signals and image signals show that the proposed BSFDBC matrix is sensitive to its initial state when the latter is used as a secret key, that different initial states have limited influence on the recovery accuracy, and that it outperforms its Gaussian and Bernoulli counterparts in recovery accuracy. The BSFDBC matrices possess the characteristics of easy hardware implementation, good sensing performance and good cryptographic properties, which is conducive to practical CS.

References

  1. Emmanuel J. Candès, Justin Romberg and Terence Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, February, 2006. https://doi.org/10.1109/TIT.2005.862083
  2. David L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, April, 2006. https://doi.org/10.1109/TIT.2006.871582
  3. Shuai Liu, Weiling Bai, Gaocheng Liu, Wenhui Li and Hari M. Srivastava, "Parallel fractal compression method for big video data," Complexity, vol. 2018, pp. 1-16, October, 2018.
  4. Gaocheng Liu, Shuai Liu, Khan Muhammad, Arun Kumar Sangaiah and Faiyaz Doctor, "Object tracking in vary lighting conditions for fog based intelligent surveillance of public spaces," IEEE Access, vol. 6, pp. 29283-29296, May, 2018. https://doi.org/10.1109/ACCESS.2018.2834916
  5. Zheng Pan, Shuai Liu, Arun Kumar Sangaiah and Khan Muhammad, "Visual attention feature (VAF): a novel strategy for visual tracking based on cloud platform in intelligent surveillance systems," Journal of Parallel and Distributed Computing, vol. 120, pp. 182-194, October, 2018. https://doi.org/10.1016/j.jpdc.2018.06.012
  6. Emmanuel J. Candès and Terence Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, December, 2005. https://doi.org/10.1109/TIT.2005.858979
  7. Scott Shaobing Chen, David L. Donoho and Michael A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33-61, August, 1998. https://doi.org/10.1137/S1064827596304010
  8. Joel A. Tropp, "Greed is good: Algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231-2242, October, 2004. https://doi.org/10.1109/TIT.2004.834793
  9. R. Ramu Naidu, Phanindra Jampana and C. S. Sastry, "Deterministic compressed sensing matrices: construction via Euler Squares and applications," IEEE Transactions on Signal Processing, vol. 64, no. 14, pp. 3566-3575, July, 2016. https://doi.org/10.1109/TSP.2016.2550020
  10. R. Ramu Naidu and Chandra R. Murthy, "Construction of binary sensing matrices using extremal set theory," IEEE Signal Processing Letters, vol. 24, no. 2, pp. 211-215, February, 2017. https://doi.org/10.1109/LSP.2016.2638426
  11. Pradip Sasmal, R. Ramu Naidu, Challa S. Sastry and Phanindra Jampana, "Composition of binary compressed sensing matrices," IEEE Signal Processing Letters, vol. 23, no.8, pp. 1096-1100, August, 2016. https://doi.org/10.1109/LSP.2016.2585181
  12. Shuxing Li and Gennian Ge, "Deterministic sensing matrices arising from near orthogonal systems," IEEE Transactions on Information Theory, vol. 60, no. 4, pp. 2291-2302, April, 2014. https://doi.org/10.1109/TIT.2014.2303973
  13. Jun Zhang, Guojun Han and Yi Fang, "Deterministic construction of compressed sensing matrices from protograph LDPC codes," IEEE Signal Processing Letters, vol. 22, no. 11, pp. 1960-1964, November, 2015. https://doi.org/10.1109/LSP.2015.2447934
  14. Mohammad Fardad, Sayed Masoud Sayedi and Ehsan Yazdian, "A low-complexity hardware for deterministic compressive sensing reconstruction," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 65, no. 10, pp. 3349-3361, October, 2018. https://doi.org/10.1109/TCSI.2018.2803627
  15. Jin-Wei Jhang and Yuan-hao Huang, "A high-SNR projection-based atom selection OMP processor for compressive sensing," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 24, no. 12, pp. 3477-3488, December, 2016. https://doi.org/10.1109/TVLSI.2016.2554401
  16. Hongping Gan, Zhi Li, Jian Li, Xi Wang and Zhengfu Cheng, "Compressive sensing using chaotic sequence based on chebyshev map," Nonlinear Dynamics, vol. 78, no. 4, pp. 2429-2438, December, 2014. https://doi.org/10.1007/s11071-014-1600-1
  17. Juan Castorena and Charles D. Creusere, "The restricted isometry property for banded random matrices," IEEE Transactions on Signal Processing, vol. 62, no. 19, pp. 5073-5084, October, 2014. https://doi.org/10.1109/TSP.2014.2345350
  18. Hongping Gan, Song Xiao and Yimin Zhao, "A novel secure data transmission scheme using chaotic compressed sensing," IEEE Access, vol. 6, pp. 4587-4598, February, 2018. https://doi.org/10.1109/ACCESS.2017.2780323
  19. Mahsa Lotfi and Mathukumalli Vidyasagar, "A fast noniterative algorithm for compressive sensing using binary measurement matrices," IEEE Transactions on Signal Processing, vol. 66, no. 15, pp. 4079-4089, May, 2018. https://doi.org/10.1109/tsp.2018.2841881
  20. Li Zeng, Xiongwei Zhang, Liang Chen, Tieyong Cao and Jibin Yang, "Deterministic construction of toeplitzed structurally chaotic matrix for compressed sensing," Circuits, Systems, and Signal Processing, vol. 34, no. 3, pp. 797-813, March, 2015. https://doi.org/10.1007/s00034-014-9873-7
  21. Hongping Gan, Song Xiao, Yimin Zhao and Xiao Xue, "Construction of efficient and structural chaotic sensing matrix for compressive sensing," Signal Processing: Image Communication, vol. 68, pp. 129-137, October, 2018. https://doi.org/10.1016/j.image.2018.06.004
  22. Hongping Gan, Song Xiao and Yimin Zhao, "A large class of chaotic sensing matrices for compressed sensing," Signal Processing, vol. 149, pp. 193-203, August, 2018. https://doi.org/10.1016/j.sigpro.2018.03.014
  23. Guohua Zhang, Rudolf Mathar and Quan Zhou, "Deterministic bipolar measurement matrices with flexible sizes from Legendre sequence," Electronics Letters, vol. 52, no. 11, pp. 928-930, May, 2016. https://doi.org/10.1049/el.2016.0765
  24. Gang Wang, Min-Yao Niu and Fang-Wei Fu, "Deterministic constructions of compressed sensing matrices based on optimal codebooks and codes," Applied Mathematics and Computation, vol. 343, pp. 128-136, February, 2019. https://doi.org/10.1016/j.amc.2018.09.042
  25. Weizhi Lu, Tao Dai and Shu-Tao Xia, "Binary matrices for compressed sensing," IEEE Transactions on Signal Processing, vol. 66, no. 1, pp. 77-85, January, 2018. https://doi.org/10.1109/TSP.2017.2757915
  26. Liu Haiqiang, Yin Jihang, Hua Gang, Yin Hongsheng and Zhu Aichun, "Deterministic construction of measurement matrices based on Bose balanced incomplete block designs," IEEE Access, vol. 6, pp. 21710-21718, April, 2018. https://doi.org/10.1109/ACCESS.2018.2824329
  27. Tian Shujuan, Fan Xiaoping, Li Zhetao, Pan Tian, Choi Youngjune and Sekiya Hiroo, "Orthogonal-gradient measurement matrix construction algorithm," Chinese Journal of Electronics, vol. 25, no. 1, pp. 81-87, January, 2016. https://doi.org/10.1049/cje.2016.01.013
  28. Haiyang Liu, Hao Zhang and Lianrong Ma, "On the spark of binary LDPC measurement matrices from complete protographs," IEEE Signal Processing Letters, vol. 24, no. 11, pp. 1616-1620, November, 2017. https://doi.org/10.1109/LSP.2017.2749043
  29. Jue Wang, Zhaoyang Zhang, Xianbin Wang, Hong Wang and Chunxu Jiao, "A low-complexity reconstruction algorithm for compressed sensing using Reed-Muller sequences," in Proc. of IEEE Int. Conf. on Communications, pp. 1-6, May 20-24, 2018.
  30. Sung-Hsien Hsieh, Chun-Shien Lu and Soo-Chang Pei, "Compressive sensing matrix design for fast encoding and decoding via sparse FFT," IEEE Signal Processing Letters, vol. 25, no. 4, pp. 591-595, April, 2018. https://doi.org/10.1109/LSP.2018.2809693
  31. Nam Yul Yu and Guang Gong, "A new binary sequence family with low correlation and large size," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1624-1636, April, 2006. https://doi.org/10.1109/TIT.2006.871062
  32. Yan Tang, Guonian Lv and Kuixi Yin, "Deterministic sensing matrices based on multidimensional pseudo-random sequences," Circuits, Systems, and Signal Processing, vol. 33, no. 5, pp. 1597-1610, May, 2014. https://doi.org/10.1007/s00034-013-9701-5
  33. Jarvis Haupt, Waheed U. Bajwa, Gil Raz and Robert Nowak, "Toeplitz compressed sensing matrices with applications to sparse channel estimation," IEEE Transactions on Information Theory, vol. 56, no. 11, pp. 5862-5875, November, 2010. https://doi.org/10.1109/TIT.2010.2070191