The Construction and Viterbi Decoding of New (2k, k, l) Convolutional Codes

  • Peng, Wanquan (Dept. College of Electrical Engineering, Chongqing Vocational Institute of Engineering) ;
  • Zhang, Chengchang (Dept. College of Communication Engineering, Chongqing University)
  • Received : 2013.04.05
  • Accepted : 2013.07.08
  • Published : 2014.03.31

Abstract

The free distance of (n, k, l) convolutional codes is connected with the memory length, which depends not only on l but also on k. To efficiently obtain a large memory length, we constructed a new class of (2k, k, l) convolutional codes from (2k, k) block codes and (2, 1, l) convolutional codes; the encoder and generation function are also given in this paper. With the help of some matrix modules, we designed a single-structure Viterbi decoder with parallel capability, obtained a unified and efficient decoding model for (2k, k, l) convolutional codes, and describe the decoding process in detail. By observing the survivor path memory in a matrix viewer and testing the role of the max module, we ran simulations with (2k, k, l) convolutional codes. The results show that many of them outperform conventional (2, 1, l) convolutional codes.

1. INTRODUCTION

Convolutional codes have good BER performance and a memory characteristic. Early classical convolutional codes include self-orthogonal codes, orthogonalizable codes, and "quick-look-in" codes [1, 2]. In the 1970s, to search for optimum codes, some scholars proposed computer search algorithms built on Viterbi decoding [3]. In the 1980s, punctured convolutional codes and tail-biting convolutional codes were widely used in a variety of digital communication systems [4]. The recursive systematic convolutional (RSC) code was developed with the invention of turbo codes in the 1990s [5]; an RSC is a systematic code with the distance characteristics of a non-systematic code. Currently, convolutional LDPC codes [6, 7] have become a new hot topic, as they achieve a good cost-performance trade-off when implementing the belief propagation (BP) algorithm. In addition, quantum convolutional codes used in quantum communication have also become attractive quantum error-correcting codes [8].

In fact, the free distance of (n, k, l) convolutional codes depends not only on a good generator matrix, but also on a larger memory length kl. However, in the development of convolutional codes, only l, not k, has been successfully increased. Based on this, by combining (2k, k) block codes with (2, 1, l) convolutional codes, we constructed a new class of (2k, k, l) convolutional codes that can grow in both k and l. As a new kind of code, (2k, k, l) convolutional codes can implement the Viterbi algorithm just like (2, 1, l) convolutional codes. In Section 3, by means of matrix modules, we design a soft-decision Viterbi matrix decoder with parallel capability [9], and we describe the decoding process in detail. Section 4 tests the role of the max module, simulates some (2k, k, l) and (2, 1, l) convolutional codes on the Gaussian channel with BPSK, and compares the error-correcting capability of the two kinds of codes.

 

2. THE CONSTRUCTION OF CONVOLUTIONAL CODES

The encoder of the new (2k, k, l) convolutional codes is shown in Fig. 1. The input message M(t) is a k-bit (k>1) vector, M(t) = [m_0(t) m_1(t) m_2(t) … m_{k-1}(t)]^T; i_0, i_1, i_2, …, i_l are the 2^k-ary values of M(t), M(t-1), …, M(t-l), respectively, on the decimal scale 0 to 2^k − 1; D_j is a vector register that delays the k information bits together, so that after j delays M(t-j) = [m_0(t-j) m_1(t-j) m_2(t-j) … m_{k-1}(t-j)]^T. Linear combiner 1 computes $\sum_{j=0}^{l} g_j M(t-j)$ and multiplies the result by the generation matrix G = [I P^T]^T of the (2k, k) block code. Over the Galois field GF(2), the encoded output is

$$U_1(t) = \begin{bmatrix} I \\ P \end{bmatrix} \sum_{j=0}^{l} g_j M(t-j) \tag{1}$$

where I is the k×k identity matrix and P is a k×k matrix. At the same time, linear combiner 2 computes $\sum_{j=0}^{l} h_j M(t-j)$. The result is input to the embedded-zero module, where k zero bits are embedded, giving the 2k×1 output vector

$$U_2(t) = \begin{bmatrix} 0_{k\times 1} \\ \sum_{j=0}^{l} h_j M(t-j) \end{bmatrix} \tag{2}$$

The final encoded output is the sum of (1) and (2):

$$C(t) = U_1(t) + U_2(t) \tag{3}$$

Equation (3) is named the "generation function." In the above process, the (2k, k) codes are called "embedded codes," and can be taken from block codes that have code rate R = 1/2 and a good Hamming distance. Even if k is not large, as a result of the multiplier effect, the growth of the memory length of the (2k, k, l) convolutional codes is much more significant than that of the (2, 1, l) convolutional codes. So we pick out the (6, 3) and (8, 4) double-loop cyclic codes from [10] as embedded codes to construct (6, 3, l) and (8, 4, l) convolutional codes; their generation matrices are, respectively:

and:

From (4) and (5) we can also see that their minimum distances are 3 and 4, respectively, so they have a very good length-distance cost performance, which benefits the new convolutional codes. On the other hand, the polynomial coefficients g_j and h_j of linear combiners 1 and 2 can be derived from conventional (2, 1, l) convolutional codes that have a good free distance. For this paper, we selected the codes with optimum distance characteristics from [10], with l = 1~5, as shown in Tab. 1, where g_0~g_l and h_0~h_l are grouped into three-digit groups and expressed in octal.

Fig. 1. Encoder of (2k, k, l) convolutional codes

The growth rate of the (2k, k, l) memory length is k times that of the (2, 1, l) codes, which is very advantageous for the study of convolutional codes with a large constraint degree. Fig. 1 fully demonstrates how memory is injected from the (2, 1, l) convolutional code into the block code, a characteristic similar to concatenated codes [11]. However, the code rate remains 1/2, the same as the embedded code, so there is no rate loss as in concatenated codes.
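To make the encoding process concrete, the following minimal Python sketch implements the generation function (3) for one time step. The toy parity matrix P and the combiner coefficients g, h are illustrative assumptions, not the actual parameters of (4), (5), or Tab. 1, and the k zero bits are assumed to be embedded in the information positions, per (2).

```python
import numpy as np

def encode_step(M_window, G, g, h, k):
    """One step of the (2k, k, l) encoder of Fig. 1 (illustrative sketch).

    M_window : list [M(t), M(t-1), ..., M(t-l)] of k-bit column vectors
    G        : 2k x k generation matrix [I; P] of the embedded (2k, k) code
    g, h     : binary coefficients g_0..g_l and h_0..h_l of combiners 1 and 2
    """
    l = len(g) - 1
    s1 = np.zeros(k, dtype=int)  # linear combiner 1: sum_j g_j M(t-j) over GF(2)
    s2 = np.zeros(k, dtype=int)  # linear combiner 2: sum_j h_j M(t-j) over GF(2)
    for j in range(l + 1):
        s1 = (s1 + g[j] * M_window[j]) % 2
        s2 = (s2 + h[j] * M_window[j]) % 2
    u1 = G.dot(s1) % 2                                 # equation (1)
    u2 = np.concatenate([np.zeros(k, dtype=int), s2])  # equation (2): k zeros embedded
    return (u1 + u2) % 2                               # equation (3)

# Toy example with assumed parameters (k = 3, l = 1):
k = 3
P = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # hypothetical k x k parity part
G = np.vstack([np.eye(k, dtype=int), P])         # G = [I; P], rate 1/2
g, h = [1, 1], [0, 1]                            # hypothetical combiner coefficients
window = [np.array([1, 0, 1]), np.array([0, 1, 1])]  # [M(t), M(t-1)]
print(encode_step(window, G, g, h, k))           # one 2k-bit code block
```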

Table 1. Generating polynomial coefficients of (2, 1, l) convolutional codes

 

3. VITERBI MATRIX DECODING

Fig. 2. Matrix decoding of (2k, k, l) convolutional codes

The (2k, k, l) convolutional codes can also be decoded with the Viterbi algorithm, and decoding can be made faster by adopting a parallel structure [12] at the expense of structural complexity. In fact, the trellis of (2k, k, l) convolutional codes has 2^{lk} states. The number of branch paths that converge at the same state node is 2^k, giving 2^{(l+1)k} branches in total. The likelihood score of a branch path is defined as the branch metric, and the accumulated value of all branch metrics along a connected path is defined as the path metric. The decoder calculates the branch metrics of the 2^k branches converging at each state node, adds them to the path metrics from the previous time to obtain 2^k new path metrics, and then picks out the path with the maximum metric as the survivor path. This amounts to 2^{lk} add-compare-select operations per step. In order to implement the next accumulating operation, the decoder needs 2^{lk} registers to store the surviving path metrics. To avoid overflow, the path metrics should be attenuated periodically. Another 2^{lk} register sets are provided for the parallel decoding to save the survivor paths.
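As a quick check on these counts, the short sketch below (ours, for illustration) evaluates the trellis dimensions for the two code families used in Section 4.

```python
def trellis_sizes(k, l):
    """Trellis dimensions of a (2k, k, l) code, per the counts above."""
    states = 2 ** (l * k)           # state nodes, and ACS operations per step
    fanin = 2 ** k                  # branches converging at each state node
    branches = 2 ** ((l + 1) * k)   # total branches = states * fanin
    return states, fanin, branches

for k, l in [(3, 3), (4, 2)]:       # e.g., the (6, 3, 3) and (8, 4, 2) codes
    print((k, l), trellis_sizes(k, l))
# (3, 3) -> (512, 8, 4096); (4, 2) -> (256, 16, 4096)
```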

3.1 The Add-Compare-Select

With the help of some matrix modules, this paper provides a Viterbi matrix decoder with a single structure and a parallel processing capability, as shown in Fig. 2. First, there are 2^{(l+1)k} branches in the trellis of (2k, k, l) convolutional codes. Each branch has a codeword, so we can obtain a 2^{k(l+1)}×1 matrix, sorted in ascending 2^k-ary order of the index i_0 i_1 … i_{l-1} i_l:

where C_{00…00} ~ C_{00…0(2^k−1)} correspond to the 2^k branches of state node S_{00…0} and can be derived from (3). Each element of (6) is a 1×2k vector, so (6) can be converted into a 2^{k(l+1)}×2k matrix:

Since C is a constant matrix, it can be computed and converted to a bipolar code beforehand and stored in a "codeword generator." We assume that the received vector with noise is R(t) = [r_0(t) r_1(t) … r_{2k-1}(t)]^T. The matrix multiplier completes the "multiply" operation, and the output is:

where:

Equation (9) is the inner product of a branch codeword and R(t), which is equivalent to a maximum a posteriori (MAP) metric. Thus, the matrix multiplication between C and R(t) completes the likelihood operation for all 2^{(l+1)k} branch paths at once, and the result accords with maximum likelihood decoding. Suppose the output of the matrix adder is:

Λ(t), like Q(t), is a 2^{(l+1)k}×1 matrix. In order to facilitate the "comparison" operation, the shaper extracts 2^k elements at a time from (10) and shapes them into a new 2^{lk}×2^k matrix:

The high-order l digits of the subscripts in each column are sorted in 2^k-ary order. The comparator compares the 2^k elements of each row and outputs the maximum:

The above process finds the maximum likelihood path by comparing the 2^k path metrics of each state node S_{i_0 i_1 … i_{l-1}}. In order to prevent Y(t) from gradually growing, the attenuator finds the minimum value λ_min in (12) and subtracts it from all rows. It follows that:

Since all elements of Y(t) undergo the same attenuation, the maximum likelihood criterion is not violated, and the path metrics are kept small. Y'(t) is sent to the path metric memory to be saved. Because each old state points to 2^k new states, each corresponding path metric will be used 2^k times in the next accumulation, so a 2^k-fold row merging is needed for Y'(t). The output of the accumulator at the next time is:

The accumulator, shaper, comparator, attenuator, path metric memory, and merger in the loop complete the add-compare-select operations, the attenuation of the path metrics, and the update operations.
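The loop of Fig. 2 can be sketched in a few lines of Python under stated assumptions: C is the precomputed bipolar codeword matrix of (7), the flat branch index runs in the ascending 2^k-ary order i_0 i_1 … i_l of (6), and the state index orders the most recent input group first. The array layout is our assumption; only the module roles follow the text.

```python
import numpy as np

def acs_step(C, R, path_metric, k, l):
    """One add-compare-select pass of the matrix Viterbi decoder (Fig. 2 sketch).

    C           : 2^{k(l+1)} x 2k bipolar codeword matrix, as in (7)
    R           : length-2k received vector with noise, R(t)
    path_metric : length-2^{lk} survivor path metrics from the previous step
    Returns the attenuated metrics Y'(t) of (13) and the index matrix E of (17).
    """
    n_states = 2 ** (l * k)
    fanin = 2 ** k
    Q = C @ R                               # (8)-(9): inner products = branch metrics
    merged = np.tile(path_metric, fanin)    # (14): 2^k-fold row merging, assumed layout
    Lam = Q + merged                        # (10): matrix adder output
    Y = Lam.reshape(n_states, fanin)        # (11): shaper, 2^k branches per new state
    E = Y.argmax(axis=1)                    # (17): winning-branch index per state
    Y = Y.max(axis=1)                       # (12): comparator keeps the survivor
    return Y - Y.min(), E                   # (13): attenuator subtracts lambda_min
```

With this layout, the old state of branch i_0 i_1 … i_l is the 2^k-ary number i_1 … i_l, which is why the tile (rather than a plain repeat) aligns each old metric with the 2^k branches it feeds.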

3.2 Saving and Updating the Survivor Path

Each current state node is connected with the survivor path retained by the previous state node selected in (12); the oldest branch is deleted to complete the update of the survivor path. This procedure is completed by the variable selector and the column merger in Fig. 2. The decoder is required to provide the original information groups that are consistent with the current state node S_{i_0 i_1 … i_{l-1}}. The 2^{kl} information-group vectors form the following matrix:

where β_{i_0 i_1 … i_{l-1}} = (i_0)_B is the 1×k binary information group. Suppose that the current survivor path memory is:

This is a 2^{lk}×k(τ+1) matrix and the total memory depth is k(τ+1). The last k columns of the matrix hold the oldest group.

The role of the variable selector is to find, for each new branch, the survivor path of the previous time that it extends. Let the index matrix of the maximum values in (12) be:

The value range of each element is 0~2^k−1. In fact, (17) holds the column index of the maximum in each row of (12); it needs to be transformed into a row index of (16) to be used. Let:

Equation (18) can be merged to obtain the correction vector:

Adding (19) to (17) gives the new row index matrix:

At the next moment, the variable selector fetches X(t) from the survivor path memory, selects the new rows one by one according to U, and completes the reordering of all rows:

Then the columns are merged with β, and the last k columns are automatically deleted to obtain the new survivor path matrix:

As the memory depth increases, survivor paths merge together, which is reflected in the rightmost columns, whose elements gradually become identical. The role of the max module is to output the index of the largest element in Y'(t). The variable selector then selects the last k columns of (22) as the output based on this index, which can effectively reduce the required memory depth: even if the survivor paths have not completely merged at the last node, the decoder can still choose the best path as the decoding output.
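The same indexing convention gives a compact sketch of the saving and updating procedure of (15)-(22); the arithmetic that turns the column index (17) into the row index (20) is our reconstruction of the correction step (18)-(19).

```python
import numpy as np

def update_survivor_paths(X, E, beta, k, l):
    """Survivor-path update of Section 3.2 (illustrative sketch).

    X    : 2^{lk} x k(tau+1) survivor path memory, as in (16)
    E    : length-2^{lk} winning-branch indices from (17)
    beta : 2^{lk} x k information groups of (15), row r = binary form of i_0
    """
    n_states = 2 ** (l * k)
    r = np.arange(n_states)           # new-state index i_0 i_1 ... i_{l-1}
    # (18)-(20): the old state of the winning branch is i_1 ... i_{l-1} i_l,
    # i.e. shift out i_0 and shift in the column index E (assumed digit order)
    U = (r % 2 ** ((l - 1) * k)) * 2 ** k + E
    X_sel = X[U, :]                   # (21): variable selector reorders rows
    # (22): column merger prepends the new group, drops the oldest k columns
    return np.hstack([beta, X_sel[:, :-k]])
```

At the decoder output, the max module corresponds to best = Y.argmax(), and the variable selector emits the last k columns of row best of the matrix returned above.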

Matrix processing gives the (2k, k, l) convolutional decoder a single structure and achieves a highly unified implementation of the Viterbi algorithm: only the inner parameters of some modules need to be modified for different codes. This is very conducive to analysis and design.

 

4. SIMULATION ANALYSIS

Fig. 3. Convergence of BER performance with memory depth

The simulation is implemented on the Gaussian channel with BPSK modulation, and double-precision data is used in the add-compare-select. It is well known that too small a memory depth degrades the error-correcting capability, while an excessive one increases the cost of the decoder. The first task is to verify the validity of the max module and to determine a suitable memory depth. Take the (6, 3, 3) convolutional code as an example, where 10 different memory depths are selected as integer multiples of kl = 9. When Eb/No is 1.5 dB, 2 dB, and 2.5 dB, respectively, the BER with and without the max module is shown in Fig. 3. The results indicate that the convergence of the decoder is significantly improved after the max module is introduced; the max module clearly reduces the required memory cells. When the memory depth is approximately 6×kl = 54, the decoder approaches its best performance. Therefore, the memory depth is set to 6kl in the simulations below.

In order to monitor the decoding process, we can observe the output of the survivor path memory with a matrix viewer. Fig. 4 shows screenshots for the (8, 4, 2) convolutional code, where the memory depth τ is set to 6×kl = 48, the number of state nodes is 2^{kl} = 256, and black and white denote 1 and 0, respectively. By comparing the two figures, we can usefully observe the real-time channel condition.

Fig. 4. Screenshots of the path memory matrix for (8, 4, 2) convolutional codes

Combining Tab. 1 with (4) and (5), we constructed five (6, 3, l) convolutional codes and four (8, 4, l) convolutional codes. Their BER performances are shown as the solid lines in Fig. 5 and Fig. 6. It can be seen that a stable SNR gain is obtained as l grows. For example, at l = 4, the SNR required for BER = 10^-5 is 2.9 dB and 2.4 dB, respectively.

Fig. 5. Performance comparison between (6, 3, l) and (2, 1, l) convolutional codes

Fig. 6. Performance comparison between (8, 4, l) and (2, 1, l) convolutional codes

In order to further analyze the BER performance of (2k, k, l) convolutional codes, we compare them with conventional (2, 1, l) convolutional codes. Let l1 and l2 be the encoding constraint lengths of the (2k, k, l) and (2, 1, l) convolutional codes, respectively. When k×l1 = l2, their memory lengths and state numbers are equal and the decoding complexity is roughly the same, so the two codes are comparable. The best eight (2, 1, l) convolutional codes, taken from [10], are listed in Table 2, and their performances are added as the dotted lines in Fig. 5 and Fig. 6. It can be seen that, except for (6, 3, 1), the (2k, k, l) convolutional codes have varying degrees of advantage over the (2, 1, l) convolutional codes.

Table 2. Generating polynomial coefficients of (2, 1, l) convolutional codes

 

5. CONCLUSION

Constructing long codes from short ones has always been a central topic in error-correcting codes. In this paper, we showed how to construct a new class of (2k, k, l) convolutional codes from existing (2k, k) block codes, which clearly embodies this idea. The (2k, k, l) convolutional codes achieve a k-fold increase in memory length, and they offer a new approach to the study of large-memory convolutional codes. For Viterbi decoding, we proposed a single-structured decoder with parallel processing capability by introducing a series of matrix modules, and ended up with a high-quality decoding model. However, the complexity of the Viterbi algorithm grows exponentially with kl, which prevents the advantage of these codes from being fully exhibited. In future research, we will investigate the relationship between (2k, k, l) convolutional codes and LDPC codes, and will fully uncover their potential error-correcting capability.

References

  1. Robinson J, Bernstein A. A class of binary recurrent codes with limited error propagation[J]. IEEE Transactions on Information Theory, vol.13, 1967, pp.106-113. https://doi.org/10.1109/TIT.1967.1053951
  2. Massey J, Costello D. Nonsystematic Convolutional Codes for Sequential Decoding in Space Applications[J]. IEEE Transactions on Communication Technology, vol.19, 1971, pp.806-813. https://doi.org/10.1109/TCOM.1971.1090720
  3. Bahl L, et al. An efficient algorithm for computing free distance[J]. IEEE Transactions on Information Theory, vol.18, 1972, pp.437-439. https://doi.org/10.1109/TIT.1972.1054821
  4. Shu Lin, Daniel J. Costello. Error Control Coding: Fundamentals and Applications[M]. 2nd ed. Pearson Education, 2004, pp.582-598.
  5. Berrou C, Glavieux A. Near optimum error correcting coding and decoding: turbo-codes[J]. IEEE Transactions on Communications, vol.44, 1996, pp.1261-1271. https://doi.org/10.1109/26.539767
  6. Pusane A E, et al. Deriving Good LDPC Convolutional Codes from LDPC Block Codes[J]. IEEE Transactions on Information Theory, vol.57, 2011, pp.835-857. https://doi.org/10.1109/TIT.2010.2095211
  7. Iyengar A R, Papaleo M, et al. Windowed Decoding of Protograph-based LDPC Convolutional Codes over Erasure Channels[J]. IEEE Transactions on Information Theory, vol.58, 2012, pp.2303-2320. https://doi.org/10.1109/TIT.2011.2177439
  8. Houshmand M, Hosseini-Khayat S, Wilde M M. Minimal-Memory, Noncatastrophic, Polynomial-Depth Quantum Convolutional Encoders[J]. IEEE Transactions on Information Theory, vol.59, 2013, pp.1198-1210. https://doi.org/10.1109/TIT.2012.2220520
  9. Jie Luo. On Low-Complexity Maximum-Likelihood Decoding of Convolutional Codes[J]. IEEE Transactions on Information Theory, vol.54, 2008, pp.5756-5760. https://doi.org/10.1109/TIT.2008.2006461
  10. Wang Xinmei. Error Correcting Codes: Principle and Method[M]. Xi'an: Xidian University Press, 2001, pp.159-161, 455. (in Chinese)
  11. Huebner A, Kliewer J, Costello D J. Double Serially Concatenated Convolutional Codes With Jointly Designed S-Type Permutors[J]. IEEE Transactions on Information Theory, vol.55, 2009, pp.5811-5821. https://doi.org/10.1109/TIT.2009.2032804
  12. Jie Jin, Chi-ying Tsui. Low-Power Limited-Search Parallel State Viterbi Decoder Implementation Based on Scarce State Transition[J]. IEEE Transactions on VLSI Systems, vol.15, 2007, pp.1172-1176. https://doi.org/10.1109/TVLSI.2007.903957