• Title/Summary/Keyword: Syndrome decoding

Syndrome Check aided Fast-SSCANL Decoding Algorithm for Polar Codes

  • Choangyang Liu;Wenjie Dai;Rui Guo
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.5, pp.1412-1430, 2024
  • The soft cancellation list (SCANL) decoding algorithm for polar codes runs L soft cancellation (SCAN) decoders over different decoding factor graphs. Although it achieves better decoding performance than the SCAN algorithm, it has high latency. In this paper, a fast simplified SCANL (Fast-SSCANL) algorithm that runs L independent Fast-SSCAN decoders is proposed. In the Fast-SSCANL decoder, special nodes in each factor graph are first identified, and corresponding low-latency decoding approaches for each special node are proposed. Then, a syndrome check aided Fast-SSCANL (SC-Fast-SSCANL) algorithm is further put forward: ordinary nodes that satisfy the syndrome check execute a hard decision directly without traversing the factor graph, reducing the decoding latency further. Simulation results show that the Fast-SSCANL and SC-Fast-SSCANL algorithms achieve the same BER performance as the SCANL algorithm with lower latency: Fast-SSCANL reduces latency by more than 83% compared with SCANL, and SC-Fast-SSCANL by more than 85%, regardless of code length and code rate.
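
The syndrome check that lets ordinary nodes skip the factor-graph traversal boils down to testing whether hard decisions on the current soft values already satisfy every parity check. A minimal sketch of that test is given below, assuming a binary parity-check matrix `H`, an LLR vector `llr`, and a `scan_decode` fallback; these names are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def syndrome_check_decode(llr, H, scan_decode):
    """Accept the hard decisions outright when they already satisfy every parity check."""
    hard = (np.asarray(llr) < 0).astype(int)   # BPSK convention: negative LLR -> bit 1
    syndrome = (H @ hard) % 2                  # s = H x^T over GF(2)
    if not syndrome.any():                     # syndrome check passed
        return hard                            # hard decision, no factor-graph traversal
    return scan_decode(llr)                    # otherwise run the soft (SCAN-style) decoder
```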

A Syndrome-distribution decoding MOLS L$_{p}$ codes

  • Hahn, S.;Kim, D.G.;Kim, Y.S.
    • Communications of Mathematical Education, v.6, pp.371-381, 1997
  • Let p be an odd prime number. We introduce a simple and useful decoding algorithm for orthogonal Latin square codes of order p. Let H be the parity-check matrix of an orthogonal Latin square code. For any x ${\in}$ GF(p)$^{n}$, we call xH$^{T}$ the syndrome of x. The method is based on syndrome decoding for linear codes. In L$_{p}$, we need to find the first and second coordinates of the codeword in order to correct the erroneous received vector.

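For a linear code over GF(p) with parity-check matrix H, the syndrome xH$^{T}$ (mod p) is zero exactly when no error is detected. A small illustration of the computation follows; the matrix H below is an arbitrary toy example over GF(5), not the orthogonal Latin square construction.

```python
import numpy as np

def syndrome_gfp(x, H, p):
    """Syndrome x H^T over GF(p), for an odd prime p."""
    return (np.asarray(x) @ np.asarray(H).T) % p

# Toy parity-check matrix over GF(5), for illustration only.
H = np.array([[1, 1, 1, 1, 0],
              [1, 2, 3, 4, 1]])
x = np.array([1, 4, 0, 0, 1])     # satisfies both checks mod 5
print(syndrome_gfp(x, H, 5))      # [0 0] -> no error detected
x[2] = 3                          # introduce a single-symbol error
print(syndrome_gfp(x, H, 5))      # nonzero syndrome flags the error
```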

DECODING OF LEXICODES S10,4

  • KIM, D.G.
    • Journal of the Korean Society for Industrial and Applied Mathematics, v.6 no.1, pp.47-52, 2002
  • In this paper we propose a simple decoding algorithm for the 4-ary lexicographic code (or lexicode) of length 10 with minimum distance 4, written $S_{10,4}$. It is based on the syndrome decoding method: using the syndrome vector we detect an error, and the error is then corrected from the four parity-check equations.

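The correction step of syndrome decoding for a single symbol error can be read off directly: a lone error of magnitude e at position i produces the syndrome e times column i of H, so matching the observed syndrome against scaled columns locates and removes the error. The sketch below shows that generic step over a prime field for simplicity (GF(4) would need full finite-field arithmetic rather than integers mod 4); it is not the specific $S_{10,4}$ procedure.

```python
import numpy as np

def correct_single_error(y, H, p):
    """Fix one symbol error by matching the syndrome against scaled columns of H (mod p)."""
    y = np.array(y) % p
    s = (y @ H.T) % p
    if not s.any():
        return y                                   # zero syndrome: nothing to correct
    for pos in range(H.shape[1]):                  # candidate error position
        for mag in range(1, p):                    # candidate error value
            if np.array_equal((mag * H[:, pos]) % p, s):
                y[pos] = (y[pos] - mag) % p        # subtract the estimated error
                return y
    raise ValueError("uncorrectable pattern (likely more than one error)")
```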

A new syndrome check error estimation algorithm and its concatenated coding for wireless communication

  • 이문호;장진수;최승배
    • The Journal of Korean Institute of Communications and Information Sciences, v.22 no.7, pp.1419-1426, 1997
  • A new SCEE (Syndrome Check Error Estimation) decoding method for convolutional codes and a concatenated SCEE/RS (Reed-Solomon) coding scheme are proposed. First, we describe the decoding steps of the proposed algorithm. Then, deterministic values for the decoding operation are derived when a particular predecoder-reencoder combination is used. Computer simulation results show that the computational complexity of the proposed SCEE decoder is significantly reduced compared with that of a conventional Viterbi decoder, without degradation of the $P_{e}$ performance. In addition, the concatenated SCEE/RS decoder has almost the same complexity as an RS decoder alone, and its coding gain is higher than that of either a soft-decision Viterbi decoder or an RS decoder.

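The abstract mentions a predecoder-reencoder combination; one generic way a syndrome is formed for a systematic rate-1/2 convolutional code is to re-encode the hard-decided information bits and compare the regenerated parity with the received parity, so that nonzero syndrome bits flag likely channel errors. The sketch below illustrates only that generic idea, with an assumed parity polynomial; it is not the paper's SCEE algorithm.

```python
import numpy as np

def reencode_parity(info_bits, taps=(1, 1, 1)):
    """Parity stream of a simple systematic convolutional encoder (assumed taps 1 + D + D^2)."""
    state = [0] * (len(taps) - 1)
    parity = []
    for b in info_bits:
        regs = [b] + state
        parity.append(sum(t * r for t, r in zip(taps, regs)) % 2)
        state = regs[:-1]                      # shift the register
    return np.array(parity)

def syndrome_stream(rx_info_hard, rx_parity_hard):
    """Syndrome = received parity XOR parity re-encoded from the hard-decided info bits."""
    return (reencode_parity(rx_info_hard) + np.asarray(rx_parity_hard)) % 2
```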

An Improved Decoding Scheme of Hamming Codes using Soft Values (소프트 값을 이용한 해밍 부호의 개선된 복호 방식)

  • Cheong, Ho-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.12 no.1, pp.37-42, 2019
  • In this paper, we propose a syndrome decoding scheme that can correct two errors within a codeword for single-error-correcting Hamming codes. The proposed scheme corrects multiple errors without substantially increasing decoding complexity, and thus significantly improves the error-rate performance relative to the decoder complexity. It is suitable for applications in which encoder/decoder energy use is extremely limited and low error-rate performance is required, such as IoT communications and molecular communications. To verify the error-rate improvement of Hamming codes with the proposed decoding scheme, we simulated short Hamming codes over an AWGN channel with BPSK modulation. Compared with the conventional decoding method, the proposed scheme showed a performance improvement of about 1.1 ~ 1.2 [dB], regardless of the code length of the Hamming code.
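
The abstract does not spell out how the soft values are used; one common way they enable a second correction is a Chase-like trial: flip the least reliable bit(s), run ordinary Hamming syndrome decoding on each trial, and keep the candidate that agrees best with the received soft values. The sketch below shows that flavor for the (7,4) Hamming code and should not be taken as the paper's exact scheme.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary expansion of j.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def hamming_correct(hard):
    """Standard single-error syndrome decoding: the syndrome indexes the bit to flip."""
    c = hard.copy()
    s = (H @ c) % 2
    pos = s[0] * 4 + s[1] * 2 + s[2]       # syndrome read as an error position in 1..7
    if pos:
        c[pos - 1] ^= 1
    return c

def soft_aided_decode(llr, num_trials=3):
    """Chase-like use of soft values: also try flipping the least reliable bits."""
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(int)
    trials = [hard]
    for pos in np.argsort(np.abs(llr))[:num_trials]:
        t = hard.copy()
        t[pos] ^= 1
        trials.append(t)
    candidates = [hamming_correct(t) for t in trials]
    # Keep the candidate codeword that correlates best with the received soft values.
    return max(candidates, key=lambda c: float(llr @ (1 - 2 * c)))
```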

LDPC Decoding by Failed Check Nodes for Serial Concatenated Code

  • Yu, Seog Kun;Joo, Eon Kyeong
    • ETRI Journal, v.37 no.1, pp.54-60, 2015
  • The use of serial concatenated codes is an effective technique for alleviating the error floor phenomenon of low-density parity-check (LDPC) codes. An enhanced sum-product algorithm (SPA) for LDPC codes, which is suitable for serial concatenated codes, is proposed in this paper. The proposed algorithm minimizes the number of errors by using the failed check nodes (FCNs) in LDPC decoding. Hence, the error-correcting capability of the serial concatenated code can be improved. The number of FCNs is simply obtained by the syndrome test, which is performed during the SPA. Hence, the decoding procedure of the proposed algorithm is similar to that of the conventional algorithm. The error performance of the proposed algorithm is analyzed and compared with that of the conventional algorithm. As a result, a gain of 1.4 dB can be obtained by the proposed algorithm at a bit error rate of $10^{-8}$. In addition, the error performance of the proposed algorithm with just 30 iterations is shown to be superior to that of the conventional algorithm with 100 iterations.
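
The failed-check-node count used here is simply the weight of the syndrome of the current hard decisions, so it is available essentially for free from the parity checks evaluated inside the SPA. A minimal sketch follows; `spa_iteration` is a placeholder for one sum-product pass, and keeping the estimate with the fewest FCNs is one illustrative way to use the count.

```python
import numpy as np

def failed_check_nodes(H, llr):
    """Number of unsatisfied parity checks (FCNs) for the current hard decisions."""
    hard = (np.asarray(llr) < 0).astype(int)
    return int(((H @ hard) % 2).sum())

def decode_keep_best(H, llr, spa_iteration, max_iter=50):
    """Run SPA passes and return the hard decisions with the fewest FCNs seen so far."""
    best = (np.asarray(llr) < 0).astype(int)
    best_fcn = failed_check_nodes(H, llr)
    for _ in range(max_iter):
        if best_fcn == 0:                      # zero syndrome: valid codeword, stop early
            break
        llr = spa_iteration(H, llr)            # one sum-product pass (placeholder callable)
        fcn = failed_check_nodes(H, llr)
        if fcn < best_fcn:
            best = (np.asarray(llr) < 0).astype(int)
            best_fcn = fcn
    return best
```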

Research Trends in Quantum Error Decoders for Fault-Tolerant Quantum Computing (결함허용 양자 컴퓨팅을 위한 양자 오류 복호기 연구 동향)

  • E.Y. Cho;J.H. On;C.Y. Kim;G. Cha
    • Electronics and Telecommunications Trends, v.38 no.5, pp.34-50, 2023
  • Quantum error correction is a key technology for achieving fault-tolerant quantum computation. Finding the best decoding solution to a single error syndrome pattern counteracting multiple errors is an NP-hard problem. Consequently, error decoding is one of the most expensive processes to protect the information in a logical qubit. Recent research on quantum error decoding has been focused on developing conventional and neural-network-based decoding algorithms to satisfy accuracy, speed, and scalability requirements. Although conventional decoding methods have notably improved accuracy in short codes, they face many challenges regarding speed and scalability in long codes. To overcome such problems, machine learning has been extensively applied to neural-network-based error decoding with meaningful results. Nevertheless, when using neural-network-based decoders alone, the learning cost grows exponentially with the code size. To prevent this problem, hierarchical error decoding has been devised by combining conventional and neural-network-based decoders. In addition, research on quantum error decoding is aimed at reducing the spacetime decoding cost and solving the backlog problem caused by decoding delays when using hardware-implemented decoders in cryogenic environments. We review the latest research trends in decoders for quantum error correction with high accuracy, neural-network-based quantum error decoders with high speed and scalability, and hardware-based quantum error decoders implemented in real qubit operating environments.

Reliability-Based Iterative Proportionality-logic Decoding of LDPC Codes with Adaptive Decision

  • Sun, Youming;Chen, Haiqiang;Li, Xiangcheng;Luo, Lingshan;Qin, Tuanfa
    • Journal of Communications and Networks, v.17 no.3, pp.213-220, 2015
  • In this paper, we present a reliability-based iterative proportionality-logic decoding algorithm for two classes of structured low-density parity-check (LDPC) codes. The main contributions of this paper are: 1) syndrome messages, instead of extrinsic messages, are processed and exchanged between variable nodes and check nodes, which reduces the decoding complexity; 2) a more flexible decision mechanism is developed in which the decision threshold can be self-adjusted during the iterative process; such a decision mechanism is particularly effective for decoding majority-logic decodable codes; 3) only the variable nodes satisfying a pre-designed criterion are involved in the presented algorithm, in the proportionality-logic sense, which further reduces the computational complexity. Simulation results show that, when combined with factor correction techniques and an appropriate proportionality parameter, the presented algorithm performs well and achieves a fast decoding convergence rate while maintaining relatively low decoding complexity, especially for small quantization levels (3-4 bits). The presented algorithm is a candidate for application scenarios where the memory load and energy consumption are extremely constrained.
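
The abstract describes exchanging syndrome messages and self-adjusting the decision threshold. A common reliability-based flavor of this idea is weighted bit flipping: each variable node accumulates a measure from its unsatisfied checks, and only bits whose measure clears a threshold are flipped, with the threshold relaxed when nothing qualifies. The sketch below shows that generic flavor under assumed update rules, not the authors' exact algorithm.

```python
import numpy as np

def wbf_adaptive(H, llr, max_iter=20, shrink=0.8):
    """Generic syndrome-driven weighted bit flipping with a self-adjusting threshold."""
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(int)
    rel = np.abs(llr)
    threshold = rel.mean()                  # initial decision threshold (assumed heuristic)
    for _ in range(max_iter):
        syn = (H @ hard) % 2
        if not syn.any():
            break                           # zero syndrome: valid codeword
        # Per-bit flipping measure: unsatisfied-check count minus the bit's reliability.
        measure = H.T @ syn - rel
        flip = measure >= threshold
        if flip.any():
            hard[flip] ^= 1                 # flip only the bits that clear the threshold
        else:
            threshold *= shrink             # self-adjust: lower the threshold and retry
    return hard
```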

Multihop Rate Adaptive Wireless Scalable Video Using Syndrome-Based Partial Decoding

  • Cho, Yong-Ju;Radha, Hayder;Seo, Jeong-Il;Kang, Jung-Won;Hong, Jin-Woo
    • ETRI Journal, v.32 no.2, pp.273-280, 2010
  • The overall channel capacity of a multihop wireless path drops progressively over each hop due to the cascading effect of noise and interference. Hence, without optimal rate adaptation, the video quality is expected to degrade significantly at any client located at the far edge of an ad hoc network. To overcome this limitation, decoding and forwarding (DF), which fully decodes codewords at each intermediate node, can be employed to provide the best video quality; however, its complexity and memory usage are significantly high. Consequently, we propose syndrome-based partial decoding (SPD). In the SPD framework, an intermediate node partially decodes a codeword and relays the packet along with its syndromes if the packet is corrupted. We demonstrate the efficacy of the proposed scheme through simulations using actual 802.11b wireless traces. The trace-driven simulations show that the proposed SPD framework, which reduces the overall processing requirements of intermediate nodes, provides reasonably high goodput compared with simple forwarding, and lower complexity and memory requirements compared with DF.
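
At each intermediate node, the relay decision reduces to a syndrome test on the received packet: a clean packet is forwarded as-is, while a corrupted one is forwarded together with its syndrome so that correction can be completed downstream. A schematic sketch of that relay logic, with assumed packet fields rather than the paper's protocol format:

```python
import numpy as np

def relay_packet(codeword_bits, H):
    """Forward clean packets untouched; attach the syndrome when corruption is detected."""
    syndrome = (H @ np.asarray(codeword_bits)) % 2
    if not syndrome.any():
        return {"bits": codeword_bits}                 # simple forwarding, no extra work
    return {"bits": codeword_bits,                     # partial decoding path: relay the
            "syndrome": syndrome.tolist()}             # syndrome so repair can finish later
```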

An Error Correcting High Rate DC-Free Multimode Code Design for Optical Storage Systems (광기록 시스템을 위한 오류 정정 능력과 높은 부호율을 가지는 DC-free 다중모드 부호 설계)

  • Lee, June;Woo, Choong-Chae
    • Journal of the Institute of Convergence Signal Processing, v.11 no.3, pp.226-231, 2010
  • This paper proposes a new coding technique for constructing error-correcting, high-rate DC-free multimode codes using a generator matrix derived from a sparse parity-check matrix. The scheme exploits high-rate generator matrices to produce distinct candidate codewords. The decoding complexity depends on whether the syndrome of the received codeword is zero or not: if the syndrome is zero, decoding is performed simply by expurgating the redundant bits of the received codeword; otherwise, decoding is performed by a sum-product algorithm. The proposed scheme achieves reasonable DC suppression and a low bit error rate.
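
The decoding split described above is straightforward to express: compute the syndrome of the received word; if it is zero, expurgate (drop) the redundant bits to recover the data, otherwise invoke sum-product decoding on the sparse parity-check matrix. A schematic sketch, where `sum_product_decode` and the systematic data-first bit layout are assumptions:

```python
import numpy as np

def decode_dcfree_multimode(received, H, num_data_bits, sum_product_decode):
    """Zero syndrome: expurgate redundant bits. Nonzero: fall back to sum-product decoding."""
    hard = (np.asarray(received) < 0).astype(int)    # hard decisions from channel values
    syndrome = (H @ hard) % 2
    if not syndrome.any():
        return hard[:num_data_bits]                  # assumed systematic layout: data bits first
    corrected = sum_product_decode(received, H)      # soft decoding of the sparse code
    return corrected[:num_data_bits]
```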