Title/Summary/Keyword: Decoding complexity


Complexity Analysis of Internet Video Coding (IVC) Decoding

  • Park, Sang-hyo; Dong, Tianyu; Jang, Euee S.
    • Journal of Multimedia Information System, v.4 no.4, pp.179-188, 2017
  • The Internet Video Coding (IVC) standard is due to be published by the Moving Picture Experts Group (MPEG) for various Internet applications such as Internet broadcast streaming. IVC has three fundamental aims: 1) licensing IVC patents free of charge, 2) reaching compression performance comparable to the AVC/H.264 constrained Baseline Profile (cBP), and 3) keeping computational complexity low enough for feasible real-time encoding and decoding. MPEG experts have worked diligently on the intellectual property rights issues for IVC, and they reported that IVC has already achieved the second goal (compression performance), showing performance comparable even to the AVC/H.264 High Profile (HP). On the complexity issue, however, there has been no thorough analysis of the IVC decoder. In this paper, we analyze the time complexity of the IVC decoder by measuring its running time. Experimental results show that IVC decoding is 3.6 times and 3.1 times more complex than AVC/H.264 cBP under constrained set (CS) 1 and CS2, respectively. Compared to AVC/H.264 HP, IVC is 2.8 times and 2.9 times slower in decoding time under CS1 and CS2, respectively. The most critical tool to improve for a lightweight IVC decoder is the motion compensation process, which contains a resolution-adaptive interpolation filtering process.
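
As a rough illustration of how running-time ratios like those above can be measured, the sketch below times two decoder executables over the same bitstreams and reports their ratio. This is not from the paper: the binary names `ivc_decoder` and `avc_decoder` and the bitstream paths are placeholders.

```python
# Minimal sketch: estimate a decoding-time ratio by timing two hypothetical
# decoder binaries over the same test bitstreams. Best-of-N timing limits
# the influence of OS scheduling noise on the comparison.
import subprocess
import time

def decode_time(decoder_cmd, bitstream, runs=5):
    """Return the best-of-N wall-clock decoding time in seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([decoder_cmd, bitstream], check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        best = min(best, time.perf_counter() - start)
    return best

streams = ["seq1.bin", "seq2.bin"]   # placeholder test bitstreams
ivc = sum(decode_time("ivc_decoder", s) for s in streams)
avc = sum(decode_time("avc_decoder", s) for s in streams)
print(f"IVC/AVC decoding-time ratio: {ivc / avc:.1f}x")
```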

Adaptive Hard Decision Aided Fast Decoding Method in Distributed Video Coding

  • Oh, Ryang-Geun; Shim, Hiuk-Jae; Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.6, pp.66-74, 2010
  • Distributed video coding (DVC) has recently drawn attention for environments with restricted computing resources at the encoder. Wyner-Ziv (WZ) coding is a representative DVC scheme. The WZ encoder encodes key frames and WZ frames independently, using conventional intra coding for the former and a channel code for the latter. The WZ decoder generates side information from two reconstructed key frames (t-1, t+1) based on temporal correlation. The side information is regarded as a noisy version of the original WZ frame, and the virtual channel noise can be removed by channel decoding, so the performance of WZ coding greatly depends on the performance of the channel code. Among existing channel codes, Turbo codes and LDPC codes have the most powerful error-correction capability, but both use iterative probabilistic decoding, which is quite time-consuming and considerably increases the complexity of the WZ decoder. An analysis of LDPCA complexity on real video data shows that LDPCA decoding accounts for more than 60% of the total WZ decoding complexity. The hard decision aided (HDA) method proposed in the channel coding literature can greatly reduce channel decoding complexity, but considerable rate-distortion (RD) loss is possible depending on the threshold, whose proper value differs from sequence to sequence. In this paper, we propose an adaptive HDA method that sets a proper threshold for each sequence. The proposed method achieves about 62% and 32% time savings in the LDPCA and WZ decoding processes, respectively, with little loss in RD performance.
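
A minimal sketch of the hard-decision-aided idea, under simplifying assumptions of my own: the per-sequence adaptive threshold selection described in the paper is reduced to a caller-supplied value, and `update_llr` stands in for one LDPCA decoding iteration.

```python
# Hard-decision-aided (HDA) early termination: stop iterating once every
# posterior LLR is confident enough, instead of running to convergence.
import numpy as np

def hda_stop(llr: np.ndarray, threshold: float) -> bool:
    """True when all hard decisions clear the confidence threshold."""
    return bool(np.min(np.abs(llr)) >= threshold)

def decode_with_hda(update_llr, llr0, threshold, max_iters=100):
    """Run `update_llr` (one decoder iteration) until HDA fires."""
    llr = llr0.copy()
    for it in range(1, max_iters + 1):
        llr = update_llr(llr)
        if hda_stop(llr, threshold):
            break
    hard = (llr < 0).astype(np.uint8)   # bit = 1 for negative LLR
    return hard, it
```

A larger threshold trades more iterations (complexity) for safer decisions (RD performance), which is exactly the tradeoff the adaptive per-sequence threshold tries to balance.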

Serial Concatenation of Space-Time and Recursive Convolutional Codes

  • Ko, Young-Jo; Kim, Jung-Im
    • ETRI Journal, v.25 no.2, pp.144-147, 2003
  • We propose a new serial concatenation scheme for space-time and recursive convolutional codes, in which a space-time code is used as the outer code and a single recursive convolutional code as the inner code. We discuss previously proposed serial concatenation schemes employing multiple inner codes and compare them with the new one. The proposed method and the previous one with joint decoding, both of which perform combined decoding of the simultaneous output signals from multiple antennas, give a large performance gain over the separate decoding method. In terms of decoding complexity, the new concatenation scheme is lower than the multiple-encoding/joint-decoding scheme owing to its single inner code. Simulation results for a communication system with two transmit antennas and one receive antenna in a quasi-static Rayleigh fading channel show that the proposed scheme outperforms the previous schemes.
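
For reference, a recursive systematic convolutional (RSC) encoder of the kind that could serve as the single inner code is easy to sketch. The generator (1, 5/7) in octal is a common textbook choice and an assumption here; the outer space-time encoding and the interleaver are left out of scope.

```python
# Rate-1/2 recursive systematic convolutional encoder, generator (1, 5/7)_8:
# feedback polynomial 7_8 = 1 + D + D^2, feedforward polynomial 5_8 = 1 + D^2.
def rsc_encode(bits):
    """Return (systematic, parity) output streams."""
    s1 = s2 = 0                       # shift-register state
    systematic, parity = [], []
    for b in bits:
        fb = b ^ s1 ^ s2              # feedback term
        p = fb ^ s2                   # feedforward term
        systematic.append(b)
        parity.append(p)
        s2, s1 = s1, fb               # shift the register
    return systematic, parity

print(rsc_encode([1, 0, 1, 1, 0]))    # ([1, 0, 1, 1, 0], [1, 1, 0, 0, 1])
```

The recursive (feedback) structure of the inner code is what gives serially concatenated schemes their interleaver gain under iterative decoding.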


Single-Step Adaptive Offset Min-Sum Algorithm for Decoding LDPC Codes

  • Lin, Xiaoju; Baasantseren, Gansuren; Lee, Hae-Kee; Kim, Sung-Soo
    • The Transactions of the Korean Institute of Electrical Engineers P, v.59 no.1, pp.53-57, 2010
  • Low-density parity-check (LDPC) codes with the belief-propagation (BP) algorithm achieve remarkable performance close to the Shannon limit at reasonable decoding complexity. Conventionally, each iteration of the decoding process contains two steps: the horizontal step and the vertical step. In this paper, an efficient implementation of the adaptive offset min-sum (AOMS) algorithm for decoding LDPC codes using a single-step method is proposed, and the performance of the AOMS algorithm is compared with that of the BP algorithm. Algorithms using the single-step method reduce implementation complexity, speed up the decoding process, and are more efficient in terms of memory requirements.
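
The fused schedule is easy to see in code. Below is a toy reconstruction of my own (small arbitrary parity-check matrix, arbitrary offset value, not the paper's design): each iteration makes one pass over the check nodes and updates the posteriors in place, so no separate vertical step remains.

```python
# Offset min-sum LDPC decoding with check-node and variable-node updates
# fused into a single pass per iteration.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=np.uint8)

def offset_min_sum(llr_ch, H, offset=0.15, max_iters=50):
    M, N = H.shape
    rows = [np.flatnonzero(H[m]) for m in range(M)]
    R = np.zeros((M, N))                      # check-to-variable messages
    llr = llr_ch.astype(float).copy()         # posterior LLRs
    for _ in range(max_iters):
        for m in range(M):                    # one fused pass per iteration
            q = llr[rows[m]] - R[m, rows[m]]  # variable-to-check messages
            sgn = np.prod(np.sign(q + 1e-12))
            mag = np.abs(q)
            i1 = int(np.argmin(mag))
            min1 = mag[i1]
            min2 = np.min(np.delete(mag, i1))
            for j, n in enumerate(rows[m]):
                out = min2 if j == i1 else min1
                out = max(out - offset, 0.0)            # offset correction
                new = sgn * np.sign(q[j] + 1e-12) * out
                llr[n] += new - R[m, n]       # update posterior in place
                R[m, n] = new
        hard = (llr < 0).astype(np.uint8)
        if not np.any((H @ hard) % 2):        # valid codeword: stop early
            break
    return hard

llr_ch = np.array([2.5, -0.5, 3.0, 1.5, 2.0, 2.5])  # one unreliable bit
print(offset_min_sum(llr_ch, H))                     # -> [0 0 0 0 0 0]
```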

Complexity-Reduced Algorithms for LDPC Decoder for DVB-S2 Systems

  • Choi, Eun-A; Jung, Ji-Won; Kim, Nae-Soo; Oh, Deock-Gil
    • ETRI Journal, v.27 no.5, pp.639-642, 2005
  • This paper proposes two complexity-reduced algorithms for a low-density parity-check (LDPC) decoder. First, sequential decoding using a partial group is proposed. It has the same hardware complexity and requires fewer iterations with little performance loss; the amount of performance loss can be determined by the designer, based on a tradeoff with the desired reduction in complexity. Second, an early detection method for reducing the computational complexity is proposed. Using a confidence criterion, some bit nodes and check-node edges are detected early during decoding; once detected, they require no further updates in subsequent iterations, which reduces the computational complexity.
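
A sketch of the confidence criterion under assumptions of my own (a fixed LLR-magnitude threshold; the paper's exact criterion may differ). Detected bits keep their decisions and their edges drop out of the update schedule, which is where the saving comes from; this plugs into any iterative decoder, such as the min-sum sketch shown earlier.

```python
# Early detection: freeze confident bit nodes and skip their edges.
import numpy as np

def update_detected(llr, detected, threshold=8.0):
    """Freeze bit nodes whose posterior magnitude clears the threshold."""
    return detected | (np.abs(llr) >= threshold)

def active_edges(H, detected):
    """Edges still to be updated: those touching undetected bits only."""
    return [(m, n) for m, n in zip(*np.nonzero(H)) if not detected[n]]

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]], dtype=np.uint8)
llr = np.array([9.0, -0.4, 10.0, -6.0])
det = update_detected(llr, np.zeros(4, dtype=bool))
print(det, len(active_edges(H, det)))   # bits 0 and 2 frozen, 4 edges left
```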


An Adaptive K-best Detection Algorithm for MIMO Systems

  • Kim, Jong-Wook; Kang, Ji-Won; Lee, Chung-Yong
    • Journal of the Institute of Electronics Engineers of Korea TC, v.43 no.10 s.352, pp.1-7, 2006
  • The lattice decoding concept has been proposed to implement maximum-likelihood (ML) detection, which is the optimal receiver in terms of bit error rate (BER) performance for multiple-input multiple-output (MIMO) systems. The sphere decoding algorithm and the K-best decoding algorithm are both based on the lattice decoding concept. The K-best decoding algorithm shows good BER performance with relatively low complexity; however, with a small K value, the error propagation effect severely degrades performance. In this paper, we propose an adaptive K-best decoding algorithm that has lower average complexity and better BER performance than the conventional K-best decoding algorithm.
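
A toy sketch of the idea; the adaptation rule below is a generic metric-margin heuristic of my own choosing, not necessarily the paper's. Candidates are expanded layer by layer, and besides the hard cap K, any path whose metric is far from the best survivor is dropped, which shrinks the average number of survivors on easy channel realizations.

```python
# Adaptive K-best tree-search detection for a real-valued MIMO model
# y = Hx + n with symbols in {-1, +1}.
import numpy as np

def adaptive_k_best(y, H, symbols=(-1.0, 1.0), k_max=4, margin=2.0):
    Q, R = np.linalg.qr(H)                    # y = QRx + n, R upper-triangular
    z = Q.T @ y
    n = H.shape[1]
    paths = [((), 0.0)]                       # (partial symbol tuple, metric)
    for layer in range(n - 1, -1, -1):        # detect from the last layer up
        expanded = []
        for syms, metric in paths:
            for s in symbols:
                fixed = np.array((s,) + syms) # symbols fixed so far
                r = z[layer] - R[layer, layer:] @ fixed
                expanded.append(((s,) + syms, metric + r * r))
        expanded.sort(key=lambda p: p[1])
        best = expanded[0][1]
        # adaptive K: hard cap plus a metric margin around the best path
        paths = [p for p in expanded[:k_max] if p[1] <= best + margin]
    return np.array(paths[0][0])

rng = np.random.default_rng(1)
H = rng.normal(size=(2, 2))                   # toy 2x2 real channel
x = np.array([1.0, -1.0])
print(adaptive_k_best(H @ x, H))              # noiseless: recovers x
```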

Adaptive Sliding Window Method for Turbo Codes in CDMA Cellular System with Power Control Error

  • Park, Sook-Min; Yoon, Sang-Sic; Kim, Sang-Wu; Lee, Kwyro
    • Proceedings of the IEEK Conference, 2003.07a, pp.565-568, 2003
  • This paper presents a method to reduce the computational complexity of turbo decoding. We propose an adaptive sliding window method that controls the learning period of the Viterbi sliding window method depending on the channel signal-to-interference ratio (SIR). When the received SIR is relatively high, the decoding complexity can be reduced without noticeable degradation of BER performance in a CDMA cellular system with power control error.
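
A small sketch (my illustration, with placeholder thresholds, not the paper's mapping): the learning period shrinks when the estimated SIR is high, so each sliding window needs less warm-up computation on clean channels.

```python
# Choose the sliding-window learning period from the estimated SIR, then
# lay out the window schedule for one decoding pass.
def learning_period(sir_db: float) -> int:
    """Shorter warm-up on clean channels, longer on noisy ones."""
    if sir_db >= 10.0:
        return 8
    if sir_db >= 5.0:
        return 16
    return 32

def window_schedule(block_len: int, window: int, sir_db: float):
    """(start, end, warmup_end): recursion warms up on (end, warmup_end]."""
    L = learning_period(sir_db)
    return [(s, min(s + window, block_len), min(s + window + L, block_len))
            for s in range(0, block_len, window)]

print(window_schedule(block_len=128, window=32, sir_db=12.0))
```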

  • PDF

Improved SE SD Algorithm based on MMSE for MIMO Detection

  • Cho, Hye-Min; Park, Soon-Chul; Han, Dong-Seog
    • The Journal of Korean Institute of Communications and Information Sciences, v.35 no.3A, pp.231-237, 2010
  • Multiple-input multiple-output (MIMO) systems improve the transmission rate in proportion to the number of antennas; however, the computational complexity of detection at the receiver is very high. Sphere decoding (SD) is a detection algorithm with reduced complexity. In this paper, an improved Schnorr-Euchner SD (SE SD) is proposed based on the minimum mean square error (MMSE) and Euclidean distance criteria, without additional complexity.
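
A sketch of the MMSE-based initialization alone, under assumptions of my own (BPSK symbols, a real-valued model): the quantized MMSE solution gives an initial candidate whose Euclidean distance seeds the sphere radius, so the Schnorr-Euchner enumeration starts tight and prunes early. The SD tree search itself is assumed to exist elsewhere.

```python
# MMSE preprocessing for sphere decoding: initial candidate and radius.
import numpy as np

def mmse_initial_radius(y, H, noise_var, symbols=np.array([-1.0, 1.0])):
    """Quantized MMSE estimate and its squared distance as starting radius."""
    n = H.shape[1]
    W = np.linalg.solve(H.T @ H + noise_var * np.eye(n), H.T)  # MMSE filter
    x_soft = W @ y
    # slice each entry to the nearest constellation point
    x_hat = symbols[np.argmin(np.abs(x_soft[:, None] - symbols[None, :]),
                              axis=1)]
    radius_sq = float(np.sum((y - H @ x_hat) ** 2))
    return x_hat, radius_sq
```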

Low-Complexity Triple-Error-Correcting Parallel BCH Decoder

  • Yeon, Jaewoong; Yang, Seung-Jun; Kim, Cheolho; Lee, Hanho
    • JSTS: Journal of Semiconductor Technology and Science, v.13 no.5, pp.465-472, 2013
  • This paper presents a low-complexity triple-error-correcting parallel Bose-Chaudhuri-Hocquenghem (BCH) decoder architecture and its efficient design techniques. A novel modified step-by-step (m-SBS) decoding algorithm, which significantly reduces computational complexity, is proposed for the parallel BCH decoder. In addition, a determinant calculator and an error locator are proposed to reduce hardware complexity; specifically, a shared syndrome factor calculator and a self-error-detection scheme are proposed. A multi-channel multi-parallel BCH decoder using the proposed m-SBS algorithm and design techniques has considerably less hardware complexity and latency than one using conventional algorithms. For a 16-channel 4-parallel (1020, 990) BCH decoder over GF(2^12), the proposed design reduces complexity by at least 23% compared to conventional architectures.
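
The step-by-step family decodes by testing trial bit flips against the syndrome. The toy below conveys only that flavor, on a (7,4) Hamming code, keeping a flip when it clears the syndrome; the paper's m-SBS decoder instead operates over GF(2^12) and decides flips via determinant tests on syndrome matrices.

```python
# Trial-flip decoding on a short binary code: the flavor of step-by-step
# (SBS) decoding, not the paper's m-SBS algorithm.
import numpy as np

# parity-check matrix of the (7,4) Hamming code, for illustration only
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def trial_flip_decode(r, H):
    """Trial-flip each position; keep the flip that zeroes the syndrome."""
    r = r.copy()
    if not ((H @ r) % 2).any():
        return r                       # already a codeword
    for i in range(len(r)):
        r[i] ^= 1                      # trial flip
        if not ((H @ r) % 2).any():
            return r                   # the flip corrected a single error
        r[i] ^= 1                      # undo and try the next position
    return r                           # more than one error: give up here

received = np.zeros(7, dtype=np.uint8)
received[2] ^= 1                       # inject a single bit error
print(trial_flip_decode(received, H))  # -> the all-zero codeword
```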

Generalized SCAN Bit-Flipping Decoding Algorithm for Polar Code

  • Lou Chen; Guo Rui
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.4, pp.1296-1309, 2023
  • In this paper, based on the soft cancellation (SCAN) bit-flipping (SCAN-BF) algorithm, a generalized SCAN bit-flipping (GSCAN-BF-Ω) decoding algorithm is proposed, where Ω represents the number of bits flipped or corrected at the same time. The GSCAN-BF-Ω algorithm simultaneously corrects the prior information of the coded bits and flips the prior information of the unreliable information bits to improve block error rate (BLER) performance. A joint threshold scheme for the GSCAN-BF-2 decoding algorithm is then proposed to reduce the average decoding complexity by considering both the bit-channel quality and the reliability of the coded bits. Simulation results show that the GSCAN-BF-Ω decoding algorithm reduces the average decoding latency while achieving performance gains over the common multiple SCAN bit-flipping decoding algorithm, and that the GSCAN-BF-2 decoding algorithm with the joint threshold further reduces the average decoding latency by approximately 50% with only a slight performance loss compared to GSCAN-BF-2 alone.
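
The control flow behind such flip-based decoders is easy to abstract. The sketch below is my abstraction, not the paper's algorithm: `decode` and `crc_ok` are caller-supplied stand-ins for a SCAN decoder and a CRC check, and trimming the candidate list plays the role of the paper's joint threshold in cutting the average number of retries.

```python
# Generic decode / check / flip-omega-positions / retry pattern.
from itertools import combinations
import numpy as np

def flip_and_retry(decode, crc_ok, llr_ch, omega=2, max_candidates=16):
    bits, reliab = decode(llr_ch, flips=())
    if crc_ok(bits):
        return bits                               # first pass succeeded
    # least reliable decisions are the most promising flip positions
    candidates = np.argsort(np.abs(reliab))[:max_candidates]
    for flips in combinations(candidates, omega):  # flip omega bits at a time
        bits, _ = decode(llr_ch, flips=tuple(flips))
        if crc_ok(bits):
            return bits
    return bits                                    # all retries failed
```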