• Title/Summary/Keyword: codeword

A Versatile Reed-Solomon Decoder for Continuous Decoding of Variable Block-Length Codewords (가변 블록 길이 부호어의 연속 복호를 위한 가변형 Reed-Solomon 복호기)

  • 송문규;공민한
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.41 no.3
    • /
    • pp.29-38
    • /
    • 2004
  • In this paper, we present an efficient architecture for a versatile Reed-Solomon (RS) decoder which can be programmed to decode RS codes continuously with any message length k as well as any block length n. This unique feature eliminates the need to insert zeros when decoding shortened RS codes. Also, the values of the parameters n and k, and hence the error-correcting capability t, can be altered at every codeword block. The decoder permits 3-step pipelined processing based on the modified Euclid's algorithm (MEA). Since each step can be driven by a separate clock, the decoder can operate effectively as a 2-step pipeline by employing a faster clock in step 2 and/or step 3. The decoder can also be used when the input clock differs from the output clock. Each step is designed to have a structure suitable for decoding RS codes with varying block length. A new architecture for the MEA is designed for variable values of t. The operating length of the shift registers in the MEA block is shortened by one, and it can be varied according to the value of t. To maintain the throughput rate with less circuitry, the MEA block uses both a recursive technique and an over-clocking technique. The decoder can decode codewords received not only in burst mode but also in continuous mode, and its versatility makes it suitable for a wide range of applications. The adaptive RS decoder over GF($2^8$), with an error-correcting capability of up to 10, has been designed in VHDL and successfully synthesized into an FPGA chip.
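
The abstract gives only an architectural outline, so as a point of reference, below is a minimal Python sketch of the syndrome-computation step for an RS code over GF($2^8$) with programmable block length n and error-correcting capability t. The primitive polynomial (0x11D) and the root convention alpha^1 .. alpha^2t are assumptions for illustration, not details taken from the paper; the sketch only shows why a shortened codeword can be processed directly, without zero insertion, because the evaluation simply runs over however many symbols are received.

```python
# Minimal GF(2^8) syndrome computation for an RS code with programmable n and t.
# Assumptions (not from the paper): field polynomial x^8+x^4+x^3+x^2+1 (0x11D),
# code roots alpha^1 .. alpha^2t.  The received block length n = len(received)
# can be anything up to 255, so shortened codewords need no zero padding.

PRIM = 0x11D                      # assumed primitive polynomial of GF(2^8)

EXP = [0] * 512                   # antilog table (doubled to avoid a mod 255)
LOG = [0] * 256                   # log table
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiply two GF(2^8) elements using the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, t):
    """Compute the 2t syndromes S_i = r(alpha^i), i = 1..2t, for any n <= 255."""
    synd = []
    for i in range(1, 2 * t + 1):
        s = 0
        for coef in received:     # Horner evaluation of r(x) at alpha^i
            s = gf_mul(s, EXP[i]) ^ coef
        synd.append(s)
    return synd

# Example: a 32-symbol shortened block with t = 5 (all-zero syndromes would
# indicate an error-free codeword).
print(syndromes([0] * 22 + list(range(1, 11)), t=5))
```

Since only the loop bounds depend on n and t, both can in principle change from one block to the next, which is the per-block reconfigurability the decoder above provides in hardware.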

Upper Bounds for the Performance of Turbo-Like Codes and Low Density Parity Check Codes

  • Chung, Kyu-Hyuk;Heo, Jun
    • Journal of Communications and Networks
    • /
    • v.10 no.1
    • /
    • pp.5-9
    • /
    • 2008
  • In recent years, researchers have investigated many upper bound techniques on the error probability of maximum likelihood (ML) decoding for turbo-like codes and low density parity check (LDPC) codes with long codeword block sizes, since the problem is trivial for short block sizes. Previous efforts, such as the recently proposed simple bound technique [20], developed upper bounds for LDPC codes and turbo-like codes using ensemble codes or the uniformly interleaved assumption; such bounds describe the performance averaged over all ensemble codes or all interleavers. Another previous effort [21] obtained an upper bound for a turbo-like code with a particular interleaver using a truncated union bound, which requires knowledge of the minimum Hamming distance and the number of codewords at that distance. However, it gives a reliable bound only in the error-floor region where the minimum Hamming distance dominates, i.e., at high signal-to-noise ratios. Therefore, an upper bound on the ML decoding performance of a turbo-like code with a particular interleaver or an LDPC code with a particular parity check matrix currently cannot be calculated exactly because of its heavy complexity, and only average bounds for ensemble codes can be obtained under the uniform interleaver assumption. In this paper, we propose a new bounding technique on the ML decoding performance of a turbo-like code with a particular interleaver and an LDPC code with a particular parity check matrix using ML-estimated weight distributions, and we also show that practical iterative decoding is suboptimal in the ML sense, since the simulated iterative decoding performance is worse than the proposed upper bound and hence worse than the ML decoding performance. To show this point, we compare the simulation results with the proposed upper bound and previous bounds. The proposed technique is based on the simple bound with an approximate weight distribution including several exact smallest-distance terms, rather than on the ensemble distribution or the uniform interleaver assumption. It also yields a tighter upper bound than any previous bound technique for a turbo-like code with a particular interleaver and an LDPC code with a particular parity check matrix.
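
For context on the truncated union bound that the abstract contrasts with, here is a minimal Python sketch: given the smallest-distance terms A_d of a code's weight distribution, it sums A_d * Q(sqrt(2 d R Eb/N0)) for BPSK over AWGN. The weight-distribution values below are made-up placeholders, not data from the paper.

```python
# Truncated union bound on ML word-error probability for a linear code with
# rate R and (partial) weight distribution {A_d}, BPSK over AWGN.

import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def truncated_union_bound(weight_dist, rate, ebno_db):
    """Sum A_d * Q(sqrt(2 * d * R * Eb/N0)) over the known distance terms."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(a_d * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, a_d in weight_dist.items())

# Hypothetical smallest-distance terms of some rate-1/2 code (placeholders).
weight_dist = {12: 3, 14: 10, 16: 42}
for snr_db in (1.0, 2.0, 3.0):
    print(snr_db, truncated_union_bound(weight_dist, rate=0.5, ebno_db=snr_db))
```

As the abstract notes, such a truncation is tight only where the smallest distances dominate, i.e., at high SNR.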

Bit Interleaver Design of Ultra High-Order Modulations in DVB-T2 for UHDTV Broadcasting (DVB-T2 기반의 UHDTV 방송을 위한 초고차 성상 변조방식의 비트 인터리버 설계)

  • Kang, In-Woong;Kim, Youngmin;Seo, Jae Hyun;Kim, Heung Mook;Kim, Hyoung-Nam
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.4
    • /
    • pp.195-205
    • /
    • 2014
  • Ultra-high definition television (UHDTV) has been considered as a next-generation broadcasting service. However, conventional digital terrestrial transmission (DTT) systems cannot provide the transmission data rate required for UHDTV, and thus adopting ultra-high-order constellations, such as 4096-QAM, in conventional DTT systems has been studied. In particular, when an ultra-high-order constellation is adopted in the second-generation digital video broadcasting-terrestrial (DVB-T2) system, the unequal error protection (UEP) properties of the error-correction codeword and the ultra-high-order constellation should be properly matched by the bit mapper in order to enhance the decoding performance. Because the long codeword makes the design of the bit mapper computationally heavy, DVB-T2 divides it into cascaded blocks, the bit interleaver and the bit-to-cell DEMUX, and there has been considerable research on each block. However, there are few published studies on the design methodology of the bit interleaver. In this respect, this paper proposes a design methodology for the bit interleaver and presents bit interleavers for 1024-QAM and 4096-QAM based on the proposed design algorithm. The newly designed interleavers improve the decoding performance of the error correction code by up to 0.6 dB in SNR over both AWGN and random fading channels.
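
The abstract does not spell out the interleaver structure, so the Python sketch below only illustrates the generic column-row (block) bit interleaving idea that DVB-T2-style bit interleavers build on, with arbitrary dimensions; the actual DVB-T2 column-twist parameters and the proposed 1024/4096-QAM mappings are not reproduced here.

```python
# Generic block bit interleaver: write bits row by row, read them column by
# column.  This is a simplified stand-in, not the DVB-T2 interleaver itself.

def block_bit_interleave(bits, n_cols):
    """Interleave by writing into an n_cols-wide array row-wise and reading
    column-wise.  len(bits) must be a multiple of n_cols."""
    assert len(bits) % n_cols == 0
    n_rows = len(bits) // n_cols
    return [bits[r * n_cols + c] for c in range(n_cols) for r in range(n_rows)]

def block_bit_deinterleave(bits, n_cols):
    """Inverse operation: write column-wise, read row-wise."""
    n_rows = len(bits) // n_cols
    return [bits[c * n_rows + r] for r in range(n_rows) for c in range(n_cols)]

codeword = list(range(24))                       # stand-in for LDPC coded bits
tx = block_bit_interleave(codeword, n_cols=12)   # 12 bits per cell for 4096-QAM
assert block_bit_deinterleave(tx, n_cols=12) == codeword
```

The design problem the paper addresses is how to choose such a mapping so that the more reliable constellation bit levels carry the less protected codeword bits.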

Design of Multiple-symbol Lookup Table for Fast Thumbnail Generation in Compressed Domain (압축영역에서 빠른 축소 영상 추출을 위한 다중부호 룩업테이블 설계)

  • Yoon, Ja-Cheon;Sull, Sanghoon
    • Journal of Broadcast Engineering
    • /
    • v.10 no.3
    • /
    • pp.413-421
    • /
    • 2005
  • As HDTV becomes increasingly widespread, video browsing, visual bookmark, and picture-in-picture capabilities are among the most frequently required features of modern set-top boxes (STBs) and digital video recorders (DVRs). These features typically employ reduced-size versions of video frames, or thumbnail images. Most thumbnail generation approaches generate DC images directly from a compressed video stream. A discrete cosine transform (DCT) coefficient whose frequency is zero in both dimensions of a compressed block is called a DC coefficient and can be used directly to construct a DC image. If a block has been encoded with field DCT, a few AC coefficients are needed in addition to the DC coefficient to generate the DC image. However, the bit length of a codeword coded with variable length coding (VLC) cannot be determined until the previous VLC codeword has been decoded; thus all codewords must be fully decoded regardless of whether they are needed for DC image generation. In this paper, we propose a method for fast DC image generation from an I-frame using a multiple-symbol lookup table (mLUT). The experimental results show that the method using the mLUT greatly improves performance by reducing the lookup count by 50%.
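
As a rough illustration of the multiple-symbol lookup table idea, here is a Python sketch with a toy prefix-free code: each table access returns every symbol fully contained in a fixed-width bit window plus the number of bits consumed, so several VLC symbols are decoded per lookup. The toy codebook and window width are assumptions chosen for illustration; they are not the MPEG VLC tables used in the paper.

```python
# Toy multiple-symbol lookup table (mLUT): one table access decodes all VLC
# symbols fully contained in a fixed-width bit window.

TOY_VLC = {'0': 'a', '10': 'b', '11': 'c'}   # assumed prefix-free toy codebook
WINDOW = 4                                   # bits inspected per table lookup

def build_mlut(vlc, window):
    """For every window-bit pattern, pre-decode all complete symbols and record
    how many bits they consume."""
    mlut = {}
    for v in range(1 << window):
        bits = format(v, '0{}b'.format(window))
        symbols, pos = [], 0
        while True:
            for code, sym in vlc.items():
                if pos + len(code) <= window and bits.startswith(code, pos):
                    symbols.append(sym)
                    pos += len(code)
                    break
            else:
                break                        # no further complete codeword fits
        mlut[bits] = (symbols, pos)
    return mlut

def decode(bitstring, mlut, window):
    """Decode a VLC bitstream window by window using the mLUT."""
    out, pos = [], 0
    while len(bitstring) - pos >= window:
        symbols, used = mlut[bitstring[pos:pos + window]]
        out.extend(symbols)
        pos += used
    while pos < len(bitstring):              # tail shorter than one window
        for code, sym in TOY_VLC.items():
            if bitstring.startswith(code, pos):
                out.append(sym)
                pos += len(code)
                break
        else:
            break                            # incomplete codeword at the end
    return out

mlut = build_mlut(TOY_VLC, WINDOW)
print(decode('0101101100', mlut, WINDOW))    # -> ['a', 'b', 'c', 'a', 'c', 'a', 'a']
```

Fewer table accesses per decoded symbol is what gives the reported reduction in lookup count.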

Consecutive transition limited code for high-density magnetic recording channel (고밀도 자기기록 채널을 위한 연속적인 천이의 제한을 갖는 코드)

  • 이주현;이재진
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1177-1181
    • /
    • 2003
  • A modulation code that limits the consecutive transition length is a type of channel code for high-density magnetic recording channels. When the code sequence has two or fewer successive transitions, the detection performance of the channel outputs can be improved; however, the code rate is reduced considerably. We present a rate-7/8 run-length limited (RLL) code in which the consecutive transition length within each codeword is limited to 2 (j=2), while j is allowed to be 3 where codewords are concatenated. In addition, the number of consecutive zeros in the proposed code is limited to 7 (k=7).
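
The abstract states the constraints but not the 7/8 mapping table, so the Python sketch below only checks a candidate NRZI code sequence against run-length limits of the j/k type, assuming '1' marks a transition and '0' no transition; the j = 3 relaxation at codeword boundaries would be checked on the concatenated stream in the same way.

```python
# Check a binary NRZI sequence against j (max consecutive transitions) and
# k (max consecutive non-transitions) run-length constraints.

def max_run(bits, symbol):
    """Length of the longest run of `symbol` in the bit string."""
    best = run = 0
    for b in bits:
        run = run + 1 if b == symbol else 0
        best = max(best, run)
    return best

def satisfies_rll(bits, j, k):
    """True if at most j consecutive '1's (transitions) and at most
    k consecutive '0's appear in the sequence."""
    return max_run(bits, '1') <= j and max_run(bits, '0') <= k

print(satisfies_rll('10110010', j=2, k=7))   # True: runs of ones <= 2, zeros <= 7
print(satisfies_rll('11101000', j=2, k=7))   # False: three consecutive ones
```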

An Efficient Dynamic Entropy Coding by using Multiple Codeword in H.264/AVC (다중 부호어를 이용한 효율적인 H.264/AVC 동적 부호화 방법)

  • 백성학;문용호;김재호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.8C
    • /
    • pp.1055-1061
    • /
    • 2004
  • In this paper, we propose an efficient dynamic coding scheme using multiple codewords in H.264/AVC entropy coding. The exponential Golomb (Exp-Golomb) codewords used in H.264/AVC do not sufficiently reflect the symbol distributions of the combined syntax element in (7) because of their static probability distribution characteristics. The multiple codewords used in this paper, however, have different statistical characteristics. We propose a dynamic coding scheme that selects among the multiple codewords to encode the combined syntax element according to the given image sequence. Simulation results show that the proposed scheme outperforms the existing method in (7) in compression efficiency without loss of quality.
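
For reference, the static code the abstract refers to is the 0th-order exponential Golomb code of H.264/AVC; a minimal Python encoder/decoder pair is sketched below. The paper's actual contribution, switching dynamically among several codeword tables, is not reproduced here.

```python
# 0th-order Exp-Golomb code as used for many H.264/AVC syntax elements:
# M leading zeros followed by the (M+1)-bit binary value of code_num + 1.

def exp_golomb_encode(code_num):
    """Encode a non-negative integer as a 0th-order Exp-Golomb bit string."""
    value = code_num + 1
    m = value.bit_length() - 1
    return '0' * m + format(value, 'b')

def exp_golomb_decode(bits):
    """Decode one Exp-Golomb codeword from the front of a bit string.
    Returns (code_num, remaining_bits)."""
    m = 0
    while bits[m] == '0':
        m += 1
    value = int(bits[m:2 * m + 1], 2)
    return value - 1, bits[2 * m + 1:]

for v in range(6):
    cw = exp_golomb_encode(v)
    assert exp_golomb_decode(cw) == (v, '')
    print(v, cw)    # 0->'1', 1->'010', 2->'011', 3->'00100', ...
```

The codeword lengths grow with the symbol value, which is why a single static table cannot match every symbol distribution equally well.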

An Efficient Motion Vector Coding Algorithm for the Video Sequence with Slow Motion (저속 동영상에 효과적인 움직임 벡터 부호화 알고리듬)

  • Moon Yong ho;Kim Young kuk;Chang Jung hwan;Kim Jae ho
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.4C
    • /
    • pp.269-275
    • /
    • 2005
  • In this paper, we propose a new and efficient motion vector coding algorithm for video sequences with slow motion. In the proposed algorithm, the amount of motion in a given video sequence is determined by a Skip_rate parameter. The motion difference for slow motion is encoded with a combined codeword generated from the conventional codewords. The simulation results show that the proposed algorithm achieves approximately 15% bit savings compared to conventional methods. Moreover, the proposed algorithm requires no additional memory or calculations for statistical observation.

Performance Analysis of Coded FSK System for Multi-hop Wireless Sensor Networks (멀티 홉 무선 센서 네트워크를 위한 부호화된 FSK 시스템의 성능 해석)

  • Oh, Kyu-Tae;Roh, Jae-Sung
    • Journal of Advanced Navigation Technology
    • /
    • v.11 no.4
    • /
    • pp.408-414
    • /
    • 2007
  • Research advances in micro-sensor devices and wireless network technology have made it possible to develop energy-efficient, low-cost wireless sensor nodes. In this paper, a forward error control (FEC) scheme for low power consumption and good bit error rate (BER) performance during transmission is proposed for a multi-hop wireless sensor network based on an FSK modem. Since the FEC technique requires extra processing power for encoding and decoding, complex functions need to be built into the sensor node. The probability of receiving a correct bit and a correct codeword when relaying a frame over h nodes to the sink node is calculated as a function of the channel parameters, the number of hops, the number of bits transmitted, and the distances between the nodes.
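
As an illustration of the kind of end-to-end calculation the abstract describes, here is a minimal Python sketch under assumed conditions: noncoherent BFSK on AWGN (per-hop bit error rate 0.5·exp(-Eb/2N0)), a t-error-correcting (n, k) block code decoded at every relay, and independent hops. These modelling choices, and the numbers in the example, are assumptions rather than details from the paper.

```python
# End-to-end codeword success probability for decode-and-forward relaying.

import math

def bfsk_ber(ebno_db):
    """Noncoherent BFSK bit error rate on AWGN (assumed channel model)."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.exp(-ebno / 2.0)

def codeword_success_one_hop(n, t, p):
    """Probability that a length-n codeword has at most t bit errors."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(t + 1))

def codeword_success_multihop(n, t, p, hops):
    """Decode-and-forward: the codeword must be recovered on every hop."""
    return codeword_success_one_hop(n, t, p) ** hops

p = bfsk_ber(ebno_db=8.0)
print(p, codeword_success_multihop(n=63, t=3, p=p, hops=4))
```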

Lossless VQ Indices Compression Based on the High Correlation of Adjacent Image Blocks

  • Wang, Zhi-Hui;Yang, Hai-Rui;Chang, Chin-Chen;Horng, Gwoboa;Huang, Ying-Hsuan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.8
    • /
    • pp.2913-2929
    • /
    • 2014
  • Traditional vector quantization (VQ) schemes encode image blocks as VQ indices, where each image block is highly similar to the codeword of its VQ index, so the method compresses an image while maintaining good image quality. This paper proposes a novel lossless VQ index compression algorithm to further compress the VQ index table. Our scheme exploits the high correlation between adjacent image blocks by searching the neighboring indices for an index identical to the current encoding index. To increase compression efficiency, codewords in the codebook are sorted according to the degree of similarity of adjacent VQ indices to generate a state codebook used to find an index matching the current encoding index. Note that repeated indices, both on the search path and in the state codebooks, are excluded to increase the probability of matching the current encoding index. Experimental results illustrate the superiority of our scheme over other index-domain compression schemes.
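
The abstract describes matching the current VQ index against already-coded neighboring indices and signalling a match with a short code. The Python sketch below is a heavily simplified version of that idea (left and upper neighbors only, abstract tokens instead of real bit codes); the paper's search path, sorted state codebooks, and bit budget are not reproduced.

```python
# Simplified lossless VQ-index compression: reuse the left or upper neighbor
# when it matches the current index, otherwise transmit the index itself.

def compress_indices(index_grid):
    """Emit ('L',) if the index equals its left neighbor, ('U',) if it equals
    its upper neighbor, otherwise ('RAW', index)."""
    out = []
    for r, row in enumerate(index_grid):
        for c, idx in enumerate(row):
            if c > 0 and row[c - 1] == idx:
                out.append(('L',))
            elif r > 0 and index_grid[r - 1][c] == idx:
                out.append(('U',))
            else:
                out.append(('RAW', idx))
    return out

def decompress_indices(tokens, rows, cols):
    """Rebuild the index grid losslessly from the token stream."""
    grid = [[0] * cols for _ in range(rows)]
    it = iter(tokens)
    for r in range(rows):
        for c in range(cols):
            tok = next(it)
            if tok[0] == 'L':
                grid[r][c] = grid[r][c - 1]
            elif tok[0] == 'U':
                grid[r][c] = grid[r - 1][c]
            else:
                grid[r][c] = tok[1]
    return grid

grid = [[5, 5, 7], [5, 9, 7], [5, 9, 9]]
tokens = compress_indices(grid)
assert decompress_indices(tokens, 3, 3) == grid
```

The more often neighboring blocks share an index, the more tokens collapse to short flags, which is where the compression gain comes from.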