• Title/Abstract/Keyword: computational complexity reduction

Search results: 258 items

Matrix Decomposition for Low Computational Complexity in Orthogonal Precoding of N-continuous Schemes for Sidelobe Suppression of OFDM Signals

  • Kawasaki, Hikaru; Matsui, Takahiro; Ohta, Masaya; Yamashita, Katsumi
    • IEIE Transactions on Smart Processing and Computing, Vol. 6, No. 2, pp. 117-123, 2017
  • N-continuous orthogonal frequency division multiplexing (OFDM) is a precoding method for sidelobe suppression of OFDM signals: it seamlessly connects successive OFDM symbols up to a high-order derivative, which makes it well suited to suppressing out-of-band radiation. However, it severely degrades the error rate as the order of the continuous derivative increases. Two orthogonal precoding schemes for N-continuous OFDM have been proposed to achieve an ideal error rate while maintaining sidelobe suppression performance; however, the large precoder matrices in both schemes make precoding and decoding computationally very expensive. This paper proposes decomposing the large precoder matrices of the orthogonal precoding schemes in order to reduce the computational complexity. Numerical experiments show that the proposed method drastically reduces computational complexity without any performance degradation.
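The abstract does not give the decomposition itself, but the complexity argument is easy to see in miniature. The sketch below assumes the orthogonal precoder has the common projection form P = I - A^H (A A^H)^{-1} A for a small constraint matrix A (K << N); applying the low-rank correction directly then costs O(NK) per symbol instead of the O(N^2) of multiplying by the full precoder. The names and the projection form are illustrative assumptions, not the paper's actual construction.

```python
# Hedged sketch: how factoring a large, structured precoder can cut complexity.
# The constraint matrix A (K x N, K << N) and the projection form
# P = I - A^H (A A^H)^{-1} A are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 8                      # subcarriers, continuity constraints
A = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))

# Naive approach: build the full N x N precoder once, then O(N^2) per symbol.
P = np.eye(N) - A.conj().T @ np.linalg.solve(A @ A.conj().T, A)

# Decomposed approach: never form P; apply the low-rank correction directly,
# which costs O(NK) per symbol instead of O(N^2).
G = np.linalg.cholesky(A @ A.conj().T)   # factor the small K x K Gram matrix

def precode(d):
    t = np.linalg.solve(G.conj().T, np.linalg.solve(G, A @ d))
    return d - A.conj().T @ t

d = rng.standard_normal(N) + 1j * rng.standard_normal(N)
assert np.allclose(P @ d, precode(d))    # identical output, far fewer multiplies
```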

Complexity-Reduced Algorithms for LDPC Decoder for DVB-S2 Systems

  • Choi, Eun-A; Jung, Ji-Won; Kim, Nae-Soo; Oh, Deock-Gil
    • ETRI Journal, Vol. 27, No. 5, pp. 639-642, 2005
  • This paper proposes two complexity-reduced algorithms for a low-density parity check (LDPC) decoder. First, sequential decoding using a partial group is proposed. It has the same hardware complexity and requires fewer iterations, with little performance loss. The amount of performance loss can be set by the designer, based on a trade-off with the desired reduction in complexity. Second, an early detection method for reducing the computational complexity is proposed. Using a confidence criterion, some bit nodes and check-node edges are detected early during decoding. Once detected, these edges require no further iterative updates; early detection thus reduces the computational complexity.
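As a rough illustration of the early-detection idea, the sketch below freezes bit nodes whose log-likelihood ratio (LLR) exceeds a confidence threshold and skips them in later iterations. The threshold test and the update_fn callback are illustrative stand-ins; the paper's actual confidence criterion is not given in the abstract.

```python
# Hedged sketch of early detection in iterative LDPC decoding: once a bit
# node's LLR is confidently large, stop updating it and skip its edges.
import numpy as np

def decode_with_early_detection(llr_in, update_fn, max_iter=50, threshold=15.0):
    """llr_in: channel LLRs (numpy array); update_fn: one full message-passing
    iteration, taking (llr, active_mask) and returning the updated LLRs."""
    llr = llr_in.copy()
    active = np.ones(llr.size, dtype=bool)       # bits still being decoded
    for _ in range(max_iter):
        llr = update_fn(llr, active)
        newly_confident = active & (np.abs(llr) > threshold)
        active &= ~newly_confident               # detected early: freeze them
        if not active.any():                     # all bits decided
            break
    return (llr < 0).astype(int)                 # hard decision
```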


Modified Cubic Convolution Interpolation for Low Computational Complexity

  • Jun, Young-Hyun; Yun, Jong-Ho; Choi, Myung-Ryul
    • Korean Information Display Society Conference Proceedings, 6th International Meeting on Information Display, pp. 1259-1262, 2006
  • In this paper, we propose a modified cubic convolution interpolation for the enlargement or reduction of digital images using pixel difference values. The proposed method has low complexity: it requires seven weight multiplications to compute one pixel of the scaled image, compared with sixteen for conventional cubic convolution interpolation. We use the linear function of the cubic convolution and the pixel difference value to select the interpolation method. The proposed method is compared with the conventional one in terms of computational complexity and image quality. The simulation results show that the proposed method has lower computational complexity than cubic convolution interpolation.
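For context, the sketch below shows a standard Keys cubic convolution kernel alongside one plausible reading of the selection rule: use cheap linear interpolation where neighboring pixels are similar, and the full cubic kernel elsewhere. The difference threshold and the exact switching rule are assumptions; the paper's weight simplification may differ.

```python
# Hedged sketch: switch between linear and cubic interpolation based on a
# local pixel-difference test. Only a possible reading of the abstract.
import numpy as np

def keys_cubic(s, a=-0.5):
    """Standard Keys cubic convolution kernel weight at distance s."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def interp_1d(p, x, diff_threshold=8):
    """Interpolate at fractional position x from numpy samples p (1 <= x <= len(p)-3)."""
    i = int(np.floor(x))
    f = x - i
    window = p[i - 1:i + 3]                      # the four surrounding samples
    if np.ptp(window) < diff_threshold:          # flat region: linear is enough
        return (1 - f) * p[i] + f * p[i + 1]     # 2 weight multiplications
    w = [keys_cubic(f + 1), keys_cubic(f), keys_cubic(f - 1), keys_cubic(f - 2)]
    return float(np.dot(w, window))              # 4 weight multiplications
```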


A Hierarchical Mode Decision Method for H.264 Intra Image Coding

  • Liu, Jiantan; Yoo, Kook-Yeol
    • Korea Information Processing Society Conference Proceedings, 2007 Spring Conference, pp. 297-300, 2007
  • Due to its impressive compression performance, the H.264 video coder is prominent in the video communications industry, for applications such as DMB (Digital Multimedia Broadcasting) and PMP (Portable Multimedia Player) devices. The main bottleneck in using the H.264 coder lies in its computational complexity: it is about five times more complex than the market-leading MPEG-4 Simple Profile codec. In this paper, we propose a hierarchical mode decision method for intraframe coding to reduce the computational complexity of the encoder. By determining the mode group early, the proposed algorithm can skip computationally demanding calculations in the mode decision. The proposed algorithm consists of three steps: 16×16 mode decision, 4×4 mode-group decision, and a final mode decision within the selected mode group. The simulation results show that the proposed algorithm achieves a 20% to 50% reduction in computational complexity compared with the conventional algorithm.
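The three-step hierarchy can be sketched as follows; the cost callback, the mode groupings, and the early-skip threshold are illustrative placeholders rather than the paper's decision rules.

```python
# Hedged sketch of the three-step hierarchical intra mode decision.
def hierarchical_intra_mode(block, cost, skip_threshold):
    # Step 1: try the four 16x16 intra modes first.
    best16 = min(range(4), key=lambda m: cost(block, size=16, mode=m))
    if cost(block, size=16, mode=best16) < skip_threshold:
        return ('16x16', best16)        # smooth block: skip the 4x4 search entirely

    # Step 2: pick a 4x4 mode *group* (e.g., vertical-like, horizontal-like,
    # DC/diagonal) instead of testing all nine 4x4 modes; each group is probed
    # by one representative mode. The groupings here are illustrative.
    groups = {'vertical': [0, 3, 7], 'horizontal': [1, 6, 8], 'dc_diag': [2, 4, 5]}
    best_group = min(groups, key=lambda g: cost(block, size=4, mode=groups[g][0]))

    # Step 3: full mode decision only inside the selected group.
    best4 = min(groups[best_group], key=lambda m: cost(block, size=4, mode=m))
    return ('4x4', best4)
```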

Study on the Fast Nearest-Neighbor Searching Classifier Using Distance Approximation

  • 이일완; 채수익
    • Journal of the Institute of Electronics Engineers of Korea C, Vol. 34C, No. 2, pp. 71-79, 1997
  • In this paper, we propose a new nearest-neighbor classifier with reduced computational complexity in the search process. In the proposed classifier, the classes are divided into two sets: reference and non-reference sets. The classifier reduces the computational requirement by approximating the distance between the input and a class using information about the distances among the classes; it calculates only the distances between the input and the reference classes. We convert a given classifier into its corresponding RCC (reduced computational complexity) classifier, which trades a small increase in misclassification probability for the reduced complexity. We designed RCC classifiers for the recognition of digits from the NIST database and obtained an RCC classifier with a 60% reduction in computational complexity at the cost of a 0.5% increase in misclassification probability.
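One common way to realize such distance approximation is a triangle-inequality bound against precomputed class-to-class distances, as in the hedged sketch below; whether the paper uses exactly this bound is not stated in the abstract.

```python
# Hedged sketch: exact distances only to reference classes; other classes are
# skipped whenever a triangle-inequality lower bound cannot beat the best so far.
import numpy as np

def rcc_classify(x, prototypes, ref_ids):
    """prototypes: one prototype vector per class; ref_ids: reference class indices."""
    d_ref = {r: np.linalg.norm(x - prototypes[r]) for r in ref_ids}
    best, best_d = min(d_ref.items(), key=lambda kv: kv[1])
    for c, proto in enumerate(prototypes):
        if c in d_ref:
            continue
        # Lower bound on d(x, c) from any reference r: |d(x, r) - d(r, c)|.
        # d(r, c) would be precomputed offline; it is computed inline for brevity.
        bound = max(abs(d_ref[r] - np.linalg.norm(prototypes[r] - proto))
                    for r in ref_ids)
        if bound >= best_d:
            continue                    # cannot beat the current best: skip
        d = np.linalg.norm(x - proto)   # exact distance only when necessary
        if d < best_d:
            best, best_d = c, d
    return best
```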


ANALYSIS OF THE UPPER BOUND ON THE COMPLEXITY OF LLL ALGORITHM

  • Park, Yunju; Park, Jaehyun
    • Journal of the Korean Society for Industrial and Applied Mathematics, Vol. 20, No. 2, pp. 107-121, 2016
  • We analyze the complexity of the LLL algorithm, invented by Lenstra, Lenstra, and Lovász, a well-known lattice reduction (LR) algorithm previously known to have a complexity of $O(N^4 \log B)$ multiplications (or $O(N^5 (\log B)^2)$ bit operations) for a lattice basis matrix $H \in \mathbb{R}^{M \times N}$, where $B$ is the maximum squared norm among the columns of $H$. This implies that the complexity of the lattice reduction algorithm depends only on the matrix size and the lattice basis norm. However, the structure of a given lattice matrix (i.e., the correlation among its columns), which is usually measured by its condition number or determinant, can also affect the computational complexity of the LR algorithm. In this paper, to see how the matrix structure affects the LLL algorithm's complexity, we derive a tighter upper bound on the complexity of the LLL algorithm in terms of the condition number and determinant of a given lattice matrix. We also analyze the complexities of LLL updating/downdating schemes using the proposed upper bound.
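For reference, the classical bound quoted in the abstract can be written out explicitly (the paper's tighter, structure-aware bound is not reproduced here):

```latex
% Classical LLL complexity for a basis H in R^{M x N},
% with B the maximum squared column norm:
\[
  C_{\mathrm{LLL}} = O\!\left(N^{4}\log B\right)\ \text{multiplications},
  \quad\text{or}\quad
  O\!\left(N^{5}(\log B)^{2}\right)\ \text{bit operations},
  \qquad B = \max_{i}\,\lVert \mathbf{h}_{i}\rVert^{2}.
\]
```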

Two New Types of Candidate Symbol Sorting Schemes for Complexity Reduction of a Sphere Decoder

  • 전은성; 김요한; 김동구
    • Journal of the Korea Institute of Communication Sciences, Vol. 32, No. 9C, pp. 888-894, 2007
  • The computational complexity of a sphere decoder (SD) is conventionally reduced by a decoding-order scheme that sorts candidate symbols in ascending order of Euclidean distance from the output of a zero-forcing (ZF) receiver. However, since the ZF output may not be a reliable sorting reference, we propose two sorting schemes that allow faster decoding. The first uses the lattice points newly found in the previous search round instead of the ZF output (Type I). Since these lattice points are closer to the received signal than the ZF output, they serve as a more reliable sorting reference for finding the maximum likelihood (ML) solution. The second scheme sorts candidate symbols in descending order of the number of candidate symbols in the following layer, called child symbols (Type II). Both proposed sorting schemes can be combined with layer sorting for further complexity reduction. Through simulation, the Type I and Type II sorting schemes were found to provide 12% and 20% complexity reductions, respectively, over conventional sorting schemes. When combined with layer sorting, Type I and Type II provide an additional 10-15% complexity reduction while maintaining detection performance.
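The two orderings reduce to one-line sorts once the per-layer bookkeeping is in place; the sketch below shows only that sorting step, with made-up inputs (scalar complex candidates, a reference lattice point, a child-count table). The SD tree search itself is omitted.

```python
# Hedged sketch of the two candidate-ordering rules from the abstract.

def sort_type1(candidates, reference):
    """Type I: order candidate symbols by distance to the most recently
    found lattice point (reference) instead of the zero-forcing output."""
    return sorted(candidates, key=lambda s: abs(s - reference))

def sort_type2(candidates, child_count):
    """Type II: order candidate symbols by the number of admissible child
    symbols in the following layer, in descending order."""
    return sorted(candidates, key=lambda s: child_count[s], reverse=True)
```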

Design of M-Channel IIR Uniform DFT Filter Banks Using Recursive Digital Filters

  • Dehghani, M.J.; Aravind, R.; Prabhu, K.M.M.
    • ETRI Journal, Vol. 25, No. 5, pp. 345-355, 2003
  • In this paper, we propose a method for designing a class of M-channel, causal, stable, perfect-reconstruction, infinite impulse response (IIR), parallel uniform discrete Fourier transform (DFT) filter banks. It is based on a structure previously proposed by Martinez et al. [1] for IIR digital filter design for sampling-rate reduction. The proposed filter bank has a modular structure and is therefore very well suited to VLSI implementation. Moreover, the structure is more efficient in terms of computational complexity than the most general IIR DFT filter bank, reducing computational complexity by more than 50% in both the critically sampled and oversampled cases. In the polyphase oversampled DFT filter bank case, we obtain flexible stop-band attenuation, which the proposed algorithm also accounts for.
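The source of the saving is the standard polyphase/DFT factorization: M branch filters run at the decimated rate, and a single M-point DFT mixes their outputs into all M channels at once. The sketch below assumes a critically sampled, type-1 polyphase analysis bank with per-branch IIR coefficients supplied by the caller; the indexing convention and branch filters are illustrative, not the design of [1].

```python
# Hedged sketch of a critically sampled uniform DFT analysis filter bank.
import numpy as np
from scipy.signal import lfilter

def uniform_dft_analysis(x, branch_b, branch_a, M):
    """x: input signal; branch_b/branch_a: per-branch IIR coefficient lists."""
    L = len(x) // M
    # Polyphase decomposition: branch k takes every M-th sample, offset by k.
    poly = np.stack([x[(M - k) % M::M][:L] for k in range(M)])
    # Each branch filter runs at the decimated rate fs/M.
    filtered = np.stack([lfilter(branch_b[k], branch_a[k], poly[k])
                         for k in range(M)])
    # One M-point DFT across branches yields all M channel signals at once,
    # which is where the saving over M independent full-rate filters comes from.
    return np.fft.fft(filtered, axis=0)
```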


Adaptive Motion Estimation Based on Spatio-temporal Correlations

  • 김동욱; 김진태; 최종수
    • Journal of the Korea Institute of Communication Sciences, Vol. 21, No. 5, pp. 1109-1122, 1996
  • In general, moving images contain various motion components, ranging from static objects and background to fast-moving objects. To extract accurate motion parameters, we must account for these various motions, which requires a wide search region in motion estimation. The wide search, however, incurs high computational complexity. If we have some knowledge of the motion direction and magnitude before motion estimation, we can determine the search location and search window size from that information. In this paper, we present a locally adaptive motion estimation approach that predicts a block's motion from spatio-temporally neighboring blocks and adaptively sets the search location and search window size. The paper presents a technique for reducing computational complexity while maintaining high accuracy in motion estimation. The proposed algorithm introduces forward and backward projection techniques. The search window size for a block is adaptively determined by previous motion vectors and prediction errors. Simulations show significant improvements in the quality of the motion-compensated images and in the reduction of computational complexity.
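A minimal sketch of the prediction step, assuming a median predictor over neighboring motion vectors and a window size that grows with how much the neighbors disagree; both choices are illustrative, not the paper's exact rule.

```python
# Hedged sketch: predict the search center and window size from the motion
# vectors of spatio-temporal neighbor blocks.
import numpy as np

def predict_search(neighbor_mvs, w_min=2, w_max=16):
    """neighbor_mvs: (dx, dy) pairs from spatial/temporal neighbor blocks."""
    mvs = np.asarray(neighbor_mvs, dtype=float)
    center = np.median(mvs, axis=0)              # predicted motion (search center)
    spread = np.max(np.abs(mvs - center))        # disagreement among neighbors
    window = int(np.clip(np.ceil(spread) + w_min, w_min, w_max))
    return tuple(np.round(center).astype(int)), window

# A static background gives a tight window; inconsistent neighbors widen it.
center, window = predict_search([(0, 0), (1, 0), (0, 1)])   # -> small window
```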


Simplified DC Calculation Method for Simplified Depth Coding Mode of 3D High Efficiency Video Coding

  • Jo, Hyunho; Lee, Jin Young; Choi, Byeongdoo; Sim, Donggyu
    • Journal of the Institute of Electronics Engineers of Korea, Vol. 51, No. 3, pp. 139-143, 2014
  • This paper proposes a simplified DC calculation method for the simplified depth coding (SDC) mode of 3D High Efficiency Video Coding (3D-HEVC) to reduce its computational complexity. To reduce complexity, the current 3D-HEVC reference software employs a reference-sample sub-sampling method; however, accumulation, branch, and division operations are still required, and these increase the computational complexity. The proposed method calculates the DC value without those operations. The experimental results show that the proposed method achieves a 0.1% coding gain for synthesized views under the common test conditions (CTC) while significantly reducing the number of computing operations.
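The contrast the abstract draws can be made concrete: a full DC average needs an accumulation loop and a division, while a representative-sample shortcut needs neither. Which reference samples the adopted 3D-HEVC method actually uses is not stated in the abstract, so the two-sample choice below is purely illustrative.

```python
# Hedged sketch: full DC average (accumulate + divide) versus a
# representative-sample shortcut (no accumulation, shift instead of divide).
import numpy as np

def dc_full(top, left):
    """Average over all reconstructed neighboring reference samples."""
    return (int(np.sum(top)) + int(np.sum(left))) // (len(top) + len(left))

def dc_simplified(top, left):
    """Two representative corner samples and a shift: no loop, no division.
    The choice of samples is illustrative, not the standardized rule."""
    return (int(top[0]) + int(left[0]) + 1) >> 1
```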