• Title/Summary/Keyword: Time complexity


Optimization of Link-level Performance and Complexity for the Floating-point and Fixed-point Designs of IEEE 802.16e OFDMA/TDD Mobile Modem (IEEE 802.16e OFDMA/TDD 이동국 모뎀의 링크 성능과 복잡도 최적화를 위한 부동 및 고정 소수점 설계)

  • Sun, Tae-Hyoung;Kang, Seung-Won;Kim, Kyu-Hyun;Chang, Kyung-Hi
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.11 s.353 / pp.95-117 / 2006
  • In this paper, we describe the optimization of link-level performance and complexity for the floating-point and fixed-point designs of an IEEE 802.16e OFDMA/TDD mobile modem. In the floating-point design, we propose channel estimation methods for the downlink traffic channel and select the optimal method by computer simulation. We also propose efficient algorithms for time and frequency synchronization, the digital front end (DFE), and CINR estimation to optimize system performance, and we describe the fixed-point design of the uplink traffic and control channels. The superiority of the proposed algorithms is validated using detection, false-alarm, and miss probabilities, mean acquisition time, PER curves, etc. For the fixed-point design, we propose an efficient methodology for deriving an optimized fixed-point design from the floating-point one. Finally, we design fixed-point versions of the traffic channel, time and frequency synchronization, and DFE blocks in the uplink and downlink. The tradeoff between performance and complexity is optimized through computer simulations.
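
The float-to-fixed methodology itself is not spelled out in the abstract. As a hedged illustration of the kind of tradeoff being swept, the Python sketch below (with assumed word lengths and a synthetic sine signal) quantizes samples to a signed fixed-point format and reports the quantization SNR, the sort of metric a designer would plot against word length to balance performance and complexity.

    import math

    def quantize(x, frac_bits, word_bits=16):
        """Quantize x to a signed fixed-point value with frac_bits fractional bits."""
        scale = 1 << frac_bits
        lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
        q = max(lo, min(hi, int(round(x * scale))))  # round, then saturate
        return q / scale

    def quantization_snr(samples, frac_bits):
        noise = [s - quantize(s, frac_bits) for s in samples]
        p_sig = sum(s * s for s in samples) / len(samples)
        p_noise = sum(e * e for e in noise) / len(noise) or 1e-30
        return 10 * math.log10(p_sig / p_noise)

    samples = [math.sin(2 * math.pi * k / 64) for k in range(1024)]
    for frac_bits in (7, 11, 15):  # candidate fractional word lengths
        print(frac_bits, round(quantization_snr(samples, frac_bits), 1), "dB")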

Counter-Based Approaches for Efficient WCET Analysis of Multicore Processors with Shared Caches

  • Ding, Yiqiang;Zhang, Wei
    • Journal of Computing Science and Engineering / v.7 no.4 / pp.285-299 / 2013
  • To enable hard real-time systems to take advantage of multicore processors, it is crucial to obtain the worst-case execution time (WCET) for programs running on multicore processors. However, this is challenging and complicated due to the inter-thread interferences from the shared resources in a multicore processor. Recent research used the combined cache conflict graph (CCCG) to model and compute the worst-case inter-thread interferences on a shared L2 cache in a multicore processor, called the CCCG-based approach in this paper. Although it can compute the WCET safely and accurately, its computational complexity is exponential and prohibitive for a large number of cores. In this paper, we propose three counter-based approaches that significantly reduce the complexity of multicore WCET analysis while achieving absolute safety with tightness close to that of the CCCG-based approach. The basic counter-based approach simply counts the worst-case number of cache line blocks mapped to a cache set of a shared L2 cache from all the concurrent threads, and compares it with the associativity of the cache set to compute the worst-case cache behavior. The enhanced counter-based approach uses techniques to improve the accuracy of calculating the counters. The hybrid counter-based approach combines the enhanced counter-based approach and the CCCG-based approach to further improve the tightness of analysis without significantly increasing the complexity. Our experiments on a 4-core processor indicate that the enhanced counter-based approach overestimates the WCET by 14% on average compared to the CCCG-based approach, while its average running time is less than 1/380 that of the CCCG-based approach. The hybrid approach reduces the overestimation to only 2.65%, while its running time is less than 1/150 that of the CCCG-based approach on average.
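
As a hedged illustration of the basic counter-based idea (not the authors' implementation), the Python sketch below counts the distinct cache lines that all concurrent threads can map into each set of a shared cache and flags a set as a worst-case miss whenever the count exceeds the associativity; the modulo set mapping and the classification labels are assumptions.

    def classify_sets(thread_lines, num_sets, assoc):
        """thread_lines: per-thread iterables of accessed cache line addresses."""
        counters = [set() for _ in range(num_sets)]
        for lines in thread_lines:
            for line in lines:
                counters[line % num_sets].add(line)  # modulo set mapping (assumed)
        # A set that can receive more distinct lines than it has ways cannot
        # guarantee any line survives, so it is classified as a worst-case miss.
        return ["always-miss" if len(c) > assoc else "may-hit" for c in counters]

    threads = [[0, 4, 8], [4, 12], [1, 5]]  # toy line addresses, one list per thread
    print(classify_sets(threads, num_sets=4, assoc=2))
    # -> ['always-miss', 'may-hit', 'may-hit', 'may-hit']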

Maximum Sugar Loss Lot First Production Algorithm for Cane Sugar Production Problem (사탕수수 설탕 생산 문제의 최대 당분 손실 로트 우선 생산 알고리즘)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information / v.19 no.12 / pp.171-175 / 2014
  • Guéret et al. attempt to solve the cane sugar production problem, a kind of bin packing problem classified as NP-complete, using linear programming with $O(m^4)$ time complexity. This paper, on the other hand, suggests a maximum-sugar-loss-lot-first greedy algorithm with $O(m \log m)$ polynomial time complexity, under the assumption that a polynomial-time rule for finding the solution exists. The proposed algorithm sorts the lots in descending order of sugar-loss slope, selects lots up to each slot's production capacity, and swaps lots whose life span is about to expire for the lots selected last. Experiments show that this algorithm reduces the $O(m^4)$ time complexity of linear programming to $O(m \log m)$ and also obtains better results than linear programming.
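
A hedged Python sketch of the greedy rule follows; the data, the loss model, and the life-span swap step (omitted here) are illustrative assumptions, but it shows the $O(m \log m)$ sort-then-pack structure the abstract describes.

    def schedule(lots, slot_capacity):
        """lots: list of (name, quantity, sugar_loss_rate); pack fastest losers first."""
        order = sorted(lots, key=lambda lot: lot[2], reverse=True)  # O(m log m) sort
        slots, current, used = [], [], 0
        for name, qty, _rate in order:
            if used + qty > slot_capacity:  # current slot full: open a new one
                slots.append(current)
                current, used = [], 0
            current.append(name)
            used += qty
        if current:
            slots.append(current)
        return slots

    lots = [("A", 3, 0.9), ("B", 2, 1.5), ("C", 4, 0.4), ("D", 1, 1.1)]
    print(schedule(lots, slot_capacity=5))  # -> [['B', 'D'], ['A'], ['C']]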

Indoor localization algorithm based on WLAN using modified database and selective operation (변형된 데이터베이스와 선택적 연산을 이용한 WLAN 실내위치인식 알고리즘)

  • Seong, Ju-Hyeon;Park, Jong-Sung;Lee, Seung-Hee;Seo, Dong-Hoan
    • Journal of Advanced Marine Engineering and Technology / v.37 no.8 / pp.932-938 / 2013
  • Recently, fingerprinting, one of the WLAN-based indoor localization methods, has been widely studied owing to its robustness against ranging errors caused by the diffraction and refraction of radio waves. However, gathering signals and comparing measured signals against the database are time-consuming and computationally complex. To compensate for these problems, this paper presents a WLAN indoor localization algorithm that combines a modified database with selective operation on the signals collected in real time. The proposed algorithm reduces the database configuration time and data size through linear interpolation and signal-strength thresholding, while localization accuracy is maintained, and computational complexity reduced, by operating selectively on the signals measured in real time. Experimental results in a corridor show that the proposed algorithm improves localization accuracy by 17.8% and reduces computational complexity by 46% compared with the conventional fingerprint method.
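
The Python sketch below is a hedged illustration of the two ideas: it densifies a one-dimensional fingerprint database by linear interpolation and then matches a live sample using only the APs whose RSSI exceeds a threshold. The corridor layout, threshold value, and squared-difference metric are assumptions, not the paper's exact design.

    def interpolate(db, steps=2):
        """db: list of (position, {ap: rssi}) surveyed along a 1-D corridor."""
        dense = []
        for (p0, f0), (p1, f1) in zip(db, db[1:]):
            for s in range(steps):
                t = s / steps
                pos = p0 + t * (p1 - p0)
                fp = {ap: f0[ap] + t * (f1[ap] - f0[ap]) for ap in f0}
                dense.append((pos, fp))  # interpolated fingerprint between surveys
        dense.append(db[-1])
        return dense

    def locate(sample, dense_db, threshold=-80):
        strong = {ap: r for ap, r in sample.items() if r > threshold}  # selective
        def dist(fp):
            return sum((fp.get(ap, -100) - r) ** 2 for ap, r in strong.items())
        return min(dense_db, key=lambda entry: dist(entry[1]))[0]

    db = [(0.0, {"ap1": -40, "ap2": -70}), (10.0, {"ap1": -60, "ap2": -50})]
    print(locate({"ap1": -45, "ap2": -90}, interpolate(db)))  # -> 0.0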

Efficient Bit-Parallel Shifted Polynomial Basis Multipliers for All Irreducible Trinomial (삼항 기약다항식을 위한 효율적인 Shifted Polynomial Basis 비트-병렬 곱셈기)

  • Chang, Nam-Su;Kim, Chang-Han;Hong, Seok-Hie;Park, Young-Ho
    • Journal of the Korea Institute of Information Security & Cryptology / v.19 no.2 / pp.49-61 / 2009
  • Finite field multiplication is one of the most important operations in finite field arithmetic. Recently, Fan and Dai introduced the shifted polynomial basis (SPB) and constructed a non-pipelined bit-parallel multiplier for $F_{2^n}$. In this paper, we propose new bit-parallel SPB type I and type II multipliers for $F_{2^n}$ defined by an irreducible trinomial $x^{n}+x^{k}+1$. The proposed type I multiplier has better space and time complexity than previous multipliers, and the proposed type II multiplier has smaller space complexity than all previous SPB multipliers (including our type I multiplier). However, the time complexity of the type II multiplier increases by one XOR delay in the worst case.
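
The multipliers themselves are hardware circuits, but the arithmetic they implement is multiplication in $F_{2^n}$ modulo $x^{n}+x^{k}+1$. The Python sketch below is a hedged software rendering of that arithmetic (schoolbook carry-less multiplication followed by trinomial reduction), with $n=7$, $k=1$ chosen arbitrarily; it does not reproduce the SPB circuit structure.

    def gf2_mul(a, b, n, k):
        """Multiply field elements a, b (bit i holds the coefficient of x^i)."""
        r = 0
        for i in range(n):  # schoolbook carry-less multiplication
            if (b >> i) & 1:
                r ^= a << i
        for i in range(2 * n - 2, n - 1, -1):  # reduce using x^n = x^k + 1
            if (r >> i) & 1:
                r ^= (1 << i) ^ (1 << (i - n + k)) ^ (1 << (i - n))
        return r

    # Example in F_{2^7} defined by the irreducible trinomial x^7 + x + 1:
    print(bin(gf2_mul(0b1011, 0b0110, n=7, k=1)))  # -> 0b111010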

BIFURCATION OF A PREDATOR-PREY SYSTEM WITH GENERATION DELAY AND HABITAT COMPLEXITY

  • Ma, Zhihui;Tang, Haopeng;Wang, Shufan;Wang, Tingting
    • Journal of the Korean Mathematical Society / v.55 no.1 / pp.43-58 / 2018
  • In this paper, we study a delayed predator-prey system with a Holling type IV functional response incorporating the effect of habitat complexity. The results show that stability switches exist and Hopf bifurcation occurs as the delay crosses a set of critical values. Explicit formulas that determine the direction and stability of the Hopf bifurcation are obtained by the normal form theory and the center manifold theorem.
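
A representative form of such a model, written here as a hedged illustration rather than the paper's exact system, couples a Holling type IV response (attack rate $m$, half-saturation constant $a$) with a habitat-complexity parameter $c \in [0,1)$ that reduces predator-prey encounters, and a generation delay $\tau$ in the predator's numerical response:

    \begin{aligned}
    \frac{dx}{dt} &= r x\Bigl(1 - \frac{x}{K}\Bigr)
                     - \frac{m(1-c)\,x\,y}{a + (1-c)^{2}x^{2}}, \\
    \frac{dy}{dt} &= \frac{e\,m(1-c)\,x(t-\tau)\,y(t-\tau)}{a + (1-c)^{2}x^{2}(t-\tau)} - d\,y.
    \end{aligned}

Hopf bifurcation is then detected by locating the delays $\tau$ at which the characteristic equation of the linearization has a pair of purely imaginary roots.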

Efficient Algorithm and Architecture for Elliptic Curve Cryptographic Processor

  • Nguyen, Tuy Tan;Lee, Hanho
    • JSTS: Journal of Semiconductor Technology and Science / v.16 no.1 / pp.118-125 / 2016
  • This paper presents a new highly efficient algorithm and architecture for an elliptic curve cryptographic processor. To reduce computational complexity, a modified Lopez-Dahab scalar multiplication and a left-to-right algorithm are proposed for the point multiplication operation. Moreover, bit-serial Galois field multiplication is used to decrease hardware complexity, and the field multiplications are performed in parallel to improve system latency. As a result, our approach reduces hardware cost while keeping the total time required for point multiplication reasonable. Results on Xilinx Virtex-5 and Virtex-7 FPGAs and a VLSI implementation show that the proposed architecture has lower hardware complexity, fewer clock cycles, and higher efficiency than previous works.
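
The abstract names a left-to-right algorithm for point multiplication; the Python sketch below shows that MSB-first double-and-add control flow on a toy prime-field curve, since the actual design uses Lopez-Dahab projective coordinates over binary fields whose formulas are not given here. The curve, base point, and scalar are illustrative assumptions.

    P_MOD, A = 97, 2  # toy curve y^2 = x^3 + 2x + 3 over F_97 (assumption)

    def ec_add(p, q):
        """Add two affine points; None represents the point at infinity."""
        if p is None:
            return q
        if q is None:
            return p
        (x1, y1), (x2, y2) = p, q
        if x1 == x2 and (y1 + y2) % P_MOD == 0:
            return None  # inverse points sum to infinity
        if p == q:
            lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
        x3 = (lam * lam - x1 - x2) % P_MOD
        return (x3, (lam * (x1 - x3) - y1) % P_MOD)

    def scalar_mul(k, p):
        acc = None  # accumulator starts at the point at infinity
        for bit in bin(k)[2:]:  # scan scalar bits from MSB to LSB
            acc = ec_add(acc, acc)  # always double ...
            if bit == "1":
                acc = ec_add(acc, p)  # ... and add only on a 1 bit
        return acc

    print(scalar_mul(3, (3, 6)))  # -> (80, 87)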

Cube selection using function complexity and minimization of two-level Reed-Muller expressions (함수복잡도를 이용한 큐브선택과 이단계 리드뮬러표현의 최소화)

  • Lee, Gueesang
    • Journal of the Korean Institute of Telematics and Electronics A / v.32A no.6 / pp.104-110 / 1995
  • In this paper, an effective method for minimizing two-level Reed-Muller expressions by cube selection that considers functional complexity is presented. In contrast to previous methods, which use Xlinking operations to join two cubes for minimization, the cube selection method selects cubes one at a time until they cover the ON-set of the given function. This method works for most benchmark circuits, but for parity-type functions it shows poor performance. To solve this problem, a cost function that computes the functional complexity, rather than only the size of the ON-set, is used. The optimization therefore considers how the true minterms are grouped so that they can be realized by a small number of cubes; in other words, it considers how the function changes and how the change affects the next optimization step. Experimental results show better performance than previous results in many cases, including parity-type functions.
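
As a hedged illustration of cube selection under XOR (Reed-Muller) semantics, the Python sketch below greedily picks one cube at a time until the ON-set is covered. The cost function here, the size of the residual ON-set, is a simplification of the paper's functional-complexity measure; for the 2-input parity function it finds the two-cube ESOP cover.

    from itertools import product

    def cube_minterms(cube):
        """cube: string over {'0','1','-'}; returns the set of minterms it covers."""
        choices = [(c,) if c in "01" else ("0", "1") for c in cube]
        return {"".join(bits) for bits in product(*choices)}

    def greedy_esop(on_set, n):
        cubes = ["".join(c) for c in product("01-", repeat=n)]
        residue, cover = set(on_set), []
        while residue:
            # Pick the cube whose XOR leaves the smallest residual ON-set.
            best = min(cubes, key=lambda c: len(residue ^ cube_minterms(c)))
            residue ^= cube_minterms(best)  # XOR semantics: cubes toggle minterms
            cover.append(best)
        return cover

    # 2-input parity (XOR) has ON-set {01, 10}:
    print(greedy_esop({"01", "10"}, n=2))  # -> ['01', '10']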


An SAD-Based Selective Bi-prediction Method for Fast Motion Estimation in High Efficiency Video Coding

  • Kim, Jongho;Jun, DongSan;Jeong, Seyoon;Cho, Sukhee;Choi, Jin Soo;Kim, Jinwoong;Ahn, Chieteuk
    • ETRI Journal / v.34 no.5 / pp.753-758 / 2012
  • As the next-generation video coding standard, High Efficiency Video Coding (HEVC) has adopted advanced coding tools despite the increase in computational complexity. In this paper, we propose a selective bi-prediction method to reduce the encoding complexity of HEVC. The proposed method evaluates the statistical property of the sum of absolute differences in the motion estimation process and determines whether bi-prediction is performed. A performance comparison of the complexity reduction is provided to show the effectiveness of the proposed method compared to the HEVC test model version 4.0. On average, 50% of the bi-prediction time can be reduced by the proposed method, while maintaining a negligible bit increment and a minimal loss of image quality.
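
The decision rule in the paper is based on the statistical properties of the SAD; the Python sketch below is a hedged stand-in that skips the bi-prediction search whenever the best uni-prediction SAD is already small relative to the block's activity. The ratio threshold and the simple averaging bi-predictor are assumptions, not the HM 4.0 logic.

    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    def choose_prediction(block, pred_l0, pred_l1, skip_ratio=0.05):
        sad0, sad1 = sad(block, pred_l0), sad(block, pred_l1)
        best_uni = min(sad0, sad1)
        activity = sum(abs(p) for p in block) or 1
        if best_uni < skip_ratio * activity:  # uni-prediction already good:
            return "uni", best_uni            # skip the costly bi-prediction search
        bi = [(a + b) // 2 for a, b in zip(pred_l0, pred_l1)]
        sad_bi = sad(block, bi)
        return ("bi", sad_bi) if sad_bi < best_uni else ("uni", best_uni)

    block = [10, 12, 11, 13]
    print(choose_prediction(block, [10, 12, 11, 12], [9, 14, 11, 13]))  # -> ('uni', 1)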

Advanced Block Matching Algorithm for Motion Estimation and Motion Compensation

  • Cho, Hyo-Moon;Cho, Sang-Bock
    • Proceedings of the KIEE Conference / 2007.04a / pp.23-25 / 2007
  • The partial distortion elimination (PDE) scheme is used to decrease the computational complexity of the sum of absolute differences (SAD), since the SAD calculation takes a large portion of video compression time. In PDE-based motion estimation (ME), it is ideal for the partial SAD to become large early in the summation. Traditional scan-order methods require long operation times and have high computational complexity because they adopt division or multiplication. In this paper, we introduce a new scan order and search order that use only adders. We define an average measure called the rough average value (RAVR), which reduces computational complexity and increases operational speed, thereby improving SAD performance. The RAVR is also used to decide the search-order sequence: if the RAVR difference between the current block and a candidate block is small, the candidate block has a high probability of being a suitable candidate. Our proposed algorithm thus combines these two concepts, improving SAD performance while remaining easy to implement in hardware.
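
The PDE part of the scheme is standard enough to sketch: accumulate the SAD row by row and abandon a candidate block as soon as the partial sum reaches the best SAD found so far. The Python sketch below shows that early termination; the RAVR-driven scan and search ordering from the paper is not modeled, so candidates are visited in a fixed order.

    def pde_sad(cur, cand, best_so_far):
        """Return the SAD of cand against cur, or None if terminated early."""
        partial = 0
        for row_c, row_r in zip(cur, cand):  # accumulate one row per iteration
            partial += sum(abs(a - b) for a, b in zip(row_c, row_r))
            if partial >= best_so_far:       # early termination: cannot win
                return None
        return partial

    def best_match(cur, candidates):
        best, best_idx = float("inf"), None
        for idx, cand in enumerate(candidates):
            s = pde_sad(cur, cand, best)
            if s is not None:
                best, best_idx = s, idx
        return best_idx, best

    cur = [[1, 2], [3, 4]]
    cands = [[[9, 9], [9, 9]], [[1, 2], [3, 5]], [[1, 2], [3, 4]]]
    print(best_match(cur, cands))  # -> (2, 0)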
