• Title/Summary/Keyword: complexity science

Complexity Analysis of Internet Video Coding (IVC) Decoding

  • Park, Sang-hyo; Dong, Tianyu; Jang, Euee S.
    • Journal of Multimedia Information System / v.4 no.4 / pp.179-188 / 2017
  • The Internet Video Coding (IVC) standard is due to be published by the Moving Picture Experts Group (MPEG) for various Internet applications such as Internet broadcast streaming. IVC fundamentally aims at three things: 1) keeping IVC patents under a free-of-charge license, 2) achieving compression performance comparable to the AVC/H.264 constrained Baseline Profile (cBP), and 3) keeping computational complexity low enough for feasible real-time encoding and decoding. MPEG experts have worked diligently on the intellectual property rights issues for IVC, and they reported that IVC has already achieved the second goal (compression performance), even showing performance comparable to the AVC/H.264 High Profile (HP). On the complexity issue, however, there has been no thorough analysis of the IVC decoder. In this paper, we analyze the time complexity of the IVC decoder by evaluating its running time. Experimental results show that IVC decoding is 3.6 times and 3.1 times more complex than AVC/H.264 cBP under constrained set (CS) 1 and CS2, respectively. Compared to AVC/H.264 HP, IVC is 2.8 times and 2.9 times slower in decoding time under CS1 and CS2, respectively. The most critical tool to improve for a lightweight IVC decoder is the motion compensation process, which contains a resolution-adaptive interpolation filtering process.
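  • The kind of running-time comparison reported above can be reproduced in outline with a simple timing harness; the sketch below is a minimal illustration, where the decoder executables, flags, and bitstream names are hypothetical placeholders rather than the actual tools used in the paper.

```python
import subprocess
import time

def decode_time(cmd, runs=5):
    """Median wall-clock decoding time of a command over several runs."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - start)
    return sorted(times)[len(times) // 2]

# Hypothetical decoder invocations; substitute the real binaries and streams.
ivc = decode_time(["ivc_decoder", "-i", "seq.ivc", "-o", "out.yuv"])
avc = decode_time(["avc_decoder", "-i", "seq.264", "-o", "out.yuv"])
print(f"IVC/AVC decoding-time ratio: {ivc / avc:.2f}")
```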

Analysis on the Complexity of Scientific Reasoning during Pre-service Elementary School Teachers' Open-Inquiry Activities (예비초등교사의 자유 탐구 활동에서 나타나는 추론 복잡성 분석)

  • Jeong, Sun-Hee; Choi, Hyun-Dong; Yang, Il-Ho
    • Journal of Korean Elementary Science Education / v.30 no.3 / pp.379-393 / 2011
  • The purpose of this study was to analyze the complexity of scientific reasoning during open-inquiry activities of pre-service elementary school teachers. Six pre-service elementary teachers who participated in open-inquiry activities were selected. Data on scientific reasoning during the inquiry process were collected from video recordings of their reports on the inquiry process and results, their written reports, and the researcher's notes. The CSRI Matrix (Dolan & Grady, 2010) was used to analyze the complexity of the participants' scientific reasoning. The results showed that the degree of complexity of scientific reasoning varied across participants. In particular, low complexity of scientific reasoning appeared in posing preliminary hypotheses, providing suggestions for future research, and communicating and defending findings. Also, the more similar a pre-service teacher's epistemology of inquiry was to that of scientists, the more complex the scientific reasoning that appeared. These results suggest that teachers should impress on students the importance of studying precedent research and providing suggestions for future research, and should provide a place for communicating and defending findings.

Low complexity hybrid layered tabu-likelihood ascent search for large MIMO detection with perfect and estimated channel state information

  • Sourav Chakraborty; Nirmalendu Bikas Sinha; Monojit Mitra
    • ETRI Journal / v.45 no.3 / pp.418-432 / 2023
  • In this work, we propose a low-complexity hybrid layered tabu-likelihood ascent search (LTLAS) algorithm for large multiple-input multiple-output (MIMO) systems. The conventional layered tabu search (LTS) approach involves many partial reactive tabu searches (RTSs), and each RTS requires an initialization phase and a searching phase. In the proposed algorithm, we restrict the number of RTS operations to an upper limit; once that limit is exceeded, RTS is replaced by low-complexity likelihood ascent search (LAS) operations. A block-based detection approach is adopted to maintain higher signal-to-noise ratio (SNR) detection performance. An efficient precomputation technique is derived that suppresses redundant computations. Simulation results show that the bit error rate (BER) performance of the proposed detection method is close to that of the conventional LTS method. The complexity analysis shows that the proposed method has significantly lower computational complexity than conventional methods, and it can eliminate almost 50% of the real operations needed to achieve a BER of 10^-3.
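  • As an illustration of the LAS building block mentioned above, the sketch below implements a plain likelihood ascent search for BPSK symbols: it repeatedly flips the single symbol whose flip most reduces ||y - Hx||^2, stopping at a local minimum. This is a generic LAS step under stated assumptions, not the full layered tabu/LAS hybrid proposed in the paper.

```python
import numpy as np

def las_detect(H, y, x0, max_iter=200):
    """Plain likelihood ascent search for BPSK (x_k in {-1, +1}).

    Flipping x_k changes the cost ||y - H x||^2 by
    4 * x_k * h_k^T (y - H x) + 4 * ||h_k||^2, so the best flip
    can be found in one vectorized pass per iteration.
    """
    x = x0.copy()
    col_norms = np.sum(H ** 2, axis=0)          # ||h_k||^2 per column
    for _ in range(max_iter):
        z = H.T @ (y - H @ x)                   # correlations h_k^T (y - Hx)
        delta = 4.0 * x * z + 4.0 * col_norms   # cost change for each flip
        k = int(np.argmin(delta))
        if delta[k] >= 0:                       # no flip lowers the cost
            break
        x[k] = -x[k]
    return x

# Example: 8x8 real-valued MIMO, matched-filter starting point.
rng = np.random.default_rng(0)
H = rng.normal(size=(8, 8))
x_true = rng.choice([-1.0, 1.0], size=8)
y = H @ x_true + 0.1 * rng.normal(size=8)
x_hat = las_detect(H, y, np.sign(H.T @ y))
```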

N-Step Sliding Recursion Formula of Variance and Its Implementation

  • Yu, Lang; He, Gang; Mutahir, Ahmad Khwaja
    • Journal of Information Processing Systems / v.16 no.4 / pp.832-844 / 2020
  • The degree of dispersion of a random variable can be described by its variance, which reflects the distance of the random variable from its mean. However, the traditional variance calculation algorithm has O(n) time complexity, since it recomputes over all samples. When the number of samples increases, or in high-speed signal processing, algorithms with O(n) time complexity cost a huge amount of time, which may degrade the performance of the whole system. This paper proposes a novel multi-step recursive algorithm that calculates the variance of a time-varying data series in O(1) (constant) time. Numerical simulations and experiments are presented, and the results demonstrate that the proposed multi-step recursive algorithm effectively decreases computing time and hence significantly improves variance calculation efficiency for time-varying data, showing its potential value for time-consuming data analysis and high-speed signal processing.
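  • The recursion idea can be illustrated with a one-step sliding window that maintains a running sum and sum of squares, so each update costs O(1) instead of rescanning the window. This is a generic sketch of the technique, not the paper's exact N-step formula; note that the sum-of-squares form can lose precision when the mean is large relative to the spread.

```python
from collections import deque

class SlidingVariance:
    """O(1)-per-update variance over a fixed-length sliding window."""

    def __init__(self, window):
        self.n = window
        self.buf = deque()
        self.s = 0.0    # running sum of samples
        self.q = 0.0    # running sum of squared samples

    def update(self, x):
        self.buf.append(x)
        self.s += x
        self.q += x * x
        if len(self.buf) > self.n:      # evict the oldest sample
            old = self.buf.popleft()
            self.s -= old
            self.q -= old * old

    def variance(self):
        m = len(self.buf)
        if m == 0:
            return 0.0
        mean = self.s / m
        return self.q / m - mean * mean  # population variance of the window
```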

Low-Complexity Triple-Error-Correcting Parallel BCH Decoder

  • Yeon, Jaewoong; Yang, Seung-Jun; Kim, Cheolho; Lee, Hanho
    • JSTS: Journal of Semiconductor Technology and Science / v.13 no.5 / pp.465-472 / 2013
  • This paper presents a low-complexity triple-error-correcting parallel Bose-Chaudhuri-Hocquenghem (BCH) decoder architecture and its efficient design techniques. A novel modified step-by-step (m-SBS) decoding algorithm, which significantly reduces computational complexity, is proposed for the parallel BCH decoder. In addition, a determinant calculator and an error locator are proposed to reduce hardware complexity; specifically, a shared syndrome factor calculator and a self-error detection scheme are introduced. A multi-channel multi-parallel BCH decoder using the proposed m-SBS algorithm and design techniques has considerably less hardware complexity and latency than one using conventional algorithms. For a 16-channel 4-parallel (1020, 990) BCH decoder over GF(2^12), the proposed design can reduce complexity by at least 23% compared to conventional architectures.
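  • The front end common to step-by-step BCH decoding is syndrome computation, S_i = r(alpha^i) for i = 1..2t, where all-zero syndromes indicate no detectable error. The sketch below evaluates syndromes over a toy GF(2^4) field (the paper's decoder works over GF(2^12)); it illustrates the computation only, not the proposed m-SBS architecture.

```python
def gf_tables(m=4, prim=0b10011):
    """Antilog/log tables for GF(2^m) with a given primitive polynomial."""
    size = (1 << m) - 1
    exp = [0] * size
    log = [0] * (size + 1)
    x = 1
    for i in range(size):
        exp[i] = x
        log[x] = i
        x <<= 1
        if x & (1 << m):                   # reduce modulo the primitive poly
            x ^= prim
    return exp, log

def bch_syndromes(received_bits, t=3, m=4, prim=0b10011):
    """S_i = r(alpha^i) for i = 1..2t; all zeros means no detectable error."""
    exp, _ = gf_tables(m, prim)
    size = (1 << m) - 1
    synd = []
    for i in range(1, 2 * t + 1):
        s = 0
        for j, bit in enumerate(received_bits):
            if bit:
                s ^= exp[(i * j) % size]   # add alpha^(i*j) in GF(2^m)
        synd.append(s)
    return synd
```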

Fast CU Encoding Schemes Based on Merge Mode and Motion Estimation for HEVC Inter Prediction

  • Wu, Jinfu; Guo, Baolong; Hou, Jie; Yan, Yunyi; Jiang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1195-1211 / 2016
  • The emerging video coding standard High Efficiency Video Coding (HEVC) has shown almost 40% bit-rate reduction over the state-of-the-art Advanced Video Coding (AVC) standard, but at about 40% computational complexity overhead. The main source of HEVC's computational complexity is inter prediction, which accounts for 60%-70% of the whole encoding time. In this paper, we propose several fast coding unit (CU) encoding schemes based on Merge mode and motion estimation information to reduce the computational complexity caused by HEVC inter prediction. First, an early Merge mode decision method based on motion estimation (EMD) is proposed for each CU size. Then, a Merge mode based early termination method (MET) is developed to determine the CU size at an early stage. To provide a better balance between computational complexity and coding efficiency, several fast CU encoding schemes are surveyed according to the rate-distortion-complexity characteristics of the EMD and MET methods as a function of CU size. These fast CU encoding schemes can be seamlessly incorporated into the existing control structures of the HEVC encoder without limiting its potential for parallelization and hardware acceleration. Experimental results demonstrate that the proposed schemes achieve 19%-46% computational complexity reduction over the HEVC test model reference software, HM 16.4, at a cost of 0.2%-2.4% bit-rate increase under the random access coding configuration. The respective values under the low-delay B coding configuration are 17%-43% and 0.1%-1.2%.
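  • The interplay of the EMD and MET decisions can be pictured as a recursive CU evaluation, sketched below. This is a schematic reading of the abstract, not HM's actual control flow; the cost values and thresholds are hypothetical stand-ins.

```python
def rd_cost(cu, mode):
    # Stand-in rate-distortion cost; a real encoder evaluates each mode.
    return cu["costs"][mode]

def encode_cu(cu, depth=0, max_depth=3):
    """Schematic CU decision flow with EMD and MET early exits."""
    merge_cost = rd_cost(cu, "merge")
    if merge_cost < cu["emd_threshold"]:
        best = merge_cost       # EMD: accept Merge, skip motion estimation
    else:
        best = min(merge_cost, rd_cost(cu, "inter_me"), rd_cost(cu, "intra"))
    if depth < max_depth and best >= cu["met_threshold"]:
        # MET did not fire: also evaluate splitting into the four sub-CUs
        split_cost = sum(encode_cu(sub, depth + 1, max_depth)
                         for sub in cu["children"])
        best = min(best, split_cost)
    return best
```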

Measurement of Classes Complexity in the Object-Oriented Analysis Phase (객체지향 분석 단계에서의 클래스 복잡도 측정)

  • Kim, Yu-Kyung; Park, Jai-Nyun
    • Journal of KIISE: Software and Applications / v.28 no.10 / pp.720-731 / 2001
  • Complexity metrics developed for the structured paradigm of software development are not suitable for use with the object-oriented (OO) paradigm, because they do not support key OO concepts such as inheritance, polymorphism, message passing, and encapsulation. There has been much research on OO software metrics, such as program complexity and design metrics, but metrics that measure the complexity of classes at the OO analysis phase are also needed, because they provide earlier feedback to the development project, and earlier feedback means more effective development and less costly maintenance. In this paper, we propose new metrics to measure the complexity of analysis classes identified during analysis based on the Rational Unified Process (RUP). The collaboration complexity, denoted CC, is the maximum number of collaborations that can be achieved with each collaborator, and captures the potential complexity. The interface complexity, denoted IC, reflects the difficulty of understanding the interfaces of collaborators. We theoretically verify the suggested metrics against Weyuker's nine properties. Moreover, we show computation results for the analysis classes of a system that automatically responds to user questions using a text mining technique. Comparing CC with the CBO and WMC metrics suggested by Chidamber and Kemerer, classes with high values of the proposed metric remain highly complex in the design phase as well, and complexity is captured better by CC and IC than by CBO and WMC. We expect our metrics to provide earlier feedback and hence make it possible to predict the effort, cost, and time required for the remaining processes. As a result, we expect to develop cost-effective OO software by reviewing the complexity of analysis classes in the first stage of the Software Development Life Cycle (SDLC).
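  • Read literally, CC can be computed by counting collaborations per collaborator and taking the maximum. The sketch below is one hypothetical reading of that definition, not the paper's formal one; the input encoding is an assumption.

```python
from collections import Counter

def collaboration_complexity(collaborations):
    """CC for one analysis class: the maximum number of collaborations
    it has with any single collaborator.

    `collaborations` is a list of collaborator names, one entry per
    collaboration drawn from the analysis model (a hypothetical encoding).
    """
    per_partner = Counter(collaborations)
    return max(per_partner.values(), default=0)

# Example: a class collaborating twice with "Parser", once with "Indexer".
print(collaboration_complexity(["Parser", "Indexer", "Parser"]))  # -> 2
```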

A Software Complexity Measurement Technique for Object-Oriented Reverse Engineering (객체지향 역공학을 위한 소프트웨어 복잡도 측정 기법)

  • Kim Jongwan; Hwang Chong-Sun
    • Journal of KIISE: Software and Applications / v.32 no.9 / pp.847-852 / 2005
  • Over the last decade, numerous complexity measurement techniques for object-oriented (OO) software systems have been proposed to manage the effects of OO code. These techniques may be based on source code analysis, such as WMC (Weighted Methods per Class) and LCOM (Lack of Cohesion in Methods), but they are limited to counting the number of functions (in C++). We suggest a new weighting method that considers each method's number of parameters and the data type of its return value. We then present an effective complexity measurement technique based on the weight of class interfaces, to provide guidelines for measuring the class complexity of OO code in reverse engineering. The results of this research show that the proposed complexity measurement technique, ECC (Enhanced Class Complexity), is consistent and accurate in the C++ environment.
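  • The weighting idea, scoring each method by its parameter count and return type, can be sketched as below. The weight table and formula are hypothetical stand-ins; the paper's actual ECC definition may differ.

```python
# Hypothetical type weights: richer return types count more.
TYPE_WEIGHT = {"void": 0, "int": 1, "double": 1, "pointer": 2, "object": 3}

def method_weight(n_params, return_kind):
    """Weight of one method from its parameter count and return type."""
    return 1 + n_params + TYPE_WEIGHT.get(return_kind, 2)

def class_complexity(methods):
    """ECC-style score for a class: sum of its weighted method signatures.

    `methods` is a list of (n_params, return_kind) pairs.
    """
    return sum(method_weight(p, r) for p, r in methods)

# Example: a class with get() -> int and find(key, table) -> pointer.
print(class_complexity([(0, "int"), (2, "pointer")]))  # -> 2 + 5 = 7
```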

A complexity analysis of a "pragmatic" relaxation method for the combinatorial optimization with a side constraint (단일 추가제약을 갖는 조합최적화문제를 위한 실용적 완화해법의 계산시간 분석)

  • 홍성필
    • Journal of the Korean Operations Research and Management Science Society / v.25 no.1 / pp.27-36 / 2000
  • We perform a computational complexity analysis of a heuristic algorithm proposed in the literature for combinatorial optimization problems extended with a single side constraint. This algorithm, although such a view was not given in the original work, is a disguised version of an optimal Lagrangian dual solution technique. It has also been observed in some experiments to be a very efficient heuristic, producing near-optimal solutions to the primal problems. In particular, the number of iterations grows sublinearly in the network node size, so the heuristic seems particularly suitable for applications such as routing with semi-real-time requirements. The goal of this paper is to establish a polynomial worst-case complexity for the algorithm; in particular, the obtained complexity bound supports the sublinear growth of the required iterations.
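  • A generic sketch of the Lagrangian dual technique alluded to above, for a problem min c(x) s.t. d(x) <= D: relax the side constraint with a multiplier and bisect on it. The oracle interface is an assumption; for constrained routing, one oracle call would be a shortest-path run with combined edge weights c + lam*d.

```python
def lagrangian_bisection(oracle, d, budget, lam_hi=1e6, iters=60):
    """Bisection on the multiplier lam for: min c(x) s.t. d(x) <= budget.

    `oracle(lam)` must return a solution x minimizing c(x) + lam * d(x)
    over the unconstrained feasible set; `d(x)` evaluates the side
    constraint. Returns the best feasible solution found (or None).
    """
    x = oracle(0.0)
    if d(x) <= budget:              # side constraint inactive at lam = 0
        return x
    lo, hi, best = 0.0, lam_hi, None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        x = oracle(lam)
        if d(x) <= budget:          # feasible: the penalty can shrink
            best, hi = x, lam
        else:                       # infeasible: raise the penalty
            lo = lam
    return best
```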

An Adaptive Block Matching Algorithm based on Temporal Correlations

  • Yoon, Hyo-Sun; Son, Nam-Rye; Lee, Guee-Sang; Kim, Soo-Hyung
    • Proceedings of the IEEK Conference / 2002.07a / pp.188-191 / 2002
  • Motion estimation techniques have been developed to reduce the bit-rate of video sequences by removing temporal redundancy. However, the high computational complexity of the problem makes such techniques very difficult to apply to high-resolution applications in a real-time environment. For this reason, motion estimation algorithms with low computational complexity are viable solutions. If a priori knowledge about the motion of the current block is available before motion estimation, a better starting point for the search for an optimal motion vector can be selected, and the computational complexity is also reduced. In this paper, we present an adaptive block matching algorithm based on the temporal correlations of consecutive image frames, which defines the search pattern and the location of the initial starting point adaptively to reduce computational complexity. Experiments show that, compared with the diamond search (DS) algorithm, the proposed algorithm is about 0.1-0.5 dB better in terms of PSNR and improves by as much as 50% in terms of the average number of search points per motion estimation.
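  • The core idea, seeding the search with a temporally predicted motion vector and refining locally, can be sketched as follows. This illustrates the general approach with a plain SAD cost and a square refinement window, not the paper's exact adaptive search pattern.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy); inf if out of bounds."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf
    blk = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    cand = ref[y:y + bs, x:x + bs].astype(np.int32)
    return np.abs(blk - cand).sum()

def match_block(cur, ref, bx, by, bs, mv_pred, radius=2):
    """Motion search seeded by a temporal predictor: start at the
    co-located block's previous motion vector and refine in a small
    window around it."""
    best = (mv_pred, sad(cur, ref, bx, by, mv_pred[0], mv_pred[1], bs))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            mv = (mv_pred[0] + dx, mv_pred[1] + dy)
            cost = sad(cur, ref, bx, by, mv[0], mv[1], bs)
            if cost < best[1]:
                best = (mv, cost)
    return best[0]
```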
