• Title/Abstract/Keyword: Time complexity

Search results: 3,055 (processing time: 0.026 s)

A Study on Complexity Measure Algorithm of Time Series Data (시계열 데이타의 혼돈도 분석 알고리즘에 관한 연구)

  • 이병채;정기삼;이명호
    • Proceedings of the Korean Society of Medical and Biological Engineering (대한의용생체공학회 학술대회논문집) / 1995 Spring Conference / pp.281-284 / 1995
  • This paper describes a complexity measure algorithm based on nonlinear dynamics (chaos theory). In order to quantify the complexity or regularity of biomedical signals, the paper proposes fractal dimension-1 and fractal dimension-2 algorithms combined with a digital filter. The approximate entropy algorithm, which measures the regularity of a system, is also compared. The paper then investigates what these measures quantify in biomedical signals. Such quantified complexity measures may provide useful information about human physiology. (A sketch of the approximate entropy computation appears below.)

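The approximate entropy (ApEn) statistic mentioned in the abstract above is a standard regularity measure. Below is a minimal NumPy sketch of the usual ApEn(m, r) computation in Pincus' formulation; the embedding dimension m and tolerance r are common illustrative choices, not values taken from the paper.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Standard ApEn(m, r) of a 1-D series x (Pincus' formulation).

    Lower values indicate a more regular (less complex) signal.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)          # common rule of thumb, not from the paper

    def phi(m_):
        # All length-m_ embedding vectors x[i:i+m_].
        emb = np.array([x[i:i + m_] for i in range(n - m_ + 1)])
        # C_i = fraction of vectors within Chebyshev distance r of vector i.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(1000)
    regular = np.sin(0.1 * t)                     # highly regular signal
    noisy = regular + rng.normal(0, 0.5, t.size)  # same signal plus noise
    print(approximate_entropy(regular), approximate_entropy(noisy))
```

The regular sinusoid yields a noticeably smaller ApEn than the noisy version, which is the behavior such complexity measures rely on.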

Fast 3D Mesh Compression Using Shared Vertex Analysis

  • Jang, Euee-Seon;Lee, Seung-Wook;Koo, Bon-Ki;Kim, Dai-Yong;Son, Kyoung-Soo
    • ETRI Journal / Vol. 32, No. 1 / pp.163-165 / 2010
  • A trend in 3D mesh compression is codec design with low computational complexity that preserves the input vertex and face order. Preserving this order, however, adds information that increases the complexity. We present a fast 3D mesh compression method that compresses the redundant shared-vertex information between neighboring faces using simple first-order differential coding followed by fast entropy coding with a fixed-length prefix. Our algorithm is suitable for low-complexity designs and maintains the order; the method is now part of the MPEG-4 scalable complexity 3D mesh compression standard. The proposed algorithm is 30 times faster than the MPEG-4 3D mesh coding extension.
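
The core of the method above is first-order differential (delta) coding of the vertex indices shared between neighboring faces, so that the entropy coder mostly sees small residuals. The snippet below is only a generic illustration of delta coding of an index stream; it is not the codec specified by the MPEG-4 standard, and the index sequence is hypothetical.

```python
def delta_encode(indices):
    """First-order differential coding: store each index as the
    difference from the previous one (small values compress well)."""
    prev, out = 0, []
    for v in indices:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas):
    """Inverse of delta_encode."""
    prev, out = 0, []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

if __name__ == "__main__":
    # Hypothetical shared-vertex index stream for a triangle strip.
    shared = [10, 11, 12, 12, 13, 14, 14, 15, 16]
    deltas = delta_encode(shared)
    assert delta_decode(deltas) == shared
    print(deltas)   # mostly 0s and 1s, cheap to entropy-code
```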

A Fast Inter Prediction Encoding Algorithm for Real-time Compression of H.264/AVC with High Complexity (고 복잡도 H.264/AVC의 실시간 압축을 위한 고속 인터 예측 부호화 기법)

  • 김영현;최현준;서영호;김동욱
    • Proceedings of the Institute of Electronics Engineers of Korea (대한전자공학회 학술대회논문집) / 2006 Summer Conference / pp.411-412 / 2006
  • In this paper, we propose a fast algorithm for inter prediction, which accounts for most of the complexity in H.264/AVC. The algorithm decides the search range according to the direction of the predicted motion vector and then performs an adaptive candidate spiral search. At the same time, it performs motion estimation with a variable number of loop iterations, controlled by thresholds, for the variable block sizes. The scheme is implemented in the JM reference software's FME with high-complexity rate-distortion optimization enabled. Experimental results show that a significant complexity reduction is achieved while the degradation in video quality is negligible. (A generic sketch of threshold-terminated block matching around a predicted motion vector appears below.)

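As a rough illustration of the general idea (centering the search on a predicted motion vector and stopping once the matching cost drops below a threshold), here is a hedged NumPy sketch of block-matching motion estimation. The spiral scan order, the direction-dependent search range, and the per-block-size thresholds of the paper are not reproduced; the window size, block size, and threshold below are arbitrary illustrative values.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def motion_search(cur, ref, bx, by, pred_mv=(0, 0), rng=8, block=16, thresh=256):
    """Search `ref` for the block of `cur` at (bx, by).

    The window is centered on the predicted motion vector `pred_mv`, and
    the search stops early once the SAD falls below `thresh`
    (threshold-based early termination, loosely following the abstract).
    """
    target = cur[by:by + block, bx:bx + block]
    best_mv, best_cost = (0, 0), float("inf")
    h, w = ref.shape
    for dy in range(pred_mv[1] - rng, pred_mv[1] + rng + 1):
        for dx in range(pred_mv[0] - rng, pred_mv[0] + rng + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= w - block and 0 <= y <= h - block:
                cost = sad(target, ref[y:y + block, x:x + block])
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
                    if best_cost < thresh:        # early termination
                        return best_mv, best_cost
    return best_mv, best_cost

if __name__ == "__main__":
    rng_ = np.random.default_rng(1)
    ref = rng_.integers(0, 256, (64, 64), dtype=np.uint8)
    # cur is ref shifted right by 3 and down by 2, so the best mv is (-3, -2).
    cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
    print(motion_search(cur, ref, 16, 16, pred_mv=(0, 0)))
```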

Integrating Writing Activity with Verbal Sharing: Effect on Diversity of Vocabulary and Complexity of Expression in Young Children (이야기 나누기를 통합한 쓰기 활동이 어휘의 다양성과 표현의 복합성에 미치는 효과)

  • 이병래
    • Korean Journal of Child Studies (아동학회지) / Vol. 21, No. 2 / pp.45-56 / 2000
  • This study investigated the effects of writing activity integrated with verbal sharing on the diversity of vocabulary and the complexity of expression in 33 five-year-old children attending a private kindergarten in Seoul. Subjects were divided into a control group of 16 children and an experimental group of 17 children. The experimental group participated in a 7-week program of writing activities integrated with verbal sharing time. The instruments used for the pre- and post-tests were the writing ability test (Nam Mi Jung, 1996) and the complexity of sentences test (Young Hee No, 1994). Data were analyzed by ANCOVA. The results revealed significant differences between the experimental and control groups in the children's diversity of vocabulary and complexity of expression.


New Time-Domain Decoder for Correcting both Errors and Erasures of Reed-Solomon Codes

  • Lu, Erl-Huei;Chen, Tso-Cho;Shih, Chih-Wen
    • ETRI Journal / Vol. 38, No. 4 / pp.612-621 / 2016
  • A new time-domain decoder for Reed-Solomon (RS) codes is proposed. Because this decoder can correct both errors and erasures without computing the erasure locator, errata locator, or errata evaluator polynomials, the computational complexity can be substantially reduced. To demonstrate this benefit, complexity comparisons between the proposed decoder and the Truong-Jeng-Hung and Lin-Costello decoders are presented. These comparisons show that the proposed decoder consistently has lower computational requirements than both related decoders when correcting any combination of $\nu$ errors and $\mu$ erasures under the condition $2\nu + \mu \leq d_{\min} - 1$, where $d_{\min}$ denotes the minimum distance of the RS code. Finally, the (255, 223) and (63, 39) RS codes are used as examples for complexity comparisons under the upper-bound condition $2\nu + \mu = d_{\min} - 1$. For these two RS codes, the new decoder saves about 40% of the additions and multiplications when $\mu = d_{\min} - 1$ compared with the two related decoders. Furthermore, it also saves 50% of the required inverses for $0 \leq \mu \leq d_{\min} - 1$.
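
For context, the condition quoted in the abstract is the standard errors-and-erasures bound for RS decoding: a code with minimum distance $d_{\min}$ can simultaneously correct $\nu$ errors and $\mu$ erasures whenever $2\nu + \mu \leq d_{\min} - 1$. A tiny sketch of that check (the specific decoders compared in the paper are not implemented here):

```python
def correctable(nu, mu, n, k):
    """Errors-and-erasures bound for an (n, k) Reed-Solomon code.

    RS codes are MDS, so d_min = n - k + 1; nu errors and mu erasures
    are simultaneously correctable iff 2*nu + mu <= d_min - 1.
    """
    d_min = n - k + 1
    return 2 * nu + mu <= d_min - 1

if __name__ == "__main__":
    # (255, 223) RS code: d_min = 33, so up to 16 errors (no erasures)
    # or up to 32 erasures (no errors) can be corrected.
    print(correctable(16, 0, 255, 223))   # True
    print(correctable(0, 32, 255, 223))   # True
    print(correctable(17, 0, 255, 223))   # False
```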

Computation and Communication Efficient Key Distribution Protocol for Secure Multicast Communication

  • Vijayakumar, P.;Bose, S.;Kannan, A.;Jegatha Deborah, L.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 4 / pp.878-894 / 2013
  • Secure multimedia multicast applications involve group communications in which group membership requires secure dynamic key generation and update operations. Such operations usually consume considerable computation time, so designing a key distribution protocol with reduced computation time is necessary for multicast applications. In this paper, we propose a new key distribution protocol that focuses on two aspects. The first aims at reducing the computation complexity by performing fewer multiplication operations, using a ternary-tree approach during key updates; it further optimizes the number of multiplications by using the existing Karatsuba divide-and-conquer approach for fast multiplication. The second aspect aims at reducing the amount of key information communicated to the group members during update operations. The proposed algorithm is evaluated in terms of computation and communication complexity, and a comparative performance analysis of various key distribution protocols is provided. The results show that the proposed algorithm reduces the computation and communication time significantly.
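
The fast multiplication referred to above is the classical Karatsuba divide-and-conquer scheme, which replaces four half-size multiplications with three, giving roughly $O(n^{1.585})$ digit operations instead of $O(n^2)$. A minimal integer version for illustration (the protocol's tree-based key computations are not shown):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's method."""
    if x < 10 or y < 10:          # base case: single-digit operand
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    xh, xl = divmod(x, base)
    yh, yl = divmod(y, base)
    a = karatsuba(xh, yh)                     # high parts
    b = karatsuba(xl, yl)                     # low parts
    c = karatsuba(xh + xl, yh + yl) - a - b   # cross term via one multiply
    return a * base * base + c * base + b

if __name__ == "__main__":
    import random
    for _ in range(100):
        x, y = random.getrandbits(256), random.getrandbits(256)
        assert karatsuba(x, y) == x * y
    print("ok")
```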

A Study of High Speed Retrieval Algorithm of Long Component Keyword (복합키워드의 고속검색 알고리즘에 관한 연구)

  • 이진관;정규철;이태헌;박기홍
    • Journal of the Korea Institute of Information and Communication Engineering (한국정보통신학회논문지) / Vol. 8, No. 8 / pp.1769-1776 / 2004
  • Efficient keyword extraction is important in information retrieval systems, and there are several methods for deciding which of the many candidate keywords are appropriate. Among them, the DER structure, which addresses the limitation of the AC algorithm of retrieving only single keywords, supports compound-keyword retrieval but suffers from long search times. To solve this problem, this paper builds an algorithm called the EDER structure, which extends the DER structure's search method with an independent search table. Experiments on 500 text files showed that the keyword posting results of the EDER structure were smaller than those of the AC-based DER structure, and the search time was also faster, 0.2 seconds for the EDER structure versus 0.6 seconds for the DER structure at K5, demonstrating that the proposed method is effective.
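
The AC algorithm referenced above is presumably the Aho-Corasick multi-pattern matcher that the DER/EDER structures build on; those structures themselves are specific to the paper and are not reproduced here. For orientation, a compact sketch of plain Aho-Corasick search over a set of keywords:

```python
from collections import deque

def build_automaton(keywords):
    """Build an Aho-Corasick automaton: goto trie, failure links, output sets."""
    goto, fail, out = [{}], [0], [set()]
    for kw in keywords:
        state = 0
        for ch in kw:
            if ch not in goto[state]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(kw)
    # Failure links via breadth-first traversal of the trie.
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(text, keywords):
    """Return (end_index, keyword) pairs for every keyword occurrence."""
    goto, fail, out = build_automaton(keywords)
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for kw in out[state]:
            hits.append((i, kw))
    return hits

if __name__ == "__main__":
    print(search("information retrieval system",
                 ["form", "retrieval", "system", "eva"]))
```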

Improving Lookup Time Complexity of Compressed Suffix Arrays using Multi-ary Wavelet Tree

  • Wu, Zheng;Na, Joong-Chae;Kim, Min-Hwan;Kim, Dong-Kyue
    • Journal of Computing Science and Engineering / Vol. 3, No. 1 / pp.1-4 / 2009
  • In a given text T of size n, we need to search for the information in which we are interested. To support fast searching, an index must be constructed by preprocessing the text. The suffix array is one such index data structure. The compressed suffix array (CSA) is one of the compressed indices based on the regularity of the suffix array, and it can be compressed to the $k$-th order empirical entropy. In this paper we improve the lookup time complexity of the compressed suffix array by using a multi-ary wavelet tree at the cost of more space. In our implementation, the lookup time complexity of the compressed suffix array is $O(\log_{\sigma}^{\varepsilon/(1-\varepsilon)} n \log_r \sigma)$, and the space of the compressed suffix array is $\varepsilon^{-1} n H_k(T) + O(n \log\log n / \log_{\sigma}^{\varepsilon} n)$ bits, where $\sigma$ is the size of the alphabet, $H_k$ is the $k$-th order empirical entropy, $r$ is the branching factor of the multi-ary wavelet tree such that $2 \leq r \leq \sqrt{n}$ and $r \leq O(\log_{\sigma}^{1-\varepsilon} n)$, and $0 < \varepsilon < 1/2$ is a constant.
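
A wavelet tree answers rank queries (how many occurrences of a symbol appear in a prefix of a sequence), which is the operation a CSA lookup needs repeatedly. Below is a hedged sketch of a plain binary wavelet tree with rank support; the paper's multi-ary (r-ary) variant and its interaction with the CSA are not reproduced.

```python
class WaveletTree:
    """Binary wavelet tree over a sequence of symbols.

    rank(c, i) counts occurrences of symbol c in seq[:i].
    """
    def __init__(self, seq, alphabet=None):
        self.alphabet = sorted(set(seq)) if alphabet is None else alphabet
        if len(self.alphabet) <= 1:
            self.leaf = True
            self.length = len(seq)
            return
        self.leaf = False
        mid = len(self.alphabet) // 2
        self.left_syms = set(self.alphabet[:mid])
        # Bitmap: 0 if the symbol goes to the left child, 1 otherwise.
        bits = [0 if s in self.left_syms else 1 for s in seq]
        # Prefix sums of the bitmap for O(1) binary rank.
        self.ones = [0]
        for b in bits:
            self.ones.append(self.ones[-1] + b)
        self.left = WaveletTree([s for s in seq if s in self.left_syms],
                                self.alphabet[:mid])
        self.right = WaveletTree([s for s in seq if s not in self.left_syms],
                                 self.alphabet[mid:])

    def rank(self, c, i):
        """Number of occurrences of symbol c in the first i positions."""
        if self.leaf:
            return min(i, self.length)
        ones = self.ones[i]
        if c in self.left_syms:
            return self.left.rank(c, i - ones)
        return self.right.rank(c, ones)

if __name__ == "__main__":
    s = "mississippi"
    wt = WaveletTree(list(s))
    assert wt.rank("s", 5) == s[:5].count("s")   # 2
    assert wt.rank("i", 11) == s.count("i")      # 4
    print("ok")
```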

Moving Object Detection Using Sparse Approximation and Sparse Coding Migration

  • Li, Shufang;Hu, Zhengping;Zhao, Mengyao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 5 / pp.2141-2155 / 2020
  • To meet the requirements of background change, illumination variation, moving-shadow interference, and high accuracy in object detection from a moving camera, while striving for real-time operation and high efficiency, this paper presents an object detection algorithm based on sparse approximation recursion and sparse coding migration in subspace. First, low-rank sparse decomposition is used to reduce the dimension of the data. Combined with dictionary-based sparse representation, the computational model is established by a recursive formula for sparse approximation, with the video sequences taken as subspace sets, and the moving object is obtained by the background difference method, which effectively reduces the computational complexity and running time. Following the idea of sparse coding migration, the above operations are carried out in a down-sampled space to further reduce the computational and memory requirements; this adapts the method to multi-scale target objects and mitigates the impact of large anomaly areas. Finally, experiments are carried out on the VDAO dataset, which contains 59 sets of videos. The experimental results show that the algorithm can detect moving objects effectively from a camera moving at uniform speed, with both low computational complexity and low storage requirements, making it suitable for detection systems with strict real-time requirements.
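
As a minimal point of reference for the background-difference step mentioned above, here is a hedged NumPy sketch: a running-median background model with thresholded frame differencing. The paper's low-rank sparse decomposition, dictionary learning, and sparse coding migration are not reproduced, and the history length and threshold below are arbitrary illustrative values.

```python
import numpy as np
from collections import deque

def detect_foreground(frames, history=10, thresh=25):
    """Yield a boolean foreground mask per frame via background differencing.

    The background is modeled as the pixel-wise median of the last
    `history` frames; pixels deviating by more than `thresh` are foreground.
    """
    bg_stack = deque(maxlen=history)
    for frame in frames:
        frame = frame.astype(np.float32)
        if bg_stack:
            background = np.median(np.stack(list(bg_stack)), axis=0)
            mask = np.abs(frame - background) > thresh
        else:
            mask = np.zeros(frame.shape, dtype=bool)
        bg_stack.append(frame)
        yield mask

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    static = rng.integers(0, 255, (48, 64)).astype(np.float32)
    frames = [static + rng.normal(0, 2, static.shape) for _ in range(20)]
    frames[15][10:20, 20:30] += 120        # a bright "object" enters frame 15
    masks = list(detect_foreground(frames))
    print(masks[15][10:20, 20:30].mean())  # close to 1.0 inside the object
```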

Energy-Efficient Scheduling with Delay Constraints in Time-Varying Uplink Channels

  • Kwon, Ho-Joong;Lee, Byeong-Gi
    • Journal of Communications and Networks / Vol. 10, No. 1 / pp.28-37 / 2008
  • In this paper, we investigate the problem of minimizing the average transmission power of users while guaranteeing average delay constraints in time-varying uplink channels. We design a scheduler that selects a user for transmission and determines the transmission rate of the selected user based on the channel and backlog information of the users. Since determining an optimal scheduler for multi-user systems requires prohibitively high computational complexity, we propose a low-complexity scheduling scheme that can achieve near-optimal performance. In this scheme, we reduce the complexity by decomposing the multi-user problem into multiple individual user problems: we arrange the probability of selecting each user so that it can be determined using only the information of the corresponding user, and then optimize the transmission rate of each user independently. We solve the per-user problem with a dynamic programming approach and analyze upper and lower bounds on the average transmission power and the average delay, respectively. In addition, we investigate the effect of the user selection algorithm on performance for different channel models. We show that a channel-adaptive user selection algorithm can improve energy efficiency under uncorrelated channels, but that the gain is obtainable only for loose delay requirements in the case of correlated channels. Based on this, we propose a user selection algorithm that adapts to both the channel condition and the backlog level, which turns out to be energy-efficient over a wide range of delay requirements regardless of the channel model.
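
The per-user problem described above is a power-delay trade-off that can be treated with dynamic programming: the state is the queue backlog, the action is how much to transmit in the current slot, and the stage cost combines transmission power with a backlog (delay) penalty. The sketch below is a generic finite-horizon value iteration over a toy model with a convex rate-power function; the channel statistics, arrival process, cost weights, and rate set are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy single-user model: illustrative assumptions, not the paper's parameters.
MAX_Q = 10                      # maximum backlog (packets)
RATES = [0, 1, 2, 3]            # packets that can be sent in one slot
GAINS = [0.5, 1.0, 2.0]         # possible channel gains
P_GAIN = [0.3, 0.4, 0.3]        # probability of each channel gain
ARRIVAL = 1                     # packets arriving per slot (deterministic)
DELAY_W = 1.0                   # weight of the backlog (delay) penalty
HORIZON = 50                    # number of slots in the finite horizon

def power(rate, gain):
    """Convex rate-power relation: sending faster over a weak channel
    costs disproportionately more power (Shannon-like shape)."""
    return (2.0 ** rate - 1.0) / gain

def value_iteration():
    """Finite-horizon dynamic programming over the backlog state.

    V[q] approximates the expected cost-to-go (power plus weighted delay)
    when the current backlog is q and rates are chosen optimally.
    """
    V = np.zeros(MAX_Q + 1)
    for _ in range(HORIZON):
        newV = np.zeros_like(V)
        for q in range(MAX_Q + 1):
            exp_cost = 0.0
            for gain, pg in zip(GAINS, P_GAIN):
                best = float("inf")
                for r in RATES:
                    if r > q:
                        continue            # cannot send more than the backlog
                    nq = min(q - r + ARRIVAL, MAX_Q)
                    cost = power(r, gain) + DELAY_W * nq + V[nq]
                    best = min(best, cost)
                exp_cost += pg * best
            newV[q] = exp_cost
        V = newV
    return V

if __name__ == "__main__":
    print(np.round(value_iteration(), 2))   # cost-to-go grows with the backlog
```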