• Title/Summary/Keyword: Worst case


The Sampled-Data $H_{\infty}$ Problem: Obtaining an equivalent discrete-time system via a closed-loop expression of worst-case disturbance (샘플치 $H_{\infty}$ 문제: 최악의 외란의 폐경로 표현을 통한 등가의 이산시간 시스템 구현)

  • 공민종;조창호;이상철;조도현;이상효
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2000.10a
    • /
    • pp.340-340
    • /
    • 2000
  • This paper aims at deriving an equivalent finite-dimensional discrete-time system for the $H_{\infty}$-type problem for sampled-data control systems. A widely used approach is based on the lifting technique, but it requires somewhat complicated computation. Instead, this paper derives an equivalent finite-dimensional discrete-time system directly from a description of the sampled-data system, which is obtained via a closed-loop expression of the worst-case intersample disturbance.

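
For orientation, a standard statement of the design requirement involved (generic notation, not taken from the paper): the sampled-data $H_{\infty}$ problem asks that the $\mathcal{L}_2$-induced norm of the closed-loop operator from the continuous-time disturbance $w$ to the controlled output $z$ stay below a prescribed level $\gamma$, and the equivalent discrete-time system sought in the paper is a finite-dimensional one whose $H_{\infty}$ condition is equivalent to this bound.

```latex
% Sketch of the sampled-data H-infinity criterion (generic symbols: T_{zw} is
% the closed-loop operator, gamma > 0 the prescribed attenuation level).
\| T_{zw} \|_{\mathcal{L}_2\text{-induced}}
  \;=\; \sup_{0 \neq w \in \mathcal{L}_2}
        \frac{\| z \|_{\mathcal{L}_2}}{\| w \|_{\mathcal{L}_2}}
  \;<\; \gamma
```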

On Diagonal Loading for Robust Adaptive Beamforming Based on Worst-Case Performance Optimization

  • Lin, Jing-Ran;Peng, Qi-Cong;Shao, Huai-Zong
    • ETRI Journal
    • /
    • v.29 no.1
    • /
    • pp.50-58
    • /
    • 2007
  • Robust adaptive beamforming based on worst-case performance optimization is investigated in this paper. It improves robustness against steering vector mismatches through diagonal loading. A closed-form solution for the optimal loading is derived after some approximations. Besides reducing the computational complexity, it shows how different factors affect the optimal loading. Based on this solution, a performance analysis of the beamformer is carried out. As a consequence, approximate closed-form expressions for the source-of-interest power estimate and the output signal-to-interference-plus-noise ratio are presented in order to predict its performance. Numerical examples show that the proposed closed-form expressions are very close to the actual values.

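
As an illustration of the diagonal-loading idea in the abstract above, here is a minimal sketch (a generic formulation, not the paper's method; the loading level `gamma` below is a free designer parameter, not the paper's closed-form optimal value): the sample covariance matrix is loaded before forming the minimum-variance weights, which is what gives robustness to steering-vector mismatch.

```python
import numpy as np

def diagonally_loaded_mvdr(snapshots, steering, gamma):
    """Minimum-variance beamformer with diagonal loading.

    snapshots : (num_sensors, num_snapshots) complex array of received data
    steering  : (num_sensors,) presumed steering vector of the source of interest
    gamma     : non-negative diagonal loading level (chosen by the designer)
    """
    num_sensors = snapshots.shape[0]
    # Sample covariance matrix of the received data.
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Diagonal loading regularizes R against steering-vector mismatch
    # and small-sample errors.
    R_loaded = R + gamma * np.eye(num_sensors)
    # Standard MVDR weights computed from the loaded covariance.
    Ri_a = np.linalg.solve(R_loaded, steering)
    return Ri_a / (steering.conj() @ Ri_a)

# Example: 8-element array, 200 snapshots of noise plus a unit-power source.
rng = np.random.default_rng(0)
a = np.exp(1j * np.pi * np.arange(8) * np.sin(np.deg2rad(10)))
x = np.outer(a, rng.standard_normal(200)) + 0.1 * (
    rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
w = diagonally_loaded_mvdr(x, a, gamma=0.05)
print(abs(w.conj() @ a))  # distortionless response toward the presumed steering vector
```

By construction the distortionless constraint $w^{H}a = 1$ is preserved, which the final print statement checks.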

Robust Quick String Matching Algorithm for Network Security (네트워크 보안을 위한 강력한 문자열 매칭 알고리즘)

  • Lee, Jong Woock;Park, Chan Kil
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.4
    • /
    • pp.135-141
    • /
    • 2013
  • String matching is one of the key algorithms in network security, and many areas could benefit from a faster string matching algorithm. Based on the most efficient string matching algorithm in usual applications, the Boyer-Moore (BM) algorithm, a novel algorithm called RQS is proposed. RQS utilizes an improved bad-character heuristic to achieve larger shift values and an enhanced good-suffix heuristic to dramatically improve the worst-case performance. The two heuristics, combined with a novel determinant condition to switch between them, enable RQS to achieve higher performance than BM in both normal and worst-case situations. The experimental results reveal that RQS is many times more efficient than BM in the worst case, and the longer the pattern, the bigger the performance improvement. The performance of RQS is 7.57~36.34% higher than BM in English text searching, 16.26~26.18% higher than BM in uniformly random text searching, and 9.77% higher than BM in searching the real-world Snort pattern set.
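
To make the heuristics concrete, here is a minimal sketch of the textbook Boyer-Moore bad-character rule that BM-family matchers such as RQS build on (illustrative only; the enhanced good-suffix rule and the switching condition of RQS are not reproduced here):

```python
def bad_character_table(pattern):
    """Rightmost position of each character in the pattern (textbook BM rule)."""
    return {ch: i for i, ch in enumerate(pattern)}

def bm_bad_character_search(text, pattern):
    """Return indices of all occurrences of pattern in text using only the
    bad-character heuristic (worst case O(n*m); the good-suffix rule that
    RQS strengthens is what bounds the worst case)."""
    last = bad_character_table(pattern)
    m, n = len(pattern), len(text)
    matches, s = [], 0
    while s <= n - m:
        j = m - 1
        # Compare the pattern against the current window from right to left.
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            matches.append(s)
            s += 1
        else:
            # Align the rightmost occurrence of the mismatched character
            # under the mismatch position; always shift by at least one.
            s += max(1, j - last.get(text[s + j], -1))
    return matches

print(bm_bad_character_search("abracadabra", "abra"))  # [0, 7]
```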

Finding the Worst-case Instances of Some Sorting Algorithms Using Genetic Algorithms (유전 알고리즘을 이용한 정렬 알고리즘의 최악의 인스턴스 탐색)

  • Jeon, So-Yeong;Kim, Yong-Hyuk
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2010.06b
    • /
    • pp.1-5
    • /
    • 2010
  • Taking the number of element comparisons performed by a sorting algorithm as the criterion, we call a permutation that induces a large number of comparisons a worst-case instance, and we use a genetic algorithm to search for such instances. Experiments were run on the well-known quick sort, merge sort, heap sort, insertion sort, shell sort, and an advanced quick sort. For merge sort and insertion sort, the instances found were very close to worst-case instances. For quick sort, finding worst-case instances became harder as the problem size increased. For the remaining sorts, the instances found cannot be theoretically guaranteed to be worst-case instances, but their comparison counts were far higher than the average comparison count obtained by sorting 1,000 random permutations. The attempt made in this paper to search for worst-case instances is significant in that it generates test data for verifying the performance of algorithms.

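
A minimal sketch of the search idea in the abstract above, assuming the fitness of a permutation is simply the instrumented comparison count of the sort under test (the paper's encoding, crossover operator, and parameter settings are not reproduced; insertion sort is used here because its true worst case, n(n-1)/2 comparisons, is easy to check against):

```python
import random

def counted_insertion_sort(perm):
    """Insertion sort instrumented to count element comparisons (the fitness)."""
    a, comparisons = list(perm), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return comparisons

def evolve_worst_case(n=30, pop_size=40, generations=200, seed=0):
    """Evolve permutations of size n that maximize the comparison count."""
    rng = random.Random(seed)
    population = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the permutations with the most comparisons.
        population.sort(key=counted_insertion_sort, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)  # swap mutation keeps it a permutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    best = max(population, key=counted_insertion_sort)
    return best, counted_insertion_sort(best)

best_perm, cost = evolve_worst_case()
print(cost)  # the theoretical maximum for insertion sort on 30 elements is 30*29/2 = 435
```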

The Sampled-Data $H_{\infty}$ Problem: Applying the Discretization Method via a Closed-Loop Expression of Worst-Case Disturbance (샘플치 $H_{\infty}$ 문제: 최악의 외란의 폐경로 표현을 통한 이산화 기법 적용)

  • 조도현;박진홍
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.7
    • /
    • pp.967-974
    • /
    • 2001
  • This paper aims at deriving an equivalent finite-dimensional discrete-time system for the $H_{\infty}$-type problem for sampled-data control systems. A widely used approach is based on the lifting technique, but it requires somewhat complicated computation. Instead, this paper derives an equivalent finite-dimensional discrete-time system directly from a description of the sampled-data system, which is obtained via a closed-loop expression of the worst-case intersample disturbance.


Estimation of the Terminal Velocity of the Worst-Case Fragment in an Underwater Torpedo Explosion Using an MM-ALE Finite Element Simulation (MM-ALE 유한요소 시뮬레이션을 이용한 수중 어뢰폭발에서의 최악파편의 종단속도 추정)

  • Choi, Byung-Hee;Ryu, Chang-Ha
    • Explosives and Blasting
    • /
    • v.37 no.3
    • /
    • pp.13-24
    • /
    • 2019
  • This paper investigates the behavior of fragments in an underwater torpedo explosion beneath a frigate or other surface ship by using an explicit finite element analysis. In this study, a fluid-structure interaction (FSI) methodology, called the multi-material arbitrary Lagrangian-Eulerian (MM-ALE) approach in LS-DYNA, was employed to obtain the responses of the torpedo fragments and the frigate hull to the explosion. The Euler models for the analysis comprised air, water, and explosive, while the Lagrange models consisted of the fragment and the hull. The focus of this modeling was to examine whether a worst-case fragment could penetrate a frigate hull located close (4.5 m) to the exploding torpedo. The simulation was performed in two separate steps. First, with the assumption that the expanding skin of the torpedo had been torn apart by consuming 30% of the explosive energy, the initial velocity of the worst-case fragment was estimated based on a well-known experimental result concerning fragment velocity in underwater bomb explosions. Then, in the second step, the terminal velocity of the worst-case fragment, expected to be reached before the fragment hits the frigate hull, was sought. Under the given conditions, the possible initial velocities of the worst-case fragment were found to be very high (400 and 1000 m/s), but the velocity difference between the fragment and the hull was merely 4 m/s at the instant of collision. This result is likely due both to the tremendous drag force exerted by the water and to the non-failure condition given to the frigate hull. At least under the given conditions, it appears that the worst-case fragment would seldom penetrate the frigate hull because there is no significant velocity difference between them.
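
A back-of-envelope sketch of why water drag removes almost all of the fragment's initial velocity over a few metres, which is the effect the abstract attributes the small impact-velocity difference to (illustrative only and separate from the paper's MM-ALE model; the mass, frontal area, and drag coefficient below are assumed placeholder values):

```python
import math

def velocity_after_travel(v0, distance, mass=0.1, area=1e-3, cd=1.0, rho=1000.0):
    """Fragment speed after travelling `distance` metres through water under
    quadratic drag only: m*v*dv/dx = -0.5*rho*cd*area*v**2, which integrates
    to v(x) = v0 * exp(-k*x/m) with k = 0.5*rho*cd*area."""
    k = 0.5 * rho * cd * area
    return v0 * math.exp(-k * distance / mass)

# The two initial velocities considered in the abstract, over the 4.5 m standoff.
for v0 in (400.0, 1000.0):
    print(v0, "->", round(velocity_after_travel(v0, 4.5), 6), "m/s after 4.5 m")
```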

A Study on Worst-Case Analysis of 2-Stage Amplifier in Ku-Band SSPA for Communication Satellite (위성채용 Ku-Band SSPA를 위한 2-단 증폭기의 Worst-Case Analysis에 관한 연구)

  • Ki Hyeok Jeong;Sang Woong Lee;Young Chul Lee;Chull Chai Shin
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.29A no.3
    • /
    • pp.33-40
    • /
    • 1992
  • In this paper, we designed a 2-stage amplifier for a communication satellite transponder with dual GaAs FETs (NE13783) and performed a worst-case analysis for circuit qualification in the space environment. The amplifier bandwidth was chosen as 11.7-12.2 GHz, the downlink frequency band for domestic Ku-band satellite communication, and an alumina substrate with a thickness of 25 mil was used. The design and optimization of the amplifier were carried out with the commercial CAD program TOUCHSTONE, and the measurements were performed with an automatic network analyzer.

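
As a generic illustration of what a worst-case (corner) analysis does, here is a minimal sketch in which every +/- tolerance corner of a set of hypothetical stage parameters is evaluated and the extreme overall gains are reported (the gain expression and the tolerance numbers are invented placeholders, not the paper's circuit models):

```python
from itertools import product

# Hypothetical nominal stage gains (dB) and their tolerances; a real worst-case
# analysis would sweep the actual device and matching-network parameters.
nominal = {"gain_stage1_db": 7.0, "gain_stage2_db": 6.5, "loss_matching_db": -1.2}
tolerance = {"gain_stage1_db": 0.8, "gain_stage2_db": 0.8, "loss_matching_db": 0.3}

def total_gain_db(params):
    """Overall 2-stage gain as the sum of stage gains and losses in dB."""
    return sum(params.values())

# Evaluate every +/- tolerance corner (worst-case corner analysis).
corners = []
for signs in product((-1.0, +1.0), repeat=len(nominal)):
    params = {k: nominal[k] + s * tolerance[k] for k, s in zip(nominal, signs)}
    corners.append(total_gain_db(params))

print("nominal gain:", total_gain_db(nominal), "dB")
print("worst-case gain range:", min(corners), "to", max(corners), "dB")
```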

Counter-Based Approaches for Efficient WCET Analysis of Multicore Processors with Shared Caches

  • Ding, Yiqiang;Zhang, Wei
    • Journal of Computing Science and Engineering
    • /
    • v.7 no.4
    • /
    • pp.285-299
    • /
    • 2013
  • To enable hard real-time systems to take advantage of multicore processors, it is crucial to obtain the worst-case execution time (WCET) for programs running on multicore processors. However, this is challenging and complicated due to the inter-thread interferences from the shared resources in a multicore processor. Recent research used the combined cache conflict graph (CCCG) to model and compute the worst-case inter-thread interferences on a shared L2 cache in a multicore processor, which is called the CCCG-based approach in this paper. Although it can compute the WCET safely and accurately, its computational complexity is exponential and prohibitive for a large number of cores. In this paper, we propose three counter-based approaches to significantly reduce the complexity of multicore WCET analysis, while achieving absolute safety with tightness close to that of the CCCG-based approach. The basic counter-based approach simply counts the worst-case number of cache line blocks mapped to a cache set of a shared L2 cache from all the concurrent threads, and compares it with the associativity of the cache set to compute the worst-case cache behavior. The enhanced counter-based approach uses techniques to improve the accuracy of calculating the counters. The hybrid counter-based approach combines the enhanced counter-based approach and the CCCG-based approach to further improve the tightness of the analysis without significantly increasing the complexity. Our experiments on a 4-core processor indicate that the enhanced counter-based approach overestimates the WCET by 14% on average compared to the CCCG-based approach, while its average running time is less than 1/380 that of the CCCG-based approach. The hybrid approach reduces the overestimation to only 2.65%, while its running time is less than 1/150 that of the CCCG-based approach on average.
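
A minimal sketch of the basic counter-based idea as stated in the abstract: per L2 cache set, count the worst-case number of distinct cache blocks that the concurrent threads can map to that set and compare the count with the set associativity (the paper's integration with WCET path analysis and its enhanced and hybrid variants are not reproduced; the modulo set mapping and the example numbers are assumptions for illustration):

```python
from collections import defaultdict

def shared_l2_set_counters(thread_block_addresses, num_sets):
    """Count, per L2 cache set, the worst-case number of distinct cache blocks
    mapped there across all concurrent threads."""
    counters = defaultdict(set)
    for blocks in thread_block_addresses:          # one iterable of block addresses per thread
        for block in blocks:
            counters[block % num_sets].add(block)  # set index by simple modulo mapping
    return {s: len(blocks) for s, blocks in counters.items()}

def always_fit_sets(counters, associativity):
    """Sets whose worst-case block count fits within the associativity; accesses
    to blocks of all other sets must conservatively be treated as worst-case misses."""
    return {s for s, count in counters.items() if count <= associativity}

# Example: two threads sharing a 4-set, 2-way L2 cache (tiny illustrative numbers).
counters = shared_l2_set_counters([[0, 4, 8], [1, 5, 12]], num_sets=4)
print(counters)                              # {0: 4, 1: 2}
print(always_fit_sets(counters, associativity=2))  # {1}: only set 1 fits in 2 ways
```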

On a Simple and Stable Merging Algorithm (단순하고 스테이블한 머징알고리즘)

  • Kim, Pok-Son;Kutzner, Arne
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.4
    • /
    • pp.455-462
    • /
    • 2010
  • We investigate the worst-case complexity, in terms of the number of comparisons, of a simple and stable merging algorithm. The complexity analysis shows that the algorithm performs $O(m\log(n/m))$ comparisons for two sequences of sizes $m$ and $n$ with $m \leq n$. So, according to the lower bound for merging, $\Omega(m\log(n/m))$, the algorithm is asymptotically optimal regarding the number of comparisons. To prove the worst-case complexity we divide the domain of all inputs into two disjoint cases. For each of these cases we extract a special subcase and prove the asymptotic optimality for these two subcases. Using this knowledge about the special cases, we then prove the optimality for all remaining cases. By using this approach we give a transparent solution to the otherwise hardly tractable problem of delivering a clean complexity analysis for the algorithm.
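
To make the complexity statement concrete, here is a minimal sketch of a binary-search-and-rotate stable merge in the same family (a classical split-the-shorter-run scheme, shown for illustration; it is not claimed to be the authors' algorithm), which attains the asymptotically optimal $O(m\log(n/m))$ comparison bound discussed above:

```python
from bisect import bisect_left, bisect_right

def rotate(a, lo, mid, hi):
    """In-place block swap: a[lo:mid] and a[mid:hi] exchange places, order preserved."""
    a[lo:hi] = a[mid:hi] + a[lo:mid]

def stable_merge(a, first, middle, last):
    """Stably merge the adjacent sorted runs a[first:middle] and a[middle:last]."""
    len1, len2 = middle - first, last - middle
    if len1 == 0 or len2 == 0:
        return
    if len1 == 1:   # binary-insert the single left element into the right run
        rotate(a, first, middle, bisect_left(a, a[first], middle, last))
        return
    if len2 == 1:   # binary-insert the single right element into the left run
        rotate(a, bisect_right(a, a[middle], first, middle), middle, last)
        return
    # Split the shorter run at its midpoint and binary-search that element
    # in the longer run; rotate the two inner blocks together and recurse.
    if len1 <= len2:
        first_cut = first + len1 // 2
        second_cut = bisect_left(a, a[first_cut], middle, last)
    else:
        second_cut = middle + len2 // 2
        first_cut = bisect_right(a, a[second_cut], first, middle)
    rotate(a, first_cut, middle, second_cut)
    new_middle = first_cut + (second_cut - middle)
    stable_merge(a, first, first_cut, new_middle)
    stable_merge(a, new_middle, second_cut, last)

data = [1, 4, 7, 9, 2, 3, 5, 6, 8]
stable_merge(data, 0, 4, len(data))
print(data)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```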

Some Recent Results of Approximation Algorithms for Markov Games and their Applications

  • 장형수
    • Proceedings of the Korean Society of Computational and Applied Mathematics Conference
    • /
    • 2003.09a
    • /
    • pp.15-15
    • /
    • 2003
  • We provide some recent results on approximation algorithms for solving Markov games and discuss their applications to problems that arise in computer science. We consider a receding horizon approach as an approximate solution to two-person zero-sum Markov games with an infinite-horizon discounted cost criterion. We present error bounds from the optimal equilibrium value of the game when both players take “correlated” receding horizon policies that are based on exact or approximate solutions of receding finite-horizon subgames. Motivated by the worst-case optimal control of queueing systems by Altman, we then analyze error bounds when the minimizer plays the (approximate) receding horizon control and the maximizer plays the worst-case policy. We give two heuristic examples of the approximate receding horizon control. We extend “parallel rollout” and “hindsight optimization” into the Markov game setting within the framework of the approximate receding horizon approach and analyze their performance. In the parallel rollout approach, the minimizing player seeks to dynamically combine multiple heuristic policies in a set to improve the performance of all of the heuristic policies simultaneously, under the assumption that the maximizing player has chosen a fixed worst-case policy. Given $\varepsilon > 0$, we give the value of the receding horizon which guarantees that the parallel rollout policy with that horizon, played by the minimizer, “dominates” any heuristic policy in the set by $\varepsilon$. In the hindsight optimization approach, the minimizing player makes a decision based on his expected optimal hindsight performance over a finite horizon. We finally discuss practical implementations of the receding horizon approaches via simulation and applications.

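
For reference, the quantity the receding horizon policies approximate can be stated in standard form (generic notation, not the talk's own): the equilibrium value of the discounted two-person zero-sum Markov game satisfies Shapley's fixed-point equation, and an $H$-step receding horizon policy plays the first action of the equilibrium policy of the corresponding $H$-horizon subgame.

```latex
% Shapley equation for the discounted zero-sum Markov game (generic symbols:
% x state, a minimizer action, b maximizer action, c one-stage cost,
% gamma in (0,1) discount factor, val the matrix-game value operator).
V^{*}(x) = \operatorname*{val}_{a,\,b}\Big[ c(x,a,b)
           + \gamma \sum_{y} P(y \mid x,a,b)\, V^{*}(y) \Big]

% H-step receding horizon approximation, started from an estimate V_0 of V^*:
V_{k}(x) = \operatorname*{val}_{a,\,b}\Big[ c(x,a,b)
           + \gamma \sum_{y} P(y \mid x,a,b)\, V_{k-1}(y) \Big],
           \qquad k = 1,\dots,H
```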