• Title/Abstract/Keyword: probabilistic analysis of algorithms

점근적 분석 모형에 기초한 유한개 레코드 정렬 알고리즘 효율성의 확률적 분석 (Probabilistic analysis of efficiencies for sorting algorithms with a finite number of records based on an asymptotic algorithm analysis)

  • 김숙영
    • 한국컴퓨터산업학회논문지
    • /
    • Vol. 5, No. 2
    • /
    • pp.325-330
    • /
    • 2004
  • The O-notation used to assess sorting-algorithm efficiency is an asymptotic analysis result: it builds no model of data size and gives only rough information on how the number of sorting comparisons grows as the data size increases without bound. Efficiency tests of sorting algorithms are therefore also needed for applications that sort only a limited, finite number of records. We applied the insertion sort and quick sort algorithms to numerical data sets of 9,000 records or fewer and fitted models relating the number of element exchanges required for sorting to the number of records. For quick sort, whose efficiency is classified as O(n log n), the estimated model was $S = 0.9305N^{1.1339}$; for insertion sort, classified as $O(n^2)$, it was $S = 0.12232N^{2.013}$. Goodness-of-fit tests showed that the fitted models explain more than 99% of the relationship between the number of records and the number of element exchanges, providing strong probabilistic evidence of their adequacy. These results indicate the need for efficiency tests when the number of records to be sorted is small or when a newly developed sorting algorithm is evaluated.

  • PDF
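
The power-law models above can be reproduced in miniature: count the element exchanges each algorithm performs at several data sizes and fit $S = cN^b$ by least squares on the log-log scale. This is a sketch under assumed implementations (adjacent-swap insertion sort, Lomuto-partition quicksort), so the fitted constants will differ from the paper's; only the exponents are directly comparable.

```python
import random
import numpy as np

def insertion_sort_swaps(a):
    """Count the element exchanges performed by insertion sort."""
    a = list(a)
    swaps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            swaps += 1
            j -= 1
    return swaps

def quicksort_swaps(a):
    """Count the element exchanges performed by Lomuto-partition quicksort."""
    a = list(a)
    swaps = 0
    def qs(lo, hi):
        nonlocal swaps
        if lo >= hi:
            return
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                swaps += 1
                i += 1
        a[i], a[hi] = a[hi], a[i]
        swaps += 1
        qs(lo, i - 1)
        qs(i + 1, hi)
    qs(0, len(a) - 1)
    return swaps

def fit_power_law(sizes, counts):
    """Least-squares fit of S = c * N^b on the log-log scale."""
    b, log_c = np.polyfit(np.log(sizes), np.log(counts), 1)
    return np.exp(log_c), b

random.seed(0)
sizes = [250, 500, 1000, 2000, 3000]
data = [random.sample(range(10 * n), n) for n in sizes]
c_ins, b_ins = fit_power_law(sizes, [insertion_sort_swaps(d) for d in data])
c_qs, b_qs = fit_power_law(sizes, [quicksort_swaps(d) for d in data])
print(f"insertion sort: S ~ {c_ins:.4f} * N^{b_ins:.3f}")  # exponent close to 2
print(f"quick sort:     S ~ {c_qs:.4f} * N^{b_qs:.3f}")    # exponent slightly above 1
```

As in the paper, the insertion-sort exponent lands near 2 and the quicksort exponent slightly above 1, reflecting the extra log factor absorbed into the fitted power.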

Probabilistic penalized principal component analysis

  • Park, Chongsun;Wang, Morgan C.;Mo, Eun Bi
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 24, No. 2
    • /
    • pp.143-154
    • /
    • 2017
  • A variable selection method based on probabilistic principal component analysis (PCA) using the penalized likelihood method is proposed. The proposed method is a two-step variable reduction method. The first step uses the probabilistic principal component idea to identify principal components; the penalty function is used to identify the important variables in each component. We then build a model on the original data space, instead of on the data space rotated through latent variables (principal components), because the proposed method achieves dimension reduction by identifying important observed variables. Consequently, the proposed method is of more practical use. The proposed estimators perform as the oracle procedure and are root-n consistent with a proper choice of regularization parameters. The proposed method can be successfully applied to high-dimensional PCA problems in which a relatively large portion of irrelevant variables is included in the data set. It is straightforward to extend our likelihood method to problems with missing observations using EM algorithms. Further, it can be effectively applied in cases where some data vectors exhibit one or more values missing at random.

주성분 분석을 위한 새로운 EM 알고리듬 (New EM algorithm for Principal Component Analysis)

  • 안종훈;오종훈
    • 한국정보과학회:학술대회논문집
    • /
    • Proceedings of the KIISE 2001 Spring Conference, Vol. 28, No. 1 (B)
    • /
    • pp.529-531
    • /
    • 2001
  • We present an expectation-maximization algorithm for principal component analysis via orthogonalization. The algorithm finds the actual principal components, whereas previously proposed EM algorithms can find only the principal subspace. The new algorithm is simple and more efficient than probabilistic PCA, especially in noiseless cases. Conventional PCA requires computing the inverse of the covariance matrix, which makes the algorithm prohibitively expensive when the dimension of the data space is large. This EM algorithm is very powerful for high-dimensional data when only a few principal components are needed.

  • PDF
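
The EM iteration for PCA alternates two least-squares steps and never forms or inverts the full covariance matrix. The sketch below follows the standard zero-noise EM-PCA recursion with a final orthogonalization-and-rotation step to recover individual components; this is one plausible reading of the abstract, not necessarily the authors' exact scheme.

```python
import numpy as np

def em_pca(Y, k, n_iter=200, seed=0):
    """EM for PCA in the zero-noise limit.

    Y: (d, n) data matrix, columns are (centered) observations.
    Returns an orthonormal (d, k) matrix whose columns approximate the
    top-k principal components, obtained without touching the d x d
    covariance matrix.
    """
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    W = rng.standard_normal((d, k))
    for _ in range(n_iter):
        X = np.linalg.solve(W.T @ W, W.T @ Y)    # E-step: latent coordinates
        W = Y @ X.T @ np.linalg.inv(X @ X.T)     # M-step: update loadings
    # EM alone converges to the principal subspace; orthogonalize and
    # rotate within it to recover the actual components.
    Q, _ = np.linalg.qr(W)
    Z = Q.T @ Y
    _, _, Vt = np.linalg.svd(Z @ Z.T / n)        # small k x k problem
    return Q @ Vt.T

# Usage: recover the top-2 components of axis-aligned Gaussian data.
rng = np.random.default_rng(1)
C = np.diag([9.0, 4.0, 1.0, 0.25])
Y = rng.multivariate_normal(np.zeros(4), C, size=5000).T
Y -= Y.mean(axis=1, keepdims=True)
W = em_pca(Y, k=2)
```

With this covariance, the recovered columns align (up to sign) with the first two coordinate axes.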

Optimal Design of Inverse Electromagnetic Problems with Uncertain Design Parameters Assisted by Reliability and Design Sensitivity Analysis

  • Ren, Ziyan;Um, Doojong;Koh, Chang-Seop
    • Journal of Magnetics
    • /
    • Vol. 19, No. 3
    • /
    • pp.266-272
    • /
    • 2014
  • In this paper, we suggest reliability as a metric for evaluating the robustness of a design with respect to constraints under uncertainty in the design variables, for the optimal design of electromagnetic devices. For numerical efficiency, we apply the sensitivity-assisted Monte Carlo simulation (S-MCS) method to the reliability calculation. Furthermore, we incorporate the S-MCS into single-objective and multi-objective particle swarm optimization algorithms to achieve reliability-based optimal designs, using probabilistic-constraint and multi-objective optimization approaches, respectively. We validate the performance of the developed optimization algorithms through application to the optimal design of a superconducting magnetic energy storage system.
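
As a baseline for the reliability metric described above, a plain (non-sensitivity-assisted) Monte Carlo estimator of P[g(x) ≥ 0] under independent normal uncertainty in the design variables might look like this; the constraint g and its parameters are hypothetical, and the paper's S-MCS method accelerates exactly this kind of estimate.

```python
import numpy as np

def mcs_reliability(g, mean, std, n_samples=200_000, seed=0):
    """Plain Monte Carlo estimate of the reliability P[g(x) >= 0]
    for independent normal uncertainties on the design variables."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, std, size=(n_samples, len(mean)))
    return float(np.mean(g(x) >= 0.0))

# Hypothetical constraint: g = 1.5 - x1 - x2 (failure when x1 + x2 > 1.5)
g = lambda x: 1.5 - x[:, 0] - x[:, 1]
R = mcs_reliability(g, mean=np.array([0.5, 0.5]), std=np.array([0.2, 0.2]))
# Exact value for comparison: x1 + x2 ~ N(1.0, 0.08), so
# R = Phi(0.5 / sqrt(0.08)) = Phi(1.768) ~ 0.9616
print(f"estimated reliability: {R:.4f}")
```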

A Probabilistic based Systems Approach to Reliability Prediction of Solid Rocket Motors

  • Moon, Keun-Hwan;Gang, Jin-Hyuk;Kim, Dong-Seong;Kim, Jin-Kon;Choi, Joo-Ho
    • International Journal of Aeronautical and Space Sciences
    • /
    • Vol. 17, No. 4
    • /
    • pp.565-578
    • /
    • 2016
  • A probabilistic systems approach is addressed in this study for the reliability prediction of solid rocket motors (SRM). To achieve this goal, a quantitative Failure Modes, Effects and Criticality Analysis (FMECA) approach is employed to determine the reliability of components, which is integrated into a Fault Tree Analysis (FTA) to obtain the system reliability. The quantitative FMECA is implemented by the burden-and-capability approach when the required data are available; otherwise, a semi-quantitative FMECA is performed using the failure rate handbook. Among the many failure modes of the SRM, the four most important problems are chosen to illustrate the burden-and-capability approach: rupture and fracture of the case, and leakage due to bolted-joint and O-ring seal failures. Four algorithms are employed to determine the failure probabilities of these problems, and the results are compared with those of Monte Carlo simulation as well as the commercial code NESSUS for verification. Overall, the study offers a comprehensive treatment of reliability practice for SRM development, and may be useful across a wide range of propulsion systems in the aerospace community.
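
The burden-and-capability (stress-strength) calculation has a closed form when both quantities are independent normals: P_f = Φ(−β) with β = (μ_C − μ_B)/√(σ_C² + σ_B²). The sketch below uses hypothetical case-strength numbers and cross-checks the closed form against Monte Carlo simulation, mirroring the verification strategy described in the abstract.

```python
import math
import numpy as np

def stress_strength_pf(mu_c, sig_c, mu_b, sig_b):
    """Failure probability P[capability < burden] for independent
    normal capability (strength) and burden (stress)."""
    beta = (mu_c - mu_b) / math.hypot(sig_c, sig_b)  # reliability index
    return 0.5 * math.erfc(beta / math.sqrt(2.0))    # Phi(-beta)

# Hypothetical case-rupture margin: strength 400 +/- 20, stress 300 +/- 30 (MPa)
pf = stress_strength_pf(400.0, 20.0, 300.0, 30.0)

# Cross-check with Monte Carlo simulation
rng = np.random.default_rng(0)
n = 1_000_000
mc = float(np.mean(rng.normal(400, 20, n) < rng.normal(300, 30, n)))
print(f"closed form: {pf:.5f}, Monte Carlo: {mc:.5f}")
```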

경쟁 공진화 알고리듬에서 경쟁전략들의 비교 분석 (Comparison and Analysis of Competition Strategies in Competitive Coevolutionary Algorithms)

  • 김여근;김재윤
    • 대한산업공학회지
    • /
    • Vol. 28, No. 1
    • /
    • pp.87-98
    • /
    • 2002
  • A competitive coevolutionary algorithm is a probabilistic search method that imitates the coevolution process through an evolutionary arms race. The algorithm has been used to solve adversarial problems. In such algorithms, competitors must be selected in order to evaluate the fitness of an individual. The goal of this study is to compare and analyze several competition strategies in terms of solution quality, convergence speed, balance between competitively coevolving species, population diversity, etc. Extensive experiments are carried out on two types of test-bed problems: game problems and solution-test problems. In the game problems, sampling strategies based on fitness carry a risk of yielding bad solutions due to evolutionary imbalance between species. In the solution-test problems, on the other hand, evolutionary imbalance does not appear under any strategy, and the strategies that use information about competition results are efficient in solution quality. The experimental results indicate that tournament competition can sustain an evolutionary arms race and is therefore successful from the viewpoint of evolutionary computation.

국내 원자력발전소 지진 PSA의 CDF 과평가 방지를 위한 비희귀사건 모델링 방법 연구 (A Simple Approach to Calculate CDF with Non-rare Events in Seismic PSA Model of Korean Nuclear Power Plants)

  • 임학규
    • 한국안전학회지
    • /
    • Vol. 36, No. 5
    • /
    • pp.86-91
    • /
    • 2021
  • Calculating the scrutable core damage frequency (CDF) of nuclear power plants is an important component of the seismic probabilistic safety assessment (SPSA). In this work, a simple approach is developed to calculate CDF from minimal cut sets (MCSs) with non-rare events. When conventional calculation methods based on rare event approximations are employed, the CDF of industry SPSA models is significantly overestimated by non-rare events in the MCSs. Recently, quantification algorithms using binary decision diagrams (BDDs) have been introduced to prevent CDF overestimation in the SPSA. However, BDD structures are generated from a small part of whole MCSs due to limited computational memory, and they cannot be reviewed due to their complicated logic structure. This study suggests a simple approach for scrutinizing the CDF calculation based on whole MCSs in the SPSA system analysis model. The proposed approach compares the new results to outputs from existing algorithms, which helps in avoiding CDF overestimation.
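
The overestimation the abstract describes is easy to reproduce on a toy model: with non-rare basic events, the rare-event approximation (the plain sum of minimal-cut-set probabilities) exceeds the exact top-event probability computed by inclusion-exclusion. The basic-event probabilities and cut sets below are hypothetical.

```python
from itertools import combinations
from functools import reduce

# Hypothetical minimal cut sets over basic events with high (non-rare)
# seismic failure probabilities
p = {"A": 0.4, "B": 0.3, "C": 0.5}
cut_sets = [{"A", "B"}, {"B", "C"}, {"A", "C"}]

def cut_prob(cs):
    """Probability of a cut set, assuming independent basic events."""
    return reduce(lambda x, e: x * p[e], cs, 1.0)

# Rare-event approximation: simple sum of cut-set probabilities
rare = sum(cut_prob(cs) for cs in cut_sets)

# Exact top-event probability via inclusion-exclusion over cut-set unions
exact = 0.0
for k in range(1, len(cut_sets) + 1):
    for combo in combinations(cut_sets, k):
        exact += (-1) ** (k + 1) * cut_prob(set().union(*combo))

print(f"rare-event approximation: {rare:.2f}, exact: {exact:.2f}")
```

Here the rare-event approximation gives 0.47 while the exact value is 0.35; BDD-based quantification, and the scrutable whole-MCS approach the paper proposes, both target this gap.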

Motion-based design of TMD for vibrating footbridges under uncertainty conditions

  • Jimenez-Alonso, Javier F.;Saez, Andres
    • Smart Structures and Systems
    • /
    • Vol. 21, No. 6
    • /
    • pp.727-740
    • /
    • 2018
  • Tuned mass dampers (TMDs) are passive damping devices widely employed to mitigate the pedestrian-induced vibrations on footbridges. The TMD design must ensure adequate performance over the entire life cycle of the structure. Although the TMD is initially adjusted to match the natural frequency of the vibration mode that needs to be controlled, its design must further take into account the change of the modal parameters of the footbridge due to the modification of the operational and environmental conditions. For this purpose, a motion-based design optimization method is proposed and implemented herein, aimed at ensuring the adequate behavior of footbridges under uncertainty conditions. The uncertainty associated with the variation of such modal parameters is simulated by a probabilistic approach based on the results of previous research reported in the literature. The pedestrian action is modelled according to the recommendations of the Synpex guidelines. A comparison among the TMD parameters obtained considering different design criteria, design requirements and uncertainty levels is performed. To illustrate the proposed approach, a benchmark footbridge is considered. Results show which design criterion is most adequate for controlling the pedestrian-induced vibrations on the footbridge, as well as the influence of the design requirements and the uncertainty level on the final TMD design.
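
For orientation, the classical Den Hartog formulas give the deterministic starting point that a motion-based, uncertainty-aware design such as the one above would then adjust; they are shown here only as the conventional baseline, not as the paper's method. The footbridge frequency and mass ratio are hypothetical.

```python
import math

def den_hartog_tmd(mass_ratio, f_structure):
    """Classical Den Hartog tuning of a TMD attached to an SDOF mode:
    optimal frequency ratio 1/(1+mu) and optimal damping ratio
    sqrt(3*mu / (8*(1+mu)^3))."""
    mu = mass_ratio
    f_opt = f_structure / (1.0 + mu)                           # TMD frequency (Hz)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # TMD damping ratio
    return f_opt, zeta_opt

# Hypothetical footbridge mode at 2.0 Hz, TMD mass 2% of the modal mass
f_tmd, zeta_tmd = den_hartog_tmd(0.02, 2.0)
print(f"TMD frequency: {f_tmd:.3f} Hz, damping ratio: {zeta_tmd:.4f}")
```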

변분법을 이용한 확률론적 유한요소법에 관한 연구 (A Study on the Stochastic Finite Element Method Based on Variational Approach)

  • 배동명;김경열
    • 수산해양기술연구
    • /
    • Vol. 32, No. 4
    • /
    • pp.432-446
    • /
    • 1996
  • A stochastic Hamilton variational principle (SHVP) is formulated for dynamic problems of linear continua. The SHVP allows the incorporation of probabilistic distributions into finite element analysis. The formulation is simplified by transforming correlated random variables into a set of uncorrelated random variables through a standard eigenproblem. A procedure based on Fourier analysis and synthesis is presented for eliminating secularities from the perturbation approach. In addition, a method to analyze stochastic design sensitivity for structural dynamics is presented. A combination of the adjoint variable approach and the second-order perturbation method is used in the finite element codes. An alternative form of the constraint functional that holds for all times is introduced to consider the time response of the dynamic sensitivity. The algorithms developed can readily be adapted to existing deterministic finite element codes. Numerical results of the stochastic analysis of a cantilever, a 2D frame, and a 3D frame by the proposed approach are illustrated in this paper.

  • PDF
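
The decorrelation step mentioned in the abstract, transforming correlated random variables into uncorrelated ones through a standard eigenproblem, can be sketched directly: with covariance C = VΛVᵀ, the variables z = Vᵀ(x − μ) have diagonal covariance Λ. The covariance matrix below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[4.0, 1.2, 0.5],
              [1.2, 3.0, 0.8],
              [0.5, 0.8, 2.0]])
mu = np.zeros(3)
x = rng.multivariate_normal(mu, C, size=200_000)

lam, V = np.linalg.eigh(C)   # standard symmetric eigenproblem: C = V diag(lam) V^T
z = (x - mu) @ V             # rotated, uncorrelated variables

# The sample covariance of z is (up to sampling error) diagonal, with
# variances equal to the eigenvalues lam.
C_z = np.cov(z, rowvar=False)
print(np.round(C_z, 3))
```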

JPV 소수 생성 알고리즘의 확률적 분석 및 성능 개선 (Probabilistic Analysis of JPV Prime Generation Algorithm and its Improvement)

  • 박희진;조호성
    • 한국정보과학회논문지:시스템및이론
    • /
    • Vol. 35, No. 2
    • /
    • pp.75-83
    • /
    • 2008
  • Joye et al. proposed a new prime generation algorithm (the JPV algorithm) that removes the trial-division step from the conventional combined primality test, and claimed that it is about 30~40% faster than the conventional combined prime generation algorithm. However, this comparison counted only the number of invocations of the Fermat test, not the total running time, so it is far from an exact comparison. Although a theoretical running-time prediction method existed for the conventional combined prime generation algorithm, the total running times of the two algorithms could not be compared because no theoretical running-time model existed for the JPV algorithm. In this paper, we first analyze the JPV algorithm probabilistically and present a running-time prediction model; using this model, we compare the total running times of the JPV algorithm and the conventional combined prime generation algorithm. Predicting the generation time of 512-bit primes on a Pentium 4 system with this model, we conclude that, contrary to the comparison based on the number of Fermat-test calls, the JPV algorithm is slower than the conventional combined prime generation algorithm. This theoretical comparison was verified by experiments in the same environment. We also propose a method for improving the performance of the JPV algorithm; with this improvement, the JPV algorithm shows performance comparable to the conventional combined prime generation algorithm when the same amount of memory is used.
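
The abstract's cost measure, the number of Fermat-test invocations, can be made concrete with a minimal sketch of the conventional combined prime generation algorithm (trial division by small primes, then a Fermat test for survivors) that the JPV algorithm is measured against. This is the baseline algorithm only, not the JPV algorithm itself, and the bit size and small-prime bound are illustrative choices.

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def fermat_test(n, rounds=3):
    """Fermat probable-prime test with random bases."""
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False
    return True

def generate_prime(bits):
    """Conventional combined prime generation: cheap trial division by
    small primes first, then the expensive Fermat test only for the
    survivors. Returns the prime and the number of Fermat-test
    invocations, the cost measure compared in the paper."""
    fermat_calls = 0
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full bit length
        if any(n % q == 0 for q in SMALL_PRIMES):
            continue                  # rejected cheaply, no Fermat call spent
        fermat_calls += 1
        if fermat_test(n):
            return n, fermat_calls

random.seed(42)
p, calls = generate_prime(256)
print(f"{p.bit_length()}-bit probable prime found after {calls} Fermat-tested candidates")
```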