• Title/Summary/Keyword: Probabilistic methods

Robust optimum design of MTMD for control of footbridges subjected to human-induced vibrations via the CIOA

  • Leticia Fleck Fadel Miguel; Otavio Augusto Peter de Souza
    • Structural Engineering and Mechanics / v.86 no.5 / pp.647-661 / 2023
  • It is recognized that the installation of energy dissipation devices, such as the tuned mass damper (TMD), decreases the dynamic response of structures; however, the optimal parameters of each device remain difficult to determine. Unlike many works that perform only a deterministic optimization, this work proposes a complete methodology to minimize the dynamic response of footbridges by optimizing the parameters of multiple tuned mass dampers (MTMD), taking into account uncertainties in the parameters of both the structure and the human excitation. For application purposes, a steel footbridge based on a real structure is studied. Three different scenarios for the MTMD are simulated. The proposed robust optimization problem is solved via the Circle-Inspired Optimization Algorithm (CIOA), a novel and efficient metaheuristic algorithm recently developed by the authors. The objective function is to minimize the mean maximum vertical displacement of the footbridge, whereas the design variables are the stiffness and damping constants of the MTMD. The results showed the excellent capacity of the proposed methodology, reducing the mean maximum vertical displacement by more than 36% in a computational time about 9% shorter than that of a classical genetic algorithm. The results obtained by the proposed methodology are also compared with results obtained through traditional TMD design methods, again showing the superior performance of the proposed optimization method. Finally, an analysis of the maximum vertical acceleration showed a reduction of more than 91% for the three scenarios, bringing the footbridge to acceleration values below the recommended comfort limits. Hence, the proposed methodology can be employed to optimize MTMD, improving the design of footbridges.
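
As a rough illustration of the robust-optimization idea in this entry, the sketch below tunes the stiffness and damping of a single TMD on a simplified one-mode footbridge model so as to minimize the mean, over sampled uncertain parameters, of the peak vertical displacement. All numerical values are assumptions for demonstration, and SciPy's differential evolution stands in for the CIOA, whose details are not reproduced here.

```python
# Hedged sketch: robust TMD tuning on a simplified deck-plus-TMD (2-DOF) model.
# All parameter values are illustrative; differential evolution replaces the CIOA.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
N_SAMPLES = 100
m1 = rng.normal(40e3, 2e3, N_SAMPLES)      # uncertain modal mass [kg]
k1 = rng.normal(3.0e6, 0.2e6, N_SAMPLES)   # uncertain modal stiffness [N/m]
f_walk = rng.normal(2.0, 0.15, N_SAMPLES)  # uncertain pacing frequency [Hz]
F0, zeta1 = 280.0, 0.005                   # pedestrian force amplitude [N], structural damping ratio
m2 = 0.02 * 40e3                           # TMD mass fixed at 2% of nominal modal mass [kg]

def peak_displacement(k2, c2, m1_i, k1_i, f_i):
    """Steady-state deck displacement amplitude at the pacing frequency."""
    w = 2 * np.pi * f_i
    c1 = 2 * zeta1 * np.sqrt(k1_i * m1_i)
    # dynamic stiffness matrix of the deck + TMD system
    K = np.array([[k1_i + k2 - m1_i * w**2 + 1j * w * (c1 + c2), -(k2 + 1j * w * c2)],
                  [-(k2 + 1j * w * c2), k2 - m2 * w**2 + 1j * w * c2]])
    return abs(np.linalg.solve(K, np.array([F0, 0.0]))[0])

def robust_objective(design):
    """Mean over the parameter samples of the peak vertical displacement."""
    k2, c2 = design
    return float(np.mean([peak_displacement(k2, c2, m1[i], k1[i], f_walk[i])
                          for i in range(N_SAMPLES)]))

bounds = [(1e4, 5e5), (1e2, 5e4)]          # k2 [N/m], c2 [N*s/m]
result = differential_evolution(robust_objective, bounds, seed=1, maxiter=50)
print("optimal (k2, c2):", result.x, " mean peak displacement [m]:", result.fun)
```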

Conditional Density based Statistical Prediction

  • J Rama Devi; K. Koteswara Rao; M Venkateswara Rao
    • International Journal of Computer Science & Network Security / v.23 no.6 / pp.127-139 / 2023
  • Many real problems, such as stock market prediction and weather forecasting, have inherent randomness associated with them. Adopting a probabilistic framework for prediction can accommodate this uncertain relationship between past and future. Typically, the interest is in the conditional probability density of the random variable involved. One approach to prediction uses time series and autoregressive models. In this work, a linear prediction technique and an approach for computing the prediction coefficients are given, and the probability of error for different estimators is calculated. The existing techniques all require, in some respect, estimating a parameter of some assumed distribution. Therefore, an alternative approach is proposed: to estimate the conditional density of the random variable involved. The approach proposed here estimates the (discretized) conditional density using a Markovian formulation; when two random variables are statistically dependent, knowing the value of one of them allows us to improve the estimate of the value of the other. The conditional density is estimated as the ratio of the two-dimensional joint density to the one-dimensional density of the conditioning variable wherever the latter is positive. Markov models are used in problems of making a sequence of decisions and in problems with an inherent temporal structure, i.e., a process that unfolds in time. In continuous-time Markov chain models, the time intervals between two successive transitions may also be continuous random variables. The Markovian approach is particularly simple and fast for almost all classes of problems requiring the estimation of conditional densities.
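
The core of the proposed estimator is the ratio of the two-dimensional joint density to the one-dimensional marginal density wherever the marginal is positive, evaluated on a discretized grid. The toy sketch below, using assumed AR(1) data and histogram binning, shows one way such an estimator could look; it is not the paper's implementation.

```python
# Hedged sketch: discretized conditional density p(x_{t+1} | x_t) as the ratio of a
# 2-D joint histogram to the 1-D marginal (only where the marginal is positive).
import numpy as np

rng = np.random.default_rng(42)

# Toy dependent series: AR(1), so consecutive values are statistically dependent.
n = 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.5)

bins = np.linspace(x.min(), x.max(), 21)                 # 20 bins
joint, _, _ = np.histogram2d(x[:-1], x[1:], bins=[bins, bins], density=True)
marginal, _ = np.histogram(x[:-1], bins=bins, density=True)

# Conditional density: joint / marginal wherever the marginal is positive.
cond = np.zeros_like(joint)
pos = marginal > 0
cond[pos, :] = joint[pos, :] / marginal[pos, None]

# Point prediction: given the current value, take the conditional mean of the next value.
centers = 0.5 * (bins[:-1] + bins[1:])
widths = np.diff(bins)
current = 1.0
i = np.clip(np.digitize(current, bins) - 1, 0, len(centers) - 1)
prob_next = cond[i] * widths                             # probability mass per bin
prob_next /= prob_next.sum()
print("predicted next value:", float(prob_next @ centers))
```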

COMPENSATION STRUCTURE AND CONTINGENCY ALLOCATION IN INTEGRATED PROJECT DELIVERY SYSTEMS

  • Mei Liu; F. H. (Bud) Griffis; Andrew Bates
    • International conference on construction engineering and project management / 2013.01a / pp.338-343 / 2013
  • Integrated Project Delivery (IPD) as a delivery method fully capitalizes on an integrated project team that takes advantage of the knowledge of all team members to maximize project outcomes. IPD is currently the highest form of collaboration available because all three core project stakeholders, owner, designer and contractor, are aligned to the same purpose. Compared with traditional project delivery approaches such as Design-Bid-Build (DBB), Design-Build (DB), and CM at-Risk, IPD is distinguished in that it eliminates the adversarial nature of the business by encouraging transparency, open communication, honesty and collaboration among all project stakeholders. The team appropriately shares the project risk and reward: sharing reward is easy, while fairly sharing a failure is hard. Thus the compensation structure and the contingency in IPD are very different from those in traditional delivery methods, and they are expected to encourage the motivation, inspiration and creativity of all project stakeholders to achieve project success. This paper investigates the compensation structure in IPD and provides a method to determine the proper level of contingency allocation to reduce the risk of cost overrun. It also proposes a method in which contingency, once established, could be used as a functional monetary incentive to produce the desired level of collaboration in IPD. Based on the compensation structure scenario identified, a probabilistic contingency calculation model was created by evaluating the random nature of changes and various risk drivers. The model can be used by the IPD team to forecast the probability of cost overrun and equip the IPD team with the confidence to fully enjoy the benefits of collaborative teamwork.
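
As a rough, hypothetical illustration of what a probabilistic contingency calculation can look like (the paper's actual model and risk drivers are not reproduced here), the sketch below sums Monte Carlo samples of a few assumed cost-impact drivers, estimates the probability of overrunning a candidate contingency, and reports the contingency needed for a chosen confidence level.

```python
# Hedged sketch: Monte Carlo estimate of cost-overrun probability and of the
# contingency required for a target confidence.  Drivers, distributions and the
# 10% candidate contingency are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000
base_cost = 10_000_000.0                              # estimated target cost [$]

# Assumed independent risk drivers expressed as cost impacts [$]
design_changes    = rng.lognormal(mean=np.log(200_000), sigma=0.6, size=N)
scope_growth      = rng.triangular(0, 150_000, 600_000, size=N)
market_escalation = rng.normal(loc=100_000, scale=80_000, size=N)

total_cost = base_cost + design_changes + scope_growth + market_escalation

contingency = 0.10 * base_cost                        # candidate contingency: 10%
p_overrun = np.mean(total_cost > base_cost + contingency)
print(f"P(cost overrun with 10% contingency) = {p_overrun:.2%}")

# Contingency required so that the team is, e.g., 90% confident of no overrun
required = np.percentile(total_cost, 90) - base_cost
print(f"Contingency for 90% confidence = ${required:,.0f}")
```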

Safety Evaluation of Subway Tunnel Structures According to Adjacent Excavation (인접굴착공사에 따른 지하철 터널 구조물 안전성 평가)

  • Jung-Youl Choi; Dae-Hui Ahn; Jee-Seung Chung
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.559-563 / 2024
  • Currently, in Korea, large-scale deep excavations are being carried out adjacent to structures because of overcrowding in urban areas. For adjacent excavations in urban areas, it is very important to ensure the safety of earth-retaining structures and underground structures. Accordingly, automated measurement systems are being introduced to manage the safety of subway tunnel structures; however, the utilization of automated measurement results is very low. Existing evaluation techniques rely only on the maximum value of the measured data, which can overestimate abnormal behavior. In this study, therefore, a vast amount of automated measurement data was analyzed using the Gaussian probability density function, a technique that enables quantitative evaluation. Highly reliable results were derived by applying probabilistic statistical analysis methods to this large body of data. Hence, the safety of subway tunnel structures subjected to adjacent excavation work was evaluated in this study using a technique that can process a large amount of data.
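
A minimal sketch of the kind of Gaussian screening described in this entry: fit a normal distribution to a stream of automated measurement data and judge readings by their probability of exceedance rather than by the single maximum value. The synthetic readings and the management threshold below are assumptions for illustration.

```python
# Hedged sketch: Gaussian-PDF screening of automated measurement data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# e.g., tunnel deformation gauge readings [mm] from an automated measurement system (synthetic)
readings = rng.normal(loc=0.12, scale=0.03, size=10_000)

mu, sigma = readings.mean(), readings.std(ddof=1)
threshold = 0.25                                   # assumed management criterion [mm]

# Probability that a reading exceeds the criterion under the fitted Gaussian
p_exceed = stats.norm.sf(threshold, loc=mu, scale=sigma)
# Where the largest observed value sits within the fitted population (z-score)
obs = readings.max()
print(f"fitted N({mu:.3f}, {sigma:.3f}),  P(X > {threshold}) = {p_exceed:.2e}")
print(f"max reading = {obs:.3f} mm, z = {(obs - mu) / sigma:.2f}")
```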

Comparative Study of Reliability Design Methods by Application to Donghae Harbor Breakwaters. 1. Stability of Armor Blocks (동해항 방파제를 대상으로 한 신뢰성 설계법의 비교 연구. 1 피복 블록의 안정성)

  • Kim Seung-Woo; Suh Kyung-Duck; Oh Young Min
    • Journal of Korean Society of Coastal and Ocean Engineers / v.17 no.3 / pp.188-201 / 2005
  • This is the first part of a two-part paper which describes a comparison of reliability design methods by application to the Donghae Harbor breakwaters. This paper, Part 1, is restricted to the stability of armor blocks, while Part 2 deals with sliding of caissons. Reliability design methods have been developed for breakwater design since the mid-1980s. The reliability design method is classified into three categories depending on the level of probabilistic concepts being employed. In the Level 1 method, partial safety factors are used, which are predetermined depending on the allowable probability of failure. In the Level 2 method, the probability of failure is evaluated with the reliability index, which is calculated using the means and standard deviations of the load and resistance; the load and resistance are assumed to be normally distributed. In the Level 3 method, the cumulative quantity of failure (e.g. cumulative damage of armor blocks) during the lifetime of the breakwater is calculated without assuming normal distributions of load and resistance. Each method calculates different design parameters, but they can all be expressed in terms of probability of failure so that the differences among the methods can be compared. In this study, we applied the reliability design methods to the stability of the armor blocks of the breakwaters of Donghae Harbor, which were designed by traditional deterministic methods and damaged in 1987. Analyses are made for the breakwaters before the damage and after reinforcement. The probability of failure before the damage is much higher than the target probability of failure, while that for the reinforced breakwater is much lower than the target value, indicating that the breakwaters before damage and after reinforcement were under- and over-designed, respectively. On the other hand, the results of the different reliability design methods were in fairly good agreement, confirming that there is not much difference among the methods.
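
For the Level 2 method mentioned above, with load S and resistance R assumed independent and normally distributed, the reliability index and the failure probability follow directly from their means and standard deviations. The short sketch below uses assumed values, not the Donghae Harbor data.

```python
# Hedged sketch of a Level 2 calculation: beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2),
# P_f = Phi(-beta).  Numerical values are assumptions.
from scipy.stats import norm

mu_R, sigma_R = 52.0, 7.0     # resistance (assumed)
mu_S, sigma_S = 35.0, 9.0     # load effect (assumed)

beta = (mu_R - mu_S) / (sigma_R**2 + sigma_S**2) ** 0.5   # reliability index
p_f = norm.cdf(-beta)                                     # probability of failure
print(f"beta = {beta:.2f},  P_f = {p_f:.3e}")
```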

Comparative Study of Reliability Design Methods by Application to Donghae Harbor Breakwaters. 2. Sliding of Caissons (동해항 방파제를 대상으로 한 신뢰성 설계법의 비교 연구. 2. 케이슨의 활동)

  • Kim, Seung-Woo; Suh, Kyung-Duck; Oh, Young-Min
    • Journal of Korean Society of Coastal and Ocean Engineers / v.18 no.2 / pp.137-146 / 2006
  • This is the second part of a two-part paper which describes a comparison of reliability design methods by application to the Donghae Harbor breakwaters. In this paper, Part 2, we deal with sliding of caissons. The failure modes of a vertical breakwater, which consists of a caisson mounted on a rubble mound, include the sliding and overturning of the caisson and the failure of the rubble mound or subsoil, among which sliding of the caisson occurs most frequently. The traditional deterministic design method for the sliding failure of a caisson uses the concept of a safety factor, requiring the resistance to be greater than the load by a certain factor (e.g. 1.2). However, the safety of a structure cannot be quantitatively evaluated by a safety factor. On the other hand, the reliability design method, which has recently been the subject of active research, enables one to quantitatively evaluate the safety of a structure by calculating its probability of failure. The reliability design method is classified into three categories depending on the level of probabilistic concepts being employed, i.e., Level 1, 2, and 3. In this study, we apply the reliability design methods to the sliding of the caissons of the breakwaters of Donghae Harbor, which were designed by traditional deterministic methods and damaged in 1987. Analyses are made for the breakwaters before the damage and after reinforcement. The probability of failure before the damage is much higher than the allowable value, indicating that the breakwater was under-designed. The probability of failure after reinforcement, however, is close to the allowable value, indicating that the breakwater is no longer in danger. On the other hand, the results of the different reliability design methods are in fairly good agreement, confirming that there is not much difference among the methods.
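
A minimal, hypothetical sketch of a Level 3 style calculation for the sliding mode: Monte Carlo sampling of a simplified limit state g = mu*(W - U) - P (friction resistance minus horizontal wave load) gives a failure probability that can be set against a deterministic safety factor of about 1.2. The distributions and values are assumptions, not the actual breakwater data.

```python
# Hedged sketch: Monte Carlo sliding failure probability of a caisson.
import numpy as np

rng = np.random.default_rng(11)
N = 1_000_000

mu = rng.normal(0.6, 0.06, N)            # friction coefficient
W  = rng.normal(8_000.0, 400.0, N)       # caisson weight per unit length [kN/m]
U  = rng.normal(1_500.0, 300.0, N)       # wave-induced uplift [kN/m]
P  = rng.gumbel(2_800.0, 450.0, N)       # horizontal wave force [kN/m]

g = mu * (W - U) - P                     # sliding limit-state function
p_f = np.mean(g < 0)
sf_deterministic = np.mean(mu) * (np.mean(W) - np.mean(U)) / np.mean(P)
print(f"deterministic safety factor ~= {sf_deterministic:.2f}")
print(f"Monte Carlo sliding failure probability = {p_f:.2e}")
```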

Suggestion of an Evaluation Chart for Landslide Susceptibility using a Quantification Analysis based on Canonical Correlation (정준상관 기반의 수량화분석에 의한 산사태 취약성 평가기법 제안)

  • Chae, Byung-Gon; Seo, Yong-Seok
    • Economic and Environmental Geology / v.43 no.4 / pp.381-391 / 2010
  • Probabilistic landslide prediction methods developed in recent years can be reliable only on the premise of detailed survey and analysis based on deep, specialized knowledge. However, landslide susceptibility should also be analyzable with reliable and simple methods by various people, such as government officials and engineering geologists, who do not have deep statistical knowledge at the moment a hazard occurs. Therefore, this study suggests a highly reliable evaluation chart of landslide susceptibility, derived by rigorous statistical approaches, that can be understood easily and used by both specialists and non-specialists. The evaluation chart was developed by a quantification method based on canonical correlation analysis using data on the geology, topography, and soil properties of landslides in Korea. This study analyzed field data and laboratory test results and determined the influential factors and the rating values of each factor. The quantification analysis shows that slope angle has the highest significance among the factors, followed by elevation, permeability coefficient, porosity, lithology, and dry density in descending order of importance. Based on the score assigned to each evaluation factor, an evaluation chart of landslide susceptibility was developed with rating values for each class of a factor. An analyst can identify the degree of landslide susceptibility by checking each property of an evaluation factor and calculating the sum of the rating values. This result can also be used to draw landslide susceptibility maps based on GIS techniques.
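
To make the intended use of such a chart concrete, the sketch below sums pre-assigned rating values for the class each evaluation factor falls into; the factor classes, rating values and classification threshold are hypothetical, not the chart derived in the paper.

```python
# Hedged sketch: rating-value summation for a landslide susceptibility chart.
# Classes, ratings and the threshold are hypothetical.
RATINGS = {
    "slope_angle":  {"<20 deg": 0.05, "20-35 deg": 0.20, ">35 deg": 0.30},
    "elevation":    {"<200 m": 0.05, "200-600 m": 0.15, ">600 m": 0.20},
    "permeability": {"low": 0.05, "medium": 0.10, "high": 0.15},
    "lithology":    {"granite": 0.05, "gneiss": 0.10, "sedimentary": 0.15},
}

def susceptibility_score(site: dict) -> float:
    """Sum the rating value of the class each evaluation factor falls into."""
    return sum(RATINGS[factor][cls] for factor, cls in site.items())

site = {"slope_angle": ">35 deg", "elevation": "200-600 m",
        "permeability": "high", "lithology": "sedimentary"}
score = susceptibility_score(site)
print(f"score = {score:.2f} -> {'susceptible' if score > 0.55 else 'less susceptible'}")
```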

Simulation-Based Stochastic Markup Estimation System $(S^2ME)$ (시뮬레이션을 기반(基盤)으로 하는 영업이윤율(營業利潤率) 추정(推定) 시스템)

  • Yi, Chang-Yong; Kim, Ryul-Hee; Lim, Tae-Kyung; Kim, Wha-Jung; Lee, Dong-Eun
    • Proceedings of the Korean Institute of Building Construction Conference / 2007.11a / pp.109-113 / 2007
  • This paper introduces the Simulation-based Stochastic Markup Estimation System (S2ME) for estimating the optimum markup for a project. The system was designed and implemented to better represent the real-world system involved in construction bidding. Findings obtained from an analysis of the assumptions used in previous quantitative markup estimation methods were incorporated to improve the accuracy and predictability of S2ME. The existing methods rest on four categories of assumptions: (1) the number of competitors and who the competitors are is known; (2) a typical, fictitious competitor is assumed for ease of computation; (3) the ratio of bid price to cost estimate (B/C) is assumed to follow a normal distribution; (4) the deterministic output obtained from the probabilistic equations of existing models is assumed to be acceptable. However, these assumptions compromise the accuracy of prediction; in practice, the bidding patterns of bidders in competitive bidding are random. To compensate for the loss of accuracy caused by these assumptions, a bidding project is randomly selected from a pool of historical bidding records in the simulation experiment, and the probability of winning the bid in competitive bidding is computed using the profile of the competitors appearing in the selected record. The expected profit and the probability of winning are calculated by randomly selecting a bidding record in each iteration of the simulation, under the assumption that the bidding patterns retained in the historical bidding DB will recur. The existing deterministic computations were converted into a stochastic model using simulation modeling and analysis techniques as follows: (1) estimating the probability distribution functions of competitors' B/C ratios from the historical bidding DB; (2) analyzing the sensitivity of the outcome to markup increments using both the normal distribution and the actual probability distribution obtained by distribution fitting; (3) estimating the maximum expected profit and the optimum markup range. In the case study, the best-fitted probability distribution function was estimated from the historical bidding DB retaining the competitors' bidding behavior, so that the reliability of the output obtained from the simulation experiment was improved.
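
A stripped-down, hypothetical sketch of the resampling idea (not the S2ME implementation): competitors' bid-to-cost (B/C) ratios are drawn from a historical pool, the probability of winning is estimated for each candidate markup, and the markup with the largest expected profit is reported. The B/C pool, cost figure and number of competitors are assumptions.

```python
# Hedged sketch: simulation-based markup selection from resampled B/C ratios.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical pool of competitors' historical B/C ratios (bid price / cost estimate)
historical_bc = np.array([1.04, 1.06, 1.07, 1.09, 1.10, 1.11, 1.12, 1.15, 1.18, 1.21])
cost_estimate = 5_000_000.0
n_competitors = 4
n_sim = 20_000

markups = np.linspace(0.0, 0.25, 26)
expected_profit = []
for m in markups:
    our_bid = cost_estimate * (1.0 + m)
    # each simulated letting draws one B/C ratio per competitor from the pool
    comp_bids = cost_estimate * rng.choice(historical_bc, size=(n_sim, n_competitors))
    p_win = np.mean(our_bid < comp_bids.min(axis=1))
    expected_profit.append(p_win * m * cost_estimate)

best = int(np.argmax(expected_profit))
print(f"optimum markup ~ {markups[best]:.0%}, expected profit ~ ${expected_profit[best]:,.0f}")
```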

Comparison of Methods for the Percentile Analysis of Seismic Hazards (지진재해도의 백분위수 분석 방법 비교)

  • Rhee, Hyun-Me; Seo, Jung-Moon; Kim, Min-Kyu; Choi, In-Kil
    • Journal of the Earthquake Engineering Society of Korea / v.15 no.2 / pp.43-51 / 2011
  • Probabilistic seismic hazard analysis (PSHA), which can effectively incorporate the inevitable uncertainties in seismic data, considers a number of seismotectonic models and attenuation equations. The hazard calculated by PSHA is generally a function of peak ground acceleration (PGA) and is expressed as an annual exceedance probability. To represent the uncertainty range of the hazard arising from the various seismic data, a hazard curve figure shows both a mean curve and percentile curves (15th, 50th, and 85th). The percentile plays an important role in that it indicates the uncertainty range of the calculated hazard, and it can be calculated by various methods relating the weights and the hazards. This study calculated the percentiles of the hazard computed by PSHA for the Shinuljin 1, 2 site using the weight accumulation method, the weighted hazard method, the maximum likelihood method, and the moment method. The percentiles calculated using the weight accumulation method, the weighted hazard method, and the maximum likelihood method have similar trends and represent the range of all hazards computed by PSHA. The percentiles calculated using the moment method effectively showed the range of hazards for the source that includes the site. This study suggests the moment method as an effective percentile calculation method, considering that the mean hazard is almost the same for the seismotectonic model and for a source that includes the site.
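
As an illustration of one of the simpler approaches (in the spirit of the weight accumulation method), the sketch below computes weighted 15th/50th/85th percentiles of branch hazard values by interpolating on the cumulative weights; the branch values and weights are assumptions, and the paper compares this kind of calculation with three other methods.

```python
# Hedged sketch: weighted percentiles of logic-tree branch hazards at one PGA level.
import numpy as np

# annual exceedance probabilities, one per logic-tree branch (assumed values)
hazard  = np.array([2.1e-4, 3.4e-4, 1.2e-4, 5.0e-4, 2.8e-4, 4.1e-4])
weights = np.array([0.20,   0.15,   0.10,   0.20,   0.20,   0.15])

def weighted_percentile(values, w, q):
    """Percentile q (0-100) of `values` under weights `w`, via the cumulative weight curve."""
    order = np.argsort(values)
    v, w = values[order], w[order]
    cum = np.cumsum(w) / w.sum()
    return np.interp(q / 100.0, cum, v)

mean_hazard = np.average(hazard, weights=weights)
p15, p50, p85 = (weighted_percentile(hazard, weights, q) for q in (15, 50, 85))
print(f"mean = {mean_hazard:.2e}, 15/50/85 percentiles = {p15:.2e} / {p50:.2e} / {p85:.2e}")
```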

An Application of Artificial Intelligence System for Accuracy Improvement in Classification of Remotely Sensed Images (원격탐사 영상의 분류정확도 향상을 위한 인공지능형 시스템의 적용)

  • 양인태; 한성만; 박재국
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.20 no.1 / pp.21-31 / 2002
  • This study applied neural network theory and fuzzy set theory to improve the accuracy of classification of remotely sensed images. Remotely sensed data have been used to map land cover, and the accuracy depends on a range of factors related to the data set and the methods used. Thus, the accuracy of maps derived from conventional supervised image classification techniques is a function of factors related to the training, allocation, and testing stages of the classification. Conventional image classification techniques assume that all pixels within the image are pure, that is, that each represents an area of homogeneous cover of a single land-cover class. This assumption is often untenable, as pixels of mixed land-cover composition are abundant in an image. Mixed pixels are a major problem in land-cover mapping applications. For each pixel, the strengths of class membership derived in the classification may be related to its land-cover composition. In fuzzy classification techniques, the concept of a pixel having a degree of membership in all classes is fundamental. A major problem with fuzzy-set and probabilistic methods is that they are slow and computationally demanding; for analyzing large data sets with rapid processing, alternative techniques are required. One particularly attractive approach is the use of artificial neural networks. These are non-parametric techniques which have been shown, in general, to be capable of classifying data as accurately as, or more accurately than, conventional classifiers. An artificial neural network, once trained, may classify data extremely rapidly, as the classification process reduces to a large number of extremely simple calculations that may be performed in parallel.
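
A minimal sketch of the neural-network alternative discussed above: a small multilayer perceptron trained on per-pixel band values, whose class probabilities can also serve as soft membership grades for mixed pixels. The synthetic spectral data and the use of scikit-learn's MLPClassifier are assumptions for illustration, not the study's actual network.

```python
# Hedged sketch: per-pixel land-cover classification with a small MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic 4-band reflectance for three land-cover classes (water, vegetation, urban)
means = np.array([[0.05, 0.04, 0.03, 0.02],    # water
                  [0.04, 0.08, 0.06, 0.40],    # vegetation
                  [0.20, 0.22, 0.25, 0.28]])   # urban
X = np.vstack([rng.normal(m, 0.03, size=(500, 4)) for m in means])
y = np.repeat([0, 1, 2], 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
# predict_proba gives per-class membership strengths, usable for mixed pixels
print("soft memberships of first test pixel:", np.round(clf.predict_proba(X_te[:1]), 3))
```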