• Title/Summary/Keyword: probability distributions

Search Results: 744

IMPROVING DECISIONS IN WIND POWER SIMULATIONS USING MONTE CARLO ANALYSIS

  • Devin Hubbard; Borinara Park
    • International conference on construction engineering and project management / 2013.01a / pp.122-128 / 2013
  • Computer simulations designed to predict the technical and financial returns of wind turbine installations are used to make informed investment decisions. These simulations typically use fixed values to represent real-world variables, even though actual projects are highly uncertain, resulting in predictions that are less accurate and less useful. In this article, by modifying a popular wind power simulation from the American Wind Energy Association to use Monte Carlo techniques in its calculations, the authors propose a way to improve simulation usability by producing probability distributions of likely outcomes, which can be used to draw broader, more useful conclusions about the simulated project.

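The core idea, replacing fixed inputs with sampled distributions and reporting a distribution of outcomes, can be sketched in a few lines. This is a minimal illustration, not the authors' modified AWEA model; the cubic power relation, the distribution choices, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# Uncertain inputs modeled as distributions instead of single fixed values
mean_wind = rng.normal(6.5, 0.5, N)            # site mean wind speed [m/s] (assumed)
availability = rng.uniform(0.90, 0.98, N)      # turbine availability (assumed)
price = rng.triangular(0.06, 0.08, 0.12, N)    # electricity price [$/kWh] (assumed)

# Crude cubic capacity-factor model (illustrative, not a real power curve)
rated_speed, rated_power_kw = 12.0, 1500.0
capacity_factor = np.clip((mean_wind / rated_speed) ** 3, 0.0, 1.0)

annual_energy_kwh = rated_power_kw * capacity_factor * availability * 8760
annual_revenue = annual_energy_kwh * price

# Report a distribution of outcomes rather than one fixed prediction
print(f"median annual revenue: ${np.median(annual_revenue):,.0f}")
print(f"5th-95th percentile:   ${np.percentile(annual_revenue, 5):,.0f}"
      f" - ${np.percentile(annual_revenue, 95):,.0f}")
```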

STOCHASTIC CASHFLOW MODELING INTEGRATED WITH SIMULATION BASED SCHEDULING

  • Dong-Eun Lee; David Arditi; Chang-Baek Son
    • International conference on construction engineering and project management / 2011.02a / pp.395-398 / 2011
  • This paper introduces stochastic cash-flow modeling integrated with simulation-based scheduling. The system makes use of CPM schedule data exported from commercial scheduling software, computes the best-fit probability distribution functions (PDFs) of historical activity durations, assigns the identified PDFs to the respective activities, simulates the schedule network, computes the deterministic and stochastic project cash flows, plots the corresponding cash-flow diagrams, and estimates the best-fit PDFs of the overdraft and net profit of a project. It analyzes the effect of different activity-duration distributions on the distributions of overdrafts and net profits, and improves reliability compared to deterministic cash-flow analysis.

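A minimal sketch of the pipeline the abstract describes, under strong simplifications: a three-activity serial network stands in for a CPM schedule, a lognormal fit stands in for the best-fit PDF search, and the cash-flow measure is reduced to total spend before a single completion payment. All activity names, costs, and values below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical historical durations (days) for a serial network A -> B -> C
history = {"A": [5, 6, 7, 6, 8], "B": [10, 12, 11, 13], "C": [4, 5, 4, 6]}
daily_cost = {"A": 1000.0, "B": 1500.0, "C": 800.0}   # assumed cost rates

# Step 1: best-fit PDFs of historical activity durations (lognormal assumed)
fitted = {a: stats.lognorm(*stats.lognorm.fit(d, floc=0)) for a, d in history.items()}

# Steps 2-3: simulate the schedule network and collect a cash-flow measure
overdrafts = []
for _ in range(10_000):
    durations = {a: pdf.rvs(random_state=rng) for a, pdf in fitted.items()}
    total_cost = sum(daily_cost[a] * durations[a] for a in durations)
    overdrafts.append(total_cost)   # spend before the single completion payment

overdrafts = np.asarray(overdrafts)
print(f"mean maximum overdraft: {overdrafts.mean():,.0f}")
print(f"95th percentile:        {np.percentile(overdrafts, 95):,.0f}")
# Step 4, as in the paper, would fit a best-fit PDF to `overdrafts`.
```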

An Analysis of Statistics Chapter of the Grade 7's Current Textbook in View of the Distribution Concepts (중학교 1학년 통계단원에 나타난 분포개념에 관한 분석)

  • Lee, Young-Ha; Choi, Ji-An
    • Journal of Educational Research in Mathematics / v.18 no.3 / pp.407-434 / 2008
  • This research analyzes the descriptions in the statistics chapter of current grade 7 textbooks. The analysis is based on the distribution concepts suggested by Nam (2007). We therefore assume that the goal of this statistics chapter is to establish concepts of distributions and to learn ways of communicating and comparing through distributional presentations. What we learned and wish to suggest through the study is the following. 1) Students should learn what a distribution is and what it is not. 2) Every presentational form of a distribution should be given its own place in the curriculum, so that students are encouraged to learn each form and use it appropriately. 3) The density histogram should be introduced to extend students' experience of viewing an area as a relative frequency, which later develops into a probability density. 4) Whether to include the comparison of two distributions, especially through frequency polygons, should be an active issue among educational stakeholders; it is very important when stochastic correlation is learned, because such correlation is nothing but a comparison between conditional distributions. 5) Statistical literacy is also an important issue for students' daily lives. In particular, the process preceding data collection must be introduced, so that students appreciate the importance of accurate, purpose-oriented data.


Assessment of Extreme Wind Risk for Window Systems in Apartment Buildings Based on Probabilistic Model (확률 모형 기반의 아파트 창호 시스템 강풍 위험도 평가)

  • Ham, Hee Jung; Yun, Woo-Seok; Choi, Seung Hun; Lee, Sungsu; Kim, Ho-Jeong
    • Journal of the Computational Structural Engineering Institute of Korea / v.28 no.6 / pp.625-633 / 2015
  • In this study, a coupled probabilistic framework is developed to assess wind risk to apartment buildings by using the convolution of wind hazard and fragility functions. In this framework, typhoon-induced extreme wind is estimated by applying the developed Monte Carlo simulation model to climatological data on typhoons affecting the Korean peninsula from 1951 to 2013. The Monte Carlo simulation technique is also used to assess the wind fragility function for 4 different damage states by comparing the probability distributions of the window system's resistance performance and the wind load. The wind hazard and fragility functions are modeled by the Weibull and lognormal probability distributions, respectively, based on simulated wind speeds and failure probabilities. The modeled functions are convoluted to obtain the wind risk for the different damage levels. The developed probabilistic framework clearly shows that wind risk is influenced by various important characteristics of the terrain and the apartment building, such as the location of the building, exposure category, topographic condition, roof angle, and height of the building. The risk model presented in this paper can be used as a tool to estimate economic loss and to establish a wind risk mitigation plan for the existing building inventory.
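
The hazard-fragility convolution the abstract describes can be written directly; the Weibull and lognormal parameters below are illustrative assumptions, not the paper's fitted values.

```python
from scipy import integrate, stats

# Hazard: annual extreme wind speed, Weibull (shape and scale are assumptions)
hazard = stats.weibull_min(c=2.0, scale=35.0)          # m/s

# Fragility: P(window damage state | wind speed v), lognormal (assumed values)
fragility = stats.lognorm(s=0.25, scale=45.0)          # median capacity 45 m/s

# Risk = integral over v of P(damage | v) * f_hazard(v) dv
risk, _ = integrate.quad(lambda v: fragility.cdf(v) * hazard.pdf(v), 0.0, 150.0)
print(f"annual probability of reaching the damage state: {risk:.2e}")
```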

An Epistemological Inquiry on the Development of Statistical Concepts (통계적 개념 발달에 관한 인식론적 고찰)

  • Lee, Young-Ha; Nam, Joo-Hyun
    • The Mathematical Education / v.44 no.3 s.110 / pp.457-475 / 2005
  • We have inquired into what the statistics classes of secondary schools have been aiming at, that is, their epistemological objects. We now appreciate that the main obstacle to systematic articulation is the lack of a clear anticipation of what the statistical concepts are. This study focuses on the ingredients of statistical concepts, which should be the ground for the systematic articulation of statistics courses, especially those for school children. We required that these ingredients satisfy the following: i) directly related to the contents of statistics; ii) psychologically developing; iii) mutually exclusive of each other as much as possible; iv) exhaustive enough to cover all statistical concepts. We examined what statisticians have been doing and how, along with various previous views on these questions. We suggest that the following three concepts are the core of the conceptual development of statistics: the concept of distributions, summarizing ability, and the concept of samples. By the concept of distributions we mean the frequency view of random categories, which develops from counts toward probability as students age. Summarizing ability is another important resource for probing a data set: a summary is not only viewed as a number but is also anticipated as something reflecting a random phenomenon. Inductive generalization is one of the most hazardous things; statistical induction is a scientific way of meeting this challenge, and it starts from distinguishing chance from inevitable consequences. One's inductive logic grows along with one's deductive arguments, even though they are different. The concept of samples reflects one's view of the sample data and the way of combining one's logic with the data within one's hypothesis. With these three concepts in mind, we examined the Korean statistics curriculum from K to 12. Distributional concepts are dealt with throughout but are not well sequenced. The means of summarization are introduced in the 1st, 5th, 7th, and 10th grades as numerical values only. One activity on the concept of samples is given in the 6th grade, and the curriculum then jumps into statistical reasoning in the elective courses 'Mathematics I' or 'Probability and Statistics' in grades 11-12. We suggest further studies on the developing stages of these three conceptual features, so as to obtain a firm basis for successive statistical articulation.


On the Statistical Properties of the Parameters B and q in the Creep Crack Growth Law, da/dt=B(C*)^q, in the Case of Mod. 9Cr-1Mo Steel (Mod. 9Cr-1Mo강의 크리프 균열 성장 법칙의 파라메터 B와 q의 통계적 성질에 관한 연구)

  • Kim, Seon-Jin; Park, Jae-Young; Kim, Woo-Gon
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.3 / pp.251-257 / 2011
  • This paper deals with the statistical properties of the parameters B and q in the creep crack growth rate (CCGR) law, da/dt=B$(C^*)^q$, for Mod. 9Cr-1Mo (ASME Gr.91) steel, which is considered a candidate material for fabricating next-generation nuclear reactors. The CCGR data were obtained by creep crack growth (CCG) tests performed on 1/2-inch compact tension (CT) specimens under an applied load of 5000 N at a temperature of $600^{\circ}C$. The CCG behavior was analyzed statistically using the empirical relationship between the CCGR, da/dt, and the creep fracture mechanics parameter $C^*$. The B and q values were determined for each specimen by least-squares fitting. The probability distribution functions of B and q were investigated using the normal, log-normal, and Weibull distributions. As far as this study is concerned, B and q appear to follow the log-normal and Weibull distributions, respectively. Moreover, a strong positive linear correlation was found between B and q.
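
The two statistical steps, per-specimen least-squares fitting of the CCG law in log-log space and distribution fitting of the resulting B and q samples, might be sketched as follows, with synthetic data standing in for the Gr.91 test records; all generated values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
B_samples, q_samples = [], []

# Synthetic stand-in for per-specimen CCG test records
for _ in range(20):
    c_star = np.logspace(-2, 0, 15)                    # C* values (arbitrary units)
    true_B = 0.05 * rng.lognormal(0.0, 0.2)
    true_q = 0.8 + rng.normal(0.0, 0.05)
    da_dt = true_B * c_star**true_q * rng.lognormal(0.0, 0.1, c_star.size)

    # Least-squares fit in log-log space: log(da/dt) = log B + q log C*
    q_hat, logB_hat = np.polyfit(np.log(c_star), np.log(da_dt), 1)
    B_samples.append(np.exp(logB_hat))
    q_samples.append(q_hat)

# Distribution fitting (the paper compares normal / log-normal / Weibull);
# here only a KS check of the log-normal hypothesis for B is shown
params = stats.lognorm.fit(B_samples, floc=0)
ks = stats.kstest(B_samples, "lognorm", args=params)
print(f"log-normal fit for B: KS = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
print(f"corr(B, q) = {np.corrcoef(B_samples, q_samples)[0, 1]:+.2f}")
```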

Parameter Estimation and Analysis of Extreme Highest Tide Level in Marginal Seas around Korea (한국 연안 최극 고조위의 매개변수 추정 및 분석)

  • Jeong, Shin-Taek; Kim, Jeong-Dae; Ko, Dong-Hui; Yoon, Gil-Lim
    • Journal of Korean Society of Coastal and Ocean Engineers / v.20 no.5 / pp.482-490 / 2008
  • For the design of a coastal or harbor structure, one of the most important environmental factors is the appropriate extreme highest tide level condition. In particular, information on the distribution of extreme highest tide levels is essential for reliability design. In this paper, 23 sets of extreme highest tide level data obtained from the National Oceanographic Research Institute (NORI) were analyzed. The probability distributions considered in this research were the Generalized Extreme Value (GEV), Gumbel, and Weibull distributions. For each of these distributions, three parameter estimation methods, i.e. the method of moments, maximum likelihood, and probability weighted moments, were applied. Chi-square and Kolmogorov-Smirnov goodness-of-fit tests were performed, and the assumed distributions were accepted at the 95% confidence level. The Gumbel distribution, which best fit 22 of the tidal stations, was selected as the most probable parent distribution, and the optimally estimated parameters and extreme highest tide levels for various return periods are presented. The extreme values for Incheon, Cheju, Yeosu, Pusan, and Mukho estimated by Shim et al. (1992) are lower than those of this study.
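
A sketch of the fitting-and-testing procedure using synthetic annual maxima in place of the NORI records; scipy's maximum-likelihood `fit` stands in for the three estimation methods the paper compares, and all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic annual-maximum tide levels [cm] (stand-in for the NORI records)
annual_max = stats.gumbel_r(loc=850.0, scale=25.0).rvs(size=40, random_state=rng)

# Candidate distributions, fitted by MLE and screened with a KS test
candidates = {"GEV": stats.genextreme, "Gumbel": stats.gumbel_r,
              "Weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(annual_max)
    ks = stats.kstest(annual_max, dist.name, args=params)
    print(f"{name:8s} KS = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

# Return level for a T-year return period from the fitted Gumbel distribution
T = 50
loc, scale = stats.gumbel_r.fit(annual_max)
level = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
print(f"{T}-year extreme highest tide level: {level:.1f} cm")
```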

Probabilistic Analysis of Blasting Loads and Blast-Induced Rock Mass Responses in Tunnel Excavation (터널발파로 인한 굴착선주변 암반거동의 확률론적 연구)

  • 이인모; 박봉기; 박채우
    • Journal of the Korean Geotechnical Society / v.20 no.4 / pp.89-102 / 2004
  • The blasting pressure wave generated under a decoupled-charge condition is a function of the peak blasting pressure, the rise time, and a wave-shape function. The peak blasting pressure and the rise time are in turn functions of explosive and rock properties. The probabilistic distributions of the explosive and rock properties were derived from the results of property tests. Since these distributions were normal, the peak blasting pressure and the rise time can also be regarded as normally distributed. Parameter and uncertainty analyses were performed to identify the most influential parameters affecting the peak blasting pressure and the rise time. Although the parameter analyses found the explosive properties to be the most influential, the uncertainty analysis showed that the rock properties, rather than the explosive properties, constituted the major uncertainties in estimating the peak blasting pressure and the rise time. The damage and overbreak of the remaining rock around the excavation line induced by blasting were evaluated by dynamic numerical analysis. A user subroutine to estimate the rock damage was coded based on continuum damage mechanics and linked to the commercial program 'ABAQUS/Explicit'. The results of the dynamic numerical analysis showed that the rock damage generated by the initiation of the stopping holes was larger than that from the initiation of the contour holes. Several methods to minimize this damage were proposed, such as relocating the stopping holes and a more detailed subdivision of the rock classification. It was found that fracture probability criteria and fractured zones could be distinctly identified by applying fuzzy-random probability.
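
The probabilistic treatment of the peak pressure can be illustrated by propagating normally distributed explosive and rock properties through a simple pressure relation by Monte Carlo. The Chapman-Jouguet approximation and the lumped "rock factor" below are stand-ins chosen for this sketch, not the paper's model, and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

# Explosive and rock properties as normal distributions (values assumed)
det_velocity = rng.normal(4000.0, 200.0, N)      # detonation velocity [m/s]
exp_density = rng.normal(1200.0, 50.0, N)        # explosive density [kg/m^3]
rock_factor = rng.normal(0.5, 0.10, N)           # lumped rock transmission factor

# Chapman-Jouguet detonation pressure approximation, P = rho * D^2 / 4
p_peak = exp_density * det_velocity**2 / 4.0 * rock_factor   # Pa

print(f"peak pressure: mean = {p_peak.mean()/1e9:.2f} GPa, "
      f"std = {p_peak.std()/1e9:.2f} GPa")

# Crude uncertainty analysis: correlation of each input with the output
for name, x in [("detonation velocity", det_velocity),
                ("explosive density", exp_density),
                ("rock factor", rock_factor)]:
    print(f"corr({name:19s}, p_peak) = {np.corrcoef(x, p_peak)[0, 1]:+.2f}")
```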

A simulation model for evaluating serological monitoring program of Aujeszky's disease (확률모형을 이용한 오제스키병 혈청학적 모니터링 프로그램 평가)

  • Chang, Ki-Yoon; Pak, Son-Il; Park, Choi-Kyu; Lee, Kyoung-Ki; Joo, Yi-Seok
    • Korean Journal of Veterinary Research / v.49 no.2 / pp.149-155 / 2009
  • The objective of this study was to analyze data from the planned national serological monitoring program for Aujeszky's disease (AD) using a simulation model, in order to evaluate the probable outcomes expected in samples drawn from simulated herds at predefined within-herd and herd prevalences. Additionally, the animal- and herd-level prevalences estimated by the stochastic simulation model, based on the distributions of the proportion of infected herds and test-positive animals, were compared with data from a national serological survey in 2006, in which 106,762 fattening pigs from 5,325 herds were tested for AD using a commercial ELISA kit. A fixed value of 95% was used for the test sensitivity, and the specificity was modeled with a minimum, most likely, and maximum of 95, 97, and 99%, respectively. The within-herd prevalence and the herd prevalence were modeled using Pert and Triangular distributions, respectively, each with minimum, most likely, and maximum point values. In all calculations, a population size of 1,000 was used owing to the lack of representative information. The mean numbers of infected herds and true test-positives were estimated to be 27 herds (median = 25; 95th percentile 44) and 214 pigs (median = 196; 95th percentile 423), respectively. When testing 20 pigs (the mean of the 2006 survey) in each herd, there was a 3.3% probability that false-positive reactions due to the less-than-100% specificity of the ELISA test would be detected. The model showed prevalences of 0.21% (99th percentile 0.50%) and 0.5% (99th percentile 0.99%) at the animal and herd levels, respectively. These rates were quite similar to the data from the 2006 survey (0.62% versus 0.83%). The overall mean herd-level sensitivity of the 2006 survey for fattening pigs was 99.9%, with only a 0.2% probability of failing to detect at least one infected herd.
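
A minimal sketch of this kind of stochastic monitoring model. scipy has no Pert distribution, so one is built here from a scaled Beta; the sensitivity, specificity range, population size, and sample size follow the abstract, while the prevalence point values are assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

def pert(a, b, c):
    """PERT(min a, mode b, max c) expressed as a scaled Beta distribution."""
    alpha = 1.0 + 4.0 * (b - a) / (c - a)
    beta = 1.0 + 4.0 * (c - b) / (c - a)
    return stats.beta(alpha, beta, loc=a, scale=c - a)

n_herds, n_sampled = 1000, 20          # population size and sample per herd
sensitivity = 0.95                     # fixed, as in the abstract
specificity = pert(0.95, 0.97, 0.99).rvs(size=n_herds, random_state=rng)

# Herd and within-herd prevalence (Triangular and Pert; point values assumed)
herd_prev = stats.triang(c=0.5, loc=0.01, scale=0.04).rvs(random_state=rng)
within_prev = pert(0.02, 0.05, 0.15).rvs(size=n_herds, random_state=rng)

infected = rng.random(n_herds) < herd_prev
prev = np.where(infected, within_prev, 0.0)

# Probability that one sampled pig in a given herd tests positive
p_pos = prev * sensitivity + (1.0 - prev) * (1.0 - specificity)
positives = rng.binomial(n_sampled, p_pos)

print(f"infected herds:                {infected.sum()}")
print(f"herds with >= 1 test-positive: {(positives > 0).sum()}")
print(f"apparent animal-level prevalence: "
      f"{positives.sum() / (n_herds * n_sampled):.2%}")
```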

An Adaptive Data Compression Algorithm for Video Data (사진데이타를 위한 한 Adaptive Data Compression 방법)

  • 김재균
    • Journal of the Korean Institute of Telematics and Electronics / v.12 no.2 / pp.1-10 / 1975
  • This paper presents an adaptive data compression algorithm for video data. The coding complexity due to the high correlation in the given data sequence is alleviated by coding the difference sequence rather than the data sequence itself. Adaptation to the nonstationary statistics of the data is confined within a code set, which consists of two constant-length codes and six modified Shannon-Fano codes. It is assumed that the probability distributions of the difference data sequence and of the data entropy are Laplacian and Gaussian, respectively. The adaptive coding performance is compared for two code-selection criteria: the entropy and $P_r$[difference value = 0] = $P_0$. It is shown that a compression ratio of 2:1 is achievable with adaptive coding. The gain of adaptive coding over fixed coding is shown to be about 10% in compression ratio and 15% in code efficiency. In addition, $P_0$ is found to be not only a convenient criterion for code selection but also nearly as efficient a parameter as the entropy.

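A toy illustration of the two ideas in the abstract, difference coding and per-block code selection by $P_0$. The synthetic data, block size, and thresholds are assumptions, and the printed labels merely stand in for the paper's two constant-length and six modified Shannon-Fano codes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic correlated samples: a smooth region followed by a busy region
smooth = np.cumsum(rng.choice([-1, 0, 1], size=512, p=[0.15, 0.7, 0.15]))
busy = np.cumsum(rng.integers(-8, 9, size=512))
data = np.concatenate([smooth, busy]) + 128
diffs = np.diff(data, prepend=data[:1])   # difference sequence, peaked near 0

def entropy_bits(block):
    """Empirical entropy of a block, in bits per symbol."""
    _, counts = np.unique(block, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Per-block code selection driven by P0 = P[difference value = 0]
for start in range(0, diffs.size, 256):
    block = diffs[start:start + 256]
    p0 = float(np.mean(block == 0))
    if p0 > 0.5:                          # thresholds are assumptions
        code = "short variable-length code"
    elif p0 > 0.2:
        code = "longer variable-length code"
    else:
        code = "constant-length code"
    print(f"block @ {start:4d}: P0 = {p0:.2f}, "
          f"H = {entropy_bits(block):.2f} bits/symbol -> {code}")
```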