• Title/Summary/Keyword: probabilistic estimates

Search Result 90

Evaluation of the Uncertainties in Rainfall-Runoff Model Using Meta-Gaussian Approach (Meta-Gaussian 방법을 이용한 강우-유출 모형에서의 불확실성 산정)

  • Kim, Byung-Sik;Kim, Bo-Kyung;Kwon, Hyun-Han
    • Journal of Wetlands Research / v.11 no.1 / pp.49-64 / 2009
  • Rainfall-runoff models are used for efficient management, distribution, planning, and design of water resources in accordance with the hydrologic cycle. The models simplify the transformation of rainfall into runoff through different processes, including evaporation, transpiration, interception, and infiltration. Because the models simplify complex physical processes, gaps exist between model output and actual rainfall events. For more accurate simulation, models appropriate to the analysis goals are selected and reliable long-term hydrological data are collected. Nevertheless, uncertainty is inherent in models, so the reliability of their simulation results must be evaluated. A number of studies have evaluated the uncertainty ingrained in rainfall-runoff models. In this paper, the Meta-Gaussian method proposed by Montanari and Brath (2004) was used to assess the uncertainty of simulation outputs from rainfall-runoff models. The method, which estimates the upper and lower bounds of the confidence interval from the probability distribution of a model's error, can quantify the global uncertainty of hydrological models. Here, the Meta-Gaussian method was applied to analyze the uncertainty of simulated runoff from Vflo™, a physically based distributed model, and HEC-HMS, a conceptual lumped model.

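The meta-Gaussian idea summarized above — transform simulated and observed flows to Gaussian scores, exploit the bivariate-normal conditional distribution of the model error, and back-transform the resulting bounds — can be sketched in a few lines. This is a minimal illustration on synthetic data, not the paper's implementation; the error model and series are assumed.

```python
import numpy as np
from scipy import stats

def nqt(x):
    """Normal quantile transform: map sample ranks to standard-normal scores."""
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, size=500)           # synthetic "observed" runoff
sim = obs * rng.lognormal(0.0, 0.3, size=500)  # model output with multiplicative error

z_obs, z_sim = nqt(obs), nqt(sim)
rho = np.corrcoef(z_sim, z_obs)[0, 1]

# In Gaussian space, z_obs | z_sim ~ N(rho * z_sim, 1 - rho**2),
# so 95% confidence bounds follow directly.
sd = np.sqrt(1.0 - rho ** 2)
lo_z, hi_z = rho * z_sim - 1.96 * sd, rho * z_sim + 1.96 * sd

def inv_nqt(z, sample):
    """Back-transform Gaussian scores through the empirical quantiles of `sample`."""
    return np.quantile(sample, stats.norm.cdf(z))

lower, upper = inv_nqt(lo_z, obs), inv_nqt(hi_z, obs)
coverage = float(np.mean((obs >= lower) & (obs <= upper)))
```

On synthetic data the empirical coverage of the 95% band comes out close to its nominal level, which is the basic check the meta-Gaussian construction is meant to pass.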

Health Risk Assessment of Cryptosporidium in Tap Water in Korea (우리나라 먹는물의 크립토스포리디움에 의한 건강위해도 평가 연구)

  • Lee, Mok-Young;Park, Sang-Jung;Cho, Eun-Joo;Park, Su-Jeong;Han, Sun-Hee;Kwon, Oh-Sang
    • Journal of Environmental Health Sciences / v.39 no.1 / pp.32-42 / 2013
  • Objectives: Cryptosporidium, a protozoan parasite, has been recognized as a frequent cause of waterborne disease due to its extremely strong resistance to chlorine disinfection. Although there has as yet been no report of a Cryptosporidium outbreak through drinking water in Korea, it is important to estimate the health risk of Cryptosporidium in water supply systems because of various infection cases in humans and domestic animals and frequent reports of oocyst detection in water environments. Methods: This study evaluated the annual infection risk of Cryptosporidium in tap water using the quantitative microbial risk assessment technique. Exposure assessment was based on the results of a national survey of Cryptosporidium in the water sources of 97 large-scale water purification plants in Korea, water treatment efficacy, and daily unboiled tap water consumption. The US Environmental Protection Agency's estimates of the mean likelihood of infection from ingesting one oocyst were applied for effect assessment. Results: Using probabilistic methods, the mean annual risk of cryptosporidiosis infection from tap water intake was estimated to fall within the range of 2.3×10⁻⁴ to 1.0×10⁻³ (median 5.7×10⁻⁴). The risk when using river sources was predicted to be four times higher than with lake sources. With 0.5-log higher removal efficacy, the risk was estimated to be 1.8×10⁻⁴, i.e., lowered to about one-third. Conclusions: These estimates can be compared with acceptable risk and then used to determine the adequacy and priority of drinking water quality strategies such as the introduction of new treatment technology.
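The Monte Carlo exposure chain described here (source concentration → treatment removal → daily dose → dose-response → annual risk) can be sketched as follows. All distribution parameters and the single-oocyst infectivity `r` are illustrative assumptions, not the survey's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed (illustrative) input distributions -- not the national-survey values:
source = rng.lognormal(np.log(0.05), 1.0, n)   # oocysts/L in source water
log_removal = rng.normal(3.0, 0.3, n)          # treatment log10 removal efficacy
tap = source / 10.0 ** log_removal             # oocysts/L at the tap
intake = rng.lognormal(np.log(0.5), 0.5, n)    # L/day of unboiled tap water

r = 0.09  # assumed single-oocyst infectivity (exponential dose-response parameter)
p_daily = 1.0 - np.exp(-r * tap * intake)      # daily infection probability
p_annual = 1.0 - (1.0 - p_daily) ** 365        # annual infection risk
median_risk = float(np.median(p_annual))
```

Reporting the median and percentiles of `p_annual`, rather than a single point value, is what makes the assessment probabilistic in the sense the abstract describes.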

Development of a software framework for sequential data assimilation and its applications in Japan

  • Noh, Seong-Jin;Tachikawa, Yasuto;Shiiba, Michiharu;Kim, Sun-Min;Yorozu, Kazuaki
    • Proceedings of the Korea Water Resources Association Conference / 2012.05a / pp.39-39 / 2012
  • Data assimilation techniques have received growing attention due to their ability to improve prediction in various areas. Despite this potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited because most hydrologic modelling software is based on a deterministic approach. In this study, we developed a hydrological modelling framework for sequential data assimilation, MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system for hydrological modelling. Unlike process-based modelling frameworks, this software framework benefits from its object-oriented design, flexibly representing hydrological processes without any change to the main library. In this framework, sequential data assimilation based on particle filters is available for any hydrologic model, considering various sources of uncertainty originating from input forcing, parameters, and observations. The particle filter is a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles, without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, taking advantage of high-performance computing (HPC) systems. We applied this framework to several catchments in Japan using a distributed hydrologic model, assessing the uncertainty of model parameters and radar rainfall estimates simultaneously in sequential data assimilation.

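A particle filter of the kind described — propagating an ensemble, weighting by an observation likelihood, and resampling — can be sketched for a toy one-parameter linear reservoir. This schematic sampling-importance-resampling (SIR) filter on synthetic data is independent of MPI-OHyMoS itself; the model and noise levels are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

def step(storage, k, rain):
    """One step of a linear reservoir: outflow = k * storage."""
    q = k * storage
    return storage + rain - q, q

# Synthetic truth and noisy discharge observations
k_true, n_t, n_p, obs_sd = 0.3, 50, 1000, 0.2
rain = rng.gamma(2.0, 2.0, size=n_t)
s, q_obs = 10.0, []
for t in range(n_t):
    s, q = step(s, k_true, rain[t])
    q_obs.append(q + rng.normal(0.0, obs_sd))

# SIR particle filter over the storage state and the parameter k jointly
S = 10.0 + rng.normal(0.0, 1.0, n_p)
K = rng.uniform(0.05, 0.8, n_p)
for t in range(n_t):
    S, Q = step(S, K, rain[t])
    w = np.exp(-0.5 * ((q_obs[t] - Q) / obs_sd) ** 2) + 1e-300  # Gaussian likelihood
    idx = rng.choice(n_p, size=n_p, p=w / w.sum())              # resample by weight
    S = S[idx]
    K = K[idx] + rng.normal(0.0, 0.005, n_p)                    # jitter against degeneracy

k_est = float(K.mean())
```

After assimilating the observation series, the particle cloud for `K` concentrates near the true value — the same mechanism the framework uses to estimate parameter and rainfall uncertainty jointly.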

COLLAPSE PRESSURE ESTIMATES AND THE APPLICATION OF A PARTIAL SAFETY FACTOR TO CYLINDERS SUBJECTED TO EXTERNAL PRESSURE

  • Yoo, Yeon-Sik;Huh, Nam-Su;Choi, Suhn;Kim, Tae-Wan;Kim, Jong-In
    • Nuclear Engineering and Technology / v.42 no.4 / pp.450-459 / 2010
  • The present paper investigates the collapse pressure of cylinders of intermediate thickness subjected to external pressure, based on detailed elastic-plastic finite element (FE) analyses. The effect of the initial ovality of the tube on the collapse pressure was explicitly considered in the FE analyses. Based on the FE results, an analytical yield locus considering the interaction between plastic collapse and local instability due to initial ovality was also proposed. Collapse pressure values based on the proposed yield locus agree well with the FE results, verifying the validity of the proposed yield locus for the thickness range of interest. Moreover, the partial safety factor concept from structural reliability theory was applied to the proposed collapse pressure estimation model to rank the importance of the parameters governing the collapse of cylinders under external pressure. This application showed the yield strength to be the most sensitive parameter, while the initial ovality of the tube was not very influential in the proposed model. The present deterministic and probabilistic results are expected to be useful in the design and maintenance of cylinders subjected to external pressure with initial ovality, such as the once-through type steam generator.
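The partial-safety-factor ranking rests on how strongly each random input drives the collapse pressure. A crude Monte Carlo sensitivity check of the same flavor can be sketched with an illustrative thin-shell collapse formula; the paper's actual yield locus and input distributions are not reproduced here, and all numbers below are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def p_collapse(sy, t, D, ov, a=3.0):
    """Illustrative collapse model: thin-shell term knocked down by ovality."""
    return 2.0 * sy * (t / D) / (1.0 + a * ov)

# Assumed input distributions (not the paper's data)
sy = rng.normal(350e6, 25e6, n)     # yield strength [Pa]
t = rng.normal(4.0e-3, 1.0e-4, n)   # wall thickness [m]
D = rng.normal(19e-3, 2.0e-4, n)    # outer diameter [m]
ov = rng.uniform(0.0, 0.01, n)      # initial ovality [-]

p = p_collapse(sy, t, D, ov)

def s1(x):
    """Crude first-order sensitivity: squared correlation with the output."""
    return np.corrcoef(x, p)[0, 1] ** 2

sens = {name: s1(x) for name, x in [("sy", sy), ("t", t), ("D", D), ("ov", ov)]}
```

With these assumed coefficients of variation, yield strength dominates the output variance while ovality contributes little — the qualitative ordering the abstract reports.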

Comparison Study of Kernel Density Estimation according to Various Bandwidth Selectors (다양한 대역폭 선택법에 따른 커널밀도추정의 비교 연구)

  • Kang, Young-Jin;Noh, Yoojeong
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.3 / pp.173-181 / 2019
  • To estimate a probability distribution function from experimental data, kernel density estimation (KDE) is most often used when data are insufficient. The distribution estimated with KDE depends on the bandwidth selector, which smooths or overfits the kernel estimator to the experimental data. In this study, various bandwidth selectors, such as Silverman's rule of thumb, the rule using adaptive estimates, and the oversmoothing rule, were compared for accuracy and conservativeness. Statistical simulations were carried out using assumed true models, including unimodal and multimodal distributions, and the accuracy and conservativeness of the estimated distribution functions were compared across various data sets. In addition, it was verified how distributions estimated with KDE under different bandwidth selectors affect reliability analysis results through simple reliability examples.
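Two of the named selectors are simple closed-form rules, so the comparison can be sketched directly. The sketch below assumes a Gaussian kernel; 1.144 is Terrell's oversmoothing constant for that kernel, and the data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.normal(0.0, 1.0, 200)
n, sd = len(data), data.std(ddof=1)

# Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)
iqr = np.subtract(*np.percentile(data, [75, 25]))
h_silverman = 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

# Terrell's oversmoothing rule (Gaussian kernel): h = 1.144 * sd * n^(-1/5)
h_oversmooth = 1.144 * sd * n ** (-0.2)

def kde_pdf(x, h):
    """Gaussian-kernel density estimate evaluated at points x."""
    return np.mean(stats.norm.pdf((x - data[:, None]) / h), axis=0) / h

xs = np.linspace(-4.0, 4.0, 201)
pdf_rot = kde_pdf(xs, h_silverman)       # rule-of-thumb bandwidth
pdf_smooth = kde_pdf(xs, h_oversmooth)   # wider bandwidth: smoother, more conservative
```

The oversmoothing bandwidth is by construction an upper bound, so it always yields the smoother (and in that sense more conservative) density of the two — the trade-off the study quantifies.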

Fragility Analysis Method Based on Seismic Performance of Bridge Structure considering Earthquake Frequencies (지진 진동수에 따른 교량의 내진성능기반 취약도 해석 방법)

  • Lee, Dae-Hyoung;Chung, Young-Soo;Yang, Dong-Wook
    • Journal of the Korea Concrete Institute / v.21 no.2 / pp.187-197 / 2009
  • This paper presents a systematic approach for estimating fragility curves and damage probability matrices for different earthquake frequencies. Fragility curves and damage probabilities indicate the probability that a structure will sustain different degrees of damage at different ground motion levels. Seismic damage must be evaluated probabilistically because of the uncertainty of earthquakes. In contrast to previous approaches, this paper presents a method based on nonlinear dynamic analysis of the structure using empirical data. The probability of damage is presented as a function of peak ground acceleration, and the probability of five damage levels is estimated for a prestressed concrete (PSC) bridge pier subjected to a given ground acceleration. At each level, 100 artificial earthquake motions were generated according to soil conditions, and nonlinear time-domain analyses were performed for the damage states of PSC bridge pier structures. These damage states are described by displacement ductility derived from seismic performance in existing research results. Using the damage states and ground motion parameters, fragility curves for PSC bridge piers with five types of dominant frequencies were constructed assuming a log-normal distribution. The effect of dominant frequency on the fragility curves was found to be significant.
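The construction step — many motions per intensity level, binary damage states, a log-normal fragility — maps onto a standard maximum-likelihood fit. A sketch on synthetic outcomes; the PGA levels, median, and dispersion are invented for illustration and are not the paper's values.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)

# Synthetic damage data: 100 motions at each PGA level, echoing the abstract's setup
pga = np.repeat([0.1, 0.2, 0.3, 0.4, 0.5], 100)   # peak ground acceleration [g]
theta_true, beta_true = 0.3, 0.4                  # assumed median and dispersion
p_true = stats.norm.cdf(np.log(pga / theta_true) / beta_true)
damaged = rng.random(pga.size) < p_true           # binary damage outcomes

def negloglik(params):
    """Negative log-likelihood of the log-normal fragility Phi(ln(pga/theta)/beta)."""
    theta, beta = params
    p = np.clip(stats.norm.cdf(np.log(pga / theta) / beta), 1e-12, 1 - 1e-12)
    return -np.sum(np.where(damaged, np.log(p), np.log(1.0 - p)))

res = optimize.minimize(negloglik, x0=[0.25, 0.5],
                        bounds=[(0.01, 2.0), (0.05, 2.0)])
theta_hat, beta_hat = res.x
```

Repeating the fit once per damage level yields the family of five fragility curves the paper constructs.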

Optimization of Contaminated Land Investigation based on Different Fitness-for-Purpose Criteria (조사목적별 기준에 부합하는 오염부지 조사방법의 최적화 방안에 관한 연구)

  • Jong-Chun Lee;Michael H. Ramsey
    • Economic and Environmental Geology / v.36 no.3 / pp.191-200 / 2003
  • Investigations of land contaminated, for example, by heavy metals from mining activities or by hydrocarbons from oil spillage should be planned on the basis of specific fitness-for-purpose (FFP) criteria. An FFP criterion is site-specific or varies with the situation, and it determines not only the data quality but also the decision quality. Limiting factors on these qualities may include the total budget for the investigation, regulatory guidance, or an expert's subjective fitness-for-purpose criterion. This paper deals with planning investigation methods that can satisfy each suggested FFP criterion on the basis of economic factors and data quality. To this end, a probabilistic loss function was applied to derive the cost-effective investigation method that balances the measurement uncertainty, which expresses the degree of data quality, against the decision quality. In addition, planning methods are suggested for investigations whose objective is not classification of the land but simply estimation of the mean contaminant concentration at the site (e.g., for use in risk assessment). Furthermore, an efficient allocation of resources between sampling and analysis was devised. These methods were applied to two contaminated sites in the UK to test the validity of each method.
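The probabilistic loss-function idea — trading measurement expenditure against the expected cost of a wrong classification decision — can be sketched in a few lines. All costs, the measurement standard deviation, and the action level below are assumed for illustration, not taken from the paper's UK case studies.

```python
import numpy as np
from scipy import stats

# Assumed (illustrative) figures
true_conc, action_level = 95.0, 100.0   # mg/kg; site truly below the action level
sigma_single = 20.0                     # std of one measurement (sampling + analysis)
cost_per_sample = 50.0                  # cost of one sample plus analysis
cost_false_positive = 50_000.0          # cost of needlessly remediating a clean site

def expected_loss(n):
    """Measurement cost plus expected misclassification loss for n samples."""
    se = sigma_single / np.sqrt(n)                              # uncertainty of the mean
    p_wrong = 1.0 - stats.norm.cdf((action_level - true_conc) / se)
    return n * cost_per_sample + p_wrong * cost_false_positive

ns = np.arange(1, 200)
losses = np.array([expected_loss(n) for n in ns])
n_opt = int(ns[np.argmin(losses)])
```

The minimizer `n_opt` is the fitness-for-purpose sampling effort: below it, decision errors dominate the loss; above it, measurement spending does.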

Improvement of Hydrologic Dam Risk Analysis Model Considering Uncertainty of Hydrologic Analysis Process (수문해석과정의 불확실성을 고려한 수문학적 댐 위험도 해석 기법 개선)

  • Na, Bong-Kil;Kim, Jin-Young;Kwon, Hyun-Han;Lim, Jeong-Yeul
    • Journal of Korea Water Resources Association / v.47 no.10 / pp.853-865 / 2014
  • Hydrologic dam risk analysis depends on complex hydrologic analyses, in that probabilistic relationships need to be established to quantify the various uncertainties associated with the modeling process and inputs. However, systematic approaches to uncertainty analysis for hydrologic risk analysis have not yet been addressed. In this paper, two major innovations are introduced to address this situation. The first is the use of a hierarchical Bayesian model-based regional frequency analysis to better convey the uncertainties associated with the parameters of the probability density function to the dam risk analysis. The second is the use of a Bayesian model coupled with the HEC-1 rainfall-runoff model to estimate posterior distributions of the model parameters. A reservoir routing analysis with the existing operation rule was performed to convert the inflow scenarios into water surface level scenarios. Performance functions for the dam risk model were finally employed to carry out the hydrologic dam risk analysis. An application to a dam in South Korea illustrates how the proposed approach can lead to potentially reliable estimates of dam safety and an assessment of their sensitivity to the initial water surface level.
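The routing step that converts inflow scenarios into water-level scenarios can be sketched as simple level-pool routing. The reservoir geometry, weir law, and flood hydrograph below are invented for illustration; the paper uses the dam's actual storage curve and operation rule.

```python
import numpy as np

def route(inflow, dt=3600.0, area=5.0e6, crest=10.0, c=1.8, width=30.0):
    """Level-pool routing: dS/dt = inflow - outflow, weir outflow above the crest.

    area: reservoir surface area [m^2]; outflow = c * width * head**1.5 [m^3/s].
    """
    level, levels = 9.0, []          # initial water surface level [m]
    for q_in in inflow:
        head = max(level - crest, 0.0)
        q_out = c * width * head ** 1.5
        level += (q_in - q_out) * dt / area
        levels.append(level)
    return np.array(levels)

t = np.arange(72)
inflow = 200.0 * np.exp(-0.5 * ((t - 24.0) / 8.0) ** 2)  # synthetic flood [m^3/s]
levels = route(inflow)
peak_level = float(levels.max())
```

Running this routing over many sampled inflow scenarios, and comparing each peak level against the performance function (e.g., overtopping), yields the hydrologic risk estimate.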

Prototype-based Cost Estimating Model for Building Interior Construction in Design Development Stage (프로토타입기반 기본설계단계 건축마감공사비 산정 모델)

  • Kim, Hae-Gon;Park, Sung-Chul;Hong, Tae-Hoon;Hyun, Chang-Taek;Koo, Kyo-Jin
    • Korean Journal of Construction Engineering and Management / v.8 no.4 / pp.110-118 / 2007
  • To set the owner's budget for building construction in the predesign stage, probabilistic cost-estimating methodologies are often studied; however, these parameter-based conceptual estimating methodologies are of limited practical use because they can hardly link design decision-making with cost estimating and control. Moreover, if the detailed estimate after detailed design exceeds the budget, the level of interior design is controlled locally and arbitrarily before the design is fixed. This research proposes a prototype-based cost-estimating model for building interior construction that makes it easy to estimate the interior cost in step with design decision-making and supports the evaluation of design alternatives in the schematic design and design development stages for office buildings. The model decomposes the building along the design process with an Element Breakdown Structure, presents a design alternative by selecting the elements of each room from a database of historical office building prototypes, and estimates the cost. Two case studies are presented to validate the model's effectiveness as a linking tool integrating design and construction data and its applicability to practical design.
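The select-a-prototype-per-element-and-sum mechanics of such a model can be sketched with a toy database. The elements, prototype names, and unit costs below are hypothetical, not the paper's historical data.

```python
# Hypothetical prototype database: element -> {prototype name: unit cost per m^2}
prototypes = {
    "floor": {"carpet tile": 45_000, "access floor": 120_000},
    "wall": {"paint": 15_000, "glass partition": 180_000},
    "ceiling": {"gypsum board": 35_000, "metal panel": 90_000},
}

def room_cost(selection, area_m2):
    """Sum the chosen prototype's unit cost for each element, scaled by room area."""
    return sum(prototypes[elem][proto] for elem, proto in selection.items()) * area_m2

# One design alternative for a 200 m^2 office room
office = {"floor": "access floor", "wall": "glass partition", "ceiling": "gypsum board"}
cost = room_cost(office, area_m2=200.0)
```

Swapping a single prototype in `office` and re-running `room_cost` is exactly the design-alternative comparison the model is meant to support during design development.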

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, with new and used cars of different models all substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009 to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing a modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as the focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and thus suggests a less aggressive used car price discount in response to new car rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, remains a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure generated by typical car dealerships. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.

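The two-stage structure described above (car-model nest first, new versus used within the nest) can be sketched as a standard nested logit with inclusive values; a dissimilarity parameter λ < 1 encodes the higher within-nest substitutability the study found. The utilities and λ below are illustrative, not estimates from the PIN data.

```python
import numpy as np

def nested_logit(v, lam):
    """Two-level nested logit choice probabilities.

    v: dict nest -> within-nest utilities (here [new, used]);
    lam: dissimilarity parameter in (0, 1]; lam < 1 means alternatives
    within a nest are closer substitutes than alternatives across nests.
    """
    # Inclusive value of each nest: lam * log-sum-exp of scaled utilities
    iv = {m: lam * np.log(np.exp(np.asarray(u) / lam).sum()) for m, u in v.items()}
    denom = sum(np.exp(val) for val in iv.values())
    probs = {}
    for m, u in v.items():
        within = np.exp(np.asarray(u) / lam)
        probs[m] = (np.exp(iv[m]) / denom) * within / within.sum()  # P(nest)*P(alt|nest)
    return probs

# Illustrative utilities for [new, used] in two model nests
v = {"Jetta": [1.0, 0.4], "Elantra": [0.8, 0.6]}
p = nested_logit(v, lam=0.5)
total = float(sum(arr.sum() for arr in p.values()))
```

Perturbing one alternative's utility (e.g., a rebate on the new Jetta) and recomputing the probabilities shows the asymmetric substitution pattern that the IIA model, by construction, cannot produce.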