• Title/Summary/Keyword: statistical approach


An approach of evaluation and mechanism study on the high and steep rock slope in water conservancy project

  • Yang, Meng;Su, Huaizhi;Wen, Zhiping
    • Computers and Concrete
    • /
    • v.19 no.5
    • /
    • pp.527-535
    • /
    • 2017
  • In this study, an aging deformation statistical model for a unique high and steep rock slope was proposed, which better reflects the aging characteristic of the slope deformation. The slope displacement was affected by multiple environmental factors on multiple scales and displayed the same tendency as the rising water level. The statistical model of the high and steep rock slope, including non-aging factors, was set up based on previous analyses and on the study of the deformation and residual tendency. The rule and importance of the water level factor as a non-aging unit were analyzed. A partitioned statistical model and a mutation model were established for the comprehensive cumulative displacement velocity, based on monitoring under multiple factors and multiple parameters. A spatial model was also developed to reflect and predict the whole and sectional deformation character by combining aging, deformation and space coordinates. A neural network model was built to fit and predict the deformation with a high degree of precision by capturing its complexity and randomness. A three-dimensional finite element model of the slope was applied to study the structural character using numerical simulations. Further, a three-dimensional finite element model of the slope and dam was developed, and the whole deformation state was analyzed. This study is expected to provide a powerful and systematic method for analyzing very high, important and dangerous slopes.

Statistical Approach to Discovery of Factors Impacting on Emergence of Blood Cancers in Iran

  • Zand, Ali Mohammad;Imani, Saber;Saadati, Mojtaba;Ziaei, Robabeh;Borna, Hojat;Zaefizadeh, Mohammad;Shazad, Babak
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.13 no.12
    • /
    • pp.5965-5967
    • /
    • 2012
  • Cancer is now a main cause of increasing mortality throughout the world. Minor alterations in the cell cycle which are inherited and not removed by apoptosis are important risk factors. Blood cancers are among the types which most readily cause death. In this study, common but important factors such as age, gender, Rh and ABO blood typing, weight, and platelet counts were analyzed for impact on blood cancers. Frequencies and distributions, correlations and the chi-square test were utilized in order to clarify the perspective of important factors. Our statistical results show males and females to have the same risk of blood cancer, but A blood type (40%) along with positive Rh (73%) had the highest risk. Low platelet counts are related to more than 80% of cases. Obesity has a statistically negligible role in blood cancer prevalence. Blood cancer cases increase during the second decade of life (45.7%), which might be because of the involvement of maturation processes.
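
The chi-square test mentioned in the abstract can be sketched as follows; the contingency table below is purely illustrative and is not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x4 contingency table: rows = Rh status, columns = ABO type
# counts among cases (illustrative numbers only).
observed = np.array([
    [40, 25, 22, 13],   # Rh positive: A, B, AB, O
    [12,  8,  6,  4],   # Rh negative: A, B, AB, O
])

# Test independence of Rh status and ABO type.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.3f}, dof={dof}, p={p:.3f}")
```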

Maximum likelihood estimation of stochastic volatility models with leverage effect and fat-tailed distribution using hidden Markov model approximation (두꺼운 꼬리 분포와 레버리지효과를 포함하는 확률변동성모형에 대한 최우추정: HMM근사를 이용한 최우추정)

  • Kim, TaeHyung;Park, JeongMin
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.4
    • /
    • pp.501-515
    • /
    • 2022
  • Despite stylized statistical features of financial returns such as fat-tailed distributions and the leverage effect, no stochastic volatility models that can explicitly capture these features have been presented in the existing frequentist approach. We propose an approximate parameterization of stochastic volatility models that can explicitly capture the fat-tailed distribution and leverage effect of financial returns, together with maximum likelihood estimation of the model using Langrock et al. (2012)'s hidden Markov model approximation in a frequentist approach. Through extensive simulation experiments and an empirical analysis, we present statistical evidence validating the efficacy and accuracy of the proposed parameterization.

Reevaluation of failure criteria location and novel improvement of 1/4 PCCV high fidelity simulation model under material uncertainty quantifications

  • Bu-Seog Ju;Ho-Young Son
    • Nuclear Engineering and Technology
    • /
    • v.55 no.9
    • /
    • pp.3493-3505
    • /
    • 2023
  • Reactor containment buildings serve as the last barrier preventing radioactive leakage due to accidents, and their safety is crucial under overpressurization conditions. Thus, Regulatory Guide (RG) 1.216 mentions the global strain in the free-field as one of the failure criteria for cylindrical prestressed concrete containment vessels (PCCV) subject to internal pressure. However, RG 1.216 refers to the free-field without specific locations for the failure criteria, and NUREG/CR-6685 mentions only the global strain corresponding to azimuth 135°, regardless of the elevations of the structure. Therefore, in order to reevaluate the failure criteria of the 1:4 scaled PCCV, a high fidelity simulation model based on the experimental test was validated in this study, and the experimental and numerical results were found to be very close to each other. In addition, to account for the material uncertainties, the Latin hypercube method was used as a statistical approach. Consequently, it was revealed that the radial displacements at various azimuths such as 120°, 135°, 150°, 180° and 210°, at elevations 4,680 mm and 6,200 mm, can represent the global deformation at the free-field, as obtained from the statistical approach.
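
As a rough sketch of how Latin hypercube sampling can generate material-parameter sets for repeated finite element runs; the parameter names and ranges here are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.stats import qmc

# Two assumed uncertain material parameters: concrete compressive strength
# [MPa] in [30, 50] and tendon yield stress [MPa] in [1600, 1900].
sampler = qmc.LatinHypercube(d=2, seed=0)
unit_samples = sampler.random(n=30)          # 30 stratified points in [0, 1)^2
samples = qmc.scale(unit_samples, [30.0, 1600.0], [50.0, 1900.0])

# Each row is one input set for a finite element analysis run.
print(samples.shape)  # (30, 2)
```

Latin hypercube stratification covers each marginal range with far fewer runs than plain Monte Carlo, which matters when every sample is an expensive simulation.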

Statistical models from weigh-in-motion data

  • Chan, Tommy H.T.;Miao, T.J.;Ashebo, Demeke B.
    • Structural Engineering and Mechanics
    • /
    • v.20 no.1
    • /
    • pp.85-110
    • /
    • 2005
  • This paper aims at formulating various statistical models for the study of ten years of Weigh-in-Motion (WIM) data collected from various WIM stations in Hong Kong. In order to study the bridge live load model, it is important to determine the mathematical distributions of different load-affecting parameters such as gross vehicle weights, axle weights, axle spacings, average daily number of trucks, etc. Each of the above parameters is analyzed by various stochastic processes in order to obtain the mathematical distributions, and the Maximum Likelihood Estimation (MLE) method is adopted to calculate the statistical parameters, expected values and standard deviations from the given samples of data. The Kolmogorov-Smirnov (K-S) test is used to check the suitability of the statistical model selected for the particular parameter, and the Monte Carlo method is used to simulate the distributions of the maximum-value stochastic processes of a series of given stochastic processes. Using the statistical analysis approach, the maximum values of gross vehicle weight and axle weight over the bridge design life have been determined, and the distribution functions of these parameters are obtained under both free-flowing and dense traffic conditions. The maximum values of bending moments and shears for a wide range of simple spans are obtained by extrapolation. It has been observed that the maximum values of gross vehicle weight and axle weight obtained from this study are very close to their legal limits in Hong Kong, which are 42 tonnes for gross weight and 10 tonnes for axle weight.
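
The fit-by-MLE-then-check-by-K-S workflow described above can be sketched with synthetic data standing in for the WIM records; the lognormal choice and its parameters are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-in for gross vehicle weights in tonnes (not real WIM data).
gvw = rng.lognormal(mean=2.8, sigma=0.35, size=1000)

# Maximum likelihood fit of a candidate lognormal distribution.
shape, loc, scale = stats.lognorm.fit(gvw, floc=0)

# Kolmogorov-Smirnov goodness-of-fit check against the fitted distribution.
ks_stat, p_value = stats.kstest(gvw, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.4f}, p = {p_value:.3f}")
```

Note that the standard K-S p-value is only approximate when the parameters were estimated from the same sample; a Lilliefors-type correction is needed for a strict test.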

A Bayesian Comparison of Two Multivariate Normal Generalized Variances

  • Kim, Hea-Jung
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2002.05a
    • /
    • pp.73-78
    • /
    • 2002
  • In this paper we develop a method for constructing a Bayesian HPD (highest probability density) interval for the ratio of two multivariate normal generalized variances. The method gives a way of comparing two multivariate populations in terms of their dispersion or spread, because the generalized variance is a scalar measure of the overall multivariate scatter. Fully parametric frequentist approaches for the interval are intractable, and thus a Bayesian HPD interval is pursued using a variant of the weighted Monte Carlo (WMC) sampling-based approach introduced by Chen and Shao (1999). The necessary theory involved in the method and its computation is provided.
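
Once posterior draws of the variance ratio are available (from weighted Monte Carlo or otherwise), the HPD interval is the shortest interval containing the target posterior mass. A minimal sketch, with an F-distributed sample standing in for the actual posterior draws:

```python
import numpy as np

def hpd_interval(samples, prob=0.95):
    """Shortest interval containing `prob` mass of the sampled posterior."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(prob * n))             # points each candidate interval must cover
    widths = s[k - 1:] - s[: n - k + 1]    # widths of all k-point intervals
    i = int(np.argmin(widths))             # index of the narrowest one
    return s[i], s[i + k - 1]

# Illustrative posterior draws for a variance ratio (a right-skewed,
# F-like sample; not the paper's WMC output).
rng = np.random.default_rng(1)
draws = rng.f(dfnum=10, dfden=12, size=20_000)

lo, hi = hpd_interval(draws, 0.95)
print(f"95% HPD interval: ({lo:.3f}, {hi:.3f})")
```

For a skewed posterior like this one, the HPD interval sits to the left of the equal-tailed interval, which is exactly why it is preferred for ratio-type parameters.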


Assessment of Slope Stability With the Uncertainty in Soil Property Characterization (지반성질 불확실성을 고려한 사면안정 해석)

  • 김진만
    • Proceedings of the Korean Geotechical Society Conference
    • /
    • 2003.03a
    • /
    • pp.123-130
    • /
    • 2003
  • The estimation of key soil properties and the subsequent quantitative assessment of the associated uncertainties has always been an important issue in geotechnical engineering. It is well recognized that soil properties vary spatially as a result of depositional and post-depositional processes. The stochastic nature of spatially varying soil properties can be treated as a random field. A practical statistical approach that can be used to systematically model various sources of uncertainty is presented in the context of reliability analysis of slope stability. Newly developed expressions for the probabilistic characterization of soil properties incorporate sampling and measurement errors, as well as spatial variability and its reduced variance due to spatial averaging. Reliability analyses of the probability of slope failure using the different statistical representations of soil properties show that the incorporation of spatial correlation and conditional simulation leads to a significantly lower probability of failure than that obtained using a simple random variable approach.
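
The variance-reduction effect of spatial averaging on the computed probability of failure can be illustrated with a toy first-order calculation; all numbers below are assumptions for illustration, not the paper's:

```python
import numpy as np
from scipy.stats import norm

# Assume the factor of safety (FS) is normal with mean 1.3 and a point
# coefficient of variation of 0.25 (illustrative values).
mean_fs, sd_point = 1.3, 1.3 * 0.25

# Spatial averaging over the failure surface reduces the point variance by
# a factor Gamma^2 < 1 (assumed 0.4 here).
gamma2 = 0.4
sd_avg = sd_point * np.sqrt(gamma2)

pf_point = norm.cdf((1.0 - mean_fs) / sd_point)  # P(FS < 1), point variance
pf_avg = norm.cdf((1.0 - mean_fs) / sd_avg)      # P(FS < 1), averaged variance
print(f"P_f (simple random variable) = {pf_point:.4f}")
print(f"P_f (spatially averaged)     = {pf_avg:.4f}")
```

The averaged-variance failure probability comes out lower, mirroring the abstract's finding that ignoring spatial averaging overstates the probability of failure.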


Multiple Constrained Optimal Experimental Design

  • Jahng, Myung-Wook;Kim, Young Il
    • Communications for Statistical Applications and Methods
    • /
    • v.9 no.3
    • /
    • pp.619-627
    • /
    • 2002
  • It is impractical for optimal design theory based on a given model and assumptions to be applied directly to real-world experimentation. In particular, when the experimenter finds it necessary to consider multiple objectives in experimentation, a modified version of the optimality criteria is desired. The constrained optimal design is one of many methods developed in this context. But when the number of constraints exceeds two, there is always a problem in specifying the lower limits for the efficiencies of the constraints, because the “infeasible solution” issue arises very quickly. In this paper, we develop a sequential approach to tackle this problem, assuming that all the constraints can be ranked in terms of importance. This approach has been applied to the polynomial regression model.

A Modified Grey-Based k-NN Approach for Treatment of Missing Value

  • Chun, Young-M.;Lee, Joon-W.;Chung, Sung-S.
    • Journal of the Korean Data and Information Science Society
    • /
    • v.17 no.2
    • /
    • pp.421-436
    • /
    • 2006
  • In 2004, Huang proposed a grey-based nearest neighbor approach to accurately predict missing attribute values. Our study proposes a way to decide the number of nearest neighbors using not only Deng's grey relational grade but also Wen's grey relational grade. Besides, our study uses not an arithmetic (unweighted) mean but a weighted one, and the GRG is used as the weight when we impute missing values. There are four different methods - DU, DW, WU, and WW. The performance of the WW (Wen's GRG & weighted mean) method is the best of the four. Huang had proven that his method was much better than the mean imputation method and the multiple imputation method; the performance of our method is far superior to that of Huang's.
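
A minimal sketch of the GRG-weighted imputation idea, using Deng's grey relational grade; the toy data and the distinguishing coefficient ρ = 0.5 are illustrative assumptions:

```python
import numpy as np

def deng_grg(reference, candidates, rho=0.5):
    """Deng's grey relational grade of each candidate row w.r.t. `reference`."""
    delta = np.abs(candidates - reference)          # per-attribute distances
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)                       # grade = mean coefficient

# Toy data: impute the missing last attribute of `target` from complete rows.
complete = np.array([[0.2, 0.4, 0.7],
                     [0.3, 0.5, 0.6],
                     [0.9, 0.1, 0.2]])
target = np.array([0.25, 0.45])                     # last attribute is missing

grg = deng_grg(target, complete[:, :2])             # similarity on known attrs
k = 2
nearest = np.argsort(grg)[::-1][:k]                 # k most similar rows
weights = grg[nearest] / grg[nearest].sum()         # GRG-weighted mean (the "W" idea)
imputed = float(weights @ complete[nearest, 2])
print(f"imputed value: {imputed:.3f}")
```

Replacing `deng_grg` with Wen's grade and keeping the GRG weighting corresponds to the WW variant the abstract reports as best.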


A Computer-Aided Statistical Approach to Strategic Information Systems Planning (정보시스템 전략적 계획을 위한 컴퓨터지원 통계적 접근방법)

  • Kim, Jin-Su;Hwang, Cheol-Eon
    • Asia pacific journal of information systems
    • /
    • v.4 no.2
    • /
    • pp.188-213
    • /
    • 1994
  • Strategic information systems planning (SISP) remains a critical issue for many organizations and is also the top IS concern of chief executives. Therefore, researchers have investigated SISP practices and tried to improve its methodology. Among the various issues of SISP, systematically determining subject database groupings and fully automating the process are important aspects. This study presents an alternative methodology using a statistical technique, a variable clustering approach, and systematic rules for determining database groupings, which can be fully automated. This methodology provides a strong theoretical justification as well as systematic and simple criteria for database groupings and enhanced interpretability of the output, and would be easy to include in a CASE software application.
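
A variable clustering step of the kind described can be sketched by clustering the data-entity columns of a process/entity usage matrix; the matrix, distance metric, and cluster count below are illustrative assumptions, not the paper's method:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Toy usage matrix: rows = business processes, columns = data entities;
# 1 means the process uses the entity (a stand-in for an SISP affinity matrix).
usage = rng.integers(0, 2, size=(12, 6)).astype(float)

# Cluster the data entities (columns) by Jaccard distance between their
# usage patterns, then cut the tree into at most three database groups.
condensed = pdist(usage.T, metric="jaccard")
Z = linkage(condensed, method="average")
groups = fcluster(Z, t=3, criterion="maxclust")
print(groups)  # one group label per data entity
```

Entities used by the same processes end up in the same group, which is the intuition behind subject database groupings.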
