• Title/Summary/Keyword: Statistical methodology


Development of a Simplified Statistical Methodology for Nuclear Fuel Rod Internal Pressure Calculation

  • Kim, Kyu-Tae;Kim, Oh-Hwan
    • Nuclear Engineering and Technology
    • /
    • v.31 no.3
    • /
    • pp.257-266
    • /
    • 1999
  • A simplified statistical methodology is developed to both reduce the over-conservatism of the deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined as the square sum of each maximum value of the mean RIP times a RIP sensitivity factor, over all input variables considered. This approach makes the simplified statistical methodology much more efficient in routine reload core design analysis, since it eliminates the numerous calculations required for the power history-dependent RIP variance determination. The simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significance of each input variable to RIP indicates that the fission gas release model is the most significant input variable.
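The system-moment bound described above can be sketched in a few lines. The sensitivity factors, input-variable names, and mean RIP below are illustrative placeholders, not values from the paper:

```python
import math

def max_rip_std(mean_rip, max_sensitivity_factors):
    # Worst-case RIP variance per the abstract: the square sum of
    # (mean RIP x maximum sensitivity factor) over all input variables.
    variance = sum((mean_rip * s) ** 2 for s in max_sensitivity_factors)
    return math.sqrt(variance)

# Hypothetical relative sensitivity factors (e.g., fission gas release,
# fill pressure, pellet density, clad creep-down) and a 10 MPa mean RIP.
sigma_max = max_rip_std(10.0, [0.08, 0.03, 0.02, 0.015])
```

Because the bound needs only one sensitivity factor per variable, it avoids re-running the variance calculation for every power history, which is the efficiency gain the abstract claims.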


Development of a Statistical Methodology for Nuclear Fuel Rod Internal Pressure Calculation (통계적인 핵연료봉 내압 설계방법론 개발)

  • Kim, Kyu-Tae;Yoo, Jong-Sung;Kim, Ki-Hang;Kim, Young-Jin
    • Nuclear Engineering and Technology
    • /
    • v.26 no.1
    • /
    • pp.100-107
    • /
    • 1994
  • A statistical methodology is developed for calculating the nuclear fuel rod internal pressure of Korean PWR fuel in order to reduce the over-conservatism of the current KAERI deterministic methodology. The developed statistical methodology employs the response surface method and Monte Carlo calculation. A simple regression equation for the rod internal pressure is derived by taking into account the various fuel fabrication-related and fuel performance model-related parameters. The validity of the regression equation is examined by the F-test, the $R^2$ method, and the Cp-test. The internal pressure predicted by the regression equation is in good agreement with that calculated by the computer code using the KAERI deterministic methodology. The distribution of the internal pressure from the Monte Carlo calculation is found to be normal. Comparison of the 95/95 rod internal pressure predicted by the developed statistical methodology with the maximum rod internal pressure from the deterministic methodology shows that the developed statistical methodology significantly reduces the over-conservatism of the deterministic methodology.
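As a rough sketch of the statistical side, one common way to obtain a one-sided 95/95 value from Monte Carlo samples of a response surface is Wilks' order-statistic rule (59 samples, take the maximum). The linear surface coefficients and input distributions below are invented for illustration; the paper's fitted regression and tolerance procedure differ in detail:

```python
import random

random.seed(1)

# Illustrative linear response surface for rod internal pressure (MPa);
# coefficients and input distributions are assumptions, not the paper's.
def rip_surface(fill_pressure, fgr_fraction):
    return 5.0 + 1.2 * fill_pressure + 3.5 * fgr_fraction

def wilks_9595_bound(n=59):
    # Nonparametric one-sided 95/95 bound (Wilks): with 59 random samples,
    # the sample maximum bounds the 95th percentile at 95% confidence.
    samples = [rip_surface(random.gauss(1.0, 0.05), random.gauss(0.30, 0.04))
               for _ in range(n)]
    return max(samples)

bound = wilks_9595_bound()
```

A normal-tolerance-limit bound (mean plus k times the sample standard deviation) is an alternative when, as the abstract reports, the Monte Carlo distribution is found to be normal.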


Automatic Mapping Between Large-Scale Heterogeneous Language Resources for NLP Applications: A Case of Sejong Semantic Classes and KorLexNoun for Korean

  • Park, Heum;Yoon, Ae-Sun
    • Language and Information
    • /
    • v.15 no.2
    • /
    • pp.23-45
    • /
    • 2011
  • This paper proposes a statistics-based linguistic methodology for automatic mapping between large-scale heterogeneous language resources for NLP applications in general. As a particular case, it treats automatic mapping between two large-scale heterogeneous Korean language resources: Sejong Semantic Classes (SJSC) in the Sejong Electronic Dictionary (SJD) and nouns in KorLex. KorLex is a large-scale Korean WordNet, but it lacks syntactic information. SJD contains refined semantic-syntactic information, with semantic labels depending on SJSC, but the list of its entry words is much smaller than that of KorLex. The goal of our study is to build a rich language resource by integrating useful information from SJD into KorLex. In this paper, we use both linguistic and statistical methods to construct an automatic mapping methodology. The linguistic aspect of the methodology focuses on three linguistic clues: monosemy/polysemy of word forms, instances (example words), and semantically related words. The statistical aspect of the methodology uses three statistical formulae, ${\chi}^2$, Mutual Information, and Information Gain, to obtain candidate synsets. Compared with the performance of manual mapping, the automatic mapping based on our proposed statistical linguistic methods shows good performance in terms of correctness, giving recall 0.838, precision 0.718, and F1 0.774.
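The reported F1 is consistent with the reported precision and recall, since F1 is their harmonic mean; a quick check:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.718, 0.838)  # close to the reported 0.774
```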


Process operation improvement methodology based on statistical data analysis (통계적 분석기법을 이용한 공정 운전 향상의 방법)

  • Hwang, Dae-Hee;Ahn, Tae-Jin;Han, Chonghun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.1516-1519
    • /
    • 1997
  • With the dissemination of Distributed Control Systems (DCS), huge amounts of process operation data have become available, making it possible to characterize process behaviors better on a statistical basis. Until now, statistical modeling technology has usually been applied to process monitoring and fault diagnosis. However, the process information extracted from statistical analysis can also offer a great opportunity for process operation improvements and process improvements. This paper proposes a general methodology for process operation improvement, including data analysis, backing up the results of the analysis based on the methodology, and mapping physical phenomena to the Principal Components (PC), which is the feature that most distinguishes the methodology from traditional statistical analyses. The application of the proposed methodology to the Blast Furnace (BF) process is presented in detail. The BF process is a complicated process, due to its highly nonlinear and correlated behaviors, so analysis of the process based on mathematical modeling has been very difficult. Statistical analysis has therefore come forward as an alternative way to perform useful analysis. Using the proposed methodology, we could interpret the complicated BF process better than with any other mathematical method and find the direction for process operation improvement. The direction of process operation improvement, in the BF case, is to increase the fluidization and the permeability while decreasing the effect of the tapping operation. These guiding directions, with their physical meanings, could save fuel cost and ease the process operators' burden of taking proper actions, such as better set point changes, in addition to providing better knowledge of the process. Open to set point changes, the BF has a variety of steady-state modes. Most chemical processes are in the same situation as the BF with respect to multimode steady states. The proposed methodology, which focuses on application to multimode steady-state processes such as the BF, can consequently be applied to any chemical process undergoing set point changes, whether operator-intervened or not.
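The core of the methodology is extracting Principal Components from correlated operation data. A minimal two-variable sketch (toy numbers, not blast-furnace data) shows how the dominant PC captures most of the variance of two correlated readings, via the closed-form eigendecomposition of their 2x2 covariance matrix:

```python
import math

def first_pc(xs, ys):
    # Sample covariance matrix entries of the two variables.
    n, mx, my = len(xs), sum(xs) / len(xs), sum(ys) / len(ys)
    cxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    cyy = sum((y - my) ** 2 for y in ys) / (n - 1)
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Larger eigenvalue of [[cxx, cxy], [cxy, cyy]] and its eigenvector
    # (the direction of maximum variance, i.e., the first PC).
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    vx, vy = cxy, lam - cxx
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm), tr

# Two strongly correlated process readings (made-up values).
lam, direction, total_var = first_pc([1.0, 2.0, 3.0, 4.0, 5.0],
                                     [1.1, 1.9, 3.2, 3.9, 5.1])
```

Interpreting such dominant directions in terms of physical phenomena (here, e.g., fluidization or permeability) is the mapping step the abstract emphasizes.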


A Statistical Expert System for Simulation Analysis-Revised (시뮬레이션의 통계적 분석을 위한 전문가 시스템)

  • Park, Young-Hong
    • IE interfaces
    • /
    • v.7 no.1
    • /
    • pp.81-91
    • /
    • 1994
  • Simulation is one of the most widely used techniques in operations research and management science, but there are several impediments to even wider acceptance and use of simulation. One of the more significant limitations is the dependence of simulation on statistical methodology. This research identifies eighteen different statistical issues in simulation methodology and develops an expert system which can, through interactive dialog with simulation analysts, offer advice on statistical approaches that might be used to deal with particular issues and to accomplish the required statistical computations. This research revises the previous study published in Simulation by incorporating additional statistical issues into the expert system to enhance its performance in analyzing a given simulation problem with statistical methodologies. An overview of the revised system is given, and illustrations of the capabilities of the system are presented.
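One of the classic statistical issues such a system advises on is building a confidence interval for a simulation output mean from independent replications. A minimal sketch with made-up replication means, using the large-sample normal critical value (a real analysis would likely use a t critical value for so few replications):

```python
import math
import statistics

def replication_ci(rep_means, z=1.96):
    # Approximate 95% CI for the true mean from i.i.d. replication means.
    n = len(rep_means)
    m = statistics.fmean(rep_means)
    half = z * statistics.stdev(rep_means) / math.sqrt(n)
    return m - half, m + half

lo, hi = replication_ci([10.2, 9.8, 10.5, 10.1, 9.9, 10.3])
```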


Methodology for Determining Functional Forms in Developing Statistical Collision Models (교통사고모형 개발에서의 함수식 도출 방법론에 관한 연구)

  • Baek, Jong-Dae;Hummer, Joseph
    • International Journal of Highway Engineering
    • /
    • v.14 no.5
    • /
    • pp.189-199
    • /
    • 2012
  • PURPOSES: The purpose of this study is to propose a new methodology for developing statistical collision models and to show the validation results of the methodology. METHODS: A new modeling method that introduces variables into the model one by one in a multiplicative form is suggested. A method for choosing explanatory variables to be introduced into the model is explained. A method for determining functional forms for each explanatory variable is introduced, as well as a parameter estimating procedure. A model selection method is also dealt with. Finally, validation results are provided to demonstrate the efficacy of the final models developed using the method suggested in this study. RESULTS: According to the results of the validation for the total and injury collisions, the predictive powers of the models developed using the method suggested in this study were better than those of generalized linear models for the same data. CONCLUSIONS: Using the methodology suggested in this study, we could develop statistical collision models with better predictive power. This was because the methodology enabled us to find the relationships between the dependent variable and each explanatory variable individually and to find functional forms for these relationships, which are more likely to be non-linear.
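A multiplicative collision model of the general kind described introduces each explanatory variable as its own factor (the log-link form familiar from collision modeling). The functional forms and coefficients below are illustrative assumptions, not the paper's fitted values:

```python
import math

def collision_freq(aadt, lane_width_m, b0=-6.0, b1=0.8, b2=-0.15):
    # Each variable enters multiplicatively: a power form for traffic
    # volume (AADT) and an exponential form for lane width.
    return math.exp(b0) * aadt ** b1 * math.exp(b2 * lane_width_m)

pred = collision_freq(aadt=12000, lane_width_m=3.5)
```

Because each variable has its own factor, the functional form for one variable can be chosen and checked individually, which is the property the methodology exploits.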

A Logistic Regression Analysis of Two-Way Binary Attribute Data (이원 이항 계수치 자료의 로지스틱 회귀 분석)

  • Ahn, Hae-Il
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.35 no.3
    • /
    • pp.118-128
    • /
    • 2012
  • An attempt is made to analyze two-way binary attribute data using the logistic regression model in order to find a sound statistical methodology. It is demonstrated that the analysis of variance (ANOVA) may not be good enough, especially for the case in which the proportion is very low or high. The logistic transformation of proportion data could be of help, but is not sound in the statistical sense. Meanwhile, adoption of the generalized least squares (GLS) method requires substantial effort to estimate the variance-covariance matrix. On the other hand, the logistic regression methodology provides sound statistical means for estimating the related confidence intervals and testing the significance of model parameters. Based on simulated data, the efficiency of the estimates is verified in order to demonstrate the usefulness of the methodology.
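A quick sketch of why the logistic model behaves better than normal-theory analysis when the proportion is very low or high (numbers invented for illustration): a raw-scale interval can leave (0, 1), while an interval built on the log-odds scale cannot.

```python
import math

def logit(p):
    # Log-odds transform underlying the logistic model.
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# A very low proportion with a normal-theory standard error (made-up values):
p, se_raw = 0.02, 0.015
raw_interval = (p - 1.96 * se_raw, p + 1.96 * se_raw)   # lower end < 0
# Delta method: standard error on the logit scale.
se_logit = se_raw / (p * (1 - p))
logit_interval = tuple(inv_logit(logit(p) + s * 1.96 * se_logit)
                       for s in (-1, 1))                # stays inside (0, 1)
```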

Statistical Analysis and Prediction for Behaviors of Tracked Vehicle Traveling on Soft Soil Using Response Surface Methodology (반응표면법에 의한 연약지반 차량 거동의 통계적 분석 및 예측)

  • Lee Tae-Hee;Jung Jae-Jun;Hong Sup;Kim Hyung-Woo;Choi Jong-Su
    • Journal of Ocean Engineering and Technology
    • /
    • v.20 no.3 s.70
    • /
    • pp.54-60
    • /
    • 2006
  • For the optimal design of a deep-sea ocean mining collector system based on a self-propelled mining vehicle, it is imperative to develop and validate a dynamic model of a tracked vehicle traveling on soft deep seabed. The purpose of this paper is to evaluate the fidelity of the dynamic simulation model by means of response surface methodology. Various statistical techniques related to response surface methodology, such as outlier analysis, detection of interaction effects, analysis of variance, inference of the significance of design variables, and global sensitivity analysis, are examined. To obtain a plausible response surface model, maximum entropy sampling is adopted. From the statistical analysis and prediction of the dynamic responses of the tracked vehicle, conclusions are drawn about the accuracy of the dynamic model and the performance of the response surface model.
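Among the techniques mentioned, detection of an interaction effect on a fitted response surface can be sketched with a two-level contrast; the surface and its coefficients here are placeholders, not the tracked-vehicle model's:

```python
def rs_model(x1, x2, b=(10.0, 2.0, -1.5, 0.8)):
    # Response surface with an interaction term; b = (b0, b1, b2, b12),
    # coefficients are illustrative placeholders.
    b0, b1, b2, b12 = b
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

def interaction_effect(model, lo=-1.0, hi=1.0):
    # Two-level interaction contrast: recovers the x1*x2 coefficient for
    # the model above; a nonzero value means the effect of x1 depends on x2.
    return (model(hi, hi) - model(lo, hi) - model(hi, lo) + model(lo, lo)) / 4

b12_estimate = interaction_effect(rs_model)
```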

An electromechanical impedance-based method for tensile force estimation and damage diagnosis of post-tensioning systems

  • Min, Jiyoung;Yun, Chung-Bang;Hong, Jung-Wuk
    • Smart Structures and Systems
    • /
    • v.17 no.1
    • /
    • pp.107-122
    • /
    • 2016
  • We propose an effective methodology using electromechanical impedance characteristics for estimating the remaining tensile force of tendons and simultaneously detecting damage to the anchorage blocks. Once one piezoelectric patch is attached on the anchor head and the other is bonded on the bearing plate, impedance responses are measured through these two patches under varying tensile force conditions. Statistical indices are then calculated from the impedances, and two types of relationship curves are established: between the tensile force and the statistical index (TE Curve), and between the statistical indices of the two patches (SR Curve). These serve as a database for monitoring both the tendon and the anchorage system. If damage exists on the bearing plate, the statistical index of the patch on the bearing plate falls outside the bounds of the SR Curve, and the damage can be detected. A change in the statistical index caused by damage is calibrated with the SR Curve, and the tensile force can be estimated with the corrected index and the TE Curve. For validation of the developed methodology, experimental studies are performed on a scaled model of an anchorage system that is simplified to only 3 solid wedges, a 3-hole anchor head, and a bearing plate. The methodology is then applied to a real-scale anchorage system that has 19 strands, wedges, an anchor head, a bearing plate, and a steel duct. It is observed that the proposed scheme gives quite accurate estimates of the remaining tensile forces. Therefore, this methodology has great potential for practical use in evaluating the remaining tensile forces and damage status of post-tensioned structural members.
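The abstract does not name the statistical index; a common choice for electromechanical impedance signatures is the root-mean-square deviation (RMSD) between a baseline and a measured signature, sketched here with made-up numbers:

```python
import math

def rmsd_index(baseline, measured):
    # Root-mean-square deviation between impedance signatures,
    # normalized by the baseline signature energy.
    num = sum((m - b) ** 2 for b, m in zip(baseline, measured))
    den = sum(b ** 2 for b in baseline)
    return math.sqrt(num / den)

# Made-up real-part impedance values sampled at four frequencies.
baseline = [1.00, 1.10, 0.95, 1.05]
measured = [1.02, 1.15, 0.90, 1.08]
idx = rmsd_index(baseline, measured)
```

An index of this kind, computed for each patch, is what the TE and SR curves would relate to tensile force and to each other.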

Big Data Smoothing and Outlier Removal for Patent Big Data Analysis

  • Choi, JunHyeog;Jun, Sunghae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.8
    • /
    • pp.77-84
    • /
    • 2016
  • In general statistical analysis, we need to make a normality assumption. If this assumption is not satisfied, we cannot expect good results from statistical data analysis. Most statistical methods for processing outliers and noise also rely on this assumption. However, the assumption is not satisfied in big data because of its large volume and heterogeneity. We therefore propose a methodology based on the box plot and data smoothing for controlling outliers and noise in big data analysis. The proposed methodology does not depend on the normality assumption. In addition, we select patent documents as the target domain of big data, because patent big data analysis is an important issue in the management of technology. We analyze patent documents using big data learning methods for technology analysis. The patent data collected from patent databases around the world are preprocessed and analyzed by text mining and statistics. However, most research on patent big data analysis has not considered the outlier and noise problem. This problem decreases the accuracy of prediction and increases the variance of parameter estimation. In this paper, we check for the existence of outliers and noise in patent big data. To determine whether outliers are present in the patent big data, we use box plot and smoothing visualizations. We use patent documents related to three-dimensional printing technology to illustrate how the proposed methodology can be used to find the existence of noise in the searched patent big data.
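The box-plot rule and smoothing step can be sketched as follows; the data series is invented for illustration (e.g., a term-frequency series from patent text mining):

```python
import statistics

def iqr_outliers(xs, k=1.5):
    # Box-plot rule: flag points beyond k * IQR outside the quartiles.
    q1, _, q3 = statistics.quantiles(xs, n=4)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [x for x in xs if x < lo or x > hi]

def moving_average(xs, w=3):
    # Simple smoother for damping noise in a series.
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

data = [3, 4, 5, 4, 3, 40, 5, 4]  # 40 is an obvious outlier
flagged = iqr_outliers(data)
smoothed = moving_average(data)
```

Neither step assumes normality, which is the point of using them for heterogeneous big data.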