• Title/Summary/Keyword: Probabilistic methods


An Empirical Estimation Procedure of Concrete Compressive Strength Based on the In-Situ Nondestructive Tests Result of the Existing Bridges (공용중 교량 비파괴시험 결과에 기반한 경험적 콘크리트 압축강도 추정방법의 제안)

  • Oh, Hong-Seob;Oh, Kwang-Chin
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.20 no.4
    • /
    • pp.111-119
    • /
    • 2016
  • The rebound hammer test, the SonReb method, and the concrete core test are the most widely used methods for estimating the compressive strength of deteriorated concrete structures. However, the accuracy of nondestructive evaluation (NDE) results on existing structures can be reduced by the uncertainty of the nondestructive test methods themselves, by material changes due to aging and carbonation, and by mechanical damage caused by core drilling. In this study, an empirical procedure for verifying the in-situ compressive strength of concrete is suggested through a probabilistic analysis of 268 sets of rebound, ultrasonic pulse velocity, and core strength data obtained from 106 bridges. To enhance the accuracy of the predicted concrete strength, correction coefficients for core strength and for surface hardness affected by aging or carbonation were adopted. The results show that the equation proposed by KISTEC together with the estimation procedure proposed by the authors is more reliable than the previously suggested equation and correction coefficient.
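The combined rebound/velocity estimation described above can be sketched as a least-squares fit of a SonReb-type power law. The model form f_c = a·R^b·V^c is the classical SonReb combination, but the data and coefficients below are synthetic stand-ins, not the calibration or database from the paper.

```python
import numpy as np

# Illustrative SonReb-type model: f_c = a * R^b * V^c, where R is the
# rebound number and V the ultrasonic pulse velocity (km/s). Data are
# synthetic; only the sample size (268) mirrors the study's database.
rng = np.random.default_rng(0)
n = 268
R = rng.uniform(25, 55, n)             # rebound number
V = rng.uniform(3.5, 4.8, n)           # pulse velocity, km/s
true_a, true_b, true_c = 0.04, 1.2, 1.8  # assumed "true" coefficients
fc = true_a * R**true_b * V**true_c * np.exp(rng.normal(0, 0.05, n))

# Linearize: log fc = log a + b log R + c log V, then ordinary least squares.
X = np.column_stack([np.ones(n), np.log(R), np.log(V)])
coef, *_ = np.linalg.lstsq(X, np.log(fc), rcond=None)
a_hat, b_hat, c_hat = np.exp(coef[0]), coef[1], coef[2]
print(round(b_hat, 2), round(c_hat, 2))
```

A real calibration would additionally apply the paper's aging/carbonation correction coefficients before the fit.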

Statistical and Probabilistic Assessment for the Misorientation Angle of a Grain Boundary for the Precipitation of Cr2N in an Austenitic Stainless Steel (II) (질화물 우선석출이 발생하는 결정립계 어긋남 각도의 통계 및 확률적 평가 (II))

  • Lee, Sang-Ho;Choe, Byung-Hak;Lee, Tae-Ho;Kim, Sung-Joon;Yoon, Kee-Bong;Kim, Seon-Hwa
    • Korean Journal of Metals and Materials
    • /
    • v.46 no.9
    • /
    • pp.554-562
    • /
    • 2008
  • The distribution of, and prediction intervals for, the misorientation angle of grain boundaries at which Cr2N precipitated during heating at 900°C for 10^4 sec were newly estimated, following earlier estimates obtained by the mathematical and median-rank methods. The probability density function of the misorientation angle can be estimated by statistical analysis, and the (1-α)·100% prediction interval of the misorientation angle is then obtained from the estimated probability density function. If the estimated probability density function is symmetric, the prediction interval can be derived directly from it; for a non-symmetric probability density function, the prediction interval is obtained from the corresponding cumulative distribution function. In this paper, the 95, 99, and 99.73% prediction intervals obtained by the probability-density-function and cumulative-distribution-function methods are compared with the earlier results from median-rank regression and the mathematical method.
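The CDF route for a non-symmetric density can be sketched as follows; the gamma distribution and the angle sample are hypothetical stand-ins for the fitted density and EBSD measurements in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of misorientation angles (degrees); the real data
# are grain-boundary measurements from the paper, not reproduced here.
rng = np.random.default_rng(1)
angles = rng.gamma(shape=9.0, scale=4.5, size=200)  # skewed sample

# Fit a (non-symmetric) gamma PDF, then take each (1-alpha)100% prediction
# interval from its cumulative distribution function via the quantile (ppf).
shape, loc, scale = stats.gamma.fit(angles, floc=0)
for alpha in (0.05, 0.01, 0.0027):          # 95, 99, 99.73% intervals
    lo = stats.gamma.ppf(alpha / 2, shape, loc=loc, scale=scale)
    hi = stats.gamma.ppf(1 - alpha / 2, shape, loc=loc, scale=scale)
    print(f"{(1 - alpha) * 100:.2f}% PI: ({lo:.1f}, {hi:.1f}) deg")
```

For a symmetric fitted density the same interval could be written directly as mean ± quantile, which is the other route the abstract mentions.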

Generation of Time Series Data from Octave Bandwidth SPL of Acoustic Loading Using Interpolation Method (보간법을 이용한 옥타브 밴드폭 음향 하중 SPL의 시계열 데이터 생성)

  • Go, Eun-Su;Kim, In-Gul;Jeon, Minhyeok;Cho, Hyun-Jun;Park, Jae-Sang;Kim, Min-Sung
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.24 no.1
    • /
    • pp.1-11
    • /
    • 2021
  • Thermal protection structures such as double-panel skins are used on the fuselage and wings to prevent the transfer of high heat into the interior of a supersonic/hypersonic aircraft. The thin-walled double-panel skin can be exposed to acoustic loads from high-power engine noise and jet-flow noise, which can cause sonic fatigue damage. To predict the fatigue life of the skin, the octave-bandwidth SPL must be converted into a narrow-bandwidth PSD or an acoustic load time history using an interpolation method. In this paper, a method of converting the octave-bandwidth SPL acoustic load into a narrow-bandwidth PSD and a reconstructed acoustic load history was investigated. The octave-bandwidth SPL was converted to a narrow-bandwidth PSD using several interpolation methods (flat, log, and linear scale), and the probabilistic characteristics and fatigue damage results were compared. The average error of the fatigue damage index was smallest for the log-scale interpolation method among the three.
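The SPL-to-PSD step can be sketched as: convert each octave-band SPL to mean-square pressure, spread it over the band to get a PSD level, then interpolate onto a narrow-band grid. The band centers and SPL values below are invented for illustration; the paper compares flat, log, and linear interpolation on actual acoustic loading spectra.

```python
import numpy as np

# Octave-band SPL (dB re 20 uPa) -> narrow-band PSD via log-scale interpolation.
p_ref = 20e-6                                      # reference pressure, Pa
fc = np.array([125., 250., 500., 1000., 2000.])    # octave centers, Hz (assumed)
spl = np.array([120., 125., 130., 128., 122.])     # band SPLs, dB (assumed)

f_lo, f_hi = fc / np.sqrt(2), fc * np.sqrt(2)      # octave band edges
bw = f_hi - f_lo
psd_band = (p_ref**2) * 10**(spl / 10) / bw        # mean-square pressure / bandwidth, Pa^2/Hz

# Log-scale interpolation of the PSD between band centers.
f_narrow = np.linspace(90.0, 2800.0, 1000)
psd = 10**np.interp(np.log10(f_narrow), np.log10(fc), np.log10(psd_band))

# Sanity check: the overall level from the narrow-band PSD should be close
# to the level obtained by summing the band mean-square pressures.
oa_from_bands = 10 * np.log10(np.sum(10**(spl / 10)))
integral = np.sum(0.5 * (psd[1:] + psd[:-1]) * np.diff(f_narrow))
oa_from_psd = 10 * np.log10(integral / p_ref**2)
print(round(oa_from_bands, 1), round(oa_from_psd, 1))
```

A load time history could then be reconstructed from this PSD by inverse-FFT synthesis with random phases, which is the step the paper uses for fatigue damage evaluation.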

Hybrid machine learning with moth-flame optimization methods for strength prediction of CFDST columns under compression

  • Quang-Viet Vu;Dai-Nhan Le;Thai-Hoan Pham;Wei Gao;Sawekchai Tangaramvong
    • Steel and Composite Structures
    • /
    • v.51 no.6
    • /
    • pp.679-695
    • /
    • 2024
  • This paper presents a novel technique that combines machine learning (ML) with moth-flame optimization (MFO) methods to predict the axial compressive strength (ACS) of concrete-filled double-skin steel tube (CFDST) columns. The proposed model is trained and tested on a dataset containing 125 tests of CFDST columns subjected to compressive loading. Five ML models, including extreme gradient boosting (XGBoost), gradient tree boosting (GBT), categorical gradient boosting (CAT), support vector machines (SVM), and decision tree (DT) algorithms, are utilized in this work. The MFO algorithm is applied to find optimal hyperparameters of these ML models and to determine the most effective model for predicting the ACS of CFDST columns. Predictive results measured by several performance metrics reveal that the MFO-CAT model provides superior accuracy compared to the other considered models. The accuracy of the MFO-CAT model is validated by comparing its predictions with existing design codes and formulae. Moreover, the significance and contribution of each feature in the dataset are examined with the SHapley Additive exPlanations (SHAP) method. A comprehensive uncertainty quantification of the probabilistic characteristics of the ACS of CFDST columns is conducted for the first time, examining the models' responses to variations of the input variables in stochastic environments. Finally, a web-based application is developed to predict the ACS of CFDST columns, enabling rapid practical use without requiring any programming or machine learning expertise.
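The MFO search loop can be sketched in a few lines. The "validation loss" below is a stand-in surface over two hyperparameters with a known minimum; the paper applies MFO to tune boosting models (e.g. CAT) on real CFDST test data, which is not reproduced here.

```python
import numpy as np

def loss(x):
    # hypothetical validation-loss surface with its minimum at (3.0, 0.1)
    return (x[0] - 3.0)**2 + 10.0 * (x[1] - 0.1)**2

rng = np.random.default_rng(42)
n_moths, n_iter, dim = 20, 60, 2
lb, ub = np.array([0.0, 0.0]), np.array([10.0, 1.0])   # hyperparameter bounds
moths = lb + rng.random((n_moths, dim)) * (ub - lb)
best_x, best_f = None, np.inf

for it in range(n_iter):
    fitness = np.array([loss(m) for m in moths])
    order = np.argsort(fitness)
    if fitness[order[0]] < best_f:                 # track the global best
        best_f = float(fitness[order[0]])
        best_x = moths[order[0]].copy()
    flames = moths[order].copy()                   # flames: sorted best positions
    n_flames = max(1, round(n_moths - it * (n_moths - 1) / n_iter))
    a = -1.0 - it / n_iter                         # shrinks from -1 toward -2
    for i in range(n_moths):
        j = min(i, n_flames - 1)                   # extra moths share the last flame
        t = (a - 1.0) * rng.random(dim) + 1.0      # t drawn from [a, 1]
        d = np.abs(flames[j] - moths[i])
        # logarithmic spiral flight of moth i around flame j
        moths[i] = d * np.exp(t) * np.cos(2.0 * np.pi * t) + flames[j]
        moths[i] = np.clip(moths[i], lb, ub)

print(np.round(best_x, 2), round(best_f, 4))
```

In the paper's setting, `loss` would be a cross-validated error of the ML model at the candidate hyperparameters.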

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.749-758
    • /
    • 2004
  • In this paper, we propose a new method for disambiguating target word selection in English-Korean machine translation that uses only a raw corpus, without additional human effort. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent the complex semantic structure of a given context, such as a text passage. We construct linguistic semantic knowledge with these two techniques and use it for target word selection in English-Korean machine translation, exploiting grammatical relationships stored in a dictionary. We use the k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection, estimating the distance between instances with these models. In the experiments, we use TREC AP news data to construct the latent semantic space and a Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, we show how the accuracy relates to two important factors: the dimensionality of the latent space and the k value of the k-NN learning.
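The LSA part can be sketched on a toy scale: build a term-document matrix, take a truncated SVD, and pick the translation candidate whose latent vector is closest to the context word. The vocabulary, counts, and the English stand-ins "money"/"water" for the two Korean senses of "bank" are all invented; the paper builds its space from TREC AP news.

```python
import numpy as np

vocab = ["bank", "river", "money", "loan", "water", "shore"]
# rows = terms, cols = pseudo-documents (counts invented for illustration)
A = np.array([
    [2, 0, 3, 2, 0, 0],   # bank
    [0, 3, 0, 0, 2, 2],   # river
    [3, 0, 2, 3, 0, 0],   # money
    [2, 0, 3, 2, 0, 0],   # loan
    [0, 2, 0, 0, 3, 1],   # water
    [0, 2, 0, 0, 1, 2],   # shore
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # dimensionality of the latent space
W = U[:, :k] * s[:k]                    # term vectors in latent space

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

idx = {w: i for i, w in enumerate(vocab)}
# Choose a sense of "bank" (financial vs. riverside), represented by the
# stand-in candidates "money" and "water", given a context word.
for ctx in ("loan", "river"):
    sense = max(("money", "water"), key=lambda c: cos(W[idx[c]], W[idx[ctx]]))
    print(ctx, "->", sense)
```

In the paper, the context words come from grammatical relations in a dictionary, and PLSA replaces the SVD with an EM-fitted aspect model.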

The efficacy of Quantitative Analysis of Basal/Acetazolamide SPECT Using SPM and Statistical Probabilistic Brain Atlas in Patients with Internal Carotid Artery Stenosis (뇌혈관 협착 환자에서 SPM과 확률뇌지도를 이용한 기저/아세타졸아미드 SPECT의 정량적 분석법의 유용성)

  • Lee, Ho-Young;Lee, Dong-Soo;Paeng, Jin-Chul;Oh, Chang-Wan;Cho, Maeng-Jae;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.357-367
    • /
    • 2002
  • Purpose: While cerebral blood flow and cerebrovascular reserve can be evaluated with basal/acetazolamide Tc-99m-HMPAO SPECT in cerebrovascular disease, objective quantification is necessary to assess the efficacy of revascularization. In this study we adopted the SPM method to quantify basal cerebral blood flow and cerebrovascular reserve on basal/acetazolamide SPECT in patients who underwent bypass surgery for internal carotid artery (ICA) stenosis. Materials and Methods: Twelve patients (51±15 years) with ICA stenosis were enrolled. Tc-99m-HMPAO basal/acetazolamide perfusion SPECT was performed before and after bypass surgery. After spatial and count normalization to the cerebellum, basal cerebral blood flow and cerebrovascular reserve were compared with those of 21 age-matched normal controls, and postoperative changes of regional blood flow and reserve were assessed by the Statistical Parametric Mapping method. Mean pixel values of each brain region were calculated using a probabilistic anatomical map of the lobes. Perfusion reserve was defined as the percent change of counts after acetazolamide over basal counts. Results: Preoperative cerebral blood flow and cerebrovascular reserve were significantly decreased in the involved ICA territory compared with normal controls (p<0.05). Postoperative improvement of cerebral blood flow and cerebrovascular reserve was observed in the grafted ICA territories, but cerebrovascular reserve remained significantly different from normal controls. Improvement of the cerebrovascular reserve was most prominent in the superior temporal and angular gyri, nearest the anastomosis sites. Conclusion: Using the SPM quantification method on basal/acetazolamide Tc-99m-HMPAO SPECT, cerebral blood flow and cerebrovascular reserve could be assessed before revascularization, as could the efficacy of the bypass surgery.
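The quantification step defined in the abstract (count normalization to the cerebellum, then reserve as the percent change of acetazolamide over basal counts) can be sketched with hypothetical regional values:

```python
import numpy as np

# Regional mean counts are invented; in the paper they come from the
# probabilistic anatomical map applied to the SPECT images.
regions = ["frontal", "temporal", "angular"]
basal  = np.array([52.0, 40.0, 38.0])    # basal mean counts per region
diamox = np.array([60.0, 43.0, 39.0])    # after acetazolamide
cb_basal, cb_diamox = 50.0, 50.0          # cerebellar reference counts

basal_n  = basal  / cb_basal              # count normalization to cerebellum
diamox_n = diamox / cb_diamox
reserve = (diamox_n - basal_n) / basal_n * 100   # % change = vascular reserve

for r, v in zip(regions, reserve):
    print(f"{r}: {v:.1f}%")
```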

An Introduction to Kinetic Monte Carlo Methods for Nano-scale Diffusion Process Modeling (나노 스케일 확산 공정 모사를 위한 동력학적 몬테칼로 소개)

  • Hwang, Chi-Ok;Seo, Ji-Hyun;Kwon, Oh-Seob;Kim, Ki-Dong;Won, Tae-Young
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.6
    • /
    • pp.25-31
    • /
    • 2004
  • In this paper, we introduce kinetic Monte Carlo (kMC) methods for simulating diffusion processes in nano-scale device fabrication. We first review kMC theory and background, and then give a simple model of point-defect diffusion during thermal annealing after ion (electron) implantation into a crystalline Si substrate to help the reader understand kinetic Monte Carlo methods. kMC is a Monte Carlo variant that can simulate the time evolution of a diffusion process as a Poisson stochastic process. Instead of solving reaction-diffusion differential equations by conventional finite difference or finite element methods, a kMC diffusion simulation executes a series of chemical reaction events (between atoms and/or defects) or diffusion events according to the rates of all possible events. Every event has its own event rate, and the time evolution of the semiconductor diffusion process is simulated directly. The event rates can be derived either directly from molecular dynamics (MD) or first-principles (ab initio) calculations, or from experimental data.
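The event-driven scheme described above can be sketched as a minimal kMC run: a single point defect hopping on a 1D lattice, with event selection proportional to rate and time advanced by exponentially distributed increments (the Poisson process). The attempt frequency and migration barrier are illustrative values, not ones derived from MD or ab initio calculations.

```python
import numpy as np

rng = np.random.default_rng(7)
kB = 8.617e-5                 # Boltzmann constant, eV/K
T = 1000.0                    # annealing temperature, K (assumed)
nu0 = 1e13                    # attempt frequency, 1/s (assumed)
Em = 0.8                      # migration barrier, eV (assumed)
rate = nu0 * np.exp(-Em / (kB * T))   # Arrhenius hop rate per direction

pos, t = 0, 0.0
n_steps = 10000
for _ in range(n_steps):
    rates = [rate, rate]                       # events: hop left, hop right
    R = sum(rates)
    # pick an event proportionally to its rate, then advance time by an
    # exponentially distributed increment (Poisson process)
    pos += -1 if rng.random() * R < rates[0] else 1
    t += -np.log(rng.random()) / R

print(f"elapsed time ~ {t:.3e} s, net displacement {pos} sites")
```

A device-scale simulator would carry many defect species with per-species rate tables, but the select-event/advance-clock loop is the same.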

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags: active tags carry a power source and can execute operations on their own, while passive tags are small and low-cost, making them more suitable than active tags for the distribution industry. A reader processes the information received from the tags. An RFID system achieves fast identification of multiple tags using radio frequency, and RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the introduction of RFID systems, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by simultaneous responses of multiple tags. Anti-collision schemes for RFID systems fall into three classes: probabilistic, deterministic, and hybrid. We introduce ALOHA-based protocols as probabilistic methods and tree-based protocols as deterministic ones. In ALOHA-based protocols, time is divided into multiple slots; tags randomly select their own slots and transmit their IDs. Because they are probabilistic, ALOHA-based protocols cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In tree-based protocols, the reader sends a query and tags respond with their own IDs; when two or more tags respond to the same query, a collision occurs and the reader makes and sends a new query. Frequent collisions degrade identification performance, so identifying tags quickly requires reducing collisions efficiently. Each RFID tag has an ID consisting of a 96-bit EPC (Electronic Product Code).
The tags of a single company or manufacturer have similar IDs sharing the same prefix. Unnecessary collisions occur while identifying multiple such tags with the Query Tree protocol, increasing the number of query-responses and the idle time, so the identification time grows significantly. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, in the Collision Tree and Query Tree protocols only one bit is identified per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol still generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, collision information from tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
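The basic Query Tree behavior, including the extra collision rounds caused by shared prefixes, can be sketched as follows. Tag IDs are short invented bit strings, not 96-bit EPCs.

```python
# Query Tree sketch: the reader broadcasts a bit-string prefix; every tag
# whose ID starts with that prefix responds. A single responder is
# identified; a collision makes the reader split the prefix with '0'/'1'.
def query_tree(tags):
    identified, queries = [], 0
    stack = [""]                               # pending query prefixes
    while stack:
        q = stack.pop()
        queries += 1
        responders = [t for t in tags if t.startswith(q)]
        if len(responders) == 1:
            identified.append(responders[0])   # readable response: identified
        elif len(responders) > 1:
            stack.extend([q + "0", q + "1"])   # collision: extend the prefix
        # len == 0: idle slot, nothing to do
    return identified, queries

# Similar IDs (shared "00" prefix) force several collision rounds:
tags = ["0001", "0010", "0011", "1011"]
ids, n_queries = query_tree(tags)
print(sorted(ids), n_queries)
```

Reducing those collision rounds (by recognizing m bits at a time and using collision positions) is exactly the gap the proposed Adaptive M-ary Query Tree targets.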

Analysis of Design Live Load of Railway Bridge Through Statistical Analysis of WIM Data for High-speed Rail (고속철도 WIM 데이터에 대한 통계분석을 통한 철도교량 설계활하중 분석)

  • Park, Sumin;Yeo, Inho;Paik, Inyeol
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.28 no.6
    • /
    • pp.589-597
    • /
    • 2015
  • In this paper, the live load model for the design of high-speed railway bridges is analyzed by statistical and probabilistic methods, and the safety level implied by the load factors of the load combination is examined. This study is part of the development of a limit state design method for railway bridges, and train data collected from the Gyeongbu high-speed railway over about one month are utilized. Four different statistical methods are applied to estimate the design load matching the bridge design life, and the results are compared. To examine the safety level provided by the design load combination for railway bridges, reliability indexes are determined and the results are analyzed. The load effect from the current design live load for high-speed rail bridges, which is 0.75 times the standard train load, turned out to be at least 22-30% greater than the load effect estimated from the measured data. Judged against the ultimate limit state, there is room for an additional reduction of the safety factors through reliability analysis.
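One representative statistical treatment consistent with the abstract is an extreme-value fit of block-maximum load effects extrapolated to the design return period. The Gumbel model, the synthetic daily maxima, and the 100-year design life below are all assumptions standing in for the WIM records and the paper's four methods.

```python
import numpy as np
from scipy import stats

# Synthetic daily-maximum load effects (arbitrary units) standing in for
# one month of measured WIM data.
rng = np.random.default_rng(3)
daily_max = stats.gumbel_r.rvs(loc=100.0, scale=8.0, size=30, random_state=rng)

# Fit a Gumbel distribution to the block maxima, then extrapolate:
# the characteristic value for a 100-year design life is the level with
# daily exceedance probability 1/N.
loc, scale = stats.gumbel_r.fit(daily_max)
n_days = 100 * 365
design_load = stats.gumbel_r.ppf(1 - 1 / n_days, loc=loc, scale=scale)
print(round(design_load, 1))
```

Comparing such an extrapolated value against 0.75 times the standard train load is the kind of check the paper's reliability analysis performs.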

Optimization of Contaminated Land Investigation based on Different Fitness-for-Purpose Criteria (조사목적별 기준에 부합하는 오염부지 조사방법의 최적화 방안에 관한 연구)

  • Jong-Chun Lee;Michael H. Ramsey
    • Economic and Environmental Geology
    • /
    • v.36 no.3
    • /
    • pp.191-200
    • /
    • 2003
  • Investigations of land contaminated, for example, by heavy metals from mining activities or by hydrocarbons from oil spillage should be planned against specific fitness-for-purpose (FFP) criteria. An FFP criterion is site-specific, or varies with the situation, and determines not only the data quality but also the decision quality. The limiting factors on these qualities may be, for example, the total budget for the investigation, regulatory guidance, or an expert's subjective fitness-for-purpose criterion. This paper deals with planning investigation methods that can satisfy each suggested FFP criterion based on economic factors and data quality. To this end, a probabilistic loss function was applied to derive the cost-effective investigation method that balances the measurement uncertainty, which quantifies the data quality, against the decision quality. In addition, planning methods are suggested for investigations whose objective is not the classification of the land but simply an estimate of the mean concentration of the contaminant at the site (e.g. for use in risk assessment). Furthermore, an efficient allocation of resources between sampling and analysis is devised. These methods were applied to two contaminated sites in the UK to test the validity of each method.
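The probabilistic loss-function trade-off can be sketched as follows: total expected loss = measurement cost, which grows as the target uncertainty shrinks, plus the expected cost of misclassifying the land against a regulatory threshold. All costs, the threshold, and the site concentration are hypothetical, not values from the UK case studies.

```python
import numpy as np
from scipy import stats

threshold = 500.0        # regulatory limit, mg/kg (assumed)
conc = 450.0             # true mean concentration near the threshold (assumed)
misclass_cost = 50000.0  # consequence cost of a wrong classification (assumed)
base_cost = 2000.0       # measurement cost scaling constant (assumed)

u = np.linspace(5.0, 120.0, 500)            # candidate measurement std devs
meas_cost = base_cost * (50.0 / u)**2        # cost ~ 1/u^2: lower u needs more samples
# probability the measured mean falls above the limit (false "contaminated")
p_wrong = 1 - stats.norm.cdf(threshold, loc=conc, scale=u)
total = meas_cost + misclass_cost * p_wrong  # expected loss per candidate u

u_opt = u[np.argmin(total)]                  # fitness-for-purpose uncertainty
print(round(u_opt, 1))
```

The minimizing uncertainty is the FFP target; the paper's resource-allocation step then splits the effort implied by that uncertainty between sampling and chemical analysis.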