• Title/Summary/Keyword: Failure Analysis Framework


Seismic Fragility of Steel Piping System Based on Pipe Size, Coupling Type, and Wall Thickness

  • Ju, Bu Seog;Gupta, Abhinav;Ryu, Yonghee
    • International journal of steel structures
    • /
    • v.18 no.4
    • /
    • pp.1200-1209
    • /
    • 2018
  • In this study, a probabilistic framework for the damage assessment of pipelines subjected to extreme hazard scenarios was developed to mitigate risk and enhance design reliability. Nonlinear 3D finite element models of T-joint systems were developed based on experimental leakage-detection tests of black iron piping systems, and a damage assessment analysis of the vulnerability of their components according to nominal pipe size, coupling type, and wall thickness under seismic wave propagation was performed. The analysis results showed the 2-inch schedule 40 threaded T-joint system to be the most fragile among the nominal pipe sizes considered. As for coupling types, the data indicated that the probability of failure of the threaded T-joint coupling was significantly higher than that of the grooved type. Finally, the seismic capacity of the schedule 40 wall thickness was lower than that of schedule 10 in the 4-inch grooved coupling, owing to the difference in energy dissipation. This assessment can therefore contribute to damage detection, and to reducing financial losses due to failure of joint piping systems in liquid pipelines, prior to decision-making.
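Seismic fragility of the kind this abstract describes is commonly expressed as a lognormal fragility curve: the probability of failure as a function of ground-motion intensity. The sketch below is a minimal illustration of that idea, assuming hypothetical median capacities and dispersion; the values are not taken from the paper.

```python
import math

def fragility(pga, median, beta):
    """Probability of failure at peak ground acceleration `pga` (g), given a
    lognormal fragility with median capacity `median` (g) and log-standard
    deviation `beta`. Values used below are illustrative only."""
    # Standard normal CDF evaluated via the error function
    z = math.log(pga / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical medians: a threaded joint with a lower median capacity is
# more fragile than a grooved one at every intensity level.
p_threaded = fragility(0.5, median=0.6, beta=0.4)
p_grooved = fragility(0.5, median=1.1, beta=0.4)
assert p_threaded > p_grooved
```

At the median intensity the failure probability is exactly 0.5, which is the defining property of the lognormal fragility form.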

A case study of competing risk analysis in the presence of missing data

  • Limei Zhou;Peter C. Austin;Husam Abdel-Qadir
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.1
    • /
    • pp.1-19
    • /
    • 2023
  • Missing or incomplete data are common in observational biomedical research. Multiple imputation is an effective approach to handling missing data, with the ability to decrease bias while increasing statistical power and efficiency. In recent years, propensity score (PS) matching has been increasingly used in observational studies to estimate treatment effects, as it can reduce confounding due to measured baseline covariates. In this paper, we describe in detail approaches to competing risk analysis in the setting of incomplete observational data when using PS matching. First, we used multiple imputation to impute several missing variables simultaneously, then conducted propensity-score matching to match statin-exposed patients with those unexposed. Afterwards, we assessed the effect of statin exposure on the risk of heart failure-related hospitalizations or emergency visits by estimating both relative and absolute effects. Collectively, we provide a general methodological framework for assessing treatment effects in incomplete observational data. In addition, we present a practical approach to producing an overall cumulative incidence function (CIF) based on estimates from multiply imputed and PS-matched samples.
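Combining estimates across multiply imputed datasets, as this abstract describes, is standardly done with Rubin's rules: average the per-imputation estimates, and combine within- and between-imputation variance. A minimal sketch, with made-up estimates standing in for per-imputation log-hazard ratios:

```python
import math

def pool_rubin(estimates, variances):
    """Pool point estimates (e.g. log-hazard ratios) and their variances
    across m imputed datasets using Rubin's rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                       # pooled point estimate
    ubar = sum(variances) / m                       # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = ubar + (1.0 + 1.0 / m) * b                  # total variance
    return qbar, math.sqrt(t)

# Illustrative values from m = 3 hypothetical imputed-and-matched samples.
est, se = pool_rubin([0.12, 0.18, 0.15], [0.04, 0.05, 0.045])
```

The between-imputation term inflates the standard error to reflect uncertainty due to the missing data, which is why pooling naively (averaging standard errors) understates it.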

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge and expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study analysis results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the stage of using the deep learning framework, companies will increase the number of deep learning research developers, their ability to use the framework, and the supply of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adoption of the deep learning framework is proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework within the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework within the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five success factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Reliability Updates of Driven Piles Based on Bayesian Theory Using Proof Pile Load Test Results (베이지안 이론을 이용한 타입강관말뚝의 신뢰성 평가)

  • Park, Jae-Hyun;Kim, Dong-Wook;Kwak, Ki-Seok;Chung, Moon-Kyung;Kim, Jun-Young;Chung, Choong-Ki
    • Journal of the Korean Geotechnical Society
    • /
    • v.26 no.7
    • /
    • pp.161-170
    • /
    • 2010
  • For the development of load and resistance factor design, reliability analysis is required to calibrate resistance factors within the framework of reliability theory. Conventionally, the distribution of the measured-to-predicted pile resistance ratio is obtained based only on the results of load tests conducted to failure for the assessment of uncertainty regarding pile resistance. In other words, successful proof load test results (piles that resisted twice their design load without failure) were discarded and therefore not reflected in the reliability analysis. In this paper, a new systematic method based on Bayesian theory is used to update the reliability indices of driven steel pipe piles by adding proof pile load test results, even those not conducted to failure, to the prior distribution of the pile resistance ratio. Fifty-seven static pile load tests performed to failure in Korea were compiled to construct the prior distribution of the pile resistance ratio. The empirical method proposed by Meyerhof is used to calculate the predicted pile resistance. Reliability analyses were performed using the updated distribution of the pile resistance ratio. The contribution of this study is that the distribution of the pile resistance ratio can be updated using load test results even when the tests were not conducted to failure, and that Bayesian updates are most effective when only limited data are available for reliability analysis.
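The key idea in the abstract, that a proof test survived without failure still carries information, can be sketched as a Bayesian update with right-censored observations: a surviving pile only tells us its resistance ratio exceeds the proof level, which enters the likelihood through the survival function rather than the density. The grid-based sketch below uses illustrative numbers, not the paper's data or its specific updating scheme.

```python
import math

def normal_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def normal_sf(x, mu, s):
    # P(X > x): survival function of a normal distribution
    return 0.5 * (1.0 - math.erf((x - mu) / (s * math.sqrt(2.0))))

# Prior for the mean resistance ratio (illustrative, standing in for the
# distribution built from the 57 failure-tested piles).
mu_grid = [0.5 + 0.01 * i for i in range(101)]   # candidate means 0.5..1.5
sigma = 0.3                                      # assumed known spread
prior = [normal_pdf(mu, 1.0, 0.15) for mu in mu_grid]

# Proof tests that survived the proof load are right-censored observations:
# they contribute the probability that the ratio exceeds the proof level.
proof_levels = [0.8, 0.9]
post = [p * math.prod(normal_sf(r, mu, sigma) for r in proof_levels)
        for mu, p in zip(mu_grid, prior)]
z = sum(post)
post = [p / z for p in post]

# Surviving proof tests shift the posterior mean above the prior mean of 1.0.
post_mean = sum(mu * p for mu, p in zip(mu_grid, post))
```

This is why discarding successful proof tests wastes information: each survival multiplies the likelihood by a factor that favors higher mean resistance.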

Advanced discretization of rock slope using block theory within the framework of discontinuous deformation analysis

  • Wang, Shuhong;Huang, Runqiu;Ni, Pengpeng;Jeon, Seokwon
    • Geomechanics and Engineering
    • /
    • v.12 no.4
    • /
    • pp.723-738
    • /
    • 2017
  • Rock is a heterogeneous material, which introduces complexity into the analysis of rock slopes, since both the existing discontinuities within the rock mass and the intact rock contribute to the degradation of strength. Rock failure is often catastrophic due to the brittle nature of the material, involving sliding along structural planes and fracturing of rock bridges. This paper proposes an advanced discretization method for rock mass based on block theory. An in-house software package, GeoSMA-3D, has been developed to generate the discrete fracture network (DFN) model, considering both measured and artificial joints. Measured joints are obtained from photogrammetry analysis of the excavation face; statistical tools are then used to derive artificial joints within the rock mass. Key blocks are searched to provide guidance on potential reinforcement measures. The discretized blocky system is subsequently implemented in a discontinuous deformation analysis (DDA) code. The strength reduction technique is employed to analyze the stability of the slope, where the factor of safety is obtained once excessive deformation of the slope profile is observed. The combined analysis approach also provides the failure mode, which can guide the choice of strengthening strategy if needed. Finally, an illustrative example is presented for the analysis of a 20 m high rock slope inclined at 60° using the combined GeoSMA-3D and DDA calculation.
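The strength reduction technique mentioned in the abstract divides the shear strength parameters (cohesion and the tangent of the friction angle) by a trial factor F and re-runs the analysis until failure is detected; the F at failure is the factor of safety. A minimal sketch of that loop, with a toy failure criterion standing in for the actual DDA computation:

```python
import math

def factor_of_safety(c, phi_deg, fails_at, step=0.01):
    """Shear-strength reduction sketch: scale cohesion c and tan(phi) down
    by a trial factor f until the slope model reports failure. `fails_at`
    is a stand-in for the DDA run that returns True once deformation of the
    slope profile becomes excessive."""
    f = 1.0
    while True:
        c_reduced = c / f
        phi_reduced = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / f))
        if fails_at(c_reduced, phi_reduced):
            return f
        f += step

# Toy criterion (illustrative thresholds only): the slope is deemed unstable
# once the reduced cohesion or friction angle drops below a limit.
fos = factor_of_safety(40.0, 35.0, lambda c, phi: c < 25.0 or phi < 22.0)
```

In practice `fails_at` would wrap the full numerical model; the loop structure and the definition of F as the reduction at failure are the essence of the method.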

Analysis of RC beams subjected to shock loading using a modified fibre element formulation

  • Valipour, Hamid R.;Huynh, Luan;Foster, Stephen J.
    • Computers and Concrete
    • /
    • v.6 no.5
    • /
    • pp.377-390
    • /
    • 2009
  • In this paper, an improved one-dimensional frame element for modelling reinforced concrete beams and columns subjected to impact is presented. The model is developed in the framework of a flexibility-based fibre element formulation that ignores the shear effect at the material level; however, a simple shear cap is introduced at the section level to account for possible shear failure. The effect of strain rate at the fibre level is taken into account using the dynamic increase factor (DIF) concept for steel and concrete. The capability of the formulation to estimate the element response history is demonstrated by numerical examples, and it is shown that the developed 1D element has the potential to be used for dynamic analysis of large framed structures subjected to air blast and rigid-object impact.
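The DIF concept mentioned in the abstract scales a material's static strength by a factor that grows with strain rate. The sketch below uses a generic power-law form with illustrative parameters; design codes such as CEB-FIP prescribe material-specific expressions and constants, which this does not reproduce.

```python
def dif_power_law(strain_rate, ref_rate=1e-5, exponent=0.02):
    """Generic power-law dynamic increase factor: the ratio of dynamic to
    static strength grows with strain rate. The reference rate and exponent
    here are illustrative placeholders, not code-prescribed values."""
    return (strain_rate / ref_rate) ** exponent

# Dynamic strength of a fibre = static strength scaled by its DIF.
f_static = 30.0                               # MPa, assumed static strength
f_dynamic = f_static * dif_power_law(1.0)     # at a strain rate of 1 /s
```

At the reference (quasi-static) strain rate the factor is 1, so the model reduces to the static constitutive law, which is the property any DIF formulation must preserve.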

Review of Human Reliability Analysis Methods for Railway Risk Assessment (철도 위험도 평가를 위한 인간신뢰도분석 방법 검토)

  • Jung, Won-Dea;Jang, Seung-Cheol;Kwak, Sang-Log;Kim, Jae-Whan
    • Proceedings of the KSR Conference
    • /
    • 2006.11b
    • /
    • pp.1140-1145
    • /
    • 2006
  • Railway human reliability analysis (R-HRA) plays the role of identifying and assessing human failure events within the framework of probabilistic risk assessment (PRA) of railway systems. This paper reviews three existing HRA methods, namely the K-HRA (THERP/ASEP-based) method, the HEART method, and the RSSB-HRA method, and introduces a case study performed to select an appropriate method for railway risk assessment. The case concerns signal passed at danger (SPAD) events, which arise from a variety of factors. From the case study, the strengths and limitations of each method were derived and compared from the viewpoint of their applicability to the railway industry.
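Of the reviewed methods, HEART has a particularly compact quantification scheme: a nominal human error probability for the generic task type is multiplied, for each applicable error-producing condition (EPC), by ((max effect − 1) × assessed proportion of affect + 1). A minimal sketch with hypothetical numbers (the task type and EPC values below are illustrative, not taken from the SPAD case study):

```python
def heart_hep(nominal_hep, epcs):
    """HEART quantification sketch: scale the generic task type's nominal
    human error probability by each error-producing condition (EPC).
    `epcs` is a list of (max_effect, assessed_proportion) pairs."""
    hep = nominal_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)   # a probability cannot exceed 1

# Hypothetical SPAD-like task: nominal HEP 0.003 with two EPCs applied.
hep = heart_hep(0.003, [(11.0, 0.4), (4.0, 0.5)])
```

The multiplicative structure is what makes HEART quick to apply, and also what the review-style comparisons typically weigh against the more detailed event-tree treatment of THERP-based methods.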


Nonlinear Finite Element Analysis of Considering Interface Behaviors between Steel and Concrete (강-콘크리트 계면파괴에 관한 비선형 유한요소해석)

  • Joo, Young-Tae;Lee, Yong-Hak
    • Proceedings of the Korea Concrete Institute Conference
    • /
    • 2004.11a
    • /
    • pp.105-108
    • /
    • 2004
  • In general, the nonlinear behavior of composite structures composed of steel and concrete is analyzed on the basis of the assumption of perfect bond action at the steel-concrete interface, in which interface slip or separation is not allowed. The assumption is based on the premise that full interface bond behavior is provided by mechanical connectors (studs). However, since the number and spacing of the studs are determined by the stress resultants calculated in the interface area, an interface analysis is required to evaluate those stress resultants. This paper describes the nonlinear steel-concrete interface behavior considering the two interface failure mechanisms of slip and separation. An elastoplastic constitutive relation is developed through a formulation framework using the two energy dissipation mechanisms. As a result, steel plate push-out tests, with the plate sandwiched between concrete blocks, are analyzed and compared with the test results, with which good agreement is observed.
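The two failure mechanisms named in the abstract can be illustrated with a simple state check: separation when normal tension exceeds the bond's tensile strength, and slip when shear exceeds a Mohr-Coulomb limit. This is a conceptual sketch only (tension taken positive, parameters invented), not the paper's elastoplastic formulation.

```python
def interface_state(sigma_n, tau, bond_tensile, cohesion, mu):
    """Classify an interface stress state (illustrative parameters):
    separation when normal tension exceeds the bond's tensile strength,
    slip when |shear| exceeds cohesion plus friction times compression."""
    if sigma_n > bond_tensile:
        return "separation"
    shear_capacity = cohesion + mu * max(-sigma_n, 0.0)  # compression adds capacity
    if abs(tau) > shear_capacity:
        return "slip"
    return "bonded"

assert interface_state(2.5, 0.0, bond_tensile=2.0, cohesion=1.0, mu=0.5) == "separation"
assert interface_state(-1.0, 2.0, bond_tensile=2.0, cohesion=1.0, mu=0.5) == "slip"
```

In the actual elastoplastic model these limits act as yield surfaces with associated energy dissipation, rather than as a one-shot classification.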


A Study on the Influence of the Recovery Methods of Information Service Failure on Online User Justice and Satisfaction (정보서비스 실패에 대한 회복 방법이 온라인 이용자의 공정성과 만족도에 미치는 영향에 관한 연구)

  • Kim, Young-Gon
    • Journal of the Korean Society for information Management
    • /
    • v.30 no.2
    • /
    • pp.35-59
    • /
    • 2013
  • The aim of this study is to investigate the role of information service failure severity within the existing framework of service recovery justice research and to analyse the effects of the attribution of service recoveries on recovered user satisfaction and revisit intention. For the empirical analysis, a total of 452 valid questionnaires were used, gathered from university students who had experienced information service failures of a university library. Some findings of the research are as follows. First, service failure severity has a negative effect on service recovery justice. Second, procedural and interactional recovery justice have a positive effect on recovered user satisfaction. Third, service recovery justice has a significant influence on procedural and interactional justice. Finally, recovered user satisfaction has a positive effect on user revisits and word of mouth.

A Development of Hydrologic Dam Risk Analysis Model Using Bayesian Network (BN) (Bayesian Network (BN)를 활용한 수문학적 댐 위험도 해석 기법 개발)

  • Kim, Jin-Young;Kim, Jin-Guk;Choi, Byoung-Han;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association
    • /
    • v.48 no.10
    • /
    • pp.781-791
    • /
    • 2015
  • Dam risk analysis requires a systematic process to account for how hydrologic variables (e.g. precipitation, discharge, and water surface level) interact with each other. However, existing dam risk approaches show limitations in assessing the interdependencies across these variables. This study aimed to develop a Bayesian network based dam risk analysis model to better characterize those interdependencies. The proposed model was found to offer advantages in identifying and understanding the interdependencies and uncertainties in dam risk analysis. It also provides a scenario-based risk evaluation framework expressed as a function of failure probability and consequence. This tool gives dam managers a framework for prioritizing risks more effectively.
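The structure the abstract describes, hydrologic variables conditioning each other and risk defined as failure probability times consequence, can be shown with a minimal Bayesian-network-style chain. All probabilities and the consequence value below are invented for illustration and are not the paper's results.

```python
# Minimal chain: extreme precipitation -> high inflow -> overtopping (failure).
# Marginalizing over each parent gives the downstream probabilities.
p_extreme_precip = 0.05
p_high_inflow_given = {True: 0.6, False: 0.02}   # P(high inflow | precip state)
p_overtop_given = {True: 0.10, False: 0.001}     # P(overtopping | inflow state)

p_high_inflow = sum(p_high_inflow_given[e] * p
                    for e, p in [(True, p_extreme_precip),
                                 (False, 1.0 - p_extreme_precip)])
p_failure = sum(p_overtop_given[h] * p
                for h, p in [(True, p_high_inflow),
                             (False, 1.0 - p_high_inflow)])

consequence = 5.0e8          # assumed downstream damage, arbitrary units
risk = p_failure * consequence
```

The advantage over treating each variable independently is visible here: changing the precipitation scenario propagates through the conditional tables to the failure probability, which is exactly the interdependency a flat approach cannot represent.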