• Title/Summary/Keyword: process fault

Development of Risk Assessment Models for Railway Casualty Accidents (철도 사상사고 위험도 평가 모델 개발에 관한 연구)

  • Park, Chan-Woo;Wang, Jong-Bae;Kim, Min-Su;Choi, Don-Bum;Kwak, Sang-Log
    • Journal of the Korean Society for Railway / v.12 no.2 / pp.190-198 / 2009
  • This study presents the development process of risk assessment models for railway casualty accidents. To evaluate the risks of these accidents, the hazardous events and hazardous factors were identified by reviewing the accident history and by engineering interpretation of the accident behavior. The frequency of each hazardous event was evaluated from historical accident data and structured expert judgments using the Fault Tree Analysis (FTA) technique. In addition, the severity of each hazardous event was assessed with the Event Tree Analysis (ETA) technique and other safety techniques. The risk assessment models developed can be effectively utilized in defining risk reduction measures in connection with option analysis.
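
A minimal sketch of how fault-tree frequencies and event-tree severities can be combined into a risk figure, in the spirit of the FTA/ETA approach described above; the gate structure, branch probabilities, and severity values are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch: combining fault-tree frequencies with event-tree
# severities into a risk estimate. Gate structure and numbers are invented
# for illustration and are not taken from the paper.

def or_gate(probabilities):
    """Probability that at least one independent basic event occurs."""
    p = 1.0
    for q in probabilities:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probabilities):
    """Probability that all independent basic events occur."""
    p = 1.0
    for q in probabilities:
        p *= q
    return p

# Fault tree: frequency (per year) of a hazardous event, built from
# hypothetical basic-event frequencies.
hazardous_event_freq = or_gate([0.002, 0.0005, and_gate([0.01, 0.03])])

# Event tree: branch probabilities leading to consequence classes
# (fatality, major injury, minor injury) with severities in equivalent fatalities.
branches = [(0.05, 1.0), (0.25, 0.1), (0.70, 0.005)]

risk = hazardous_event_freq * sum(p * severity for p, severity in branches)
print(f"Hazardous event frequency: {hazardous_event_freq:.5f} /year")
print(f"Collective risk: {risk:.6f} equivalent fatalities/year")
```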

A Framework for Wide-area Monitoring of Tree-related High Impedance Faults in Medium-voltage Networks

  • Bahador, Nooshin;Matinfar, Hamid Reza;Namdari, Farhad
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.1-10 / 2018
  • Wide-area monitoring of tree-related high impedance faults (THIFs) contributes to increasing the reliability of large-scale networks, since failure to locate such faults early may result in critical line tripping and, consequently, large blackouts. Wide-area THIF monitoring first requires managing the placement of sensors across a large power grid according to the THIF detection objective. For this purpose, this paper presents a framework in which sensors are distributed according to a predetermined risk map. The proposed risk map determines the possibility of THIF occurrence on every branch in a power network, based on the electrical conductivity of trees and their positions relative to power lines, both extracted from spectral data. The obtained possibility value can be treated as a weight coefficient assigned to each branch in the sensor placement problem. The step after sensor deployment is on-line monitoring based on a moving data window. In this on-line process, each received data window is evaluated to obtain the correlation between the low-frequency and high-frequency components of the signal. If the obtained correlation follows a specified pattern, the received signal is considered a THIF. Thereafter, if several faulted-section candidates are found by the deployed sensors, the most likely location is chosen from the list of candidates based on the predetermined THIF risk map.
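
A minimal sketch of the moving-window screening idea described above: each window of the measured current is split into low- and high-frequency components and flagged when their correlation exceeds a pattern threshold. The moving-average band split, window length, and threshold are assumptions, not the paper's detection criteria.

```python
# Illustrative moving-window THIF screening. Band split, window length, and
# threshold are assumptions, not the paper's values.
import numpy as np

def thif_candidate_windows(signal, window=512, smooth=32, threshold=0.6):
    flagged = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        low = np.convolve(seg, np.ones(smooth) / smooth, mode="same")  # low-frequency part
        high = seg - low                                               # high-frequency residual
        # Correlate the envelope of the high-frequency part with the low-frequency part.
        corr = np.corrcoef(np.abs(high), low)[0, 1]
        if abs(corr) > threshold:
            flagged.append((start, corr))
    return flagged

# Synthetic example: a 60 Hz current with measurement noise.
t = np.linspace(0, 1, 8192)
current = np.sin(2 * np.pi * 60 * t) + 0.05 * np.random.randn(t.size)
print(thif_candidate_windows(current))
```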

Hybrid Superconducting Fault Current Limiters for Distribution Electric Networks (하이브리드 방식을 적용한 배전급 초전도 한류기 개발)

  • Lee, B.W.;Park, K.B.;Sim, J.;Oh, I.S.;Lim, S.W.;Kim, H.R.;Hyun, O.B.
    • Proceedings of the KIEE Conference / 2007.07a / pp.102-103 / 2007
  • In order to apply resistive superconducting fault current limiters (SFCLs) to electric power systems, the urgent issues to be settled are the initial installation price of the SFCL, the operation and maintenance cost due to the AC loss of the superconductor and the life of the cryostat, and high-voltage and high-current problems. The AC loss and the high cost of the superconductor and cryostat system are the main bottlenecks for real application. Furthermore, in order to increase the voltage and current ratings of an SFCL, many superconductor components must be connected in series and parallel, which results in extremely high cost. Thus, in order to make a practical SFCL, we designed a novel hybrid SFCL that combines a superconductor with conventional electric equipment including a vacuum interrupter, a power fuse, and a current-limiting reactor. The main purpose of the hybrid SFCL is to drastically reduce the total usage of superconductor by adopting a current commutation method using the superconductor and a high-speed switch. Consequently, satisfactory test results were obtained with this method, and further work toward practical applications is in progress.
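
A rough sketch of the commutation sequence the hybrid concept implies: the superconductor quenches on a fault, the fast switch opens, and the current is diverted into the current-limiting branch. The state logic, thresholds, and the simplified limiting factor are invented for illustration and do not represent the actual device design.

```python
# Illustrative state logic for the hybrid current-limiting sequence. All
# parameters and the simplified circuit behaviour are invented for illustration.

def hybrid_sfcl_step(i_line, quenched, switch_open, i_critical=3_000.0, limit_factor=0.3):
    """Advance the limiter state for one sample of line current (amperes)."""
    if not quenched and abs(i_line) > i_critical:
        quenched = True            # superconductor quenches, resistance appears
    if quenched and not switch_open:
        switch_open = True         # fast switch opens, commutating current to the reactor
    i_out = i_line * limit_factor if switch_open else i_line
    return i_out, quenched, switch_open

fault_current = [1_000, 2_500, 6_000, 12_000, 15_000]  # hypothetical prospective current
quenched = switch_open = False
for i in fault_current:
    limited, quenched, switch_open = hybrid_sfcl_step(i, quenched, switch_open)
    print(f"prospective {i:>6.0f} A -> limited {limited:>7.0f} A")
```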

A source and phase identification study of the M$_{L}$ 3.6 Cheolwon, Korea, earthquake occurred on December 10, 2002 (2002년 12월 10일 규모 3.6 철원지진의 진원요소 및 파상분석)

  • 김우한;박종찬;김성균;박창업
    • Proceedings of the Earthquake Engineering Society of Korea Conference / 2003.03a / pp.3-11 / 2003
  • We analyzed the phases recorded from the M$_{L}$ 3.6 Cheolwon, Korea, earthquake that occurred on December 10, 2002, and computed source parameters such as the hypocenter, origin time, earthquake magnitude, and focal mechanism solutions. We used the PmP and SmS phases to increase the accuracy of the hypocenter and origin time determinations, in addition to the Pg, Pn, Sg, and Sn phases that are generally used in routine processing. The epicenter, depth, and origin time of the Cheolwon earthquake determined from the data of 11 stations within 200 km of the epicenter are 38.8108$^{\circ}$N, 127.2214$^{\circ}$E, 11.955 km, and 7:42:51.436. The earthquake magnitude obtained from all the stations is M$_{L}$ 3.6. The fault plane solution calculated from the data of 19 stations indicates the slip process of a normal fault including a strike-slip component. The compressional stress field has a large vertical component and an ESE-WNW horizontal component, which differs from the mainly horizontal direction of the main compressional stress field in the Korean Peninsula (ENE-WSW) obtained by previous studies.

The Estimated Source of 2017 Pohang Earthquake Using Surface Deformation Modeling Based on Multi-Frequency InSAR Data

  • Fadhillah, Muhammad Fulki;Lee, Chang-Wook
    • Korean Journal of Remote Sensing / v.37 no.1 / pp.57-67 / 2021
  • An earthquake with a magnitude of Mw 5.4 occurred on 17 November 2017 in Pohang, South Korea. This is the second strongest earthquake recorded by local authorities since seismic monitoring equipment was first installed. In order to improve the understanding of earthquakes and surface deformation, many studies have been conducted on these phenomena. In this research, we estimate the surface deformation using the Okada model equations. SAR images from three satellites with different wavelengths (ALOS-2, COSMO-SkyMed, and Sentinel-1) were used to produce the interferogram pairs. The interferograms are used as the reference for surface deformation, and the Okada model is applied to determine the source of the surface deformation that occurred during the earthquake. Non-linear optimization (the Levenberg-Marquardt algorithm) with Monte Carlo restarts was applied to optimize the fault parameters in the modeling process. Based on the modeling results for each satellite data set, the fault geometry is ~6 km in length, ~2 km in width, and ~5 km in depth. The root mean square error values of the surface deformation model results for Sentinel-1, CSK, and ALOS are 0.37 cm, 0.79 cm, and 1.47 cm, respectively. Furthermore, the results of this modeling can be used as learning material for understanding seismic activity, in order to minimize the impacts that arise in the future.
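
A minimal sketch of the inversion strategy described above: Levenberg-Marquardt least squares restarted from random (Monte Carlo) initial guesses to fit fault parameters to observed displacements. The forward model `okada_displacement` below is a hypothetical placeholder, not a real Okada (1985) implementation, and the parameter bounds are assumptions.

```python
# Levenberg-Marquardt fit with Monte Carlo restarts; the forward model is a
# toy stand-in for a real Okada dislocation code.
import numpy as np
from scipy.optimize import least_squares

def okada_displacement(params, xy):
    """Placeholder forward model: params = (length, width, depth, slip)."""
    length, width, depth, slip = params
    r = np.hypot(xy[:, 0], xy[:, 1]) + depth
    return slip * length * width / (4 * np.pi * r**2)  # toy stand-in, not real Okada

def invert(observed, xy, bounds, n_restarts=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(lo, hi)                       # Monte Carlo restart
        res = least_squares(lambda p: okada_displacement(p, xy) - observed,
                            x0, method="lm")           # Levenberg-Marquardt
        if best is None or res.cost < best.cost:
            best = res
    return best

xy = np.random.default_rng(1).uniform(-10_000, 10_000, (200, 2))  # observation points (m)
true = np.array([6_000.0, 2_000.0, 5_000.0, 0.5])
observed = okada_displacement(true, xy) + 0.003 * np.random.default_rng(2).standard_normal(200)
fit = invert(observed, xy, bounds=([1e3, 5e2, 1e3, 0.0], [1e4, 5e3, 1e4, 2.0]))
print("estimated (length, width, depth, slip):", fit.x)
```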

An Approach for the NHPP Software Reliability Model Using Erlang Distribution (어랑 분포를 이용한 NHPP 소프트웨어 신뢰성장 모형에 관한 연구)

  • Kim Hee-Cheul;Choi Yue-Soon;Park Jong-Goo
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.1 / pp.7-14 / 2006
  • The finite-failure NHPP models proposed in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, we propose the Erlang reliability model, which can capture the increasing nature of the failure occurrence rate per fault. Equations to estimate the parameters of the Erlang finite-failure NHPP model based on failure data collected in the form of inter-failure times are developed. To select the shape parameter of the Erlang distribution, a goodness-of-fit test of the distribution was used. A data set in which the underlying failure process could not be adequately described by the existing models motivated the development of the Erlang model. An analysis of this failure data set, using arithmetic and Laplace trend tests, a goodness-of-fit test, and bias tests, is presented.
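
A minimal sketch, under the usual finite-failure NHPP assumptions, of a mean value function m(t) = a·F(t) with an Erlang detection-time distribution and a generic NHPP log-likelihood fitted to toy failure times; this is not claimed to reproduce the paper's exact estimating equations or data.

```python
# Finite-failure NHPP with Erlang per-fault detection time:
# m(t) = a * F(t), lambda(t) = a * f(t), F/f the Erlang CDF/PDF.
# The likelihood below is the generic NHPP log-likelihood for observed failure times.
import math
import numpy as np
from scipy.optimize import minimize

def erlang_cdf(t, k, b):
    return 1.0 - sum(math.exp(-b * t) * (b * t) ** j / math.factorial(j) for j in range(k))

def erlang_pdf(t, k, b):
    return b ** k * t ** (k - 1) * math.exp(-b * t) / math.factorial(k - 1)

def neg_log_likelihood(params, failure_times, k):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    T = failure_times[-1]                       # end of observation
    ll = sum(math.log(a * erlang_pdf(t, k, b)) for t in failure_times)
    return -(ll - a * erlang_cdf(T, k, b))      # negative NHPP log-likelihood

failure_times = np.cumsum([3.0, 5.0, 4.0, 7.0, 6.0, 9.0, 12.0, 15.0])  # toy inter-failure data
fit = minimize(neg_log_likelihood, x0=[10.0, 0.05], args=(failure_times, 2),
               method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"estimated expected faults a = {a_hat:.2f}, rate b = {b_hat:.4f}")
```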

Sensitivity Enhancement of RF Plasma Etch Endpoint Detection With K-means Cluster Analysis

  • Lee, Honyoung;Jang, Haegyu;Lee, Hak-Seung;Chae, Heeyeop
    • Proceedings of the Korean Vacuum Society Conference / 2015.08a / pp.142.2-142.2 / 2015
  • Plasma etch endpoint detection (EPD) of SiO2 and photoresist (PR) layers by plasma impedance monitoring is demonstrated in this work. The plasma etching process is the core process for making fine-pattern devices in semiconductor fabrication, and etching endpoint detection is one of the essential FDC (Fault Detection and Classification) functions for yield management and mass production. In general, optical emission spectroscopy (OES) has been used to detect the endpoint because OES is a simple, non-invasive, and real-time plasma monitoring tool. In OES, the trend of a few sensitive wavelengths is traced. However, in the case of small-open-area etch endpoint detection (e.g., contact etch), it is at the boundary of the detection limit because of the weak signal intensities of the reactants and products. Furthermore, the various materials covering the wafer, such as photoresist (PR), dielectric materials, and metals, make the analysis of OES signals complicated. In this study, full spectra of optical emission signals were collected and the data were analyzed by a data-mining approach, a modified K-means cluster analysis. The K-means cluster analysis is modified to handle about a thousand wavelength variables from OES. This technique can improve the sensitivity of EPD for small-area oxide layer etching processes of about 1.0 % oxide open area. This technique is expected to be applied to various plasma monitoring applications including fault detection as well as EPD.
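
A minimal sketch of clustering full OES spectra to call an endpoint: each frame is one sample, K-means separates pre- and post-endpoint frames, and the endpoint is taken as the first persistent label change. The paper's specific modification of K-means is not reproduced; the synthetic spectra and the two-cluster choice are assumptions.

```python
# K-means on full per-frame OES spectra; endpoint = first persistent cluster change.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_frames, n_wavelengths, endpoint = 300, 1024, 200
spectra = rng.normal(1.0, 0.02, (n_frames, n_wavelengths))
spectra[endpoint:, 100:120] += 0.05        # weak emission change after the (synthetic) endpoint

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)

def first_persistent_change(labels, hold=5):
    start = labels[0]
    for i in range(len(labels) - hold):
        if np.all(labels[i:i + hold] != start):
            return i
    return None

print("detected endpoint frame:", first_persistent_change(labels))
```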

NHPP Software Reliability Model based on Generalized Gamma Distribution (일반화 감마 분포를 이용한 NHPP 소프트웨어 신뢰도 모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.27-36 / 2005
  • Finite-failure NHPP models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes a reliability model using the generalized gamma distribution, which can capture the monotonically increasing (or monotonically decreasing) nature of the failure occurrence rate per fault. Equations to estimate the parameters of the generalized gamma finite-failure NHPP model based on failure data collected in the form of inter-failure times are developed. To select the shape parameter of the generalized gamma distribution, a special pattern was used. A data set in which the underlying failure process could not be adequately described by the known models motivated the development of the gamma or Weibull model. An analysis of the failure data set for the generalized gamma model, using arithmetic and Laplace trend tests, a goodness-of-fit test, and bias tests, is presented.
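
Since this abstract, like the Erlang study above, relies on the Laplace trend test, here is a minimal sketch of that test in its standard failure-truncated form; the toy data and the normal-approximation interpretation thresholds are assumptions, not the paper's.

```python
# Standard failure-truncated Laplace trend test: strongly negative u suggests
# reliability growth (decreasing intensity), positive u suggests deterioration.
import math

def laplace_trend(failure_times):
    """failure_times: cumulative failure times t_1 < ... < t_n."""
    n = len(failure_times)
    t_n = failure_times[-1]
    mean_earlier = sum(failure_times[:-1]) / (n - 1)
    return (mean_earlier - t_n / 2) / (t_n * math.sqrt(1.0 / (12 * (n - 1))))

interfailure = [3, 5, 4, 7, 6, 9, 12, 15, 20, 28]        # toy inter-failure times
cumulative = [sum(interfailure[:i + 1]) for i in range(len(interfailure))]
u = laplace_trend(cumulative)
print(f"Laplace statistic u = {u:.2f}",
      "(reliability growth)" if u < -1.96 else "(no significant trend / deterioration)")
```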

The Power Line Deflection Monitoring System using Panoramic Video Stitching and Deep Learning (딥 러닝과 파노라마 영상 스티칭 기법을 이용한 송전선 늘어짐 모니터링 시스템)

  • Park, Eun-Soo;Kim, Seunghwan;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.25 no.1 / pp.13-24 / 2020
  • There are about nine million power line poles and 1.3 million kilometers of power line for electric power distribution in Korea. Maintenance of such a large number of electric power facilities requires a lot of manpower and time. Recently, various fault diagnosis techniques using artificial intelligence have been studied. Therefore, this paper proposes a power line deflection detection system that applies artificial intelligence and computer vision technology to images taken by a vision system. The proposed system proceeds as follows: (i) detection of the transmission tower using an object detection system; (ii) histogram equalization to address the degraded image quality of the video data; (iii) panoramic video stitching, since the distance between two transmission towers is generally long, so that the entire power line can be captured; (iv) detection of deflection using computer vision technology after applying a power line detection algorithm. This paper explains and experiments on each of these steps.
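
A minimal sketch of steps (ii) and (iii) above using OpenCV: luminance histogram equalization followed by panoramic stitching of consecutive frames. The tower and power-line detectors of steps (i) and (iv) are not shown, the frame file names are placeholders, and the Stitcher call assumes the OpenCV 4.x API.

```python
# Histogram equalization on the luminance channel, then panoramic stitching.
import cv2

def equalize_luminance(frame_bgr):
    """Histogram-equalize only the Y (luminance) channel of a colour frame."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

frames = [equalize_luminance(cv2.imread(name))          # placeholder frame names
          for name in ("frame_000.jpg", "frame_030.jpg", "frame_060.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)   # OpenCV 4.x API
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("powerline_panorama.jpg", panorama)      # input to line/deflection detection
else:
    print("stitching failed with status", status)
```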

Improvement of Service Tree Analysis Using Service Importance (서비스 중요도를 사용한 서비스나무분석의 개선)

  • Park, Jong Hun;Hwang, Young Hun;Lee, Sang Cheon
    • Journal of Korean Society of Industrial and Systems Engineering / v.40 no.2 / pp.41-50 / 2017
  • The purpose of this paper is to improve the service tree analysis recently introduced by Geum et al. [15]. Service tree analysis structures the service from the perspective of customer participation and provides a qualitative analysis method that categorizes service elements on the basis of their impact on the top service. This paper attempts to apply the concept of reliability importance to service tree analysis as a quantitative perspective, which received little consideration in Geum et al. [15]. Reliability importance is a measure of the structural impact of the components that make up a system on the system lifetime in the reliability engineering field and is often used in fault tree analysis. We transform reliability importance into service importance in accordance with service tree analysis, so that the influence of service elements on the service can be judged and compared. The service importance is defined as the amount of change in the service according to the change of a service element; therefore, it can be utilized as an index for determining a service element for service improvement. In addition, as an index for paired service elements, the relationship between two service components can be measured by the joint service importance. This paper introduces the conceptual changes needed in applying reliability importance to service analysis and shows, through an example, how to use the service importance to identify priority service elements for the final service and to improve customer satisfaction. By using the service importance and the joint service importance in service tree analysis, it is possible to make efficient decisions when determining the service elements for analyzing and improving the service.
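
A minimal sketch of the Birnbaum-style importance idea the paper adapts: for a structure function over service-element probabilities, the importance of element i is the difference between the top-service value with element i working and with it failed, and a joint importance of a pair is the corresponding second difference. The three-element structure and probabilities are invented for illustration.

```python
# Birnbaum-style (service) importance and joint importance for a toy structure.

def structure(p):
    """Toy top-service value: element 0 in series with (element 1 OR element 2)."""
    return p[0] * (1 - (1 - p[1]) * (1 - p[2]))

def importance(struct, p, i):
    hi, lo = list(p), list(p)
    hi[i], lo[i] = 1.0, 0.0
    return struct(hi) - struct(lo)

def joint_importance(struct, p, i, j):
    def fix(vi, vj):
        q = list(p)
        q[i], q[j] = vi, vj
        return struct(q)
    return fix(1, 1) - fix(1, 0) - fix(0, 1) + fix(0, 0)

p = [0.95, 0.80, 0.70]                     # hypothetical service-element probabilities
for i in range(3):
    print(f"service importance of element {i}: {importance(structure, p, i):.3f}")
print(f"joint importance of elements (1, 2): {joint_importance(structure, p, 1, 2):.3f}")
```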