• Title/Summary/Keyword: Fault Management Process


A Software Performance Evaluation Model with Mixed Debugging Process (혼합수리 과정을 고려한 소프트웨어성능 평가 모형)

  • Jang, Kyu-Beom; Lee, Chong-Hyung
    • Communications for Statistical Applications and Methods / v.18 no.6 / pp.741-750 / 2011
  • In this paper, we derive a software mixed-debugging model based on a Markov process, assuming that the time needed to perform debugging is random and that its distribution may depend on the type of fault causing the failure. We assume that debugging starts as soon as a software failure occurs, and that either perfect or imperfect debugging is performed according to the fault type: a fault that is easy to correct triggers the perfect debugging process, while an imperfect debugging process is performed to fix failures caused by faults that are difficult to correct. The distribution of the first passage time and the working probability of the software system are obtained; in addition, an availability function, the probability that the software is in a working state at a given time, is derived. Numerical examples are provided for illustrative purposes.
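For orientation, the availability function of the simplest single-fault-type special case of such a model (a two-state Markov process with constant failure rate \lambda and constant repair rate \mu; a standard textbook result, not the paper's two-type mixed-debugging model) has the closed form

    A(t) = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\, e^{-(\lambda+\mu)t},

so availability decays from A(0) = 1 to the steady-state value \mu/(\lambda+\mu). The mixed-debugging model generalizes this picture by letting the repair-time distribution depend on the fault type.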

A Probability Embedded Expert System to Detect and Resolve Network Faults Intelligently (지능적 네트워크 장애 판별 및 문제해결을 위한 확률기반 시스템)

  • Yang, Young-Moon; Chang, Byeong-Yun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.2 / pp.135-143 / 2011
  • Current network management systems (NMS) provide only information about the criticality of alarms, so fault analysis depends mainly on experts with many years of field experience. As a result, localizing the real root cause of a fault from the alarm information costs considerable time and manpower. To solve these problems, we analyze the fault probability for each alarm, show how to build a problem-solving procedure with confidence levels, and present an approach for building a system that realizes this procedure. In addition, we give a case study showing how to use the proposed ideas.
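A minimal sketch of the probability step (hypothetical alarm and fault names; not the authors' system): estimate P(fault | alarm) from historical alarm/root-cause pairs and rank the candidates to obtain a confidence-ordered problem-solving procedure.

    from collections import Counter

    # Historical observations: (alarm, confirmed root-cause fault) pairs.
    # The alarm and fault names are hypothetical placeholders.
    history = [
        ("LINK_DOWN", "fiber_cut"), ("LINK_DOWN", "fiber_cut"),
        ("LINK_DOWN", "switch_port"), ("HIGH_LATENCY", "congestion"),
        ("HIGH_LATENCY", "routing_loop"), ("HIGH_LATENCY", "congestion"),
    ]

    def fault_probabilities(history, alarm):
        """Estimate P(fault | alarm) from past alarm/fault co-occurrences."""
        counts = Counter(f for a, f in history if a == alarm)
        total = sum(counts.values())
        return {fault: n / total for fault, n in counts.items()}

    # Rank candidate faults for a new LINK_DOWN alarm; the top entries
    # form the confidence-ordered problem-solving procedure.
    ranked = sorted(fault_probabilities(history, "LINK_DOWN").items(),
                    key=lambda kv: -kv[1])
    print(ranked)  # [('fiber_cut', 0.67), ('switch_port', 0.33)] (approx.)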

Risk analysis of offshore terminals in the Caspian Sea

  • Mokhtari, Kambiz; Amanee, Jamshid
    • Ocean Systems Engineering / v.9 no.3 / pp.261-285 / 2019
  • The offshore industry nowadays faces emerging hazards with vague properties, such as acts of terrorism, acts of war, and unforeseen natural disasters such as tsunamis. Industry professionals such as offshore energy insurers, safety engineers, and risk managers therefore need an appropriate method for determining failure rates and frequencies of potential hazards for which no data are available. Furthermore, in conventional risk-based analysis models such as fault tree analysis, hazards with vague properties are normally waived and ignored; previously, only a traditional probability-based fault tree analysis could be implemented. To overcome this shortcoming, fuzzy set theory is applied to fault tree analysis to combine known and unknown data, with the combined result determined in a fuzzy environment. This is accomplished by integrating a generic bow-tie based risk analysis model into the risk assessment phase of the Risk Management (RM) cycle as the backbone of that phase. Fault Tree Analysis (FTA) and Event Tree Analysis (ETA) are used to analyse one of the significant risk factors associated with offshore terminals. This process will ultimately help insurers and risk managers in the marine and offshore industries to investigate potential hazards in more detail where vagueness is present. For this purpose, a case study of an offshore terminal reflecting the nature of the Caspian Sea is examined.
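A minimal sketch of the fuzzy gate arithmetic (triangular fuzzy numbers written as (low, mode, high); the event probabilities are hypothetical expert judgements, and this is generic fuzzy FTA rather than the authors' bow-tie model):

    # Fuzzy fault tree gates over triangular fuzzy numbers (low, mode, high).
    def fuzzy_and(events):
        """AND gate: componentwise product of the input probabilities."""
        out = (1.0, 1.0, 1.0)
        for l, m, h in events:
            out = (out[0] * l, out[1] * m, out[2] * h)
        return out

    def fuzzy_or(events):
        """OR gate: 1 - prod(1 - p), componentwise (the map is monotone,
        so low inputs give the low bound and high inputs the high bound)."""
        nl = nm = nh = 1.0
        for l, m, h in events:
            nl, nm, nh = nl * (1 - l), nm * (1 - m), nh * (1 - h)
        return (1 - nl, 1 - nm, 1 - nh)

    # Hypothetical expert-judged basic events for a vague hazard.
    act_of_war   = (0.010, 0.020, 0.050)
    tsunami      = (0.001, 0.005, 0.020)
    barrier_fail = (0.050, 0.100, 0.200)

    # Top event: an external hazard occurs AND the safety barrier fails.
    top = fuzzy_and([fuzzy_or([act_of_war, tsunami]), barrier_fail])
    print(top)  # triangular fuzzy estimate of the top-event probability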

Fault-Causing Process and Equipment Analysis of PCB Manufacturing Lines Using Data Mining Techniques (데이터마이닝 기법을 이용한 PCB 제조라인의 불량 혐의 공정 및 설비 분석)

  • Sim, Hyun Sik; Kim, Chang Ouk
    • KIPS Transactions on Software and Data Engineering / v.4 no.2 / pp.65-70 / 2015
  • In the PCB (Printed Circuit Board) manufacturing industry, yield is an important management factor because it significantly affects product cost and quality. In practice, ensuring a high yield in a manufacturing shop is very hard because the products, called chips, are made through hundreds of nano-scale manufacturing processes. Therefore, to improve the yield, it is necessary to analyze the main fault-causing processes and equipment responsible for low PCB yield. This paper proposes a systematic approach for discovering fault-causing processes and equipment by using logistic regression and a stepwise variable selection procedure. We tested the approach with lot trace records from a real work site, where each record consists of the equipment sequence that a lot passed through and the number of faults of each fault type in the lot. We demonstrated that the test results reflect the real situation of a PCB manufacturing line.
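A minimal sketch of the two-step idea (synthetic data and hypothetical equipment columns; forward selection is used here as one common stepwise variant, not the authors' exact procedure): fit a logistic regression of lot fault outcome on equipment-passage indicators, adding variables one at a time while they improve cross-validated accuracy.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 400
    # One 0/1 column per candidate piece of equipment a lot passed through.
    X = rng.integers(0, 2, size=(n, 5)).astype(float)
    # Synthetic ground truth: equipment column 2 drives the fault rate.
    y = (rng.random(n) < np.where(X[:, 2] == 1, 0.6, 0.1)).astype(int)

    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining:
        scores = {j: cross_val_score(LogisticRegression(),
                                     X[:, selected + [j]], y, cv=5).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:      # stop when no column improves the fit
            break
        best = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)

    print("suspected fault-causing equipment columns:", selected)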

The Implementation of Fault Tolerance Service for QoS in Grid Computing (그리드 컴퓨팅에서 서비스 품질을 위한 결함 포용 서비스의 구현)

  • Lee, Hwa-Min
    • The Journal of Korean Association of Computer Education / v.11 no.3 / pp.81-89 / 2008
  • The failure rate of resources in grid computing is higher than in traditional parallel computing. Since resource failures affect job execution fatally, a fault tolerance service is essential in computational grids, and grid services are often expected to meet minimum levels of quality of service (QoS) for desirable operation. However, the Globus toolkit does not provide a fault tolerance service that supports fault detection and fault management and satisfies QoS requirements. This paper therefore proposes a fault tolerance service that satisfies QoS requirements in computational grids. To provide this service, we extend the definition of failure to cover process failure, processor failure, and network failure; we then propose a resource scheduling service, a fault detection service, and a fault management service, and present implementation and experimental results.
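A toy sketch of how the extended failure taxonomy might be operationalized in a heartbeat-style fault detection service (the classification rules, names, and timeout are illustrative assumptions, not the paper's design):

    def classify_failures(last_beats, now, timeout=5.0):
        """last_beats maps (host, pid) -> last heartbeat timestamp."""
        stale = {k for k, t in last_beats.items() if now - t > timeout}
        hosts = {h for h, _ in last_beats}
        dead_hosts = {h for h in hosts
                      if all(k in stale for k in last_beats if k[0] == h)}
        failures = {}
        for host, pid in stale:
            if dead_hosts == hosts:                  # every host silent:
                failures[(host, pid)] = "network"    # suspect a partition
            elif host in dead_hosts:                 # whole host silent
                failures[(host, pid)] = "processor"
            else:                                    # only this process silent
                failures[(host, pid)] = "process"
        return failures

    beats = {("node1", 100): 10.0, ("node1", 101): 19.0, ("node2", 200): 3.0}
    print(classify_failures(beats, now=20.0))
    # {('node1', 100): 'process', ('node2', 200): 'processor'}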


A Heuristic Methodology for Fault Diagnosis using Statistical Patterns

  • Kwon, Young-il; Song, Suh-ill
    • Journal of Korean Society for Quality Management / v.21 no.2 / pp.17-26 / 1993
  • Process fault diagnosis is complicated because quality control problems can result from a variety of causes, including problems with electrical components, mechanical components, human errors, job justification errors, and air-conditioning influences. To make the system run smoothly with minimum delay, it is necessary to suggest heuristic remedies for the detected faults. Hence, this paper describes a heuristic methodology for fault diagnosis based on statistical patterns generated by quality characteristics. Briefly, if a sample pattern generated by random variables resembles a number of prototype patterns, it is matched to the most similar prototype among them. The matching is based on the similarity between the sample pattern and the prototype pattern, calculated as the weighted average of the squared deviations between the standard normal values transformed from the observed quality characteristics in the sample pattern and the corresponding critical values in the matched prototype pattern.
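A minimal sketch of the matching rule described above (prototype values and weights are hypothetical): the score is a weighted average of squared deviations between the standardized sample values and a prototype's critical values, and the prototype with the smallest score wins.

    import numpy as np

    def dissimilarity(sample_z, prototype_z, weights):
        """Weighted average of squared deviations; smaller = better match."""
        w = np.asarray(weights, dtype=float)
        d = np.asarray(sample_z) - np.asarray(prototype_z)
        return float(np.sum(w * d**2) / np.sum(w))

    # Hypothetical prototype patterns in standard normal units.
    prototypes = {
        "upward_trend": [-1.5, -0.5, 0.5, 1.5],
        "cyclic":       [ 1.0, -1.0, 1.0, -1.0],
    }
    sample = [-1.2, -0.4, 0.6, 1.3]   # standardized observed characteristics
    weights = [1.0, 1.0, 1.0, 1.0]

    best = min(prototypes,
               key=lambda p: dissimilarity(sample, prototypes[p], weights))
    print(best)  # 'upward_trend'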


A Study on Software Reliability Growth Model for Isolated Testing-Domain under Imperfect Debugging (불완전수정에서 격리된 시험영역에 대한 소프트웨어 신뢰도 성장모형 연구)

  • Nam, Kyung-H.; Kim, Do-Hoon
    • Journal of Korean Society for Quality Management / v.34 no.3 / pp.73-78 / 2006
  • In this paper, we propose a software reliability growth model based on the testing domain of a software system that is isolated by the test cases executed during software testing. In particular, our model assumes an imperfect debugging environment in which new faults are introduced during the fault-correction process, and it is formulated as a nonhomogeneous Poisson process (NHPP). The model is applied to fault-detection data, software reliability assessment results are presented, and its goodness-of-fit is compared with that of existing software reliability growth models.
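For reference, the generic NHPP formulation that such growth models instantiate (a standard form; the paper's specific mean value function for the isolated testing domain is not reproduced here) is

    \Pr\{N(t) = n\} = \frac{m(t)^{n}}{n!}\, e^{-m(t)}, \qquad n = 0, 1, 2, \ldots,

where N(t) counts the faults detected by time t and m(t) is the mean value function. Imperfect debugging is commonly captured by letting the fault content grow with corrections, e.g. a(t) = a + \alpha m(t) for a fault-introduction rate \alpha.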

A Study on Software Reliability Assessment Model of Superposition NHPP (중첩 NHPP를 이용한 소프트웨어 신뢰도 평가 모형 연구)

  • Kim, Do-Hoon; Nam, Kyung-H.
    • Journal of Korean Society for Quality Management / v.36 no.1 / pp.89-95 / 2008
  • In this paper, we propose a software reliability assessment model based on the superposition of nonhomogeneous Poisson processes (NHPP) describing the fault-detection behavior observed through the test cases executed during software testing. In particular, our model assumes an imperfect debugging environment in which new faults are introduced during the fault-correction process. The model is applied to fault-detection data, software reliability assessment results are presented, and its goodness-of-fit is compared with that of existing software reliability growth models.
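As background on the superposition structure (a standard property of Poisson processes, not the paper's specific formulation): the superposition of k independent NHPPs with mean value functions m_i(t) and intensities \lambda_i(t) is again an NHPP, with

    m(t) = \sum_{i=1}^{k} m_i(t), \qquad \lambda(t) = \sum_{i=1}^{k} \lambda_i(t),

so distinct fault-detection subprocesses can be pooled into a single process for assessment.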

The Comparative Software Cost Model of Considering Logarithmic Fault Detection Rate Based on Failure Observation Time (로그형 관측고장시간에 근거한 결함 발생률을 고려한 소프트웨어 비용 모형에 관한 비교 연구)

  • Kim, Kyung-Soo; Kim, Hee-Cheul
    • Journal of Digital Convergence / v.11 no.11 / pp.335-342 / 2013
  • In this study, a software reliability cost model considering a logarithmic fault detection rate, based on failure times observed during software product testing, is investigated. The probability of new faults being introduced is added to the Goel-Okumoto model, which is widely used in the reliability field, and faults corrected or modified in the software are described by a finite-failure nonhomogeneous Poisson process model. To analyze the software cost model with the time-dependent fault detection rate, the parameters were estimated by maximum likelihood estimation from inter-failure time data. This research is expected to help software developers identify, to some extent, the best time to release the software.
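For context, the Goel-Okumoto mean value function and a typical release-time cost model built on it are shown below (standard textbook forms; the paper's logarithmic detection-rate variant and its cost coefficients are not reproduced here):

    m(t) = a\,(1 - e^{-bt}), \qquad
    C(T) = c_1\, m(T) + c_2\,[m(T_{LC}) - m(T)] + c_3\, T,

where a is the expected total number of faults, b the fault detection rate, c_1 and c_2 (with c_2 > c_1) the costs of fixing a fault during testing and during operation over the life cycle T_{LC}, and c_3 the cost per unit testing time. The release time T^* minimizing C(T) is the "best time to release" mentioned above.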

A New Type of Differential Fault Analysis on DES Algorithm (DES 알고리즘에 대한 새로운 차분오류주입공격 방법)

  • So, Hyun-Dong; Kim, Sung-Kyoung; Hong, Seok-Hie; Kang, Eun-Sook
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.6 / pp.3-13 / 2010
  • Differential Fault Analysis (DFA) is widely known as one of the most efficient methods for analyzing block ciphers. In this paper, we propose a new type of DFA on DES (Data Encryption Standard). DFA on DES was first introduced by Biham and Shamir, and Rivain recently introduced a DFA on the DES middle rounds (rounds 9-12); however, those previous attacks on DES can only be applied to the encryption process. We propose the first DFA on the DES key schedule, using random faults. The proposed method retrieves the key under a more practical fault model and requires fewer faults than previous DFA on DES.
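A schematic sketch of the DFA principle on a toy 4-bit, one-round cipher (deliberately not DES, and not the key-schedule attack itself): key guesses that cannot explain a correct/faulty ciphertext pair under a single-bit fault model are eliminated, and intersecting the survivors over several faults isolates the key.

    # Toy DFA: last-round model c = S[x] ^ k, with a single-bit fault
    # injected into the unknown internal state x before the S-box.
    import random

    S = [0xE,0x4,0xD,0x1,0x2,0xF,0xB,0x8,0x3,0xA,0x6,0xC,0x5,0x9,0x0,0x7]
    S_inv = [S.index(i) for i in range(16)]

    def last_round(x, k):
        return S[x] ^ k

    def candidates(c, c_faulty):
        """Round keys consistent with a 1-bit fault on the S-box input."""
        keys = set()
        for k in range(16):
            diff = S_inv[c ^ k] ^ S_inv[c_faulty ^ k]
            if bin(diff).count("1") == 1:   # difference matches fault model
                keys.add(k)
        return keys

    secret_k = 0xB
    surviving = set(range(16))
    for _ in range(8):                      # several faulted encryptions
        x = random.randrange(16)            # unknown internal state
        e = 1 << random.randrange(4)        # random single-bit fault
        surviving &= candidates(last_round(x, secret_k),
                                last_round(x ^ e, secret_k))
    print(surviving)   # shrinks toward {11}, i.e. the secret key 0xB

The true key always survives the consistency check, while wrong guesses survive only by chance, so the candidate set shrinks quickly as faults accumulate; the paper applies the same filtering idea to faults injected into the DES key schedule rather than the data path.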