• Title/Summary/Keyword: log Data Analysis

975 search results (processing time: 0.032 seconds)

A Comparative Study of Software Reliability Model Considering Log Type Mean Value Function (로그형 평균값함수를 고려한 소프트웨어 신뢰성모형에 대한 비교연구)

  • Shin, Hyun Cheul;Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management / v.10 no.4 / pp.19-27 / 2014
  • Software reliability is an important issue in the software development process, and software process improvement helps deliver a reliable software product. The infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes reliability models with log-type mean value functions (the Musa-Okumoto and log power models) and shows that they can be applied efficiently to software reliability. Parameters were estimated with the maximum likelihood estimator and the bisection method, and model selection was based on the mean square error (MSE) and the coefficient of determination ($R^2$). Failures were analyzed using a real data set to evaluate the proposed log-type mean value functions, and the Laplace trend test was employed to assure the reliability of the data. The study confirms that the log-type models are also efficient in terms of reliability (the coefficient of determination is 70% or more) and can be used as alternatives to conventional models in the field. Software developers should therefore consider such growth models, informed by prior knowledge of the software, to help identify failure modes.
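The mean value function and the two selection criteria named in the abstract can be written out directly. A minimal sketch in Python; the parameterization of the Musa-Okumoto function and all variable names are illustrative, not taken from the paper:

```python
import math

def musa_okumoto_mean(t, beta0, beta1):
    """Musa-Okumoto (logarithmic Poisson) mean value function:
    m(t) = beta0 * ln(1 + beta1 * t)."""
    return beta0 * math.log(1.0 + beta1 * t)

def laplace_factor(failure_times, horizon):
    """Laplace trend factor for failure times observed over [0, horizon].
    Negative values indicate reliability growth, positive values decay."""
    n = len(failure_times)
    mean_time = sum(failure_times) / n
    return (mean_time - horizon / 2.0) / (horizon * math.sqrt(1.0 / (12.0 * n)))

def mse(observed, predicted):
    """Mean square error, one of the model-selection criteria."""
    return sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)

def r_squared(observed, predicted):
    """Coefficient of determination R^2, the other selection criterion."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

A model with lower MSE and higher $R^2$ against the observed cumulative failure counts would be preferred, as in the paper's comparison.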

Separation Effect Analysis for Rainfall Data (강우자료의 분리효과)

  • 김양수;허준행
    • Water for future / v.26 no.4 / pp.73-83 / 1993
  • This study focuses on the separation effect analysis of rainfall data for the 2-parameter log-normal, 3-parameter log-normal, Type-I extreme value, 2-parameter gamma, 3-parameter gamma, log-Pearson Type-III, and general extreme value distribution functions. Differences between the relationship of the mean and standard deviation of skewness for the historical data and the relations derived from the seven distribution functions are analyzed using a Monte Carlo experiment. The results show that the rainfall data exhibit the separation effect for six of the distribution functions; the exception is the 3-parameter gamma distribution function.

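The kind of Monte Carlo experiment described above can be sketched for one parent distribution. A minimal sketch using a 2-parameter log-normal parent; the record lengths, parameter values, and function names are illustrative, not the paper's:

```python
import math
import random

def sample_skewness(x):
    """Sample skewness (biased moment estimator) of a sequence."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    s3 = sum((v - m) ** 3 for v in x) / n
    return s3 / s2 ** 1.5

def lognormal_record(n, mu, sigma, rng):
    """One synthetic record from a 2-parameter log-normal parent."""
    return [math.exp(rng.gauss(mu, sigma)) for _ in range(n)]

def skewness_moments(n_records=200, record_len=30, mu=0.0, sigma=0.5, seed=1):
    """Mean and standard deviation of skewness across synthetic records;
    comparing these against the historical values is the separation test."""
    rng = random.Random(seed)
    skews = [sample_skewness(lognormal_record(record_len, mu, sigma, rng))
             for _ in range(n_records)]
    mean_g = sum(skews) / len(skews)
    std_g = math.sqrt(sum((g - mean_g) ** 2 for g in skews) / len(skews))
    return mean_g, std_g
```

A distribution is said to show the separation effect when the simulated mean-versus-standard-deviation relationship of skewness cannot reproduce the one observed in the historical data.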
Quantitative Analysis of Coal Logging Data (석탄층 검층자료의 정량적 해석법 연구)

  • Kwon, Byung Doo;Son, Se Jo;Son, Jeong Woo
    • Economic and Environmental Geology / v.21 no.1 / pp.85-96 / 1988
  • Geophysical well logging was carried out at various coal fields to study the characteristic responses of domestic coal seams, and a computer program was developed for quantitative analysis of coal logging data. Most coal seams penetrated by the drill holes where logging was carried out showed poor thickness and quality and were severely altered; the majority of the log data are therefore inadequate for detailed quantitative analysis. The logs nevertheless show typical characteristics related to coal seams, but interpretation should be made with caution because certain log responses of domestic coals, mostly anthracite, are quite different from those of foreign coals, mostly bituminous. The developed computer program has proved effective for identification of coal seams and lithology analysis, and is expected to be used successfully for coal quality analysis once more diversified log data of good quality are obtained.

A New Analysis Method of the Consolidation Test Data for an Undisturbed Clay (불교란 점토 압밀시험 결과의 새로운 해석법)

  • 박종화;고우모또타쯔야
    • Magazine of the Korean Society of Agricultural Engineers / v.44 no.6 / pp.106-114 / 2002
  • In this study, the results of a series of consolidation tests on undisturbed Ariake clay in Japan were analyzed by three methods: e-log p (e: void ratio, p: consolidation pressure), log e-log p, and n-log p (n: porosity), and the characteristics of each analysis method were studied. For undisturbed Ariake clay, the log e-log p and n-log p relationships appear as two groups of straight lines of different gradients, whereas both the elastic and plastic consolidation regions of the e-log p relationship are expressed as a curve. In this paper, the porosity at consolidation yield $n_y$, the consolidation yield stress $p_y$, and the gradient of the plastic consolidation region $C_p$ were obtained by the log e-log p method, and $n_c$, $p_{cn}$, and $C_{cn}$ by the n-log p method. The meaning and relationships of each value were studied, and the interrelationships among the compression indices $C_{cn}$, $C_p$, and $C_c$ obtained from each analysis method were expressed as a function of the initial porosity $n_0$.
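Reading a gradient such as $C_p$ off the straight-line portion of a log e-log p plot is a least-squares fit in log-log space. A minimal sketch; the pressure/void-ratio values in the test are synthetic and the function names are illustrative, not the paper's:

```python
import math

def linear_fit(xs, ys):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def compression_index_log_e_log_p(pressures, void_ratios):
    """Gradient of the (assumed straight) plastic region in
    log e - log p space, reported as a positive index."""
    xs = [math.log10(p) for p in pressures]
    ys = [math.log10(e) for e in void_ratios]
    slope, _ = linear_fit(xs, ys)
    return -slope
```

The same fit applied to n-log p data would yield the corresponding $C_{cn}$-style gradient, and the intersection of the two fitted lines locates the consolidation yield point.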

XML-based Windows Event Log Forensic tool design and implementation (XML기반 Windows Event Log Forensic 도구 설계 및 구현)

  • Kim, Jongmin;Lee, DongHwi
    • Convergence Security Journal / v.20 no.5 / pp.27-32 / 2020
  • The Windows Event Log is a log that records the overall behavior of the system, and these files contain data from which various user behaviors and signs of anomalies can be detected. However, since an event log entry is generated for each action, analyzing the logs takes a considerable amount of time. Therefore, in this study, we designed and implemented an XML-based Event Log analysis tool based on the main Event Log list of "Spotting the Adversary with Windows Event Log Monitoring" published by the NSA.
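Windows events exported as XML carry their data in the `http://schemas.microsoft.com/win/2004/08/events/event` namespace, so an XML-based tool of this kind boils down to namespace-aware filtering. A minimal sketch with Python's standard library; the wrapper `<Events>` root and the particular watch list below are assumptions for illustration (4625 failed logon and 1102 audit-log-cleared are among the IDs the NSA guidance highlights):

```python
import xml.etree.ElementTree as ET

NS = {"ev": "http://schemas.microsoft.com/win/2004/08/events/event"}
# Illustrative subset of Event IDs worth watching.
WATCH_IDS = {"1102", "4625", "4648", "4719", "7045"}

def filter_events(xml_text, watch_ids=WATCH_IDS):
    """Return (event_id, time_created) pairs for watched events in an
    XML dump whose <Event> elements sit under a single <Events> root."""
    root = ET.fromstring(xml_text)
    hits = []
    for event in root.findall("ev:Event", NS):
        system = event.find("ev:System", NS)
        event_id = system.findtext("ev:EventID", default="", namespaces=NS)
        time_el = system.find("ev:TimeCreated", NS)
        when = time_el.get("SystemTime") if time_el is not None else ""
        if event_id in watch_ids:
            hits.append((event_id, when))
    return hits
```

Filtering on a curated ID list up front is what makes the per-action log volume tractable, which is the bottleneck the abstract identifies.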

A Study on the Intrusion Detection Method using Firewall Log (방화벽 로그를 이용한 침입탐지기법 연구)

  • Yoon, Sung-Jong;Kim, Jeong-Ho
    • Journal of Information Technology Applications and Management / v.13 no.4 / pp.141-153 / 2006
  • With the spread of high-speed Internet service, the importance of security is ever more strongly emphasized, and a flawless security solution is needed to block information leakage when data are sent or received. Large enterprises and public organizations can respond to this problem, but small organizations with a limited workforce and capital cannot, so they need to raise their level of information security by improving their existing systems without additional expense. No hacking attempt can succeed without passing the intrusion blocking system (firewall) installed at the very front of the network; therefore, if the firewall's blocking logs are managed effectively, hacking attempts can be recognized at the pre-detection stage. This paper helps information security managers carry out blocking log analysis effectively, and provides a blocking log analysis module that reports hacking attacks detected by analyzing the blocking logs.

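The pre-detection idea above amounts to aggregating the firewall's denied connections by source and flagging repeat offenders. A minimal sketch; the log-line format and threshold are hypothetical, since real firewall log formats vary by vendor:

```python
import re
from collections import Counter

# Hypothetical deny-log line format, e.g.:
# "2006-03-01 10:00:01 DENY TCP 10.0.0.5:4021 -> 192.168.0.2:22"
DENY_RE = re.compile(r"DENY\s+\w+\s+(?P<src>\d+\.\d+\.\d+\.\d+):\d+")

def suspicious_sources(lines, threshold=3):
    """Count denied connections per source IP and return those at or
    above the threshold as candidate hacking attempts."""
    counts = Counter()
    for line in lines:
        m = DENY_RE.search(line)
        if m:
            counts[m.group("src")] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

In practice the threshold would be tuned per site, and the flagged sources handed to the security manager for follow-up rather than blocked automatically.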
Frequency Distribution Characteristics of Formation Density Derived from Log and Core Data throughout the Southern Korean Peninsula (남한지역 검층밀도 자료의 특성 분석)

  • Kim, Yeonghwa;Kim, Ki Hwan;Kim, Jongman;Hwang, Se Ho
    • The Journal of Engineering Geology / v.25 no.2 / pp.281-290 / 2015
  • Log density data were collected throughout the southern Korean Peninsula and compared with core density data. The comparison reveals that the log densities obtained from the gamma-gamma log are much lower than the core densities obtained from laboratory measurements of core samples. These anomalously low log densities can be attributed to small-source density log data. Correlation analysis reveals differences between the densities derived from the two methods, indicating that a data quality problem arises when small-source log data are used. The problem is probably due to the fact that the small-source data were not obtained under ideal conditions for maintaining the appropriate relationship between gamma response and formation density. The frequency distribution characteristics of formation density in the southern Korean Peninsula could be determined using the core data and the standard-source log data, which are well correlated.
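The comparison described rests on two simple statistics: the correlation between the paired density estimates and their mean offset. A minimal sketch; the density values in the test are synthetic, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def mean_bias(log_densities, core_densities):
    """Average (log - core) density difference; a markedly negative
    value mirrors the under-reading the abstract describes."""
    n = len(log_densities)
    return sum(l - c for l, c in zip(log_densities, core_densities)) / n
```

A high correlation with a large negative bias would point to a systematic calibration problem (as with the small-source tool), whereas a low correlation points to data-quality scatter.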

MLE for Incomplete Contingency Tables with Lagrangian Multiplier

  • Kang, Shin-Soo
    • Journal of the Korean Data and Information Science Society / v.17 no.3 / pp.919-925 / 2006
  • Maximum likelihood estimates (MLE) are obtained from the partial log-likelihood function for the cell probabilities of the two-way incomplete contingency tables proposed by Chen and Fienberg (1974). The partial log-likelihood function is modified by adding a Lagrangian multiplier so that constraints can be incorporated. Variances of the MLE estimators of the population proportions are derived from the matrix of second derivatives of the log-likelihood with respect to the cell probabilities. Simulation results, when data are missing at random (MAR), reveal that complete-case (CC) analysis produces biased estimates of the joint probabilities and is less efficient than either MLE or multiple imputation (MI), while MLE and MI both provide consistent results. MLE provides more efficient estimates of the population proportions than either MI based on data augmentation or CC analysis, and the standard errors of the MLE from the proposed method using the Lagrangian multiplier are valid and show less variation than the standard errors from MI and CC.

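The Lagrangian-multiplier device is easiest to see in the fully classified core of the table, before the supplementary margins of the Chen-Fienberg setting are brought in. A minimal sketch of that special case only (the paper's full estimator, which also uses the partially classified counts, is more involved):

```python
def mle_cell_probs(counts):
    """MLE of cell probabilities for a fully classified two-way table.
    Adding the Lagrangian term lam * (1 - sum p_ij) to the log-likelihood
    sum n_ij * log(p_ij) and setting each derivative n_ij / p_ij - lam
    to zero gives p_ij = n_ij / lam; the constraint sum p_ij = 1 then
    forces lam = N, so p_ij = n_ij / N."""
    total = sum(sum(row) for row in counts)
    return [[c / total for c in row] for row in counts]
```

The same device generalizes: any linear constraint on the cell probabilities can be appended to the (partial) log-likelihood with its own multiplier before differentiating.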
A Log Analysis System with REST Web Services for Desktop Grids and its Application to Resource Group-based Task Scheduling

  • Gil, Joon-Min;Kim, Mi-Hye
    • Journal of Information Processing Systems / v.7 no.4 / pp.707-716 / 2011
  • It is important that desktop grids be able to deal aggressively with the dynamic properties that arise from the volatility and heterogeneity of resources; task scheduling must therefore take into account the execution behavior characterized by each individual resource. In this paper, we implement a log analysis system with REST web services that can analyze execution behavior using actual log data from desktop grid systems. To verify the log analysis system, we conducted simulations and showed that resource group-based task scheduling, based on the analysis of execution behavior, offers a faster turnaround time than the existing approach even when few resources are used.

Auto Configuration Module for Logstash in Elasticsearch Ecosystem

  • Ahmed, Hammad;Park, Yoosang;Choi, Jongsun;Choi, Jaeyoung
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.39-42 / 2018
  • Log analysis and monitoring are significantly important in most systems. Log management is of core importance in distributed applications, cloud-based applications, and applications designed for big data, which produce a large number of log files containing essential information. This information can be used in log analytics to extract relevant patterns from varying log data, but tools are needed to parse, store, and visualize the log information. "Elasticsearch, Logstash, and Kibana" (the ELK Stack) is one of the most popular tool sets for log management. For the ingestion of log files, configuration files are of key importance, as they cover all the services needed to input, process, and output the log files. However, creating configuration files can be complicated and time-consuming, as it requires domain expertise and manual authoring. In this paper, an auto-configuration module for Logstash is proposed that aims to auto-generate the Logstash configuration files. The primary purpose is to provide a mechanism for auto-generating the configuration files for the corresponding log files in less time, improving the overall efficiency of the log management system.
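The input/process/output structure the abstract mentions is what a generated Logstash configuration has to fill in. A minimal illustrative pipeline of the kind such a module would emit (the file path, grok pattern, and index name are placeholders, not from the paper):

```
input {
  file {
    path => "/var/log/app/*.log"        # hypothetical source path
    start_position => "beginning"
  }
}
filter {
  grok {
    # parse "2018-10-01T12:00:00 INFO message ..."
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    match => ["ts", "ISO8601"]          # use the parsed timestamp as @timestamp
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"  # daily index for Kibana
  }
}
```

Auto-generation mainly has to choose the grok pattern and date format to match each log file's layout; the surrounding input and output sections are largely boilerplate per deployment.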