• Title/Summary/Keyword: Log Data Analysis


Integrated approach using well data and seismic attributes for reservoir characterization

  • Kim Ji- Yeong;Lim Jong-Se;Shin Sung-Ryul
    • Korean Society of Earth and Exploration Geophysicists: Conference Proceedings
    • /
    • 2003.11a
    • /
    • pp.723-730
    • /
    • 2003
  • In general, well log and core data have been utilized for reservoir characterization. These well data provide valuable information on reservoir properties with high vertical resolution, but only at well locations, whereas seismic surveys cover a large area of the field yet give only indirect indications of reservoir properties. If a relationship between seismic data and well data can be defined, it therefore becomes possible to estimate reservoir properties over the entire area, guided by the seismic data. Seismic attributes calculated from seismic surveys contain particular reservoir features, so they should be extracted and used appropriately according to the purpose of the study, and a method is needed to select suitable attributes from among the enormous number available. Stepwise regression and fuzzy curve analysis, based on fuzzy logic, are used to select the best attributes, and the resulting relationship can be utilized to estimate reservoir properties from seismic attributes. This methodology is applied to a synthetic seismogram and a sonic log generated from a velocity model. The seismic attributes calculated from the seismic data are reflection strength, instantaneous phase, instantaneous frequency, and pseudo-sonic log data, as well as the seismic trace itself. Fuzzy curve analysis is used to choose the seismic attributes that best match the sonic log; the seismic trace, reflection strength, instantaneous frequency, and pseudo-sonic log data are selected. The relationship between these seismic attributes and the well data is found by statistical regression and is used to estimate reliable well data at a specific field location from seismic attributes alone. In future work, the applicability of the methodology should be verified on real fields with more complex and varied reservoir features.
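The fuzzy curve analysis mentioned above can be sketched as follows. For each candidate attribute, a Gaussian fuzzy membership is centred at every sample, and the fuzzy curve is the membership-weighted mean of the target (sonic log) values; the larger the range of that curve, the stronger the attribute's influence. The toy attribute values, membership width, and sonic log below are illustrative assumptions, not the paper's velocity-model data:

```python
import math

def fuzzy_curve_range(xs, ys, b=None):
    """Score one candidate attribute by the range of its fuzzy curve.

    A Gaussian fuzzy membership phi_k(x) is centred at each sample x_k;
    the fuzzy curve c(x) is the membership-weighted mean of the target
    values y_k.  A larger range of c indicates a more relevant attribute.
    """
    if b is None:
        b = 0.2 * (max(xs) - min(xs)) or 1.0  # assumed width: 20% of span
    def c(x):
        w = [math.exp(-((x - xk) / b) ** 2) for xk in xs]
        return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    vals = [c(x) for x in xs]
    return max(vals) - min(vals)

# Toy data: 'attr_a' tracks the sonic log, 'noise' does not.
sonic  = [1.0, 2.0, 3.0, 4.0, 5.0]
attr_a = [0.9, 2.1, 2.9, 4.2, 5.1]   # informative attribute
noise  = [3.0, 3.1, 2.9, 3.0, 3.1]   # uninformative attribute

rank = sorted([("attr_a", fuzzy_curve_range(attr_a, sonic)),
               ("noise",  fuzzy_curve_range(noise,  sonic))],
              key=lambda t: -t[1])
print(rank[0][0])  # the informative attribute ranks first
```

In practice each attribute would be ranked this way against the well data, and only the top-ranked attributes passed on to the regression step.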


Systematic Analysis of Microbial Contamination in Leaf and Stem Products in Korea (Systematic analysis 방법을 이용한 국내 엽경채류 농산물의 미생물학적 오염도 분석)

  • Sung, Seung-Mi;Min, Ji-Hyeon;Kim, Hyun Jung;Yoon, Ki-Sun;Lee, Jong-Kyung
    • Journal of Food Hygiene and Safety
    • /
    • v.32 no.4
    • /
    • pp.306-313
    • /
    • 2017
  • This study systematically analyzed data on microbial levels in fresh vegetables in Korea to identify the points to control. We screened peer-reviewed research papers published between 2001 and 2015 on microbial levels in fresh vegetables produced in Korea. Plant products were categorized using the US IFSAC (Interagency Food Safety Analytics Collaboration) categories. The most consumed products, those served without heat treatment, and the epidemiological sources of foodborne disease from fresh vegetables in foodservice (KCDC data) were identified by literature review. Articles on microbial hazards in plant products were retrieved using the National Digital Science Library (NDSL) search engine. Based on 89 data cases on total plate counts and coliforms from 26 published articles, total plate counts were high in the order of sprouts; leaf and stem; bulbs and roots; vine-grown; solanaceous; melons; and pome products. Escherichia coli was frequently detected in leaf-and-stem and sprout products. Focusing on the microbial data for leek, lettuce, and cabbage from 33 published papers, the levels of total plate counts, coliforms, and Bacillus cereus were 4.15~7.69 log CFU/g, 1~6.99 log CFU/g, and 0.51~3.9 log CFU/g, respectively. The environmental factors affecting the microbial safety of lettuce and leek before harvest were also investigated: manure, soil, hands, scales, and gloves were the major potential microbial contamination points to control. In addition, GAP (good agricultural practices), microbial testing, and improved irrigation methods are required to provide safer fresh produce.

XML-based Modeling for Semantic Retrieval of Syslog Data (Syslog 데이터의 의미론적 검색을 위한 XML 기반의 모델링)

  • Lee Seok-Joon;Shin Dong-Cheon;Park Sei-Kwon
    • The KIPS Transactions: Part D
    • /
    • v.13D no.2 s.105
    • /
    • pp.147-156
    • /
    • 2006
  • Event logging plays an increasingly important role in system and network management, and syslog is the de facto standard for logging system events. However, due to the semi-structured nature of Common Log Format data, most studies on log analysis focus on frequent patterns. The eXtensible Markup Language (XML) can provide a good representation scheme for structuring and searching the formatted data found in syslog messages. However, previous XML-formatted schemes and applications for system logging are not suitable for semantic approaches such as ranking-based search or similarity measurement over log data. In this paper, building on ranked keyword search techniques over XML documents, we propose an XML tree structure derived from a new data modeling approach for syslog data. Finally, we show the suitability of the proposed structure for semantic retrieval.
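The idea of modeling a syslog message as an XML tree so that keyword search can address its fields separately can be sketched as below. The element names and the RFC 3164-style parsing are illustrative assumptions; the paper's actual schema is not reproduced here:

```python
import xml.etree.ElementTree as ET

def syslog_to_xml(raw):
    """Model one RFC 3164-style syslog line as a small XML tree so that
    ranked keyword search can address facility, host, and message
    separately (element names are hypothetical)."""
    # e.g. "<34>Oct 11 22:14:15 mymachine su: 'su root' failed ..."
    pri_end = raw.index(">")
    pri = int(raw[1:pri_end])              # PRI = facility*8 + severity
    rest = raw[pri_end + 1:]
    timestamp = rest[:15]                  # fixed-width BSD timestamp
    host, message = rest[16:].split(" ", 1)

    entry = ET.Element("syslog-entry")
    ET.SubElement(entry, "facility").text = str(pri // 8)
    ET.SubElement(entry, "severity").text = str(pri % 8)
    ET.SubElement(entry, "timestamp").text = timestamp
    ET.SubElement(entry, "host").text = host
    ET.SubElement(entry, "message").text = message
    return entry

tree = syslog_to_xml(
    "<34>Oct 11 22:14:15 mymachine su: 'su root' failed on /dev/pts/8")
print(ET.tostring(tree, encoding="unicode"))
```

With such a tree, a ranked keyword query can score matches in `host` differently from matches in `message`, which flat log lines do not allow.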

Dropout Prediction Modeling and Investigating the Feasibility of Early Detection in e-Learning Courses (일반대학에서 교양 e-러닝 강좌의 중도탈락 예측모형 개발과 조기 판별 가능성 탐색)

  • You, Ji Won
    • The Journal of Korean Association of Computer Education
    • /
    • v.17 no.1
    • /
    • pp.1-12
    • /
    • 2014
  • Since students' behaviors during e-learning are automatically stored in the LMS (Learning Management System), LMS log data convey valuable information about student engagement. The purpose of this study is to develop a prediction model of e-learning course dropout by utilizing LMS log data. Log data of 578 college students who registered for e-learning courses at a traditional university were used for logistic regression analysis. The results showed that attendance and study time were significant predictors of dropout, and the model classified dropouts and completers of e-learning courses with 96% accuracy. Furthermore, the feasibility of the early detection of dropouts using the model is discussed.
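A logistic regression of dropout on LMS features like those named in the abstract can be sketched in a few lines. The feature values (attendance rate, weekly study hours) and labels below are a tiny invented data set, not the paper's 578-student sample:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by stochastic gradient descent on log-loss."""
    w = [0.0] * (len(X[0]) + 1)             # bias + one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted dropout probability
            g = p - yi                      # gradient of log-loss w.r.t. z
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy data: 1 = dropout; low attendance / study time -> dropout.
X = [[0.9, 5.0], [0.8, 4.0], [0.95, 6.0], [0.3, 1.0], [0.2, 0.5], [0.4, 1.5]]
y = [0, 0, 0, 1, 1, 1]
w = train_logistic(X, y)
acc = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(acc)
```

Because the predictors are available from the first weeks of a course, the same fitted model can be applied to partial-semester logs, which is the early-detection idea the paper explores.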


Analysis of Network Log based on Hadoop (하둡 기반 네트워크 로그 시스템)

  • Kim, Jeong-Joon;Park, Jeong-Min;Chung, Sung-Taek
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.5
    • /
    • pp.125-130
    • /
    • 2017
  • Since field control equipment such as PLCs has no function for recording key event information in a log, accident analysis is difficult. It is therefore necessary to secure information that makes cyber incidents analyzable by logging the main event information of field control equipment such as PLCs and IEDs. A protocol analyzer is required to analyze the communication protocols of these embedded field control devices for event logging. However, conventional analyzers such as Wireshark have difficulty identifying and extracting data from the large variety of such protocols, and payload-based analysis and classification for event logging are difficult. In this paper, we develop a Hadoop-based big data system that extracts payload data from field control device communication protocols for large-scale event logging.
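The payload-extraction step can be illustrated with Modbus/TCP, a common PLC communication protocol (the paper targets field-device protocols generally; this specific frame layout is an example, not the system's implementation). Modbus/TCP frames begin with a 7-byte MBAP header whose length field counts the unit-identifier byte plus the PDU that follows:

```python
import struct

def parse_mbap(frame):
    """Extract the payload (PDU) of a Modbus/TCP frame.

    MBAP header: transaction id (2B), protocol id (2B), length (2B),
    unit id (1B); 'length' covers the unit-id byte plus the PDU."""
    tid, pid, length, uid = struct.unpack(">HHHB", frame[:7])
    pdu = frame[7:7 + length - 1]          # PDU = function code + data
    return {"transaction": tid, "unit": uid,
            "function": pdu[0], "data": pdu[1:]}

# Read Holding Registers request: function 0x03, start addr 0, count 2.
frame = struct.pack(">HHHB", 1, 0, 6, 17) + bytes([0x03, 0x00, 0x00, 0x00, 0x02])
rec = parse_mbap(frame)
print(rec["function"])  # 3
```

Records like `rec` are what would be written out for event logging; scaling the extraction over captured traffic is where the Hadoop layer comes in.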

Automatic Electrofacies Classification from Well Logs Using Multivariate Statistical Techniques (다변량 통계 기법을 이용한 물리검층 자료로부터의 암석물리학상 결정)

  • Lim Jong-Se;Kim Jungwhan;Kang Joo-Myung
    • Geophysics and Geophysical Exploration
    • /
    • v.1 no.3
    • /
    • pp.170-175
    • /
    • 1998
  • A systematic methodology is developed for predicting lithology through electrofacies classification from wireline log data. Multivariate statistical techniques are adopted to segment well log measurements and group the segments into electrofacies types. To account for the contribution of each log and to reduce the computational dimension, the multivariate logs are transformed into a single variable through principal component analysis. The resulting principal component logs are segmented using a statistical zonation method to enhance the quality and efficiency of the interpreted results. Hierarchical cluster analysis is then used to group the segments into electrofacies. The optimal number of groups is determined on the basis of the ratio of within-group variance to total variance, together with core data. The technique is applied to wells on the Korean continental shelf. The results of the field application demonstrate that lithology prediction based on electrofacies classification agrees reliably with the core and cutting data. This methodology for electrofacies determination can be used for reservoir characterization, which is helpful for reservoir management.
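The hierarchical clustering step can be sketched as single-linkage agglomeration of already-segmented log intervals, each summarised here by one principal-component mean. The segment values and the choice of single linkage are illustrative assumptions, not the paper's Korea Continental Shelf data or exact linkage rule:

```python
def single_linkage(points, n_clusters):
    """Agglomerate 1-D points bottom-up, always merging the two clusters
    whose closest members are nearest, until n_clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)     # merge the closest pair
    return sorted(clusters, key=min)

# Hypothetical PC-1 means of zonation segments; three natural groups.
segments = [0.1, 0.15, 0.9, 1.0, 2.4, 2.5, 2.45]
groups = single_linkage(segments, 3)
print(groups)
```

The number of groups (here fixed at 3) would in practice be chosen by the within-group-to-total variance ratio and checked against core data, as the abstract describes.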


The Comparative Study of NHPP Software Reliability Model Based on Log and Exponential Power Intensity Function (로그 및 지수파우어 강도함수를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.8 no.6
    • /
    • pp.445-452
    • /
    • 2015
  • Software reliability is an important issue in the software development process, and software process improvement helps deliver a reliable software product. The infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes reliability models with log-type and power-type intensity functions (log-linear, log-power, and exponential-power), which are efficient in software reliability applications. Maximum likelihood estimation with the bisection method was used to estimate the model parameters, and model selection was based on the mean squared error (MSE) and the coefficient of determination ($R^2$). A real failure data set was analyzed to evaluate the proposed log- and power-intensity functions, and the results for the two function types were compared. To assure the reliability of the data, the Laplace trend test was employed. The study confirms that the log-type model is also efficient in terms of reliability (its coefficient of determination is 70% or higher) and can be used as an alternative to the conventional models in this field. Software developers should therefore use prior knowledge of the software, together with such growth models, to identify failure modes.
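The MSE and $R^2$ model-selection criteria named above can be sketched for a log-type NHPP mean value function $m(t) = a\ln(1 + bt)$ (a Musa-Okumoto-style form used here for illustration). The cumulative failure counts and the parameter values below are invented, and the parameters are simply assumed rather than obtained by the paper's MLE/bisection procedure:

```python
import math

# Observed cumulative failure counts at times t (illustrative data).
t_obs = [1, 2, 3, 4, 5, 6, 7, 8]
n_obs = [5, 9, 12, 14, 16, 17, 18, 19]

def m(t, a=11.4, b=0.55):                  # assumed (not MLE) parameters
    """Log-type NHPP mean value function m(t) = a*ln(1 + b*t)."""
    return a * math.log(1.0 + b * t)

pred = [m(t) for t in t_obs]
mse = sum((n - p) ** 2 for n, p in zip(n_obs, pred)) / len(n_obs)

mean_n = sum(n_obs) / len(n_obs)
ss_tot = sum((n - mean_n) ** 2 for n in n_obs)
ss_res = sum((n - p) ** 2 for n, p in zip(n_obs, pred))
r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
print(mse, r2)
```

Competing intensity forms (log-linear, log-power, exponential-power) would each be fitted and the one with the lowest MSE and highest $R^2$ preferred, mirroring the paper's comparison.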

Extreme Value Analysis of Metocean Data for Barents Sea

  • Park, Sung Boo;Shin, Seong Yun;Shin, Da Gyun;Jung, Kwang Hyo;Choi, Yong Ho;Lee, Jaeyong;Lee, Seung Jae
    • Journal of Ocean Engineering and Technology
    • /
    • v.34 no.1
    • /
    • pp.26-36
    • /
    • 2020
  • An extreme value analysis of metocean data, including wave, wind, and current data, is a prerequisite for the operation and survival of offshore structures. The purpose of this study was to provide return values of waves, winds, and currents for the Barents Sea using extreme value analysis. Hindcast data sets of waves, winds, and currents from the Global Reanalysis of Ocean Waves 2012 (GROW2012) were obtained from Oceanweather Inc. The Gumbel distribution, the two- and three-parameter Weibull distributions, and the log-normal distribution were used for the extreme value analysis, and the least squares method was used to estimate the parameters of each distribution. The return values, including the significant wave height, spectral peak wave period, wind speed, and surface current speed, were calculated and will be utilized in the design of offshore structures to be operated in the Barents Sea.
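The least-squares Gumbel fit and return-value calculation can be sketched as follows: order the annual maxima, assign Weibull plotting positions, regress the data on the reduced Gumbel variate, and read off the value for a chosen return period. The wave heights below are illustrative, not the GROW2012 hindcast:

```python
import math

# Hypothetical annual-maximum significant wave heights (m).
hs_annual_max = [7.2, 8.1, 6.9, 9.0, 7.7, 8.4, 7.5, 8.8, 7.0, 8.2]

def gumbel_fit_ls(sample):
    """Least-squares fit of a Gumbel distribution: x = mu + beta*y,
    where y = -ln(-ln F) is the reduced variate."""
    xs = sorted(sample)
    n = len(xs)
    ys = [-math.log(-math.log(i / (n + 1.0))) for i in range(1, n + 1)]
    ym, xm = sum(ys) / n, sum(xs) / n
    beta = (sum((y - ym) * (x - xm) for y, x in zip(ys, xs))
            / sum((y - ym) ** 2 for y in ys))
    mu = xm - beta * ym
    return mu, beta

def return_value(mu, beta, T):
    """Value exceeded on average once every T years (F = 1 - 1/T)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

mu, beta = gumbel_fit_ls(hs_annual_max)
h100 = return_value(mu, beta, 100.0)       # 100-year significant wave height
print(round(h100, 2))
```

The same plotting-position regression carries over to the Weibull and log-normal candidates after the appropriate variable transformations, which is how the distributions in the study can be compared on a common footing.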

Analysis of Bioequivalence Study using a Log-transformed Model (로그변환 모델에 따른 생물학적 동등성 판정 연구)

  • 이영주;김윤균;이명걸;정석재;이민화;심창구
    • YAKHAK HOEJI
    • /
    • v.44 no.4
    • /
    • pp.308-314
    • /
    • 2000
  • Logarithmic transformation of pharmacokinetic parameters is routinely used in bioequivalence studies, on pharmacokinetic and statistical grounds, by the United States Food and Drug Administration (FDA), the European Committee for Proprietary Medicinal Products (CPMP), and the Japanese National Institute of Health Sciences (NIHS). Although it has not yet been recommended by the Korea Food and Drug Administration (KFDA), its use is becoming increasingly necessary for harmonization with international standards. In the present study, statistical procedures for bioequivalence analysis based on log transformation, along with a related SAS procedure, are demonstrated to aid understanding and application. The AUC parameters used in this demonstration were taken from a previous bioequivalence study of two aceclofenac tablets, performed in a single-dose crossover design. Analysis of variance (ANOVA), the statistical power to detect a 20% difference between the tablets, the minimum detectable difference, and confidence intervals were all assessed following log transformation of the data. The bioequivalence of the two aceclofenac tablets was then evaluated based on the FDA guideline. Considering the international effort toward harmonization of bioequivalence test guidelines, this approach may warrant further evaluation for future adoption in the Korean Guidelines for Bioequivalence Tests (KGBT).
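The core of the log-transformation approach, a 90% confidence interval for the test/reference ratio of geometric AUC means, can be sketched on paired data. This simplifies the paper's full crossover ANOVA to paired differences of ln(AUC); the AUC values are invented and the t critical value t(0.95, df=7) is a tabulated constant:

```python
import math

# Hypothetical AUC values for reference and test formulations (n = 8).
auc_ref  = [102.0, 95.0, 110.0, 98.0, 105.0, 99.0, 107.0, 101.0]
auc_test = [100.0, 97.0, 108.0, 96.0, 104.0, 102.0, 105.0, 100.0]

# Differences on the log scale correspond to ratios on the original scale.
d = [math.log(t) - math.log(r) for t, r in zip(auc_test, auc_ref)]
n = len(d)
mean_d = sum(d) / n
sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
se = sd / math.sqrt(n)
t_crit = 1.895                    # t(0.95, df = 7), from standard tables

# Exponentiating the CI for mean_d gives a CI for the geometric-mean ratio.
lo = math.exp(mean_d - t_crit * se)
hi = math.exp(mean_d + t_crit * se)
bioequivalent = 0.8 <= lo and hi <= 1.25   # FDA log-scale acceptance limits
print(round(lo, 3), round(hi, 3), bioequivalent)
```

The 0.8–1.25 limits are the standard FDA acceptance range on the log scale; the full crossover analysis additionally separates period and sequence effects via ANOVA.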


A Text Mining-based Intrusion Log Recommendation in Digital Forensics (디지털 포렌식에서 텍스트 마이닝 기반 침입 흔적 로그 추천)

  • Ko, Sujeong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.2 no.6
    • /
    • pp.279-290
    • /
    • 2013
  • In digital forensics, log files are stored as large data sets for the purpose of tracing users' past behavior, and it is difficult for investigators to analyze such large log data manually without clues. In this paper, we propose a text mining technique that extracts intrusion logs from a large log set in order to recommend reliable evidence to investigators. In the training stage, the proposed method extracts intrusion-associated words from a training log set using the Apriori algorithm after preprocessing, and the probability of intrusion for each associated word is computed by combining support and confidence. Robinson's method of computing confidences, originally used for filtering spam mail, is applied to extracting intrusion logs. As a result, an associated-word knowledge base is constructed that includes the weighted intrusion probabilities of the associated words, improving accuracy. In the test stage, the probabilities that logs in a test set are intrusion logs or normal logs are computed with Fisher's inverse chi-square classification algorithm over the associated-word knowledge base, and intrusion logs are extracted by combining the results; these logs are then recommended to investigators. The proposed method trains on a clear analysis of the meaning of the data in an unstructured, large log set, compensating for the loss of accuracy caused by data ambiguity. In addition, because intrusion logs are recommended using Fisher's inverse chi-square classification, both the false positive (FP) rate and the laborious effort of extracting evidence manually are reduced.
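Fisher's inverse chi-square combining, as popularized by Robinson for spam filtering and applied here in spirit to intrusion logs, can be sketched as follows. Per-word intrusion probabilities are combined through the chi-square upper-tail function (computable in closed form for even degrees of freedom) into one score in [0, 1]; the word probabilities below are illustrative, not the paper's knowledge base:

```python
import math

def chi2q(x, df):
    """Upper-tail probability of a chi-square variate with even df."""
    m = x / 2.0
    term = total = math.exp(-m)
    for i in range(1, df // 2):
        term *= m / i
        total += term
    return min(total, 1.0)

def combined_indicator(probs):
    """Fisher-combine per-word intrusion probabilities into one score.

    S is evidence the log is an intrusion, H evidence it is normal;
    (1 + S - H) / 2 maps the balance into [0, 1]."""
    n = len(probs)
    s_stat = -2.0 * sum(math.log(p) for p in probs)
    h_stat = -2.0 * sum(math.log(1.0 - p) for p in probs)
    S = chi2q(s_stat, 2 * n)
    H = chi2q(h_stat, 2 * n)
    return (1.0 + S - H) / 2.0

intrusive = [0.97, 0.92, 0.88, 0.95]   # words associated with attacks
normal    = [0.05, 0.10, 0.08, 0.04]   # words typical of routine logs
print(round(combined_indicator(intrusive), 3),
      round(combined_indicator(normal), 3))
```

Scores near 1 flag a log line for recommendation to the investigator, while scores near 0.5 indicate ambiguous evidence, which is exactly where the chi-square formulation is more informative than a naive product of probabilities.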