• Title/Summary/Keyword: Log management

A rule based file management tool for facility log files integration in disaster environments

  • Lee, Young-Geol;Lee, Younlae;Kim, Hyunah;Jang, Yeonyi;Park, Minjae
    • Journal of Internet Computing and Services
    • /
    • v.19 no.6
    • /
    • pp.73-82
    • /
    • 2018
  • Files are often duplicated in complex ways across computing environments, whether on an end-user's desktop or in a professional file-management setting. We suggest one way to manage these cases of duplicate files: a rule-based file management tool, which we apply to consolidating facility-management log files. In this paper, we describe the rule-based file management tool and show examples of how files are managed with it. We are interested in the management of disaster environments and apply this method to the management of log data for facilities that must be considered in the event of a disaster.
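
The consolidation idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's tool: files are grouped by content hash so that duplicate copies of the same log can be identified for rule-based handling. The hashing and grouping scheme here is an assumption about one plausible rule.

```python
import hashlib
from collections import defaultdict

def content_hash(path, chunk=65536):
    """Hash a file's contents in chunks so large logs fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def group_duplicates(paths):
    """Group paths whose contents are byte-identical; return only groups
    with more than one member (i.e., actual duplicates)."""
    groups = defaultdict(list)
    for p in paths:
        groups[content_hash(p)].append(p)
    return [g for g in groups.values() if len(g) > 1]
```

A rule engine could then decide, per group, which copy to keep (e.g., by path pattern or age) and which to archive or delete.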

A Non-fixed Log Area Management Technique in Block for Flash Memory DBMS (플래시메모리 DBMS를 위한 블록의 비고정적 로그 영역 관리 기법)

  • Cho, Bye-Won;Han, Yong-Koo;Lee, Young-Koo
    • Journal of KIISE:Databases
    • /
    • v.37 no.5
    • /
    • pp.238-249
    • /
    • 2010
  • Flash memory has been studied as a storage medium to improve system performance, thanks to its fast access speed, in the DBMS field, where frequent data access is needed. The main difficulty in using flash memory is the performance degradation and shortened life span caused by inefficient in-place updates. Log-based approaches have been studied to solve the inefficient in-place update problem in DBMSs, where write operations frequently touch data smaller than a page. However, existing log-based approaches suffer from frequent merge operations, which are the principal cause of performance deterioration. This is because their fixed log area management cannot guarantee sufficient space for logs. In this paper, we propose a non-fixed log area management technique that minimizes merge operations by securing enough space for logs. We also present a cost model for calculating the optimal number of log sectors in a block, minimizing the system operation cost. Experiments show that our non-fixed log area management technique improves performance compared to existing approaches.
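
The shape of such a cost model can be sketched as follows. The specific cost terms below (merges become rarer as the log area grows, while reads must scan a larger log area) are simplified assumptions for illustration, not the authors' actual equations.

```python
def block_cost(log_sectors, sectors_per_block=64,
               merge_cost=100.0, scan_cost=10.0, updates_per_period=32):
    """Illustrative per-block cost as a function of the log-area size.

    More log sectors -> fewer merge operations (the log fills less often),
    but a higher per-read scan cost. Infeasible sizes cost infinity.
    """
    if not 0 < log_sectors < sectors_per_block:
        return float("inf")
    merges = updates_per_period / log_sectors   # log fills trigger merges
    scan = scan_cost * log_sectors              # reads must scan the log area
    return merges * merge_cost + scan

def optimal_log_sectors(sectors_per_block=64, **kw):
    """Pick the log-area size minimizing the modeled block cost."""
    return min(range(1, sectors_per_block),
               key=lambda k: block_cost(k, sectors_per_block, **kw))
```

The point of the sketch is the trade-off structure: the cost is convex in the log-area size, so an interior optimum exists and can be found (here by brute force, analytically in a real model).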

Design of Log Management System based on Document Database for Big Data Management (빅데이터 관리를 위한 문서형 DB 기반 로그관리 시스템 설계)

  • Ryu, Chang-ju;Han, Myeong-ho;Han, Seung-jo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.11
    • /
    • pp.2629-2636
    • /
    • 2015
  • Interest in Big Data management has recently risen rapidly in the IT field, and much research addresses the problem of processing Big Data in real time. Storing data in real time over the network requires substantial resources, and the high cost of introducing an analysis system is a barrier. The need to redesign such systems for low cost and high efficiency has therefore been growing. In this paper, a document-oriented database, MongoDB, which is well suited to managing big data, is used to design a log management system. Performance evaluation shows that the suggested log management system is more efficient than other methods at log collection and processing, and that it is robust against data forgery.
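
To make the document-database idea concrete, the sketch below shows the kind of schema-free log document such a system might store, plus a predicate equivalent to an exact-match MongoDB `find()` filter. No MongoDB server is involved here; the field names (`host`, `level`, `msg`, `ts`) are illustrative assumptions, not the paper's schema.

```python
import datetime

def make_log_doc(host, level, msg, ts=None):
    """Build a schema-free log document, as one might insert into a
    document store such as MongoDB."""
    return {
        "host": host,
        "level": level,
        "msg": msg,
        "ts": ts or datetime.datetime.utcnow().isoformat(),
    }

def match(doc, query):
    """Minimal subset of MongoDB query semantics: exact-match fields only."""
    return all(doc.get(k) == v for k, v in query.items())

def find(collection, query):
    """Filter an in-memory list of documents the way find() filters a
    collection (exact-match queries only)."""
    return [d for d in collection if match(d, query)]
```

Because documents are schema-free, heterogeneous log sources can add fields freely without migrations, which is one reason document stores suit log aggregation.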

Analysis of Pathogenic Microorganism's Contamination on Cultivation Environment of Strawberry and Tomato in Korea

  • Oh, Soh-Young;Nam, Ki-Woong;Kim, Won-Il;Lee, Mun Haeng;Yoon, Deok-Hoon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.47 no.6
    • /
    • pp.510-517
    • /
    • 2014
  • The purpose of this study was to analyze microbial hazards in the cultivation environments and personal hygiene of strawberry and tomato farms at the growth and harvesting stages. Samples were collected from thirty strawberry farms and forty tomato farms in Korea and tested for Staphylococcus aureus and Bacillus cereus. To investigate changes in the distribution of S. aureus and B. cereus, a total of 4,284 samples, including airborne samples, soil or medium, mulching film, harvest baskets, gloves, and irrigation water, were collected from eight strawberry farms and nine tomato farms over one year. S. aureus and B. cereus were detected across the samples. On strawberry farms, S. aureus (gloves: 0~2.1 log CFU/100 cm², harvest baskets: 0~3.0 log CFU/100 cm², soil or culture media: 0~4.1 log CFU/g, mulching film: 0~3.8 log CFU/100 cm²) and B. cereus (gloves: 0~2.8 log CFU/100 cm², harvest baskets: 0~4.8 log CFU/100 cm², soil or culture media: 0~5.3 log CFU/g, mulching film: 0~4.5 log CFU/100 cm²) were detected. On tomato farms, S. aureus (gloves: 0~4.0 log CFU/100 cm², harvest baskets: 0~5.0 log CFU/100 cm², soil or culture media: 0~6.1 log CFU/g, mulching film: 0~4.0 log CFU/100 cm²) and B. cereus (gloves: 0~4.0 log CFU/100 cm², harvest baskets: 0~4.3 log CFU/100 cm², soil or culture media: 0~5.9 log CFU/g, mulching film: 0~4.7 log CFU/100 cm²) were detected. Contamination by S. aureus and B. cereus was found in soil, mulching film, and harvest baskets from planting through harvest and processing, with the highest counts recorded in soil, but neither was detected in irrigation water samples. The incidence of S. aureus and B. cereus on hydroponic farms was lower than in soil culture. The amounts of S. aureus and B. cereus detected on strawberry and tomato farms were below the minimum required to produce the toxins that induce food poisoning. Thus, contamination by food-poisoning bacteria was low in the production environment of Korean strawberries and tomatoes, but problems can still arise from post-harvest management practices. These results will be used as fundamental data for a manual on sanitary agricultural environment management, and post-harvest management should be performed to reduce contamination by hazardous microorganisms.

Biological Hazard Analysis of Angelica gigas Nakai on Production and Marketing Steps (당귀의 재배 및 유통과정 중 생물적 위해요소 분석)

  • Park, Kyeong-Hun;Kim, Byeong-Seok;Lee, Jeong-Ju;Yun, Hye-Jeong;Kim, Se-Ri;Kim, Won-Il;Yun, Jong-Chul;Ryu, Kyoung-Yul
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.6
    • /
    • pp.1216-1221
    • /
    • 2012
  • This study investigated the microbiological contamination of Angelica gigas Nakai. A total of 111 samples, including root, soil, and irrigation water, were collected from farms and markets and tested for aerobic bacteria, Bacillus cereus, coliforms, Escherichia coli, Listeria monocytogenes, Salmonella spp., and Staphylococcus aureus. Contamination of the root by aerobic bacteria, coliforms, and Bacillus cereus during cultivation was 6.71 log CFU/g, 4.13 log CFU/g, and 3.54 log CFU/g, respectively. Coliform and B. cereus contamination was detected in all steps from harvesting to processing, with the highest count recorded at the cutting step. In marketing, the contamination levels of aerobic bacteria, coliforms, and B. cereus were 5.5~6.0 log CFU/g, 2.4~2.6 log CFU/g, and 3.5~4.0 log CFU/g, respectively. Listeria monocytogenes, Salmonella spp., and Staphylococcus aureus were not detected in any of the samples. These results indicate that hygienic soil management and post-harvest management should be performed to reduce contamination by hazardous microorganisms and to produce safe agro-products.

A Digital Forensic Method for File Creation using Journal File of NTFS File System (NTFS 파일 시스템의 저널 파일을 이용한 파일 생성에 대한 디지털 포렌식 방법)

  • Kim, Tae Han;Cho, Gyu Sang
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.6 no.2
    • /
    • pp.107-118
    • /
    • 2010
  • This paper proposes a digital forensic method for analyzing file creation transactions using the journal file ($LogFile) of the NTFS file system. The journal file contains a great deal of information that can help recover the file system after a system failure, so knowledge of its structure is very helpful for forensic analysis. The structure of the journal file, however, is not officially documented. We determined the journal file structure by analyzing the structure of its log records through reverse engineering. We show a digital forensic procedure for extracting information from the log records of a sample file created on an NTFS volume. The related log records include: bitmap and segment allocation information of the MFT entry, index entry allocation information, and resident value update information ($FILE_NAME, $STANDARD_INFORMATION, and $INDEX_ALLOCATION attributes, etc.).
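
Since the $LogFile record layout is undocumented, any parser must be built against a reverse-engineered layout. The sketch below shows only the general shape of such a record walker; the 16-byte header (magic, record length, opcode) is a purely hypothetical stand-in, not the actual NTFS structure.

```python
import struct

# Hypothetical fixed record header: 4-byte magic, 4-byte total record
# length, 8-byte opcode. A real $LogFile parser would substitute the
# offsets recovered by reverse engineering.
HEADER = struct.Struct("<4sIQ")

def walk_records(buf):
    """Walk variable-length records laid out back to back in a buffer,
    stopping at the first record whose header fails validation."""
    records, off = [], 0
    while off + HEADER.size <= len(buf):
        magic, length, opcode = HEADER.unpack_from(buf, off)
        if magic != b"RCRD" or length < HEADER.size:
            break  # corrupt or past the last valid record
        records.append((opcode, buf[off + HEADER.size: off + length]))
        off += length
    return records
```

The validate-then-advance loop is the part that carries over to real journal parsing: each record's header both identifies it and tells the walker where the next record begins.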

Nonparametric Inference for Accelerated Life Testing (가속화 수명 실험에서의 비모수적 추론)

  • Kim Tai Kyoo
    • Journal of Korean Society for Quality Management
    • /
    • v.32 no.4
    • /
    • pp.242-251
    • /
    • 2004
  • Several statistical methods are introduced to analyze accelerated failure time data. The most frequently used is the log-linear approach with a parametric assumption. Since accelerated failure time experiments are subject to many environmental restrictions, a parametric log-linear relationship may not work properly for analyzing the resulting data. The models proposed by Buckley and James (1979) and Stute (1993) can be useful in situations where the parametric log-linear method is not applicable. These methods are introduced for accelerated experiments under thermal acceleration and discussed through an illustrative example.
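
For context, the parametric log-linear baseline mentioned above can be sketched as a least-squares regression of log lifetime on inverse temperature (the Arrhenius form common in thermal acceleration). The data here are synthetic; note that this sketch ignores censoring, which is exactly what the nonparametric Buckley-James and Stute approaches are designed to handle.

```python
import math

def fit_log_linear(temps_k, lifetimes):
    """Ordinary least squares of log(lifetime) on 1/temperature.
    Returns (intercept, slope) of log(t) = intercept + slope / T."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(l) for l in lifetimes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

def predict(intercept, slope, temp_k):
    """Predicted lifetime at an (extrapolated) use temperature."""
    return math.exp(intercept + slope / temp_k)
```

The appeal of the parametric form is this extrapolation step: once fitted at stress temperatures, the model predicts lifetime at the (unobserved) use condition, which is also why violations of the log-linear assumption are so costly.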

An O(n² log n) Algorithm for the Linear Knapsack Problem with SUB and Extended GUB Constraints (단순상한 및 확장된 일반상한제약을 갖는 선형배낭문제의 O(n² log n) 해법)

    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.22 no.3
    • /
    • pp.1-9
    • /
    • 1997
  • We present an extension of the well-known generalized upper bound (GUB) constraint and consider a linear knapsack problem with both extended GUB constraints and simple upper bound (SUB) constraints. An efficient algorithm of order O(n² log n) is developed by exploiting structural properties and applying binary search to ordered solution sets, where n is the total number of variables. A numerical example is presented.
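
As background for the problem class, the sketch below solves a simpler relative: the continuous (linear) knapsack with simple upper bounds only, by the standard greedy on profit/weight ratio in O(n log n). The paper's O(n² log n) algorithm additionally handles the extended GUB constraints, which this sketch omits entirely.

```python
def linear_knapsack_sub(items, capacity):
    """Continuous knapsack with simple upper bounds.

    items: list of (profit_per_unit, weight_per_unit, upper_bound)
    Returns the optimal fractional amounts x[i], filled greedily in
    decreasing profit/weight order until capacity runs out.
    """
    x = [0.0] * len(items)
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    for i in order:
        p, w, ub = items[i]
        take = min(ub, capacity / w)   # SUB constraint or remaining room
        x[i] = take
        capacity -= take * w
        if capacity <= 1e-12:
            break
    return x
```

With GUB constraints, greedy ordering alone no longer suffices, because the amount allowed for a variable depends on its whole group; that coupling is what makes the structured binary-search approach of the paper necessary.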

An Efficient Log Data Management Architecture for Big Data Processing in Cloud Computing Environments (클라우드 환경에서의 효율적인 빅 데이터 처리를 위한 로그 데이터 수집 아키텍처)

  • Kim, Julie;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.2
    • /
    • pp.1-7
    • /
    • 2013
  • Big data management is becoming increasingly important in both industry and academia within the information science community. One important category of big data generated by software systems is log data. Log data are generally used by various service providers to improve their services and can also be used as information for qualification. This paper presents a big data management architecture specialized for log data. Specifically, it aggregates log messages sent from multiple clients and provides intelligent functionality such as log data analysis. The proposed architecture supports asynchronous processing in client-server architectures to prevent data access from becoming a bottleneck; as a result, client performance is unaffected even though a remote data store is used. We implement the proposed architecture and show that it works well for processing big log data. All components are implemented with open-source software, and the developed prototypes are publicly available.
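
The asynchronous client-side pattern described above can be sketched with a queue and a background thread: the logging call only enqueues, and a worker ships messages to the store, so slow network I/O never blocks the caller. The class and the in-memory store below are illustrative assumptions, not the paper's actual components.

```python
import queue
import threading

class AsyncLogClient:
    """Non-blocking log client: log() enqueues and returns immediately;
    a daemon thread drains the queue to the (simulated) remote store."""

    def __init__(self, store):
        self.q = queue.Queue()
        self.store = store           # stands in for the remote log server
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def log(self, msg):
        self.q.put(msg)              # O(1), never waits on the network

    def _drain(self):
        while True:
            msg = self.q.get()
            if msg is None:          # sentinel: flush complete, shut down
                break
            self.store.append(msg)   # a network send in a real system

    def close(self):
        """Flush remaining messages and stop the worker."""
        self.q.put(None)
        self.worker.join()
```

Because the queue is FIFO and drained by a single worker, message order is preserved end to end, which matters when logs are later replayed for analysis.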

A Comparison of Data Extraction Techniques and an Implementation of Data Extraction Technique using Index DB -S Bank Case- (원천 시스템 환경을 고려한 데이터 추출 방식의 비교 및 Index DB를 이용한 추출 방식의 구현 -ㅅ 은행 사례를 중심으로-)

  • 김기운
    • Korean Management Science Review
    • /
    • v.20 no.2
    • /
    • pp.1-16
    • /
    • 2003
  • Previous research on data extraction and integration for data warehousing has concentrated mainly on relational DBMSs, and partly on object-oriented DBMSs. It mostly describes issues related to change data (delta) capture and incremental update using the triggering technique of active database systems. Little attention, however, has been paid to data extraction from other types of source systems, such as hierarchical DBMSs, or from source systems without triggering capability. This paper argues, from a practical point of view, that to find data extraction techniques appropriate for different source systems, we must consider not only the types of information sources and the capabilities of ETT tools but also other factors, such as operational characteristics (whether the systems support a DBMS log, a user log, or no log, and whether they provide timestamps) and DBMS characteristics (whether they have triggering capability, etc.). Having applied several data extraction techniques (DBMS log, user log, triggering, timestamp-based extraction, and file comparison) to S bank's source systems (IMS, DB2, ORACLE, and SAM files), we discovered that the data extraction techniques available in a commercial ETT tool do not fully support extraction from the DBMS log of the IMS system. For such IMS systems, a new data extraction technique is proposed that first creates an Index database and then updates the data warehouse using the Index database. We illustrate this technique with an example application.
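
One of the techniques compared above, timestamp-based extraction, can be sketched as a watermark loop: each run pulls only rows whose last-modified timestamp exceeds the watermark saved by the previous run. The row tuple format below is an illustrative assumption, not the bank's actual schema.

```python
def extract_delta(rows, watermark):
    """Timestamp-based incremental extraction.

    rows: iterable of (key, payload, modified_ts) from the source system.
    Returns (delta, new_watermark): rows changed since the watermark, and
    the watermark to persist for the next extraction run.
    """
    delta = [r for r in rows if r[2] > watermark]
    new_watermark = max((r[2] for r in delta), default=watermark)
    return delta, new_watermark
```

The technique's well-known limitation follows directly from the sketch: it only works when the source reliably maintains the timestamp column, which is exactly why the paper weighs source-system characteristics (log support, timestamps, triggers) before choosing an extraction method.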