• Title/Summary/Keyword: Log management

Search Results: 734

Safety Measures for the Track Work Safety (선로 작업자 안전향상을 위한 안전연구 현황)

  • Kwak Sang-Log;Hong Seon-Ho;Park Chan-Woo;Bhang Youn-Keun
    • Proceedings of the KSR Conference / 2004.10a / pp.263-268 / 2004
  • In many countries, safety management for track workers has been carried out after serious accidents, and drastic safety improvements have been achieved as a result. In order to improve track worker safety in Korea, the most recent five years of accident data were analysed and other countries' safety management systems were reviewed. As a result of this study, five basic requirements for improving track worker safety are derived: education, certification, clear definition of responsibility, work planning, and upgrading of the train warning system.


The Implementation of a HACCP System through u-HACCP Application and the Verification of Microbial Quality Improvement in a Small Size Restaurant (소규모 외식업체용 IP-USN을 활용한 HACCP 시스템 적용 및 유효성 검증)

  • Lim, Tae-Hyeon;Choi, Jung-Hwa;Kang, Young-Jae;Kwak, Tong-Kyung
    • Journal of the Korean Society of Food Science and Nutrition / v.42 no.3 / pp.464-477 / 2013
  • There is a great need to develop training programs proven to change behavior and improve knowledge. The purpose of this study was to evaluate employee hygiene knowledge, hygiene practice, and cleanliness before and after HACCP system implementation at one small-size restaurant. The efficiency of the system was analyzed using time-temperature control after implementation of u-HACCP$^{(R)}$. Employee hygiene knowledge and practices showed a significant improvement (p<0.05) after HACCP system implementation. In non-heating processes, such as seasoned lettuce, controlling the sanitation of the cooking facility and the chlorination of raw ingredients were identified as the significant CCPs. Sanitizing was an important CCP because total bacteria were reduced by 2~4 log CFU/g after implementation of HACCP. In bean sprouts, microbial levels decreased from 4.20 log CFU/g to 3.26 log CFU/g. There were significant correlations between hygiene knowledge, practice, and microbiological contamination. First, personnel hygiene had a significant correlation with 'total food hygiene knowledge' scores (p<0.05). Second, total food hygiene practice scores had a significant correlation (p<0.05) with improved microbiological quality of the lettuce salad. Third, in the assessment of microbiological quality after 1 month, there were significant (p<0.05) improvements in the heating, washing, and division processes. After 2 months, on the other hand, microbiological quality was maintained, although only two categories (division process and kitchen floor) were improved. This study also investigated time-temperature control using ubiquitous sensor networks (USN) consisting of a ubi reader (CCP thermometer), a ubi manager (tablet PC), and application software (HACCP monitoring system). Temperature control after USN introduction showed better thermal management (accuracy, efficiency, and consistency of time control). Based on these results, strict time-temperature control could be an effective method to prevent foodborne illness.
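The log CFU/g figures in the abstract above compare microbial counts on a base-10 log scale; a drop from 4.20 to 3.26 log CFU/g, for example, is roughly a 9-fold reduction in viable counts. A minimal sketch of that arithmetic (function names are illustrative, not from the paper):

```python
import math

def log_reduction(before_cfu_per_g: float, after_cfu_per_g: float) -> float:
    """Reduction in log10 CFU/g between two plate counts."""
    return math.log10(before_cfu_per_g) - math.log10(after_cfu_per_g)

def fold_reduction(log_red: float) -> float:
    """Convert a log10 reduction into a multiplicative factor."""
    return 10 ** log_red

# Bean sprouts in the study: 4.20 -> 3.26 log CFU/g, i.e. ~0.94 log
red = log_reduction(10 ** 4.20, 10 ** 3.26)
```

So the "2~4 log" reduction reported for sanitizing corresponds to a 100- to 10,000-fold decrease in total bacteria.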

DIMPLE-II: Dynamic Membership Protocol for Epidemic Protocols

  • Sun, Jin;Choi, Byung-K.;Jung, Kwang-Mo
    • Journal of Computing Science and Engineering / v.2 no.3 / pp.249-273 / 2008
  • Epidemic protocols have two fundamental assumptions. One is the availability of a mechanism that provides each node with a set of log(N) (fanout) nodes to gossip with at each cycle. The other is that the network size N is known to all member nodes. While it may be trivial to support these assumptions in small systems, it is a challenge to realize them in large open dynamic systems, such as peer-to-peer (P2P) systems. Technically, since the most fundamental parameter of epidemic protocols is log(N), the protocols will be limited without knowledge of the system size. Further, since network churn, frequently observed in P2P systems, causes rapid membership changes, providing a different set of log(N) nodes at each cycle is a difficult problem. In order to support the assumptions, the fanout nodes should be selected randomly and uniformly from the entire membership. This paper investigates one possible solution which addresses both problems: providing at each cycle a different set of log(N) nodes selected randomly and uniformly from the entire network under churn, and estimating the dynamic network size in the number of nodes. This solution improves the previously developed distributed algorithm called Shuffle to deal with churn, and utilizes the Shuffle infrastructure to estimate the dynamic network size. The effectiveness of the proposed solution is evaluated by simulation. According to the simulation results, the proposed algorithms successfully handle network churn in providing random log(N) fanout nodes, and practically and accurately estimate the network size. Overall, this work provides insights into designing epidemic protocols for large-scale open dynamic systems, where the protocols behave autonomically.
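The log(N) fanout assumption can be illustrated with a toy push-gossip simulation: each informed node contacts ceil(ln N) uniformly random peers per cycle, and the rumor reaches essentially the whole network in O(log N) cycles. This is a generic sketch of epidemic dissemination under the stated assumptions, not the DIMPLE-II protocol itself:

```python
import math
import random

def gossip_coverage(n: int, cycles: int, seed: int = 42) -> float:
    """Fraction of n nodes informed after `cycles` rounds of push gossip,
    where every informed node pushes to ceil(ln n) uniformly random peers."""
    rng = random.Random(seed)
    fanout = max(1, math.ceil(math.log(n)))  # the log(N) fanout assumption
    informed = {0}                           # node 0 starts with the rumor
    for _ in range(cycles):
        fresh = set()
        for _node in informed:
            fresh.update(rng.sample(range(n), fanout))
        informed |= fresh
    return len(informed) / n
```

With n = 1000 (fanout 7), coverage approaches 100% within a handful of cycles, which is why losing an accurate estimate of N under churn degrades the protocol.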

Anomaly Detection Technique of Log Data Using Hadoop Ecosystem (하둡 에코시스템을 활용한 로그 데이터의 이상 탐지 기법)

  • Son, Siwoon;Gil, Myeong-Seon;Moon, Yang-Sae
    • KIISE Transactions on Computing Practices / v.23 no.2 / pp.128-133 / 2017
  • In recent years, the number of systems for the analysis of large volumes of data has been increasing. Hadoop, a representative big data system, stores and processes large data in a distributed environment of multiple servers, where system-resource management is very important. The authors attempted to detect anomalies in the rapidly changing log data collected from multiple servers using simple but efficient anomaly-detection techniques. Accordingly, an Apache Hive storage architecture was designed to store the log data collected from the multiple servers in the Hadoop ecosystem. Also, three anomaly-detection techniques were designed based on the moving-average and 3-sigma concepts. It was finally confirmed that all three techniques detected the abnormal intervals correctly, while the weighted anomaly-detection technique was more precise than the basic techniques. These results show an excellent approach to detecting log-data anomalies with simple techniques in the Hadoop ecosystem.
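A moving-average/3-sigma rule of the kind the authors describe can be sketched in a few lines; the window size and threshold below are illustrative choices, not the paper's parameters:

```python
import statistics

def detect_anomalies(series, window=5, k=3.0):
    """Flag indices whose value deviates more than k standard deviations
    from the moving average of the preceding `window` points."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist)
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# A flat request-rate series with one sudden spike:
# detect_anomalies([10, 11, 10, 9, 10, 10, 50, 10, 10]) flags index 6
```

A weighted variant, as the paper suggests, would simply replace the plain mean with a recency-weighted one.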

Microbiological Quality and Safety Assessment of Salad in Lunchbox's according to the Holding Time and Temperature - Convenience and Franchise Stores - (보관시간과 온도에 따른 판매 도시락의 샐러드 미생물 품질 평가 - 편의점과 프랜차이즈 도시락 전문점 제품 -)

  • Choi, Jung-Hwa;Wang, Tae-Hwan;Kwak, Tong-Kyung
    • Korean journal of food and cookery science / v.32 no.6 / pp.724-733 / 2016
  • Purpose: This study evaluated the microbiological quality of salads in lunchboxes based on holding time and temperature at convenience and franchise stores. Methods: Cabbage salad and crab meat salad were targeted for microbiological quality assessment. They were tested for aerobic plate counts, coliforms, Escherichia coli, Staphylococcus aureus, and Enterobacteriaceae, and the assessments were performed according to the Korean Food Standards Codex. Results: In cabbage salad from convenience stores stored at $5^{\circ}C$, the aerobic plate counts did not exceed the Korean Food Standards Codex. For cabbage salad stored at $25^{\circ}C$, the aerobic plate count was 5.08 log CFU/g within hours after purchase, which exceeded the Korean Food Standards Codex. In the case of cabbage salad from the franchise store, the E. coli and S. aureus counts exceeded the Korean Food Standards Codex 3 hours after purchase. Microbiological analysis of crab meat salad from the convenience store did not exceed the Korean Food Standards Codex at $5^{\circ}C$. At $25^{\circ}C$, the aerobic plate count was 4.45 log CFU/g after 32 hours; coliforms, E. coli, and S. aureus were not detected, but Enterobacteriaceae was found at 2.34 log CFU/g after 9 hours in the franchise store's crab salad. Coliforms were detected at 1.18 log CFU/g after 3 hours, and S. aureus at 2.04 log CFU/g after 6 hours, at $25^{\circ}C$ in the franchise store. The lunchbox salads under cold storage ($5^{\circ}C$) generally met the Korean Food Standards Codex. Conclusion: The results indicate an urgent need to implement proper management guidelines for the production of lunchbox foods to ensure microbiological safety, and to improve shelf life from production to consumption.

Hygienic effect of modified atmosphere film packaging on ginseng sprout for microbial safety

  • Jangnam Choi;Sosoo Kim;Jiseon Baek;Mijeong Lee;Jihyun Lee;Jayeong Jang;Theresa Lee
    • Food Science and Preservation / v.31 no.1 / pp.24-32 / 2024
  • This study evaluates the microbial safety of ginseng sprouts packaged in moss and a modified atmosphere (MA) film within Styrofoam boxes. Ginseng sprout samples were stored at 4℃ for seven days, and the total fungi and aerobic bacteria counts, relative humidity, and moisture content were measured at 0, 1, 3, 5, and 7 days. During the storage period, both packaging treatments caused an increase in the total fungi and aerobic bacteria counts. However, by the seventh day, the ginseng sprouts packaged in the MA film demonstrated significantly lower counts of total fungi (3.03 log CFU/g) and aerobic bacteria (7.32 log CFU/g) than those in moss (3.66 and 7.63 log CFU/g, respectively). Moss packaging alone resulted in the total fungi count reaching up to 3.36 log CFU/g, with the aerobic bacteria count consistently exceeding 7 log CFU/g, highlighting the importance of hygienic management. Moreover, no significant differences were observed in the moisture content and relative humidity between the MA-film- and moss-packaged groups throughout storage. These findings indicate that the functional MA film is a more hygienic packaging solution for ginseng sprouts than moss.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that the system continues operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, databases with strict relational schemas cannot easily expand across nodes when the stored data must be distributed as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. 
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
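The collector's routing rule described in the abstract (real-time logs to the MySQL module, logs aggregated per unit time to the MongoDB module) reduces to a simple classification step. The record fields and module names below are hypothetical stand-ins, not the paper's actual schema:

```python
def route_log_record(record: dict) -> str:
    """Decide which storage module should receive a parsed log record.

    Hypothetical rule mirroring the described architecture: records that
    need real-time analysis go to the MySQL module; everything else is
    aggregated in the MongoDB module.
    """
    return "mysql" if record.get("realtime") else "mongodb"

def collect(records):
    """Group incoming records by destination module, like the log collector."""
    routed = {"mysql": [], "mongodb": []}
    for rec in records:
        routed[route_log_record(rec)].append(rec)
    return routed
```

In the real system the "mongodb" bucket would be written through a MongoDB client and picked up later by the Hadoop-based analysis module.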

Bioequivalence of Onfran Tablet to Zofran Tablet (Ondansetron 8mg) (조프란 정(온단세트론 8mg)에 대한 온프란 정의 생물학적동등성)

  • 신인철;홍정욱;박윤영;고현철
    • Biomolecules & Therapeutics / v.11 no.1 / pp.58-64 / 2003
  • Ondansetron is a potent, highly selective 5-hydroxytryptamine$_3$ (5-HT$_3$) receptor antagonist used for the management of nausea and vomiting induced by cytotoxic chemotherapy and radiotherapy, and for the treatment of post-operative nausea and vomiting. The purpose of the present study was to evaluate the bioequivalence of two ondansetron tablets, Zofran (GlaxoSmithKline Korea Ltd.) and Onfran (Korea United Pharmaceutical Co., Ltd.), according to the guidelines of the Korea Food and Drug Administration (KFDA). Eighteen normal male volunteers, 24.39$\pm$1.69 years in age and 69.00$\pm$6.74 kg in body weight, were divided into two groups, and a randomized $2{\times}2$ cross-over study was employed. After one tablet containing 8 mg of ondansetron was orally administered, blood was taken at predetermined time intervals and the concentrations of ondansetron in plasma were determined using HPLC with a UV detector. Pharmacokinetic parameters such as AUC, $C_{max}$ and $T_{max}$ were calculated, and an ANOVA test was utilized for the statistical analysis of the parameters. The results showed that the differences in AUC, $C_{max}$ and $T_{max}$ between the two tablets were 5.83%, 5.75% and -5.71%, respectively, when calculated against the Zofran tablet. The powers (1-$\beta$) for AUC, $C_{max}$ and $T_{max}$ were above 90%, above 90% and below 60%, respectively. Minimum detectable differences ($\Delta$) at $\alpha$=0.1 and 1-$\beta$=0.8 were less than 20% for AUC and $C_{max}$ (12.74% and 11.78%, respectively) but more than 20% for $T_{max}$ (34.22%). The 90% confidence intervals were within $\pm$20% for AUC and $C_{max}$ (-2.73∼14.39 and -2.16∼13.67, respectively) but not for $T_{max}$ (-28.71∼17.28). 
Another ANOVA test was conducted for the logarithmically transformed AUC and $C_{max}$. These results showed no significant difference in AUC and $C_{max}$ between the two formulations: the differences between the formulations in these log-transformed parameters were all less than 20% (5.83% and 5.75% for AUC and $C_{max}$, respectively). The 90% confidence intervals for the log-transformed data were within the acceptance range of log 0.8 to log 1.25 (log 0.99∼log 1.15 and log 0.98∼log 1.15 for AUC and $C_{max}$, respectively). The major parameters, AUC and $C_{max}$, met the KFDA criteria for bioequivalence although $T_{max}$ did not, indicating that the Onfran tablet is bioequivalent to the Zofran tablet.
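The acceptance test applied above, that the 90% confidence interval of the log-transformed ratio must fall inside log 0.8 to log 1.25, reduces to a simple interval check. A sketch under that standard rule (the function name is illustrative):

```python
import math

def within_bioequivalence_range(ci_low: float, ci_high: float) -> bool:
    """True if the 90% CI of the test/reference ratio (on the original
    scale) lies within the log 0.8 .. log 1.25 acceptance window."""
    return math.log(0.8) <= math.log(ci_low) and math.log(ci_high) <= math.log(1.25)

# AUC in the study: CI of log 0.99 .. log 1.15 -> within the window
```

Because log is monotone, this is equivalent to requiring the ratio CI itself to lie within 0.80∼1.25.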

Assessment of Microbial Contamination and Safety of Commercial Shrimp Jeotgal (Salt Fermented Shrimp) (유통 중인 새우젓의 미생물학적 오염도 및 안전성 평가)

  • Ha, Ji-Hyoung;Moon, Eun-Sook;Ha, Sang-Do
    • Journal of Food Hygiene and Safety / v.22 no.2 / pp.105-109 / 2007
  • This study monitored and compared the contamination levels of total aerobic bacteria, coliform groups, and S. aureus in 16 Shrimp Jeotgal (salt-fermented shrimp) products from 3 traditional markets (TM), 3 department stores (DS), and 3 supermarkets (SM) located in Seoul, Korea. In addition, the concentrations of NaCl and heavy metal (lead, Pb) in the Shrimp Jeotgal were surveyed. The contamination levels of total aerobic bacteria in the Shrimp Jeotgal averaged $3.35\;log_{10}CFU/g$: $3.71\;log_{10}CFU/g$ for TM, $3.16\;log_{10}CFU/g$ for DS, and $2.84\;log_{10}CFU/g$ for SM. Coliform groups were detected in 50% of the Shrimp Jeotgal products, which means that hygienic control is needed urgently. S. aureus was not detected in any sample. The levels of NaCl were between 17.9 and 28.5%. Heavy metal (lead, Pb) was detected in only 1 of the 16 products, at a level of 0.02 ppm. Although the microbiological contamination levels of Shrimp Jeotgal were not very high, hygienic management such as HACCP is thought to be needed for the production of Shrimp Jeotgal in traditional markets.

Active Enterprise Security Management System for Intrusion Prevention (침입 방지를 위한 능동형 통합 보안 관리 시스템)

  • Park, Jae-Sung;Park, Jae-Pyo;Kim, Won;Jeon, Moon-Seok
    • Journal of the Korea Computer Industry Society / v.5 no.4 / pp.427-434 / 2004
  • Attacks such as hacking and viruses threatening systems and networks have been increasing recently. However, existing system security and network management systems (NMS) cannot cope with these various threats. Therefore, security systems such as firewalls, IDS, VPN, and LAS (Log Analysis System) have been deployed to defend systems and networks. But the mutual linkage between these security systems has been weak, an effective coordinated response system could not be prepared, and duplicated security functions caused inefficiency. Therefore, active security and enterprise security management became necessary. Effective security networks have recently been established through enterprise security management, intrusion tracking, and intrusion induction. However, internetworking enterprise security systems remains difficult, response methods are not systematic, and responses come too late. Therefore, in this paper we propose an active enterprise security management module that can manage a network safely.
