• Title/Summary/Keyword: Log Record


The Recovery Method for MySQL InnoDB Using Features of the IBD Structure (IBD 구조적 특징을 이용한 MySQL InnoDB의 레코드 복구 기법)

  • Jang, Jeewon;Jeoung, Doowon;Lee, Sang Jin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.6 no.2
    • /
    • pp.59-66
    • /
    • 2017
  • The MySQL database holds the second-largest share of the current database market. In particular, InnoDB has been the default storage engine since MySQL 5.5, and many companies use the MySQL database with the InnoDB storage engine. Research on the structural features and logs of the InnoDB storage engine in the field of digital forensics has been steadily underway, but record-by-record restoration of deleted data has not been studied. During digital forensic investigations, database administrators have damaged evidence for the purpose of destroying it, so recovering deleted records from a database is an important part of the forensic process. In this paper, we propose a method of recovering deleted data record by record by analyzing the structure of the MySQL InnoDB storage engine, and we verify the method with tools. The method can counter database anti-forensics and can be used to recover deleted data when an incident involving a MySQL InnoDB database occurs.
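The core idea of scanning page bytes for records still physically present but flagged as deleted can be sketched as follows. This is a toy illustration, not the paper's actual tool: the record layout here (1-byte flags, 2-byte length, payload) is a deliberate simplification, though the delete-mark bit value mirrors the 0x20 info bit of InnoDB's compact record header.

```python
import struct

DELETE_MARK = 0x20  # delete-mark bit, as in InnoDB compact record info bits

def scan_deleted(buf: bytes):
    """Return payloads of records flagged as deleted in the toy layout."""
    recovered = []
    off = 0
    while off + 3 <= len(buf):
        flags = buf[off]
        (length,) = struct.unpack_from(">H", buf, off + 1)
        payload = buf[off + 3 : off + 3 + length]
        if flags & DELETE_MARK:
            recovered.append(payload)  # still recoverable: bytes not wiped
        off += 3 + length
    return recovered

# Synthetic "page": one live record, one delete-marked record.
page = bytes([0x00]) + struct.pack(">H", 5) + b"alive"
page += bytes([DELETE_MARK]) + struct.pack(">H", 7) + b"deleted"
print(scan_deleted(page))  # [b'deleted']
```

The key observation is the same as the abstract's: deletion only sets a flag, so the record body survives until the space is reused.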

Log-Structured B-Tree for NAND Flash Memory (NAND 플래시 메모리를 위한 로그 기반의 B-트리)

  • Kim, Bo-Kyeong;Joo, Young-Do;Lee, Dong-Ho
    • The KIPS Transactions:PartD
    • /
    • v.15D no.6
    • /
    • pp.755-766
    • /
    • 2008
  • Recently, NAND flash memory has come into the spotlight as a next-generation storage device because of its small size, fast speed, and low power consumption compared to the hard disk. However, due to distinct characteristics such as its erase-before-write architecture and asymmetric operation speed and unit, disk-based systems and applications may suffer severe performance degradation when implemented directly on NAND flash memory. In particular, when a B-tree is implemented on NAND flash memory, intensive overwrite operations caused by record insertion, deletion, and reorganization may severely degrade performance. Although the µ-tree has been proposed to overcome this problem, it suffers from frequent node splits and rapid growth in height. In this paper, we propose the Log-Structured B-Tree (LSB-tree), in which a log node corresponding to a leaf node is allocated for update operations and the modified data in the log node are stored with only one write operation. The LSB-tree reduces additional write operations by deferring changes to parent nodes. It also reduces write operations by switching a log node to a new leaf node when data are inserted sequentially in key order. Finally, we show through various experiments that the LSB-tree yields better performance on NAND flash memory than the µ-tree.
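The deferred-write idea behind the log node can be sketched in a few lines. This is a hypothetical simplification, not the paper's implementation: updates destined for a leaf are appended to its paired log node, and the leaf is rewritten only when the log fills, so many logical updates cost one physical leaf write.

```python
class LoggedLeaf:
    """Toy model of one leaf node with an attached log node."""

    def __init__(self, log_capacity=4):
        self.leaf = {}           # last materialized leaf contents
        self.log = []            # pending (key, value) updates
        self.cap = log_capacity
        self.leaf_writes = 0     # counts simulated flash page writes

    def insert(self, key, value):
        self.log.append((key, value))
        if len(self.log) >= self.cap:
            self._merge()

    def _merge(self):
        for k, v in self.log:
            self.leaf[k] = v
        self.log.clear()
        self.leaf_writes += 1    # one write covers all deferred updates

leaf = LoggedLeaf(log_capacity=4)
for i in range(8):
    leaf.insert(i, str(i))
print(leaf.leaf_writes)  # 2 leaf writes instead of 8
```

With sequential key-order inserts, a real LSB-tree can go further and promote the log node itself to a new leaf, avoiding even the merge write.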

Design of Open Gateway Framework for Personalized Healing Data Access (개인화된 힐링 데이터 접근을 위한 개방형 게이트웨이 프레임워크 설계)

  • Jeon, YoungJun;Im, SeokJin;Hwang, HeeJoung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.1
    • /
    • pp.229-235
    • /
    • 2015
  • The ICT (Information & Communication Technology) healing platform uses bio-signals and lifestyle information to give early warning of illness, with the goal of preventing chronic disease. It targets personally led health management, bringing together onto a personal device the personal health information opened by several health agencies (hospitals, fitness centers, health examination centers, and personal health devices), and supports an analysis platform and open APIs to vitalize optional services. In this paper, we propose the Healing Platform Adaptor (HPAdaptor), an open gateway framework for accessing personalized healing data in the ICT healing platform. The HPAdaptor is a software engine that relays personal healing data between service providers and several personal health data providers, such as EMR (electronic health record), Korean medicine, life-log, wellness, chronic-disease, and fitness data. The HPAdaptor design can then be used as a reference model for data and service providers that need record storage, mobile platforms, or analytics platforms.
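The relaying role of such an adaptor can be sketched minimally. All function and field names below are hypothetical illustrations, not part of the HPAdaptor design: provider-specific fetch results are normalized into one common healing-data record shape before hand-off to a service or analysis platform.

```python
# Hypothetical provider-specific normalizers (field names are made up).
def from_emr(raw):
    return {"source": "EMR", "user": raw["patient_id"], "data": raw["chart"]}

def from_lifelog(raw):
    return {"source": "lifelog", "user": raw["uid"], "data": raw["steps"]}

ADAPTORS = {"emr": from_emr, "lifelog": from_lifelog}

def relay(provider, raw):
    """Normalize one provider record for downstream consumers."""
    return ADAPTORS[provider](raw)

rec = relay("lifelog", {"uid": "u1", "steps": 4200})
print(rec)  # {'source': 'lifelog', 'user': 'u1', 'data': 4200}
```

The gateway's value is exactly this indirection: consumers see one record shape regardless of which provider supplied the data.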

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, a strict schema like that of a relational database makes it difficult to expand nodes when rapidly growing data must be distributed across various nodes. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. The data models of NoSQL databases are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented store, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log data insert performance evaluation over various chunk sizes.
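The collector's routing step can be sketched without any database servers. This is an illustrative stand-in, not the paper's system: in-memory lists replace the MongoDB and MySQL modules, and the log-type names are hypothetical; the point is only the split between real-time relational storage and bulk document storage.

```python
# In-memory stand-ins for the MySQL module (real-time) and
# the MongoDB module (aggregated, batch-analyzed by Hadoop).
mongo_sink, mysql_sink = [], []

REALTIME_TYPES = {"transaction", "login"}  # hypothetical type names

def collect(log):
    """Classify a log record by type and dispatch it to a storage sink."""
    if log["type"] in REALTIME_TYPES:
        mysql_sink.append(log)   # needed for real-time graphs
    else:
        mongo_sink.append(log)   # aggregated for batch analysis

for entry in [{"type": "login", "msg": "ok"},
              {"type": "batch", "msg": "eod report"},
              {"type": "transaction", "msg": "xfer"}]:
    collect(entry)

print(len(mysql_sink), len(mongo_sink))  # 2 1
```

In the actual system the two sinks would be a MySQL table and a sharded MongoDB collection, with the same dispatch decision made per incoming record.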

An Authentication Technique for Internal Information Leakage Prevention Based on H/W Information (H/W 정보의 인증을 통한 내부정보유출 방지 기법)

  • Yang, Sun Ok;Choi, Nak Gui;Park, Jae Pyo;Choi, Hyung Il
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.5 no.1
    • /
    • pp.71-81
    • /
    • 2009
  • Owing to the development of IT technology and the Internet, industrial information leakage is also facing a serious situation. However, most existing techniques defend against information leakage only after disclosure, by finding its cause. Therefore, this paper offers a way to protect information by adding hardware information to user authentication, reflecting different security policies for data access to further strengthen security. A security agent records a log of all actions on the data so that they are easy to analyze, and the various possible scenarios are analyzed and applied, along with how to implement and how to block them. Future research will address controlling access to data without a security agent and updating the H/W information.
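The hardware-binding step can be sketched as follows. This is a minimal sketch under assumed details, not the paper's scheme: the identifier fields (MAC address, disk serial) and the registration flow are illustrative; a fingerprint derived from machine identifiers is registered once and must be reproduced at access time alongside the user credential.

```python
import hashlib

def hw_fingerprint(mac: str, disk_serial: str) -> str:
    """Derive a stable fingerprint from machine identifiers."""
    return hashlib.sha256(f"{mac}|{disk_serial}".encode()).hexdigest()

# One-time registration of a user's machine (values are made up).
registered = {"alice": hw_fingerprint("00:1A:2B:3C:4D:5E", "WD-123")}

def authenticate(user: str, mac: str, disk_serial: str) -> bool:
    """Grant access only from the machine registered for this user."""
    return registered.get(user) == hw_fingerprint(mac, disk_serial)

print(authenticate("alice", "00:1A:2B:3C:4D:5E", "WD-123"))  # True
print(authenticate("alice", "AA:BB:CC:DD:EE:FF", "WD-123"))  # False
```

The effect is that stolen credentials alone are not enough: access from an unregistered machine fails even with a valid user identity.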

A Study of Hydrological Generation - The generation and comparison of annual and monthly discharge at Waegwan in the Nakdong River (수문학적 모의기법에 대한 연구 - 낙동강 왜관지점의 연유량과 월유량의 모의발생 및 비교 -)

  • 천덕진;최영박
    • Water for future
    • /
    • v.13 no.1
    • /
    • pp.49-56
    • /
    • 1980
  • This analytical study covers 1) the generation of annual and monthly discharge for a single hydrological variable at a single site, 2) comparison of the historical records with the generated series, and 3) conversion of the generated monthly discharge into annual discharge; the conclusions will be used for future water resources development planning. Annual discharges at Waegwan are characterized by a log-normal distribution and an absence of persistence. The random number generator also introduces errors into the generation of annual discharge. The serial correlation coefficients of the generated annual discharge are smaller than those of the historical records, while the correlation coefficient and slope in January have positive (+) values, opposite to the historical record. Converting the generated monthly discharge into annual discharge is not appropriate.
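The log-normal, persistence-absent generation described above can be sketched as fitting moments to the logarithms of the record and drawing independent variates. The historical values below are made-up placeholders, not Waegwan data.

```python
import math
import random

random.seed(42)

# Hypothetical annual discharges; the real study used the Waegwan record.
historical = [310.0, 275.0, 402.0, 351.0, 298.0, 330.0]
logs = [math.log(q) for q in historical]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))

# Independent draws: persistence-absent by construction.
generated = [random.lognormvariate(mu, sigma) for _ in range(100)]
print(all(q > 0 for q in generated))  # True: discharges stay positive
```

Using a log-normal model guarantees positive discharges, and independent draws reproduce the study's finding that annual flows at this site lack persistence.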

Speech Emotion Recognition on a Simulated Intelligent Robot (모의 지능로봇에서의 음성 감정인식)

  • Jang Kwang-Dong;Kim Nam;Kwon Oh-Wook
    • MALSORI
    • /
    • no.56
    • /
    • pp.173-183
    • /
    • 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 48% classification accuracy while human classifiers give 71% accuracy.
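One of the phonetic features named above, frame log energy, can be sketched directly. The frame length and the synthetic waveform are illustrative choices, not the paper's settings.

```python
import math

def frame_log_energy(samples, frame_len=160):
    """Return per-frame log energy for a list of samples."""
    feats = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        energy = sum(s * s for s in samples[i:i + frame_len])
        feats.append(math.log(energy + 1e-10))  # floor avoids log(0)
    return feats

# A 440 Hz tone sampled at 16 kHz as a stand-in utterance.
tone = [math.sin(2 * math.pi * 440 * n / 16000) for n in range(1600)]
energies = frame_log_energy(tone)
print(len(energies))  # 10 frames
```

In the method described, statistics of such per-frame features (means, ranges, and so on) over the utterance would be stacked into the vector fed to the SVM classifier.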

Suggestions on the Development of Standard Engineering Communication Phrases

  • Doo, Hyun-Wook;Choi, Seung-Hee
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2017.11a
    • /
    • pp.34-35
    • /
    • 2017
  • Under the STCW Convention, marine engineers are required to have a satisfactory level of maritime English proficiency so that they can successfully perform their duties on board and maintain and operate the various equipment and facilities installed in the ship. More specifically, the importance of engineers' written communication skills has been highlighted, since their documents (for instance, post-work records and legal and/or internal reports) have a significant legal impact in the event of a marine casualty or maritime crime. To suggest the necessity of developing standard engineering logbook phrases (SELP), this paper closely analyses three months of authentic marine engineers' work records written by Korean officers. From the analysis, the problems and errors in the logbooks are identified, and considerations to be taken into account in the development of SELP are illustrated. Finally, future actions for this standardised written communication for logbook entries are sought.

Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot (모의 지능로봇에서 음성신호에 의한 감정인식)

  • Jang, Kwang-Dong;Kwon, Oh-Wook
    • Proceedings of the KSPS conference
    • /
    • 2005.11a
    • /
    • pp.163-166
    • /
    • 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to a human.

Design & Implementation of a Host Based Access Control System (호스트 기반 접근제어시스템의 설계 및 구현)

  • Kim, Jin-Chun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.1
    • /
    • pp.34-39
    • /
    • 2007
  • With the active use of the Internet, the need for security in various environments is being emphasized. Moreover, with the broad use of messengers on PCs and of P2P applications, the security and management of individual hosts on the Internet have become very important issues. Therefore, in this paper we propose the design and implementation of a host-based access control system for hosts on the Internet, including Windows-based PCs, which provides access control, information on packets, and the recording and monitoring of log files.
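The access-control part of such a host-based system can be sketched as a first-match rule list. The rule fields and addresses below are hypothetical, not the paper's rule format: each inbound packet is matched against ordered rules, the action of the first match is taken, and unmatched packets fall through to a default deny.

```python
# Hypothetical rule list: None acts as a wildcard for a field.
RULES = [
    {"src": "10.0.0.5", "port": None, "action": "deny"},   # block one host
    {"src": None, "port": 80, "action": "allow"},          # allow web traffic
]

def check(packet):
    """Return the action of the first matching rule, else default deny."""
    for rule in RULES:
        if rule["src"] not in (None, packet["src"]):
            continue
        if rule["port"] not in (None, packet["port"]):
            continue
        return rule["action"]
    return "deny"  # default policy

print(check({"src": "10.0.0.5", "port": 80}))   # deny
print(check({"src": "10.0.0.9", "port": 80}))   # allow
print(check({"src": "10.0.0.9", "port": 22}))   # deny
```

A real implementation would also log each decision, feeding the record-and-monitor function the abstract describes.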