• Title/Summary/Keyword: Log server

Search Results: 172

A Recovery Technique Using Client-based Logging in Client/Server Environment

  • Park, Yong-Mun; Lee, Chan-Seob; Kim, Dong-Hyuk; Park, Eui-In
    • Proceedings of the IEEK Conference / 2002.07a / pp.429-432 / 2002
  • Existing recovery techniques based on logging in a client/server database system manage the log only as a whole on the server. This potentially imposes the cost of transmitting log records for every transaction executed on a client and increases network traffic. In this paper, a logging technique for a redo-only log is proposed, which removes redundant before-images and supports client-based logging to eliminate the transmission cost of log records. In the case of a client crash, redo recovery is performed in a self-recovering way through backward analysis of the client log. In the case of a server crash, only the after-images of the pages that need recovery, found through simultaneous backward log analysis, are transmitted, and redo recovery is carried out with the received after-images and the backward-analysis log. We also select a comparison model to estimate the performance of the proposed recovery technique, and analyze the redo and recovery times as the number of clients and the rate of update operations change. A minimal sketch of the client-side logging idea appears after this entry.

  • PDF
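
The client-based, redo-only logging idea above can be illustrated with a short sketch. Everything here (the record layout, log file name, and the `apply_after_image` callback) is a hypothetical simplification rather than the paper's implementation: each client appends only after-images to its own local log, and recovery scans that log backwards so only the newest after-image per page is redone.

```python
import json
import os
import time

LOG_PATH = "client_redo.log"   # hypothetical per-client redo-only log

def log_update(page_id, after_image):
    """Append a redo-only record (after-image only, no before-image)."""
    record = {"ts": time.time(), "page": page_id, "after": after_image}
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def recover(apply_after_image):
    """Scan the client log backwards; redo only the newest after-image per page."""
    if not os.path.exists(LOG_PATH):
        return
    with open(LOG_PATH, encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    seen = set()
    for rec in reversed(records):          # backward analysis of the log
        if rec["page"] in seen:
            continue                       # older image; newest one already redone
        seen.add(rec["page"])
        apply_after_image(rec["page"], rec["after"])

# usage: recover(lambda page, img: print("redo page", page, "->", img))
```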

An Efficient Log Data Processing Architecture for Internet Cloud Environments

  • Kim, Julie; Bahn, Hyokyung
    • International Journal of Internet, Broadcasting and Communication / v.8 no.1 / pp.33-41 / 2016
  • Big data management is becoming an increasingly important issue in both industry and academia in the information science community today. One of the important categories of big data generated by software systems is log data. Log data is generally used by service providers to improve their services and can also be used to improve system reliability. In this paper, we propose a novel big data management architecture specialized for log data. The proposed architecture provides a scalable log management system that consists of client-side and server-side modules for efficient handling of log data. To support large volumes of simultaneous log data from multiple clients, we adopt the Hadoop infrastructure in the server-side file system for storing and managing log data efficiently. We implement the proposed architecture to support various client environments and validate its efficiency through measurement studies. The results show that the proposed architecture performs better than the existing logging architecture by 42.8% on average. All components of the proposed architecture are implemented based on open source software, and the developed prototypes are now publicly available.
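
As a rough illustration of the client/server split described above, the sketch below has clients ship newline-terminated log lines to a collector, which batches them and appends each batch to HDFS through the standard `hdfs dfs -appendToFile` command. The port, batch size, and HDFS path are assumptions for illustration, not details of the paper's prototype.

```python
import socketserver
import subprocess
import tempfile

HDFS_DEST = "/logs/clients.log"       # assumed HDFS target path
BATCH_SIZE = 1000                     # flush after this many lines

class LogCollector(socketserver.StreamRequestHandler):
    buffer = []                       # shared line buffer (single-threaded server)

    def handle(self):
        for raw in self.rfile:        # each client sends newline-terminated log lines
            LogCollector.buffer.append(raw.decode("utf-8", "replace"))
            if len(LogCollector.buffer) >= BATCH_SIZE:
                flush_to_hdfs(LogCollector.buffer)
                LogCollector.buffer = []

def flush_to_hdfs(lines):
    """Write the batch to a temp file, then append it to HDFS with the Hadoop CLI."""
    with tempfile.NamedTemporaryFile("w", delete=False, suffix=".log") as tmp:
        tmp.writelines(lines)
        local_path = tmp.name
    subprocess.run(["hdfs", "dfs", "-appendToFile", local_path, HDFS_DEST], check=True)

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 5140), LogCollector) as srv:
        srv.serve_forever()
```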

Design and Implementation of Intrusion Detection System of Packet Reduction Method (패킷 리덕션 방식의 침입탐지 시스템 설계 및 구현)

  • JUNG, Shin-Il; KIM, Bong-Je; KIM, Chang-Soo
    • Journal of Fisheries and Marine Sciences Education / v.17 no.2 / pp.270-280 / 2005
  • Many researchers have proposed various methods of detecting illegal intrusions in order to improve the Internet environment. Among them, the IDS (Intrusion Detection System) is the most common model for protecting network security. In this paper, we propose a new log format, in place of the Apache log format, for SSL integrity verification, and we translate the file-based (file-DB) log format into a relational (R-DB) log format. Using these methods we can manage the Web server's integrity, and the log data is transmitted to a verification system so that the primary function of an IDS and the Web server's integrity management can be performed at the same time. The proposed system can also be used in wired and wireless environments, including PDA-based access.
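
The file-DB to R-DB translation step can be sketched as below, using the standard Apache combined log format and SQLite as a stand-in relational store; the paper's own extended log format and schema are not reproduced here.

```python
import re
import sqlite3

# Standard Apache combined log prefix; the paper defines its own extended format.
LINE_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def load_access_log(log_path, db_path="weblog.db"):
    """Parse a file-based access log and insert the rows into a relational table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS access_log "
        "(host TEXT, time TEXT, request TEXT, status INTEGER, size INTEGER)"
    )
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LINE_RE.match(line)
            if not m:
                continue                      # skip lines that do not parse
            size = 0 if m["size"] == "-" else int(m["size"])
            con.execute(
                "INSERT INTO access_log VALUES (?, ?, ?, ?, ?)",
                (m["host"], m["time"], m["request"], int(m["status"]), size),
            )
    con.commit()
    con.close()
```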

Partial Segment Effects on the Log-structured Disk Server (로그구조 디스크 서버에서 부분 세그먼트의 영향)

  • 김대웅; 남영진; 박찬익
    • Proceedings of the Korean Information Science Society Conference / 1998.10a / pp.721-723 / 1998
  • This paper identifies problems of the existing log-structured disk server (LDS), namely reduced disk-bandwidth utilization and a high cleaning overhead caused by partial segments, and proposes an LDS that uses non-volatile memory as one way of eliminating such partial segments. The new LDS was implemented on the Solaris operating system, and a variety of experiments showed that the proposed structure performs significantly better than the existing one in terms of disk-bandwidth utilization and cleaning overhead. A small sketch of the partial-segment buffering idea follows this entry.

  • PDF
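
A toy sketch of the partial-segment problem and the NVRAM-based fix: blocks are buffered until a full segment is assembled, so the disk log never receives short, space-wasting segments. The segment size and the `write_segment` callback are assumptions for illustration.

```python
SEGMENT_BLOCKS = 128          # assumed segment size in blocks

class SegmentWriter:
    """Buffer incoming blocks; only full segments reach the disk.

    Partial segments stay in `nvram_buffer` (standing in for non-volatile
    memory), so the on-disk log never contains short segments that waste
    bandwidth and later create extra cleaning work.
    """

    def __init__(self, write_segment):
        self.write_segment = write_segment   # callback: write one full segment
        self.nvram_buffer = []               # blocks of the current partial segment

    def append(self, block):
        self.nvram_buffer.append(block)
        if len(self.nvram_buffer) == SEGMENT_BLOCKS:
            self.write_segment(self.nvram_buffer)   # full segment: one large write
            self.nvram_buffer = []                  # partial state persists in NVRAM

# usage: w = SegmentWriter(lambda seg: print("flush", len(seg), "blocks"))
```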

A research on improving client based detection feature by using server log analysis in FPS games (FPS 게임 서버 로그 분석을 통한 클라이언트 단 치팅 탐지 기능 개선에 관한 연구)

  • Kim, Seon Min; Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.6 / pp.1465-1475 / 2015
  • Cheating detection models in online games can be divided into two parts. One is the client-based model, which is designed to keep malicious programs from running while the game is played. The other is the server-based model, which distinguishes benign users from cheaters through server log analysis. The client-based model provides various features to prevent cheating, for instance anti-reversing and protection against memory manipulation. However, being deployed and operated on the client side is a serious weakness, as cheaters can analyze and bypass the detection features. That is why the server-based model is an emerging way to detect cheating users in online games. However, with simple log data such as an FPS game's logs, it can be hard to find a valid difference between the two groups. In this paper, in order to compensate for the disadvantages of the two detection models above, we use the log of the existing game security solution as well as the server log, achieving better performance and a higher detection ratio than the existing detection models in the market.
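
A minimal sketch of the combination idea, assuming hypothetical event formats: alerts from the client-side game security solution are joined with per-user suspicion scores derived from the server log, and only users corroborated by both sides are flagged. The field names and scoring rule are invented for illustration.

```python
from collections import defaultdict

def combine_detections(client_events, server_events, threshold=2.0):
    """Flag users reported by both the client-side solution and the server log.

    client_events: iterable of (user_id, alert_type), e.g. ("u1", "memory_patch")
    server_events: iterable of (user_id, suspicion),  e.g. ("u1", 0.9)
    """
    client_alerts = defaultdict(set)
    for user, alert in client_events:
        client_alerts[user].add(alert)

    suspects = []
    for user, suspicion in server_events:
        score = suspicion + len(client_alerts.get(user, ()))
        if score >= threshold:               # corroborated by both data sources
            suspects.append((user, score))
    return sorted(suspects, key=lambda x: -x[1])
```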

Novelty Detection on Web-server Log Dataset (웹서버 로그 데이터의 이상상태 탐지 기법)

  • Lee, Hwaseong; Kim, Ki Su
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.10 / pp.1311-1319 / 2019
  • Currently, the web environment is a commonly used area for sharing information and conducting business, and it is becoming an attack point for external hacking aimed at personal information leakage or system failure. Conventional signature-based detection is used against cyber threats, but it has the limitation that changed patterns, such as polymorphic variants, are difficult to detect. In particular, injection attacks are known as the most critical security risks based on web vulnerabilities, and new variants are possible at any time. In this paper, we propose a novelty detection technique to detect abnormal states that deviate from the normal state in a web-server log dataset (WSLD). The proposed method is a machine-learning-based technique that detects the small number of anomalous records that tend to differ from the large number of normal records, after replacing the strings in the web-server log dataset with vectors using a machine-learning-based embedding algorithm.
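
A minimal scikit-learn sketch of the pipeline described above. The paper uses its own machine-learning-based embedding; the character n-gram TF-IDF vectorizer and IsolationForest below are stand-ins, not the authors' model.

```python
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer

def fit_novelty_detector(normal_requests):
    """Train on (mostly) normal web-server log lines."""
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    X = vectorizer.fit_transform(normal_requests)     # log strings -> vectors
    model = IsolationForest(contamination=0.01, random_state=0).fit(X)
    return vectorizer, model

def is_anomalous(vectorizer, model, request_line):
    x = vectorizer.transform([request_line])
    return model.predict(x)[0] == -1                  # -1 means outlier / novelty

# usage:
# vec, clf = fit_novelty_detector(open("access.log").read().splitlines())
# print(is_anomalous(vec, clf, "GET /item.php?id=1 OR 1=1 HTTP/1.1"))
```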

Comparison of Remaining Data According to Deletion Events on Microsoft SQL Server (Microsoft SQL Server 삭제 이벤트의 데이터 잔존 비교)

  • Shin, Jiho
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.2 / pp.223-232 / 2017
  • Previous research on data recovery in Microsoft SQL Server has focused on restoring data from the transaction log, in which deleted records may still exist. However, this approach has the limitation of not being applicable if the related transaction log does not exist or the physical database file is not attached to the server. Since a suspect at a crime scene may delete data records using deletion statements other than "delete", we need to check the remaining data and the possibility of recovering the deleted records. In this paper, we examine how the page-allocation information of the table, the unallocated deleted data, and the row offset array in the page change according to "delete", "truncate", and "drop" events. Finally, we confirm the possibility of data recovery and the usability of management tools in Microsoft SQL Server digital forensic investigations.
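
The three deletion events compared above can be reproduced on disposable copies of a table with a short pyodbc script; the connection string and table names are placeholders, and the page-level inspection itself is only noted in a comment.

```python
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=ForensicTest;UID=sa;PWD=<password>"   # placeholders
)

# Three copies of the same test table, one per deletion event.
DELETION_EVENTS = {
    "t_delete":   "DELETE FROM t_delete;",        # row-level delete; old row data often remains in pages
    "t_truncate": "TRUNCATE TABLE t_truncate;",   # deallocates pages instead of deleting rows one by one
    "t_drop":     "DROP TABLE t_drop;",           # removes the table's metadata as well
}

def run_deletion_events():
    con = pyodbc.connect(CONN_STR, autocommit=True)
    cur = con.cursor()
    for table, statement in DELETION_EVENTS.items():
        cur.execute(statement)
        # After each event the paper inspects the page-allocation information,
        # the unallocated (deleted) data, and the row offset array of the
        # affected pages to judge what remains recoverable.
    con.close()
```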

A Data-Consistency Scheme for the Distributed-Cache Storage of the Memcached System

  • Liao, Jianwei; Peng, Xiaoning
    • Journal of Computing Science and Engineering / v.11 no.3 / pp.92-99 / 2017
  • Memcached, commonly used to speed up data access in big-data and Internet-web applications, is system software implementing a distributed-cache mechanism. However, it faces the severe challenge of losing recently uncommitted updates when Memcached servers crash for some reason. Although a replica scheme and a disk-log-based replay mechanism have been proposed to overcome this problem, they introduce either the overhead of replica synchronization or the persistent-storage overhead caused by flushing the related logs. This paper proposes a scheme of backing up the write requests (i.e., set and add) on the Memcached client side, to reduce the overhead of making disk-log records or maintaining replica consistency. If the Memcached server fails, a timestamp-based recovery mechanism replays the write requests buffered by the relevant clients to regain the lost data updates on the rebooted Memcached server, thereby meeting the data-consistency requirement. More importantly, compared with logging the write requests to the persistent storage of the master server and with the server-replication scheme, the newly proposed approach of backing up the logs on the client side can greatly decrease the time overhead, by up to 116.8%, when processing write workloads.
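
A minimal sketch of client-side backup of set/add requests with timestamp-based replay, written against the pymemcache client. The journal file format and the replay policy are simplified assumptions, not the authors' mechanism.

```python
import json
import time
from pymemcache.client.base import Client

BACKUP_PATH = "memcached_writes.log"     # local journal of recent writes

class BackedUpClient:
    """Wrap a Memcached client and journal set/add requests on the client side."""

    def __init__(self, server=("localhost", 11211)):
        self.client = Client(server)

    def _journal(self, op, key, value):
        rec = {"ts": time.time(), "op": op, "key": key, "value": value}
        with open(BACKUP_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(rec) + "\n")

    def set(self, key, value):           # values assumed to be str/bytes
        self._journal("set", key, value)
        return self.client.set(key, value)

    def add(self, key, value):
        self._journal("add", key, value)
        return self.client.add(key, value)

    def replay_since(self, crash_ts):
        """After the server reboots, replay writes newer than crash_ts in order."""
        with open(BACKUP_PATH, encoding="utf-8") as f:
            for line in f:
                rec = json.loads(line)
                if rec["ts"] >= crash_ts:
                    self.client.set(rec["key"], rec["value"])
```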

A Case Study for Improving Performance of A Banking System Using Load Test (부하테스트를 이용한 금융 시스템의 성능개선 사례)

  • Kim, Tai Suk; Lee, Jong Yun; Kim, Jong Soo
    • Journal of Korea Multimedia Society / v.18 no.12 / pp.1501-1508 / 2015
  • In this paper, we describe a case study of improving performance through load testing of a multi-tiered financial accounting system before its opening. The load test was conducted after the data collection tools (Performance Monitor, DB PSSDiag) were installed. By analyzing the collected logs, we were able to identify the main areas requiring performance improvement among the presentation tier, web tier, business-logic tier, and data tier. ASP.NET server downs on the web tier could be resolved by modifying parameter values in the configuration file. Server downs that occurred on the business-logic tier when a large number of users accessed the system at the same time were more difficult to solve; by analyzing a hang dump taken at the time of the server down, we were able to find the process that caused the problem and modify the relevant code. For the major performance improvements of the data tier, the indices of some queries were optimized using the built-in DBMS query analyzer, after analyzing the log of long-response-time queries. The problems and solutions considered in this case study can serve as a reference for the performance improvement of multi-tier systems with a similar structure.
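
The kind of log aggregation used to locate the bottleneck tier can be sketched as below. The log format (one CSV row per request with a tier name and elapsed milliseconds) is an assumption for illustration, not the format produced by Performance Monitor or PSSDiag.

```python
import csv
from collections import defaultdict

def slowest_tiers(log_path, top_n=3):
    """Aggregate per-tier response times from a load-test log.

    Assumed CSV columns: timestamp, tier, elapsed_ms
    (e.g. "2015-10-01T10:00:00,business_logic,5230").
    """
    totals = defaultdict(lambda: [0, 0.0])          # tier -> [count, sum_ms]
    with open(log_path, newline="", encoding="utf-8") as f:
        for ts, tier, elapsed_ms in csv.reader(f):
            totals[tier][0] += 1
            totals[tier][1] += float(elapsed_ms)
    averages = {tier: s / c for tier, (c, s) in totals.items()}
    return sorted(averages.items(), key=lambda x: -x[1])[:top_n]

# usage: print(slowest_tiers("loadtest.csv"))
```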

A Precursor Phenomena Analysis of APT Hacking Attack and IP Traceback (APT 해킹 공격에 대한 전조현상 분석 및 IP역추적)

  • Noh, Jung Ho; Park, Dea-Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.05a / pp.275-278 / 2013
  • A log is the data that remains in a system, recording what the system has done. Incidents currently being reported in IT, such as the personal-information disclosure at Nate and hacks of press-agency servers, show that such crimes continue to occur. To deal with these hacking incidents, professional log-analysis software and expert analysis are needed. The present study examines log-analysis techniques for intelligently executed APT attacks, analyzing attack patterns in the logs and performing IP traceback so that precursor phenomena can be detected and attacks prevented in advance. A small sketch of this kind of precursor scan follows this entry.

  • PDF
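
A small sketch of the kind of precursor scan suggested above: count repeated authentication failures per source IP in a log and report addresses exceeding a threshold as candidates for traceback. The failure markers and threshold are assumptions for illustration.

```python
import re
from collections import Counter

IP_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")
FAIL_MARKERS = ("Failed password", "authentication failure", "401")   # assumed markers

def precursor_ips(log_path, threshold=20):
    """Return source IPs with repeated failures, as candidates for traceback."""
    failures = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if any(marker in line for marker in FAIL_MARKERS):
                m = IP_RE.search(line)
                if m:
                    failures[m.group(1)] += 1
    return [(ip, n) for ip, n in failures.most_common() if n >= threshold]
```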