• Title/Summary/Keyword: transaction log

Applications of Transaction Log Analysis for the Web Searching Field (웹 검색 분야에서의 로그 분석 방법론의 활용도)

  • Park, So-Yeon;Lee, Joon-Ho
    • Journal of the Korean Society for Library and Information Science / v.41 no.1 / pp.231-242 / 2007
  • Transaction logs capture the interactions between online information retrieval systems and their users. Given the nature of the Web and of Web users, transaction logs are a reasonable and relevant method for collecting and investigating the information searching behaviors of a large number of Web users. Based on a series of research studies that analyzed Naver transaction logs, this study examines how transaction log analysis can be applied to, and can contribute to, the field of Web searching, and suggests future implications for the field. It is expected that this study could contribute to the development and implementation of more effective Web search systems and services.

Investigating Web Search Behavior via Query Log Analysis (로그분석을 통한 이용자의 웹 문서 검색 행태에 관한 연구)

  • 박소연;이준호
    • Journal of the Korean Society for Information Management / v.19 no.3 / pp.111-122 / 2002
  • In order to investigate the information seeking behavior of Web search users, this study analyzes transaction logs generated by users of NAVER, a major Korean Internet search service. We present a session definition method for Web transaction log analysis, a way of cleaning the original logs, and a query classification method. We also propose a query term definition method that is necessary for Korean Web transaction log analysis. It is expected that this study could contribute to the development and implementation of more effective Web search systems and services.
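
A minimal sketch of the session definition step mentioned in this abstract, assuming a per-user inactivity cutoff of 30 minutes and a (user_id, timestamp, query) record layout; both the cutoff and the layout are placeholders for illustration, not the criteria or log format used in the paper.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical record layout: (user_id, timestamp, query). The 30-minute
# inactivity cutoff is an assumption for illustration only.
SESSION_GAP = timedelta(minutes=30)

def split_sessions(records):
    """Group (user_id, timestamp, query) records into per-user sessions."""
    by_user = defaultdict(list)
    for user_id, ts, query in records:
        by_user[user_id].append((ts, query))

    sessions = []
    for user_id, events in by_user.items():
        events.sort()                          # order each user's events by time
        current = [events[0]]
        for prev, cur in zip(events, events[1:]):
            if cur[0] - prev[0] > SESSION_GAP:
                sessions.append((user_id, current))
                current = []
            current.append(cur)
        sessions.append((user_id, current))
    return sessions

# Toy usage: the first two queries fall in one session, the third starts a new one.
logs = [
    ("u1", datetime(2002, 5, 1, 9, 0), "날씨"),
    ("u1", datetime(2002, 5, 1, 9, 5), "서울 날씨"),
    ("u1", datetime(2002, 5, 1, 13, 0), "환율"),
]
print(len(split_sessions(logs)))               # -> 2
```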

Selective Redo Recovery Scheme for Fine-Granularity Locking in Database Management Systems (데이터베이스 관리 시스템에서 섬세 입자 잠금기법을 위한 선택적 재수행 회복기법)

  • 이상희
    • Journal of the Korea Society of Computer and Information / v.6 no.2 / pp.27-33 / 2001
  • In this thesis, we present a simple and efficient recovery method called ARIES/SR (ARIES/Selective Redo), which is based on ARIES (Algorithm for Recovery and Isolation Exploiting Semantics). ARIES performs redo for all updates made by both nonloser and loser transactions, which introduces significant overhead during restart after a system failure. To reduce this overhead, we propose the ARIES/SR recovery algorithm. In this algorithm, redo is performed using log records only for updates made by nonloser transactions, which reduces the number of redo operations. Selective undo is also performed, using log records only for updates made by loser transactions, which further reduces the recovery work.
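
The selective redo/undo idea can be illustrated with a short sketch. This is a simplification that assumes a log of dict records carrying lsn, txn, page, redo, and undo fields plus a page_lsn map describing what had reached disk at the crash; the names and structure are hypothetical and not the paper's actual algorithm.

```python
def restart_recovery(log_records, loser_txns, page_lsn):
    """Simplified restart pass in the spirit of selective redo (illustrative only).

    log_records: dicts with keys lsn, txn, page, redo, undo, in LSN order
                 (a hypothetical layout, not the paper's data structures).
    loser_txns:  set of transaction ids that had not committed at the crash.
    page_lsn:    page_id -> LSN of the last update reflected on disk.
    """
    # Redo phase: unlike plain ARIES, replay only nonloser transactions'
    # updates, and only those not yet reflected on the page.
    for rec in log_records:
        if rec["txn"] not in loser_txns and page_lsn[rec["page"]] < rec["lsn"]:
            rec["redo"]()

    # Undo phase: roll back only those loser updates that actually reached
    # disk, newest first.
    for rec in reversed(log_records):
        if rec["txn"] in loser_txns and page_lsn[rec["page"]] >= rec["lsn"]:
            rec["undo"]()
```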

A Study on the Improvement of Information Service Using Information System Log Analysis (정보 시스템 이용기록 분석을 통한 정보 서비스 개선방안 연구)

  • Jho, Jae-Hyeong
    • Journal of Information Management / v.36 no.4 / pp.137-153 / 2005
  • For the improvement of information services, users' transaction logs can be stored in the system, and log analysis should be included in the process of service improvement. The kinds of log records collected and the methods used to analyze them also differ according to each institution's strategy. This paper describes the kinds of log records produced by users' behavior on an information system. Its goal is to examine the case of an information center that operates log analysis and to derive a plan for improving information services.

A Stability Verification of Backup System for Disaster Recovery (재해 복구를 위한 백업 시스템의 안정성 검증)

  • Lee, Moon-Goo
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.205-214 / 2012
  • The main concern of IT operation managers is protecting corporate assets from system failures and disasters. Therefore, this research proposes a backup system for disaster recovery. In the conventional backup method, when a database update occurs, the record is saved in the redo log, and when the log file exceeds its expected size, it is saved to the archive log in sequence. Thus, data that changes in real time while the database is being updated can be lost during the backup process. The proposed backup system backs up the redo log to a transaction log database in real time, and backs up records that could be omitted by the conventional method to the archive log. When recovering data, the redo log can be recovered online in real time, which minimizes data loss. Data recovery is also performed with a multi-threaded processing method, and the system is designed so that performance is improved. To verify the stability of the backup system, CPN (Coloured Petri Net) is introduced; each step of the backup system is represented in diagram form, and its stability is verified based on the definitions and theorems of CPN.
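
The multi-threaded recovery step described above can be sketched as follows, assuming the backed-up redo records are plain dicts carrying a page identifier and that partitioning by page preserves the required replay order; both assumptions are made for the illustration and are not taken from the paper.

```python
import threading
from collections import defaultdict

def recover_parallel(backup_log, apply_fn, workers=4):
    """Replay backed-up redo records with multiple threads (illustrative).

    backup_log: redo records in log order, each a dict with a 'page' key
                (a hypothetical layout, not the paper's actual format).
    apply_fn:   function that reapplies one redo record to the database.
    """
    # Partition by page so each page's updates are still applied in log order.
    by_page = defaultdict(list)
    for rec in backup_log:
        by_page[rec["page"]].append(rec)
    partitions = list(by_page.values())

    def worker(idx):
        # Each thread replays every `workers`-th page partition.
        for part in partitions[idx::workers]:
            for rec in part:
                apply_fn(rec)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```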

Trends of Web-based OPAC Search Behavior via Transaction Log Analysis (트랜잭션 로그 분석을 통한 웹기반 온라인목록의 검색행태 추이 분석)

  • Lee, Sung-Sook
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.23 no.2 / pp.209-233 / 2012
  • In this study, in order to examine the overall information seeking behavior of Web-based OPAC users, transaction log files covering seven years were analyzed. Web-based OPAC information seeking behavior was studied from the perspectives of search strategy and search failure. For search strategy, the analysis covered search type, search options, Boolean operators, length of search text, number of words per query, number of Web-based OPAC uses, and use by time of day and day of the week. For search failure, the overall failure ratio, the failure ratio by search option, and the failure ratio by Boolean operator were analyzed. The results of this study are expected to be utilized for future OPAC system and service improvement.
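
As a small illustration of the failure-ratio analysis described here, the sketch below computes a zero-hit failure ratio per search option from (search_option, hit_count) pairs; the record layout is hypothetical, not the OPAC log format used in the study.

```python
from collections import defaultdict

def failure_ratio_by_option(records):
    """records: iterable of (search_option, hit_count) pairs (hypothetical layout).

    A search counts as a failure when it returns zero hits.
    Returns {search_option: failure ratio}."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for option, hits in records:
        totals[option] += 1
        if hits == 0:
            failures[option] += 1
    return {opt: failures[opt] / totals[opt] for opt in totals}

# Toy example
logs = [("title", 12), ("title", 0), ("author", 0), ("author", 3), ("author", 0)]
print(failure_ratio_by_option(logs))   # {'title': 0.5, 'author': 0.666...}
```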

Analyzing Fluctuation of the Rent-Transaction Price Ratio under the Influence of the Housing Transaction and Jeonse Rental Price (주택매매가격 및 전세가격 변화에 따른 전세/매매가격비율 변동 분석)

  • Park, Jae-Hyun;Lee, Sang-Hyo;Kim, Jae-Jun
    • Journal of The Korean Digital Architecture Interior Association / v.10 no.2 / pp.13-20 / 2010
  • Uncertainty in housing price fluctuations has a great impact on the overall economy because the housing market matters both as a place of residence and as an investment target. Therefore, estimating housing market conditions is a highly important task for setting national policy. A primary indicator of the housing market is the ratio between the rental and transaction prices of housing. This research explores the dynamic relationships among the rent-transaction price ratio, the housing transaction price, and the jeonse rental price using a Vector Autoregressive (VAR) model, in order to demonstrate the significance of shifts in the rent-transaction price ratio in response to changes in the housing transaction and rental markets. The research uses the housing transaction price index and the housing rental price index to measure the transaction and rental prices of housing; the indices and the price ratio data were derived from statistical data of Kookmin Bank. The time series contains monthly data from January 1999 to November 2009, and the data were log-transformed to convert them to level variables. The analysis results suggest that a rising ratio between the rent and transaction prices of housing should be interpreted as a precursor to a rise in the housing transaction price, rather than as a mere indicator of the current trend.
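
As an illustration of the modeling step, the sketch below fits a VAR model to log-transformed monthly index series with statsmodels; the column names, the random placeholder data, and the AIC-based lag selection are assumptions made for the example, not the paper's actual data or specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical monthly series; column names are placeholders standing in for
# the KB housing indices and the jeonse/price ratio used in the paper.
idx = pd.date_range("1999-01", "2009-11", freq="MS")
df = pd.DataFrame(
    np.random.rand(len(idx), 3) + 100.0,
    index=idx,
    columns=["transaction_price_index", "jeonse_price_index", "jeonse_to_price_ratio"],
)

log_df = np.log(df)                          # log-transform the level series

model = VAR(log_df)
results = model.fit(maxlags=12, ic="aic")    # lag order by AIC (an assumption)
print(results.summary())
```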

Performance Analysis of Flash Memory SSD with Non-volatile Cache for Log Storage (비휘발성 캐시를 사용하는 플래시 메모리 SSD의 데이터베이스 로깅 성능 분석)

  • Hong, Dae-Yong;Oh, Gi-Hwan;Kang, Woon-Hak;Lee, Sang-Won
    • Journal of KIISE / v.42 no.1 / pp.107-113 / 2015
  • In a database system, updates to pages made by a transaction must be stored in secondary storage before the commit completes. Generic secondary storage devices have volatile DRAM caches to hide the long latency of the non-volatile media. However, since logs written only to the volatile DRAM cache are not durable, logging latency cannot be hidden. Recently, a flash SSD with a capacitor-backed DRAM cache was developed to overcome this shortcoming. Storage devices with such a non-volatile cache can increase transaction throughput because transactions can commit as soon as their logs reach the cache. In this paper, we analyze performance in terms of transaction throughput when an SSD with a capacitor-backed DRAM cache is used as log storage. Transaction throughput can be improved more than three times by committing right after the logs are stored in the DRAM cache, rather than in the secondary storage medium itself. We also show that, with proper tuning, over 73% of the ideal logging performance can be achieved.
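
The commit-path difference described here can be illustrated with a small timing sketch: one path forces every log write to the medium with fsync, the other acknowledges the commit as soon as the write is accepted by a cache. The file paths, record size, and use of the OS page cache as a stand-in for the capacitor-backed DRAM cache are assumptions for illustration, not the paper's benchmark setup.

```python
import os
import time

LOG_RECORD = b"x" * 512                       # dummy 512-byte log record

def run_commits(path, durable_media, n=1000):
    """Append n log records and return commits per second.

    durable_media=True  : fsync on every commit (log must reach the medium).
    durable_media=False : return once the write is accepted, standing in for
                          a non-volatile write cache treated as durable."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, LOG_RECORD)
        if durable_media:
            os.fsync(fd)                      # commit waits for the medium
    elapsed = time.perf_counter() - start
    os.close(fd)
    return n / elapsed

if __name__ == "__main__":
    print("fsync per commit :", run_commits("/tmp/wal_sync.log", True))
    print("cache-ack commit :", run_commits("/tmp/wal_cache.log", False))
```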

A Study on Online Catalog Search Behavior: Focusing on Transaction Log Analysis of the LINNET System (온라인 목록 검색 행태에 관한 연구 - LINNET 시스템의 Transaction log 분석을 중심으로)

  • 윤구호;심병규
    • Journal of Korean Library and Information Science Society / v.21 / pp.253-289 / 1994
  • The purpose of this study is to examine the search patterns of LINNET (Library Information Network System) OPAC users through the transaction logs maintained by the POSTECH (Pohang University of Science and Technology) Central Library, in order to provide feedback for OPAC system design. The results of this study are as follows. First, during the period analyzed there were 11,218 log-ins and 40,627 transaction logs, or 3.62 retrievals per log-in. Title keyword was the most frequently used search key, while accession number, bibliographic control number, and call number were used very infrequently. Second, 47.02% of OPAC searches resulted in zero retrievals, and bibliographic control number was the least successful search key. Users displayed full information for 2.01% of searches and local information for 64.27% of full-information displays. Third, special or advanced retrieval features were used very infrequently: only 22.67% of searches used right truncation, 0.71% used a qualifier, and a Boolean operator appeared in only one of every 22 retrievals. The most frequently used operator was 'and (&)' with title keywords, while bibliographic control number (N) and accession number (R) were not used with any operator. The causes of search failure were as follows: (1) the item was not in the database (15,764 times, 79.42%); (2) a wrong search key was used (3,761 times, 18.95%); (3) a senseless string (garbage) was entered (324 times, 1.63%). On the basis of these results, some recommendations are suggested to improve the search success rate. First, appropriate user education and an online help function would let users search the LINNET OPAC more efficiently. Second, several corrections to the retrieval software would decrease the search failure rate. Third, the system could apply right truncation to every search term by default; this would increase the success rate but should be considered carefully, since the number of hits could become excessive and system overhead could occur. Fourth, the system could apply a Boolean operator by default to every keyword retrieval when the user enters two or more words at a time. Fifth, the system could help searchers overcome wrong key selection through automatic Korean/English input mode switching.
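
Summary statistics of the kind reported above can be derived from transaction logs with a short routine like the sketch below; the field names in the record layout are hypothetical, not the LINNET log format.

```python
def summarize_opac_logs(records):
    """records: iterable of dicts with hypothetical keys
    'login_id', 'hits', 'truncated', 'boolean_op'.

    Returns overall counts, searches per log-in, the zero-hit (failure)
    ratio, and the shares of searches using truncation or Boolean operators."""
    logins = set()
    total = zero_hits = truncated = with_bool = 0
    for rec in records:
        logins.add(rec["login_id"])
        total += 1
        zero_hits += rec["hits"] == 0
        truncated += bool(rec["truncated"])
        with_bool += rec["boolean_op"] is not None
    return {
        "log-ins": len(logins),
        "searches": total,
        "searches per log-in": total / len(logins),
        "zero-hit ratio": zero_hits / total,
        "right-truncation share": truncated / total,
        "boolean-operator share": with_bool / total,
    }
```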

The Analysis Framework for User Behavior Model using Massive Transaction Log Data (대규모 로그를 사용한 유저 행동모델 분석 방법론)

  • Lee, Jongseo;Kim, Songkuk
    • The Journal of Bigdata / v.1 no.2 / pp.1-8 / 2016
  • User activity logs contain a great deal of hidden information, but they are unstructured and too massive to process directly, so much of this information remains uncovered. In particular, activity logs include time series data, which can reveal a great deal, but raw log data cannot be used directly to analyze user behavior. Analyzing a user activity model requires a transformation process supported by an additional framework. For these reasons, we first need a framework for user activity model analysis and then an approach to the data. In this paper, we propose a novel framework for analyzing user activity models effectively. The framework includes a MapReduce process for quickly analyzing massive data in a distributed environment and a data architecture design for analyzing user activity models. We also explain the data model in detail based on the log design of a real online service. Through this process, we describe which analysis model fits which data model. This raises understanding of how to process massive logs and design analysis models.
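
The MapReduce step of such a framework can be sketched in miniature: a map phase that emits (user_id, 1) per activity record, a shuffle that groups by key, and a reduce phase that sums the counts. The tab-separated log layout and the plain-Python stages are illustrative stand-ins for a real log schema and a distributed runtime.

```python
from collections import defaultdict

# Hypothetical raw log lines: "timestamp<TAB>user_id<TAB>action"
raw_logs = [
    "2016-01-03T10:00:00\tu1\tlogin",
    "2016-01-03T10:01:12\tu1\tsearch",
    "2016-01-03T10:02:30\tu2\tlogin",
]

def map_phase(line):
    """Emit (user_id, 1) for each activity record."""
    _, user_id, _ = line.split("\t")
    yield user_id, 1

def shuffle(pairs):
    """Group mapped values by key, as the MapReduce runtime would."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

def reduce_phase(key, values):
    """Sum the per-user activity counts."""
    return key, sum(values)

mapped = [pair for line in raw_logs for pair in map_phase(line)]
result = dict(reduce_phase(k, vs) for k, vs in shuffle(mapped))
print(result)   # {'u1': 2, 'u2': 1}
```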
