• Title/Summary/Keyword: Frequent Itemsets


IMPLEMENTATION OF SUBSEQUENCE MAPPING METHOD FOR SEQUENTIAL PATTERN MINING

  • Trang, Nguyen Thu;Lee, Bum-Ju;Lee, Heon-Gyu;Ryu, Keun-Ho
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.627-630
    • /
    • 2006
  • Sequential pattern mining addresses the problem of discovering the maximal frequent sequences that exist in a given database. Sequential data appear throughout daily and scientific life in forms such as text, weather data, satellite data streams, business transactions, telecommunication records, experimental runs, DNA sequences, and medical record histories. Discovering sequential patterns can help users and scientists predict upcoming activities, interpret recurring phenomena, and extract similarities. To that end, the core of sequential pattern mining is finding the sequences that occur frequently across all data sequences. Beyond discovering frequent itemsets, sequential pattern mining requires arranging those itemsets into sequences and determining which of those sequences are frequent. Before mining sequences, therefore, the main task is checking whether one sequence is a subsequence of another sequence in the database. In this paper, we implement a subsequence matching method as the preprocessing step for sequential pattern mining. The matched sequences in our implementation are normalized sequences in the form of number chains. The method returns the matching information between the input mapped sequences (a minimal sketch of such a subsequence check follows this entry).

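As a rough illustration of the subsequence check described above, the following Python sketch tests whether one normalized number-chain sequence is contained, in order, in another. The function name and data layout are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a subsequence check between normalized "number chain"
# sequences, used as a preprocessing step before sequential pattern mining.

def is_subsequence(candidate, data_sequence):
    """Return True if every element of `candidate` appears in
    `data_sequence` in the same relative order (gaps allowed)."""
    it = iter(data_sequence)
    return all(item in it for item in candidate)

# Example: the chain (3, 7) is contained in (1, 3, 5, 7, 9), but (7, 3) is not.
print(is_subsequence([3, 7], [1, 3, 5, 7, 9]))   # True
print(is_subsequence([7, 3], [1, 3, 5, 7, 9]))   # False
```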

Implementation of Subsequence Mapping Method for Sequential Pattern Mining

  • Trang Nguyen Thu;Lee Bum-Ju;Lee Heon-Gyu;Park Jeong-Seok;Ryu Keun-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.5
    • /
    • pp.457-462
    • /
    • 2006
  • Sequential pattern mining addresses the problem of discovering the maximal frequent sequences that exist in a given database. Sequential data appear throughout daily and scientific life in forms such as text, weather data, satellite data streams, business transactions, telecommunication records, experimental runs, DNA sequences, and medical record histories. Discovering sequential patterns can help users and scientists predict upcoming activities, interpret recurring phenomena, and extract similarities. To that end, the core of sequential pattern mining is finding the sequences that occur frequently across all data sequences. Beyond discovering frequent itemsets, sequential pattern mining requires arranging those itemsets into sequences and determining which of those sequences are frequent. Before mining sequences, therefore, the main task is checking whether one sequence is a subsequence of another sequence in the database. In this paper, we implement a subsequence matching method as the preprocessing step for sequential pattern mining. The matched sequences in our implementation are normalized sequences in the form of number chains. The method returns the matching information between the input mapped sequences.

Intelligent Speech Web Considering User Inclination (사용자의 성향을 고려하는 지능형 음성 웹)

  • Kwon, Hyeong-Joon;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.15B no.4
    • /
    • pp.347-354
    • /
    • 2008
  • In this paper, we propose a method for making the speech Web personalized and intelligent. The proposed system records previously requested information as transactions, explores association rules over those transactions, and discovers itemsets from frequent requests. Based on these frequent itemsets, the method recommends relevant information to users whose inclinations are similar to those of previous users. Implementing and experimenting with the proposed system for verification, we confirmed that it can recommend previously and frequently requested information as relevant information.
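
A minimal sketch of the recommendation idea in this abstract, assuming request logs are stored as simple item lists; the pair counting, threshold, and function names are illustrative and not the authors' implementation.

```python
# Illustrative sketch: mine frequently co-requested items from past request
# transactions and use them to suggest related information.
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Count item pairs across transactions and keep the frequent ones."""
    pair_counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            pair_counts[pair] += 1
    n = len(transactions)
    return {p: c / n for p, c in pair_counts.items() if c / n >= min_support}

def recommend(item, frequent, top_k=3):
    """Suggest items that frequently co-occur with `item` in past requests."""
    related = [(b if a == item else a, s)
               for (a, b), s in frequent.items() if item in (a, b)]
    return [i for i, _ in sorted(related, key=lambda x: -x[1])[:top_k]]

logs = [["weather", "traffic"], ["weather", "news"], ["weather", "traffic"]]
print(recommend("weather", frequent_pairs(logs, min_support=0.5)))  # ['traffic']
```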

Discovery Temporal Association Rules in Distributed Database (분산데이터베이스 환경하의 시간연관규칙 적용)

  • Yan Zhao;Kim, Long;Sungbo Seo;Ryu, Keun-Ho
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.115-117
    • /
    • 2004
  • Mining association rules in distributed database environments has recently become a central problem in knowledge discovery. The data are located on different shared-nothing machines, and each data site grows over time, so mining global frequent itemsets is hard and inefficient across a large number of distributed servers. In many distributed databases, the time component that is usually attached to transactions contains meaningful time-related rules. In this paper, we design a new DTA (distributed temporal association) algorithm that combines temporal concepts with distributed association rules. The algorithm determines the time interval for applying association rules in distributed databases. The experimental results show that DTA can generate interesting frequent itemsets correlated with time periods (a sketch of merging per-interval counts from multiple sites follows this entry).

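A hedged sketch of the general idea of combining per-site, per-interval itemset counts into globally frequent itemsets; the data layout and threshold handling are assumptions, and the DTA algorithm's own candidate generation is not reproduced.

```python
# Each site counts itemsets per time interval locally; the globally frequent
# itemsets for an interval are found by merging the local counts.
from collections import Counter

def merge_site_counts(site_counts, total_transactions, min_support):
    """site_counts: list of Counters, one per site, keyed by
    (time_interval, itemset). Returns globally frequent (interval, itemset)."""
    global_counts = Counter()
    for counts in site_counts:
        global_counts.update(counts)
    return {key: c for key, c in global_counts.items()
            if c / total_transactions[key[0]] >= min_support}

site_a = Counter({("2004-Q1", frozenset({"a", "b"})): 30})
site_b = Counter({("2004-Q1", frozenset({"a", "b"})): 25})
totals = {"2004-Q1": 100}                      # transactions per interval
print(merge_site_counts([site_a, site_b], totals, 0.5))
```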

Verification Algorithm for the Duplicate Verification Data with Multiple Verifiers and Multiple Verification Challenges

  • Xu, Guangwei;Lai, Miaolin;Feng, Xiangyang;Huang, Qiubo;Luo, Xin;Li, Li;Li, Shan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.2
    • /
    • pp.558-579
    • /
    • 2021
  • Cloud storage provides flexible data storage services that let data owners remotely outsource their data, reducing their storage and management costs. These outsourced data raise security concerns for the data owner due to possible malicious deletion or corruption by the cloud service provider. Data integrity verification is an important way to check the integrity of outsourced data. However, existing verification schemes only consider the case in which a single verifier launches multiple data verification challenges, and neglect the verification overhead of multiple challenges launched by multiple verifiers at around the same time. In that case, the duplicate data in multiple challenges are verified repeatedly, so verification resources are consumed in vain. We propose a duplicate data verification algorithm based on multiple verifiers and multiple challenges to reduce the verification overhead. The algorithm dynamically schedules the verifiers' challenges based on verification time and on the frequent itemsets of duplicate verification data in the challenge sets, obtained by applying the FP-Growth algorithm, and computes batch proofs for the frequent itemsets. The challenges are then split into two parts, duplicate data and unique data, according to the results of the data extraction. Finally, the proofs of the duplicate data and the unique data are computed and combined to generate a complete proof for every original challenge. Theoretical analysis and experimental evaluation show that the algorithm reduces the verification cost and ensures the correctness of data integrity verification through flexible batch data verification.
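
A small illustrative sketch of the extraction step only: treating each verifier's challenge as a transaction of block identifiers and running an off-the-shelf FP-Growth (here mlxtend, an assumption) to find blocks challenged by many verifiers. The paper's scheduling and batch-proof computation are not shown.

```python
# Treat each verifier's challenge as a transaction of data block IDs and use
# FP-Growth to find the blocks (and block combinations) that appear in many
# challenges, i.e. candidates for batch proofs. Block IDs are made up.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

challenges = [
    ["b1", "b2", "b5"],
    ["b1", "b2", "b7"],
    ["b1", "b2", "b5", "b9"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(challenges).transform(challenges), columns=te.columns_)
# Itemsets appearing in at least 2 of the 3 challenges.
print(fpgrowth(onehot, min_support=2/3, use_colnames=True))
```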

An efficient algorithm to search frequent itemsets using TID Lists (TID List를 이용한 빈발항목의 효율적인 탐색 알고리즘)

  • 고윤희;김현철
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04b
    • /
    • pp.136-139
    • /
    • 2002
  • Many studies have been conducted to improve the performance of the Apriori algorithm, the representative method for finding frequent itemsets in association rule mining. In this paper, we propose a method that maintains a transaction ID list (TIDList) for the k-itemsets generated at each pass over the transaction database (TDB) and uses the lists to find (k+1)-itemsets efficiently. The frequency and TIDList of a frequent (k+1)-itemset (k>0) are obtained directly from the TIDLists of the k-itemsets, with no scan of the TDB at all. This not only greatly reduces the search complexity of finding frequent itemsets but also provides information on how the frequent itemsets are distributed over time (a minimal sketch of the TID-list join follows this entry).

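A minimal sketch of the TID-list join described above, assuming TID lists are kept as Python sets: the support of a (k+1)-itemset is obtained by intersecting the TID lists of its k-subsets, without rescanning the TDB.

```python
# The TID list of a (k+1)-itemset is the intersection of the TID lists of
# its k-subsets, so support can be computed without another database scan.

def join_tidlists(tidlist_a, tidlist_b):
    """TID list of the combined itemset = intersection of the two TID lists."""
    return tidlist_a & tidlist_b

# TID lists of 1-itemsets, as built on the first (and only) scan of the TDB.
tidlists = {
    frozenset({"a"}): {1, 2, 3, 5},
    frozenset({"b"}): {2, 3, 4, 5},
}
ab = join_tidlists(tidlists[frozenset({"a"})], tidlists[frozenset({"b"})])
print(ab, "support:", len(ab))   # {2, 3, 5} support: 3
```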

Decision process for right association rule generation (올바른 연관성 규칙 생성을 위한 의사결정과정의 제안)

  • Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.2
    • /
    • pp.263-270
    • /
    • 2010
  • Data mining is the process of sorting through large amounts of data and picking out useful information. An important goal of data mining is to discover, define, and determine the relationships among several variables. Association rule mining is an important research topic in data mining; an association rule technique finds the relations among items in a massive database. It consists of two steps: finding frequent itemsets and then extracting interesting rules from those frequent itemsets. Several interestingness measures have been developed for association rule mining; they are useful in that they give statistical or logical grounds for pruning uninteresting rules. This paper explores some problems with two interestingness measures, confidence and net confidence, and then proposes a decision process for generating correct association rules using these measures.
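
A minimal sketch of the plain confidence measure discussed above, computed directly from transactions; the paper's net confidence variant is not reproduced here, since its exact definition is specific to that work.

```python
# Support and confidence of an association rule A -> B, computed from raw
# transactions (assumes the antecedent occurs at least once).

def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """confidence(A -> B) = support(A ∪ B) / support(A)."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

db = [frozenset(t) for t in (["a", "b"], ["a", "b", "c"], ["a", "c"], ["b", "c"])]
print(confidence(frozenset({"a"}), frozenset({"b"}), db))  # 0.666...
```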

A Method for Frequent Itemsets Mining from Data Stream (데이터 스트림 환경에서 효율적인 빈발 항목 집합 탐사 기법)

  • Seo, Bok-Il;Kim, Jae-In;Hwang, Bu-Hyun
    • The KIPS Transactions:PartD
    • /
    • v.19D no.2
    • /
    • pp.139-146
    • /
    • 2012
  • Data mining is widely used to discover knowledge in many fields. Although there are many methods for discovering association rules, most are frequency-based and therefore not appropriate for a stream environment, where event data are generated continuously and it is too expensive to store all of the data. In this paper, we propose a new method for discovering association rules in a stream environment. Our method uses a variable window to extract data items; variable windows change size according to the gap between occurrences of the same target event. Data are extracted with a COBJ (count object) calculation method, and FPMDSTN (Frequent Pattern Mining over Data Stream using Terminal Node) discovers association rules from the extracted items. Experiments show that our method is more efficient in a stream environment than conventional methods.
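
For orientation only, a generic sketch of frequent-itemset counting over a bounded window of a stream. This is not the paper's variable-window/COBJ/FPMDSTN method; the fixed window size, the enumeration of only 1- and 2-itemsets, and the threshold are assumptions.

```python
# Keep counts only for transactions inside a bounded window: new transactions
# increment counts, expired transactions decrement them.
from collections import Counter, deque
from itertools import combinations

class WindowMiner:
    def __init__(self, window_size, min_support):
        self.window = deque()
        self.counts = Counter()
        self.window_size = window_size
        self.min_support = min_support

    def _itemsets(self, transaction):
        items = sorted(set(transaction))
        for k in (1, 2):                       # 1- and 2-itemsets for brevity
            yield from combinations(items, k)

    def add(self, transaction):
        self.window.append(transaction)
        self.counts.update(self._itemsets(transaction))
        if len(self.window) > self.window_size:   # expire the oldest transaction
            old = self.window.popleft()
            self.counts.subtract(self._itemsets(old))

    def frequent(self):
        n = len(self.window)
        return {i: c for i, c in self.counts.items() if c / n >= self.min_support}

m = WindowMiner(window_size=3, min_support=0.5)
for t in (["a", "b"], ["a", "c"], ["a", "b"], ["b", "c"]):
    m.add(t)
print(m.frequent())
```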

An Algorithm for Updating Discovered Association Rules in Data Mining (데이타 마이닝에서 기존의 연관 규칙을 갱신하는 앨고리듬 개발)

  • 이동명;지영근;황종원;강맹규
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.20 no.43
    • /
    • pp.265-276
    • /
    • 1997
  • There have been many studies on the efficient discovery of association rules in large databases. However, maintaining the discovered rules is nontrivial, because a database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. The major idea of the updating algorithm is to reuse the information of the old large itemsets and to integrate the support information of the new large itemsets, in order to substantially reduce the pool of candidate sets to be re-examined. In this paper, an updating algorithm is proposed for the efficient maintenance of discovered association rules when new transaction data are added to a transaction database. The superiority of the proposed algorithm is shown by comparison with the previously proposed FUP algorithm (a sketch of the count-merging idea follows this entry).

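A hedged sketch of the count-merging idea referenced above: old large itemsets are re-counted only over the newly added transactions, and their supports are re-checked against the enlarged database. The candidate pruning of FUP and of the proposed algorithm is not reproduced.

```python
# Merge stored counts from the original database with counts taken over the
# increment only, and keep the itemsets that remain large.
def update_counts(old_counts, old_size, increment, min_support):
    """old_counts: {itemset: count in the original DB}."""
    new_size = old_size + len(increment)
    updated = {}
    for itemset, count in old_counts.items():
        count += sum(itemset <= frozenset(t) for t in increment)
        if count / new_size >= min_support:
            updated[itemset] = count          # still large in DB + increment
    return updated, new_size

old = {frozenset({"a"}): 60, frozenset({"a", "b"}): 45}
print(update_counts(old, old_size=100, increment=[["a"], ["b", "c"]], min_support=0.5))
```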

Partition Algorithm for Updating Discovered Association Rules in Data Mining (데이터마이닝에서 기존의 연관규칙을 갱신하는 분할 알고리즘)

  • 이종섭;황종원;강맹규
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.23 no.54
    • /
    • pp.1-11
    • /
    • 2000
  • This study suggests a partition algorithm for updating discovered association rules in a large database, because a database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. The partition algorithm updates strong association rules efficiently over the whole updated database by reusing the information of the old large itemsets. Since it is difficult to find the new set of large itemsets over the whole updated database after an incremental database has been added to the original database, the proposed partition algorithm scans only the incremental database. This method of generating large itemsets differs from those of FUP (Fast Update) and KDP (Kim Dong Pil); a sketch of the partition intuition follows this entry.

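A small sketch of the partition intuition, limited to 1-itemsets for brevity: any itemset that is large in the updated database must be large in the original database or in the incremental partition, so candidates can be drawn from the old large itemsets plus the itemsets found large in the increment alone. Names and thresholds are illustrative, not the paper's algorithm.

```python
# Candidate generation from the old large itemsets and the itemsets that are
# locally large in the incremental partition (1-itemsets only, for brevity).
from collections import Counter

def candidate_itemsets(old_large, increment, min_support):
    """Union of previously large itemsets and itemsets large in the increment."""
    counts = Counter()
    for t in increment:
        counts.update(frozenset({i}) for i in set(t))
    local_large = {i for i, c in counts.items() if c / len(increment) >= min_support}
    return set(old_large) | local_large

old_large = {frozenset({"a"}), frozenset({"a", "b"})}
print(candidate_itemsets(old_large, [["c"], ["c", "d"]], min_support=0.5))
```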