• Title/Summary/Keyword: Frequent itemsets


Discovering Frequent Itemsets Reflected User Characteristics Using Weighted Batch based on Data Stream (스트림 데이터 환경에서 배치 가중치를 이용하여 사용자 특성을 반영한 빈발항목 집합 탐사)

  • Seo, Bok-Il;Kim, Jae-In;Hwang, Bu-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.1
    • /
    • pp.56-64
    • /
    • 2011
  • It is difficult to discover frequent itemsets over the whole of a data stream because a data stream is infinite and continuous. Therefore, a specialized data mining method that reflects the properties of the data and the requirements of users is needed. In this paper, we propose FIMWB, a method for discovering frequent itemsets that reflects the property that recent events are more important than old ones. The data stream is split into batches according to a given time interval, and our method assigns a weight to each batch to reflect the user's interest in recent events. FP-Digraph then discovers the frequent itemsets using the result of FIMWB. Experimental results show that FIMWB reduces the generation of useless items and that FP-Digraph is better suited to a real-time environment than a tree-based method (FP-Tree).
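
To make the batch-weighting idea above concrete, here is a minimal sketch in Python. It is not the authors' FIMWB/FP-Digraph implementation; the decay factor, the weighted-support threshold, and the restriction to short itemsets are assumptions chosen only for illustration.

```python
from itertools import combinations
from collections import defaultdict

def weighted_frequent_itemsets(batches, decay=0.8, min_wsup=1.5, max_len=2):
    """Count itemsets over a stream split into batches, giving newer
    batches a larger weight so that recent events dominate the result.

    batches  : list of batches, oldest first; each batch is a list of
               transactions (iterables of items)
    decay    : assumed per-batch decay; the newest batch has weight 1.0
    min_wsup : assumed weighted-support threshold
    """
    support = defaultdict(float)
    for age, batch in enumerate(reversed(batches)):  # age 0 = newest batch
        w = decay ** age                              # older batches weigh less
        for tx in batch:
            items = sorted(set(tx))
            for k in range(1, max_len + 1):
                for itemset in combinations(items, k):
                    support[itemset] += w
    return {s: v for s, v in support.items() if v >= min_wsup}

# The pair ('a', 'b') appears in both batches, but the newer occurrences
# contribute more to its weighted support.
old_batch = [['a', 'b', 'c'], ['a', 'c']]
new_batch = [['a', 'b'], ['a', 'b', 'd']]
print(weighted_frequent_itemsets([old_batch, new_batch]))
```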

Finding Frequent Itemsets Over Data Streams in Confined Memory Space (한정된 메모리 공간에서 데이터 스트림의 빈발항목 최적화 방법)

  • Kim, Min-Jung;Shin, Se-Jung;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.15D no.6
    • /
    • pp.741-754
    • /
    • 2008
  • Due to the characteristics of a data stream, it is very important to confine the memory usage of a data mining process regardless of the amount of information generated by the stream. For this purpose, this paper proposes the Prime Pattern Tree (PPT) for finding frequent itemsets over data streams within a confined memory space. Unlike a prefix tree, a node of a PPT can maintain the information needed to estimate the current supports of several itemsets together. Grouping itemsets into a prime pattern reduces the total number of nodes, and the grouping is controlled by the split_delta $S_{\delta}$, which determines both the size and the accuracy of the PPT. A smaller $S_{\delta}$ yields better accuracy, because a large $S_{\delta}$ forces the frequencies of many itemsets to be estimated rather than maintained exactly, so the trade-off between the size of the PPT and the accuracy of the mining result must be considered. Based on this characteristic, the size and the accuracy of the PPT can be flexibly controlled by merging or splitting nodes during the mining process. To find all frequent itemsets over a data stream, this paper uses the PPT in place of the prefix tree in the previously proposed estDec method, which makes it possible to optimize memory usage when finding frequent itemsets in a confined memory space. Finally, the performance of the proposed method is analyzed through a series of experiments to identify its various characteristics.
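
One rough way to picture the size/accuracy trade-off governed by split_delta $S_{\delta}$ is a node that keeps shared support bounds for a group of itemsets and is split only when the bounds drift too far apart. The sketch below is an assumption-heavy illustration, not the PPT or estDec structure from the paper; the field names and the split rule are invented for the example.

```python
class SharedCountNode:
    """One node that estimates the supports of several itemsets with a
    single pair of bounds (illustrative stand-in for a prime pattern)."""

    def __init__(self, itemsets):
        self.itemsets = set(itemsets)  # itemsets summarized together
        self.min_count = 0             # lower bound on their true supports
        self.max_count = 0             # upper bound on their true supports

    def update(self, transaction):
        tx = set(transaction)
        hits = [s for s in self.itemsets if set(s) <= tx]
        if hits:
            self.max_count += 1                # at least one member occurred
            if len(hits) == len(self.itemsets):
                self.min_count += 1            # every member occurred

    def needs_split(self, s_delta):
        # When the gap between the bounds exceeds s_delta, the shared
        # estimate is too coarse and the node should be split into
        # smaller groups (smaller s_delta -> more nodes, better accuracy).
        return (self.max_count - self.min_count) > s_delta

node = SharedCountNode([('a',), ('a', 'b')])
node.update(['a', 'c'])    # only ('a',) occurs, so the bounds diverge
print(node.min_count, node.max_count, node.needs_split(s_delta=0))
```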

Mining Frequent Itemsets using Time Unit Grouping (시간 단위 그룹핑을 이용한 빈발 아이템셋 마이닝)

  • Hwang, Jeong Hee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.6
    • /
    • pp.647-653
    • /
    • 2022
  • Data mining is a technique that discovers knowledge, such as relationships and patterns among data, by exploring and analyzing the data. Data occurring in the real world include temporal attributes. Temporal data mining, which finds useful knowledge in data with temporal properties, can be used effectively for predictive judgments about the future. In this paper, we propose an algorithm that uses time-unit grouping to partition the database into regular time periods and discover frequent itemsets within each time unit. The proposed algorithm organizes the transactions and items belonging to a time unit into a matrix and discovers the frequent items of the time unit through grouping. In the performance evaluation, the execution time was 1.2 times that of the existing algorithm, but more than twice as many frequent pattern itemsets were discovered.
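
The grouping step described above can be sketched as follows. This is a minimal Python illustration under assumed names; the abstract does not specify the exact matrix layout or grouping rule, so those details are guesses made for the example.

```python
from collections import defaultdict

def frequent_items_per_time_unit(transactions, unit, min_sup):
    """Group timestamped transactions into fixed time units, build a 0/1
    item-occurrence matrix per unit, and report the items that are
    frequent within each unit.

    transactions : list of (timestamp, items) pairs
    unit         : length of a time unit (same scale as the timestamps)
    min_sup      : minimum number of transactions containing an item
    """
    groups = defaultdict(list)
    for ts, items in transactions:
        groups[int(ts // unit)].append(set(items))

    result = {}
    for period, txs in sorted(groups.items()):
        vocab = sorted(set().union(*txs))
        # rows = transactions in this unit, columns = items (the "matrix")
        matrix = [[1 if item in tx else 0 for item in vocab] for tx in txs]
        counts = [sum(col) for col in zip(*matrix)]
        result[period] = [v for v, c in zip(vocab, counts) if c >= min_sup]
    return result

data = [(1, ['a', 'b']), (2, ['a', 'c']), (11, ['b', 'c']), (12, ['b'])]
print(frequent_items_per_time_unit(data, unit=10, min_sup=2))
```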

An Extended Frequent Pattern Tree for Hiding Sensitive Frequent Itemsets (민감한 빈발 항목집합 숨기기 위한 확장 빈발 패턴 트리)

  • Lee, Dan-Young;An, Hyoung-Geun;Koh, Jae-Jin
    • The KIPS Transactions:PartD
    • /
    • v.18D no.3
    • /
    • pp.169-178
    • /
    • 2011
  • Recently, data sharing between enterprises or organizations has become necessary for cooperative work. In this process, when an enterprise opens its database to affiliates, sensitive information can be leaked. To resolve this problem, the sensitive information needs to be hidden in the database. Previous research on hiding sensitive information applied various heuristic algorithms to maintain the quality of the database, but few studies have analyzed the effect on the items modified during the hiding process or tried to minimize the number of hidden items. This paper proposes the eFP-Tree (Extended Frequent Pattern Tree), based on the FP-Tree (Frequent Pattern Tree), for hiding sensitive frequent itemsets. Node formation in the eFP-Tree uses borders to minimize the impact on non-sensitive frequent itemsets during the hiding process, by organizing transaction, sensitive, and border information differently from previous approaches. When the eFP-Tree was applied to an example transaction database, fewer than 10% of the items were lost, showing that it is more effective than the existing algorithm and keeps the quality of the database close to optimal.
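
As background, the general sanitization goal, lowering the support of a sensitive itemset below the mining threshold while changing as few transactions as possible, can be sketched as below. This is a naive generic baseline only; it does not reproduce the eFP-Tree or the border-based technique of the paper.

```python
def hide_sensitive_itemset(transactions, sensitive, min_sup):
    """Remove one item of the sensitive itemset from supporting
    transactions until its support falls below min_sup (a naive
    sanitization baseline, not the paper's eFP-Tree method)."""
    sensitive = set(sensitive)
    victim = next(iter(sensitive))               # item chosen for removal (arbitrary here)
    supporting = [tx for tx in transactions if sensitive <= set(tx)]
    to_modify = len(supporting) - (min_sup - 1)  # transactions that must change
    for tx in supporting[:max(to_modify, 0)]:
        tx.remove(victim)                        # transaction no longer supports the itemset
    return transactions

db = [['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'd'], ['b', 'c']]
print(hide_sensitive_itemset(db, {'a', 'b'}, min_sup=2))
```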

An Efficient Hashing Mechanism of the DHP Algorithm for Mining Association Rules (DHP 연관 규칙 탐사 알고리즘을 위한 효율적인 해싱 메카니즘)

  • Lee, Hyung-Bong
    • The KIPS Transactions:PartD
    • /
    • v.13D no.5 s.108
    • /
    • pp.651-660
    • /
    • 2006
  • Algorithms for mining association rules based on the Apriori algorithm use a hash tree data structure to store and count the supports of candidate frequent itemsets, and most of the execution time is spent searching this hash tree. The DHP (Direct Hashing and Pruning) algorithm tries to reduce the number of candidate frequent itemsets in order to save search time in the hash tree. For this purpose, DHP performs a preparatory, approximate counting of the supports of candidate frequent itemsets, using a direct hash table to keep the overhead of this preparatory counting low. This paper proposes and evaluates an efficient hashing mechanism for the direct hash table $H_2$, which is used for pruning in phase 2, and for the hash tree $C_k$, which is used for counting the supports of candidate frequent itemsets in all phases. The results show that the performance improvement due to the proposed hashing mechanism is up to 82.2%, and 18.5% on average, compared to the conventional method that uses a simple mod operation.
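
The role of the direct hash table in DHP, counting hashed 2-itemsets during the first scan and later discarding candidate pairs whose bucket count stays below the support threshold, can be sketched as follows. The bucket-hash function here is a simple placeholder, which is exactly the part the paper replaces with a more efficient mechanism.

```python
from itertools import combinations

def dhp_first_pass(transactions, num_buckets=7):
    """First DHP scan: count single items and hash every 2-itemset of a
    transaction into a direct hash table of bucket counters."""
    item_count, buckets = {}, [0] * num_buckets
    for tx in transactions:
        items = sorted(set(tx))
        for it in items:
            item_count[it] = item_count.get(it, 0) + 1
        for pair in combinations(items, 2):
            buckets[hash(pair) % num_buckets] += 1  # placeholder hash, not the paper's
    return item_count, buckets

def prune_candidate_pairs(item_count, buckets, min_sup, num_buckets=7):
    """Keep only pairs of frequent items whose hash bucket reached min_sup."""
    frequent = [i for i, c in item_count.items() if c >= min_sup]
    return [p for p in combinations(sorted(frequent), 2)
            if buckets[hash(p) % num_buckets] >= min_sup]
```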

Improved Association Rule Mining by Modified Trimming (트리밍 방식 수정을 통한 연관규칙 마이닝 개선)

  • Hwang, Won-Tae;Kim, Dong-Seung
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.3
    • /
    • pp.15-21
    • /
    • 2008
  • This paper presents a new association mining algorithm that uses two-phase sampling to shorten the execution time at the cost of some precision in the mining result. The previous FAST (Finding Association by Sampling Technique) algorithm has the weakness that it considers only frequent 1-itemsets in trimming/growing, and thus has no way of taking multi-itemsets, including 2-itemsets, into account. The new algorithm reflects multi-itemsets when sampling transactions, and it improves the mining results by adjusting the counts of both missing itemsets and false itemsets. Experiments on a representative synthetic database show that the algorithm produces a sampled subset with increased accuracy in terms of the 2-itemsets while maintaining the same quality of the data set.
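
The trimming idea, removing sampled transactions whose presence most distorts the itemset frequencies of the sample relative to the full database, can be sketched roughly as below. The distance measure and the restriction to 1- and 2-itemsets are assumptions for illustration, not the paper's exact trimming procedure.

```python
from itertools import combinations
from collections import Counter

def itemset_freqs(transactions, max_len=2):
    """Relative frequencies of all 1- and 2-itemsets in a transaction set."""
    counts = Counter()
    for tx in transactions:
        items = sorted(set(tx))
        for k in range(1, max_len + 1):
            counts.update(combinations(items, k))
    n = max(len(transactions), 1)
    return {s: c / n for s, c in counts.items()}

def distance(sample_freqs, db_freqs):
    """Sum of absolute frequency differences over all observed itemsets."""
    keys = set(sample_freqs) | set(db_freqs)
    return sum(abs(sample_freqs.get(k, 0) - db_freqs.get(k, 0)) for k in keys)

def trim_sample(sample, db_freqs, target_size):
    """Greedily drop the transaction whose removal best reduces the
    1-/2-itemset frequency distance to the full database."""
    sample = list(sample)
    while len(sample) > target_size:
        best_i = min(range(len(sample)),
                     key=lambda i: distance(
                         itemset_freqs(sample[:i] + sample[i + 1:]), db_freqs))
        sample.pop(best_i)
    return sample
```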

An Efficient Tree Structure Method for Mining Association Rules (트리 구조를 이용한 연관규칙의 효율적 탐색)

  • Kim, Chang-Oh;Ahn, Kwang-Il;Kim, Seong-Jip;Kim, Jae-Yearn
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.27 no.1
    • /
    • pp.30-36
    • /
    • 2001
  • We present a new algorithm for mining association rules in large databases. Association rules are relationships among items in the same transaction, and they provide useful information for marketing. Since the Apriori algorithm was introduced in 1994, many researchers have worked to improve it; however, the drawback of Apriori-based algorithms is that they scan the transaction database repeatedly. The algorithm we propose scans the database only twice. The first scan of the database collects the frequent 1-itemsets, and the second scan constructs the Common-Item Tree, a data structure that stores the information about frequent itemsets. To find all frequent itemsets, the algorithm then scans the Common-Item Tree instead of the database. Because scanning the Common-Item Tree takes less time than scanning the database, the proposed algorithm is more efficient than Apriori-based algorithms.
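
The two-scan skeleton described in the abstract can be sketched as follows: one scan for frequent 1-itemsets, a second scan to load the filtered transactions into memory, and all further counting done against the in-memory structure instead of the database. The flat in-memory list below is only a stand-in; the actual Common-Item Tree layout is not described in the abstract.

```python
from collections import Counter, defaultdict

def two_scan_mining_skeleton(read_db, min_sup):
    """read_db() re-reads the transaction database; it is called exactly twice."""
    # Scan 1: frequent 1-itemsets.
    counts = Counter()
    for tx in read_db():
        counts.update(set(tx))
    frequent = {i for i, c in counts.items() if c >= min_sup}

    # Scan 2: keep only frequent items and store the reduced transactions
    # in memory (stand-in for the Common-Item Tree).
    in_memory = [sorted(set(tx) & frequent) for tx in read_db()]

    # All later support counting works on the in-memory structure only.
    pair_counts = defaultdict(int)
    for items in in_memory:
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                pair_counts[(items[i], items[j])] += 1
    return frequent, {p: c for p, c in pair_counts.items() if c >= min_sup}

db = [['a', 'b', 'c'], ['a', 'b'], ['b', 'c'], ['a', 'c', 'd']]
print(two_scan_mining_skeleton(lambda: iter(db), min_sup=2))
```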


PRMS: Page Reallocation Method for SSDs (PRMS: SSDs에서의 Page 재배치 방법)

  • Lee, Dong-Hyun;Roh, Hong-Chan;Park, Sang-Hyun
    • The KIPS Transactions:PartD
    • /
    • v.17D no.6
    • /
    • pp.395-404
    • /
    • 2010
  • Solid-State Disks (SSDs) are currently considered a promising candidate to replace hard disks due to their significantly short access time, low power consumption, and shock resistance. SSDs, however, have the drawback that their write throughput and life span are decreased by random writes, nearly regardless of the SSD controller design. Previous studies have mostly focused on better SSD controller designs and on reducing the number of write operations sent to SSDs. We suggest another method that reallocates data pages that tend to be written at the same time to contiguous blocks. Our method gathers write operations over a period of time and generates write traces. After transforming each trace into a set of transactions, our method mines frequent itemsets from the transactions and reallocates the pages of the frequent itemsets. In addition, we introduce an algorithm that reallocates the pages of the frequent itemsets with moderate time complexity. Experiments using the TPC-C workload demonstrate that our method reduces total logical block accesses by 6%.
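
The pipeline sketched in the abstract, turning write traces into transactions of page numbers, mining frequently co-written page sets, and laying those pages out contiguously, can be illustrated as below. The windowing rule, the support threshold, and the toy allocation policy are assumptions made for the example, not the PRMS algorithm itself.

```python
from itertools import combinations
from collections import Counter

def traces_to_transactions(write_trace, window):
    """Split a sequence of written page numbers into fixed-size windows,
    each treated as one transaction of co-written pages (assumed rule)."""
    return [set(write_trace[i:i + window])
            for i in range(0, len(write_trace), window)]

def frequent_page_sets(transactions, min_sup, size=2):
    """Page sets of a given size that co-occur in at least min_sup windows."""
    counts = Counter()
    for tx in transactions:
        counts.update(combinations(sorted(tx), size))
    return [set(p) for p, c in counts.items() if c >= min_sup]

def reallocation_plan(frequent_sets):
    """Place the pages of each frequently co-written set on consecutive
    positions (toy policy: first-come, contiguous)."""
    plan, next_slot = {}, 0
    for group in frequent_sets:
        for page in sorted(group):
            if page not in plan:
                plan[page] = next_slot
                next_slot += 1
    return plan

trace = [3, 7, 3, 9, 7, 3, 7, 9, 3, 7, 2, 5]
txs = traces_to_transactions(trace, window=4)
print(reallocation_plan(frequent_page_sets(txs, min_sup=2)))
```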

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been studied intensively as one of the important issues in the data mining field. According to the strategy used to exploit item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, the importance of a certain transaction can be seen through database analysis, because the weight of a transaction is higher if it contains many items with high weights. We analyze the advantages and disadvantages, and compare the performance, of the best-known algorithms in the field of frequent itemset mining based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduces the concept of and strategies for transactional weights. In addition, there are other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after the WIT-tree has been constructed, since each node of the WIT-tree holds item information such as the item and its transaction IDs. In particular, traditional algorithms perform a number of database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. The algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets by using the information of the transactions that contain both of them. WIT-FWIs-MODIFY has a unique feature that reduces the operations needed to calculate the frequency of the new itemset, and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because it requires, on average, a much larger number of computations than the others.
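
The core quantities discussed above, a per-transaction weight derived from the weights of its items and a weighted support obtained from the transactions (tidsets) that contain an itemset, can be sketched as follows. The averaging and normalization formulas below are common conventions in this literature and are assumptions here; they may differ in detail from WIS and the WIT-FWIs family.

```python
def transaction_weights(transactions, item_weight):
    """Assumed convention: a transaction's weight is the mean weight of
    its items, so transactions full of important items weigh more."""
    return [sum(item_weight[i] for i in tx) / len(tx) for tx in transactions]

def weighted_support(itemset, transactions, tw):
    """Assumed convention: sum of the weights of supporting transactions,
    normalized by the total transaction weight."""
    itemset = set(itemset)
    num = sum(w for tx, w in zip(transactions, tw) if itemset <= set(tx))
    return num / sum(tw)

def tidset(itemset, transactions):
    """Transaction IDs containing the itemset: the per-node information a
    WIT-tree-style structure keeps so the database is read only once."""
    itemset = set(itemset)
    return {tid for tid, tx in enumerate(transactions) if itemset <= set(tx)}

db = [['a', 'b'], ['a', 'c'], ['a', 'b', 'c'], ['b', 'c']]
w = {'a': 0.6, 'b': 0.4, 'c': 0.9}
tw = transaction_weights(db, w)
# Joining two length-1 itemsets into {'a','b'}: intersect their tidsets
# instead of rescanning the database.
print(tidset({'a'}, db) & tidset({'b'}, db), weighted_support({'a', 'b'}, db, tw))
```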

Multi-Sized cumulative Summary Structure Driven Light Weight in Frequent Closed Itemset Mining to Increase High Utility

  • Siva S;Shilpa Chaudhari
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.2
    • /
    • pp.117-129
    • /
    • 2023
  • High-utility itemset mining (HUIM) has emerged as a key data-mining paradigm for object-of-interest identification and recommendation systems, serving as a tool for frequent itemset identification, product or service recommendation, and so on. Recently, it has gained widespread attention owing to its increasing role in business intelligence, top-N recommendation, and other enterprise solutions. Despite their increasing significance, most at-hand solutions, including frequent itemset mining, HUIM, and high-average and fast high-utility itemset mining, cannot provide swift and sufficiently accurate predictions and are limited in coping with real-time enterprise demands. Moreover, complex computations and high memory exhaustion limit their scalability as enterprise solutions. To address these limitations, this study proposes a model that extracts high-utility frequent closed itemsets based on an improved cumulative summary list structure (CSLFC-HUIM), which reduces the candidate itemsets in the search space to an optimal set. Moreover, it employs the lift score as the minimum threshold, called the cumulative utility threshold, to prune the search space to an optimal set of itemsets in a nested-list structure, which improves computational time, cost, and memory exhaustion. Simulations on different datasets reveal that the proposed CSLFC-HUIM model outperforms existing methods, such as the closed and frequent-closed HUIM variants, in terms of execution time and memory consumption, making it suitable for mining different items and for the allied business-intelligence goals.
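
As background for the utility-based pruning discussed above, the basic high-utility notion, where an itemset's utility is the sum over its supporting transactions of quantity times unit profit and itemsets below a utility threshold are pruned, can be sketched as below. This is generic HUIM background only; the cumulative summary list structure and the lift-based cumulative utility threshold of CSLFC-HUIM are not reproduced here.

```python
def itemset_utility(itemset, transactions, profit):
    """Utility of an itemset = sum over supporting transactions of
    (purchased quantity x unit profit) for each item in the itemset."""
    itemset = set(itemset)
    total = 0
    for tx in transactions:          # tx maps item -> purchased quantity
        if itemset <= set(tx):
            total += sum(tx[i] * profit[i] for i in itemset)
    return total

def high_utility_itemsets(candidates, transactions, profit, min_util):
    """Keep only the candidates whose utility reaches the threshold."""
    return {frozenset(c): u for c in candidates
            if (u := itemset_utility(c, transactions, profit)) >= min_util}

db = [{'a': 2, 'b': 1}, {'a': 1, 'c': 3}, {'a': 2, 'b': 2, 'c': 1}]
profit = {'a': 5, 'b': 2, 'c': 1}
print(high_utility_itemsets([{'a'}, {'a', 'b'}, {'b', 'c'}], db, profit, min_util=15))
```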