• Title/Summary/Keyword: pruning techniques


An Algorithm for generating Cloaking Region Using Grids for Privacy Protection in Location-Based Services (위치기반 서비스에서 개인 정보 보호를 위한 그리드를 이용한 Cloaking 영역 생성 알고리즘)

  • Um, Jung-Ho; Kim, Ji-Hee; Chang, Jae-Woo
    • Journal of Korea Spatial Information System Society, v.11 no.2, pp.151-161, 2009
  • In Location-Based Services (LBSs), users issuing a location-based query send their exact location to a database server, so the users' location information can be misused by adversaries. A privacy protection method is therefore required to use LBSs safely. In this paper, we propose a new cloaking region generation algorithm using grids for privacy protection in LBSs. The proposed algorithm creates a minimum cloaking region by finding L buildings and then enforces K-anonymity by searching for K users. For this, we make use of a grid-based index structure as well as efficient pruning techniques. Finally, we show through a performance analysis that our cloaking region generation algorithm outperforms the existing algorithm in terms of cloaking region size.
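
As an illustration of the expand-until-satisfied idea, the sketch below grows a square block of grid cells around the requester until the block covers at least L buildings and K users. It is a minimal sketch under assumed names and a uniform cell size, not the authors' algorithm, and it omits the paper's index and pruning details.

```python
# Illustrative grid-based cloaking: grow a square block of cells around
# the querying user until it holds >= l buildings and >= k users
# (K-anonymity). Cell size, data layout, and names are assumptions.
from collections import defaultdict

CELL = 100.0  # grid cell edge length in metres (assumed)

def to_cell(x, y):
    return (int(x // CELL), int(y // CELL))

def build_grid(points):
    """Index (x, y) points by the grid cell that contains them."""
    grid = defaultdict(list)
    for p in points:
        grid[to_cell(*p)].append(p)
    return grid

def cloaking_region(user, user_grid, building_grid, k, l, max_rings=1000):
    """Smallest square block of cells around `user` covering >= l
    buildings and >= k users, as ((min_x, min_y), (max_x, max_y))."""
    cx, cy = to_cell(*user)
    for r in range(max_rings):
        cells = [(cx + dx, cy + dy)
                 for dx in range(-r, r + 1)
                 for dy in range(-r, r + 1)]
        n_users = sum(len(user_grid[c]) for c in cells)
        n_bldgs = sum(len(building_grid[c]) for c in cells)
        if n_users >= k and n_bldgs >= l:
            return (((cx - r) * CELL, (cy - r) * CELL),
                    ((cx + r + 1) * CELL, (cy + r + 1) * CELL))
    raise ValueError("not enough users or buildings within max_rings")
```

A real implementation would use the grid index to prune candidate cells instead of rescanning the whole block at every ring, which is where the paper's pruning techniques come in.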

Non-linear regression model considering all association thresholds for decision of association rule numbers (기본적인 연관평가기준 전부를 고려한 비선형 회귀모형에 의한 연관성 규칙 수의 결정)

  • Park, Hee Chang
    • Journal of the Korean Data and Information Science Society, v.24 no.2, pp.267-275, 2013
  • Among data mining techniques, association rule mining is one of the most recently developed, and it finds relationships between items in a large database. It is applied directly in practice because it clearly quantifies the relationship between two or more items. To decide whether an association rule is meaningful, we use interestingness measures such as support, confidence, and lift; these measures are valuable because they give statistical or logical grounds for pruning uninteresting rules. However, the thresholds for these measures are chosen from experience, and the number of useful rules is hard to estimate: if too many rules are generated, the useful ones cannot be extracted effectively. In this paper, we design a variety of non-linear regression equations, considering all association thresholds, that relate the number of rules to the three interestingness measures. We then diagnose multicollinearity and autocorrelation problems and, through numerical experiments, select the best model using analysis-of-variance results and adjusted coefficients of determination.
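
To make the setup concrete, the sketch below counts the rules that survive a (support, confidence, lift) threshold triple and fits one plausible non-linear form to those counts with SciPy. The synthetic rule set and the exponential model form are assumptions for illustration; the paper's fitted equations are not reproduced here.

```python
# Count surviving rules per threshold triple, then fit a non-linear
# model of the rule count. Data and model form are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def n_rules(rules, s_min, c_min, l_min):
    """rules: iterable of (support, confidence, lift) triples."""
    return sum(1 for s, c, l in rules
               if s >= s_min and c >= c_min and l >= l_min)

def model(X, a, b, c, d):
    """One plausible form: count decays exponentially in the thresholds."""
    s, conf, lift = X
    return a * np.exp(-(b * s + c * conf + d * lift))

rng = np.random.default_rng(0)  # hypothetical measure values for 5000 rules
rules = list(zip(rng.uniform(0.0, 0.3, 5000),
                 rng.uniform(0.0, 1.0, 5000),
                 rng.uniform(0.5, 3.0, 5000)))

# grid of thresholds and the observed rule count at each grid point
S, C, L = np.meshgrid(np.linspace(0.01, 0.2, 5),
                      np.linspace(0.3, 0.9, 5),
                      np.linspace(1.0, 2.0, 5))
X = np.vstack([S.ravel(), C.ravel(), L.ravel()])
y = np.array([n_rules(rules, *t) for t in X.T])

params, _ = curve_fit(model, X, y, p0=[5000, 1, 1, 1], maxfev=10000)
print("fitted parameters:", params)
```

Model selection in the paper then proceeds by comparing such candidate forms via analysis of variance and adjusted coefficients of determination.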

Prediction of concrete compressive strength using non-destructive test results

  • Erdal, Hamit; Erdal, Mursel; Simsek, Osman; Erdal, Halil Ibrahim
    • Computers and Concrete, v.21 no.4, pp.407-417, 2018
  • Concrete, a composite material, is one of the most important construction materials. Compressive strength is a commonly used parameter for assessing concrete quality, so accurate prediction of concrete compressive strength is an important issue. In this study, we utilized an experimental procedure for the assessment of concrete quality. First, the concrete mix was prepared according to the C 20 concrete type, with a fresh-concrete slump of about 20 cm. After the fresh concrete was placed in formworks, compaction was achieved using a vibrating screed. After a 28-day period, a total of 100 core samples of 75 mm diameter were extracted. Pulse velocity and compressive strength tests were performed on the core samples, along with Windsor probe penetration tests and Schmidt hammer tests. After setting up the data set, twelve artificial intelligence (AI) models were compared for predicting the concrete compressive strength. These models fall into three categories: (i) functions (Linear Regression, Simple Linear Regression, Multilayer Perceptron, Support Vector Regression), (ii) lazy-learning algorithms (IBk Linear NN Search, KStar, Locally Weighted Learning), and (iii) tree-based learning algorithms (Decision Stump, Model Trees Regression, Random Forest, Random Tree, Reduced Error Pruning Tree). Four validation schemes (10-fold cross-validation, 5-fold cross-validation, 10% split-sample validation, and 20% split-sample validation) are used to examine the performance of the predictive models. This study shows that machine learning regression techniques are promising tools for predicting the compressive strength of concrete.
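
A compressed sketch of this comparison protocol in scikit-learn follows; the synthetic data and feature names are placeholders, only a few of the twelve models are mirrored, and the WEKA-specific learners (KStar, Locally Weighted Learning, REPTree, and so on) have no exact counterparts here.

```python
# Compare several regression models under 10-fold and 5-fold
# cross-validation, as in the study's protocol. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(42)
# assumed features: pulse velocity, probe penetration, rebound number
X = rng.normal(size=(100, 3))                     # 100 core samples
y = 20 + X @ np.array([3.0, -2.0, 4.0]) + rng.normal(0, 1, 100)

models = {
    "linear regression": LinearRegression(),
    "support vector reg.": SVR(),
    "multilayer perceptron": MLPRegressor(hidden_layer_sizes=(16,),
                                          max_iter=5000, random_state=0),
    "random forest": RandomForestRegressor(random_state=0),
}

for folds in (10, 5):  # the two cross-validation settings used
    for name, m in models.items():
        r2 = cross_val_score(m, X, y, cv=folds, scoring="r2")
        print(f"{folds:>2}-fold  {name:<22} R^2 = {r2.mean():.3f}")
```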

Using CART to Evaluate Performance of Tree Model (CART를 이용한 Tree Model의 성능평가)

  • Jung, Yong Gyu; Kwon, Na Yeon; Lee, Young Ho
    • Journal of Service Research and Studies, v.3 no.1, pp.9-16, 2013
  • Classification is a universal data analysis technique that requires considerable effort, and its results should be easy to interpret. The decision tree developed by Breiman is among the most representative classification methods. A decision tree has two core components: repeatedly partitioning the space of the independent variables, and pruning the tree using evaluation data. In classification problems the response variables are categorical, and the tree repeatedly splits the variable space into non-overlapping multidimensional rectangles, where the independent variables may be continuous, binary, or ordinal. In this paper, we obtain the precision, recall, and accuracy of the classification tree when classifying new cases, and we evaluate its performance through experiments.
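
A minimal sketch of that workflow with scikit-learn, assuming its CART-style implementation: grow a classification tree, prune it via cost-complexity pruning, and report accuracy, precision, and recall on held-out cases. The dataset and pruning strength are illustrative choices, not the paper's setup.

```python
# Grow a CART-style tree, prune it, and evaluate it on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ccp_alpha > 0 removes subtrees whose complexity is not worth their
# contribution on the training data (cost-complexity pruning, as in CART)
tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)
tree.fit(X_tr, y_tr)
pred = tree.predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
```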

Efficient Processing of k-Farthest Neighbor Queries for Road Networks

  • Kim, Taelee; Cho, Hyung-Ju; Hong, Hee Ju; Nam, Hyogeun; Cho, Hyejun; Do, Gyung Yoon; Jeon, Pilkyu
    • Journal of the Korea Society of Computer and Information, v.24 no.10, pp.79-89, 2019
  • While most research in the database community focuses on k-nearest neighbor (kNN) queries, an important type of proximity query, the k-farthest neighbor (kFN) query, has received little attention. This paper addresses the problem of finding the k farthest neighbors in road networks. Given a positive integer k, a query object q, and a set of data points P, a kFN query returns the k data objects farthest from q. The challenge of processing kFN queries in road networks is to reduce the number of network distance computations, since network distance is the most prominent difference between a road network and Euclidean space. In this study, we propose an efficient algorithm called FANS for k-FArthest Neighbor Search in road networks. We present a shared-computation strategy that avoids redundant computation of the distances between a query object and data objects, together with effective pruning techniques based on the maximum distance from a query object to data segments. Finally, we demonstrate the efficiency and scalability of our solution with extensive experiments on real-world roadmaps.
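
The shared-computation idea can be reduced to a single Dijkstra traversal from the query vertex that serves all data objects at once, instead of one shortest-path computation per object. The sketch below shows only that core on a toy graph; FANS's segment-based pruning bounds are omitted, and all names are assumptions.

```python
# kFN core: one shared Dijkstra traversal, then take the k objects with
# the largest network distance. Graph and names are illustrative.
import heapq

def dijkstra(adj, src):
    """adj: {v: [(u, w), ...]}. Network distances from src."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for u, w in adj[v]:
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(pq, (nd, u))
    return dist

def k_farthest(adj, q, objects, k):
    """k objects with the largest network distance from q."""
    dist = dijkstra(adj, q)            # one traversal shared by all objects
    reach = [(dist[o], o) for o in objects if o in dist]
    return heapq.nlargest(k, reach)    # (distance, object), farthest first

adj = {"a": [("b", 1), ("c", 4)], "b": [("a", 1), ("d", 2)],
       "c": [("a", 4), ("e", 1)], "d": [("b", 2), ("e", 3)],
       "e": [("c", 1), ("d", 3)]}
print(k_farthest(adj, "a", ["b", "c", "d", "e"], k=2))  # [(5.0, 'e'), (4.0, 'c')]
```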

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum; Yun, Unil
    • Journal of Internet Computing and Services, v.15 no.3, pp.101-107, 2014
  • With the development of online services, databases have shifted from static structures to dynamic stream structures. Earlier data mining techniques served as decision-making tools for tasks such as establishing marketing strategies and DNA analysis, but areas of recent interest such as sensor networks, robotics, and artificial intelligence need to analyze real-time data more quickly. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of a database or on each incoming transaction instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy Counting and hMiner. When Lossy Counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Because hMiner extracts frequent patterns as soon as a new transaction arrives, its latest mining results reflect real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy Counting, and the latest one, hMiner. As criteria for our analysis, we first consider each algorithm's total runtime and average processing time per transaction. To compare the efficiency of their storage structures, we also evaluate maximum memory usage, and lastly we show how stably the two algorithms mine databases whose number of items gradually increases. In terms of mining time and transaction processing, hMiner is faster than Lossy Counting: hMiner stores candidate frequent patterns in a hash structure and can access them directly, whereas Lossy Counting stores them in a lattice and must search multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy Counting in maximum memory usage. hMiner must keep complete information for every candidate frequent pattern in its hash buckets, while Lossy Counting reduces this information by using the lattice, whose storage can share items that appear in multiple patterns; its memory usage is therefore more efficient than hMiner's. However, hMiner is more efficient than Lossy Counting in the scalability evaluation, for the following reasons: as the number of items increases, fewer items are shared, which weakens Lossy Counting's memory efficiency, and as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more efficient so that they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
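
For concreteness, here is a compact sketch of Lossy Counting for frequent single items over a stream, following the classic Manku-Motwani bookkeeping; the paper's variants generalize this to patterns (itemsets), and the class and variable names here are illustrative.

```python
# Lossy Counting sketch: approximate item frequencies over a stream with
# error at most epsilon * n, pruning low-count entries at bucket ends.
import math

class LossyCounter:
    def __init__(self, epsilon):
        self.width = math.ceil(1 / epsilon)  # bucket width = 1/epsilon
        self.eps = epsilon
        self.n = 0                           # stream length so far
        self.counts = {}                     # item -> (count, max_error)

    def add(self, item):
        self.n += 1
        bucket = math.ceil(self.n / self.width)
        cnt, err = self.counts.get(item, (0, bucket - 1))
        self.counts[item] = (cnt + 1, err)
        if self.n % self.width == 0:         # bucket boundary: prune
            self.counts = {i: (c, e) for i, (c, e) in self.counts.items()
                           if c + e > bucket}

    def frequent(self, support):
        """Items whose true frequency can reach support * n."""
        threshold = (support - self.eps) * self.n
        return [i for i, (c, _) in self.counts.items() if c >= threshold]

lc = LossyCounter(epsilon=0.01)
for ch in "abracadabra" * 100:
    lc.add(ch)
print(lc.frequent(support=0.2))  # only 'a' (5/11 of the stream) qualifies
```

hMiner, by contrast, updates its hash-based summary on every single transaction rather than at bucket boundaries, which is what makes its results available online.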

Study on Evaluation of Carbon Emission and Sequestration in Pear Orchard (배 재배지 단위의 탄소 배출량 및 흡수량 평가 연구)

  • Suh, Sanguk; Choi, Eunjung; Jeong, Hyuncheol; Lee, Jongsik; Kim, Gunyeob; Sho, Kyuho; Lee, Jaeseok
    • Korean Journal of Environmental Biology, v.34 no.4, pp.257-263, 2016
  • The objective of this study was to evaluate the carbon budget of a 40-year-old pear orchard at Naju. For the carbon budget assessment, we measured soil respiration, the net ecosystem productivity of herbaceous plants, pear biomass, and net ecosystem exchange. In 2015, the pear orchard released about 25.6 ton CO₂ ha⁻¹ through soil respiration, while 27.9 ton CO₂ ha⁻¹ was sequestered by biomass growth, about 12.6 ton CO₂ ha⁻¹ was stored in pruned branches, and about 5.2 ton CO₂ ha⁻¹ was taken up by photosynthesis of herbaceous plants. As a result, 25.6 ton of CO₂ per ha was released to the atmosphere annually while about 45.7 ton of CO₂ was sequestered from it. Summing the releases and sequestration, approximately 20.1 ton CO₂ ha⁻¹ was sequestered by the pear orchard in 2015, which showed no significant difference from the net ecosystem exchange (17.8 ton CO₂ ha⁻¹ yr⁻¹) measured by the eddy covariance method over the same period. Continued research using various techniques will improve the understanding of CO₂ dynamics in agroecosystems and can offer a new methodology for assessing the carbon budget of woody crop fields. Furthermore, this study is expected to serve as basic data for recognizing orchards as a carbon sink.
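
The budget arithmetic in the abstract can be restated as a quick check (a trivial sketch; all values in ton CO₂ ha⁻¹ for 2015):

```python
# Carbon budget check for the pear orchard, values from the abstract.
released = 25.6                      # soil respiration
sequestered = 27.9 + 12.6 + 5.2      # biomass growth + pruned branches + herbs
net_sink = sequestered - released
print(f"sequestered = {sequestered:.1f}, net sink = {net_sink:.1f}")
# sequestered = 45.7, net sink = 20.1 (vs. 17.8 by eddy covariance)
```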