• Title/Abstract/Keyword: large data sets

Search results: 506 items

Effective Generation of Minimal Perfect Hash Functions for Information Retrieval from Large Sets of Data

  • 김수희;박세영
    • 한국정보처리학회논문지
    • Vol. 5, No. 9
    • pp.2256-2270
    • 1998
  • Developing a high-performance index is essential for fast retrieval over large volumes of information. This study revisits minimal perfect hash functions, which hash a set of m keys into m buckets without collisions. To successfully build an optimal index over large amounts of information, we improved the MOS algorithm developed by Heath and, based on it, developed a system that generates minimal perfect hash functions. Experiments applying the system to large data sets showed that it computes each minimal perfect hash function more efficiently than Heath's algorithm. The system developed in this study can be used to build indexes over large collections of information that rarely change, or over data stored on media with very slow search speeds.

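The defining property this entry relies on can be illustrated with a toy brute-force search; this is only a sketch of what "minimal perfect" means for a small key set, not the MOS-based construction described in the paper, and the salted MD5 hash and search limit are arbitrary illustrative choices.

```python
# Toy illustration of a minimal perfect hash: find a salt so that a salted
# hash maps the m keys onto the m buckets 0..m-1 with no collisions.
import hashlib

def find_minimal_perfect_hash(keys, max_salt=1_000_000):
    m = len(keys)
    for salt in range(max_salt):
        buckets = {int(hashlib.md5(f"{salt}:{k}".encode()).hexdigest(), 16) % m
                   for k in keys}
        if len(buckets) == m:            # every bucket is hit exactly once
            return salt
    raise ValueError("no salt found within the search limit")

keys = ["index", "hash", "bucket", "key", "search"]
salt = find_minimal_perfect_hash(keys)
mph = lambda k: int(hashlib.md5(f"{salt}:{k}".encode()).hexdigest(), 16) % len(keys)
print({k: mph(k) for k in keys})         # m keys -> m distinct buckets
```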

Efficient Continuous Skyline Query Processing Scheme over Large Dynamic Data Sets

  • Li, He;Yoo, Jaesoo
    • ETRI Journal
    • Vol. 38, No. 6
    • pp.1197-1206
    • 2016
  • Performing continuous skyline queries over dynamic data sets is now more challenging as the sizes of data sets increase and as they become more volatile due to the increase in dynamic updates. Although previous work proposed support for such queries, their efficiency was limited to small or uniformly distributed data sets. In a production database with many concurrent queries, the execution of continuous skyline queries degrades query performance because updates must acquire exclusive locks, possibly blocking other query threads; thus, the computational costs increase. In order to minimize computational requirements, we propose a method based on a multi-layer grid structure. First, relational data objects, the elements of an initial data set, are processed to obtain the corresponding multi-layer grid structure and the skyline influence regions over the data. Then, the dynamic data are processed only when they are identified within the skyline influence regions. Therefore, a large amount of computation can be pruned by adopting the proposed multi-layer grid structure. The performance evaluation on a variety of data sets confirms the efficiency of the proposed method.
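
As an illustration of the pruning idea (not the paper's multi-layer grid implementation), the sketch below maintains a skyline and re-evaluates an incoming update only if it lies inside the influence region, i.e., it is not dominated by any current skyline point; the "smaller is better" convention and the sample points are assumptions for the example.

```python
# Minimal continuous-skyline sketch: dominated updates are pruned cheaply.
def dominates(a, b):
    """True if a dominates b: <= in every dimension, < in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def update_skyline(sky, new_point):
    # Prune: an update dominated by the current skyline cannot change it.
    if any(dominates(s, new_point) for s in sky):
        return sky
    # Otherwise drop the skyline points the new point dominates and add it.
    return [s for s in sky if not dominates(new_point, s)] + [new_point]

data = [(3, 7), (5, 4), (9, 1), (6, 6)]
sky = skyline(data)
sky = update_skyline(sky, (4, 5))   # inside the influence region -> re-evaluated
sky = update_skyline(sky, (8, 8))   # dominated -> pruned without extra work
print(sky)
```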

A Hybrid Clustering Technique for Processing Large Data

  • 김만선;이상용
    • 정보처리학회논문지B
    • Vol. 10B, No. 1
    • pp.33-40
    • 2003
  • Data mining plays an important role in the knowledge discovery process, and different data mining algorithms can be selected for specific purposes. Most traditional hierarchical clustering methods are suited to small data sets; because of their limited resources and poor efficiency, they have difficulty handling large data sets. This study proposes PPC (Pre-Post Clustering), a hybrid neural-network clustering technique that can be applied to large data to discover unknown patterns. PPC combines a self-organizing map (SOM), an artificial-intelligence method, with hierarchical clustering, a statistical method; in these two stages, similarity is measured by a cohesion distance, which represents the internal characteristics of a cluster, and an adjacency distance, which represents the external distance between clusters. Finally, PPC clusters the large data set using the measured similarities. Experiments on UCI Repository data showed that PPC achieves better cohesion than other clustering techniques.
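
A rough sketch of the two-stage pre/post idea (not the authors' PPC code): compress the data with a small self-organizing map, then run hierarchical clustering on the SOM prototypes. The grid size, learning schedule, Ward linkage, and synthetic data are illustrative assumptions.

```python
# Stage 1: SOM pre-clustering; stage 2: hierarchical clustering of prototypes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def train_som(X, grid=(5, 5), epochs=20, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    W = X[rng.choice(len(X), n_units)]                    # prototype vectors
    sigma0 = max(grid) / 2.0
    for t in range(epochs):
        frac = t / epochs
        sigma, eta = sigma0 * (1 - frac) + 0.5, lr * (1 - frac) + 0.01
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))            # grid neighborhood
            W += eta * h[:, None] * (x - W)
    return W

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(200, 2)) + c for c in ([0, 0], [5, 5], [0, 5])])
prototypes = train_som(X)                                  # pre-clustering
proto_labels = fcluster(linkage(prototypes, method="ward"), t=3, criterion="maxclust")
# assign each original point the cluster of its nearest prototype (post step)
nearest = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1).argmin(1)
labels = proto_labels[nearest]
print(np.bincount(labels)[1:])
```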

A Wavelet-Based Feature Selection Method to Improve Classification of Large Signal-Type Data

  • 장우성;장우진
    • 대한산업공학회지
    • Vol. 32, No. 2
    • pp.133-140
    • 2006
  • Large signal-type data sets are difficult to classify, especially if they are non-stationary. In this paper, large signal-type, non-stationary data sets are wavelet transformed so that distinct features of the data are extracted in the wavelet domain rather than the time domain. For classification, a few wavelet coefficients representing class properties are fed to statistical classification methods: linear discriminant analysis, quadratic discriminant analysis, neural networks, etc. Applying our wavelet-based feature selection method to a mass spectrometry data set for ovarian cancer diagnosis resulted in 100% classification accuracy.
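
A minimal sketch of the wavelet-domain feature idea, assuming the Daubechies-4 wavelet, a 4-level decomposition, a simple class-mean-difference criterion for picking coefficients, and synthetic signals (none of which come from the abstract); it uses PyWavelets and scikit-learn's LDA.

```python
# Wavelet-transform each signal, keep a few discriminating coefficients,
# then classify with linear discriminant analysis.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
signals, labels = [], []
for label, freq in [(0, 5), (1, 12)]:
    for _ in range(100):
        signals.append(np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size))
        labels.append(label)
y = np.array(labels)

# concatenated wavelet coefficients of every signal (wavelet domain features)
W = np.array([np.concatenate(pywt.wavedec(s, "db4", level=4)) for s in signals])

# select the few coefficients whose class means differ the most
diff = np.abs(W[y == 0].mean(axis=0) - W[y == 1].mean(axis=0))
keep = np.argsort(diff)[::-1][:16]

clf = LinearDiscriminantAnalysis().fit(W[:, keep], y)
print("training accuracy:", clf.score(W[:, keep], y))
```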

A Real-Time Data Mining Technique for Stream Data Sets

  • 김진화;민진영
    • 한국경영과학회지
    • Vol. 29, No. 4
    • pp.41-60
    • 2004
  • A stream data set is a data set that accumulates continuously in data storage from a data source over time. In many cases its size grows very large over time, and mining information from such massive data requires substantial resources such as storage, memory, and time. These characteristics of stream data make it difficult and expensive to use the full data set accumulated over time. On the other hand, if we mine information or patterns from only recent data or a part of the whole data, potentially useful information may be lost. To avoid this problem, we suggest a method that efficiently accumulates information over time in the form of rule sets; it requires much less storage than traditional mining methods. These accumulated rule sets are then used as prediction models in the future. Based on the theory of ensemble approaches, a combination of many prediction models, here in the form of systematically merged rule sets, performs better than a single prediction model. This study uses a customer data set in which customers' buying power is predicted from their attributes; the performance of the suggested method is tested on this data set along with general prediction methods, and their performances are compared.
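
The sketch below illustrates the accumulate-and-combine idea under stated assumptions: each chunk of the stream is summarized by a small decision tree (standing in here for a merged rule set), only the models are kept rather than the raw data, and prediction is a majority vote over the accumulated ensemble. The chunk size and tree depth are arbitrary choices.

```python
# Accumulate one compact model per stream chunk; predict by majority vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
models = []

def process_chunk(X_chunk, y_chunk):
    """Mine one chunk and keep only its compact model, not the raw data."""
    models.append(DecisionTreeClassifier(max_depth=3).fit(X_chunk, y_chunk))

def predict(X):
    votes = np.array([m.predict(X) for m in models])     # (n_models, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)        # majority vote

# simulate a stream arriving in chunks
for _ in range(10):
    X_chunk = rng.normal(size=(200, 5))
    y_chunk = (X_chunk[:, 0] + X_chunk[:, 1] > 0).astype(int)
    process_chunk(X_chunk, y_chunk)

X_test = rng.normal(size=(100, 5))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("ensemble accuracy:", (predict(X_test) == y_test).mean())
```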

LS-SVM for large data sets

  • Park, Hongrak;Hwang, Hyungtae;Kim, Byungju
    • Journal of the Korean Data and Information Science Society
    • Vol. 27, No. 2
    • pp.549-557
    • 2016
  • In this paper, we propose a multiclass classification method for large data sets that ensembles least squares support vector machines (LS-SVMs) trained on principal components instead of raw input vectors. We use a revised one-vs-all method for multiclass classification, which is a voting scheme based on combining several binary classifications. The revised one-vs-all method uses the hat matrix of the LS-SVM ensemble, which is obtained by ensembling LS-SVMs trained on random samples drawn from the whole large training data set. The leave-one-out cross-validation (CV) function is used to find the optimal values of the hyper-parameters that affect the performance of the multiclass LS-SVM ensemble, and we present a generalized cross-validation function to reduce the computational burden of the leave-one-out CV function. Experimental results on real data sets illustrate the performance of the proposed multiclass LS-SVM ensemble.
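
A minimal binary LS-SVM sketch in the spirit of this entry (illustrative, not the authors' revised one-vs-all ensemble): the LS-SVM linear system is solved directly with an RBF kernel, and principal components are used as inputs instead of raw vectors. The regularization parameter gamma, the kernel width, and the synthetic data are assumed values.

```python
# Binary LS-SVM: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y],
# with +/-1 targets, and predict with sign(K(new, train) @ alpha + b).
import numpy as np
from sklearn.decomposition import PCA

def rbf(A, B, width=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def lssvm_fit(X, y, gamma=10.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                                 # bias b, dual alpha

def lssvm_predict(X_train, b, alpha, X_new):
    return np.sign(rbf(X_new, X_train) @ alpha + b)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 3))                         # low-rank structure
X = latent @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(300, 10))
y = np.where(latent[:, 0] > 0, 1.0, -1.0)

Z = PCA(n_components=3).fit_transform(X)                   # principal components
b, alpha = lssvm_fit(Z, y)
print("training accuracy:", (lssvm_predict(Z, b, alpha, Z) == y).mean())
```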

An Efficient Algorithm for Updating Discovered Association Rules in Data Mining

  • 김동필;지영근;황종원;강맹규
    • 산업경영시스템학회지
    • Vol. 21, No. 45
    • pp.121-133
    • 1998
  • This study suggests an efficient algorithm for updating discovered association rules in a large database. A database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. FUP and DMI update strong association rules efficiently over the whole updated database by reusing information about the old large item-sets, and both use a pruning technique to reduce the database size during the update process. The algorithm suggested in this study likewise reuses the old large item-sets, but it generates all candidate item-sets at once from the incremental database, in view of the fact that it is difficult to find the new set of large item-sets for the whole updated database after the incremental database has been added to the original one; this way of generating candidate item-sets differs from that of FUP and DMI. After the candidate item-sets have been generated, for each candidate item-set that is large in the incremental database, the original database is scanned and the item-set's support is updated, so that all large item-sets in the whole updated database are found. The suggested algorithm does not use a pruning technique to reduce the database size during the update process and, as a result, it updates the discovered large item-sets quickly and efficiently.

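A simplified sketch of the incremental-update idea (not the full algorithm from the paper): candidate item-sets are generated from the incremental database, and the original database is re-scanned only for those candidates before the supports are combined. The toy transactions, the 50% minimum support, and the size-2 cap on item-sets are assumptions made for the example.

```python
# Incremental association-rule update: mine candidates in the increment,
# then rescan the original database only for those candidates.
from itertools import combinations

def supports(db, itemsets):
    return {s: sum(1 for t in db if s <= t) for s in itemsets}

def frequent_itemsets(db, minsup, max_size=2):
    items = {i for t in db for i in t}
    cands = [frozenset(c) for k in range(1, max_size + 1)
             for c in combinations(sorted(items), k)]
    cnt = supports(db, cands)
    return {s for s in cands if cnt[s] >= minsup * len(db)}

original = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"}]
increment = [{"a", "b"}, {"a", "b", "d"}, {"b", "d"}]
minsup = 0.5

# 1) candidate item-sets come from the incremental database alone
cands = frequent_itemsets(increment, minsup)
# 2) rescan the original database only for those candidates
total = len(original) + len(increment)
combined = {s: supports(original, [s])[s] + supports(increment, [s])[s]
            for s in cands}
large = {s for s, c in combined.items() if c >= minsup * total}
print(sorted(tuple(sorted(s)) for s in large))
```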

Locality-Sensitive Hashing for Data with Categorical and Numerical Attributes Using Dual Hashing

  • Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • Vol. 14, No. 2
    • pp.98-104
    • 2014
  • Locality-sensitive hashing techniques have been developed to efficiently handle nearest neighbor searches and similar-pair identification problems for large volumes of high-dimensional data. This study proposes a locality-sensitive hashing method that can be applied to nearest neighbor search problems for data sets containing both numerical and categorical attributes. The proposed method makes use of dual hashing functions, where one function is dedicated to numerical attributes and the other to categorical attributes. The method consists of creating indexing structures for each of the dual hashing functions, gathering and combining the candidate sets, and thoroughly examining them to determine the nearest ones. The proposed method is evaluated on several synthetic data sets, and the results show that it improves performance for large amounts of data with both numerical and categorical attributes.
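
A sketch of the dual-hashing idea under stated assumptions (random-projection sign bits for the numerical part, a simple signature hash for the categorical part, and a single hash table), not the paper's exact scheme:

```python
# Dual hashing: one hash for numerical attributes, one for categorical
# attributes; the pair of codes is the bucket key, and only the gathered
# candidates are checked exactly.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
N_PLANES = 4                                     # bits of the numerical code

def numeric_hash(x, planes):
    return tuple((planes @ x > 0).astype(int))   # random-projection sign bits

def categorical_hash(cats, n_buckets=8):
    return hash(tuple(cats)) % n_buckets

def build_index(num_data, cat_data, planes):
    index = defaultdict(list)
    for i, (x, c) in enumerate(zip(num_data, cat_data)):
        index[(numeric_hash(x, planes), categorical_hash(c))].append(i)
    return index

num_data = rng.normal(size=(1000, 6))
cat_data = [(rng.choice(["red", "blue"]), rng.choice(["s", "m", "l"]))
            for _ in range(1000)]
planes = rng.normal(size=(N_PLANES, 6))
index = build_index(num_data, cat_data, planes)

query_x, query_c = num_data[0], cat_data[0]
candidates = index[(numeric_hash(query_x, planes), categorical_hash(query_c))]
# exact distance check only on the gathered candidates
best = min(candidates, key=lambda i: np.linalg.norm(num_data[i] - query_x))
print("nearest candidate:", best, "out of", len(candidates), "candidates")
```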

An Improvement of Accuracy for NaiveBayes by Using Large Word Sets

  • 이재문
    • 인터넷정보학회논문지
    • Vol. 7, No. 3
    • pp.169-178
    • 2006
  • This paper adapts the frequent item-sets used in association rule mining to define frequent (large) word sets in the documents of a document classification task, and uses them to improve the accuracy of NaiveBayes, a well-known document classification method. To apply this technique, each document is divided into paragraphs, and the set of words appearing in each paragraph is treated as a transaction so that frequent word sets can be found. The proposed method was implemented in the AI::Categorizer framework, and its accuracy was measured on the Reuters-21578 data set while varying the number of lines per paragraph and the size of the training set. The measured results show that the proposed method improves accuracy over the existing method.

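A simplified sketch of this approach: paragraphs are treated as transactions, frequent word pairs are mined from them, and each frequent pair becomes an extra feature alongside the usual bag of words for a Naive Bayes classifier. Restricting the word sets to pairs, the support threshold of 2, and the tiny corpus are assumptions made for the example.

```python
# Frequent word sets (here: pairs) mined from paragraph transactions are
# appended as extra features for a Naive Bayes text classifier.
from itertools import combinations
from collections import Counter
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["cheap oil price falls\n\nopec cuts oil output",
        "stock market rallies\n\ntech shares lead market gains",
        "oil price rises on opec news\n\nenergy stocks follow oil",
        "market slides\n\ninvestors sell shares across the market"]
labels = [0, 1, 0, 1]

# 1) paragraphs as transactions, mine frequent word pairs
transactions = [set(p.split()) for d in docs for p in d.split("\n\n")]
pair_counts = Counter(pair for t in transactions
                      for pair in combinations(sorted(t), 2))
frequent_pairs = [p for p, c in pair_counts.items() if c >= 2]

# 2) bag-of-words features plus one boolean feature per frequent word set
X_bow = CountVectorizer().fit_transform(docs).toarray()
X_pairs = np.array([[int(set(p) <= set(d.split())) for p in frequent_pairs]
                    for d in docs])
X = np.hstack([X_bow, X_pairs])

clf = MultinomialNB().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```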

Training Data Sets Construction from Large Data Set for PCB Character Recognition

  • NDAYISHIMIYE, Fabrice;Gang, Sumyung;Lee, Joon Jae
    • Journal of Multimedia Information System
    • Vol. 6, No. 4
    • pp.225-234
    • 2019
  • Deep learning has become increasingly popular in both academic and industrial areas. Various domains, including pattern recognition and computer vision, have witnessed the great power of deep neural networks. However, current studies on deep learning mainly focus on quality data sets with balanced class labels, while training on poor-quality and imbalanced data sets poses great challenges for classification tasks. In this paper, we propose a data-analysis-based data reduction technique for selecting good and diverse data samples from a large data set for a deep learning model. Data sampling techniques can decrease the large size of raw data by retrieving its useful knowledge as representatives, so instead of dealing with the large raw data set, we can sample the data without losing important information. We group PCB characters into classes and train deep learning models based on ResNet56 v2 and SENet in order to improve the classification performance of an optical character recognition (OCR) character classifier.
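
One plausible reading of the representative-sampling idea, sketched under assumptions of my own (per-class k-means with nearest-to-centroid selection; the abstract does not specify this procedure):

```python
# Keep only the samples nearest to per-class cluster centroids as a
# reduced but diverse training set.
import numpy as np
from sklearn.cluster import KMeans

def reduce_per_class(X, y, per_class=50, seed=0):
    keep = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        k = min(per_class, len(idx))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
        # keep the sample nearest to each centroid as a class representative
        for c in km.cluster_centers_:
            keep.append(idx[np.argmin(((X[idx] - c) ** 2).sum(axis=1))])
    return np.array(sorted(set(keep)))

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32))                  # e.g. flattened character crops
y = rng.integers(0, 10, size=2000)               # 10 character classes
selected = reduce_per_class(X, y, per_class=20)
print("reduced training set size:", len(selected), "of", len(X))
```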