• Title/Summary/Keyword: Distributed Data Mining

An Efficient Clustering Method based on Multi Centroid Set using MapReduce (맵리듀스를 이용한 다중 중심점 집합 기반의 효율적인 클러스터링 방법)

  • Kang, Sungmin;Lee, Seokjoo;Min, Jun-ki
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.7
    • /
    • pp.494-499
    • /
    • 2015
  • As the size of data increases, it becomes important to identify its properties by analyzing big data. In this paper, we propose an efficient k-Means based clustering technique, called MCSK-Means (Multi Centroid Set k-Means), using the distributed parallel processing framework MapReduce. A problem with the k-Means algorithm is that the accuracy of clustering depends on the randomly created initial centroids. To alleviate this problem, the MCSK-Means algorithm reduces the dependency on initial centroids by using m sets, each consisting of k centroids. In addition, we apply an agglomerative hierarchical clustering technique to create the final k centroids from the centroids in the m centroid sets produced by the clustering phase. We implemented MCSK-Means on the MapReduce framework to process big data efficiently.
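As a rough single-machine illustration of the multi-centroid-set idea described above (not the paper's MapReduce implementation), the sketch below runs m independent k-means passes, pools the resulting m x k centroids, and merges them back to k final centroids with agglomerative hierarchical clustering. The function name `mcsk_means`, the parameters, and the synthetic data are assumptions made for illustration.

```python
# Hypothetical single-machine sketch of the multi-centroid-set idea:
# run m independent k-means passes, pool the m * k centroids, then merge
# them back to k final centroids with agglomerative clustering.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def mcsk_means(X, k=5, m=10, seed=0):
    rng = np.random.default_rng(seed)
    # Clustering phase: m k-means runs with different random initial centroids.
    centroid_sets = [
        KMeans(n_clusters=k, n_init=1, random_state=int(rng.integers(1 << 31)))
        .fit(X).cluster_centers_
        for _ in range(m)
    ]
    pooled = np.vstack(centroid_sets)  # m * k candidate centroids
    # Merging phase: agglomerative clustering of the pooled centroids into k groups.
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(pooled)
    final_centroids = np.vstack([pooled[labels == c].mean(axis=0) for c in range(k)])
    # Assign each point to its nearest final centroid.
    assign = np.argmin(((X[:, None, :] - final_centroids[None]) ** 2).sum(-1), axis=1)
    return final_centroids, assign

X = np.random.default_rng(1).normal(size=(1000, 2))
centroids, labels = mcsk_means(X, k=3, m=5)
```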

Research Trends in Record Management Using Unstructured Text Data Analysis (비정형 텍스트 데이터 분석을 활용한 기록관리 분야 연구동향)

  • Deokyong Hong;Junseok Heo
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.23 no.4
    • /
    • pp.73-89
    • /
    • 2023
  • This study aims to analyze the frequency of keywords used in Korean abstracts, which are unstructured text data in the domestic record management research field, using text mining techniques, and to identify domestic record management research trends through distance analysis between keywords. To this end, 1,157 keywords of 77,578 journals were visualized by extracting 1,157 articles from 7 journal types (28 journals) retrieved under the major field (interdisciplinary studies) and middle field (library and information science) in the journal statistics (registered and candidate journals) of the Korean Citation Index (KCI). t-Distributed Stochastic Neighbor Embedding (t-SNE) and Scattertext analyses based on Word2vec were then performed. As a result, first, it was confirmed that keywords such as "record management" (889 times), "analysis" (888 times), "archive" (742 times), "record" (562 times), and "utilization" (449 times) were treated as significant topics by researchers. Second, Word2vec analysis generated vector representations of the keywords, and similarity distances between them were investigated and visualized using t-SNE and Scattertext. In the visualization results, the record management research area was divided into two groups: keywords such as "archiving," "national record management," "standardization," "official documents," and "record management systems" occurred frequently in the first group (past), whereas keywords such as "community," "data," "record information service," "online," and "digital archives" in the second group (current) were attracting substantial attention.
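For readers unfamiliar with the Word2vec-plus-t-SNE combination mentioned above, the following minimal sketch (not the study's actual pipeline) trains a small Word2vec model on a hypothetical tokenized corpus and projects a few keyword vectors to two dimensions with t-SNE; the corpus, keyword list, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch: train Word2vec on a tiny tokenized corpus, then project
# selected keyword vectors to 2-D with t-SNE for visualization.
import numpy as np
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

# Hypothetical tokenized corpus; the study used Korean abstracts from KCI journals.
sentences = [["record", "management", "archive", "system"],
             ["digital", "archive", "community", "data"],
             ["record", "information", "service", "online"]]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)
keywords = ["record", "archive", "data", "community", "online"]
vectors = np.array([model.wv[w] for w in keywords])

# Perplexity must be smaller than the number of samples; tiny here because the example is tiny.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(vectors)
for word, (x, y) in zip(keywords, coords):
    print(f"{word}: ({x:.2f}, {y:.2f})")
```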

How Vulnerable is Indonesia's Financial System Stability to External Shock?

  • Pranata, Nika;Nurzanah, Nurzanah
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.4 no.2
    • /
    • pp.5-17
    • /
    • 2017
  • The main objective of this study is to measure the vulnerability of Indonesia's financial system stability in response to external shocks, including shocks from the economies of Indonesia's three biggest trading partners (China, the U.S., and Japan) and other external factors (the oil price and the federal funds rate). Using an Autoregressive Distributed Lag (ARDL) model and the Orthogonalized Impulse Response Function (OIRF) with quarterly data over the period Q4 2002 - Q1 2016, the results confirm that 1) the oil price has the largest effect on Indonesia's financial system stability, as represented by NPL and IHSG, and its effect lasts the longest compared with the other shocks; and 2) among the three economies, only China's economic growth has a significantly positive effect on Indonesia's financial system stability. Based on these findings, the authorities should 1) diversify international trade commodities by decreasing the share of oil, gas, and mining exports and boosting other potential sectors such as manufacturing and fisheries; 2) ensure the survival of Indonesia's large coal-exporting companies without neglecting the burden on the national budget; and 3) create a buffer against demand shocks from specific countries by diversifying and increasing the share of trade with other countries, particularly ASEAN member states.
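The study estimates an ARDL model and orthogonalized impulse responses. As a loose illustration only, the sketch below computes orthogonalized impulse responses from a VAR fitted to synthetic quarterly series in statsmodels; this is not the authors' ARDL specification, and the variable names and data are hypothetical.

```python
# Illustrative only: the paper combines ARDL estimation with OIRF analysis. This sketch
# fits a VAR to synthetic quarterly series and extracts orthogonalized impulse responses,
# just to show the OIRF mechanics in statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 60  # synthetic quarterly observations (the paper covers Q4 2002 - Q1 2016)
data = pd.DataFrame({
    "npl": rng.normal(size=n).cumsum(),        # hypothetical non-performing loan proxy
    "ihsg": rng.normal(size=n).cumsum(),       # hypothetical Jakarta composite index proxy
    "oil_price": rng.normal(size=n).cumsum(),  # hypothetical oil price series
})

results = VAR(data.diff().dropna()).fit(maxlags=4, ic="aic")
irf = results.irf(8)              # trace responses over 8 quarters
oirf = irf.orth_irfs              # orthogonalized impulse responses (Cholesky ordering)
print(oirf.shape)                 # one response matrix per horizon
```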

An Efficient Dynamic Group Signature with Non-frameability

  • Xie, Run;Xu, Chunxiang;He, Chanlian;Zhang, Xiaojun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.5
    • /
    • pp.2407-2426
    • /
    • 2016
  • A group signature scheme allows any member to sign on behalf of a group. It is applied in practical distributed, security-sensitive communication environments such as privacy-preserving data mining. In particular, the excellent features of group signatures, including membership joining and revocation, anonymity, traceability, non-frameability, and controllable linkability, make group signature schemes attractive. Among these features, non-frameability guarantees that a member's signature cannot be forged by any other party (including the issuer), and controllable linkability makes it possible to confirm whether two group signatures were created by the same signer while preserving anonymity. Until now, only Hwang et al.'s group signature schemes (proposed in 2013 and 2015) have supported all of these features. In this paper, we present a new dynamic group signature scheme that achieves all of the above features. Compared with their schemes, our scheme has the following advantages. Firstly, our scheme achieves more efficient membership revocation, signing, and verification; the cost of a key update in our scheme is two-thirds of theirs. Secondly, the tracing algorithm is simpler, since the signer can be determined without the judging step. Furthermore, in our scheme, the group public key and members' private keys are shorter. Lastly, we prove the security features of our scheme, such as anonymity, traceability, and non-frameability, in the random oracle model.

A MapReduce-Based Distributed Data Mining Approach to Next Place Prediction for Mobile Users (이동 사용자의 다음 장소 예측을 위한 맵리듀스 기반의 분산 데이터 마이닝)

  • Kim, Jong-Hwan;Lee, Seok-Jun;Kim, In-Cheol
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2014.04a
    • /
    • pp.777-780
    • /
    • 2014
  • This paper introduces a MapReduce-based distributed data mining system that learns a mobility pattern model for each user from a large set of GPS location data recording the movement trajectories of mobile device users, and applies this model to efficiently predict each user's next visited place. The system consists of a back-end that learns per-user mobility pattern models and a front-end that predicts the next visited place in real time. The back-end is composed of three MapReduce job modules: significant place extraction, trajectory conversion, and mobility pattern model learning. The front-end is composed of two MapReduce job modules: candidate route generation and next place prediction. For each job in the system, the map and reduce functions were designed to maximize distributed processing. Finally, experiments were conducted on the large-scale GeoLife benchmark dataset to analyze the prediction performance of the system, and the results confirmed its high performance.
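As a toy illustration of the prediction idea only (not the paper's five MapReduce job modules), the sketch below learns a first-order transition model over visited places from hypothetical trajectories, with the pair-emission and aggregation steps written to mirror a map and a reduce phase.

```python
# Minimal single-process sketch: learn a first-order Markov transition model over
# visited places, then predict the most likely next place. Place names and trajectories
# are hypothetical; the paper's MapReduce jobs are not reproduced here.
from collections import Counter, defaultdict

trajectories = [
    ["home", "office", "cafe", "office", "home"],
    ["home", "gym", "office", "home"],
    ["home", "office", "cafe", "home"],
]

# "Map"-like step: emit (current_place, next_place) pairs from each trajectory.
pairs = [(a, b) for traj in trajectories for a, b in zip(traj, traj[1:])]

# "Reduce"-like step: aggregate pair counts per current place.
model = defaultdict(Counter)
for cur, nxt in pairs:
    model[cur][nxt] += 1

def predict_next(place):
    """Return the most frequently observed successor of `place`, if any."""
    return model[place].most_common(1)[0][0] if model[place] else None

print(predict_next("office"))  # e.g. "cafe" or "home", depending on the counts
```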

Alsat-2B/Sentinel-2 Imagery Classification Using the Hybrid Pigeon Inspired Optimization Algorithm

  • Arezki, Dounia;Fizazi, Hadria
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.690-706
    • /
    • 2021
  • Classification is a substantial operation in data mining, in which each element is assigned to its corresponding class according to its feature values. Metaheuristics have been widely used in attempts to solve satellite image classification problems. This article proposes a hybrid approach, the flower-pigeon-inspired optimization algorithm (FPIO), in which the local search method of the flower pollination algorithm is integrated into the pigeon-inspired optimization algorithm. The efficiency and power of the proposed FPIO approach are demonstrated on a series of images, supported by computational results that show the cogency of the proposed classification method on satellite imagery. In this work, the Davies-Bouldin index is used as the objective function. FPIO is applied to different types of images (synthetic, Alsat-2B, and Sentinel-2). Moreover, a comparative experiment between FPIO and the genetic algorithm (GA) is conducted. Experimental results showed that GA outperformed FPIO in terms of computation time; however, FPIO provided higher-quality results with less confusion. The overall experimental results demonstrate that the proposed approach is an efficient method for satellite imagery classification.
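Since the Davies-Bouldin index serves as the objective function here, the following sketch shows only that scoring step on synthetic data, with random candidate labelings standing in for the pigeon/flower-pollination search; it is not an implementation of FPIO.

```python
# Sketch of the objective function only: score candidate cluster assignments with the
# Davies-Bouldin index (lower is better). A metaheuristic would evolve the candidates;
# here, random labelings stand in for that search.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # synthetic pixel features
rng = np.random.default_rng(0)

best_score, best_labels = np.inf, None
for _ in range(20):
    labels = rng.integers(0, 4, size=len(X))       # random candidate assignment
    score = davies_bouldin_score(X, labels)
    if score < best_score:
        best_score, best_labels = score, labels

print(f"best Davies-Bouldin index found: {best_score:.3f}")
```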

Development and Operation of Marine Environmental Portal Service System (해양환경 포탈서비스시스템 구축과 운영)

  • 최현우;권순철
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.05a
    • /
    • pp.338-341
    • /
    • 2003
  • According to a long-term master plan for implementing MOMAF's marine environmental informatization, we have developed a marine environment portal website that consists of 7 main menus and 39 sub-menus providing various types of content (text, image, and multimedia) based on an RDBMS. The portal site was opened in October 2002 (http://www.meps.info). In addition, an integrated retrieval system was developed for the distributed databases of national institutions, in which marine chemical data and biological data are archived and managed separately. This system supports collaborative use of real data and can be applied to data mining, marine research, marine environmental GIS, and decision-making.

An Efficient Web Search Method Based on a Style-based Keyword Extraction and a Keyword Mining Profile (스타일 기반 키워드 추출 및 키워드 마이닝 프로파일 기반 웹 검색 방법)

  • Joo, Kil-Hong;Lee, Jun-Hwl;Lee, Won-Suk
    • The KIPS Transactions: Part D
    • /
    • v.11D no.5
    • /
    • pp.1049-1062
    • /
    • 2004
  • With the popularization of the World Wide Web (WWW), the quantity of web information has increased. Therefore, an efficient search system is needed to offer users exact results from this diverse information. For this reason, it is important to extract and analyze user requirements in a distributed information environment. Conventional search methods use only keywords for web search. In contrast, the search method proposed in this paper adds keyword context information for effective searching. In addition, the proposed method extracts keywords with a new keyword extraction method and executes web search based on a keyword mining profile generated from the extracted keywords. Unlike conventional methods, which search for information by a representative word alone, the proposed method is much more efficient and exact, because it searches by an example-based query that includes content information as well as a representative word. Moreover, the method builds a domain keyword list, where a domain keyword is a representative word of a specific domain, in order to perform searches quickly. The performance of the proposed algorithm is analyzed through a series of experiments to identify its various characteristics.
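As a simplified stand-in for the keyword extraction step (not the paper's style-based extraction or keyword mining profile), the sketch below ranks keywords per document by TF-IDF weight over a hypothetical document set.

```python
# Illustrative only: a plain TF-IDF keyword extractor over hypothetical documents,
# used here as a stand-in for a keyword extraction step.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "distributed data mining over large web logs",
    "keyword extraction for web search profiles",
    "profile based ranking of search results",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)
terms = vectorizer.get_feature_names_out()

# Top-3 keywords per document by TF-IDF weight.
for i, doc in enumerate(documents):
    row = tfidf[i].toarray().ravel()
    top = row.argsort()[::-1][:3]
    print(doc, "->", [terms[j] for j in top])
```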

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.1-18
    • /
    • 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as images, sound, video, and text are distributed through Web media. Therefore, many attempts have been made in recent years to discover new value through the analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative means for users to express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is being widely studied not only in academia but also in industry, because it can be used to extract various issues from text such as news articles and social network service (SNS) posts and to analyze the trends of these issues. Conventionally, issue tracking is used to identify major issues sustained over a long period of time through topic modeling and to analyze the detailed distribution of documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that can be created, merged, divided, and deleted between periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period of time. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking. Note that detailed keywords are preferable to general keywords because the former can be clues for devising actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period and generated an issue flow diagram based on the similarity of each issue between two consecutive periods. The issue transition pattern among categories was analyzed using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them. In the experiment section, we propose the following useful application scenarios for the issue flow diagram. First, we can identify an issue that actively appears during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can be easily discovered from the issue flow diagram, which implies that our methodology can be used to discover associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis. We found that a pair of mutually similar categories induces two-way transitions, whereas one-way transitions can be recognized as an indicator that issues in a certain category tend to be influenced by issues in another category. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. In addition, not only the number of documents but also additional meta-information such as read counts, posting times, and comments should be analyzed. A rigorous performance evaluation or validation of the proposed methodology should be performed in future work.
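One building block of such an issue flow diagram, linking issues across consecutive periods by the similarity of their keyword distributions, might be sketched as follows; the vocabulary, issue vectors, and similarity threshold are hypothetical and do not come from the paper.

```python
# Sketch of one building block: connect issues of period t to issues of period t+1
# when the cosine similarity of their keyword-weight vectors exceeds a threshold.
import numpy as np

vocab = ["north_korea", "nuclear_test", "separated_families", "economy", "election"]
# Rows: issues of period t and period t+1 as keyword-weight vectors over the same vocabulary.
issues_t  = np.array([[0.6, 0.4, 0.0, 0.0, 0.0],   # issue A
                      [0.0, 0.0, 0.0, 0.7, 0.3]])  # issue B
issues_t1 = np.array([[0.5, 0.1, 0.4, 0.0, 0.0],   # issue A'
                      [0.0, 0.0, 0.0, 0.2, 0.8]])  # issue B'

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Draw an edge in the issue flow diagram when similarity exceeds a chosen threshold.
threshold = 0.5
for i, a in enumerate(issues_t):
    for j, b in enumerate(issues_t1):
        sim = cosine(a, b)
        if sim >= threshold:
            print(f"issue {i} (t) -> issue {j} (t+1): similarity {sim:.2f}")
```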

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to those who request it. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is increasingly expected to be performed by the demanders of the analysis themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually increasing. The emergence of new Web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is regarded as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document set. Thus, the entire set must be analyzed at once to identify the topic of each document, which leads to long analysis times when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents are divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first being combined. However, despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling research. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegated documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We also verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to those of topic modeling over the entire set, and we also propose a reasonable method for comparing the results of the two approaches.
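A rough sketch of the divide-and-conquer idea, with LDA standing in for the topic model, might look as follows: fit topics per local set and on a reduced global set over a shared vocabulary, then map each local topic to its most similar global topic by cosine similarity of the term distributions. The corpus, set sizes, and topic counts are illustrative assumptions, not the paper's experimental setup.

```python
# Rough sketch: fit LDA per local document set and on a reduced global set (RGS),
# then map each local topic to its most similar global topic by cosine similarity.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stock market inflation economy", "election vote parliament policy",
        "market trade economy growth", "policy vote campaign election",
        "economy interest rate bank", "parliament law vote debate"]
local_sets = [docs[:3], docs[3:]]          # two hypothetical local sets
reduced_global = [docs[0], docs[3]]        # delegated documents standing in for the RGS

vec = CountVectorizer()
vec.fit(docs)                              # shared vocabulary so topics are comparable

def topic_matrix(texts, n_topics=2):
    X = vec.transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    # Normalize rows so each topic is a distribution over terms.
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

global_topics = topic_matrix(reduced_global)
for s, local in enumerate(local_sets):
    for t, topic in enumerate(topic_matrix(local)):
        sims = global_topics @ topic / (
            np.linalg.norm(global_topics, axis=1) * np.linalg.norm(topic))
        print(f"local set {s}, topic {t} -> global topic {int(sims.argmax())}")
```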