• Title/Summary/Keyword: Big Data Cluster

Search Result 210

Data Transmitting and Storing Scheme based on Bandwidth in Hadoop Cluster (하둡 클러스터의 대역폭을 고려한 압축 데이터 전송 및 저장 기법)

  • Kim, Youngmin;Kim, Heejin;Kim, Younggwan;Hong, Jiman
    • Smart Media Journal
    • /
    • v.8 no.4
    • /
    • pp.46-52
    • /
    • 2019
  • The volume of data generated and collected at industrial sites and in public institutions is growing rapidly. Conventional data processing servers typically cope with this growth by scaling up, that is, by upgrading the performance of a single machine. In the big data era, however, when data are generated at an explosive rate, a single server cannot keep pace. To overcome this limitation, distributed cluster computing systems have been introduced that spread data across machines in a scale-out manner. Because such systems distribute data over the network, inefficient use of network bandwidth can degrade the performance of the cluster as a whole. In this paper, we propose a scheme that compresses data before transmission in a Hadoop cluster, taking network bandwidth into account. The proposed scheme considers the available bandwidth and the characteristics of each compression algorithm, and selects the optimal compression and transmission method before sending. Experimental results show that the proposed scheme reduces both data transfer time and data size.
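The selection step can be illustrated with a simple cost model: for each candidate codec, estimate compression time plus transfer time of the compressed payload, and pick the cheapest option. This is a minimal sketch of the idea; the codec ratios and throughputs below are illustrative assumptions, not measurements from the paper.

```python
# Choose a compression scheme by estimated total transfer cost.
# Codec ratio/throughput numbers are illustrative placeholders.
CODECS = {
    "none":   {"ratio": 1.00, "mb_per_s": float("inf")},
    "snappy": {"ratio": 0.50, "mb_per_s": 400.0},
    "gzip":   {"ratio": 0.30, "mb_per_s": 60.0},
}

def total_time(size_mb, bandwidth_mb_s, codec):
    """Compression time plus network transfer time of the compressed data."""
    spec = CODECS[codec]
    compress = 0.0 if spec["mb_per_s"] == float("inf") else size_mb / spec["mb_per_s"]
    transfer = (size_mb * spec["ratio"]) / bandwidth_mb_s
    return compress + transfer

def pick_codec(size_mb, bandwidth_mb_s):
    """Return the codec that minimizes estimated end-to-end time."""
    return min(CODECS, key=lambda c: total_time(size_mb, bandwidth_mb_s, c))
```

On a fast network the compression overhead of a heavy codec dominates, so no compression wins; on a slow link the higher-ratio codec wins.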

Big Data Astronomy: Large-scale Graph Analyses of Five Different Multiverses

  • Hong, Sungryong
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.43 no.2
    • /
    • pp.36.3-37
    • /
    • 2018
  • By utilizing large-scale graph analytic tools in the modern Big Data platform Apache Spark, we investigate the topological structures of five different multiverses produced by cosmological n-body simulations with various cosmological initial conditions: (1) one standard universe, (2) two different dark energy states, and (3) two different dark matter densities. For the Big Data calculations, we use a custom-built stand-alone Spark cluster at KIAS and the Dataproc Compute Engine in Google Cloud Platform, with sample sizes ranging from 7 million to 200 million. Among many graph statistics, we find that three simple graph measurements, denoted (1) $n_k$, (2) $\tau_\Delta$, and (3) $n_{S\ge5}$, can efficiently discern different topologies in discrete point distributions. We denote this set of three graph diagnostics by kT5+. These kT5+ statistics provide a quick look at various orders of n-point correlation functions in a computationally cheap way: (1) $n = 2$ by $n_k$, (2) $n = 3$ by $\tau_\Delta$, and (3) $n \ge 5$ by $n_{S\ge5}$.
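On a single machine, the flavor of these diagnostics can be sketched on a toy point set: link points closer than a threshold into a graph, then count triangles (related to $\tau_\Delta$) and connected components with at least five members (related to $n_{S\ge5}$). This is a minimal pure-Python sketch of the idea, not the authors' Spark implementation.

```python
from itertools import combinations

def build_graph(points, link_len):
    """Link every pair of 2-D points closer than link_len."""
    adj = {i: set() for i in range(len(points))}
    for i, j in combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= link_len ** 2:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def triangle_count(adj):
    """Count triangles, each exactly once (at its smallest vertex)."""
    return sum(1 for i in adj for j, k in combinations(sorted(adj[i]), 2)
               if j > i and k > i and k in adj[j])

def components_at_least(adj, size):
    """Number of connected components with >= size vertices (DFS)."""
    seen, count = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], 0
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp += 1
            stack.extend(adj[v] - seen)
        if comp >= size:
            count += 1
    return count
```

A tight knot of five points yields one component of size five and ten triangles, while isolated points contribute nothing, which is what makes such counts cheap discriminators of clustering topology.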


Image Machine Learning System using Apache Spark and OpenCV on Distributed Cluster (Apache Spark와 OpenCV를 활용한 분산 클러스터 컴퓨팅 환경 대용량 이미지 머신러닝 시스템)

  • Hayoon Kim;Wonjib Kim;Hyeopgeon Lee;Young Woon Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.33-34
    • /
    • 2023
  • The growing big data market and the exponential increase in the amount of big data make data processing difficult in conventional computing environments. In particular, image data processing slows down markedly as the volume of data grows. This paper therefore proposes a large-scale image machine learning system for a distributed cluster computing environment using Apache Spark and OpenCV. The proposed system builds a distributed cluster with Apache Spark and performs its work using OpenCV's image processing algorithms together with Spark MLlib's machine learning algorithms. Through the proposed system, this paper presents a way to speed up large-scale image data processing and machine learning tasks.

Improved TI-FCM Clustering Algorithm in Big Data (빅데이터에서 개선된 TI-FCM 클러스터링 알고리즘)

  • Lee, Kwang-Kyug
    • Journal of IKEEE
    • /
    • v.23 no.2
    • /
    • pp.419-424
    • /
    • 2019
  • The FCM algorithm finds an optimal solution through an iterative optimization technique. In particular, its execution time varies with the initial cluster centers, the location of noise, and the location and number of dense regions. Because the method updates the center points only gradually, the initial cluster centers can drift to one side. In this paper, we propose a TI-FCM (Triangular Inequality-Fuzzy C-Means) clustering algorithm that determines cluster center density by maximizing the distance between clusters using the triangle inequality. Compared with FCM, the proposed method converges effectively to the true clusters even on large data sets. Experiments show that execution time is reduced compared with the existing FCM.
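The iterative center update that TI-FCM accelerates can be made concrete with the standard fuzzy c-means loop below (fuzzifier m = 2, 1-D data). This is the plain FCM baseline, not the paper's TI variant; TI-FCM adds triangle-inequality tests on top of such a loop to skip redundant point-to-center distance computations.

```python
def fcm(points, k, m=2.0, iters=50, eps=1e-6):
    """Plain fuzzy c-means on 1-D data; returns the final centers sorted.

    TI-FCM would wrap the distance computations below with
    triangle-inequality pruning to avoid recomputing them all.
    """
    n = len(points)
    centers = [points[i * n // k] for i in range(k)]  # naive initialization
    for _ in range(iters):
        # Membership u[i][j] of point i in cluster j (standard FCM update).
        u = []
        for x in points:
            d = [abs(x - c) + eps for c in centers]  # eps avoids div-by-zero
            u.append([1.0 / sum((d[j] / d[l]) ** (2 / (m - 1))
                                for l in range(k)) for j in range(k)])
        # Centers: membership-weighted means.
        centers = [sum(u[i][j] ** m * points[i] for i in range(n)) /
                   sum(u[i][j] ** m for i in range(n)) for j in range(k)]
    return sorted(centers)
```

With two well-separated 1-D clumps, the centers converge close to the clump means regardless of the soft memberships.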

A Study on the Data Collection Methods based Hadoop Distributed Environment (하둡 분산 환경 기반의 데이터 수집 기법 연구)

  • Jin, Go-Whan
    • Journal of the Korea Convergence Society
    • /
    • v.7 no.5
    • /
    • pp.1-6
    • /
    • 2016
  • Many studies have recently been carried out to develop big data utilization and analysis technology. Government agencies and companies are increasingly introducing Hadoop as a processing platform for analyzing big data. With this growing interest in big data processing and analysis, data collection technology has become a major issue in parallel. Compared with research on data analysis techniques, however, research on collection techniques remains insufficient. Therefore, in this paper, we build a Hadoop cluster as a big data analysis platform and collect structured data from relational databases through Apache Sqoop. In addition, we provide a system that collects unstructured data, such as sensor streams and the log files of Web applications, through Apache Flume. The data gathered through this converged collection can serve as source material for big data analysis.
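The structured-data pull described above is typically a Sqoop import invocation. The sketch below composes one as an argument list; the JDBC URL, credentials, table, and target directory are hypothetical placeholders, while the flags themselves (`--connect`, `--username`, `--table`, `--target-dir`, `--num-mappers`) are standard Sqoop import options.

```python
# Compose a Sqoop import command for pulling a relational table into HDFS.
# The JDBC URL, user, table, and target directory are hypothetical.
def sqoop_import_cmd(jdbc_url, table, target_dir, user, num_mappers=4):
    return [
        "sqoop", "import",
        "--connect", jdbc_url,        # JDBC connection string
        "--username", user,
        "--table", table,             # source table in the RDBMS
        "--target-dir", target_dir,   # destination directory in HDFS
        "--num-mappers", str(num_mappers),  # parallel map tasks
    ]

cmd = sqoop_import_cmd("jdbc:mysql://dbhost/sales", "orders",
                       "/user/hadoop/orders", "etl")
```

Such a list can be handed directly to a process runner; Flume would cover the streaming/log side of the pipeline with a separate agent configuration.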

Study on the Application of Big Data Mining to Activate Physical Distribution Cooperation : Focusing AHP Technique (물류공동화 활성화를 위한 빅데이터 마이닝 적용 연구 : AHP 기법을 중심으로)

  • Young-Hyun Pak;Jae-Ho Lee;Kyeong-Woo Kim
    • Korea Trade Review
    • /
    • v.46 no.5
    • /
    • pp.65-81
    • /
    • 2021
  • Technological development in the era of the 4th industrial revolution is changing the paradigm of various industries. Technologies such as big data, cloud computing, artificial intelligence, virtual reality, and the Internet of Things create synergy with existing industries, driving radical development and value creation. Among these industries, logistics has long been driven by quantitative data and has continuously accumulated and managed it, so it is well suited to big data analysis and stands to benefit greatly from it. Data mining technology has developed alongside these advances to discover hidden patterns and new correlations in big data, and it is producing meaningful results. Data mining therefore occupies an important place in big data analysis, and this study analyzes the data mining techniques that can contribute to the logistics field and to physical distribution cooperation. Using the AHP technique, we derive priorities among data mining types for efficient logistics cooperation, with R and RStudio as the analysis tools. The AHP criteria were association analysis, cluster analysis, decision trees, artificial neural networks, web mining, and opinion mining; the alternatives were joint transport and delivery, joint logistics centers, a joint logistics information system, and joint logistics partnerships.
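The AHP priority derivation at the core of such a study can be sketched in a few lines: given a reciprocal pairwise comparison matrix, approximate the principal eigenvector by the normalized row geometric means. The comparison values below are illustrative, not the paper's survey data.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priorities via normalized row geometric means."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Illustrative 3x3 reciprocal matrix: criterion A is judged 3x as
# important as B and 5x as important as C; B is 2x as important as C.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)  # sums to 1, ordered A > B > C
```

In a full AHP study one would also compute the consistency ratio of each matrix before trusting the derived weights.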

Comparing Cilk and MPI on a heterogeneous cluster system (이기종 클러스터 시스템에서 Cilk와 MPI 특성 비교)

  • Lee, Kyu-Ho;Kim, Jun-Seong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.4 s.316
    • /
    • pp.21-27
    • /
    • 2007
  • Recently, cluster systems can be built easily and economically from personal computers and network devices. Rapid technological change brings new processors to market, making cluster systems heterogeneous. A parallel system in a heterogeneous environment needs a work manager to exploit the full power of the cluster. In this paper, we compare MPI and Cilk on a heterogeneous cluster system in terms of performance and code complexity. Experimental results show that Cilk outperforms MPI for small data transfers, while MPI outperforms Cilk for large ones. We also find that Cilk requires less programming effort to write a parallel program.
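The small-versus-large crossover reported above is the classic behavior of a linear communication cost model, t(s) = latency + s/bandwidth, when one runtime has lower per-message latency and the other higher bulk bandwidth. The parameters below are illustrative assumptions to show the crossover, not the paper's measurements.

```python
def transfer_time(size_kb, latency_ms, bandwidth_kb_per_ms):
    """Linear cost model: fixed startup latency plus size / bandwidth."""
    return latency_ms + size_kb / bandwidth_kb_per_ms

def faster(size_kb):
    """Which hypothetical runtime wins at this transfer size?

    Runtime A (Cilk-like here): low latency, modest bandwidth.
    Runtime B (MPI-like here): higher latency, high bulk bandwidth.
    """
    a = transfer_time(size_kb, latency_ms=0.1, bandwidth_kb_per_ms=50.0)
    b = transfer_time(size_kb, latency_ms=1.0, bandwidth_kb_per_ms=500.0)
    return "A" if a < b else "B"
```

With these numbers the crossover sits at 50 KB: below it the low-latency runtime wins, above it the high-bandwidth one does.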

Application of Urban Computing to Explore Living Environment Characteristics in Seoul : Integration of S-Dot Sensor and Urban Data

  • Daehwan Kim;Woomin Nam;Keon Chul Park
    • Journal of Internet Computing and Services
    • /
    • v.24 no.4
    • /
    • pp.65-76
    • /
    • 2023
  • This paper uses the big data from Seoul's S-Dot sensors, which have recently drawn much attention, to identify patterns in living environment factors (PM2.5, PM10, noise) across Seoul and the urban characteristics that affect them. In other words, it proposes a big data based urban computing methodology and research direction for examining the relationship between urban characteristics and the living environments that directly affect citizens. The temporal scope is 2020 to 2021, the range for which S-Dot time series data are available, and the spatial scope is all of Seoul on a 500 m × 500 m grid. First, as part of analyzing living environment patterns, broad trends are identified through EDA, and cluster analysis is conducted based on those trends. Then, to derive the specific urban planning factors of each cluster, statistical analyses such as ANOVA, OLS, and MNL are conducted to confirm more detailed characteristics. As a result, the study identifies cluster patterns of the environmental factors (PM2.5, PM10, noise) and the urban factors that affect them, including areas whose long-term living environment values are noticeably higher or lower than those of other regions. The results can serve as a reference for urban planning and management of areas with vulnerable living environments, and as an exploratory study giving direction to the urban computing field, particularly work on environmental data.
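The ANOVA step used to compare derived clusters can be sketched with a one-way F-statistic: the ratio of between-group to within-group variance across groups of grid-cell averages. The sample values in the test are hypothetical, not the paper's S-Dot data.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F-statistic over a list of sample groups."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far group means sit from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of points around their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F indicates the clusters genuinely differ on the measured variable; identical groups give F = 0.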

Quantitative Analysis of the Size and the Structural Factors of the Feet for Elementary School Girls' Shoe Design (아동화 설계에 요구되는 치수 및 구조요인의 정량적 분석 -학령기 여아를 대상으로-)

  • Jeon, Eun-Kyung
    • Korean Journal of Human Ecology
    • /
    • v.15 no.4
    • /
    • pp.651-658
    • /
    • 2006
  • This study analyzes the foot sizes and structural factors required in the design and manufacture of elementary school girls' shoes. 371 elementary school girls in the Kyungin and Youngnam areas participated in the measurement; 25 foot items and 6 main body items were measured directly or indirectly using digital photography. The results are as follows. First, for most measured items the range of foot sizes was very wide, spanning toddlers' to adults' sizes, which shows that school girls' feet change considerably as they grow. Second, factor analysis of the 25 foot items extracted five structural factors: 'size of the foot,' 'volume of the foot,' 'height and inclination of the foot,' 'shape of the foot,' and 'inside and outside inclination of the foot.' Third, cluster analysis classified three clusters: Cluster 1 comprised 10- to 11-year-old girls with big feet, mostly in the fourth to sixth grade; Cluster 2 consisted of girls with small but voluminous feet; Cluster 3 had medium-sized, slim feet, and most 6- to 7-year-old girls belonged to it. These results imply that continuing research is needed on children's shoe production that reflects the change in elementary school girls' foot sizes as they grow. The quantitative data on foot sizes in this study can serve as basic information for the development of children's shoe designs and their production.


A Study on Extraction of Useful Information from Big dataset of Multi-attributes - Focus on Single Household in Seoul - (다속성 빅데이터로부터 유용한 정보 추출에 관한 연구 - 서울시 1인 가구를 중심으로 -)

  • Choi, Jung-Min;Kim, Kun-Woo
    • Journal of the Korean housing association
    • /
    • v.25 no.4
    • /
    • pp.59-72
    • /
    • 2014
  • This study proposes a data-mining analysis method for examining multi-attribute big data, considered more applicable in social science, using Correspondence Analysis of variables selected by AIC-based model selection. The proposed method was applied to the Seoul Survey from 2005 to 2010 to extract interesting rules and patterns on the characteristics of single households. The findings are as follows. First, the paper shows that the proposed method can be applied efficiently to a large dataset of categorical multi-attribute variables. Second, the Seoul Survey analysis found that the more dissatisfied single households are with their residential environment, the higher their tendency toward residential mobility. Third, single households fall into three types based on their demographic characteristics, and perceptions of home and preferred counselling partners differed across the three types. Fourth, the paper extracted eight significant variables from a spatially aggregated dataset that are highly correlated with the share of single households in Seoul's 25 municipal districts, and it investigated the relation between the spatial distribution of single households and their demographic statistics using six groups obtained by Cluster Analysis.
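The AIC-based model selection step can be illustrated with the defining formula, AIC = 2k − 2 ln L: among candidate models, keep the one with the lowest AIC, so extra parameters must buy enough likelihood to pay for themselves. The parameter counts and log-likelihoods below are hypothetical.

```python
def aic(k_params, log_likelihood):
    """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k_params - 2 * log_likelihood

# Hypothetical candidates: a richer model with a slightly better fit
# does not always justify its extra parameters.
candidates = {
    "simple": aic(3, -120.0),  # 3 parameters, ln L = -120.0
    "rich":   aic(9, -118.5),  # 9 parameters, ln L = -118.5
}
best = min(candidates, key=candidates.get)
```

Here the richer model gains only 1.5 in log-likelihood at the cost of six parameters, so the simpler model is selected.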