• Title/Summary/Keyword: Big Data Processing


Idea proposal of InfograaS for Visualization of Public Big-data (공공 빅데이터의 시각화를 위한 InfograaS의 아이디어 제안)

  • Cha, Byung-Rae; Lee, Hyung-Ho; Sim, Su-Jeong; Kim, Jong-Won
    • Journal of Advanced Navigation Technology / v.18 no.5 / pp.524-531 / 2014
  • In this paper, we propose processing and analyzing linked open data (LOD), a kind of big data, using cloud computing resources. LOD is web-based open data intended for the sharing and reuse of public data. In particular, we define InfograaS (Infographics as a Service), a new business area within SaaS (Software as a Service), to support visualization techniques for business analytics (BA) and infographics. The goal of this study is to make these techniques easy for non-specialists and beginners to use without relying on visualization or business-analysis experts. Data visualization is the process of representing data visually so that the results of analysis are easy to understand; its purpose is to deliver information clearly and effectively through charts and figures. Public big data is processed with open-source tools such as Hadoop, R, machine learning, and data mining on cloud computing resources, and the results are shared as easily understood charts and graphics.
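
As a rough illustration of the kind of chart-based output the paper envisions (not the InfograaS implementation itself), a minimal Python/matplotlib sketch follows; the regions and values are invented:

```python
# Minimal sketch (not from the paper): charting a small public-data aggregate
# the way an InfograaS-style service might. All data below is illustrative.
import matplotlib.pyplot as plt

# Hypothetical aggregated public data, e.g. the output of a Hadoop/R pipeline.
regions = ["Seoul", "Busan", "Daegu", "Incheon"]
values = [412, 187, 129, 153]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(regions, values, color="steelblue")
ax.set_title("Illustrative public-data aggregate by region")
ax.set_ylabel("Count")
fig.tight_layout()
fig.savefig("infographic.png")  # the chart a non-specialist would be shown
```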

Building an Analytical Platform of Big Data for Quality Inspection in the Dairy Industry: A Machine Learning Approach (유제품 산업의 품질검사를 위한 빅데이터 플랫폼 개발: 머신러닝 접근법)

  • Hwang, Hyunseok; Lee, Sangil; Kim, Sunghyun; Lee, Sangwon
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.125-140 / 2018
  • As one of the processes in the manufacturing industry, quality inspection examines intermediate or final products to separate good-quality goods that meet the quality-management standard from defective goods that do not. Manual quality inspection in a mass-production system may result in low consistency and efficiency, so the quality inspection of mass-produced products is automated in many processes, with machines checking and classifying the goods. Although there are many preceding studies on improving or optimizing processes using the data generated in production, actual implementation has faced many constraints owing to the technical limitations of processing a large volume of data in real time. Recent research on big data has improved data processing technology and enabled collecting, processing, and analyzing process data in real time. This paper proposes a process and the details of applying big data to quality inspection and examines the applicability of the proposed method to the dairy industry. We review previous studies and propose a big-data analysis procedure applicable to the manufacturing sector. To assess the feasibility of the proposed method, we applied two methods, a convolutional neural network and a random forest, to one of the quality inspection processes in the dairy industry: we collected, processed, and analyzed images of caps and straws in real time and then determined whether the products were defective. The results confirmed a drastic increase in classification accuracy compared to the quality inspection performed in the past.
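
As a rough illustration of the second of the two methods, here is a minimal scikit-learn random-forest sketch on stand-in image data; the image size, labels, and random features are assumptions, not the paper's dataset:

```python
# Minimal sketch (assumptions: 64x64 grayscale images flattened to vectors,
# labels 0 = good, 1 = defective). The paper uses both a CNN and a random
# forest; only the latter is sketched here, on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))   # stand-in for flattened cap/straw images
y = rng.integers(0, 2, 200)      # stand-in for good/defective labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```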

Supplier Evaluation in Green Supply Chain: An Adaptive Weight D-S Theory Model Based on Fuzzy-Rough-Sets-AHP Method

  • Li, Lianhui; Xu, Guanying; Wang, Hongguang
    • Journal of Information Processing Systems / v.15 no.3 / pp.655-669 / 2019
  • Supplier evaluation is of great significance in green supply chain management. Influenced by factors such as economic globalization and sustainable development, a holistic index framework is difficult to establish for green supply chains. Furthermore, the initial index values of candidate suppliers are often uncertain and incomplete, and the index weights are variable. To solve these problems, an index framework is established after comprehensive consideration of the major factors. An adaptive-weight D-S (Dempster-Shafer) theory model is then put forward, and a fuzzy-rough-sets-AHP method is proposed to solve for the adaptive weights in the index framework. A case study and a comparison with TOPSIS show that the proposed adaptive-weight D-S theory model is feasible and effective.
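
The paper's full adaptive-weight model is not reproduced here, but the core evidence-combination step that D-S theory builds on, Dempster's rule of combination, can be sketched in a few lines of Python; the focal elements and mass values below are invented for illustration:

```python
# Minimal sketch of Dempster's rule of combination (not the paper's full
# adaptive-weight model). Focal elements are frozensets of supplier grades;
# the mass values below are illustrative.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # intersecting focal elements reinforce each other
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:      # disjoint focal elements contribute to the conflict mass
            conflict += pa * pb
    # Normalize by (1 - K), where K is the total conflict.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

GOOD, FAIR = frozenset({"good"}), frozenset({"fair"})
BOTH = GOOD | FAIR
m1 = {GOOD: 0.6, BOTH: 0.4}             # evidence from one index
m2 = {GOOD: 0.5, FAIR: 0.2, BOTH: 0.3}  # evidence from another index
print(combine(m1, m2))
```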

A Study on Big Data Processing Technology Based on Open Source for Expansion of LIMS (실험실정보관리시스템의 확장을 위한 오픈 소스 기반의 빅데이터 처리 기술에 관한 연구)

  • Kim, Soon-Gohn
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.2 / pp.161-167 / 2021
  • A Laboratory Information Management System (LIMS) is a centralized database for storing, processing, retrieving, and analyzing laboratory data; it refers to a computer system specially designed for laboratories that perform inspection, analysis, and testing tasks. In particular, a LIMS provides functions to support laboratory operation and requires workflow management and data-tracking support. In this paper, we collect data from websites and various other channels using crawling, one of the automated big-data collection technologies, to support laboratory operation. From the collected material, test methods and content that testers can usefully apply are recommended. In addition, we implement a complementary LIMS platform capable of verifying the collection channels by managing feedback.
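
As an illustration of the collection step, a minimal crawling sketch with requests and BeautifulSoup follows; the URL and the HTML selector are hypothetical, since the paper's actual collection channels are not given here:

```python
# Minimal crawling sketch (the target URL and HTML structure are hypothetical;
# the paper's actual collection channels are not specified here).
import requests
from bs4 import BeautifulSoup

def crawl_test_methods(url):
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Assumption: each test method is listed in an <li class="test-method">.
    return [li.get_text(strip=True) for li in soup.select("li.test-method")]

for method in crawl_test_methods("https://example.org/test-methods"):
    print(method)
```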

Efficient Processing of an Aggregate Query Stream in MapReduce (맵리듀스에서 집계 질의 스트림의 효율적인 처리 기법)

  • Choi, Hyunjean; Lee, Ki Yong
    • KIPS Transactions on Software and Data Engineering / v.3 no.2 / pp.73-80 / 2014
  • MapReduce is a widely used programming model for analyzing and processing big data, and aggregate queries are among the most common query types used in big data analysis. In this paper, we propose an efficient method for processing an aggregate query stream in which many concurrent users continuously issue different aggregate queries on the same data. Instead of processing each aggregate query separately, the proposed method processes multiple aggregate queries together in a batch as a single, optimized MapReduce job. As a result, the number of queries processed per unit time increases significantly. Through various experiments, we show that the proposed method substantially outperforms a naive method that runs one job per query.
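
The paper's optimized MapReduce job is not reproduced here, but the batching idea can be sketched in plain Python: one map/reduce pass answers several aggregate queries at once by tagging every intermediate record with a query ID. The queries and records below are invented:

```python
# Minimal pure-Python sketch of batching several aggregate queries into one
# map/reduce pass (illustrative; not the paper's optimized MapReduce job).
from collections import defaultdict

records = [{"region": "A", "sales": 10}, {"region": "B", "sales": 7},
           {"region": "A", "sales": 5}]

# Two concurrent aggregate queries over the same data.
queries = {
    "q1_sum_by_region": lambda r: (r["region"], r["sales"]),
    "q2_total_count":   lambda r: ("all", 1),
}

def map_phase(recs):
    for r in recs:
        for qid, extract in queries.items():
            key, val = extract(r)
            yield (qid, key), val   # the query ID keeps results separated

def reduce_phase(pairs):
    acc = defaultdict(int)
    for key, val in pairs:
        acc[key] += val
    return dict(acc)

print(reduce_phase(map_phase(records)))
# {('q1_sum_by_region', 'A'): 15, ('q1_sum_by_region', 'B'): 7,
#  ('q2_total_count', 'all'): 3}
```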

A study on the enhancement and performance optimization of parallel data processing model for Big Data on Emissions of Air Pollutants Emitted from Vehicles (차량에서 배출되는 대기 오염 물질의 빅 데이터에 대한 병렬 데이터 처리 모델의 강화 및 성능 최적화에 관한 연구)

  • Kang, Seong-In; Cho, Sung-youn; Kim, Ji-Whan; Kim, Hyeon-Joung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.1-6 / 2020
  • Road-transport air-pollutant big data links real-time traffic data, such as vehicle type, speed, and load, collected by permanent traffic-survey equipment (AVC, VDS, WIM, and DTG) with GIS-based road-geometry data (uphill, downhill, and turning sections) to form traffic-flow data. Unlike general data, it is generated in large volumes per unit time and in various formats. In particular, because more than about 7.4 million records per hour of detailed real-time traffic-flow information must be collected, stored, and processed, a system that can process the data efficiently is required. Therefore, this study optimizes the performance of open-source-based parallel data processing for visualizing big data on air pollution from road transport.
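
As a small-scale illustration of parallel aggregation over chunked traffic records (not the study's actual open-source stack), the following sketch uses Python's multiprocessing; the record fields are assumptions:

```python
# Minimal sketch of chunked parallel aggregation (illustrative; the study's
# actual open-source platform is not reproduced). Record fields are assumed.
from multiprocessing import Pool
from collections import Counter

def count_by_vehicle_type(chunk):
    # Each record: (vehicle_type, speed_kmh, load_tons)
    return Counter(rec[0] for rec in chunk)

if __name__ == "__main__":
    data = [("truck", 62, 3.1), ("car", 88, 0.0), ("truck", 54, 2.4)] * 1000
    chunks = [data[i::4] for i in range(4)]    # split the stream for 4 workers
    with Pool(4) as pool:
        partials = pool.map(count_by_vehicle_type, chunks)
    print(sum(partials, Counter()))            # merge the partial aggregates
```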

Study of MongoDB Architecture by Data Complexity for Big Data Analysis System (빅데이터 분석 시스템 구현을 위한 데이터 구조의 복잡성에 따른 MongoDB 환경 구성 연구)

  • Hyeopgeon Lee; Young-Woon Kim; Jin-Woo Lee; Seong Hyun Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.5 / pp.354-361 / 2023
  • Big data analysis systems apply NoSQL databases like MongoDB to store, process, and analyze diverse forms of large-scale data. MongoDB offers scalability and fast data processing speeds through distributed processing and data replication, depending on its configuration. This paper investigates the suitable MongoDB environment configurations for implementing big data analysis systems. For performance evaluation, we configured both single-node and multi-node environments. In the multi-node setup, we expanded the number of data nodes from two to three and measured the performance in each environment. According to the analysis, the processing speeds for complex data structures with three or more dimensions are approximately 5.75% faster in the single-node environment compared to an environment with two data nodes. However, a setting with three data nodes processes data about 25.15% faster than the single-node environment. On the other hand, for simple one-dimensional data structures, the multi-node environment processes data approximately 28.63% faster than the single-node environment. Further research is needed to practically validate these findings with diverse data structures and large volumes of data.
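
For illustration, a minimal pymongo sketch contrasting a single-node connection with a multi-node replica-set connection; the hostnames, replica-set name, and document shape are assumptions, not the paper's benchmark setup:

```python
# Minimal sketch (hostnames and replica-set name are assumptions): connecting
# to a single MongoDB node versus a multi-node replica set with pymongo.
from pymongo import MongoClient

single = MongoClient("mongodb://mongo1:27017/")
multi = MongoClient(
    "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0"
)

db = multi.analysis
# A nested ("three-or-more-dimensional") document of the kind benchmarked.
db.readings.insert_one({"device": "d1", "log": {"day": {"hour": [1, 2, 3]}}})
print(db.readings.count_documents({}))
```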

Big Data-based Sensor Data Processing and Analysis for IoT Environment (IoT 환경을 위한 빅데이터 기반 센서 데이터 처리 및 분석)

  • Shin, Dong-Jin; Park, Ji-Hun; Kim, Ju-Ho; Kwak, Kwang-Jin; Park, Jeong-Min; Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.1 / pp.117-126 / 2019
  • The data generated in IoT environments is highly diverse. In particular, the fourth industrial revolution has increased the amount of structured and unstructured data generated in manufacturing facilities such as smart factories. With big-data solutions, large volumes of varied data can be collected, stored, processed, analyzed, and visualized quickly and accurately. Therefore, in this paper, we generate data directly using a Raspberry Pi, as used in IoT environments, and analyze it using various big-data solutions. The data is collected from a database into HDFS using Sqoop, processed in parallel with Hive on Hadoop, and finally analyzed and visualized with R for end-to-end verification.
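
As a sketch of just the data-generation step (the sensors, schema, and values below are invented, not the paper's), simulated readings can be written as CSV for downstream loading; the Sqoop/Hive/R stages are noted in comments:

```python
# Minimal sketch of the data-generation step on a Raspberry Pi-class device
# (simulated readings; the paper's actual sensors and schema are assumptions).
import csv, random, time

with open("sensor_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "temperature_c", "humidity_pct"])
    for _ in range(10):
        writer.writerow([time.time(),
                         round(random.uniform(18, 30), 2),
                         round(random.uniform(30, 70), 2)])
        time.sleep(0.1)

# Downstream (per the abstract): load the rows into a database, move them to
# HDFS with Sqoop, aggregate in parallel with Hive on Hadoop, analyze in R.
```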

Study of Efficient Algorithm for Deduplication of Complex Structure (복잡한 구조의 데이터 중복제거를 위한 효율적인 알고리즘 연구)

  • Lee, Hyeopgeon; Kim, Young-Woon; Kim, Ki-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.1 / pp.29-36 / 2021
  • The amount of data generated has been growing exponentially, and the complexity of data has been increasing owing to the advancement of information technology (IT). Big data analysts and engineers have therefore been actively conducting research to minimize the analysis targets for faster processing and analysis of big data. Hadoop, which is widely used as a big data platform, provides various processing and analysis functions, including minimization of analysis targets through Hive, which is a subproject of Hadoop. However, Hive uses a vast amount of memory for data deduplication because it is implemented without considering the complexity of data. Therefore, an efficient algorithm has been proposed for data deduplication of complex structures. The performance evaluation results demonstrated that the proposed algorithm reduces the memory usage and data deduplication time by approximately 79% and 0.677%, respectively, compared to Hive. In the future, performance evaluation based on a large number of data nodes is required for a realistic verification of the proposed algorithm.
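
The paper's algorithm itself is not reproduced here; as a generic illustration of memory-light deduplication of nested records, the following sketch hashes a canonical JSON serialization so that only fixed-size digests are kept in memory:

```python
# Minimal sketch of deduplicating complex (nested) records by hashing a
# canonical JSON serialization (illustrative; not the paper's algorithm).
import hashlib, json

def dedup(records):
    seen, unique = set(), []
    for rec in records:
        # sort_keys makes equal nested structures serialize identically.
        key = hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode()).hexdigest()
        if key not in seen:      # only the fixed-size hash is kept in memory
            seen.add(key)
            unique.append(rec)
    return unique

rows = [{"id": 1, "tags": {"a": [1, 2]}},
        {"id": 1, "tags": {"a": [1, 2]}},   # duplicate
        {"id": 2, "tags": {"a": [3]}}]
print(dedup(rows))
```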

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on social network services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society; this is an unmatched phenomenon in history, and we now live in the age of big data. SNS data qualifies as big data in that it satisfies the conditions of volume (amount of data), velocity (data input and output speed), and variety (range of data types). Discovering the trend of an issue in SNS big data can serve as an important new source of value creation because such data covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need to analyze SNS big data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) a topic keyword set corresponding to daily rankings; (2) a daily time-series graph of a topic over a month; (3) the importance of a topic, shown as a treemap based on a scoring system and frequency; and (4) a daily time-series graph of keywords retrieved by keyword search. The present study analyzes the big data generated by SNS in real time. SNS big data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to handle the many unrefined forms of unstructured data. It also requires up-to-date big data technology that can rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL databases, which are an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines, and we use MongoDB, an open-source, document-oriented NoSQL database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the age of big data, visualization is attractive to the big data community because it helps analysts examine data easily and clearly, so TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data, and its interaction model is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a collection of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS graphical user interface (GUI), designed with these libraries, can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including library and information science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
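
The full TITS pipeline (Hadoop, MongoDB, d3.js) is not reproduced here; as a minimal illustration of the topic-extraction step, the following sketch fits an LDA topic model with scikit-learn; the sample tweets are invented:

```python
# Minimal topic-modeling sketch (illustrative tweets; the TITS Hadoop/
# MongoDB/d3.js pipeline is not reproduced here).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["traffic jam downtown again", "new phone battery drains fast",
          "downtown traffic accident", "phone screen cracked battery dead"]

vec = CountVectorizer(stop_words="english")   # stop-word removal step
X = vec.fit_transform(tweets)                 # term-frequency matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:][::-1]]  # top-3 keywords
    print(f"topic {i}: {top}")
```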