• Title/Summary/Keyword: Big Data Processing

A Deep Learning Application for Automated Feature Extraction in Transaction-based Machine Learning (트랜잭션 기반 머신러닝에서 특성 추출 자동화를 위한 딥러닝 응용)

  • Woo, Deock-Chae;Moon, Hyun Sil;Kwon, Suhnbeom;Cho, Yoonho
    • Journal of Information Technology Services
    • /
    • v.18 no.2
    • /
    • pp.143-159
    • /
    • 2019
  • Machine learning (ML) fits given data to a mathematical model to derive insights or make predictions. In the age of big data, where the amount of available data increases exponentially due to the development of information technology and smart devices, ML shows high prediction performance through unbiased pattern detection. Feature engineering, which generates the features that can explain the problem to be solved, has a great influence on the performance of the ML process, and its importance is continuously emphasized. Despite this importance, it is still considered a difficult task, as it requires a thorough understanding of the domain characteristics and the source data as well as an iterative procedure. We therefore propose methods that apply deep learning to reduce the complexity and difficulty of feature extraction and to improve the performance of ML models. A key reason for the superior performance of deep learning on complex unstructured data is that, unlike other techniques, it can extract features from the source data itself. To apply this advantage to business problems, we propose deep learning based methods that automatically extract features from transaction data or directly predict and classify target variables. In particular, exploiting the structural similarity between transaction data and text data, we applied techniques that perform well in existing text processing, and we verified the suitability of each method according to the characteristics of the transaction data. Our study not only explores the possibility of automated feature extraction but also yields a benchmark model that shows a certain level of performance before a human performs the feature extraction task. It is also expected to provide guidelines for choosing a suitable deep learning model based on the business problem and the data characteristics.
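
The core idea, extracting features from raw transaction data the way text models extract them from word sequences, can be pictured with a small sketch. The following assumes transactions are encoded as padded sequences of item IDs and uses illustrative layer sizes; it is not the authors' model.

```python
import torch
import torch.nn as nn

class TransactionEncoder(nn.Module):
    """Encodes a sequence of transaction item IDs the way a text CNN
    encodes word IDs, so features are learned rather than hand-engineered."""
    def __init__(self, n_items, emb_dim=32, n_filters=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(n_items, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.head = nn.Linear(n_filters, n_classes)

    def forward(self, item_ids):                       # (batch, seq_len)
        x = self.embed(item_ids)                       # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, n_filters, seq_len)
        feats = x.max(dim=2).values                    # automatically extracted features
        return self.head(feats)                        # direct target prediction

model = TransactionEncoder(n_items=10_000)
logits = model(torch.randint(1, 10_000, (8, 50)))      # 8 transactions of 50 items
```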

Lambda Architecture Using Apache Kudu and Impala (Apache Kudu와 Impala를 활용한 Lambda Architecture 설계)

  • Hwang, Yun-Young;Lee, Pil-Won;Shin, Yong-Tae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.9
    • /
    • pp.207-212
    • /
    • 2020
  • The amount of data has increased significantly due to advances in technology, and various big data processing platforms are emerging to handle it. Among them, the most widely used platform is Hadoop, developed by the Apache Software Foundation, and Hadoop is also used in the IoT field. However, the existing Hadoop-based environment for collecting and analyzing IoT sensor data suffers from the small-file problem of HDFS, Hadoop's core file system, which overloads the name node, and it cannot update or delete data once imported. This paper designs a Lambda Architecture using Apache Kudu and Impala. The proposed architecture classifies IoT sensor data into cold data and hot data, stores each in storage suited to its characteristics, and serves a batch view created through batch processing alongside a real-time view generated through Apache Kudu and Impala. This resolves the problems of the existing Hadoop-based IoT sensor data collection and analysis environment and shortens the time it takes users to access the analyzed data.
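
For a sense of why Kudu and Impala suit the speed layer, the hedged sketch below creates a Kudu-backed table through Impala and upserts a sensor reading; unlike files on HDFS, the row can later be updated or deleted. The host, table, and column names are hypothetical.

```python
from impala.dbapi import connect  # pip install impyla

conn = connect(host="impala-daemon.example.com", port=21050)
cur = conn.cursor()

# Hot data lands in a Kudu-backed table so recent readings stay mutable.
cur.execute("""
    CREATE TABLE IF NOT EXISTS sensor_hot (
        sensor_id BIGINT,
        ts        BIGINT,
        temp      DOUBLE,
        PRIMARY KEY (sensor_id, ts)
    )
    PARTITION BY HASH (sensor_id) PARTITIONS 4
    STORED AS KUDU
""")

# Upsert a new reading; a later correction is just another UPSERT or UPDATE,
# which plain HDFS files cannot support.
cur.execute("UPSERT INTO sensor_hot VALUES (42, 1700000000, 21.5)")

# The serving layer can query this real-time view together with a batch
# view (e.g., a Parquet table on HDFS holding the cold data).
cur.execute("SELECT sensor_id, AVG(temp) FROM sensor_hot GROUP BY sensor_id")
print(cur.fetchall())
```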

A Study on a Real Time Presentation Method for Playing of a Multimedia mail on Internet (인터넷상의 동영상 메일을 재생하기 위한 실시간 연출 기법 연구)

  • Im, Yeong-Hwan;Lee, Seon-Hye
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.4
    • /
    • pp.877-890
    • /
    • 1999
  • In this paper, multimedia mail including video, sound, and graphic data is proposed as the next generation of text-based mail. In developing multimedia mail, the most outstanding problem is that the multimedia data are too large to send directly to the receiving end. Such large data may cause many problems in both transferring and storing the multimedia mail. Our main idea is to separate the control program for the multimedia presentation from the multimedia data themselves. Since a control program is as small as a plain text mail, it can be attached to an internet mail and sent directly to the receiver. The large multimedia data may instead remain on the sender's computer or be sent to a designated server, so that the data are transferred to the receiver only when the receiver plays the multimedia mail. In this scheme, our research focus is placed on buffer management and thread scheduling for real-time play of multimedia mail on the internet. Another problem is providing an easy way for ordinary people with no programming knowledge to edit a multimedia presentation. For this purpose, the VIP (Visual Interface Player) has been used, and the results for multimedia mail implemented on a LAN are described.
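
The buffer management the paper focuses on can be pictured as a bounded producer/consumer pipeline: a fetcher thread downloads media chunks while a player thread consumes them at the presentation rate. The toy sketch below (chunk contents and rates are illustrative) shows the pattern.

```python
import queue
import threading
import time

buffer = queue.Queue(maxsize=32)          # bounded: fetcher blocks when full

def fetch(n_chunks):
    for i in range(n_chunks):
        chunk = f"frame-{i}"              # stands in for downloaded media data
        buffer.put(chunk)                 # blocks if the player falls behind
    buffer.put(None)                      # end-of-stream marker

def play(fps=30):
    period = 1.0 / fps
    while (chunk := buffer.get()) is not None:
        time.sleep(period)                # present one chunk per frame period

threading.Thread(target=fetch, args=(300,), daemon=True).start()
play()
```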

Korean Collective Intelligence in Sharing Economy Using R Programming: A Text Mining and Time Series Analysis Approach (R프로그래밍을 활용한 공유경제의 한국인 집단지성: 텍스트 마이닝 및 시계열 분석)

  • Kim, Jae Won;Yun, You Dong;Jung, Yu Jin;Kim, Ki Youn
    • Journal of Internet Computing and Services
    • /
    • v.17 no.5
    • /
    • pp.151-160
    • /
    • 2016
  • The purpose of this research is to investigate Korean popular attitudes toward and social perceptions of the term 'sharing economy' from a creative and socio-economic point of view. By applying text mining within a big data analysis approach, this study discovers and interprets the objective, tangible annual changes and patterns in sociocultural collective intelligence that have taken place in Korea over the last five years. Through crawling and Google searches, the study collected a significant amount of time-series web metadata on the theme of the sharing economy from 2010 to 2014. The huge amounts of raw data concerning the sharing economy were then processed into meaningful, value-added word clouds and related graphs and figures using the word clouding functions of R programming. Despite the current lack of accumulated data and collective intelligence about the sharing economy, it is worth noting that this study carried out preliminary research on conducting a time-series big data analysis from the perspective of knowledge management and processing. The results of this study can thus be utilized as fundamental data for understanding the academic and industrial aspects of future sharing economy-related markets and consumer behavior.
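
The paper performs word clouding in R; for consistency with the other examples here, the same step is sketched in Python using the wordcloud package, with hypothetical stand-in documents for the crawled 2010 to 2014 metadata.

```python
from collections import Counter

from wordcloud import WordCloud  # pip install wordcloud

docs_by_year = {
    2010: ["sharing economy platform trust", "car sharing service"],
    2014: ["sharing economy regulation airbnb", "uber sharing economy"],
}

for year, docs in docs_by_year.items():
    # Word frequencies per year give the time-series view the study builds.
    freqs = Counter(word for doc in docs for word in doc.split())
    cloud = WordCloud(width=640, height=400).generate_from_frequencies(freqs)
    cloud.to_file(f"sharing_economy_{year}.png")  # one cloud per year
```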

EXECUTE REAL-TIME PROCESSING IN RTOS ON 8BIT MCU WITH TEMP AND HUMIDITY SENSOR

  • Kim, Ki-Su;Lee, Jong-Chan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.11
    • /
    • pp.21-27
    • /
    • 2019
  • Recently, embedded systems have been introduced in various fields such as smart factories, industrial drones, and medical robots. Since sensor data collection and IoT functions for machine learning and big data processing are essential in embedded systems, porting an operating system suited to these functional requirements is essential as well. In embedded systems, however, one must distinguish between hard real-time systems, which must complete processing within a fixed time according to service characteristics, and soft real-time systems, which are more flexible about processing time. It is difficult to port an operating system to a low-performance embedded device such as an 8-bit MCU and still perform concurrent real-time processing. When a real-time OS (RTOS) is ported to such a low-specification MCU and runs many tasks, the performance of both real-time and general processing deteriorates greatly, forcing a redesign of the hardware and software whenever a hard real-time system is required. Research is therefore needed on technology that can satisfy hard real-time processing requirements on an RTOS ported to a low-performance MCU.

Parallel k-Modes Algorithm for Spark Framework (스파크 프레임워크를 위한 병렬적 k-Modes 알고리즘)

  • Chung, Jaehwa
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.10
    • /
    • pp.487-492
    • /
    • 2017
  • Clustering is a technique used to group data by similarity in the fields of big data analysis and data mining. Among various clustering methods, the k-Modes algorithm is representative for categorical data. To speed up iteration-centric tasks such as k-Modes, the distributed, concurrent framework Spark has recently received great attention because it overcomes the limitations of Hadoop. Spark provides an environment that can process large amounts of data in main memory using abstract objects called RDDs. Spark also provides MLlib, a dedicated machine learning library, but MLlib includes only k-means, which handles continuous data, so categorical data cannot be clustered with it. In this paper, we design RDD operations for the k-Modes algorithm for categorical data clustering in the Spark environment and implement an algorithm that operates effectively. Experiments show that the proposed algorithm scales linearly in the Spark environment.
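
The iteration the paper parallelizes can be sketched with plain RDD operations: assign each record to the nearest mode by Hamming distance, then recompute each mode as the per-attribute majority. The sketch below is illustrative, not the authors' implementation, and uses a toy data set with k = 2.

```python
from collections import Counter

from pyspark import SparkContext

sc = SparkContext(appName="kmodes-sketch")
data = sc.parallelize([
    ("red", "small", "round"), ("red", "small", "square"),
    ("blue", "large", "round"), ("blue", "large", "square"),
]).cache()

def hamming(a, b):
    """k-Modes dissimilarity: count of mismatched categorical attributes."""
    return sum(x != y for x, y in zip(a, b))

modes = data.takeSample(False, 2, seed=1)          # k = 2 initial modes
for _ in range(5):                                  # fixed iteration budget
    # Assignment step: each record goes to its nearest mode.
    assigned = data.map(
        lambda rec: (min(range(len(modes)),
                         key=lambda i: hamming(rec, modes[i])), rec)
    )
    # Update step: new mode = most frequent category per attribute per cluster.
    modes = [
        tuple(Counter(col).most_common(1)[0][0] for col in zip(*recs))
        for _, recs in sorted(assigned.groupByKey().mapValues(list).collect())
    ]
print(modes)
```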

Categorical Variable Selection in Naïve Bayes Classification (단순 베이즈 분류에서의 범주형 변수의 선택)

  • Kim, Min-Sun;Choi, Hosik;Park, Changyi
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.3
    • /
    • pp.407-415
    • /
    • 2015
  • Naïve Bayes classification is based on the assumption that input variables are conditionally independent given the output variable. The naïve Bayes assumption is unrealistic, but it simplifies the problem of high-dimensional joint probability estimation into a series of univariate probability estimations. Thus the naïve Bayes classifier is often adopted in the analysis of massive data sets, such as in spam e-mail filtering and recommendation systems. In this paper, we propose a variable selection method based on the χ² statistic on input and output variables. The proposed method retains the simplicity of the naïve Bayes classifier in terms of data processing and computation while still selecting relevant variables. It is expected to be useful in classification problems for ultra-high dimensional or big data, such as the classification of diseases based on single nucleotide polymorphisms (SNPs).
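
A minimal version of this screening step, assuming the χ² test of independence between each categorical input and the output, can be sketched as follows; the data, significance threshold, and use of scikit-learn's CategoricalNB are illustrative choices, not the paper's exact procedure.

```python
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# Hypothetical SNP-style data; a real run would use many more rows and variables.
df = pd.DataFrame({
    "snp1": ["AA", "AG", "GG", "AA", "AG", "GG"] * 10,
    "snp2": ["CC", "CC", "CT", "TT", "CT", "CC"] * 10,
    "disease": [1, 1, 0, 1, 0, 0] * 10,
})

# Keep only variables whose χ² independence test with the output is significant.
selected = [
    col for col in ["snp1", "snp2"]
    if chi2_contingency(pd.crosstab(df[col], df["disease"]))[1] < 0.05
]

X = OrdinalEncoder().fit_transform(df[selected])   # categories -> integer codes
clf = CategoricalNB().fit(X, df["disease"])
print(selected, clf.score(X, df["disease"]))
```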

WV-BTM: A Technique on Improving Accuracy of Topic Model for Short Texts in SNS (WV-BTM: SNS 단문의 주제 분석을 위한 토픽 모델 정확도 개선 기법)

  • Song, Ae-Rin;Park, Young-Ho
    • Journal of Digital Contents Society
    • /
    • v.19 no.1
    • /
    • pp.51-58
    • /
    • 2018
  • As the number of SNS users and the amount of SNS data have explosively increased, research based on SNS big data has become active. In social mining, Latent Dirichlet Allocation (LDA), a typical topic modeling technique, is used to identify the similarity of texts in unclassified, large-volume SNS text data and to extract trends from them. However, LDA has the limitation that it is difficult to infer high-level topics because of the semantic sparsity caused by infrequent word occurrences in short texts. The Biterm Topic Model (BTM) improved on this limitation of LDA by modeling pairs of words. However, BTM also has a limitation: because it is influenced more by the higher-frequency word in each pair, it cannot compute weights that reflect each word's relation to the topics. In this paper, we propose a technique that improves the accuracy of the existing BTM by reflecting the semantic relations between words.
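
One plausible way to realize this with word vectors (which the "WV" in WV-BTM suggests) is to weight each biterm by the cosine similarity of its two words' embeddings, so semantically related pairs count more. The sketch below uses gensim's Word2Vec on a toy corpus; the paper's actual weighting scheme may differ.

```python
from itertools import combinations

from gensim.models import Word2Vec  # pip install gensim

docs = [
    ["coffee", "cafe", "morning"],
    ["coffee", "espresso", "cafe"],
    ["election", "vote", "policy"],
]

w2v = Word2Vec(docs, vector_size=50, min_count=1, window=2, seed=7)

weighted_biterms = []
for doc in docs:
    for w1, w2 in combinations(doc, 2):            # biterms within a short text
        sim = float(w2v.wv.similarity(w1, w2))     # cosine similarity in [-1, 1]
        weighted_biterms.append(((w1, w2), max(sim, 0.0)))

# Downstream, a BTM sampler could draw biterms in proportion to these weights
# instead of treating every co-occurring pair equally.
print(sorted(weighted_biterms, key=lambda t: -t[1])[:3])
```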

RDP: A storage-tier-aware Robust Data Placement strategy for Hadoop in a Cloud-based Heterogeneous Environment

  • Muhammad Faseeh Qureshi, Nawab;Shin, Dong Ryeol
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.9
    • /
    • pp.4063-4086
    • /
    • 2016
  • Cloud computing is a robust technology that facilitates resolving many parallel and distributed computing issues in the modern big data environment. Hadoop is an ecosystem that processes large data sets in a distributed computing environment. HDFS, the Hadoop file system, distributes data blocks across the cluster nodes. Data block placement has become a bottleneck for overall performance in a Hadoop cluster. The current placement policy assumes that all Datanodes have equal computing capacity to process data blocks, where computing capacity covers the availability of the same storage media and the same processing performance on each node. As a result, Hadoop cluster performance suffers from unbalanced workloads, inefficient storage tiers, network traffic congestion, and HDFS integrity issues. This paper proposes a storage-tier-aware Robust Data Placement (RDP) scheme that systematically resolves unbalanced workloads, reduces network congestion to an optimal state, utilizes storage tiers effectively, and minimizes HDFS integrity issues. The experimental results show that the proposed approach reduced the unbalanced workload issue by 72%. Moreover, the presented approach resolved the storage-tier compatibility problem by 81% by predicting storage for block jobs and improved overall data block placement by 78% through pre-calculated computing capacity allocations and execution of map files over the respective Namenode and Datanodes.
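
To make the storage-tier-aware idea concrete, the toy sketch below scores Datanodes by tier fit and current load and places a block's replicas on the best-scoring nodes. It illustrates the general principle only; the node specs, weights, and scoring are hypothetical, not the RDP algorithm.

```python
from dataclasses import dataclass

TIER_SPEED = {"ssd": 3.0, "hdd": 1.0}      # relative throughput per tier

@dataclass
class DataNode:
    name: str
    tier: str
    load: float                             # fraction of capacity in use, 0..1

def place_block(nodes, replicas=3, hot=True):
    """Prefer fast tiers for hot blocks; cold blocks tolerate slow tiers."""
    def score(n):
        tier_fit = TIER_SPEED[n.tier] if hot else 1.0 / TIER_SPEED[n.tier]
        return tier_fit * (1.0 - n.load)    # penalize already-loaded nodes
    return sorted(nodes, key=score, reverse=True)[:replicas]

cluster = [
    DataNode("dn1", "ssd", 0.80), DataNode("dn2", "ssd", 0.20),
    DataNode("dn3", "hdd", 0.10), DataNode("dn4", "hdd", 0.50),
]
print([n.name for n in place_block(cluster, hot=True)])   # dn2, dn3, dn1
```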

Treatment Planning in Smart Medical: A Sustainable Strategy

  • Hao, Fei;Park, Doo-Soon;Woo, Sang Yeon;Min, Se Dong;Park, Sewon
    • Journal of Information Processing Systems
    • /
    • v.12 no.4
    • /
    • pp.711-723
    • /
    • 2016
  • With the rapid development of both ubiquitous computing and the mobile internet, big data technology is gradually penetrating various applications, such as smart traffic, smart cities, and smart medical care. In particular, smart medical care, a core part of the smart city, is changing the structure of medicine; specifically, it is improving treatment planning for various diseases. Since the multiple treatment plans generated in smart medical care each have their own treatment costs, pollution effects, side effects for patients, and so on, determining a sustainable strategy for treatment planning is becoming critical. From this sustainability point of view, this paper first presents a three-dimensional evaluation model for representing the raw medical data and then proposes a sustainable strategy for treatment planning based on that representation model. Finally, a case study on treatment planning for a group of "computer autism" patients is presented to demonstrate the feasibility and usability of the proposed strategy.
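
A minimal way to picture ranking plans in a three-dimensional evaluation model is a weighted score over cost, pollution effect, and side effects. The weights, scales, and plans below are hypothetical and only illustrate the kind of trade-off such a strategy must settle.

```python
PLANS = {
    # plan: (cost, pollution, side_effects), each normalized to 0..1, lower = better
    "plan_a": (0.6, 0.2, 0.3),
    "plan_b": (0.3, 0.5, 0.4),
    "plan_c": (0.8, 0.1, 0.1),
}
WEIGHTS = (0.4, 0.3, 0.3)  # relative importance of the three dimensions

def sustainability_score(dims):
    """Lower weighted sum = more sustainable treatment plan."""
    return sum(w * d for w, d in zip(WEIGHTS, dims))

best = min(PLANS, key=lambda p: sustainability_score(PLANS[p]))
print(best, sustainability_score(PLANS[best]))
```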