• Title/Summary/Keyword: big data tasks

98 search results

Big Numeric Data Classification Using Grid-based Bayesian Inference in the MapReduce Framework

  • Kim, Young Joon;Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.313-321
    • /
    • 2014
  • In the current era of data-intensive services, the handling of big data is a crucial issue that affects almost every discipline and industry. In this study, we propose a classification method for large volumes of numeric data, which is implemented in a distributed programming framework, i.e., MapReduce. The proposed method partitions the data space into a grid structure and it then models the probability distributions of classes for grid cells by collecting sufficient statistics using distributed MapReduce tasks. The class labeling of new data is achieved by k-nearest neighbor classification based on Bayesian inference.
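The grid-partition-then-count idea in this abstract can be sketched on a single machine. The snippet below is an illustrative stand-in, not the paper's MapReduce implementation: a "map" step assigns each training point to a grid cell, a "reduce" step aggregates per-cell class counts (the sufficient statistics), and labeling ranks classes by their counts in the nearest populated cells. All data and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

def cell_of(point, cell_size=1.0):
    # Map a numeric point to its grid-cell index.
    return tuple(int(x // cell_size) for x in point)

def build_grid_stats(data):
    # Reduce step: per-cell class counts (sufficient statistics).
    stats = defaultdict(Counter)
    for point, label in data:
        stats[cell_of(point)][label] += 1
    return stats

def classify(point, stats, k=3):
    # Label by class counts in the k nearest populated cells,
    # a crude surrogate for the paper's k-NN Bayesian labeling.
    cells = sorted(stats, key=lambda c: sum((a - b) ** 2
                   for a, b in zip(c, cell_of(point))))
    votes = Counter()
    for c in cells[:k]:
        votes.update(stats[c])
    return votes.most_common(1)[0][0]

train = [((0.2, 0.3), "A"), ((0.8, 0.1), "A"),
         ((5.1, 5.2), "B"), ((5.6, 5.9), "B")]
stats = build_grid_stats(train)
print(classify((0.5, 0.5), stats, k=1))  # -> A
```

In a real MapReduce job, `build_grid_stats` would be split into mappers emitting `(cell, label)` pairs and reducers summing the counts per cell.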

Design of Distributed Cloud System for Managing large-scale Genomic Data

  • Seine Jang;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.2
    • /
    • pp.119-126
    • /
    • 2024
  • The volume of genomic data is constantly increasing in various modern industries and research fields. This growth presents new challenges and opportunities in terms of the quantity and diversity of genetic data. In this paper, we propose a distributed cloud system for integrating and managing large-scale gene databases. By introducing a distributed data storage and processing system based on the Hadoop Distributed File System (HDFS), various formats and sizes of genomic data can be efficiently integrated. Furthermore, by leveraging Spark on YARN, efficient management of distributed cloud computing tasks and optimal resource allocation are achieved. This establishes a foundation for the rapid processing and analysis of large-scale genomic data. Additionally, by utilizing BigQuery ML, machine learning models are developed to support genetic search and prediction, enabling researchers to more effectively utilize data. It is expected that this will contribute to driving innovative advancements in genetic research and applications.

Research on the introduction and use of Big Data for trade digital transformation (무역 디지털 트랜스포메이션을 위한 빅데이터 도입 및 활용에 관한 연구)

  • Joon-Mo Jung;Yoon-Say Jeong
    • Korea Trade Review
    • /
    • v.47 no.3
    • /
    • pp.57-73
    • /
    • 2022
  • Digital transformation refers to the process of convergence and change in the economy and industry that accompanies the development of digital technology and its combination with new technologies. Specifically, it means innovating existing businesses and services by utilizing information and communication technologies such as big data analytics, the Internet of Things, cloud computing, and artificial intelligence. Digital transformation is reshaping business and has a wide impact on companies and consumers across all industries. Among its drivers, the big data and analytics market is emerging as one of the most important growth engines. Integrating intelligent data into an existing business is one of the key tasks of digital transformation, and to operate a data-based business efficiently it is important to collect and monitor data and to learn from what is collected. In developed countries, research on new business models that use the various data accumulated by governments and private companies is being actively conducted. In Korea, however, although the trade and import/export data collected in the public sector are being accumulated in various types and ranges, models for analyzing and utilizing them are still in their infancy. We are now living in an era of massive amounts of big data. This study discusses the value of the trade big data accumulated from the past to the present, and suggests a strategy for activating trade big data for trade digital transformation as well as new directions for future trade big data research.

A Study of the Effects of the Internal Characteristics of Fashion Brand Salespeople on Core Sales Tasks (패션브랜드 판매원의 내적특성이 판매 중심직무에 미치는 영향에 관한 연구)

  • Oh, Hyun-Jeong
    • Human Ecology Research
    • /
    • v.59 no.3
    • /
    • pp.311-324
    • /
    • 2021
  • The purpose of this study was to reveal the effects of internal characteristics, such as fashion involvement, personality characteristics, and customer orientation of fashion brand salespeople, on core sales tasks, and to examine how the core sales tasks and internal characteristics differ depending on how salespeople are remunerated. The data were collected through a questionnaire administered to fashion brand salespeople in Gwangju from September to October 2020. Using 235 responses, the data were analyzed with SPSS 21.0 for frequency analysis, reliability analysis, t-test, factor analysis, and regression analysis. The research results were as follows. First, fashion involvement comprises factors such as 'fashion passion and sense' and 'fashion trend interest', and the greater the 'fashion passion and sense', the better the 'sales management' and 'customer relationship management' tasks. Second, 'esthetic openness', 'responsibility', and 'extroversion' among the Big Five personality characteristics have a positive impact on 'sales management' and 'customer relationship management' tasks. Third, customer orientation comprises factors such as 'customer-centric understanding' and 'gaining customer trust'; the greater the customer orientation, the better the 'sales management' and 'customer relationship management' tasks. Fourth, by position, the group of professional salespeople at manager level scored highly on core sales tasks, fashion involvement, customer orientation, and characteristics such as 'agreeableness', 'esthetic openness', and 'responsibility'.

Research on the Development of Big Data Analysis Tools for Engineering Education (공학교육 빅 데이터 분석 도구 개발 연구)

  • Kim, Younyoung;Kim, Jaehee
    • Journal of Engineering Education Research
    • /
    • v.26 no.4
    • /
    • pp.22-35
    • /
    • 2023
  • As information and communication technology has developed remarkably, it has become possible to analyze various types of large-volume data generated at a speed close to real time, and reliable value creation based on such analysis has become possible. Big data analysis is thus becoming an important means of supporting decision-making based on scientific figures. The purpose of this study is to develop a big data analysis tool that can analyze the large amounts of data generated through engineering education. The tasks of this study are as follows. First, a database is designed to store the information of entries in the National Creative Capstone Design Contest. Second, the pre-processing required for analysis with the big data analysis tool is checked. Finally, the data are analyzed using the developed tool. In this study, 1,784 works submitted to the National Creative Capstone Design Contest from 2014 to 2019 were analyzed. When the top 10 words were selected through topic analysis, 'robot' ranked first every year from 2014 to 2019, and energy, drones, ultrasound, solar energy, and IoT appeared with high frequency. This result seems to reflect the core topics and technology trends of the 4th Industrial Revolution. In addition, given the nature of the Capstone Design Contest, students majoring in electrical/electronic engineering, computer/information and communication engineering, mechanical engineering, and chemical/new materials engineering, who can submit complete products for problem solving, appear to have been selected. The significance of this study is that its results can be used in the field of engineering education as basic data for developing educational content and teaching methods that reflect industry and technology trends. Furthermore, the results of big data analysis related to engineering education are expected to serve as a means of preparing preemptive countermeasures when establishing education policies that reflect social changes.
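The frequency step behind a top-10-words topic analysis like the one described can be sketched with a simple keyword count. The titles below are invented examples, not actual contest entries, and the stopword list is a placeholder.

```python
from collections import Counter

titles = [
    "autonomous delivery robot using IoT",
    "solar energy harvesting drone",
    "ultrasound sensing robot arm",
]
stopwords = {"using", "a", "the", "and"}

# Tokenize titles, drop stopwords, and count keyword frequency.
words = [w for t in titles for w in t.lower().split()
         if w not in stopwords]
top = Counter(words).most_common(3)
print(top)  # 'robot' appears most often
```

A full topic analysis would add morphological analysis (for Korean titles) and a topic model such as LDA on top of these raw counts.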

A Study on Automation of Big Data Quality Diagnosis Using Machine Learning (머신러닝을 이용한 빅데이터 품질진단 자동화에 관한 연구)

  • Lee, Jin-Hyoung
    • The Journal of Bigdata
    • /
    • v.2 no.2
    • /
    • pp.75-86
    • /
    • 2017
  • In this study, I propose a method to automate the quality diagnosis of big data. The reason for automating quality diagnosis is that, as the Fourth Industrial Revolution becomes an issue, there is a growing demand for ever larger volumes of data to be generated and utilized, and data is growing rapidly. However, if diagnosing the quality of the data takes a long time, utilization of the data can be delayed or its quality may deteriorate. Decisions or predictions made from such low-quality data will also point in the wrong direction. To solve this problem, I developed a model that automates quality diagnosis for big data using machine learning, which can diagnose and improve the data quickly. Machine learning is used to automate domain classification tasks, preventing errors that may occur during domain classification and reducing work time. Building on these results, continued research on the importance of data conversion, on learning methods for unseen data, and on the development of classification models for each domain can contribute to improving data quality for big data utilization.
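The domain classification task at the heart of this quality-diagnosis pipeline can be illustrated with a rule-based stand-in. The paper uses machine learning for this step; the sketch below only shows the shape of the task, assigning each column value to a domain before validity checks, and the domain names and patterns are invented.

```python
import re

# Invented domain patterns; a trained classifier would replace these rules.
PATTERNS = {
    "date":  re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "phone": re.compile(r"^\d{2,3}-\d{3,4}-\d{4}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def classify_domain(value):
    # Assign a value to the first matching domain, else flag it.
    for domain, pattern in PATTERNS.items():
        if pattern.match(value):
            return domain
    return "unknown"

print(classify_domain("2017-05-01"))     # date
print(classify_domain("010-1234-5678"))  # phone
```

Values classified as "unknown" (or values that disagree with their column's majority domain) are exactly the candidates a quality-diagnosis step would flag for review.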


Design of a Large-scale Task Dispatching & Processing System based on Hadoop (하둡 기반 대규모 작업 배치 및 처리 기술 설계)

  • Kim, Jik-Soo;Cao, Nguyen;Kim, Seoyoung;Hwang, Soonwook
    • Journal of KIISE
    • /
    • v.43 no.6
    • /
    • pp.613-620
    • /
    • 2016
  • This paper presents MOHA (Many-Task Computing on Hadoop), a framework that aims to effectively apply the Many-Task Computing (MTC) technologies originally developed for high-performance processing of many tasks to the existing big data processing platform Hadoop. We present the basic concepts, motivation, preliminary proof-of-concept results based on a distributed message queue, and future research directions of MOHA. MTC applications may have relatively low I/O requirements per task, but a very large number of tasks must be processed efficiently, with potentially heavy file-based inter-task communication. MTC applications can therefore exhibit a data-intensive workload pattern different from existing Hadoop applications, which are typically based on relatively large data block sizes. Through an effective convergence of MTC and big data technologies, MOHA can support large-scale scientific applications within the Hadoop ecosystem, which is evolving into a multi-application platform.
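The queue-based dispatching pattern behind the proof of concept can be mimicked in-process: a dispatcher pushes many small tasks into a shared queue and workers pull them concurrently. This is a toy single-machine stand-in for MOHA's distributed message queue, with an invented squaring task standing in for real short-running jobs.

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    # Pull tasks until a None sentinel arrives.
    while True:
        item = tasks.get()
        if item is None:
            break
        results.put(item * item)  # stand-in for a short compute task

# Dispatcher: start a small worker pool, enqueue tasks, then sentinels.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)
for t in threads:
    t.join()

print(sorted(results.queue))  # squares of 0..9
```

In MOHA the queue is a distributed message broker and the workers are processes spread across Hadoop nodes, but the dispatch/pull shape is the same.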

An Effective Data Model for Forecasting and Analyzing Securities Data

  • Lee, Seung Ho;Shin, Seung Jung
    • International journal of advanced smart convergence
    • /
    • v.5 no.4
    • /
    • pp.32-39
    • /
    • 2016
  • Machine learning is a field of artificial intelligence (AI), and a technology that collects, forecasts, and analyzes securities data is developed upon it. The difference between using machine learning and not using it is that machine learning, which may seem similar to big data, studies and collects data by itself, which big data cannot do. Machine learning can be utilized, for example, to recognize a certain pattern of an object and find a criminal or a vehicle used in a crime. To achieve such intelligent tasks, data must be collected more effectively than before. In this paper, we propose a method of effectively collecting data.

Dynamics-Based Location Prediction and Neural Network Fine-Tuning for Task Offloading in Vehicular Networks

  • Yuanguang Wu;Lusheng Wang;Caihong Kai;Min Peng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.12
    • /
    • pp.3416-3435
    • /
    • 2023
  • Task offloading in vehicular networks is a hot topic in the development of autonomous driving. In these scenarios, because of the roles of vehicles and pedestrians, task characteristics change constantly. Classical deep learning approaches always use a pre-trained neural network to optimize task offloading, which leads to degraded system performance. This paper therefore proposes a neural network fine-tuning task offloading algorithm that combines location prediction for pedestrians and vehicles, using the Payne model of fluid dynamics and a car-following model, respectively. Once locations are predicted, task characteristics can be obtained and the neural network is fine-tuned. The proposed algorithm continuously predicts task characteristics and fine-tunes the neural network to maintain high system performance and meet low-delay requirements. Simulation results show that, compared with other algorithms, the proposed algorithm guarantees a lower task offloading delay, especially when congestion occurs.
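The car-following side of the location prediction can be illustrated with a deliberately simplified update rule (this is not the paper's exact Payne or car-following formulation): each follower brakes when its gap to the leader is below a safe distance and accelerates otherwise, and all positions are advanced one time step. The parameter values are invented.

```python
def step(positions, speeds, dt=1.0, safe_gap=10.0, accel=1.0):
    # positions/speeds are ordered leader-first along the road.
    new_speeds = speeds[:]
    for i in range(1, len(positions)):      # index 0 is the leader
        gap = positions[i - 1] - positions[i]
        if gap < safe_gap:
            # Too close: decelerate (never below standstill).
            new_speeds[i] = max(0.0, speeds[i] - accel * dt)
        else:
            new_speeds[i] = speeds[i] + accel * dt
    new_positions = [p + v * dt for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

pos, spd = [100.0, 95.0], [10.0, 12.0]
pos, spd = step(pos, spd)
print(pos)  # follower decelerates: the 5 m gap is below safe_gap
```

Iterating `step` yields predicted vehicle locations a few steps ahead, which is the kind of input the proposed algorithm uses to re-estimate task characteristics before fine-tuning.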

Labeling Big Spatial Data: A Case Study of New York Taxi Limousine Dataset

  • AlBatati, Fawaz;Alarabi, Louai
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.6
    • /
    • pp.207-212
    • /
    • 2021
  • Clustering unlabeled spatial datasets to convert them into labeled spatial datasets is a challenging task, especially for geographical information systems. In this study we investigated the NYC Taxi and Limousine Commission dataset and found that all of its spatio-temporal trajectories are unlabeled, which makes them unsuitable for data mining tasks such as classification and regression. It is therefore necessary to convert the unlabeled spatial datasets into labeled ones, and in this study we use a clustering technique to do so for all of the trajectory datasets. A key difficulty in applying machine learning classification algorithms in many applications is that they require large labeled datasets, and labeling big data is in many cases a costly process. In this paper, we show the effectiveness of utilizing a clustering technique for labeling spatial data that leads to a high-accuracy classifier.
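The cluster-then-label idea can be sketched with a tiny k-means on 2-D points, where the resulting cluster index becomes the class label. This is an illustrative stand-in on invented coordinates, not the NYC TLC pipeline or its actual clustering algorithm.

```python
import math

def kmeans(points, centers, iters=10):
    # Plain Lloyd's algorithm: assign points to the nearest center,
    # then recompute each center as its group's mean.
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g
                   else centers[i] for i, g in enumerate(groups)]
    # Final cluster index doubles as the label for each point.
    labels = [min(range(len(centers)),
                  key=lambda c: math.dist(p, centers[c]))
              for p in points]
    return centers, labels

pts = [(0.0, 0.0), (0.5, 0.2), (9.0, 9.0), (9.5, 8.8)]
centers, labels = kmeans(pts, [(0.0, 0.0), (9.0, 9.0)])
print(labels)  # two spatial groups, two labels
```

Once every trajectory point carries a cluster label, the formerly unlabeled dataset can feed standard supervised classifiers, which is the effect the abstract reports.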