• Title/Summary/Keyword: SPARK 플랫폼

Design and Implementation of a Search Engine based on Apache Spark (아파치 스파크 기반 검색엔진의 설계 및 구현)

  • Park, Ki-Sung;Choi, Jae-Hyun;Kim, Jong-Bae;Park, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.17-28 / 2017
  • Research on data has recently become very active because data itself has grown more valuable. The web crawler, a program for data collection, has attracted attention because it can be applied in many fields. A web crawler can be defined as a tool that traverses web servers in an automated manner, analyzes web pages, and collects URLs. For big data processing, distributed web crawlers based on Hadoop MapReduce are widely used, but they are difficult to use and have performance constraints. Apache Spark, an in-memory computing platform, is an alternative to MapReduce. A search engine, one of the main applications of a web crawler, displays information retrieved by keyword from the data the crawler has gathered. If a search engine adopts a Spark-based web crawler instead of a traditional MapReduce-based one, it can collect data more rapidly.
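
The abstract above contrasts MapReduce-based and Spark-based crawling. Purely as an illustration of the Spark-based direction (not the authors' implementation), the sketch below distributes URL fetching across Spark workers with PySpark; the seed URLs and the link-extraction regex are placeholders.

```python
# Illustrative sketch of one Spark-distributed crawl hop (not the paper's code).
import re
import urllib.request

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SparkCrawlerSketch").getOrCreate()
sc = spark.sparkContext

seed_urls = ["https://example.com/", "https://example.org/"]  # hypothetical seeds

def fetch_links(url):
    """Download one page and return (page, outgoing link) pairs; skip failures."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            html = resp.read().decode("utf-8", errors="ignore")
        return [(url, link) for link in re.findall(r'href="(https?://[^"]+)"', html)]
    except Exception:
        return []

# Fetching runs in parallel on the executors; the result is cached in memory so the
# next hop does not re-read it from disk, which is Spark's advantage over a
# MapReduce-style crawler.
edges = sc.parallelize(seed_urls, numSlices=len(seed_urls)).flatMap(fetch_links).cache()
next_frontier = edges.map(lambda pair: pair[1]).distinct().collect()
print(f"Discovered {len(next_frontier)} candidate URLs for the next crawl hop")
```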

Performance Optimization Strategies for Fully Utilizing Apache Spark (아파치 스파크 활용 극대화를 위한 성능 최적화 기법)

  • Myung, Rohyoung;Yu, Heonchang;Choi, Sukyong
    • KIPS Transactions on Computer and Communication Systems / v.7 no.1 / pp.9-18 / 2018
  • Enhancing the performance of big data analytics in distributed environments has become an important issue because most big data applications, such as machine learning and streaming services, rely on distributed computing frameworks, and optimizing the performance of such applications on Spark has therefore been actively researched. Performance optimization in a distributed environment is challenging because it requires not only optimizing the applications themselves but also tuning the configuration parameters of the distributed system. Although prior research has made great efforts to improve execution performance, most studies focused on only one of three optimization aspects: application design, system tuning, or hardware utilization, and thus could not orchestrate all of them together. In this paper, we analyze and model the application processing procedure of Spark in depth. Based on this analysis, we propose performance optimization schemes for each step of the procedure, covering both the inner stage and the outer stage. We also propose an appropriate partitioning mechanism by analyzing the relationship between partitioning parallelism and application performance. We applied these three optimization schemes to WordCount, PageRank, and K-means, which are basic big data analytics workloads, and observed nearly 50% performance improvement when all schemes were applied.
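
The abstract ties performance to partitioning parallelism and evaluates WordCount among its workloads. As a minimal sketch of that tuning idea (the parallelism heuristic and file paths are assumptions, not the paper's derived settings), a WordCount can set its partition count explicitly instead of accepting Spark's defaults:

```python
# WordCount with explicit partitioning, sketching the parallelism-tuning idea.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PartitionedWordCount").getOrCreate()
sc = spark.sparkContext

# A common rule of thumb (an assumption here, not the paper's tuned value) is to
# use a small multiple of the total available cores as the partition count.
num_partitions = sc.defaultParallelism * 3

lines = sc.textFile("hdfs:///data/corpus.txt", minPartitions=num_partitions)  # hypothetical path
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b, numPartitions=num_partitions))

counts.saveAsTextFile("hdfs:///out/wordcount")  # hypothetical output path
```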

A GPU-based Filter Algorithm for Noise Improvement in Realtime Ultrasound Images (실시간 초음파 영상에서 노이즈 개선을 위한 GPU 기반의 필터 알고리즘)

  • Cho, Young-Bok;Woo, Sung-Hee
    • Journal of Digital Contents Society / v.19 no.6 / pp.1207-1212 / 2018
  • Ultrasound imaging transmits ultrasonic pulses, receives the reflected waves, and constructs the images needed for diagnosis. When the signal becomes weak, noise is generated and slight differences in brightness occur. In addition, ultrasound images are characterized by fluctuation due to breathing and by real-time changes in motion. Such noise is difficult to recognize and diagnose visually during analysis. In this paper, morphological features are automatically extracted from acquired ultrasound images using image processing techniques, and a fast GPU-based filter is implemented on a cloud big data processing platform. With the GPU-based high-performance filter, the algorithm ran 4.7 times faster than the CPU-based version, and the PSNR was 37.2 dB, which is very close to the original image.
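
The 37.2 dB figure quoted above is a peak signal-to-noise ratio. For reference, a plain NumPy version of the PSNR computation (not the paper's GPU filter) might look like this; the toy images are stand-ins for ultrasound frames:

```python
import numpy as np

def psnr(original, filtered, max_val=255.0):
    """Peak signal-to-noise ratio in dB between an original and a filtered image."""
    mse = np.mean((original.astype(np.float64) - filtered.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy usage with random 8-bit images standing in for ultrasound frames.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
noisy = np.clip(img + rng.normal(0, 3, img.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(img, noisy):.1f} dB")
```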

Personal Recommendation Service Design Through Big Data Analysis on Science Technology Information Service Platform (과학기술정보 서비스 플랫폼에서의 빅데이터 분석을 통한 개인화 추천서비스 설계)

  • Kim, Dou-Gyun
    • Journal of the Korean BIBLIA Society for library and Information Science / v.28 no.4 / pp.501-518 / 2017
  • Reducing the time it takes for researchers to acquire knowledge and apply it to their research activities is an indispensable factor in improving research productivity. The purpose of this study is to cluster the information usage patterns of KOSEN users and to propose a method for optimizing a personalized recommendation algorithm for each user group. After identifying appropriate services and contents based on users' research activities and usage information, we applied Spark-based big data analysis technology to derive a personalized recommendation algorithm. Such an algorithm can save the time users spend searching for information and can help them find the information they need.
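
The abstract does not name the specific recommendation model, so the following is only one plausible Spark-based sketch: an ALS collaborative-filtering model from Spark MLlib trained on hypothetical KOSEN usage counts.

```python
# Hedged sketch: ALS collaborative filtering on made-up usage data.
from pyspark.ml.recommendation import ALS
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("KosenRecommendSketch").getOrCreate()

# (user_id, content_id, usage_count) rows; entirely hypothetical sample data.
usage = spark.createDataFrame(
    [(1, 101, 3.0), (1, 102, 1.0), (2, 101, 5.0), (2, 103, 4.0), (3, 103, 2.0)],
    ["user_id", "content_id", "usage_count"],
)

als = ALS(userCol="user_id", itemCol="content_id", ratingCol="usage_count",
          rank=10, maxIter=5, implicitPrefs=True, coldStartStrategy="drop")
model = als.fit(usage)

# Top-3 content recommendations per user.
model.recommendForAllUsers(3).show(truncate=False)
```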

Development of Big-data Management Platform Considering Docker Based Real Time Data Connecting and Processing Environments (도커 기반의 실시간 데이터 연계 및 처리 환경을 고려한 빅데이터 관리 플랫폼 개발)

  • Kim, Dong Gil;Park, Yong-Soon;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.4 / pp.153-161 / 2021
  • Handling continuous, unstructured data requires real-time access and flexible management under dynamic conditions. A platform for data collection, storage, and processing can be built on either a single server or multiple servers. The former, centralized approach is easy to control but creates overload problems because all processing runs on one node; the latter, distributed approach performs parallel processing, so it responds quickly and scales easily, but its design is more complex. This paper takes the distributed approach and provides data collection and processing on a single platform so that significant insights can be derived from the various data held by an enterprise or agency, presents the results intuitively on dashboards, and uses Spark to improve distributed processing performance. All services are distributed and managed with Docker. The data used in this study was collected entirely through Kafka; for a 4.4 GB file, processing in Spark cluster mode took 2 minutes 15 seconds, about 3 minutes 19 seconds faster than local mode.
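
As a rough sketch of the Kafka-to-Spark path described above (the broker address, topic name, and connector package version are assumptions, not the platform's actual configuration), Structured Streaming can subscribe to a topic and count records per micro-batch:

```python
# Minimal Kafka ingestion sketch; needs the spark-sql-kafka connector on the classpath,
# e.g. spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark-version>.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("KafkaIngestSketch").getOrCreate()

raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "kafka:9092")  # hypothetical broker (e.g. a Docker service name)
            .option("subscribe", "ingest-topic")               # hypothetical topic
            .load())

# Decode the Kafka value and keep a running record count, just to show the flow end to end.
counts = raw.select(F.col("value").cast("string").alias("payload")).groupBy().count()

query = (counts.writeStream
               .outputMode("complete")
               .format("console")
               .start())
query.awaitTermination()
```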

Operational Big Data Analytics platform for Smart Factory (스마트팩토리를 위한 운영빅데이터 분석 플랫폼)

  • Bae, Hyerim;Park, Sanghyuck;Choi, Yulim;Joo, Byeongjun;Sutrisnowati, Riska Asriana;Pulshashi, Iq Reviessay;Putra, Ahmad Dzulfikar Adi;Adi, Taufik Nur;Lee, Sanghwa;Won, Seokrae
    • The Journal of Bigdata / v.1 no.2 / pp.9-19 / 2016
  • Since ICT convergence became a major issue, the German government has pursued its 'Industry 4.0' policy, which drove the convergence of ICT with manufacturing, and this trend is now in full swing. From it we can expect a great leap toward higher quality at lower cost. Recently the Korean government has also pushed its 'Manufacturing 3.0' policy to upgrade the Korean manufacturing industry, accelerated by many related technologies. In this paper, we developed a custom-made operational big data analysis platform that implements operational intelligence to improve industrial capability. The platform is designed as a web application on the Spring framework. In addition, HDFS and Spark allow the system to analyze massive field data, with streamed data processed by a process mining algorithm. The knowledge extracted from the data supports improvement of manufacturing performance.
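
The platform above feeds streamed manufacturing logs to a process mining algorithm. As a simplified, batch-style illustration (the HDFS path and log schema are assumptions), the sketch below computes the directly-follows relation, a basic building block of process discovery, with Spark SQL:

```python
# Directly-follows relation over a hypothetical event log (case_id, activity, timestamp).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("ProcessMiningSketch").getOrCreate()

events = spark.read.csv("hdfs:///logs/manufacturing_events.csv",  # hypothetical path
                        header=True, inferSchema=True)

# For each case, pair every activity with the one that follows it in time.
w = Window.partitionBy("case_id").orderBy("timestamp")
follows = (events.withColumn("next_activity", F.lead("activity").over(w))
                 .where(F.col("next_activity").isNotNull())
                 .groupBy("activity", "next_activity")
                 .count())

follows.orderBy(F.desc("count")).show()
```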

A Design on a Streaming Big Data Processing System (스트리밍 빅데이터 처리 시스템 설계)

  • Kim, Sungsook;Kim, GyungTae;Park, Kiejin
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.99-101 / 2015
  • The large volumes of structured and unstructured stream data pouring in from various sensor devices cannot be handled by a conventional single streaming processing system alone. Spark, which processes large-scale data in cluster memory rather than on disk, is a platform that secures strong data consistency and real-time responsiveness despite being a distributed system. In this study, to address the shortage of memory space and the real-time parallel processing problems that arise when analyzing large-scale stream data, we configured the system to use cluster memory for distributed processing of large data and real-time stream processing at the same time. Through experiments, we confirmed the performance difference between the conventional batch processing approach and the proposed system.
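
For context, a minimal in-memory streaming aggregation in current Spark looks like the sketch below; it uses the built-in rate source as a stand-in for the sensor stream (the 2015 paper predates Structured Streaming and would have used the DStream API of the time).

```python
# Windowed aggregation over a synthetic stream, standing in for sensor data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("StreamAggSketch").getOrCreate()

# The built-in "rate" source emits (timestamp, value) rows at a fixed rate.
stream = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# Aggregate in memory over 10-second event-time windows.
agg = (stream.groupBy(F.window("timestamp", "10 seconds"))
             .agg(F.count(F.lit(1)).alias("events"), F.avg("value").alias("avg_value")))

query = (agg.writeStream
            .outputMode("complete")
            .format("console")
            .start())
query.awaitTermination()
```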

Safety Autonomous Platform Design with Ensemble AI Models (앙상블 인공지능 모델을 활용한 안전 관리 자율운영 플랫폼 설계)

  • Dongyeop Lee;Daesik Lim;Soojeong Woo;Youngho Moon;Minjeong Kim;Joonwon Lee
    • Journal of Advanced Navigation Technology / v.28 no.1 / pp.159-162 / 2024
  • This paper proposes a novel safety autonomous platform (SAP) architecture that can automatically and precisely manage on-site safety through ensemble artificial intelligence models, which estimate a risk index from video information, workers' biometric information, and safety rules. We designed the proposed SAP architecture on the Hadoop ecosystem with Kafka/NiFi, Spark/Hive, Hue, the ELK stack (Elasticsearch, Logstash, Kibana), Ansible, and so on, and confirmed that it works well with safety mobility gateways to provide various safety applications.
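
The paper combines video, biometric, and safety-rule signals into a risk index through ensemble models. Purely to illustrate the combining step (the weights and scores below are invented and are not the paper's models), a weighted ensemble can be as simple as:

```python
# Hypothetical weighted ensemble of three per-source risk scores in [0, 1].
def risk_index(video_score: float, biometric_score: float, rule_score: float,
               weights=(0.4, 0.3, 0.3)) -> float:
    """Combine per-source risk scores into a single site risk index."""
    scores = (video_score, biometric_score, rule_score)
    return sum(w * s for w, s in zip(weights, scores))

# Example: elevated video risk, moderate biometric risk, high rule-violation risk.
print(round(risk_index(0.7, 0.5, 0.9), 2))  # -> 0.7
```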

A System Design for Real-Time Monitoring of Patient Waiting Time based on Open-Source Platform (오픈소스 플랫폼 기반의 실시간 환자 대기시간 모니터링 시스템 설계)

  • Ryu, Wooseok
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.4 / pp.575-580 / 2018
  • This paper discusses a system for real-time monitoring of patient waiting time in hospitals based on open-source platforms. To develop a high-performance stream processing system that analyzes and processes stream data in real time at low cost, it is necessary to make use of open-source projects. The Hadoop ecosystem is a well-known big data processing platform consisting of numerous open-source subprojects. This paper first defines several requirements for the monitoring system and selects a few projects from the Hadoop ecosystem that are suited to meeting them. It then proposes a system architecture and a detailed module design using Apache Spark, Apache Kafka, and so on. The proposed system reduces development costs by using open-source projects and by acquiring data from the legacy hospital information system, and it achieves high performance and fault tolerance through distributed processing.
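
To make the Spark/Kafka design above concrete, the hedged sketch below computes average waiting time per department over five-minute windows with Structured Streaming; the topic name, broker address, and event schema are assumptions, not the paper's module design.

```python
# Hypothetical waiting-time aggregation from a Kafka stream of reception events.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("WaitTimeMonitorSketch").getOrCreate()

schema = StructType([                      # assumed event layout from the hospital system
    StructField("patient_id", StringType()),
    StructField("department", StringType()),
    StructField("registered_at", TimestampType()),
    StructField("called_at", TimestampType()),
])

raw = (spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
            .option("subscribe", "reception-events")              # hypothetical topic
            .load())

events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

waits = (events
         .withColumn("wait_minutes",
                     (F.col("called_at").cast("long") - F.col("registered_at").cast("long")) / 60.0)
         .withWatermark("called_at", "10 minutes")
         .groupBy(F.window("called_at", "5 minutes"), "department")
         .agg(F.avg("wait_minutes").alias("avg_wait_minutes")))

query = waits.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```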

Development of IoT Service Classification Method based on Service Operation Characteristic (세부 동작 기반 사물인터넷 서비스 분류 기법 개발)

  • Jo, Jeong hoon;Lee, HwaMin;Lee, Dae won
    • Journal of Internet Computing and Services / v.19 no.2 / pp.17-26 / 2018
  • With the emergence and convergence of Internet services, unified Internet of Things (IoT) service platforms have recently been researched. Currently, each IoT service is built as an independent system according to the service provider's purpose, so information exchange and module reuse are impossible among similar services. In this paper, we propose an operation-based service classification algorithm for various services in order to provide a unified Internet platform environment. In our implementation, we classify and cluster more than 100 commercial IoT services. Based on this, we evaluate the performance of the proposed algorithm against the K-means algorithm, and we re-cluster the results with K-means to avoid single clusters caused by the small number of sample groups. In future work, we will expand the existing set of sample services and run the implemented classification system on Apache Spark for faster processing of much larger data.
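
Since the abstract plans to move the classification onto Apache Spark with K-means, a minimal Spark MLlib K-means sketch follows; the three numeric features describing each service are invented for illustration.

```python
# K-means clustering of IoT services on Spark MLlib, with made-up feature columns.
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("IoTServiceClusteringSketch").getOrCreate()

# Hypothetical per-service operation features (e.g. share of sensing / actuation / notification steps).
services = spark.createDataFrame(
    [("svc-001", 0.8, 0.1, 0.1),
     ("svc-002", 0.2, 0.7, 0.1),
     ("svc-003", 0.1, 0.1, 0.8),
     ("svc-004", 0.7, 0.2, 0.1)],
    ["service_id", "sensing", "actuation", "notification"],
)

assembler = VectorAssembler(inputCols=["sensing", "actuation", "notification"],
                            outputCol="features")
features = assembler.transform(services)

kmeans = KMeans(k=2, seed=42, featuresCol="features")
model = kmeans.fit(features)
model.transform(features).select("service_id", "prediction").show()
```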