• Title/Summary/Keyword: 분산 클라우드 (distributed cloud)

Search Results: 269

A Data Placement Scheme for the Characteristics of Data Intensive Scientific Workflow Applications (데이터 집약 과학 워크플로우 응용의 특성을 고려한 데이터 배치 기법)

  • Ahn, Julim; Kim, Yoonhee
    • KNOM Review / v.21 no.2 / pp.46-52 / 2018
  • In data-intensive scientific workflow experiments that leverage a cloud computing environment, large amounts of data can be distributed across multiple data centers in the cloud, and the intermediate data generated during execution may be transferred between those centers. Because this intermediate data is reused, execution performance varies with where the data is placed. Existing data placement strategies, however, do not consider the characteristics of scientific applications. In this paper, we define data-intensive task intervals and propose a runtime data placement scheme for those intervals. Using the proposed scheme, we analyze scenarios that vary the number of data-intensive tasks defined in this study and derive the results. In addition, we compare performance by analyzing runtime data placement times and runtime data placement overhead.
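
The abstract does not spell out the placement algorithm; as a rough sketch of the idea of placing data where data-intensive tasks consume it, the following Python fragment (all names and structures are illustrative, not the authors' implementation) assigns a task's data to the data center that already holds the largest share of its input bytes:

```python
# Hypothetical sketch: pick the data center that already holds most of a
# task's input bytes, so the least data has to move between centers.
def choose_center(task_inputs, centers):
    """task_inputs: {dataset_id: size_bytes}; centers: {center_id: set of dataset_ids}."""
    def local_bytes(held):
        return sum(size for ds, size in task_inputs.items() if ds in held)
    return max(centers, key=lambda c: local_bytes(centers[c]))

centers = {"dc1": {"a", "b"}, "dc2": {"c"}}
task_inputs = {"a": 4_000_000, "b": 1_000_000, "c": 2_000_000}
print(choose_center(task_inputs, centers))  # "dc1": 5 MB already local vs 2 MB
```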

An Optimization Method for Hologram Generation on Multiple GPU-based Parallel Processing (다중 GPU기반 홀로그램 생성을 위한 병렬처리 성능 최적화 기법)

  • Kook, Joongjin
    • Smart Media Journal / v.8 no.2 / pp.9-15 / 2019
  • Since the computational cost of hologram generation grows rapidly with the size of the point cloud, parallel processing on multiple GPUs using the CUDA and/or OpenCL libraries has recently become popular. A CUDA kernel must be organized into threads, blocks, and grids appropriately for the number of cores and the memory size of the GPU; in a multi-GPU environment, the workload must additionally be distributed grid-by-grid, block-by-block, or thread-by-thread according to the number of GPUs. To evaluate CGH (Computer Generated Hologram) generation performance, we compared computation speeds on a CPU, on a single GPU, and on multiple GPUs while increasing the number of points in the point cloud from 10 to 1,000,000. We also present a memory structure design and a calculation method for CUDA-based parallel processing that accelerate CGH generation in multi-GPU environments.
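
As a small illustration of the launch-geometry reasoning described above (not the paper's code), the sketch below splits a point cloud evenly across GPUs and derives a 1-D block count per device; the 256-thread block size is an assumed typical value:

```python
# Illustrative sketch: split the point cloud across GPUs, then size a
# 1-D CUDA launch per device.
import math

def plan_launch(num_points, num_gpus, threads_per_block=256):
    """Return (start_index, point_count, block_count) for each GPU."""
    per_gpu = math.ceil(num_points / num_gpus)
    plans = []
    for g in range(num_gpus):
        start = g * per_gpu
        count = max(0, min(per_gpu, num_points - start))
        blocks = math.ceil(count / threads_per_block) if count else 0
        plans.append((start, count, blocks))
    return plans

# 1,000,000 points on 4 GPUs -> 250,000 points each, 977 blocks of 256 threads
print(plan_launch(1_000_000, 4))
```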

Development and implementation of smart pipe network operating platform focused on water quality management (스마트 상수관망 수질관리 운영플랫폼 개발과 적용)

  • Dae Hee Park; Ju Hwan Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.453-453 / 2023
  • Responding to water-quality incidents and abnormal conditions in a water supply network requires control based on organic information sharing among the measurement and control facilities installed in each supply zone, such as automatic water-quality monitors, fine filtration units, rechlorination facilities, and automatic drains. The smart water network operating platform was developed as a system that provides integrated monitoring and control of these distributed measurement data, taking the operation of such infrastructure into account. By adopting active analysis and control technology, the platform was implemented to optimally control smart water network infrastructure. The integrated operating platform was developed using technologies such as PostgreSQL, PostGIS, GeoServer, and OpenLayers. It consists of measurement monitoring, facility management, and operation control functions, and supports pipe network analysis and network analysis to assist waterworks operations. The system comprises a monitoring program that provides real-time views of the operating status of the flow and water-quality monitoring equipment deployed through the smart waterworks construction project and of the rechlorination and automatic drain facilities introduced for water-quality management; a pipe network analysis program; and an operation management program for the optimal control of the target facilities. The monitoring program watches the flow, pressure, water quality, and pump operation measured in the field in real time and stores and manages the data in a cloud database. The pipe network analysis program, linked with the EPA_Net model, performs hydraulic and water-quality analysis of the network: it analyzes, in a cloud computing environment, the hydraulic and water-quality changes that occur in the pipes when additional chlorine is injected by the rechlorination facility or water is discharged through the automatic drain, and it visualizes the results. The operation management program calculates the dose to inject when rechlorination is required and identifies the optimal expected water-outage area when a pipe failure or water-quality incident occurs. Adoption by local governments pursuing active water-quality management of smart water networks is expected to contribute to securing infrastructure operation and management technology, improving water-quality management capability, and enhancing real-time monitoring and crisis response.
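
As an illustration of the EPANET-linked analysis step described above, one common way to run such a hydraulic/water-quality simulation from Python is the open-source `wntr` package; this is a generic sketch, not the platform's code, and the input file name is hypothetical:

```python
# Generic sketch using the open-source wntr package (pip install wntr);
# "supply_zone.inp" is a hypothetical EPANET input file.
import wntr

wn = wntr.network.WaterNetworkModel("supply_zone.inp")  # load EPANET model
sim = wntr.sim.EpanetSimulator(wn)                      # run the EPANET engine
results = sim.run_sim()

pressure = results.node["pressure"]  # time x node table of pressures
quality = results.node["quality"]    # e.g. chlorine residual, if enabled in the .inp
print(pressure.head())
```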

A Study on Determination of VPP Cloud Charges (VPP 클라우드 요금 산정에 관한 연구)

  • Lim, Chung-Hwan; Kim, Dong-Sub; Moon, Chae-Joo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.2 / pp.299-308 / 2022
  • Recent energy transition policies are driving an increase in the number of small photovoltaic (PV) generators. It is difficult for system operators to accurately anticipate the amount of power generated by such small-scale PV plants, which may disrupt dispatch schedules and increase costs. The Virtual Power Plant (VPP) is emerging as a way to resolve these problems, as it integrates small-scale PV plants and thereby eliminates uncertainty about the amount of power generated, controls voltage, and provides power reserves. In this paper, cost evaluation methods for determining VPP cloud charges are described using both the Net Present Value (NPV) method and the Profitability Index (PI) method, and the calculated outcomes of the two methods are presented in detail. The results indicate that profitability is secured, since the calculation yields a profitability index of 1.22, and the scheme may be attractive to the aggregator because the NPV is sufficient to satisfy profitability.
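
For reference, the two evaluation methods named in the abstract reduce to short formulas; the sketch below implements both in Python with purely illustrative cash flows and a 7% discount rate (the paper's actual inputs are not given in the abstract, only the resulting PI of 1.22):

```python
# Both methods in a few lines; cash flows and the 7% rate are illustrative
# assumptions, not figures from the paper (which reports PI = 1.22).
def npv(rate, cash_flows):
    """cash_flows[0] is the (negative) initial investment at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def profitability_index(rate, cash_flows):
    """PI = present value of future cash flows / initial investment."""
    pv_future = sum(cf / (1 + rate) ** t
                    for t, cf in enumerate(cash_flows) if t > 0)
    return pv_future / -cash_flows[0]

flows = [-1000.0] + [250.0] * 6          # invest 1000, earn 250/yr for 6 years
print(round(npv(0.07, flows), 1))        # > 0, so the investment is acceptable
print(round(profitability_index(0.07, flows), 2))  # > 1, so it is profitable
```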

EDF: An Interactive Tool for Event Log Generation for Enabling Process Mining in Small and Medium-sized Enterprises

  • Frans Prathama; Seokrae Won; Iq Reviessay Pulshashi; Riska Asriana Sutrisnowati
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.101-112 / 2024
  • In this paper, we present EDF (Event Data Factory), an interactive tool designed to assist event log generation for process mining. EDF integrates various data connectors to improve its capability to assist users in connecting to diverse data sources. Our tool employs low-code/no-code technology, along with graph-based visualization, to help non-expert users understand process flow and to enhance the user experience. By utilizing metadata information, EDF allows users to efficiently generate an event log containing case, activity, and timestamp attributes. Through log quality metrics, our tool enables users to assess the quality of the generated event log. We implemented EDF on a cloud-based architecture and ran a performance evaluation. Our case study and results demonstrate the usability and applicability of EDF. Finally, an observational study confirms that EDF is easy to use and beneficial, expanding small and medium-sized enterprises' (SMEs) access to process mining applications.
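
As a minimal illustration of the log format described above (a sketch, not EDF's code, with hypothetical column names), mapping a flat source table onto case, activity, and timestamp attributes can look like this:

```python
# Hypothetical source table; the three renamed columns are the mandatory
# event-log attributes named in the abstract (case, activity, timestamp).
import pandas as pd

raw = pd.DataFrame({
    "order_id":   [101, 101, 102, 101],
    "status":     ["created", "paid", "created", "shipped"],
    "updated_at": ["2024-05-01 09:00", "2024-05-01 10:30",
                   "2024-05-01 11:00", "2024-05-02 08:15"],
})

log = raw.rename(columns={"order_id": "case_id",
                          "status": "activity",
                          "updated_at": "timestamp"})
log["timestamp"] = pd.to_datetime(log["timestamp"])
log = log.sort_values(["case_id", "timestamp"])  # order events within each case
print(log)
```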

Performance Optimization in GlusterFS on SSDs (SSD 환경 아래에서 GlusterFS 성능 최적화)

  • Kim, Deoksang; Eom, Hyeonsang; Yeom, Heonyoung
    • KIISE Transactions on Computing Practices / v.22 no.2 / pp.95-100 / 2016
  • In the current era of big data and cloud computing, the amount of data in use is increasing, and various systems are being developed to process this big data rapidly. A distributed file system is often used to store the data, and GlusterFS is one of the most popular distributed file systems. As computer technology has advanced, NAND flash SSDs (Solid State Drives), which are high-performance storage devices, have become cheaper, so datacenter operators attempt to use SSDs in their systems and to run GlusterFS on them. However, since GlusterFS is designed for HDDs (Hard Disk Drives), its performance degrades on SSDs due to structural problems, among them the I/O-cache, Read-ahead, and Write-behind translators. By removing these features, which do not suit SSDs' strength in random I/O, we achieved performance improvements of up to 255% for 4KB random reads and up to 50% for 64KB random reads.
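
The three translators named above correspond to standard GlusterFS volume options; a minimal sketch of turning them off (the volume name `ssdvol` is hypothetical, and this must run on a GlusterFS admin node) is:

```python
# Sketch: disable the three translators identified in the paper via the
# standard `gluster volume set` options; volume name "ssdvol" is hypothetical.
import subprocess

VOLUME = "ssdvol"
options = {
    "performance.io-cache": "off",      # page-cache translator
    "performance.read-ahead": "off",    # sequential prefetch; hurts SSD random I/O
    "performance.write-behind": "off",  # write aggregation/deferral
}

for opt, value in options.items():
    subprocess.run(["gluster", "volume", "set", VOLUME, opt, value], check=True)
```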

Issue Analysis on Gas Safety Based on a Distributed Web Crawler Using Amazon Web Services (AWS를 활용한 분산 웹 크롤러 기반 가스 안전 이슈 분석)

  • Kim, Yong-Young; Kim, Yong-Ki; Kim, Dae-Sik; Kim, Mi-Hye
    • Journal of Digital Convergence / v.16 no.12 / pp.317-325 / 2018
  • With the aim of creating new economic value and strengthening national competitiveness, governments and major private companies around the world maintain a strong interest in big data and are making bold investments. To collect objective data such as news, securing data integrity and quality should be a prerequisite. For researchers or practitioners who wish to make decisions or perform trend analyses based on objective, massive data such as portal news, the problem with existing crawler methods is that data collection itself can be blocked. In this study, we implemented a method of collecting web data that addresses the problems of the existing crawler approach by using the cloud service platform provided by Amazon Web Services (AWS). In addition, we collected articles on 'gas safety' and analyzed the related issues. The research confirmed that, to ensure gas safety, strategies should be established and systematically operated based on five categories: accident/occurrence, prevention, maintenance/management, government/policy, and target.
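
The abstract does not detail which AWS services were composed; as one plausible shape of such a distributed crawler (an assumption, not the authors' architecture), a shared SQS work queue drained by many crawler nodes looks like this with `boto3`:

```python
# Assumed shape: an SQS queue "crawl-tasks" feeding many crawler workers.
import boto3
import requests

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="crawl-tasks")

# Producer: enqueue target URLs once.
queue.send_message(MessageBody="https://news.example.com/list?page=1")

# Worker loop (run on each crawler instance): every node pulls from the same
# queue, so requests originate from many IPs instead of one blocked host.
for msg in queue.receive_messages(WaitTimeSeconds=10, MaxNumberOfMessages=1):
    html = requests.get(msg.body, timeout=10).text
    # ... parse the article and persist it (e.g. to S3/DynamoDB) ...
    msg.delete()  # acknowledge so the task is not redelivered
```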

User-Centric Disaster Recovery System Based on Proxy Re-Encryption Using Blockchain and Distributed Storage (블록체인과 분산 스토리지를 활용한 프록시 재암호화 기반의 사용자 중심 재해 복구 시스템)

  • Park, Junhoo; Kim, Geunyoung; Kim, Junseok; Ryou, Jaecheol
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1157-1169 / 2021
  • Disaster recovery refers to the policies and procedures that ensure continuity of services and minimize the loss of resources and finances in emergency situations such as natural disasters. In particular, disaster recovery provided by a cloud service provider has advantages such as management flexibility, high availability, and cost effectiveness. However, this approach depends on the service provider and has the structural limitation that the user cannot be involved in handling their own data. In this paper, we propose a protocol that removes the dependency on service providers by backing up user data with blockchain and distributed storage, using proxy re-encryption for data confidentiality. The proposed method is implemented in Ethereum and IPFS environments, and we present the performance and cost required for backup and recovery operations.
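
As a data-flow illustration of the protocol above, the toy below mocks proxy re-encryption with XOR (explicitly non-cryptographic; a real system would use a PRE library such as pyUmbral) to show that the proxy transforms the ciphertext without ever seeing the plaintext or the recipient's key:

```python
# Toy, NON-cryptographic mock of the proxy re-encryption data flow;
# XOR stands in for a real PRE scheme purely to show who holds what.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data = b"backup-blob-0001"
owner_key = secrets.token_bytes(len(data))
ciphertext = xor(data, owner_key)        # owner encrypts; ciphertext goes to storage

recoverer_key = secrets.token_bytes(len(data))
rekey = xor(owner_key, recoverer_key)    # owner authorizes recovery (re-encryption key)

# The proxy transforms the ciphertext without learning the plaintext:
reencrypted = xor(ciphertext, rekey)     # equals data XOR recoverer_key

assert xor(reencrypted, recoverer_key) == data  # recoverer decrypts with own key
```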

Active and Context-Resilient Cyber Defense Operation applying the Concept of Performing Mosaic Warfare (모자이크전 수행 개념을 적용한 능동형 상황 탄력적 사이버 방어작전)

  • Jung-Ho Eom
    • Convergence Security Journal / v.21 no.4 / pp.41-48 / 2021
  • Recently, the character of war has been evolving due to fourth-industrial-revolution technologies. Among them, AI is changing the way war is fought as it is applied to advanced weapon systems and decision-making systems. Mosaic Warfare, presented by the U.S. DARPA, shifts military warfare from attrition-centric to decision-centric warfare by combining Internet of Things, cloud computing, big data, mobile, and artificial intelligence technologies. It is also a method for conducting operations quickly so that the greatest offensive effect can be achieved by appropriately combining distributed, deployed forces according to the battlefield context. In other words, military operations are not carried out through a uniform combat process; rather, various forces are operated through a distributed system depending on the battlefield context. In cyber warfare, as artificial intelligence is applied to cyber attack technology, there are limits to responding with the same procedural method as the existing cyber kill chain. Therefore, in this paper, the execution concept of mosaic warfare is applied to perform context-resilient cyber operations that can adapt the response system to the attack and the cyberspace context.

Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol; Kim, Youngchul; Kim, Dongoh; Kim, Hongyeon; Kim, Youngkyun; Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1515-1527 / 2016
  • With the growth of big data, machine learning, and cloud computing, the importance of storage that can hold large amounts of unstructured data has been increasing. Commodity-hardware-based distributed file systems such as MAHA-FS, GlusterFS, and the Ceph file system have therefore received much attention for their scale-out capability and low cost. For data fault tolerance, most of these file systems initially used replication, but as storage sizes grow to tens or hundreds of petabytes, the low space efficiency of replication has come to be seen as a problem. This paper applies an erasure-coding fault-tolerance policy to MAHA-FS for high space efficiency and introduces the VDelta technique to solve the resulting data consistency problem. We compare the performance of two file systems with different I/O processing architectures, the server-centric MAHA-FS and the client-centric GlusterFS, and find that the erasure coding performance of MAHA-FS is better than that of GlusterFS.
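
The space-efficiency argument above is simple arithmetic, and a Reed-Solomon round trip can be shown with the `reedsolo` package; the (8+2) layout below is an illustrative choice, not the paper's configuration:

```python
# Space-overhead arithmetic plus a tiny Reed-Solomon round trip using the
# reedsolo package (pip install reedsolo); (8+2) is an illustrative layout.
from reedsolo import RSCodec

# 3-way replication stores 3.0x the data; an (8 data + 2 parity) erasure
# code that tolerates 2 lost fragments stores only (8 + 2) / 8 = 1.25x.
print(3.0, (8 + 2) / 8)

rsc = RSCodec(2)                       # 2 parity symbols: corrects 1 errored byte
encoded = rsc.encode(b"hello world")   # data + parity
corrupted = bytearray(encoded)
corrupted[0] ^= 0xFF                   # corrupt one byte
decoded, _, _ = rsc.decode(bytes(corrupted))
assert decoded == b"hello world"       # data recovered from parity
```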