• Title/Summary/Keyword: cloud computing systems

Analysis of Priority of Technical Factors for Enabling Cloud Computing Services (클라우드 컴퓨팅 서비스 활성화를 위한 기술적 측면 특성요인의 중요도 우선순위 분석)

  • Kang, Da-Yeon; Hwang, Jong-Ho
    • Journal of Digital Convergence / v.17 no.8 / pp.123-130 / 2019
  • The advent of the full-fledged Internet of Things (IoT) era will bring together various types of information through IoT devices, and the vast amount of information collected will be turned into new information through analysis. A flexible and scalable cloud computing system is well suited to storing this generated information effectively. The main determinants of client system acceptance are therefore viewed as motivating factors (economy, efficiency, etc.) and hindering factors (transition costs, security issues, etc.), and the purpose of this study is to determine which detailed factors play the major roles in decisions to accept a new system. The factors used to determine the priorities are defined as the system-acceptance determinants, from a technical point of view, obtained through a literature review; a questionnaire was prepared based on the derived factors and administered to the relevant experts. The AHP analysis then derives the final priorities by performing pairwise comparisons between the components of each decision unit. The results of this study can serve as an important basis for decisions on the acceptance (enabling) of the technology.
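
The AHP step the abstract mentions reduces to computing the principal eigenvector of a pairwise-comparison matrix. Below is a minimal sketch of that computation; the matrix values and factor names are illustrative, not taken from the study.

```python
import numpy as np

# Illustrative AHP pairwise-comparison matrix for three hypothetical
# technical factors (e.g., economy, efficiency, security); the values
# are made up for demonstration, not taken from the study.
A = np.array([
    [1.0, 3.0, 5.0],   # factor 1 compared against factors 1, 2, 3
    [1/3, 1.0, 2.0],   # factor 2
    [1/5, 1/2, 1.0],   # factor 3
])

# The principal eigenvector of A gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (CR); Saaty's random index RI for n=3 is 0.58.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print("priorities:", w.round(3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```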

Learning Algorithms in AI System and Services

  • Jeong, Young-Sik; Park, Jong Hyuk
    • Journal of Information Processing Systems / v.15 no.5 / pp.1029-1035 / 2019
  • In recent years, artificial intelligence (AI) services have become one of the most essential means of extending human capabilities in various fields, such as face recognition for security and weather prediction. Various learning algorithms, such as classification, regression, and deep learning, are utilized in existing AI services to increase their accuracy and efficiency. Nonetheless, these services face many challenges, such as fake news spreading on social media, stock selection and volatility delay in stock prediction systems, and inaccurate movie recommendation systems. In this paper, various algorithms are presented to mitigate these issues in different systems and services. A convolutional neural network with a word-embedding model is used to detect fake news in the Korean language. An approach based on k-cliques and data mining increases accuracy in personalized recommendation services and addresses stock selection and volatility delay in stock prediction. Other algorithms, such as multi-level fusion processing, address the lack of a real-time database.
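
As a rough illustration of the fake-news component, here is a minimal text-CNN sketch: a word-embedding layer followed by 1-D convolutions and a binary real/fake classifier. The vocabulary size, dimensions, and kernel widths are assumptions for demonstration; the paper's actual architecture is not specified in the abstract.

```python
import torch
import torch.nn as nn

class FakeNewsCNN(nn.Module):
    """Toy text-CNN in the spirit of the abstract: word embeddings
    followed by 1-D convolutions and a binary real/fake classifier."""
    def __init__(self, vocab_size=20000, embed_dim=128, num_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in (3, 4, 5)]
        )
        self.fc = nn.Linear(num_filters * 3, 2)  # 2 classes: real / fake

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))   # (batch, 2) logits

model = FakeNewsCNN()
dummy = torch.randint(0, 20000, (8, 50))  # batch of 8 token sequences
print(model(dummy).shape)                 # torch.Size([8, 2])
```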

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum; Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems / v.9 no.12 / pp.291-306 / 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires a significant amount of computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of the IoT big data generated by IoT end devices. We believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures for deep learning models across multiple nodes of the edge computing platform. It also covers privacy-preserving approaches to deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges in leveraging deep learning within edge computing.
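
One recurring pattern in such reviews is partitioning a model so that cheap early layers run on the edge device and the remainder runs in the datacenter, sending only the intermediate feature over the network. A toy sketch under that assumption (the network, split point, and shapes are arbitrary examples):

```python
import torch
import torch.nn as nn

# Toy illustration of edge/cloud model partitioning, one of the
# distributed-inference patterns such reviews discuss.
full = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),  # edge layers
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),                  # cloud layers
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
edge_part, cloud_part = full[:3], full[3:]

frame = torch.randn(1, 3, 64, 64)   # frame captured on the device
feature = edge_part(frame)          # computed on the edge node
# In a real deployment `feature` (16*16*16 = 4096 values, versus
# 3*64*64 = 12288 for the raw frame) would be serialized and sent
# over the network to the datacenter.
logits = cloud_part(feature)        # completed in the cloud
print(feature.shape, logits.shape)
```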

A Case Study on the Experience of Using Cloud-based Library Systems (클라우드 기반 도서관 시스템의 사용경험에 대한 사례연구)

  • Lee, Soosang
    • Journal of the Korean Society for Library and Information Science / v.55 no.1 / pp.343-364 / 2021
  • In this study, domestic libraries currently using cloud-based library systems were examined as cases, and the main characteristics and issues appearing in their experience were investigated across the introduction, conversion, and operation phases of each system, with the following results. First, the new systems were introduced as alternatives to problems caused by operating the existing systems, and the current products were selected because they were cost-effective. Second, the main issues in the conversion process were data migration, implementation of existing service functions, and linking the library's internal and external systems. Third, the main advantages identified in operation were cost reduction; simple installation, automatic management, and maintenance; and convenient use on mobile devices. The main drawbacks were the difficulty of customization reflecting each library's characteristics and the need for network stability. The role of the information technology librarian that disappeared was regular system inspection and maintenance support, and various new roles were suggested. Librarians and users were generally satisfied rather than dissatisfied with the new systems.

An Analysis of Utilization on Virtualized Computing Resource for Hadoop and HBase based Big Data Processing Applications (Hadoop과 HBase 기반의 빅 데이터 처리 응용을 위한 가상 컴퓨팅 자원 이용률 분석)

  • Cho, Nayun; Ku, Mino; Kim, Baul; Xuhua, Rui; Min, Dugki
    • Journal of Information Technology and Architecture / v.11 no.4 / pp.449-462 / 2014
  • In the big data era, there are many parts to consider in systems for capturing, storing, and analyzing stored or streaming data. Unlike a traditional data-handling system, a big data processing system must account for the characteristics (format, velocity, and volume) of the data it handles. In this situation, the virtualized computing platform is an emerging platform for handling big data effectively, since virtualization technology enables computing resources to be managed dynamically and elastically with minimal effort. In this paper, we analyze the utilization of virtualized computing resources to discover suitable deployment models in an Apache Hadoop- and HBase-based big data processing environment. Our results show that the TaskTracker service exhibits high CPU utilization and high disk I/O overhead during MapReduce phases. Moreover, the HRegion service shows high network resource consumption for transferring traffic data from the DataNode to the TaskTracker, and the DataNode shows high memory utilization and disk I/O overhead when reading stored data.
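
The measurement side of such an analysis is essentially periodic sampling of CPU, memory, disk I/O, and network counters while the workload runs. A minimal system-wide sampling sketch follows; per-service attribution, which the paper performs, would additionally require mapping each Hadoop/HBase service to its processes.

```python
import time
import psutil  # pip install psutil; cross-platform system metrics

def sample_utilization(duration_s=10, interval_s=1):
    """Sample system-wide CPU, memory, disk I/O, and network counters,
    the same resource dimensions the paper analyzes per service
    (TaskTracker, HRegion, DataNode)."""
    disk0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    for _ in range(int(duration_s / interval_s)):
        cpu = psutil.cpu_percent(interval=interval_s)
        mem = psutil.virtual_memory().percent
        disk, net = psutil.disk_io_counters(), psutil.net_io_counters()
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  "
              f"disk_w={(disk.write_bytes - disk0.write_bytes)/2**20:8.1f}MiB  "
              f"net_tx={(net.bytes_sent - net0.bytes_sent)/2**20:8.1f}MiB")
        disk0, net0 = disk, net  # reset baselines for the next interval

sample_utilization()
```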

A Study on the Image/Video Data Processing Methods for Edge Computing-Based Object Detection Service (에지 컴퓨팅 기반 객체탐지 서비스를 위한 이미지/동영상 데이터 처리 기법에 관한 연구)

  • Jang Shin Won; Yong-Geun Hong
    • KIPS Transactions on Computer and Communication Systems / v.12 no.11 / pp.319-328 / 2023
  • Unlike cloud computing, edge computing analyzes and judges data close to devices and users, providing advantages such as real-time service, protection of sensitive data, and reduced network traffic. EdgeX Foundry, a representative open-source edge computing platform, is an open source-based edge middleware platform that provides services between various devices and IT systems in the real world. EdgeX Foundry provides a service for handling camera devices alongside its services for handling conventional sensed data, but the camera service supports only simple streaming and camera device management; it does not store or process image data obtained from the device inside EdgeX. This paper presents a technique that stores and processes image data inside EdgeX by applying some of the services EdgeX Foundry provides. Based on the proposed technique, a service pipeline for object detection, a core service in the field of autonomous driving, was built for experiments and performance evaluation and then compared and analyzed against existing methods.
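
To make the pipeline idea concrete, here is a hypothetical sketch of one stage: reading a camera frame and forwarding it to a detection endpoint on the edge node. The URL, port, and JSON schema below are invented placeholders, not EdgeX Foundry's actual REST API.

```python
import base64
import requests  # pip install requests

# Hypothetical pipeline stage: read a stored camera frame and forward
# it to an object-detection endpoint. The address and payload format
# are illustrative placeholders, NOT EdgeX Foundry's actual API.
DETECT_URL = "http://edge-node:59990/api/v1/detect"  # placeholder address

def process_frame(jpeg_path: str):
    with open(jpeg_path, "rb") as f:
        frame = f.read()
    payload = {"image": base64.b64encode(frame).decode("ascii"),
               "format": "jpeg"}
    resp = requests.post(DETECT_URL, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g., a list of detected objects with boxes

# boxes = process_frame("frame_0001.jpg")
```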

Performance Evaluation and Optimization of NoSQL Databases with High-Performance Flash SSDs (고성능 플래시 SSD 환경에서 NoSQL 데이터베이스의 성능 평가 및 최적화)

  • Han, Hyuck
    • The Journal of the Korea Contents Association / v.17 no.7 / pp.93-100 / 2017
  • Recently, demand for high-performance flash-based storage devices (i.e., flash SSDs) has grown rapidly in social network services, cloud computing, supercomputing, and enterprise storage systems. Industry and academia created the NVMe specification for high-performance storage devices, and NVMe-based flash SSDs are now available on the market. In this article, we evaluate the performance of NoSQL databases, which social network and cloud computing services heavily adopt, on NVMe-based flash SSDs. To this end, we use an NVMe SSD recently developed by Samsung Electronics that delivers up to 3.5 GB/s for sequential read/write operations. For the NoSQL database we use WiredTiger, the default storage engine of MongoDB. Our experimental results show that log processing in NoSQL databases is a major overhead when high-performance NVMe-based flash SSDs are used. Furthermore, we optimize the components of log processing, and the optimized WiredTiger shows up to 15 times better performance than the original WiredTiger.
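
As a rough sketch of the kind of measurement behind such an evaluation, the snippet below probes insert throughput against MongoDB's default WiredTiger engine; the connection string, document shape, and batch size are illustrative, not the paper's workload.

```python
import time
from pymongo import MongoClient  # pip install pymongo

# Minimal write-throughput probe against MongoDB's WiredTiger engine,
# in the spirit of the paper's evaluation. All parameters are illustrative.
client = MongoClient("mongodb://localhost:27017")
coll = client.bench.kv

docs = [{"k": i, "v": "x" * 100} for i in range(100_000)]
start = time.perf_counter()
coll.insert_many(docs)
elapsed = time.perf_counter() - start
print(f"{len(docs)/elapsed:,.0f} inserts/s")

# Journal (log) settings, the bottleneck the paper identifies on fast
# NVMe SSDs, are configured server-side, e.g. via the
# storage.wiredTiger.engineConfig.journalCompressor option in mongod.conf.
```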

A Strategy for Adopting Server Virtualization in the Public Sector: NIPA Computer Center

  • Song, Jong-Cheol; Ryu, Jee-Woong; Moon, Byung-Joo; Jung, Hoe-Kyung
    • Journal of information and communication convergence engineering / v.10 no.1 / pp.61-65 / 2012
  • Many public organizations have adopted and operate various servers. These servers run Windows, Unix, and Linux operating systems and generally use less than 10% of their capacity. To migrate a public organization to cloud computing, the server environment must first be virtualized. This article proposes a strategy for server virtualization carried out by the National IT Industry Promotion Agency (NIPA) and describes the effects of a public organization's migration to cloud computing. The NIPA Computer Center planned an effective virtualization migration across its various servers, conducted under the existing policy of separate x86 servers and Unix servers. There are three popular approaches to server virtualization: the virtual machine model, the paravirtual machine model, and virtualization at the operating system layer. We selected a VMware solution that uses the virtual machine model. Servers were selected for virtualization as follows: those with the highest rates of service usage and CPU usage that had been operating for five years or more were chosen, while servers requiring CPU usage of 80% or more were excluded. After applying the server virtualization technique, we consolidated 32 servers into 3, as sketched below. Virtualization is a technology that provides benefits in these areas: server consolidation and optimization, infrastructure cost reduction and improved operational flexibility, and implementation of a dual computing environment.
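
The selection rule described above is simple enough to state directly in code. A sketch over a made-up server inventory:

```python
# The article's virtualization-candidate rule, expressed directly:
# pick long-running, well-used servers, but exclude any that need
# 80% or more CPU. The inventory records here are made up.
servers = [
    {"name": "web-01",  "years": 7, "usage_rank": 1, "cpu_pct": 35},
    {"name": "db-01",   "years": 6, "usage_rank": 2, "cpu_pct": 85},
    {"name": "mail-01", "years": 3, "usage_rank": 3, "cpu_pct": 20},
    {"name": "file-01", "years": 9, "usage_rank": 4, "cpu_pct": 50},
]

candidates = [
    s for s in servers
    if s["years"] >= 5       # operating for five years or more
    and s["cpu_pct"] < 80    # exclude servers needing >= 80% CPU
]
candidates.sort(key=lambda s: s["usage_rank"])  # prefer highest usage
print([s["name"] for s in candidates])          # ['web-01', 'file-01']
```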

A Cloud Computing Architecture for Integrating Navy Shipboard Computing Systems (클라우드 컴퓨팅 기반 해군 함정 컴퓨팅 시스템 통합 아키텍처)

  • Kim, Hong-Jae; Oh, Sang-Yoon
    • Proceedings of the Korean Information Science Society Conference / 2011.06b / pp.59-62 / 2011
  • Navy ships operate a variety of computing systems to support network-centric warfare capabilities. However, because each computing system is implemented as a standalone system according to military standards, integration with other systems is difficult and effective resource operation is limited. Cloud computing uses virtualization to avoid hardware and software dependencies, enabling effective use of computing resources and improved system utilization. This paper therefore proposes a cloud computing-based architecture for integrating navy shipboard computing systems. The proposed architecture creates an integrated hardware pool through virtualization, allocates resources to virtual machines, and runs operating systems and applications on top of those virtual machines, enabling consolidation of shipboard computing hardware and effective use of computing resources.

Breaking the Myths of the IT Productivity Paradox

  • Hwang, Jong-Sung; Kim, SungHyun; Lee, Ho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.466-482 / 2015
  • IT is the key enabler of the digital economy. Appropriate usage of IT can provide a strategic competitive advantage to a firm in a dynamic competitive environment. However, there has been a continuing debate on whether IT can actually enhance the productivity of firms, a question known as the IT productivity paradox. In this study, we analyzed the causality among appropriate indicators to demonstrate the real impact of IT on productivity. A sample of 12,100 observations from 2011 was used for the analysis. As expected, the results indicated that mobile device usage, website adoption, e-commerce, open source, cloud computing, and green computing positively influence IT productivity. This unprecedented large-scale analysis can clarify the previously ambiguous causal mechanism between IT usage and productivity.
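
As a sketch of the kind of cross-sectional analysis the abstract describes, the snippet below regresses a productivity measure on binary IT-usage indicators; the data are synthetic and the variable names and effect sizes are placeholders, not the study's results.

```python
import numpy as np
import statsmodels.api as sm  # pip install statsmodels

# Synthetic stand-in for the firm-level analysis the abstract describes:
# regress productivity on IT-usage indicators such as mobile-device
# usage, e-commerce, and cloud computing adoption.
rng = np.random.default_rng(0)
n = 12_100                                         # sample size as in the study
X = rng.integers(0, 2, size=(n, 3)).astype(float)  # 0/1 adoption indicators
beta = np.array([0.4, 0.2, 0.3])                   # made-up true effects
y = 1.0 + X @ beta + rng.normal(0, 1, n)           # synthetic productivity

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary(xname=["const", "mobile", "ecommerce", "cloud"]))
```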