• Title/Summary/Keyword: cloud computing systems

Search Results: 602

A Case Study on the Development of Epidemiological Investigation Support System through Inter-ministerial Collaboration (정부 부처간 협업을 통한 온라인 역학조사 지원시스템 개발 사례 연구)

  • Kim, Su Jung;Kim, Jae Ho;Eum, Gyu Ri;Kim, Tae Hyung
    • The Journal of Information Systems / v.29 no.4 / pp.123-135 / 2020
  • Purpose The purpose of this study is to investigate the development process and the effectiveness of the EISS (epidemiological investigation support system), which helps prevent the spread of infectious diseases such as the novel coronavirus disease, COVID-19. Design/methodology/approach This study identified the existing epidemiological support system for MERS through prior research and examined the case of a new epidemiological investigation support system built on cloud computing infrastructure for COVID-19 through inter-ministerial collaboration in 2020. Findings The outbreak of COVID-19 drove the Korean Government to begin developing the EISS together with private companies. The system played a significant role in flattening the spread of infection during several waves in which the number of confirmed cases rose rapidly in Korea. However, confirmed patients' private data must be handled carefully to protect their privacy.

On the Performance of Oracle Grid Engine Queuing System for Computing Intensive Applications

  • Kolici, Vladi;Herrero, Albert;Xhafa, Fatos
    • Journal of Information Processing Systems / v.10 no.4 / pp.491-502 / 2014
  • In this paper we present research results on computing-intensive applications running on modern high-performance architectures, viewed from the perspective of their high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain. They have been studied using different distributed computing paradigms and infrastructures, and they are distinguished by their demanding CPU requirements, independent of the amount of data associated with the problem instance. Among computing-intensive applications are simulation-based applications, which aim to maximize system resources for processing large computations. In this work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using Genetic Algorithms. In such an application, a rather large number of simulation runs is needed to extract meaningful statistical results about the behavior of the simulated system. We study the performance of Oracle Grid Engine for this application running on a cluster with high computing capacity. Several scenarios were generated to measure response time and queuing time under different workloads and numbers of nodes in the cluster.
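
As a rough illustration of the workload pattern this abstract describes (many independent simulation runs pushed through a Grid Engine queue), the sketch below submits such a batch as an SGE array job. The wrapper script name, queue name, and run count are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: submitting N independent GA-scheduling simulation runs to
# Oracle/Sun Grid Engine as a single array job. The wrapper script name
# (run_ga_sim.sh), queue name, and run count are illustrative assumptions.
import subprocess

N_RUNS = 100          # number of simulation repetitions for statistics
QUEUE = "all.q"       # hypothetical SGE queue name

# -t 1-N makes SGE schedule N tasks; each task can read $SGE_TASK_ID to pick
# its random seed, so runs are independent and can fill idle cluster nodes.
cmd = [
    "qsub",
    "-q", QUEUE,
    "-t", f"1-{N_RUNS}",
    "-N", "ga_sched_sim",      # job name shown by qstat
    "run_ga_sim.sh",           # hypothetical wrapper around the simulator
]
subprocess.run(cmd, check=True)
```

Per-task queuing and response times of the kind the paper measures can then be derived from the submission, start, and end timestamps that Grid Engine's accounting records keep for each task.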

Functional Privacy-preserving Outsourcing Scheme with Computation Verifiability in Fog Computing

  • Tang, Wenyi;Qin, Bo;Li, Yanan;Wu, Qianhong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.1 / pp.281-298 / 2020
  • Fog computing has become a popular concept in Internet of Things (IoT) applications. With its advantages in service provisioning, the edge cloud has become an attractive solution for IoT networks. Data outsourcing schemes for IoT devices demand privacy protection as well as computation verification, since lightweight devices outsource not only their data but also their computation. Existing solutions mainly deal with operations over encrypted data but cannot support computation verification at the same time. In this paper, we propose a data outsourcing scheme based on an encrypted database system with linear computation and efficient query ability, and we enhance the interlayer program in the original system with homomorphic message authenticators so that the system can verify computations. The tools we use to construct our scheme have been proven secure and valid. With our scheme, the system can check whether the cloud provides the correct service it was asked for. Experiments also show that our scheme is as effective as the original version, and the extra time overhead is negligible.
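
The abstract does not spell out the authenticator construction, so the following is only a toy illustration of what a linearly homomorphic message authenticator does: tags of individual values can be added, and the aggregated tag verifies the claimed sum. The prime modulus, key derivation, and field names are assumptions, and encryption of the values is omitted for brevity.

```python
# Toy linearly homomorphic MAC (illustrative only, not the paper's scheme).
import hmac, hashlib, secrets

P = (1 << 127) - 1            # prime modulus (2^127 - 1); toy-sized assumption
K1 = secrets.randbelow(P)     # secret scaling key
K2 = secrets.token_bytes(32)  # PRF key for per-record masks

def prf(identifier: str) -> int:
    # Pseudorandom per-record mask derived from the record identifier.
    digest = hmac.new(K2, identifier.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

def tag(m: int, identifier: str) -> int:
    # Linear tag: t = K1*m + F(id) (mod P). Tags of different records add up.
    return (K1 * m + prf(identifier)) % P

def verify_sum(claimed_sum: int, tag_sum: int, identifiers) -> bool:
    # The key holder recomputes K1*sum + sum of masks and compares.
    expected = (K1 * claimed_sum + sum(prf(i) for i in identifiers)) % P
    return expected == tag_sum

# The untrusted cloud adds outsourced values and their tags; the client checks.
values = {"rec1": 12, "rec2": 30, "rec3": 7}
tags = {i: tag(v, i) for i, v in values.items()}
s = sum(values.values())                 # result computed by the cloud
t = sum(tags.values()) % P               # aggregated tag, also from the cloud
assert verify_sum(s, t, values.keys())   # correct service is accepted
```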

User Mobility Model Based Computation Offloading Decision for Mobile Cloud

  • Lee, Kilho;Shin, Insik
    • Journal of Computing Science and Engineering / v.9 no.3 / pp.155-162 / 2015
  • The last decade has seen a rapid growth in the use of mobile devices all over the world. With this increasing use, mobile applications are becoming more diverse and complex, demanding more computational resources. However, mobile devices are typically resource-limited (i.e., a slower CPU and smaller memory) for a variety of reasons. Mobile users would be able to run applications with heavy computation if they could offload some of their computations to other places, such as desktop or server machines. However, mobile users are typically subject to dynamically changing network environments, particularly due to user mobility. This makes it hard to make good offloading decisions in mobile environments. In general, a user's mobility can provide hints about upcoming changes to the network environment. Motivated by this, we propose a mobility model for each individual user that takes advantage of the regularity of his/her mobility pattern, and we develop an offloading decision-making technique based on this model. We evaluate our technique through trace-based simulation with real log data traces from 14 Android users. Our evaluation results show that the proposed technique can help boost the performance of mobile devices in terms of response time and energy consumption when users are highly mobile.
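
The paper's actual mobility model and decision policy are not given in the abstract; the sketch below only illustrates the kind of decision rule such a technique implies: offload only when the bandwidth expected from the user's likely next locations makes transfer plus remote execution cheaper than local execution. All names and numbers are illustrative assumptions.

```python
# Toy sketch of a mobility-aware offloading decision (illustrative only).
# A per-user mobility model is assumed to expose the probability of each
# upcoming location and the typical uplink bandwidth observed there.

def expected_bandwidth(mobility_model):
    """Expected uplink bandwidth (Mbit/s) over the user's likely next locations."""
    return sum(p * bw for p, bw in mobility_model)   # [(probability, bandwidth), ...]

def should_offload(input_mbits, local_exec_s, remote_exec_s, mobility_model):
    bw = expected_bandwidth(mobility_model)
    if bw <= 0:
        return False                                  # likely disconnected: run locally
    transfer_s = input_mbits / bw
    return transfer_s + remote_exec_s < local_exec_s

# Example: 40 Mbit of state, 8 s locally, 1 s on the server.
model = [(0.7, 20.0), (0.3, 2.0)]                     # e.g., office Wi-Fi vs. cellular
print(should_offload(40, 8.0, 1.0, model))            # True: 40/14.6 + 1 ≈ 3.7 s < 8 s
```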

An Efficient Design and Implementation of an MdbULPS in a Cloud-Computing Environment

  • Kim, Myoungjin;Cui, Yun;Lee, Hanku
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3182-3202 / 2015
  • Flexibly expanding the storage capacity required to process a large amount of rapidly increasing unstructured log data is difficult in a conventional computing environment. In addition, implementing a log processing system that categorizes and analyzes unstructured log data is extremely difficult. To overcome such limitations, we propose and design a MongoDB-based unstructured log processing system (MdbULPS) for collecting, categorizing, and analyzing log data generated by banks. The proposed system includes a Hadoop-based analysis module for reliable parallel-distributed processing of massive log data. Furthermore, because the Hadoop distributed file system (HDFS) stores data by generating replicas of collected log data in block units, the proposed system offers automatic recovery against system failures and data loss. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. To evaluate the proposed system, we conducted three different performance tests on a local test bed of twelve nodes: comparing our system with a MySQL-based approach, comparing it with an HBase-based approach, and varying the chunk size option. The experiments show that our system performs better at processing unstructured log data.
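
As a minimal sketch of the MongoDB side of such a pipeline (the collection, field names, and endpoint below are assumptions, not the paper's schema), parsed log lines can be stored as schema-flexible documents and then categorized with an aggregation query. It requires a running MongoDB instance and the pymongo driver.

```python
# Store parsed bank log lines in MongoDB and categorize them (illustrative).
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")     # hypothetical endpoint
logs = client["bank_logs"]["raw"]

# Unstructured lines are kept alongside whatever fields the parser extracted,
# so documents in one collection need not share a fixed schema.
logs.insert_one({
    "ts": datetime.now(timezone.utc),
    "source": "atm-07",
    "level": "ERROR",
    "raw": "2015-08-01 09:12:33 atm-07 ERROR card reader timeout",
})

# Categorize: count entries per source and level, most frequent first.
pipeline = [
    {"$group": {"_id": {"source": "$source", "level": "$level"}, "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in logs.aggregate(pipeline):
    print(row)
```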

Advanced Resource Management with Access Control for Multitenant Hadoop

  • Won, Heesun;Nguyen, Minh Chau;Gil, Myeong-Seon;Moon, Yang-Sae
    • Journal of Communications and Networks / v.17 no.6 / pp.592-601 / 2015
  • Multitenancy has gained growing importance with the development and evolution of cloud computing technology. In a multitenant environment, multiple tenants with different demands can share a variety of computing resources (e.g., CPU, memory, storage, network, and data) within a single system, while each tenant remains logically isolated. This multitenancy concept offers enterprises that require similar environments for data processing and management highly efficient, cost-effective systems without wasting computing resources. In this paper, we propose a novel approach supporting multitenancy features for Apache Hadoop, a large-scale distributed system commonly used for processing big data. We first analyze the Hadoop framework, focusing on "yet another resource negotiator (YARN)", which is responsible for managing resources, application runtime, and access control in the latest version of Hadoop. We then define the problems of supporting multitenancy and formally derive the requirements to solve them. Based on these requirements, we design the details of multitenant Hadoop. We also present experimental results that validate the data access control and evaluate the performance enhancement of multitenant Hadoop.

Efficient Server Virtualization using Grid Service Infrastructure

  • Baek, Sung-Jin;Park, Sun-Mi;Yang, Su-Hyun;Song, Eun-Ha;Jeong, Young-Sik
    • Journal of Information Processing Systems / v.6 no.4 / pp.553-562 / 2010
  • The core services in a cloud computing environment are SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service). Among these three core services, server virtualization belongs to IaaS and is a service technology for reducing server maintenance expenses. Normally, the primary purpose of server virtualization is to build and maintain a new, well-functioning server rather than to use several existing servers, and to improve various aspects of system performance. This often presents an issue in that additional expense may be required to build the new server. This study instead uses a grid service architecture for a form of server virtualization that utilizes the existing servers rather than introducing a new one. More specifically, the proposed system enhances system performance and reduces the corresponding expenses by adopting a scheduling algorithm among the distributed servers and the constituents of grid computing, thereby supporting the server virtualization service. Furthermore, the proposed server virtualization system minimizes power consumption by adopting sleep servers, subsidized servers, and the grid infrastructure. The power maintenance expenses of the sleep servers are lowered by utilizing the ACPI (Advanced Configuration & Power Interface) standard, with the purpose of overcoming the limits of server performance.
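
The paper's scheduling algorithm is not given in the abstract; the sketch below only illustrates the general idea of keeping standby servers asleep until the active grid nodes run out of capacity, at which point one is woken (in practice via an ACPI sleep state and something like Wake-on-LAN). All names, capacities, and thresholds are assumptions.

```python
# Illustrative toy dispatcher: spread work over existing grid nodes and wake a
# sleeping server only when active capacity is exhausted (not the paper's
# algorithm; names and capacities are made up).
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    capacity: int              # schedulable work units
    load: int = 0
    asleep: bool = False

    def free(self) -> int:
        return 0 if self.asleep else self.capacity - self.load

def wake(server: Server) -> None:
    # A real system would send a Wake-on-LAN packet or a management command here.
    server.asleep = False

def dispatch(job_units: int, servers: list[Server]) -> Server:
    # Prefer an awake server with enough free capacity; otherwise wake one.
    candidates = [s for s in servers if s.free() >= job_units]
    if not candidates:
        sleeping = next(s for s in servers if s.asleep)
        wake(sleeping)
        candidates = [sleeping]
    target = max(candidates, key=lambda s: s.free())
    target.load += job_units
    return target

pool = [Server("grid-01", 8, load=7), Server("grid-02", 8, load=8),
        Server("standby-01", 8, asleep=True)]
print(dispatch(3, pool).name)   # neither grid node has 3 free units, so standby-01 is woken
```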

Reducing Cybersecurity Risks in Cloud Computing Using A Distributed Key Mechanism

  • Altowaijri, Saleh M.
    • International Journal of Computer Science & Network Security / v.21 no.9 / pp.1-10 / 2021
  • The Internet of Things (IoT) is a major advancement in data processing and communication technologies. In IoT, intelligent devices play an important role in wireless communication. Sensor nodes are low-cost devices for communication and data gathering, but they are more vulnerable to various security threats because they have continuous access to the internet. Therefore, a multiparty security-credential-based key generation mechanism provides effective security against several attacks. Key-generation-based methods are implemented at sensor nodes, edge nodes, and server nodes for secure communication. The main challenge in a collaborative key generation scheme is the extensive multiplication: as the number of parties increases, the multiplications become more complex, so the computational cost of batch-key and multiparty-key-based schemes is high. This paper presents a Secure Multipart Key Distribution scheme (SMKD) that provides secure communication among nodes by generating a multiparty secure key for communication. We provide a node authentication and session key generation mechanism among mobile nodes, head nodes, and trusted servers. We analyzed the achievements of the SMKD scheme against the SPPDA, PPDAS, and PFDA schemes in a simulation environment established with NS-2. Simulation results show that SMKD performs better in terms of communication cost, computational cost, and energy consumption.
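
The SMKD construction itself is not described in the abstract; the sketch below only illustrates the general pattern such schemes follow: each party contributes randomness over an authenticated channel, and one shared session key is derived from all contributions so that no single node controls it. Every detail here (context string, key derivation, party roles) is an assumption.

```python
# Illustrative multiparty session-key derivation pattern (not the SMKD scheme).
import hashlib, hmac, secrets

def contribute() -> bytes:
    return secrets.token_bytes(32)            # per-party random contribution

def derive_session_key(contributions: list[bytes], context: bytes) -> bytes:
    # Order the contributions deterministically so every party derives the
    # same key, then bind them to a session context with an HMAC-based KDF.
    material = b"".join(sorted(contributions))
    return hmac.new(context, material, hashlib.sha256).digest()

mobile, head, server = contribute(), contribute(), contribute()
context = b"session-2021|cluster-A"           # hypothetical session identifier
k_mobile = derive_session_key([mobile, head, server], context)
k_server = derive_session_key([server, mobile, head], context)
assert k_mobile == k_server                   # all parties agree on the key
```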

Query with SUM Aggregate Function on Encrypted Floating-Point Numbers in Cloud

  • Zhu, Taipeng;Zou, Xianxia;Pan, Jiuhui
    • Journal of Information Processing Systems / v.13 no.3 / pp.573-589 / 2017
  • Cloud computing is an attractive solution that can provide low-cost storage and powerful processing capabilities for government agencies or small and medium-sized enterprises. Yet the confidentiality of information should be considered by any organization migrating to the cloud, which makes research on relational database systems that use encryption schemes to preserve the integrity and confidentiality of data in the cloud an interesting subject. So far, there have been various solutions for executing SQL queries on encrypted data in the cloud without prior decryption, where a homomorphic encryption algorithm is generally applied to support queries with aggregate functions or numerical computation. However, existing homomorphic encryption algorithms cannot encrypt floating-point numbers. In this paper, we therefore present a mechanism that enables the trusted party to encrypt floating-point numbers with a homomorphic encryption algorithm and a partially trusted server to perform summation on the ciphertexts without revealing the data itself. In the first step, we encode floating-point numbers to hide the decimal points and the positive or negative signs. Then, the codes of the floating-point numbers are encrypted by the homomorphic encryption algorithm and stored as sequences in the cloud. Finally, we use the DoubleListTree data structure to implement the SUM aggregate function and perform some extra processing to complete the summation.
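
The paper's own encoding and its DoubleListTree index are not reproduced here; the sketch below only shows the core idea behind summing encrypted numeric values: scale each value to a fixed-point integer, encrypt with an additively homomorphic scheme (a textbook Paillier with deliberately tiny, insecure primes), multiply ciphertexts to add plaintexts, and rescale after decryption. Sign handling, which the paper's encoding covers, is omitted.

```python
# Toy Paillier-based SUM over fixed-point-encoded values (illustrative only).
import math, secrets

P, Q = 2003, 2011                    # toy primes; far too small for real use
N = P * Q
N2 = N * N
G = N + 1
LAM = math.lcm(P - 1, Q - 1)
MU = pow((pow(G, LAM, N2) - 1) // N, -1, N)

SCALE = 100                          # two decimal digits of fixed-point precision

def encrypt(m: int) -> int:
    r = secrets.randbelow(N - 1) + 1
    return (pow(G, m % N, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    return ((pow(c, LAM, N2) - 1) // N) * MU % N

def enc_float(x: float) -> int:
    # Hide the decimal point by encoding the value as a scaled integer.
    return encrypt(round(x * SCALE))

values = [12.5, 0.75, 3.1]
ciphertexts = [enc_float(v) for v in values]

# Additive homomorphism: the product of ciphertexts decrypts to the sum.
c_sum = 1
for c in ciphertexts:
    c_sum = (c_sum * c) % N2

print(decrypt(c_sum) / SCALE)        # 16.35, computed without decrypting the terms
```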

Designing the Record Management Functions for Record Content Using Advantages of Cloud Storage (클라우드 저장소 장점을 활용한 기록 콘텐츠 관리기능 설계)

  • Yim, Jin-Hee
    • Journal of Korean Society of Archives and Records Management / v.19 no.3 / pp.271-292 / 2019
  • Recently, the central administrative agency changed its business management system to the cloud-based On-nara 2.0. To transfer and manage the records of the cloud business management system, the National Archives Service has developed and distributed a cloud-based records management system. This serves as an opportunity to maximize the benefits of cloud computing and to redesign records management to be more effective and efficient. The process and method of electronic record management can be transformed through digital technologies. First, we can change the transfer method for electronic records. When the business and records management systems share the same cloud storage, it is possible to transfer content files between the two systems without physically moving them, copying only the metadata and thus reducing cost and the risk of integrity damage. Second, a strategy for allocating storage space for content files can be devised. Assuming that the cloud storage is shared by the business and records management systems, it is advantageous to distinguish storage locations based on the retention period of the content files. Third, systems that access content files, such as records creation, records management, and information disclosure systems, can share the cloud storage and minimize the duplication of content files.
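
A minimal sketch of the first idea follows, assuming both systems share one cloud storage: "transferring" a record means registering the existing object key, a checksum fixed at transfer time, and the record metadata in the records-management catalogue, rather than copying the content file. The bucket-style key, table layout, and field names are assumptions for illustration.

```python
# Illustrative metadata-only record transfer over shared cloud storage.
import hashlib, json, sqlite3

shared_store = {                      # stand-in for the shared cloud storage
    "onnara/2020/doc-0001.pdf": b"%PDF-1.7 ... approved business document ...",
}

catalogue = sqlite3.connect(":memory:")
catalogue.execute(
    "CREATE TABLE records (record_id TEXT, object_key TEXT, sha256 TEXT, metadata TEXT)"
)

def transfer(record_id: str, object_key: str, metadata: dict) -> None:
    # Fix the content's checksum at transfer time so integrity can be verified
    # later, even though the bytes themselves were never moved or copied.
    digest = hashlib.sha256(shared_store[object_key]).hexdigest()
    catalogue.execute("INSERT INTO records VALUES (?, ?, ?, ?)",
                      (record_id, object_key, digest, json.dumps(metadata)))

transfer("REC-2020-0001", "onnara/2020/doc-0001.pdf",
         {"title": "Business approval document", "retention_years": 10})
print(catalogue.execute("SELECT record_id, object_key FROM records").fetchall())
```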