• Title/Summary/Keyword: cloud computing systems


Outsourcing decryption algorithm of Verifiable transformed ciphertext for data sharing

  • Guangwei Xu;Chen Wang;Shan Li;Xiujin Shi;Xin Luo;Yanglan Gan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.4
    • /
    • pp.998-1019
    • /
    • 2024
  • Mobile cloud computing is a very attractive service paradigm that outsources users' data computing and storage from mobile devices to cloud data centers. To protect data privacy, users often encrypt their data before outsourcing so that it can be shared securely. However, the bilinear and power operations involved in the encryption and decryption computation make it impossible for mobile devices with weak computational power and network transmission capability to correctly obtain decryption results. To this end, this paper proposes an outsourcing decryption algorithm with verifiable transformed ciphertext. First, the algorithm uses a key-blinding technique to divide the user's private key into two parts, i.e., the authorization key and the decryption secret key. Then, the cloud data center performs the outsourcing decryption operation on the encrypted data, achieving partial decryption after obtaining the authorization key and the user's outsourced decryption request. A verifiable random function is used to keep the semi-trusted cloud data center from skipping or shortcutting the outsourcing decryption operation, so that the verifiability of the outsourcing decryption is satisfied. Finally, the algorithm uses an authorization period to control the final decryption by the authorized user. Theoretical and experimental analyses show that the proposed algorithm reduces the computational overhead of ciphertext decryption while ensuring the verifiability of outsourcing decryption.
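
The key-blinding idea summarized above can be illustrated in isolation. The toy sketch below is not the paper's algorithm: it splits a schoolbook ElGamal private key multiplicatively, lets a "cloud" party exponentiate with the authorization share, and lets the user recombine the partial result with the retained share. In real constructions the outsourced half carries the expensive bilinear operations; in this toy both halves are ordinary exponentiations, so it only demonstrates the key split and recombination. The tiny group parameters and function names are assumptions, and the verifiable-random-function check and authorization period are omitted.

```python
# Toy illustration (NOT the paper's scheme): ElGamal decryption outsourced
# via multiplicative key blinding, using a small schoolbook group.
import secrets

p, q, g = 467, 233, 4          # p = 2q + 1; g generates the order-q subgroup

def keygen():
    x = secrets.randbelow(q - 1) + 1          # private key
    return x, pow(g, x, p)                    # (private, public)

def encrypt(h, m):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (m * pow(h, r, p)) % p   # (c1, c2)

def blind(x):
    """Split x into an authorization key (sent to the cloud) and a secret key (kept by the user)."""
    x1 = secrets.randbelow(q - 1) + 1
    x2 = (x * pow(x1, -1, q)) % q             # x = x1 * x2 mod q
    return x1, x2

def cloud_partial_decrypt(c1, x1):
    return pow(c1, x1, p)                     # done by the (semi-trusted) cloud

def user_final_decrypt(partial, c2, x2):
    s = pow(partial, x2, p)                   # user recombines the cloud's partial result
    return (c2 * pow(s, -1, p)) % p

x, h = keygen()
c1, c2 = encrypt(h, m=42)
auth_key, dec_key = blind(x)
assert user_final_decrypt(cloud_partial_decrypt(c1, auth_key), c2, dec_key) == 42
```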

RDP: A storage-tier-aware Robust Data Placement strategy for Hadoop in a Cloud-based Heterogeneous Environment

  • Muhammad Faseeh Qureshi, Nawab;Shin, Dong Ryeol
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.9
    • /
    • pp.4063-4086
    • /
    • 2016
  • Cloud computing is a robust technology that helps resolve many parallel and distributed computing issues in the modern Big Data environment. Hadoop is an ecosystem that processes large data sets in a distributed computing environment. HDFS, the Hadoop file system, distributes data blocks to the cluster nodes. Data block placement has become a bottleneck for overall performance in a Hadoop cluster. The current placement policy assumes that all Datanodes have equal computing capacity to process data blocks, where this capacity includes the availability of the same storage media and the same processing performance on every node. As a result, Hadoop cluster performance is affected by unbalanced workloads, inefficient storage-tier usage, network traffic congestion, and HDFS integrity issues. This paper proposes a storage-tier-aware Robust Data Placement (RDP) scheme, which systematically resolves unbalanced workloads, reduces network congestion to an optimal state, utilizes the storage tiers effectively, and minimizes HDFS integrity issues. The experimental results show that the proposed approach reduced the unbalanced workload issue by 72%. Moreover, it resolved the storage-tier compatibility problem by 81% by predicting storage for block jobs, and improved overall data block placement by 78% through pre-calculated computing-capacity allocations and execution of map files over the respective Namenode and Datanodes.
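
As a rough illustration of what storage-tier-aware placement means (not the RDP scheme itself), the sketch below ranks Datanodes by an assumed capacity score built from storage tier, CPU cores, and current load, instead of treating every node as identical; the tier weights and node fields are hypothetical.

```python
# Minimal sketch: tier- and load-aware block placement that ranks Datanodes
# by an assumed capacity score. Weights and fields are illustrative only.
from dataclasses import dataclass

TIER_WEIGHT = {"ssd": 3.0, "hdd": 1.0, "archive": 0.3}   # assumed storage tiers

@dataclass
class DataNode:
    name: str
    tier: str            # "ssd" | "hdd" | "archive"
    cpu_cores: int
    used_blocks: int     # current load on the node

def capacity_score(node: DataNode) -> float:
    # Higher tier and more cores raise the score; existing load lowers it.
    return TIER_WEIGHT[node.tier] * node.cpu_cores / (1 + node.used_blocks)

def place_block(nodes: list[DataNode], replication: int = 3) -> list[str]:
    chosen = sorted(nodes, key=capacity_score, reverse=True)[:replication]
    for n in chosen:
        n.used_blocks += 1          # keep load bookkeeping for the next block
    return [n.name for n in chosen]

cluster = [DataNode("dn1", "ssd", 8, 10), DataNode("dn2", "hdd", 4, 2),
           DataNode("dn3", "hdd", 8, 40), DataNode("dn4", "archive", 16, 0)]
print(place_block(cluster))         # replica targets ranked by score
```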

Service Deployment Strategy for Customer Experience and Cost Optimization under Hybrid Network Computing Environment

  • Ning Wang;Huiqing Wang;Xiaoting Wang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.11
    • /
    • pp.3030-3049
    • /
    • 2023
  • With the development and wide application of hybrid network computing modes such as cloud computing, edge computing, and fog computing, serving customer requests and collaboratively optimizing the various computing resources pose huge challenges. Considering the characteristics of network environment resources, optimized deployment of service resources is a feasible solution. In this paper, the optimization goals for deploying service resources are customer experience and service cost, with a focus on how deployed services affect load, fault tolerance, service cost, and quality of service (QoS). An alternate node filtering algorithm (ANF) and a cost-matrix adjustment factor are therefore proposed to enhance system service performance without changing the minimum total service cost, and a corresponding theoretical proof is provided. In addition, to improve the fault tolerance of the system, an alternate node preference factor and algorithm (ANP) are presented, which effectively reduce the probability of data copy loss; based on these, an improved cost-efficient replica deployment strategy named ICERD is given. Finally, by simulating random cloud node failures in the experiments and comparing the ICERD strategy with representative strategies, it is validated that ICERD not only effectively reduces customer access latency, meets customers' QoS requests, and improves system service quality, but also maintains the load balance of the entire system, reduces service cost, and enhances system fault tolerance, confirming the effectiveness and reliability of the strategy.
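
A minimal sketch of cost-aware replica deployment in this spirit (not the ICERD strategy) is given below: it greedily picks the cheapest nodes that satisfy an assumed latency bound and then checks that the combined failure probability of the chosen replicas stays under a loss threshold; all node values and thresholds are illustrative.

```python
# Illustrative sketch only: greedy cost-aware replica deployment that prefers
# cheap nodes but rejects deployments violating assumed QoS/fault-tolerance targets.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cost: float          # cost of hosting one replica
    latency_ms: float    # expected customer access latency
    fail_prob: float     # probability the node fails during the period

def deploy_replicas(nodes, k=3, max_latency=50.0, max_loss_prob=1e-3):
    # Keep only nodes that individually satisfy the latency bound.
    eligible = [n for n in nodes if n.latency_ms <= max_latency]
    eligible.sort(key=lambda n: n.cost)            # cheapest first
    chosen = eligible[:k]
    # All replicas are lost only if every chosen node fails (independence assumed).
    loss_prob = 1.0
    for n in chosen:
        loss_prob *= n.fail_prob
    if len(chosen) < k or loss_prob > max_loss_prob:
        raise RuntimeError("QoS/fault-tolerance targets cannot be met")
    return [n.name for n in chosen], sum(n.cost for n in chosen), loss_prob

nodes = [Node("edge-1", 4.0, 12, 0.05), Node("cloud-1", 2.5, 35, 0.01),
         Node("fog-1", 3.0, 20, 0.03), Node("cloud-2", 2.0, 60, 0.01)]
print(deploy_replicas(nodes))   # (replica set, total cost, copy-loss probability)
```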

Design and Forensic Analysis of a Zero Trust Model for Amazon S3 (Amazon S3 제로 트러스트 모델 설계 및 포렌식 분석)

  • Kyeong-Hyun Cho;Jae-Han Cho;Hyeon-Woo Lee;Jiyeon Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.2
    • /
    • pp.295-303
    • /
    • 2023
  • As the cloud computing market grows, a variety of cloud services are now reliably delivered, and administrative agencies and public institutions of South Korea are transferring all their information systems to cloud systems. Since protecting cloud services from misuse and malicious access by insiders and outsiders over the Internet is challenging, it is essential to develop security solutions in advance in order to operate cloud services safely. In this paper, we propose a zero trust model for cloud storage services that store sensitive data. We then verify the effectiveness of the proposed model by operating a cloud storage service. Memory, web, and network forensics are also performed to track cloud users' access and usage with and without the zero trust model. As the cloud storage service, we use Amazon S3 (Simple Storage Service) and deploy zero trust techniques such as access control lists and key management systems. Furthermore, to cover the different types of access to S3, we generate service requests both inside and outside AWS (Amazon Web Services) and then analyze the results of the zero trust techniques depending on the location of the service request.
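
For readers unfamiliar with how such controls look in practice, the hedged sketch below applies two zero-trust-style controls to an S3 bucket with boto3: a default-deny bucket policy that allows only one explicitly trusted principal, and default server-side encryption under a customer-managed KMS key. It is not the paper's exact configuration (the paper also mentions access control lists), and the bucket name, account ID, role ARN, and key alias are hypothetical placeholders.

```python
# Hedged sketch: least-privilege bucket policy plus KMS-managed default
# encryption on an S3 bucket. All identifiers below are placeholders.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-zero-trust-bucket"        # hypothetical

# Deny every principal except one explicitly allowed role (default-deny stance).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptTrustedRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {
            "StringNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::123456789012:role/TrustedAppRole"
            }
        }
    }]
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Enforce server-side encryption with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/zero-trust-demo-key"   # hypothetical alias
            }
        }]
    },
)
```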

R2NET: Storage and Analysis of Attack Behavior Patterns

  • M.R., Amal;P., Venkadesh
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.295-311
    • /
    • 2023
  • Cloud computing has evolved significantly, aiming to provide users with fast, dependable, and low-cost services. With its development, malicious users have become increasingly capable of attacking both its internal and external security. To ensure the security of cloud services, encryption, authorization, firewalls, and intrusion detection systems have been employed. However, these single monitoring agents are complex and time-consuming, and they do not detect ransomware and zero-day vulnerabilities on their own. An innovative Record and Replay-based hybrid Honeynet (R2NET) system has been developed to address this issue. Combining a honeynet with Record and Replay (RR) technology, the system allows fine-grained analysis by deferring time-consuming analysis to the replay step. In addition, a machine learning algorithm is utilized to cluster the logs of attackers and store them in a database, so the access time for analyzing an attack is reduced, which in turn increases the efficiency of the proposed framework. The R2NET framework is compared with existing methods such as EEHH net, HoneyDoc, the Honeynet system, and AHDS. The proposed system achieves 7.60%, 9.78%, 18.47%, and 31.52% more accuracy than the EEHH net, HoneyDoc, Honeynet system, and AHDS methods, respectively.
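
The log-clustering step mentioned in the abstract can be pictured with a small sketch (not R2NET itself): recorded attacker log lines are vectorized with TF-IDF, grouped with k-means, and stored with their cluster label in SQLite so that later replay analysis can fetch related sessions together. The feature choice and table layout are assumptions.

```python
# Minimal sketch: cluster recorded attacker log entries and store them with
# their cluster label so later analysis can query one pattern at a time.
import sqlite3
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

logs = [
    "GET /wp-login.php brute force attempt from 203.0.113.7",
    "GET /wp-login.php brute force attempt from 203.0.113.9",
    "SMB exploit payload targeting port 445",
    "SMB lateral movement over port 445",
]

X = TfidfVectorizer().fit_transform(logs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

con = sqlite3.connect("attack_patterns.db")
con.execute("CREATE TABLE IF NOT EXISTS patterns (cluster INTEGER, entry TEXT)")
con.executemany("INSERT INTO patterns VALUES (?, ?)", zip(map(int, labels), logs))
con.commit()

# Later analysis can pull one cluster at a time instead of scanning every log.
print(con.execute("SELECT entry FROM patterns WHERE cluster = 0").fetchall())
```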

A Deep Learning Approach for Intrusion Detection

  • Roua Dhahbi;Farah Jemili
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.10
    • /
    • pp.89-96
    • /
    • 2023
  • Intrusion detection has been widely studied in both industry and academia, but cybersecurity analysts always want more accuracy and global threat analysis to secure their systems in cyberspace. Big data represents a great challenge for intrusion detection systems, making it hard to monitor and analyze this large volume of data using traditional techniques. Recently, deep learning has emerged as a new approach that enables the use of Big Data with low training time and a high accuracy rate. In this paper, we propose a cloud-based IDS that integrates big data and deep learning techniques to detect different attacks as early as possible. To demonstrate its efficacy, we implement the proposed system within the Microsoft Azure Cloud, as it provides both processing power and storage capabilities, using a convolutional neural network (CNN-IDS) with the distributed computing environment Apache Spark, integrated with the Keras deep learning library. We study the performance of the model in two categories of classification (binary and multiclass) using the CSE-CIC-IDS2018 dataset. Our system showed strong performance owing to the integration of the deep learning technique and the Apache Spark engine.
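
A small sketch of a CNN-based intrusion-detection classifier in Keras is shown below for orientation. It is not the paper's CNN-IDS or its Spark pipeline; it treats each flow's tabular features as a 1-D signal for a binary benign-vs-attack decision, and the feature count and hyperparameters are assumptions.

```python
# Sketch of a 1-D convolutional classifier over tabular flow features,
# of the kind one might train on CSE-CIC-IDS2018. Hyperparameters assumed.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 78          # CSE-CIC-IDS2018 flow features (assumed count)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES, 1)),   # each flow as a 1-D "signal"
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # benign vs. attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in random data; in practice the flows would be loaded and scaled from
# the dataset (e.g. via a Spark preprocessing pipeline) before training.
X = np.random.rand(1024, NUM_FEATURES, 1).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1))
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.1)
```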

Improving efficiency of remote data audit for cloud storage

  • Fan, Kuan;Liu, Mingxi;Shi, Wenbo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.4
    • /
    • pp.2198-2222
    • /
    • 2019
  • The cloud storage service has become a rising trend based on cloud computing, which makes remote data integrity auditing a hot topic. Some existing research can audit the integrity and correctness of user data and solve the problem of user privacy leakage. However, these schemes cannot use fewer data blocks to achieve better auditing results. In this paper, we point out that the random sampling used in most auditing schemes does not apply well to the problem of the cloud service provider (CSP) deleting data that users rarely use, and we adopt probability proportional to size (PPS) sampling to handle this situation. A new scheme for improving the audit efficiency of remote data for cloud storage is designed. The proposed scheme supports public auditing with fewer data blocks and constrains the server's malicious behavior to extend the auditing cycle. Compared with the relevant schemes, the experimental results show that the proposed scheme is more effective.
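
The PPS idea can be shown in a few lines. The sketch below (not the paper's challenge construction) draws audit-challenge blocks without replacement, with selection probability proportional to block size, in contrast to uniform random sampling; the block sizes are hypothetical.

```python
# Sketch of probability-proportional-to-size (PPS) sampling of blocks for an
# audit challenge. Using block size as the size measure is an assumption.
import random

blocks = {                      # block id -> size in KB (hypothetical values)
    "b01": 4096, "b02": 512, "b03": 64, "b04": 2048,
    "b05": 128,  "b06": 8192, "b07": 256, "b08": 1024,
}

def pps_challenge(blocks: dict[str, int], k: int) -> list[str]:
    """Draw k distinct block ids, larger blocks being proportionally more likely."""
    remaining = dict(blocks)
    chosen = []
    for _ in range(min(k, len(remaining))):
        ids, sizes = zip(*remaining.items())
        pick = random.choices(ids, weights=sizes, k=1)[0]   # PPS draw
        chosen.append(pick)
        del remaining[pick]                                  # sample without replacement
    return chosen

random.seed(7)
print(pps_challenge(blocks, k=3))   # three ids biased toward large blocks
```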

An Enhanced Privacy-Aware Authentication Scheme for Distributed Mobile Cloud Computing Services

  • Xiong, Ling;Peng, Daiyuan;Peng, Tu;Liang, Hongbin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.12
    • /
    • pp.6169-6187
    • /
    • 2017
  • With the fast growth of mobile services, Mobile Cloud Computing (MCC) has gained a great deal of attention from researchers in academia and industry. User authentication and privacy are significant issues in the MCC environment. Recently, Tsai and Lo proposed a privacy-aware authentication scheme for distributed MCC services, which was claimed to support mutual authentication and user anonymity. However, Irshad et al. pointed out that this scheme cannot achieve the desired security goals and improved it. Unfortunately, this paper shows that the security features of Irshad et al.'s scheme are achieved at the price of multiple time-consuming operations, such as three bilinear pairing operations and one map-to-point hash function operation. Besides, it still suffers from two minor design flaws: the inability to achieve three-factor security and the lack of user revocation and re-registration. To address these issues, an enhanced and provably secure authentication scheme for distributed MCC services is designed in this work. The proposed scheme meets all desirable security requirements and is able to resist various kinds of attacks. Moreover, compared with previously proposed schemes, it provides more security features while achieving lower computation and communication costs.
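
To make the cost argument concrete, the sketch below shows a generic nonce-based mutual authentication handshake built only on HMAC, i.e., the kind of pairing-free, low-cost primitive the abstract argues for. It is emphatically not the scheme proposed in the paper; the key setup and message formats are assumptions.

```python
# Generic illustration: HMAC-based mutual authentication with fresh nonces.
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(32)     # established out of band (assumed)

def tag(*parts: bytes) -> bytes:
    return hmac.new(SHARED_KEY, b"|".join(parts), hashlib.sha256).digest()

# 1. User -> server: identity and a fresh nonce.
user_id, n_user = b"alice@mcc", secrets.token_bytes(16)

# 2. Server -> user: its own nonce plus a tag over both nonces (proves key knowledge).
n_srv = secrets.token_bytes(16)
srv_proof = tag(b"server", user_id, n_user, n_srv)

# 3. User verifies the server, then answers with its own proof.
assert hmac.compare_digest(srv_proof, tag(b"server", user_id, n_user, n_srv))
usr_proof = tag(b"user", user_id, n_user, n_srv)

# 4. Server verifies the user; both sides then derive a fresh session key.
assert hmac.compare_digest(usr_proof, tag(b"user", user_id, n_user, n_srv))
session_key = tag(b"session", n_user, n_srv)
print("mutual authentication succeeded")
```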

A Study on Data Storage and Recovery in Hadoop Environment (하둡 환경에 적합한 데이터 저장 및 복원 기법에 관한 연구)

  • Kim, Su-Hyun;Lee, Im-Yeong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.2 no.12
    • /
    • pp.569-576
    • /
    • 2013
  • Cloud computing has been receiving increasing attention recently, yet security remains the main problem that still needs to be addressed. In general, a cloud computing environment protects data by using distributed servers for data storage. When the amount of data is very large, however, the pieces of a secret key (if used) may be divided among hundreds of distributed servers, so managing the distributed servers becomes very difficult simply in terms of the authentication, encryption, and decryption processes, which incur vast overheads. In this paper, we propose an efficient data storage and recovery scheme using XOR and RAID in a Hadoop environment.
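
The XOR/RAID-style parity idea referenced in the abstract can be illustrated briefly (this is not the paper's full scheme): a single parity block, stored on a separate node, lets any one lost data block be reconstructed by XOR-ing the survivors with the parity; fixed-size blocks are assumed.

```python
# Tiny sketch of XOR parity for single-block recovery, RAID-style.
def xor_blocks(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

data = [b"HADOOPBLK1", b"HADOOPBLK2", b"HADOOPBLK3"]   # equal-length data blocks
parity = xor_blocks(data)                               # stored on a separate node

# Suppose block 1 is lost: XOR of the survivors and the parity restores it.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
print("recovered:", recovered)
```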

Performance Evaluation of Hypervisor VMs and Nested VMs Overcommitting Memory in Nested Virtualization Environments (중첩 가상화 환경에서 메모리 오버커밋을 하는 하이퍼바이저 VM과 중첩 VM의 성능 평가)

  • Lyoo, Taemuk;Lim, JongBeom;Chung, Kwang-Sik;Suh, Teaweon;Yu, Heonchang
    • Annual Conference of KIPS
    • /
    • 2013.11a
    • /
    • pp.61-64
    • /
    • 2013
  • Virtualization is a technology that allows virtual resources to access physical resources; by installing multiple VMs (virtual machines), as many operating systems as there are VMs can be used. Virtualization is employed to prevent the waste of resources and to reduce management costs. Virtualization techniques can be divided into CPU, memory, and I/O virtualization; among these, memory virtualization enables the efficient use of memory resources. Multiple VMs can be allocated and use more memory than the physical machine actually has, a situation called the overcommit state. Nested virtualization allows a VM to use hardware virtualization techniques, providing an environment in which another VM can run on top of a VM. In such (nested) virtualization environments, memory access is generally performed using hardware-assisted nested paging. In this paper, we experimentally show the performance difference between nested VMs and hypervisor VMs when memory overcommit occurs.