Title/Summary/Keyword: Cloud storage system

ADAM: An Approach of Digital Asset Management System (A Study on a Posthumous Digital Asset Management System)

  • Moon, Jeong-Kyung; Kim, Hwang-Rae; Kim, Jin-Mook
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.9 / pp.1977-1982 / 2012
  • Social network services such as Facebook, Twitter, Flickr, Naver blog, and Daum blog make life convenient for smartphone users, who can easily store the multimedia data they want in cyberspace. However, as the number of social network users grows, cloud storage consumption increases sharply, and when a user dies, problems arise that existing systems did not anticipate: notification, dissemination, storage, and inheritance of the deceased's digital assets. At present, a successor must send proof of the family relation between himself and the deceased user to the service provider, which then checks it; only after the family relation is confirmed can the successor use, save, or back up the assets. In this paper, we therefore propose ADAM, which lets a successor inherit digital assets easily, conveniently, and safely. Under ADAM, the successor submits information about the deceased and the family relation to a trusted third-party certification authority, and the digital assets can then be inherited freely and conveniently, like ordinary assets, through a legitimate inheritance process.
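
As a rough, hypothetical illustration of the flow the abstract describes (the paper itself gives no code), the sketch below models the three parties involved; every class, field, and registry lookup is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class InheritanceClaim:
    successor_id: str
    deceased_id: str
    relation_proof: str   # e.g., a reference to a family-relation certificate

class ThirdPartyCertifier:
    """Stands in for the third certification party that vouches for family relations."""
    def __init__(self, registry):
        # registry maps (successor, deceased) pairs to a verified relation proof
        self.registry = registry

    def verify(self, claim: InheritanceClaim) -> bool:
        return self.registry.get((claim.successor_id, claim.deceased_id)) == claim.relation_proof

class ServiceProvider:
    """Releases the deceased user's digital assets only after third-party verification."""
    def __init__(self, certifier: ThirdPartyCertifier, assets):
        self.certifier = certifier
        self.assets = assets   # user_id -> list of asset names

    def inherit(self, claim: InheritanceClaim):
        if not self.certifier.verify(claim):
            raise PermissionError("family relation could not be confirmed")
        # Transfer ownership, as in an ordinary inheritance process.
        transferred = self.assets.pop(claim.deceased_id, [])
        self.assets.setdefault(claim.successor_id, []).extend(transferred)
        return transferred
```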

New Authentication Methods based on User's Behavior Big Data Analysis on Cloud (A Study on New User Authentication Methods Based on Big Data Analysis in a Cloud Environment)

  • Hong, Sunghyuck
    • Journal of Convergence Society for SMB / v.6 no.4 / pp.31-36 / 2016
  • User authentication is the first step in network security. There are many authentication types, and often more than one authentication method is combined to authenticate a user on a network. Except for biometric authentication, most authentication methods can be copied, and someone else can adopt and abuse another user's credentials. Thus, more than one authentication method must be used for user authentication. However, additional credentials degrade system performance and make logging on less efficient. Therefore, to avoid trading performance for security, this research proposes an authentication method based on the user's behavior for secure communication, which helps establish secure and efficient communication.
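
The abstract does not specify which behavioral features are analyzed, so the following is only a toy sketch of the general idea: score a login against a user's habitual behavior and escalate to an extra credential when it looks anomalous. The single login-hour feature and all names are assumptions:

```python
from statistics import mean, pstdev

class BehaviorProfile:
    """Toy model of a user's login-hour history; the paper's actual features
    (big-data behavior analysis over cloud logs) are far richer."""
    def __init__(self, login_hours):
        self.mu = mean(login_hours)
        self.sigma = pstdev(login_hours) or 1.0   # avoid division by zero

    def anomaly_score(self, hour: int) -> float:
        # Distance from the habitual login time, in standard deviations.
        return abs(hour - self.mu) / self.sigma

def authenticate(profile: BehaviorProfile, hour: int, threshold: float = 2.0) -> bool:
    # Accept silently if the behavior looks familiar; otherwise the caller
    # would fall back to an additional credential check.
    return profile.anomaly_score(hour) <= threshold

profile = BehaviorProfile([9, 9, 10, 8, 9, 10])   # past login hours
print(authenticate(profile, 9))    # True: habitual time
print(authenticate(profile, 3))    # False: escalate to another factor
```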

Advanced Resource Management with Access Control for Multitenant Hadoop

  • Won, Heesun; Nguyen, Minh Chau; Gil, Myeong-Seon; Moon, Yang-Sae
    • Journal of Communications and Networks / v.17 no.6 / pp.592-601 / 2015
  • Multitenancy has gained growing importance with the development and evolution of cloud computing technology. In a multitenant environment, multiple tenants with different demands can share a variety of computing resources (e.g., CPU, memory, storage, network, and data) within a single system, while each tenant remains logically isolated. This multitenancy concept offers enterprises that require similar environments for data processing and management highly efficient, cost-effective systems that do not waste computing resources. In this paper, we propose a novel approach supporting multitenancy features for Apache Hadoop, a large-scale distributed system commonly used for processing big data. We first analyze the Hadoop framework, focusing on "yet another resource negotiator (YARN)", which is responsible for managing resources, application runtime, and access control in the latest version of Hadoop. We then define the problems of supporting multitenancy and formally derive the requirements for solving them. Based on these requirements, we design the details of multitenant Hadoop. We also present experimental results that validate the data access control and evaluate the performance enhancement of multitenant Hadoop.
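
The paper's actual design extends YARN itself; as a loose, self-contained analogy, this toy Python scheduler mirrors the two requirements the abstract highlights, per-tenant resource shares and access control, with entirely hypothetical names:

```python
class TenantQueue:
    def __init__(self, name, share, acl):
        self.name = name          # tenant queue name
        self.share = share        # fraction of cluster capacity guaranteed to the tenant
        self.acl = set(acl)       # users allowed to submit to this queue

class MultitenantScheduler:
    """Toy capacity-style scheduler: each tenant gets a guaranteed share of the
    cluster and an ACL, mirroring the isolation that YARN queues provide."""
    def __init__(self, total_vcores, queues):
        assert abs(sum(q.share for q in queues) - 1.0) < 1e-9
        self.total = total_vcores
        self.queues = {q.name: q for q in queues}

    def allocate(self, user, queue_name):
        q = self.queues[queue_name]
        if user not in q.acl:                 # access-control check
            raise PermissionError(f"{user} may not use queue {queue_name}")
        return int(self.total * q.share)      # guaranteed vcores for the tenant

sched = MultitenantScheduler(100, [
    TenantQueue("tenantA", 0.6, ["alice"]),
    TenantQueue("tenantB", 0.4, ["bob"]),
])
print(sched.allocate("alice", "tenantA"))  # 60
```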

Research Trends Analysis of Big Data: Focused on the Topic Modeling (Analysis of Big Data Research Trends: Focusing on Topic Modeling)

  • Park, Jongsoon; Kim, Changsik
    • Journal of Korea Society of Digital Industry and Information Management / v.15 no.1 / pp.1-7 / 2019
  • The objective of this study is to examine research trends in big data. Abstracts were extracted from 4,019 articles published between 1995 and 2018 on Web of Science and analyzed using topic modeling and time series analysis. The 20 most frequent single-term topics were: model, technology, algorithm, problem, performance, network, framework, analytics, management, process, value, user, knowledge, dataset, resource, service, cloud, storage, business, and health. The 20 multi-term topics were: sense technology architecture (T10), decision system (T18), classification algorithm (T03), data analytics (T17), system performance (T09), data science (T06), distribution method (T20), service dataset (T19), network communication (T05), customer & business (T16), cloud computing (T02), health care (T14), smart city (T11), patient & disease (T04), privacy & security (T08), research design (T01), social media (T12), student & education (T13), energy consumption (T07), and supply chain management (T15). The time series analysis indicated that all 40 single-term and multi-term topics were hot topics. This study provides suggestions for future research.
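
As a minimal sketch of the kind of topic modeling the study applies (the paper's exact pipeline, tooling, and parameters are not stated here), the snippet below runs LDA over a few stand-in abstracts with scikit-learn:

```python
# Minimal LDA topic-modeling sketch; the toy corpus stands in for the
# 4,019 Web of Science abstracts analyzed in the paper.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "cloud storage performance model",
    "big data analytics framework",
    "network security and privacy",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")   # highest-weight terms per topic
```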

Efficient and Privacy-Preserving Near-Duplicate Detection in Cloud Computing (A Near-Duplicate Detection Scheme That Improves Search Efficiency and Guarantees Privacy in a Cloud Environment)

  • Hahn, Changhee; Shin, Hyung June; Hur, Junbeom
    • Journal of KIISE / v.44 no.10 / pp.1112-1123 / 2017
  • As content providers increasingly offload content-centric services to the cloud, data retrieval over the cloud typically returns many redundant items because near-duplicate content is prevalent on the Internet. Simply fetching all data from the cloud severely degrades efficiency in terms of resource utilization and bandwidth, and data may be encrypted by multiple content providers under different keys to preserve privacy. Thus, locating near-duplicate data in a privacy-preserving way depends on the ability to deduplicate redundant search results and return the best matches without decrypting data. To this end, we propose an efficient near-duplicate detection scheme for encrypted data in the cloud. Our scheme has the following benefits. First, a single query is enough to locate near-duplicate data even if they are encrypted under different keys of multiple content providers. Second, storage, computation, and communication costs are reduced compared to existing schemes, while achieving the same level of search accuracy. Third, scalability is significantly improved as a result of a novel and efficient two-round detection that locates near-duplicate candidates over large quantities of data in the cloud. An experimental analysis with real-world data demonstrates the applicability of the proposed scheme to a practical cloud system. Lastly, the proposed scheme is on average 70.6% faster than an existing scheme.
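
The scheme itself operates on encrypted data, which the following does not attempt to reproduce; it is only a plaintext MinHash sketch of near-duplicate detection, a classical building block for this kind of task:

```python
import hashlib

def minhash(tokens, num_hashes=64):
    """Tiny MinHash signature over a token set; each seed acts as one hash function."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{t}".encode()).digest()[:8], "big")
            for t in tokens
        ))
    return sig

def similarity(sig_a, sig_b):
    # The fraction of matching signature slots estimates Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

doc1 = "the quick brown fox jumps over the lazy dog".split()
doc2 = "the quick brown fox leaped over the lazy dog".split()
print(similarity(minhash(doc1), minhash(doc2)))  # high score -> near-duplicates
```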

MPEG-DASH based 3D Point Cloud Content Configuration Method (A Method for Composing 3D Point Cloud Content Based on MPEG-DASH)

  • Kim, Doohwan; Im, Jiheon; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.24 no.4 / pp.660-669 / 2019
  • Recently, with the development of three-dimensional scanning devices and multi-dimensional array cameras, research has been conducted continuously on techniques for handling three-dimensional data in application fields such as AR (Augmented Reality)/VR (Virtual Reality) and autonomous driving. In the AR/VR field in particular, content that expresses 3D video as point data has appeared, but it requires a much larger amount of data than conventional 2D images. Therefore, serving 3D point cloud content to users requires various technological developments such as highly efficient encoding/decoding, storage, and transfer. In this paper, a V-PCC bitstream created with the V-PCC encoder proposed by the MPEG-I (MPEG-Immersive) V-PCC (Video-based Point Cloud Compression) group is organized into segments as defined by the MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard. In addition, to provide the user with information about the 3D coordinate system, a depth-information parameter is added to the signaling message. We then design a verification platform to validate the proposed technique and confirm it with respect to the algorithm of the proposed technology.
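
As a simplified, hypothetical illustration of the segment-packaging step (a real deployment would generate an MPEG-DASH MPD and follow the V-PCC systems specification, neither of which is shown here), this sketch splits an encoded bitstream into fixed-size segments and records them in a toy manifest:

```python
from pathlib import Path

def segment_bitstream(src, out_dir, segment_bytes=1_000_000):
    """Split an encoded bitstream into fixed-size segments and list them in a
    toy manifest. File names and the segment size are illustrative only."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    names = []
    with open(src, "rb") as f:
        i = 0
        while chunk := f.read(segment_bytes):
            name = f"segment_{i:04d}.bin"
            (out / name).write_bytes(chunk)   # one DASH-style media segment
            names.append(name)
            i += 1
    (out / "manifest.txt").write_text("\n".join(names))
    return names
```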

Optimizing I/O Stack for Fast Storage Devices (I/O Stack Optimization for High-Speed Storage Devices)

  • Han, Hyuck
    • The Journal of the Korea Contents Association / v.16 no.5 / pp.251-258 / 2016
  • Recently, the demand for fast storage devices has been rapidly increasing in cloud platforms, social network services, and similar settings. Despite the development of fast storage devices, the traditional Linux I/O stack cannot exploit the full extent of their performance improvement, since it has been optimized for disk-based storage devices. In this paper, we propose an optimized I/O stack that can fully utilize the I/O bandwidth and latency of fast storage devices. To this end, we design a new I/O interface to replace the current block I/O interface and optimize it. Our optimized I/O interface bypasses operations/layers in the block I/O subsystem of the current Linux I/O stack to fully exploit fast storage devices. We also adapt Linux file systems such as ext2 and ext4 to run on our I/O interface. We evaluate our I/O stack with multiple benchmarks, and the experimental results show that it achieves 1.7 times better throughput than the traditional Linux I/O stack.
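
The paper's interface replaces parts of the kernel block layer, which cannot be reproduced in a short user-space snippet; the sketch below shows only the closest user-space analogy, bypassing the page cache with O_DIRECT on Linux:

```python
import mmap
import os

def direct_read(path, length=4096, offset=0):
    """Read while bypassing the page cache with O_DIRECT (Linux-only).
    This only skips the cache; the paper's interface goes further and
    bypasses block-layer queueing inside the kernel itself."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        # O_DIRECT requires sector-aligned buffers and sizes;
        # an anonymous mmap gives page-aligned memory.
        buf = mmap.mmap(-1, length)
        n = os.preadv(fd, [buf], offset)
        return bytes(buf[:n])
    finally:
        os.close(fd)
```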

Performance Analysis of Docker Container Migration Using Secure Copy in Mobile Edge Computing (Performance Analysis of Docker Container Migration Using Secure Copy in a Mobile Edge Computing Environment)

  • Byeon, Wonjun; Lim, Han-wool; Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.5 / pp.901-909 / 2021
  • Since mobile devices have limited computational resources, they tend to offload computation and storage to the cloud. As real-time responsiveness becomes more important with 5G, many studies have been conducted on edge clouds, which compute at locations closer to users than central clouds do. The farther the user is from the edge cloud connected to the base station, the slower the network transfer becomes, so applications should be migrated to, and re-run on, a nearby edge cloud for smooth service use. We run applications in Docker containers, which are independent of the host operating system and have relatively light images compared to virtual machines. Existing migration studies have relied on network simulators, which use fixed values and therefore differ from results obtained in real-world environments. In addition, they migrated images through shared storage, which poses a risk of packet content exposure. In this paper, we establish an edge computing environment in the real world and migrate containers with Secure Copy (SCP), an encrypted data-transfer method. We compare the migration time with Network File System (NFS), one of the shared-storage methods, and analyze network packets to verify safety.
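
As a hedged sketch of the migration flow (host names, paths, and the checkpoint name are placeholders; `docker checkpoint` is an experimental Docker feature backed by CRIU), the steps might look like:

```python
import subprocess

def migrate_container(name, dest_host, checkpoint="cp0", workdir="/tmp/ckpt"):
    """Checkpoint a running container and copy the checkpoint with SCP,
    roughly following the flow described in the abstract."""
    # 1. Freeze the container state into a checkpoint directory.
    subprocess.run(["docker", "checkpoint", "create",
                    "--checkpoint-dir", workdir, name, checkpoint], check=True)
    # 2. Ship the checkpoint over an encrypted channel instead of shared storage.
    subprocess.run(["scp", "-r", f"{workdir}/{checkpoint}",
                    f"{dest_host}:{workdir}/"], check=True)
    # 3. On the destination host, the container is restored with
    #    `docker start --checkpoint-dir <workdir> --checkpoint cp0 <name>`.
```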

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (Design and Implementation of a MongoDB-Based Unstructured Log Processing System in a Cloud Environment)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for processing a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. The data models of NoSQL databases are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is adopted because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data increases rapidly, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
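
As a minimal illustration of why a schema-free store suits such logs (the connection string, database, and collection names are placeholders, not the paper's configuration), heterogeneous log documents can be inserted and summarized per type with pymongo:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["logs"]

# Schema-free insert: each log document may carry different fields.
logs.insert_many([
    {"type": "transfer", "branch": "A", "ts": datetime.now(timezone.utc), "ms": 120},
    {"type": "login",    "branch": "B", "ts": datetime.now(timezone.utc)},
])

# Per-type counts, the kind of summary a log graph generator module would plot.
for row in logs.aggregate([{"$group": {"_id": "$type", "count": {"$sum": 1}}}]):
    print(row)
```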

Implementing I/O Bandwidth Sharing Scheme between Multiple Linux Containers based on Dm-zoned for Zoned Namespace SSDs

  • Seokjun Lee; Sungyong Ahn
    • International journal of advanced smart convergence / v.12 no.4 / pp.237-245 / 2023
  • In cloud services, system resources such as CPU, memory, and I/O bandwidth are shared among multiple users. In particular, in Linux container environments, I/O bandwidth is distributed in proportion to the weight of each container through the BFQ I/O scheduler. However, since this I/O scheduler can only be applied to conventional block storage devices, it cannot be applied to Zoned Namespace (ZNS) SSDs, a new storage interface that has recently been studied. To overcome this limitation, in this paper we implement a weighted proportional I/O bandwidth sharing scheme for ZNS SSDs in dm-zoned, which emulates conventional block storage on top of ZNS SSDs. Each user receives a budget, proportional to the user's weight, that is consumed to process the user's I/O requests. If the budget is exhausted, I/O requests cannot be processed and are queued until the budget is replenished. At each budget refill period, the budget is replenished based on the user's weight. Experimental results confirm that I/O bandwidth is distributed according to the weights, as expected.
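
As a rough sketch of the weighted budget idea described in the abstract (the real implementation lives inside dm-zoned in the kernel; all names and numbers here are illustrative), a per-user budget refilled in proportion to weight might look like:

```python
class BudgetScheduler:
    """Toy version of the weighted budget scheme: per-user budgets are refilled
    in proportion to weight each period, and an I/O request is admitted only
    while the issuing user still has budget left."""
    def __init__(self, weights, budget_per_period=100):
        self.weights = weights              # user -> weight
        self.per_period = budget_per_period
        self.budget = {}
        self.refill()

    def refill(self):
        # Called once per budget refill period.
        total = sum(self.weights.values())
        for user, w in self.weights.items():
            self.budget[user] = self.per_period * w // total

    def admit(self, user, cost=1):
        # A real scheduler would queue the request until the next refill;
        # here exhaustion is simply reported as a rejection.
        if self.budget[user] < cost:
            return False
        self.budget[user] -= cost
        return True

sched = BudgetScheduler({"tenantA": 3, "tenantB": 1})
# tenantA can issue roughly 3x the requests of tenantB before waiting for a refill.
print(sched.budget)  # {'tenantA': 75, 'tenantB': 25}
```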