• Title/Summary/Keyword: distributed clouds


Enabling Performance Intelligence for Application Adaptation in the Future Internet

  • Calyam, Prasad;Sridharan, Munkundan;Xu, Yingxiao;Zhu, Kunpeng;Berryman, Alex;Patali, Rohit;Venkataraman, Aishwarya
    • Journal of Communications and Networks
    • /
    • v.13 no.6
    • /
    • pp.591-601
    • /
    • 2011
  • Today's Internet, which provides communication channels with best-effort end-to-end performance, is rapidly evolving into an autonomic global computing platform. Achieving autonomicity in the Future Internet will require a performance architecture that (a) allows users to request and own 'slices' of geographically-distributed host and network resources, (b) measures and monitors end-to-end host and network status, (c) enables analysis of the measurements within expert systems, and (d) provides performance intelligence in a timely manner for application adaptations to improve performance and scalability. We describe the requirements and design of one such "Future Internet performance architecture" (FIPA), and present our reference implementation of FIPA called 'OnTimeMeasure.' OnTimeMeasure comprises several measurement-related services that can interact with each other and with existing measurement frameworks to enable performance intelligence. We also explain our OnTimeMeasure deployment in the Global Environment for Network Innovations (GENI) infrastructure, a collaborative research initiative to build a sliceable Future Internet. Further, we present an application-adaptation case study in GENI that uses OnTimeMeasure-enabled performance intelligence in the context of dynamic resource allocation within thin-client based virtual desktop clouds. We show how a virtual desktop cloud provider in the Future Internet can use the performance intelligence to increase cloud scalability, while simultaneously delivering satisfactory user quality-of-experience.
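
The adaptation loop described above can be pictured concretely. The sketch below is a rough Python illustration, not the OnTimeMeasure API: it shows how a virtual desktop cloud provider might consume measured latency and host utilization to place the next desktop. The SiteStatus fields, thresholds, and choose_placement logic are assumptions for illustration.

```python
# Hypothetical sketch of a performance-intelligence-driven placement loop.
# Field names and thresholds are illustrative assumptions, not the actual
# OnTimeMeasure services described in the paper.
from dataclasses import dataclass

@dataclass
class SiteStatus:
    name: str
    network_latency_ms: float   # measured end-to-end latency to users
    host_utilization: float     # 0.0-1.0 utilization of the host pool

def choose_placement(sites, latency_budget_ms=150.0, util_ceiling=0.85):
    """Pick a site that can host another virtual desktop while keeping
    user quality-of-experience within budget; None if no site qualifies."""
    eligible = [s for s in sites
                if s.network_latency_ms <= latency_budget_ms
                and s.host_utilization < util_ceiling]
    # Prefer the least-loaded eligible site to maximize headroom.
    return min(eligible, key=lambda s: s.host_utilization, default=None)

if __name__ == "__main__":
    sites = [SiteStatus("dc-east", 40.0, 0.90),
             SiteStatus("dc-west", 120.0, 0.55),
             SiteStatus("dc-central", 200.0, 0.30)]
    target = choose_placement(sites)
    print(target.name if target else "reject request")  # -> dc-west
```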

OpenID Based User Authentication Scheme for Multi-clouds Environment

  • Wi, Yukyeong;Kwak, Jin
    • Journal of Digital Convergence
    • /
    • v.11 no.7
    • /
    • pp.215-223
    • /
    • 2013
  • As cloud computing takes hold, a variety of cloud services are being deployed. However, to use each different cloud service, a user must complete an individual authentication process for that service. This is not only cumbersome; the repeated authentication steps can also lead to password exposure, and each cloud server must keep its own store of the user's authentication information, which can overload its database. Moreover, the differing authentication and input schemes of each service create a high probability of security problems arising from phishing attacks. We therefore propose an OpenID-based user authentication scheme, applicable to multi-cloud environments, in which a trusted identity provider verifies the user's ID.
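
The core single-sign-on idea can be sketched briefly. In the illustration below, one trusted identity provider issues a signed assertion that every cloud service can verify, so no service keeps its own password database. The token format is a made-up stand-in; real OpenID verifies assertions by redirecting to the provider rather than by sharing its secret, which is done here only to keep the sketch self-contained.

```python
# Minimal sketch of the single-sign-on idea behind OpenID-style multi-cloud
# authentication: each cloud service defers verification to one trusted
# identity provider instead of keeping its own credential database.
# NOTE: sharing the provider's secret is an illustrative shortcut; actual
# OpenID relying parties verify via redirection to the provider.
import hashlib, hmac, secrets

IDP_SECRET = secrets.token_bytes(32)  # held by the identity provider

def idp_issue_token(user_id: str) -> str:
    """Identity provider signs an assertion for an authenticated user."""
    sig = hmac.new(IDP_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def cloud_service_verify(token: str) -> bool:
    """Any cloud service checks the assertion; no per-service passwords."""
    user_id, _, sig = token.partition(".")
    expected = hmac.new(IDP_SECRET, user_id.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = idp_issue_token("alice")
print(cloud_service_verify(token))        # True on every cloud service
print(cloud_service_verify(token + "x"))  # False: tampered assertion
```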

A Data Sharing Algorithm of Micro Data Center in Distributed Cloud Networks

  • Kim, Hyuncheol
    • Convergence Security Journal
    • /
    • v.15 no.2
    • /
    • pp.63-68
    • /
    • 2015
  • Current ICT (Information & Communication Technology) infrastructures, built on the Internet and server/client communication, are struggling to keep up with a widening variety of devices and services and with ongoing business and technology evolution. Cloud computing originated as a simple way to request and execute desired operations on a network of clouds; the term refers to IT resources that provide services using Internet technology, and it is receiving the most attention in today's IT trends. In distributed cloud environments, an integrated management system fundamentally reduces the management costs of network and computing resources, and distributed micro data centers (DCs) can increase cost savings by relieving the traffic explosion in the core network. However, traditional flooding methods may cause heavy traffic because messages are forwarded to all neighboring DCs. Restricted Path Flooding algorithms have been proposed for this purpose, but in large networks they still have the disadvantage of generating excess traffic. In this paper, we develop a Lightweight Path Flooding algorithm that improves on existing flooding algorithms by restricting hop count.
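
The general mechanism of hop-count-restricted flooding, of which the paper's Lightweight Path Flooding is a refinement, can be illustrated as follows. The topology and hop limit are made-up inputs, and this is not the paper's exact algorithm.

```python
# Illustrative sketch of hop-count-restricted flooding between micro data
# centers: a DC forwards an announcement to its neighbors only while the
# message's remaining hop budget is positive, bounding total traffic.
from collections import deque

def flood(topology, source, max_hops):
    """Return the set of DCs reached and the number of messages sent."""
    reached = {source}
    messages = 0
    queue = deque([(source, max_hops)])
    while queue:
        node, budget = queue.popleft()
        if budget == 0:
            continue
        for neighbor in topology[node]:
            messages += 1                 # one transmission per link use
            if neighbor not in reached:   # duplicates received but dropped
                reached.add(neighbor)
                queue.append((neighbor, budget - 1))
    return reached, messages

topology = {"A": ["B", "C"], "B": ["A", "D"],
            "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"]}
print(flood(topology, "A", max_hops=2))  # E is 3 hops away, so unreached
```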

THE PROPERTIES OF DUST EMISSION IN THE GALACTIC CENTER REGION REVEALED BY FIS-FTS OBSERVATIONS

  • Yasuda, A.;Kaneda, H.;Takahashi, A.;Nakagawa, T.;Kawada, M.;Okada, Y.;Takahashi, H.;Murakami, N.
    • Publications of The Korean Astronomical Society
    • /
    • v.27 no.4
    • /
    • pp.221-222
    • /
    • 2012
  • We present the results of far-infrared spectral mapping of the Galactic center region with FIS-FTS, which covered the two massive star-forming clusters, Arches and Quintuplet. We find that two dust components with temperatures of about 20 K and 50 K are required to fit the overall continuum spectra. The warm dust emission is spatially correlated with the [OIII] 88 μm emission and both are likely to be associated with the two clusters, while the cool dust emission is more widely distributed without any clear spatial correlation with the clusters. We find differences in the properties of the ISM around the two clusters, suggesting that the star-forming activity of the Arches cluster is at an earlier stage than that of the Quintuplet cluster.
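
For illustration only, a two-temperature fit of the kind described can be sketched with standard tools; the modified-blackbody form, emissivity index, normalization, and synthetic spectrum below are assumptions, not the FIS-FTS data or the authors' fitting code.

```python
# Sketch of fitting a two-temperature dust model to a far-infrared
# continuum: the spectrum is modeled as the sum of a cool (~20 K) and a
# warm (~50 K) modified blackbody. All inputs here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

H, K_B, C = 6.626e-34, 1.381e-23, 2.998e8  # SI constants

def planck(nu, T):
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * T))

def two_component(nu, a_cool, t_cool, a_warm, t_warm, beta=2.0):
    nu0 = 1e12  # arbitrary normalization frequency
    return (a_cool * (nu / nu0)**beta * planck(nu, t_cool)
            + a_warm * (nu / nu0)**beta * planck(nu, t_warm))

# Synthetic "observed" spectrum over roughly 60-600 um (0.5-5 THz).
nu = np.linspace(0.5e12, 5e12, 40)
truth = two_component(nu, 1e12, 20.0, 1e10, 50.0)
rng = np.random.default_rng(0)
obs = truth * (1 + 0.03 * rng.standard_normal(nu.size))

popt, _ = curve_fit(two_component, nu, obs, p0=[1e12, 15.0, 1e10, 60.0])
print("fitted temperatures: %.1f K, %.1f K" % (popt[1], popt[3]))
```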

Improving Join Performance for SPARQL Query Processing in the Clouds

  • Choi, Gyu-Jin;Son, Yun-Hee;Lee, Kyu-Chul
    • Journal of KIISE
    • /
    • v.43 no.6
    • /
    • pp.700-709
    • /
    • 2016
  • Recently, with the rapid growth of LOD (Linked Open Data), existing methods based on a single machine have reached their performance limits. Existing solutions improve performance by using distributed frameworks such as MapReduce. However, processing SPARQL queries with the MapReduce framework involves multiple MapReduce jobs, which incur additional costs; the problem of processing unnecessary data also arises. In this study, we propose a method that reduces the number of MapReduce jobs during SPARQL query processing, together with bitmap-based join indexes that minimize the cost of processing unnecessary data.
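
The bitmap join idea can be shown on a toy triple store: each triple pattern's candidate subjects are encoded as a bitmap over a shared dictionary, and a bitwise AND filters out non-joining subjects before any data is shipped. The sketch below is illustrative, not the paper's implementation.

```python
# Illustrative bitmap-based join filter for triple patterns: candidate
# rows are encoded as bitmaps over a shared subject dictionary so that
# non-matching subjects can be discarded early. Tiny made-up triple store.
triples = [("s1", "type", "Person"), ("s2", "type", "Person"),
           ("s1", "worksAt", "LabA"), ("s3", "worksAt", "LabB")]

subjects = sorted({s for s, _, _ in triples})   # shared dictionary
index = {s: i for i, s in enumerate(subjects)}

def bitmap_for(pattern_pred):
    """Bitmap with bit i set iff subject i has the given predicate."""
    bits = 0
    for s, p, _ in triples:
        if p == pattern_pred:
            bits |= 1 << index[s]
    return bits

# Join "?x type Person . ?x worksAt ?y" via bitwise AND of the bitmaps.
joined = bitmap_for("type") & bitmap_for("worksAt")
print([s for s in subjects if joined & (1 << index[s])])  # -> ['s1']
```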

Batch Resizing Policies and Techniques for Fine-Grain Grid Tasks: The Nuts and Bolts

  • Muthuvelu, Nithiapidary;Chai, Ian;Chikkannan, Eswaran;Buyya, Rajkumar
    • Journal of Information Processing Systems
    • /
    • v.7 no.2
    • /
    • pp.299-320
    • /
    • 2011
  • The overhead of processing fine-grain tasks on a grid induces the need for batch processing or task group deployment in order to minimise overall application turnaround time. When deciding the granularity of a batch, the processing requirements of each task should be considered as well as the utilisation constraints of the interconnecting network and the designated resources. However, the dynamic nature of a grid requires the batch size to be adaptable to the latest grid status. In this paper, we describe the policies and the specific techniques involved in the batch resizing process. We explain the nuts and bolts of these techniques in order to maximise the resulting benefits of batch processing. We conduct experiments to determine the nature of the policies and techniques in response to a real grid environment. The techniques are further investigated to highlight the important parameters for obtaining the appropriate task granularity for a grid resource.
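
The batch-resizing policy can be reduced to a simple illustrative rule: group enough tasks that per-batch dispatch overhead stays a small fraction of the batch's processing time, recomputed from the latest observed grid status before each deployment. The figures and formula below are assumptions, not the paper's exact techniques.

```python
# Sketch of adaptive batch resizing for fine-grain grid tasks. The batch
# size is re-derived from the latest observed status so that dispatch
# overhead stays within a budgeted fraction of processing time.
def batch_size(task_cpu_s, dispatch_overhead_s, resource_speed_factor,
               overhead_budget=0.05, max_batch=500):
    """Group enough tasks that per-batch overhead is <= overhead_budget
    of the batch's expected processing time on the chosen resource."""
    per_task_s = task_cpu_s / resource_speed_factor
    # overhead <= budget * n * per_task_s  =>  n >= overhead/(budget*per_task)
    n = int(dispatch_overhead_s / (overhead_budget * per_task_s)) + 1
    return min(max(n, 1), max_batch)

print(batch_size(task_cpu_s=0.2, dispatch_overhead_s=2.0,
                 resource_speed_factor=1.0))   # 201 tasks per batch
# A slower resource raises per-task time, so fewer tasks are needed to
# amortize the same dispatch overhead:
print(batch_size(task_cpu_s=0.2, dispatch_overhead_s=2.0,
                 resource_speed_factor=0.25))  # 51
```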

Adaptive Deadline-aware Scheme (ADAS) for Data Migration between Cloud and Fog Layers

  • Khalid, Adnan;Shahbaz, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1002-1015
    • /
    • 2018
  • The advent of the Internet of Things (IoT), and the evident inadequacy of Cloud networks at managing numerous end nodes, have brought about a paradigm shift that gave birth to Fog computing. Fog computing extends Cloud resources to the edge of the network, closer to the user. Cloud computing has become one of people's essential needs on the Internet, but with the emerging concept of IoT, traditional Clouds seem inadequate: IoT demands extremely low latency, for which Cloud servers that are distant and unknown to the user are unsuitable. With Fog computing, Fog devices installed closer to the user provide immediate storage for frequently needed data. This paper discusses data migration between different storage types, especially among Cloud devices, and then presents a mechanism for migrating data between the Cloud and Fog layers. We call this mechanism the Adaptive Deadline-Aware Scheme (ADAS) for data migration between Cloud and Fog. We demonstrate that latency-sensitive "hot" data can be accessed and processed through the proposed ADAS more efficiently than with a traditional Cloud setup.
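
A deadline-aware placement decision in the spirit of ADAS can be sketched as follows; the latency figures and classification rule are illustrative assumptions rather than the paper's exact scheme.

```python
# Sketch of deadline-aware Cloud/Fog placement: "hot" objects whose access
# deadline cannot be met from the distant cloud are kept on the nearer fog
# layer. All constants are made-up assumptions.
from dataclasses import dataclass

CLOUD_RTT_MS = 120.0  # assumed round-trip to a distant cloud data center
FOG_RTT_MS = 10.0     # assumed round-trip to a nearby fog node

@dataclass
class DataObject:
    name: str
    deadline_ms: float       # latency the consuming application tolerates
    accesses_per_min: float  # observed access frequency ("hotness")

def place(obj, hot_threshold=5.0):
    """Return 'fog' when the cloud cannot meet the deadline, or when the
    object is accessed often enough that fog storage pays off."""
    if obj.deadline_ms < CLOUD_RTT_MS:
        return "fog"
    return "fog" if obj.accesses_per_min >= hot_threshold else "cloud"

for obj in [DataObject("sensor-feed", 50.0, 60.0),
            DataObject("archive-log", 5000.0, 0.1)]:
    print(obj.name, "->", place(obj))  # fog / cloud
```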

AUTOMATIC GENERATION OF BUILDING FOOTPRINTS FROM AIRBORNE LIDAR DATA

  • Lee, Dong-Cheon;Jung, Hyung-Sup;Yom, Jae-Hong;Lim, Sae-Bom;Kim, Jung-Hyun
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.637-641
    • /
    • 2007
  • Airborne LIDAR (Light Detection and Ranging) technology has reached the accuracy required by the mapping professions, and advanced LIDAR systems are becoming increasingly common in various fields of application. LIDAR data constitute an excellent source of information for reconstructing the Earth's surface because they enable rapid, dense, and highly accurate 3D spatial data acquisition. However, organizing LIDAR data and extracting information from them are difficult tasks because the data consist of randomly distributed point clouds and do not provide sufficient semantic information. The main reason for this difficulty is that the data provide only irregularly spaced point coordinates, without topological or relational information among the points. This study introduces an efficient and robust method for the automatic extraction of building footprints from airborne LIDAR data. The proposed method separates ground from non-ground data based on histogram analysis and then rearranges the building boundary points using a convex hull algorithm to extract building footprints. The method was applied to LIDAR data of a heavily built-up area. Experimental results showed that the proposed method is feasible and efficient for automatically producing building layers of large-scale digital maps and for 3D building reconstruction.
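
The two-step pipeline, histogram-based ground separation followed by a convex hull trace of the building boundary, can be sketched on synthetic data; the thresholding rule below is a simple stand-in for the paper's histogram analysis.

```python
# Sketch of the two-step idea: split ground from non-ground returns with
# an elevation-histogram threshold, then trace a building boundary with a
# convex hull. The synthetic point cloud stands in for real LIDAR data.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 100, 500),
                          rng.uniform(0, 100, 500),
                          rng.normal(0.0, 0.2, 500)])  # terrain near 0 m
roof = np.column_stack([rng.uniform(30, 60, 200),
                        rng.uniform(30, 60, 200),
                        rng.normal(8.0, 0.2, 200)])    # roof near 8 m
points = np.vstack([ground, roof])

# Histogram of elevations: the dominant low bin is ground; cut just above
# it (a simple stand-in for the paper's histogram analysis).
counts, edges = np.histogram(points[:, 2], bins=50)
cut = edges[np.argmax(counts) + 1] + 1.0
non_ground = points[points[:, 2] > cut]

# Building footprint as the convex hull of the non-ground points (x, y).
hull = ConvexHull(non_ground[:, :2])
footprint = non_ground[hull.vertices, :2]
print(f"{len(non_ground)} roof returns, "
      f"footprint has {len(footprint)} corners")
```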


A Hierarchical Context Dissemination Framework for Managing Federated Clouds

  • Famaey, Jeroen;Latre, Steven;Strassner, John;Turck, Filip De
    • Journal of Communications and Networks
    • /
    • v.13 no.6
    • /
    • pp.567-582
    • /
    • 2011
  • The growing popularity of the Internet has caused the size and complexity of communications and computing systems to greatly increase in recent years. To alleviate this increased management complexity, novel autonomic management architectures have emerged, in which many automated components manage the network's resources in a distributed fashion. However, to achieve effective collaboration, these management components need to be able to exchange information efficiently and in a timely fashion. In this article, we propose a context dissemination framework that addresses this problem. To achieve scalability, the management components are structured in a hierarchy. The framework facilitates the aggregation and translation of information as it is propagated through the hierarchy. Additionally, by way of semantics, context is filtered based on meaning and is disseminated intelligently according to dynamically changing context requirements. This significantly reduces the exchange of superfluous context and thus further increases scalability. The large size of modern federated cloud computing infrastructures makes the presented context dissemination framework ideally suited to improving their management efficiency and scalability. We identify the specific context requirements for the management of a cloud data center and apply our context dissemination approach to it. Additionally, an extensive evaluation of the framework in a large-scale cloud data center scenario was performed to characterize the benefits of our approach in terms of scalability and reasoning time.
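
The hierarchy-with-filtering idea can be illustrated compactly: each node aggregates its children's context bottom-up, and a parent subscribes only to the fields it needs, so superfluous context is never propagated. The node structure, field names, and sum-based aggregation below are illustrative assumptions.

```python
# Minimal sketch of hierarchical context dissemination with filtering.
class Node:
    def __init__(self, name, local_context=None, children=()):
        self.name, self.children = name, list(children)
        self.local_context = local_context or {}

    def report(self, requested_fields):
        """Aggregate context bottom-up, keeping only requested fields."""
        agg = {k: v for k, v in self.local_context.items()
               if k in requested_fields}
        for child in self.children:
            for k, v in child.report(requested_fields).items():
                # Aggregation here is a simple sum; real policies might
                # average, take maxima, or translate between ontologies.
                agg[k] = agg.get(k, 0) + v
        return agg

rack1 = Node("rack1", {"vms": 12, "cpu_load": 0.7})
rack2 = Node("rack2", {"vms": 8, "cpu_load": 0.4, "disk_errors": 3})
dc = Node("dc", children=[rack1, rack2])
# The data-center manager asks only for what it needs; 'disk_errors'
# is filtered out instead of being propagated up the hierarchy.
print(dc.report({"vms"}))  # {'vms': 20}
```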

An Adaptive Workflow Scheduling Scheme Based on an Estimated Data Processing Rate for Next Generation Sequencing in Cloud Computing

  • Kim, Byungsang;Youn, Chan-Hyun;Park, Yong-Sung;Lee, Yonggyu;Choi, Wan
    • Journal of Information Processing Systems
    • /
    • v.8 no.4
    • /
    • pp.555-566
    • /
    • 2012
  • The cloud environment makes it possible to analyze large data sets in a scalable computing infrastructure. In the bioinformatics field, applications are composed of complex workflow tasks that require huge data storage as well as compute-intensive parallel workloads. Many distributed solutions have been introduced, but they focus on static resource provisioning with a batch-processing scheme in a local computing farm and data storage. For a large-scale workflow system, it is inevitable, and valuable, to outsource all or part of its tasks to public clouds to reduce resource costs. Problems arise, however, from the transfer time of huge datasets as well as from the unbalanced completion times of different problem sizes. In this paper, we propose an adaptive resource-provisioning scheme that includes run-time data distribution and collection services to hide the data transfer time. The proposed scheme optimizes the allocation ratio of computing elements to the different datasets in order to minimize the total makespan under resource constraints. We conducted experiments with a well-known sequence alignment algorithm, and the results showed that the proposed scheme is efficient for the cloud environment.
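
The allocation-ratio optimization can be illustrated with a proportional heuristic: assign computing elements to datasets in proportion to their estimated processing demand so per-dataset completion times even out, which is what minimizing makespan requires. The rates and sizes below are made up, and this proportional rule is a simplification of the paper's scheme.

```python
# Sketch of balancing completion times by allocating nodes to datasets in
# proportion to estimated demand. Rounding can slightly over- or
# under-allocate relative to total_nodes; a real scheduler would repair
# that and also overlap data transfer with computation.
def allocate(dataset_sizes_gb, rate_gb_per_node_hour, total_nodes):
    """Return nodes per dataset and estimated hours for each dataset."""
    demands = [size / rate_gb_per_node_hour for size in dataset_sizes_gb]
    total = sum(demands)
    alloc = [max(1, round(total_nodes * d / total)) for d in demands]
    return alloc, [d / a for d, a in zip(demands, alloc)]

sizes = [120.0, 40.0, 240.0]          # three sequence datasets (GB)
alloc, finish = allocate(sizes, rate_gb_per_node_hour=5.0, total_nodes=20)
print(alloc)                          # -> [6, 2, 12]
print([round(h, 1) for h in finish])  # roughly equal completion times
```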