• Title/Summary/Keyword: Application virtualization

Green Information Systems Research: A Decade in Review and Future Agenda (그린 정보시스템 연구: 과거 10년간 연구 동향 분석 및 향후 과제)

  • Lee, Ha-Bin
    • Informatization Policy
    • /
    • v.27 no.4
    • /
    • pp.3-23
    • /
    • 2020
  • It has been two decades since Green Information Systems first attracted scholars in information systems research. The worldwide surge of sustainability issues naturally led Information Systems scholars to turn their attention to understanding how the use of Information Systems affects our society and environment. This paper reviews studies on Green Information Systems (Green ISs) to evaluate the efforts made in the last decade. Following a systematic approach, 64 articles published in peer-reviewed international journals in the Information Systems and Business & Management disciplines are analyzed to identify research gaps and propose a future research agenda for Green ISs, including the application of psychological theory to the design of Green ISs, energy-efficient IT/IS in response to accelerating virtualization, and the contribution of Green ISs to biodiversity.

A Workflow Execution System for Analyzing Large-scale Astronomy Data on Virtualized Computing Environments

  • Yu, Jung-Lok;Jin, Du-Seok;Yeo, Il-Yeon;Yoon, Hee-Jun
    • International Journal of Contents
    • /
    • v.16 no.4
    • /
    • pp.16-25
    • /
    • 2020
  • The size of observation data in astronomy has been increasing exponentially with the advent of wide-field optical telescopes, which necessitates changes to the way large-scale astronomy data are analyzed. The complexity of analysis tools and the lack of extensibility of computing environments, however, make dealing with the huge volume of observation data difficult and inefficient. To address this problem, this paper proposes a workflow execution system for analyzing large-scale astronomy data efficiently. The proposed system is composed of two parts: 1) a workflow execution manager and its RESTful endpoints that can automate and control data analysis tasks based on workflow templates, and 2) an elastic resource manager as an underlying mechanism that can dynamically add/remove virtualized computing resources (i.e., virtual machines) according to the analysis requests. To realize our workflow execution system, we implement it on a testbed using the OpenStack IaaS (Infrastructure as a Service) toolkit and the HTCondor workload manager. We also perform an exhaustive range of experiments with different resource allocation patterns, system loads, etc. to show the effectiveness of the proposed system. The results show that the resource allocation mechanism works properly according to the number of queued and running tasks, thereby improving resource utilization, and that the workflow execution manager can handle more than 1,000 concurrent requests within a second with reasonable average response times. We finally describe a case study of a data reduction system as an example application of our workflow execution system.
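The core of the elastic resource manager described above is a decision about how many VMs to add or remove given the queue state. The following is a minimal sketch of that idea, not the authors' code; the thresholds, the `tasks_per_vm` capacity figure, and the function name are all assumptions for illustration.

```python
# Sketch of an elastic scaling decision driven by queued and running tasks.

def scale_decision(queued: int, running: int, vms: int,
                   min_vms: int = 1, max_vms: int = 32,
                   tasks_per_vm: int = 4) -> int:
    """Return how many VMs to add (positive) or remove (negative)."""
    # Capacity needed to service running tasks and drain the queue.
    needed = -(-(queued + running) // tasks_per_vm)  # ceiling division
    target = max(min_vms, min(max_vms, needed))
    return target - vms

# Example: 10 queued + 6 running tasks on 2 VMs -> grow the pool by 2.
print(scale_decision(queued=10, running=6, vms=2))
```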

Integrating Resilient Tier N+1 Networks with Distributed Non-Recursive Cloud Model for Cyber-Physical Applications

  • Okafor, Kennedy Chinedu;Longe, Omowunmi Mary
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.7
    • /
    • pp.2257-2285
    • /
    • 2022
  • Cyber-physical systems (CPS) have been growing exponentially due to improved cloud-datacenter infrastructure-as-a-service (CDIaaS). Incremental expandability (scalability), Quality of Service (QoS) performance, and reliability are the current automation focus in healthy Tier 4 CDIaaS. However, stable QoS has yet to be fully addressed in cyber-physical data centers (CP-DCS), and balanced agility and flexibility for application workloads need urgent attention. A resilient, fault-tolerant scheme is needed for CPS routing services, including Pod cluster reliability analytics that meet QoS requirements. Motivated by these concerns, our contributions are fourfold. First, a Distributed Non-Recursive Cloud Model (DNRCM) is proposed to support cyber-physical workloads for remote lab activities. Second, an efficient QoS stability model based on the Routh-Hurwitz criterion is established. Third, the CDIaaS DCN topology is validated for handling large-scale traffic workloads. Network Function Virtualization (NFV) with Floodlight SDN controllers was adopted to implement DNRCM, with a rule base embedded in Open vSwitch engines. Fourth, QoS is evaluated experimentally. Considering the non-recursive queuing delays with logical SDN isolation, a lower queuing delay (19.65%) is observed; without logical isolation, the average queuing delay is 80.34%. Without logical resource isolation, fault tolerance yields 33.55%, while with logical isolation it yields 66.44%. In terms of throughput, DNRCM, recursive BCube, and DCell offered 38.30%, 36.37%, and 25.53%, respectively. Similarly, DNRCM had an improved incremental scalability profile of 40.00%, while BCube and recursive DCell had 33.33% and 26.67%, respectively. In terms of service availability, DNRCM offered 52.10%, compared with recursive BCube and DCell, which yielded 34.72% and 13.18%, respectively. The average delays obtained for DNRCM, recursive BCube, and DCell are 32.81%, 33.44%, and 33.75%, respectively. Finally, workload utilization for DNRCM, recursive BCube, and DCell yielded 50.28%, 27.93%, and 21.79%, respectively.
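The abstract's QoS stability model rests on the Routh-Hurwitz criterion. The sketch below shows the generic form of that test (not the paper's specific characteristic polynomial): it builds the Routh table for a polynomial and checks for sign changes in the first column.

```python
def is_hurwitz_stable(coeffs):
    """coeffs: polynomial coefficients [a_n, ..., a_0], highest degree first.
    Returns True iff all roots lie in the open left half-plane."""
    n = len(coeffs) - 1                      # polynomial degree
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    for r in rows:
        r += [0.0] * (width - len(r))        # pad short rows with zeros
    for _ in range(n - 1):                   # fill the remaining Routh rows
        prev2, prev1 = rows[-2], rows[-1]
        if prev1[0] == 0:                    # zero pivot: degenerate case,
            return False                     # treat as not provably stable
        new = [(prev1[0] * prev2[j + 1] - prev2[0] * prev1[j + 1]) / prev1[0]
               for j in range(width - 1)] + [0.0]
        rows.append(new)
    return all(r[0] > 0 for r in rows)       # no sign change in first column

# s^3 + 2s^2 + 3s + 1 is stable; s^3 - s^2 + 2s + 1 is not.
print(is_hurwitz_stable([1, 2, 3, 1]), is_hurwitz_stable([1, -1, 2, 1]))
```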

Autoscaling Mechanism based on Execution-times for VNFM in NFV Platforms (NFV 플랫폼에서 VNFM의 실행 시간에 기반한 자동 자원 조정 메커니즘)

  • Mehmood, Asif;Diaz Rivera, Javier;Khan, Talha Ahmed;Song, Wang-Cheol
    • KNOM Review
    • /
    • v.22 no.1
    • /
    • pp.1-10
    • /
    • 2019
  • The process of determining the required number of resources depends on the factors being considered, and autoscaling, a critical process in NFV, is one such mechanism that bases this decision on a wide range of factors. As networks shift onto the cloud following the advent of SDN, better resource managers will be required. To address this, we propose a solution that allows VNFMs to autoscale system resources based on factors such as hyperthreading overhead, the number of requests, and the execution times of the virtual network functions. It is well known that hyperthreaded virtual cores cannot fully match the performance of physical cores; moreover, since different core types run at different frequencies, the required number of cores must be calculated accurately and precisely. Platform independence is achieved through a second component, a monitoring microservice that communicates through APIs. Hence, by using our autoscaling application together with the monitoring microservice, we enhance the resource provisioning process to meet the requirements of future networks.
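A plausible form of the core-count estimation the abstract describes is sketched below. The discount factor for hyperthreading (`ht_efficiency`) and the frequency normalization are assumptions for illustration, not the paper's published formula.

```python
import math

def required_physical_cores(requests_per_sec: float,
                            exec_time_sec: float,
                            core_freq_ghz: float,
                            base_freq_ghz: float = 2.0,
                            ht_efficiency: float = 0.3) -> int:
    """Estimate physical cores needed for a VNF.

    ht_efficiency: extra capacity a hyperthreaded sibling contributes
    (assumed ~30%, far less than a full physical core, as the abstract notes).
    """
    # Work demanded per second, normalized to a reference core frequency.
    load = requests_per_sec * exec_time_sec * (base_freq_ghz / core_freq_ghz)
    # Each physical core plus its HT sibling yields (1 + ht_efficiency) cores.
    return math.ceil(load / (1.0 + ht_efficiency))

# Example: 200 req/s, 10 ms execution time per request, on 2.6 GHz cores.
print(required_physical_cores(200, 0.010, 2.6))
```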

A Study on Stabilizing a Network Security Zone Based on the Application of Logical Area to Communication Bandwidth (통신 대역폭 논리영역 적용 기반의 네트워크 보안구간 안정화 연구)

  • Seo, Woo-Seok
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.5
    • /
    • pp.3462-3468
    • /
    • 2015
  • Considering the countless network failures and intrusions that occurred between 2014 and 2015, illegal access intended for attacks through communication lines provided by ISPs (Internet Service Providers) appears to be the source of the problem. As a defense against such network-based attacks, not only stabilization structures for network communication but also various policies, along with the corresponding physical security devices and solutions, have been realized and established. It is therefore time to gather foundational research data for securing network security zones by creating logical areas on communication bandwidth, to suggest tasks for expanding communication lines as another research topic in the network security market, and to recognize that an active communication-bandwidth linkage paradigm is needed as one of the approaches that can realize physical security. Additionally, the data should be confined within visible security structures organized as a defined range of physical information, by re-dividing the communication capacity currently provided by telecommunication operators into subdivided organizational areas and applying logical virtualization of the communication capacity in each divided area. By proposing a network security zone based on the application of logical areas in place of the existing physical structure, this study provides basic data for designing a stable physical network communication structure.
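The subdivision of a physical line's capacity into logical organizational areas could look like the minimal sketch below. The zone names and proportional-weight scheme are purely illustrative assumptions; the paper does not specify an allocation algorithm.

```python
def partition_bandwidth(total_mbps: float, zone_weights: dict) -> dict:
    """Split a physical line's capacity into logical security zones,
    proportionally to per-zone weights."""
    total_weight = sum(zone_weights.values())
    return {zone: total_mbps * w / total_weight
            for zone, w in zone_weights.items()}

# Example: a 1 Gbps line split across three organizational security zones.
print(partition_bandwidth(1000.0, {"dmz": 2, "internal": 5, "guest": 1}))
```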

The Establishment for Technology Development Plan for National Spatial Information Infrastructure Cloud Service (국가 공간정보 인프라의 클라우드 서비스 기술개발 방안 수립)

  • Youn, Junhee;Kim, Changyoon;Moon, Hyonseok
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.3
    • /
    • pp.469-477
    • /
    • 2017
  • Cloud computing is a technology that provides IT resources to various users by means of virtualization. Because management authority over Korean public spatial information is dispersed, newly updated spatial information may not be usable by other organizations. Furthermore, the national budget is wasted because each organization independently implements the same reusable GIS analysis functions. These problems can be solved by applying a cloud service. However, previous research on applying cloud services to the Korean spatial information system has only proposed a direction for technology development; no detailed development plan has been put forward. In this paper, we establish a technology development plan for a national spatial information infrastructure cloud service. First, we derive the implications for setting the technology development goals by analyzing the political and technical environment. Second, critical technology elements needed to achieve those goals are derived from specialists' analysis based on evaluation elements. As a result, thirteen critical technology elements are derived. Finally, thirty-one research activities, which comprise the critical technology elements, are defined. The critical technology elements and research activities derived in this research will be used to generate a technology development road-map.

Research on the Implementation of the Virtual Interface on Multi-mode Mobile Nodes (멀티모드 단말을 위한 가상 인터페이스 구현 연구)

  • Lee, Kyoung-Hee;Lee, Seong-Keun;Rhee, Eun-Jun;Cho, Kyoung-Seob;Lee, Hyun-Woo;Ryu, Won;Hong, Seng-Phil
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.4B
    • /
    • pp.677-686
    • /
    • 2010
  • In this paper, we propose a virtual interface management scheme for multi-mode mobile nodes that supports multiple connections to various access networks in fixed mobile convergence (FMC) networks. The proposed scheme virtualizes multiple physical network interfaces by presenting only the virtual interface to the layers above IP and hiding the physical network interfaces from them. Only one IP address is allocated, to the virtual interface, with no IP allocation to the physical network interfaces. The node therefore keeps the same IP address during vertical handover, enabling seamless handover of real-time multimedia services among heterogeneous access networks. The proposed scheme is implemented on a multi-mode mobile node with multiple network interfaces using NDIS (Network Driver Interface Specifications) libraries. Through a mobility test-bed and a test application for the virtual interface, we evaluate and analyze the performance of the proposed scheme.
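The essence of the scheme, a single stable interface and IP address exposed to upper layers while the binding to a physical NIC changes underneath, can be sketched as follows. The class and method names are illustrative assumptions, not the paper's NDIS-based implementation.

```python
class VirtualInterface:
    """Upper layers see one interface with one IP; the physical NIC binding
    can change at any time without the IP address changing."""

    def __init__(self, ip_addr: str):
        self.ip_addr = ip_addr          # the single IP exposed above IP layer
        self.active_nic = None          # currently bound physical interface

    def bind(self, nic_name: str):
        """Vertical handover: rebind to another NIC; the IP stays unchanged."""
        self.active_nic = nic_name

    def send(self, payload: bytes):
        if self.active_nic is None:
            raise RuntimeError("no physical interface bound")
        print(f"[{self.ip_addr} via {self.active_nic}] {len(payload)} bytes")

vif = VirtualInterface("10.0.0.7")
vif.bind("wlan0")                       # attached to WLAN
vif.send(b"rtp-frame")
vif.bind("wwan0")                       # handover to cellular, same IP
vif.send(b"rtp-frame")                  # real-time session continues
```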

A Technique for Provisioning Virtual Clusters in Real-time and Improving I/O Performance on Computational-Science Simulation Environments (계산과학 시뮬레이션을 위한 실시간 가상 클러스터 생성 및 I/O 성능 향상 기법)

  • Choi, Chanho;Lee, Jongsuk Ruth;Kim, Hangi;Jin, DuSeok;Yu, Jung-lok
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.1
    • /
    • pp.13-18
    • /
    • 2015
  • Computational science simulations have enabled discovery in a broad spectrum of application areas, but their demands on computing resources are irregular and vary over time. The adoption of virtualized high-performance clouds, rather than CPU-centric computing platforms such as supercomputers, is gaining interest mainly due to their ease of use, multi-tenancy, and flexibility. Provisioning a virtual cluster, which consists of many virtual machines, in real time is critical to the successful deployment of virtualized HPC clouds for computational science simulations. However, the cost of concurrently creating many virtual machines when constructing a virtual cluster can be as much as two orders of magnitude worse than expected, and one of the main factors in this bottleneck is the time spent creating the virtual machine images. In this paper, we propose a novel technique that minimizes the creation time of virtual machine images and improves the I/O performance of the provisioned virtual clusters. We also confirm through various sets of experiments that our proposed technique outperforms conventional ones.
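The abstract does not spell out how image-creation time is minimized; a standard way to attack this bottleneck is copy-on-write overlays rather than full image copies. The sketch below illustrates that general technique with qemu-img (an assumption for illustration, not necessarily the authors' exact method); the image paths are hypothetical.

```python
import subprocess

def create_overlay(base_image: str, overlay_path: str) -> None:
    """Create a thin qcow2 overlay backed by a shared base image.

    The overlay is created almost instantly regardless of base image size,
    since blocks are copied from the base only when first written.
    """
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-b", base_image, "-F", "qcow2", overlay_path],
        check=True,
    )

# Example: provision 16 cluster-node disks from one shared base image.
for i in range(16):
    create_overlay("/images/base-centos.qcow2", f"/images/node{i:02d}.qcow2")
```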

A Management for IMS Network Using SDN and SNMP (SDN과 SNMP를 이용한 IMS 네트워크 관리)

  • Yang, Woo-Seok;Kim, Jung-Ho;Lee, Jae-Oh
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.4
    • /
    • pp.694-699
    • /
    • 2017
  • In accordance with the development of information and communications technology, network users must be able to use quality of service (QoS)-based multimedia services easily, so information and communications operators have begun to focus on techniques for providing multimedia services. The IP Multimedia Subsystem (IMS) is a platform for providing multimedia and application services based on the Internet Protocol (IP). The emerging 5G networks are described as having massive capacity and connectivity, adaptability, seamless heterogeneity, and great flexibility, and the explosive growth in 5G network services and devices will cause excessive traffic loads. In this paper, software-defined networking (SDN) is applied as a network virtualization technology to minimize the traffic load, and the Simple Network Management Protocol (SNMP) is used for more efficient network management. To accomplish these purposes, we design a dynamic routing algorithm for the IMS network using SDN and an SNMP private management information base (MIB). Our proposal gives information and communications operators the ability to supply network resources more efficiently.
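One plausible shape for such a dynamic routing algorithm is a shortest-path computation weighted by link utilization, which the paper would obtain from its SNMP private MIB. The sketch below is an assumption-laden illustration: the topology, utilization values, and the congestion cost formula are all invented for the example.

```python
import heapq

def least_loaded_path(graph, src, dst):
    """graph: {node: {neighbor: link utilization in [0, 1)}}.
    Dijkstra with a cost that grows sharply as a link nears saturation."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, util in graph[u].items():
            nd = d + 1.0 / (1.0 - util)       # M/M/1-style congestion cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:                        # walk back from destination
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

net = {"a": {"b": 0.9, "c": 0.2}, "b": {"d": 0.1},
       "c": {"d": 0.3}, "d": {}}
print(least_loaded_path(net, "a", "d"))       # avoids the 90%-loaded a-b link
```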

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times thanks to a tremendous reduction in distribution and inventory costs brought about by the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration, whose importance is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can matter as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article status, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect is more pronounced for more academic tasks.
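The two focal measures defined in the abstract are straightforward to compute from per-editor contribution counts. A minimal sketch follows; the sample edit counts are hypothetical, not the study's data.

```python
def pareto_ratio(contribs):
    """Share of all contributions made by the top 20% of contributors."""
    s = sorted(contribs, reverse=True)
    top = max(1, round(0.2 * len(s)))        # size of the upper 20 percent
    return sum(s[:top]) / sum(s)

def gini(contribs):
    """Gini coefficient of contribution inequality (0 = equal, 1 = maximal)."""
    s = sorted(contribs)
    n = len(s)
    cum = sum((i + 1) * x for i, x in enumerate(s))
    return (2 * cum) / (n * sum(s)) - (n + 1) / n

edits = [120, 45, 30, 8, 5, 3, 2, 2, 1, 1]   # edits per editor (hypothetical)
print(f"Pareto ratio: {pareto_ratio(edits):.2f}, Gini: {gini(edits):.2f}")
```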