• Title/Summary/Keyword: Computing Resource

Dynamic Load Balancing Scheme Based on Resource Reservation for Migration of Agents in Pure P2P Network Environments (순수 P2P 네트워크 환경에서 에이전트 이주를 위한 자원 예약 기반 동적 부하 균형 기법)

  • Kim, Kyung-In; Kim, Young-jin; Eom, Young-Ik
    • The KIPS Transactions:PartA / v.11A no.4 / pp.257-266 / 2004
  • Mobile agents are processes that can be autonomously delegated or transferred among the hosts of a network in order to perform computations on behalf of a user and cooperate with other agents. Mobile agents are currently used in various fields such as electronic commerce, mobile communication, parallel processing, information retrieval, and recovery. In a pure P2P network environment, if mobile agents that require computing resources rashly migrate to other peers without considering those peers' resource capacity, the destination peer's performance may degrade due to a lack of resources. To solve this problem, we propose a resource-reservation-based load balancing scheme that uses an RMA (Resource Management Agent) to monitor the workload information of peers and to decide which agents to migrate and which peers to migrate them to. During the agent migration procedure, if a resource on a specific peer is already reserved, the reservation scheme prevents other mobile agents from allocating that resource.
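A minimal sketch of the reservation idea described in this abstract, assuming a centralized RMA that tracks per-peer capacity; the class, peer names, and demand units are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical RMA that reserves destination resources before agent migration.
class ResourceManagementAgent:
    def __init__(self, peer_capacity):
        # peer_capacity: dict peer_id -> total available resource units
        self.capacity = dict(peer_capacity)
        self.reserved = {peer: 0 for peer in peer_capacity}

    def reserve(self, peer, demand):
        """Reserve `demand` units on `peer`; refuse if already committed."""
        if self.reserved[peer] + demand > self.capacity[peer]:
            return False  # resource already reserved by other agents
        self.reserved[peer] += demand
        return True

    def pick_destination(self, demand):
        """Choose the least-loaded peer that can still satisfy `demand`."""
        candidates = [p for p in self.capacity
                      if self.reserved[p] + demand <= self.capacity[p]]
        if not candidates:
            return None
        return min(candidates, key=lambda p: self.reserved[p] / self.capacity[p])


rma = ResourceManagementAgent({"peerA": 100, "peerB": 60})
dest = rma.pick_destination(demand=40)
if dest and rma.reserve(dest, 40):
    print(f"migrate agent to {dest}")  # the reservation blocks later conflicting agents
```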

Multimedia Resource Management System for the Realtime Data Broadcasting using T-DMB (지상파 DMB에서 실시간 데이타방송을 위한 멀티미디어 자원관리 시스템)

  • Kang, Do-Young; Yeh, Hong-Jin
    • Journal of KIISE:Computing Practices and Letters / v.13 no.5 / pp.311-319 / 2007
  • Today's data broadcasting services provide various multimedia contents to users. This paper proposes a resource management system that controls multimedia resources in order to reduce the receiving time of contents during real-time data broadcasting of multimedia traffic information. To verify the efficiency of the system, the paper introduces real-time traffic information integration and authoring systems based on the MOT (Multimedia Object Transfer) protocol BWS (Broadcast Web Site) service, in which multimedia data are transmitted over T-DMB (Terrestrial Digital Multimedia Broadcasting), and implements a resource management system that controls the creation of multimedia resources in connection with those systems. Since the receiving time of contents created with resources from the resource management system decreased to 1/13 of the original, they are suitable for real-time data broadcasting.

Thread Block Scheduling for GPGPU based on Fine-Grained Resource Utilization (상세 자원 이용률에 기반한 병렬 가속기용 스레드 블록 스케줄링)

  • Bahn, Hyokyung; Cho, Kyungwoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.49-54 / 2022
  • With the recent widespread adoption of general-purpose GPUs (GPGPUs) in cloud systems, maximizing resource utilization through multitasking on GPGPUs has become an important issue. In this article, we show that resource allocation based on classifying workloads as compute-bound or memory-bound is not sufficient with respect to resource utilization, and present a new thread block scheduling policy for GPGPUs that makes use of the fine-grained resource utilization of each workload. Unlike previous approaches, the proposed policy reduces scheduling overhead by separating profiling from scheduling, and maximizes resource utilization by co-locating workloads with different bottleneck resources. Through simulations under various virtual machine scenarios, we show that the proposed policy improves GPGPU throughput by 130.6% on average and by up to 161.4%.
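The co-location idea can be illustrated with a small sketch, assuming per-workload utilization profiles collected offline; the workload names, profile numbers, and greedy pairing rule are assumptions for demonstration, not the authors' simulator.

```python
# Pair GPGPU workloads whose bottleneck resources differ, using fine-grained profiles.
profiles = {
    # workload: fraction of each resource it saturates (from offline profiling)
    "matmul":  {"compute": 0.9, "mem_bw": 0.3},
    "stencil": {"compute": 0.4, "mem_bw": 0.8},
    "reduce":  {"compute": 0.2, "mem_bw": 0.9},
    "conv":    {"compute": 0.8, "mem_bw": 0.4},
}

def bottleneck(profile):
    return max(profile, key=profile.get)

def co_schedule(workloads):
    """Greedily pair workloads with different bottleneck resources."""
    pending = sorted(workloads,
                     key=lambda w: profiles[w][bottleneck(profiles[w])],
                     reverse=True)
    pairs = []
    while pending:
        w = pending.pop(0)
        partner = next((v for v in pending
                        if bottleneck(profiles[v]) != bottleneck(profiles[w])), None)
        if partner:
            pending.remove(partner)
        pairs.append((w, partner))
    return pairs

print(co_schedule(list(profiles)))  # [('matmul', 'reduce'), ('stencil', 'conv')]
```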

GPU Memory Management Technique to Improve the Performance of GPGPU Task of Virtual Machines in RPC-Based GPU Virtualization Environments (RPC 기반 GPU 가상화 환경에서 가상머신의 GPGPU 작업 성능 향상을 위한 GPU 메모리 관리 기법)

  • Kang, Jihun
    • KIPS Transactions on Computer and Communication Systems / v.10 no.5 / pp.123-136 / 2021
  • RPC (Remote Procedure Call)-based Graphics Processing Unit (GPU) virtualization is one of the technologies for sharing GPUs among multiple user virtual machines. However, in a cloud environment, unlike CPUs or memory, general GPUs do not provide a resource isolation mechanism that can limit the resource usage of virtual machines. In particular, in an RPC-based virtualization environment, the GPU tasks of each virtual machine run as separate processes, so the lack of resource isolation causes performance degradation due to resource contention. Moreover, contention for GPU memory intensifies this degradation as the resource demands of the virtual machines grow, and fairness declines because equal performance across virtual machines cannot be guaranteed. This paper analyzes, in an RPC-based GPU virtualization environment, the performance degradation caused by resource contention when the GPU memory requirements of virtual machines exceed the available GPU memory capacity, and proposes a GPU memory management technique to solve this problem. Experiments show that the proposed GPU memory management technique can improve the performance of GPGPU tasks.
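As a hedged sketch of arbitrating GPU memory among VMs whose combined demand exceeds device capacity, the toy arbiter below defers requests that do not fit instead of letting them contend; the queueing policy, sizes, and names are assumptions, not the paper's mechanism.

```python
from collections import deque

class GpuMemoryArbiter:
    def __init__(self, total_mb):
        self.total = total_mb
        self.in_use = {}        # vm_id -> allocated MB
        self.waiting = deque()  # deferred (vm_id, size_mb) requests

    def request(self, vm_id, size_mb):
        if sum(self.in_use.values()) + size_mb <= self.total:
            self.in_use[vm_id] = self.in_use.get(vm_id, 0) + size_mb
            return True
        self.waiting.append((vm_id, size_mb))  # defer instead of thrashing
        return False

    def release(self, vm_id, size_mb):
        self.in_use[vm_id] -= size_mb
        # admit deferred requests that now fit
        while self.waiting:
            vm, size = self.waiting[0]
            if sum(self.in_use.values()) + size > self.total:
                break
            self.waiting.popleft()
            self.in_use[vm] = self.in_use.get(vm, 0) + size


arb = GpuMemoryArbiter(total_mb=8000)
print(arb.request("vm1", 6000))  # True
print(arb.request("vm2", 4000))  # False: deferred until vm1 releases memory
arb.release("vm1", 6000)         # vm2's deferred request is now admitted
print(arb.in_use)                # {'vm1': 0, 'vm2': 4000}
```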

Design of Efficient Edge Computing based on Learning Factors Sharing with Cloud in a Smart Factory Domain (스마트 팩토리 환경에서 클라우드와 학습된 요소 공유 방법 기반의 효율적 엣지 컴퓨팅 설계)

  • Hwang, Zi-on
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.11 / pp.2167-2175 / 2017
  • In recent years, the IoT has been developing rapidly owing to advances in AI, the growing number of connected devices, and high-performance cloud systems. The huge volume of data produced by many devices and sensors is expanding the scope of services such as intelligent diagnostics, recommendation services, and smart monitoring. Previous studies of edge computing have limited the edge to the role of a small server system with high-quality hardware resources. However, the smart factory domain has specialized requirements that call for edge computing: edges must pre-process data, including lightweight filtering, pre-formatting, and merging of group contexts, and must manage regional rules. In this paper, we therefore extract the relevant features and requirements with respect to efficiency and robustness. Our edge design reduces network resource consumption and updates rules and learning models. Moreover, we propose an edge computing architecture based on sharing learned factors with a cloud system in a smart factory.
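A minimal sketch of the edge role described above: filter and merge raw readings locally, and apply rules or learned factors shared by the cloud. The class, thresholds, and field names are illustrative assumptions, not the paper's design.

```python
# Hypothetical edge node: pre-process sensor data and accept cloud-shared rules.
class EdgeNode:
    def __init__(self, rules):
        self.rules = rules  # e.g. {"temp_max": 80.0}

    def update_rules(self, shared_factors):
        """Apply learning factors/rules shared by the cloud."""
        self.rules.update(shared_factors)

    def preprocess(self, readings):
        """Drop out-of-range samples and merge the rest into one group context."""
        kept = [r for r in readings if r["temp"] <= self.rules["temp_max"]]
        if not kept:
            return None
        return {
            "count": len(kept),
            "avg_temp": sum(r["temp"] for r in kept) / len(kept),
        }


edge = EdgeNode(rules={"temp_max": 80.0})
edge.update_rules({"temp_max": 75.0})  # cloud pushes a tightened rule
batch = [{"temp": 70.2}, {"temp": 90.1}, {"temp": 73.4}]
print(edge.preprocess(batch))          # only the merged summary goes upstream
```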

A Research on the Cloud Computing Security Framework (클라우드 컴퓨팅 정보보호 프레임워크에 관한 연구)

  • Kim, Jung-Duk; Lee, Seong-Il
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.6 / pp.1277-1286 / 2013
  • Cloud computing's unique attributes, such as elasticity, rapid provisioning and releasing, resource pooling, multi-tenancy, broad network accessibility, and ubiquity, bring many benefits to cloud adopters (companies and organizations), but they also entail specific security risks associated with the type of cloud adopted and its deployment mode. To minimize those risks, this paper proposes a cloud computing security framework that refers to the strategic alliance model. The framework has a main triangle consisting of cloud threats, security controls, and cloud stakeholders, and is composed of three sides: purposefulness, accountability, and transparent responsibility. The main triangle defines the purpose of risk minimization, the appointment of stakeholders, and the security activities assigned to them, while the three sides are principles of security control in cloud computing and provide direction for deriving seven service packages.

Mobile Device CPU usage based Context-awareness in Mobile Cloud Computing (모바일 클라우드 컴퓨팅에서 상황인지 기반 모바일 장치 CPU사용)

  • Cho, Kyunghee; Jo, Minho; Jeon, Taewoong
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.3 / pp.127-135 / 2015
  • Context-aware mobile cloud computing is a promising new paradigm that improves user experience by analyzing contextual information such as user location, time of day, neighboring devices, and current activity. In this paper, we present a performance study of a context-aware mobile cloud computing system using the Volare middleware. Volare monitors the resources and context of the device and dynamically adapts cloud service requests accordingly, at discovery time or at runtime. This approach allows more resource-efficient and reliable cloud service discovery, as well as significant cost savings at runtime. We also study the performance of context-aware mobile cloud computing under different quality of service (QoS) adaptation policies. Our simulation results show that when the battery level is low, CPU usage is high, and the user cannot maintain the initial QoS, the service cost decreases according to the current adaptation policy. In conclusion, the adaptation policy suggested in this paper may improve user experience by dynamically adapting the service cost to the situation.
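A hedged sketch of such an adaptation policy (not Volare's actual API): when the battery is low and CPU usage is high, the requested QoS level is lowered so the service costs less. Thresholds and cost figures are assumptions.

```python
# Illustrative QoS adaptation based on device battery and CPU context.
def adapt_qos(requested_qos, battery_pct, cpu_usage_pct,
              low_battery=20, high_cpu=80):
    """Return (qos_level, relative_cost) after applying the adaptation policy."""
    if battery_pct < low_battery and cpu_usage_pct > high_cpu:
        qos = max(1, requested_qos - 1)  # degrade one level instead of failing
    else:
        qos = requested_qos
    cost_per_level = {1: 0.5, 2: 1.0, 3: 1.8}
    return qos, cost_per_level[qos]

print(adapt_qos(requested_qos=3, battery_pct=15, cpu_usage_pct=90))  # (2, 1.0)
print(adapt_qos(requested_qos=3, battery_pct=80, cpu_usage_pct=40))  # (3, 1.8)
```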

ACCESS CONTROL MODEL FOR DATA STORED ON CLOUD COMPUTING

  • Mateen, Ahmed; Zhu, Qingsheng; Afsar, Salman; Rehan, Akmal; Mumtaz, Imran; Ahmad, Wasi
    • International Journal of Advanced Culture Technology / v.7 no.4 / pp.208-221 / 2019
  • This research focuses on protecting clients' data in cloud computing, that is, on data storage protection problems and on how to limit unauthenticated access to information by developing an access control model. Existing approaches are reviewed and an access control model is then recommended. Cloud computing can be described as an Internet-based technology with shared, adaptable resources that clients can use as a utility; in essence, software and hardware are delivered as services over the Internet. It is a remarkable technology that has become popular because of its low cost, adaptability, and scalability according to clients' needs. Despite this popularity, many organizations are reluctant to move to cloud computing because of protection problems, particularly the protection of clients' information. Management has expressed concern over information protection, since confidential and sensitive information may be stored by service providers at arbitrary locations around the world. Several access control models are available, yet they do not satisfy the protection obligations of service providers, and the cloud is constantly under attack by hackers, compromising data integrity, availability, and protection. This research presents a model that keeps the requirements of service providers in view and upgrades information protection in terms of integrity, availability, and security. The developed model helps reluctant clients decide to move to the cloud while understanding the uncertainty associated with cloud computing.

A Global Framework for Parallel and Distributed Application with Mobile Objects (이동 객체 기반 병렬 및 분산 응용 수행을 위한 전역 프레임워크)

  • Han, Youn-Hee; Park, Chan-Yeol; Hwang, Chong-Sun; Jeong, Young-Sik
    • Journal of KIISE:Computing Practices and Letters / v.6 no.6 / pp.555-568 / 2000
  • The World Wide Web has become the largest virtual system, almost universal in scope. Recent research has shown that it is effective to utilize idle hosts on the World Wide Web for running applications that require a substantial amount of computation. This novel computing paradigm has been referred to as global computing. In this paper, we propose and implement a mobile object-based global computing framework called Tiger, whose primary goal is to provide novel object-oriented programming libraries that support distribution, dispatching, and migration of objects as well as concurrency among computational activities. The programming libraries give programmers access, location, and migration transparency for distributed and mobile objects. Tiger's second goal is to provide a system supporting the requisites of a global computing environment: scalability, and resource and location management. The Tiger system and its programming libraries allow a programmer to easily develop object-oriented parallel and distributed applications using globally extended computing resources. We also present the performance improvements obtained in experiments with computation-intensive workloads such as parallel fractal image processing and genetic-neuro-fuzzy algorithms.
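A toy sketch of the kind of mobile-object API this abstract describes, i.e. dispatching an object to a host, migrating it, and invoking it without the caller knowing its location; the class and method names are hypothetical, not Tiger's actual library interface.

```python
# Hypothetical mobile-object wrapper illustrating dispatch/migrate/invoke.
class MobileObject:
    def __init__(self, state):
        self.state = state
        self.host = "local"

    def dispatch(self, host):
        """Place the object on a remote host (location transparency for callers)."""
        self.host = host
        return self

    def migrate(self, new_host):
        """Move the object, carrying its state, to another idle host."""
        print(f"moving state {self.state!r} from {self.host} to {new_host}")
        self.host = new_host

    def invoke(self, func, *args):
        """Callers need not know where the object currently resides."""
        return func(self.state, *args)


worker = MobileObject(state={"pixels": 1024}).dispatch("idle-host-17")
worker.migrate("idle-host-42")
print(worker.invoke(lambda s, scale: s["pixels"] * scale, 4))  # 4096
```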

EXECUTION TIME AND POWER CONSUMPTION OPTIMIZATION in FOG COMPUTING ENVIRONMENT

  • Alghamdi, Anwar; Alzahrani, Ahmed; Thayananthan, Vijey
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.137-142 / 2021
  • The Internet of Things (IoT) paradigm is at the forefront of present and future research activities. The huge amount of sensing data from IoT devices that needs to be processed is increasing dramatically in volume, variety, and velocity. In response, cloud computing has been used to handle the challenges of collecting, storing, and processing these jobs. Fog computing is a model that supports cloud computing by performing pre-processing jobs close to the end user, realizing low latency, lower power consumption on the cloud side, and high scalability. However, some resources in a fog computing network may not be suitable for certain kinds of jobs, or the number of requests may exceed capacity. It is therefore more efficient to reduce the number of jobs sent to the cloud: when other fog resources are idle, it is better to federate jobs to them rather than forward them to the cloud server. This issue clearly affects the performance of the fog environment when dealing with big data applications or applications that are sensitive to processing time. This research aims to build a fog topology job scheduler (FTJS) that schedules the incoming jobs generated by IoT devices and discovers all available fog nodes along with their capabilities. A fog topology job placement algorithm is also introduced to deploy jobs onto appropriate resources in the network effectively. Finally, compared with the state-of-the-art first come, first served (FCFS) scheduling technique, the overall execution time is reduced significantly, by approximately 20%, and the energy consumption on the cloud side is reduced by 18%.
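A simplified sketch of the placement idea (not the paper's FTJS code): place each job on a fog node whose capability fits it, prefer the idlest fitting node, and fall back to the cloud only when no fog node fits. Node names, capacities, and job fields are assumptions.

```python
# Toy fog job placement: capability-aware fog nodes with a cloud fallback.
fog_nodes = [
    {"name": "fog-1", "cpu_free": 4, "accepts": {"sensor", "video"}},
    {"name": "fog-2", "cpu_free": 2, "accepts": {"sensor"}},
]

def place_job(job):
    """Return the node name chosen for `job` ({'type': ..., 'cpu': ...})."""
    candidates = [n for n in fog_nodes
                  if job["type"] in n["accepts"] and n["cpu_free"] >= job["cpu"]]
    if not candidates:
        return "cloud"  # nothing in the fog layer fits this job
    best = max(candidates, key=lambda n: n["cpu_free"])  # federate to the idlest node
    best["cpu_free"] -= job["cpu"]
    return best["name"]

jobs = [{"type": "video", "cpu": 3}, {"type": "sensor", "cpu": 2},
        {"type": "video", "cpu": 2}]
print([place_job(j) for j in jobs])  # ['fog-1', 'fog-2', 'cloud']
```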