• Title/Summary/Keyword: Computing amount

An Offloading Scheduling Strategy with Minimized Power Overhead for Internet of Vehicles Based on Mobile Edge Computing

  • He, Bo; Li, Tianzhang
    • Journal of Information Processing Systems / v.17 no.3 / pp.489-504 / 2021
  • By distributing computing tasks among devices at the edge of the network, edge computing uses virtualization, distributed computing, and parallel computing technologies to let users dynamically obtain computing power, storage space, and other services as needed. Applying edge computing architectures to the Internet of Vehicles can effectively ease the tension between computation-heavy, delay-sensitive vehicle applications and the limited, unevenly distributed resources of vehicles. This paper proposes a predictive offloading strategy based on the MEC load state, which reduces both the delay of returning results over the RSU multi-hop backhaul and the queuing time of tasks at MEC servers. First, a delay factor and an energy consumption factor are introduced according to task characteristics, and the costs of local execution and of offloading to a MEC server are defined. Then, from the vehicle's perspective, a delay preference factor and an energy consumption preference factor are introduced to define the cost of executing each computing task; whether a task is offloaded to the MEC server is decided by comparing the two costs. Furthermore, a mathematical optimization model minimizing power overhead is constructed under delay and power consumption constraints, and the simulated annealing algorithm is used to solve it. Simulation results show that the strategy effectively reduces system power consumption by shortening task execution delay, meeting the delay and energy requirements while ensuring the lowest cost.
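
To make the cost comparison concrete, here is a minimal sketch of a local-vs-offload cost model and a simulated-annealing search over the offloading decisions; the task parameters, rates, and preference weights below are invented for illustration and are not the paper's actual formulation.

```python
import math
import random

# Illustrative task set: (CPU cycles, input size in bits); all values assumed.
tasks = [(2e9, 4e6), (1e9, 1e6), (3e9, 8e6), (1.5e9, 2e6)]

F_LOCAL, F_MEC = 1e9, 8e9      # CPU frequencies (Hz), assumed
RATE = 10e6                    # uplink rate (bit/s), assumed
P_TX, KAPPA = 0.5, 1e-27       # transmit power (W), switched-capacitance constant
W_DELAY, W_ENERGY = 0.5, 0.5   # delay / energy preference factors

def local_cost(cycles, _bits):
    delay = cycles / F_LOCAL
    energy = KAPPA * cycles * F_LOCAL ** 2
    return W_DELAY * delay + W_ENERGY * energy

def offload_cost(cycles, bits, queue_wait=0.1):
    delay = bits / RATE + queue_wait + cycles / F_MEC
    energy = P_TX * bits / RATE            # the device only pays for transmission
    return W_DELAY * delay + W_ENERGY * energy

def total_cost(x):
    return sum(offload_cost(c, b) if off else local_cost(c, b)
               for off, (c, b) in zip(x, tasks))

def simulated_annealing(temp=1.0, cooling=0.95, steps=500):
    x = [random.random() < 0.5 for _ in tasks]     # random initial decisions
    best, best_cost = x[:], total_cost(x)
    for _ in range(steps):
        y = x[:]
        y[random.randrange(len(tasks))] ^= True    # flip one offloading decision
        delta = total_cost(y) - total_cost(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = y                                  # accept better or, rarely, worse
            if total_cost(x) < best_cost:
                best, best_cost = x[:], total_cost(x)
        temp *= cooling
    return best, best_cost

decision, cost = simulated_annealing()
print("offload flags:", decision, "-> total cost:", round(cost, 4))
```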

Design of a New Algorithm by Using Standard Deviation Techniques in Multi-Edge Computing with IoT Application

  • Hasnain A. Almashhadani; Xiaoheng Deng; Osamah R. Al-Hwaidi; Sarmad T. Abdul-Samad; Mohammed M. Ibrahm; Suhaib N. Abdul Latif
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.4 / pp.1147-1161 / 2023
  • The Internet of Things (IoT) requires a new processing model that allows scalability in cloud computing while reducing the time delay caused by data transmission within a network. Such a model can be achieved by using resources closer to the user, i.e., by relying on edge computing (EC). The amount of IoT data also grows with the number of IoT devices. However, building such a flexible model within a heterogeneous environment is difficult in terms of resources. Moreover, the increasing demand for IoT services necessitates shortening delay and response time through effective load balancing. IoT devices are expected to generate huge amounts of data within short periods. They will be dynamically deployed, and IoT services will be provided by EC devices or cloud servers so as to minimize resource costs while meeting the latency and quality of service (QoS) constraints of IoT applications at the endpoints. EC is an emerging solution to the data processing problem in IoT. In this study, we improve the load balancing process and distribute resources fairly among tasks, which in turn improves QoS in the cloud and reduces processing time and, consequently, response time.
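
The abstract does not spell out the algorithm, so the following is only a plausible sketch of standard-deviation-based load balancing: each task is placed on the edge node that minimizes the standard deviation of the resulting per-node loads. The node loads and task costs are invented.

```python
import statistics

# Illustrative current load (e.g., queued work in ms) on each edge node; assumed.
loads = [120.0, 80.0, 150.0, 60.0]

def assign(task_cost):
    """Place the task on the node that leaves loads most even, i.e. minimizes
    the standard deviation of the resulting per-node load vector."""
    best_node, best_sd = None, float("inf")
    for i in range(len(loads)):
        trial = loads[:]
        trial[i] += task_cost
        sd = statistics.pstdev(trial)
        if sd < best_sd:
            best_node, best_sd = i, sd
    loads[best_node] += task_cost
    return best_node, best_sd

for cost in [30, 50, 20, 40]:
    node, sd = assign(cost)
    print(f"task({cost}ms) -> node {node}, load std dev now {sd:.1f}")
```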

Applying Workload Shaping Toward Green Cloud Computing

  • Kim, Woongsup
    • International Journal of Advanced Smart Convergence / v.1 no.2 / pp.12-15 / 2012
  • Energy costs for operating and cooling computing resources in Cloud infrastructure have increased to the point where they can surpass hardware purchasing costs. Reducing energy consumption can therefore save a significant amount of management cost. One major approach is removing hardware over-provisioning. In this paper, we propose a technique that facilitates power saving by reducing resource over-provisioning through virtualization technology. To this end, we use dynamic workload shaping to reschedule and redistribute job requests with overall power consumption in mind. We present our approach to shaping workloads dynamically and distributing them over virtual and physical machines via virtualization. We generated synthetic workload data and evaluated the approach both in simulation and in a real implementation. Our simulation results demonstrate that the approach outperforms the same system without workload shaping.
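
As a rough illustration of dynamic workload shaping, the sketch below defers slack-tolerant jobs into less loaded time slots so that fewer physical machines must stay powered on. The slot count, capacity, and job list are assumptions, not the paper's workload model.

```python
# A minimal workload-shaping sketch, assuming jobs may be delayed within a
# slack window; capacities and the job list are illustrative.
SLOTS = 6
PM_CAPACITY = 100          # capacity of one physical machine per time slot

# (arrival slot, load units, slack: how many slots the job may be deferred)
jobs = [(0, 80, 0), (0, 60, 2), (1, 90, 1), (2, 50, 3), (2, 70, 0), (3, 40, 2)]

usage = [0] * SLOTS
for arrival, load, slack in sorted(jobs):
    # shape the workload: start each job in the least-loaded feasible slot
    window = range(arrival, min(arrival + slack, SLOTS - 1) + 1)
    slot = min(window, key=lambda s: usage[s])
    usage[slot] += load

# physical machines that must stay powered on per slot (ceiling division)
machines = [-(-u // PM_CAPACITY) for u in usage]
print("load per slot: ", usage)
print("powered-on PMs:", machines)
```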

Development of Web-based High Throughput Computing Environment and Its Applications (웹기반 대용량 계산환경 구축 및 응용사례)

  • Jeong, Min-Joong; Kim, Byung-Sang
    • Proceedings of the Computational Structural Engineering Institute Conference / 2007.04a / pp.719-724 / 2007
  • Many engineering problems require a large amount of computing resources for iterative simulations involving many parameters and input files. To address this, this paper proposes an e-Science based computational system. The system exploits Grid computing technology to establish an integrated web service environment that supports distributed high-throughput computational simulations and remote execution. The proposed system provides an easy-to-use parametric-study service with real-time monitoring of computations. To verify the usability of the proposed system, two applications are introduced: the Aerospace Integrated Research System (e-AIRS), which adapts the proposed computational system to solve CFD problems, and the design and optimization of three-dimensional protein structures.
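
The parametric-study pattern such a system supports can be sketched as a parameter sweep farmed out in parallel; the simulate function and parameter grid below are illustrative stand-ins, not the e-AIRS interface.

```python
# A minimal parametric-sweep sketch, assuming an arbitrary simulation function;
# the parameter names and the fake objective are illustrative only.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(params):
    angle, mach = params
    # stand-in for a CFD run: return a fake "drag" value per parameter point
    return (angle, mach, (angle - 2.0) ** 2 + (mach - 0.8) ** 2)

grid = list(product([0.0, 1.0, 2.0, 3.0],      # angle of attack (deg)
                    [0.6, 0.7, 0.8, 0.9]))     # Mach number

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:        # farm the runs out in parallel
        for angle, mach, drag in pool.map(simulate, grid):
            print(f"angle={angle:.1f} mach={mach:.2f} -> drag={drag:.3f}")
```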

A Study for Parallel Computing Efficiency Comparing Numerical Solutions of Battery Pack (배터리 팩 수치해석 해의 비교를 통한 병렬연산 효율성 연구)

  • Kim, Kwang Sun; Jang, Kyung Min
    • Journal of the Semiconductor & Display Technology / v.15 no.2 / pp.20-25 / 2016
  • Parallel cluster systems are known to be powerful tools for numerically solving complex physical phenomena. The numerical analysis of a large Li-ion battery pack, which involves complex physical phenomena, requires a large amount of computing time. In this study, numerical analyses were conducted to compare the computing efficiency of a single workstation and a parallel cluster system, both with multicore CPUs. The results show that the parallel cluster system ran about 80 times faster than the single workstation on the same battery pack model, and the performance of the cluster system increased linearly with the number of CPU cores.
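
For reference, speedup and parallel efficiency are computed as in the sketch below; the timings are illustrative stand-ins, not the paper's measurements (8000 s / 100 s happens to reproduce the quoted 80x figure).

```python
# Standard definitions of speedup and parallel efficiency; all timings assumed.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, cores):
    return speedup(t_serial, t_parallel) / cores

t1 = 8000.0                          # single-workstation wall time (s), assumed
for cores, tp in [(16, 900.0), (64, 250.0), (128, 100.0)]:
    s = speedup(t1, tp)
    e = efficiency(t1, tp, cores)
    print(f"{cores:4d} cores: speedup {s:6.1f}x, efficiency {e:.2f}")
```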

A Privacy-preserving and Energy-efficient Offloading Algorithm based on Lyapunov Optimization

  • Chen, Lu; Tang, Hongbo; Zhao, Yu; You, Wei; Wang, Kai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2490-2506 / 2022
  • In Mobile Edge Computing (MEC), attackers can infer and mine sensitive user information by eavesdropping on the wireless channel state and observing offloading usage patterns, leading to user privacy leakage. To solve this problem, this paper proposes a Privacy-preserving and Energy-efficient Offloading Algorithm (PEOA) based on Lyapunov optimization. The method first builds a continuous Markov process offloading model with a buffer queue strategy, then defines the amount of privacy leaked by the offloading usage pattern over the wireless channel. Finally, by introducing Lyapunov optimization, the infinite-horizon problem of minimizing average energy consumption under privacy constraints across continuous state transitions is transformed into a per-timeslot minimization problem, which reduces algorithmic complexity and helps obtain the optimal solution while maintaining low energy consumption. The experimental results show that, compared with other methods, PEOA keeps the amount of privacy accumulated in the system near zero while sustaining low average energy costs. This makes it difficult for attackers to infer sensitive user information from offloading usage patterns, effectively protecting user privacy and safety.
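
The per-timeslot transformation described here is the standard drift-plus-penalty recipe of Lyapunov optimization. The sketch below shows the generic pattern with a virtual privacy queue, not the paper's exact PEOA formulation; all numbers are invented.

```python
# Virtual "privacy" queue Z and weight V trade privacy leakage against energy;
# a per-slot rule replaces the long-run average problem. Numbers are invented.
V, EPSILON, SLOTS = 1.0, 0.1, 20
Z = 0.0                                   # privacy-accumulation queue, kept near 0

def candidate_actions():
    # (energy cost, privacy leaked) per option: cheap-but-leaky vs costly-but-private
    return [(0.2, 0.3), (0.6, 0.05)]

for t in range(SLOTS):
    # drift-plus-penalty: minimize V*energy + Z*privacy in each timeslot
    energy, leak = min(candidate_actions(), key=lambda a: V * a[0] + Z * a[1])
    Z = max(Z + leak - EPSILON, 0.0)      # queue update pushes leakage toward EPSILON
    print(f"slot {t:2d}: energy={energy:.2f} leak={leak:.2f} Z={Z:.2f}")
```

Running it shows the queue building up under the cheap, leaky action until the Z-weighted term forces the private action, after which the two alternate around the equilibrium.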

An Efficient VM-Level Scaling Scheme in an IaaS Cloud Computing System: A Queueing Theory Approach

  • Lee, Doo Ho
    • International Journal of Contents / v.13 no.2 / pp.29-34 / 2017
  • Cloud computing is becoming an effective and efficient way to integrate computing resources and services. Through centralized management of resources and services, cloud computing delivers hosted services over the internet, such that access to shared hardware, software, applications, information, and all other resources is elastically provided to the consumer on demand. The main enabling technology for cloud computing is virtualization. Virtualization software creates temporary, simulated, or extended versions of computing and network resources. The objectives of virtualization are: first, to fully utilize shared resources through partitioning and time-sharing; second, to centralize resource management; third, to enhance cloud data center agility and provide the scalability and elasticity required for on-demand capability; fourth, to improve testing and the running of software diagnostics on different operating platforms; and fifth, to improve the portability of applications and workload migration capabilities. One of the key features of cloud computing is elasticity: users can create and remove virtual computing resources dynamically as demand changes, but deciding on the right amount of resources is not easy. Indeed, properly provisioning resources to applications is an important issue in IaaS cloud computing. Most web applications face large, fluctuating volumes of requests. In predictable situations, resources can be provisioned in advance through capacity-planning techniques, but for unplanned, spiky loads it is desirable to scale resources automatically. Such auto-scaling adjusts the resources allocated to an application based on its need at any given time, freeing the user from the burden of deciding how many resources are necessary each time. In this work, we propose an analytical and efficient VM-level scaling scheme by modeling each VM in a data center as an M/M/1 processor-sharing queue. Our proposed VM-level scaling scheme is validated via a numerical experiment.
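
Because the mean sojourn time of an M/M/1 processor-sharing queue is 1/(μ − λ), a simple sizing rule falls out: pick the fewest VMs such that each VM's share of the arrival stream meets a response-time target. The sketch below illustrates this idea under assumed rates and target; the paper's actual scaling scheme may differ in detail.

```python
import math

# Sizing from the M/M/1(-PS) mean sojourn time T = 1/(mu - lam):
# require 1/(mu - lam/n) <= T_target, i.e. n >= lam / (mu - 1/T_target).
def vms_needed(arrival_rate, service_rate, t_target):
    """Smallest n keeping each VM stable and within the response-time target."""
    if service_rate <= 1.0 / t_target:
        raise ValueError("a single VM can never meet this target")
    return max(1, math.ceil(arrival_rate / (service_rate - 1.0 / t_target)))

MU = 50.0          # service rate per VM (req/s), assumed
T_TARGET = 0.05    # target mean response time (s), assumed

for lam in [100, 400, 900, 1600]:        # offered load (req/s)
    print(f"lambda={lam:5d} req/s -> {vms_needed(lam, MU, T_TARGET)} VMs")
```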

Performance Comparison and Optimal Selection of Computing Techniques for Corridor Surveillance (회랑감시를 위한 컴퓨팅 기법의 성능 비교와 최적 선택 연구)

  • Gyeong-rae Jo; Seok-min Hong; Won-hyuck Choi
    • Journal of Advanced Navigation Technology / v.27 no.6 / pp.770-775 / 2023
  • Recently, as the amount of digital data increases exponentially, the importance of data processing systems is being emphasized, and their selection and construction are becoming more important. In this study, the performance of cloud computing (CC), edge computing (EC), and UAV-based intelligent edge computing (UEC) was compared as a way to address this problem, and the characteristics, strengths, and weaknesses of each method were analyzed. In particular, this study focused on real-time, large-volume data processing situations such as corridor surveillance. In the experiments, a specific scenario was assumed and a penalty was applied to the infrastructure, which made it possible to evaluate performance in realistic situations more accurately. The effectiveness and limitations of each computing method were thereby clarified, supporting more effective system selection.

A Study on the Security Framework for IoT Services based on Cloud and Fog Computing (클라우드와 포그 컴퓨팅 기반 IoT 서비스를 위한 보안 프레임워크 연구)

  • Shin, Minjeong; Kim, Sungun
    • Journal of Korea Multimedia Society / v.20 no.12 / pp.1928-1939 / 2017
  • Fog computing is an extension of the cloud computing paradigm that brings ubiquitous services to applications on the many connected devices of the IoT (Internet of Things). In general, accessing a large number of IoT devices through the cloud alone wastes a huge amount of bandwidth and lowers work efficiency, so the fog paradigm is placed between the IoT devices and the cloud. A network architecture based on combined cloud and fog computing exposes security and privacy issues in many aspects. Moreover, many IoT devices connect at the fog layer and generate large volumes of data, so lightweight and efficient security mechanisms are needed; an inappropriate encryption or authentication algorithm, for example, causes a huge bandwidth loss. In this paper, we consider issues related to data encryption and authentication mechanisms in the network architecture for cloud and fog-based M2M (Machine to Machine) IoT services, including trusted encryption and authentication algorithms and a key generation method. The contribution of this paper is to provide efficient security mechanisms for the proposed service architecture. We implemented the envisaged conceptual security check mechanisms and verified their performance.
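
As one example of the kind of lightweight authenticated encryption such an architecture needs, the sketch below uses AES-GCM from the Python cryptography package; the random in-memory key is a stand-in for whatever key-generation method the paper proposes.

```python
# Device-to-fog message protection with AES-GCM (authenticated encryption);
# the key handling here is illustrative, not the paper's key-generation scheme.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # stand-in for a shared session key
aead = AESGCM(key)

reading = b'{"sensor":"temp-07","value":21.4}'
header = b"device=temp-07;ts=1700000000"    # authenticated but not encrypted

nonce = os.urandom(12)                      # 96-bit nonce, unique per message
ciphertext = aead.encrypt(nonce, reading, header)

# the fog node verifies integrity and decrypts in one step;
# a tampered header or ciphertext raises InvalidTag
plaintext = aead.decrypt(nonce, ciphertext, header)
assert plaintext == reading
```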

Implementation of Deep Learning-based Label Inspection System Applicable to Edge Computing Environments (엣지 컴퓨팅 환경에서 적용 가능한 딥러닝 기반 라벨 검사 시스템 구현)

  • Bae, Ju-Won; Han, Byung-Gil
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.2 / pp.77-83 / 2022
  • In this paper, a two-stage object detection approach is proposed to implement a deep learning-based label inspection system in edge computing environments. Since the label printed on a product during the production process contains important information about the product, it is important to check that the label information is correct. The proposed system uses a lightweight deep learning model that can run on low-performance edge computing devices, and the two-stage object detection approach is applied to compensate for its relatively low accuracy. The approach consists of two object detection networks, the Label Area Detection Network and the Character Detection Network: the former finds the label area in the product image, and the latter detects the characters within that area. With this approach, characters can be detected precisely even with lightweight models. The SF-YOLO model applied in the proposed system is a YOLO-based lightweight object detection network designed for edge computing devices. It showed up to 2 times faster processing and a considerable improvement in accuracy compared with other YOLO-based lightweight models such as YOLOv3-tiny and YOLOv4-tiny, and because its computational load is low, it can easily be applied in edge computing environments.
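
Structurally, the two-stage pipeline can be sketched as below; the detector functions are stubs standing in for the SF-YOLO networks, and the boxes and dummy image are invented for illustration.

```python
# Two-stage label inspection: stage one finds the label region, stage two reads
# characters inside the crop. Stubs stand in for the actual detection networks.
def detect_label_area(image):
    """Stage 1 stub: return one label bounding box as (x, y, w, h)."""
    return (40, 60, 200, 80)

def detect_characters(crop):
    """Stage 2 stub: return character boxes relative to the crop."""
    return [("A", (5, 10, 12, 20)), ("7", (22, 10, 12, 20))]

def inspect_label(image):
    x, y, w, h = detect_label_area(image)
    crop = [row[x:x + w] for row in image[y:y + h]]      # crop the label region
    results = []
    for char, (cx, cy, cw, ch) in detect_characters(crop):
        # map character boxes back into full-image coordinates
        results.append((char, (x + cx, y + cy, cw, ch)))
    return results

image = [[0] * 320 for _ in range(240)]                  # dummy grayscale frame
for char, box in inspect_label(image):
    print(char, box)
```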