• Title/Summary/Keyword: task scheduling

Long-Term Container Allocation via Optimized Task Scheduling Through Deep Learning (OTS-DL) And High-Level Security

  • Muthakshi S; Mahesh K
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.4, pp.1258-1275, 2023
  • Cloud computing has transformed the traditional way of providing services, and service providers are responsible for managing the allocation of resources. Selecting suitable containers and bandwidth for job scheduling remains a challenging task for these providers, and many existing systems have introduced algorithms for resource allocation. To overcome these challenges, the proposed system introduces an Optimized Task Scheduling Algorithm with Deep Learning (OTS-DL). When a job is assigned to a Cloud Service Provider (CSP), containers are allocated automatically. The article segregates containers into 'Long-Term Container (LTC)' and 'Short-Term Container (STC)' classes for resource allocation. The Optimized Task Scheduling Algorithm maximizes resource utilization by first inquiring into micro-task and macro-task dependencies; the bottleneck task is then chosen and acted upon accordingly. The system further applies deep learning (DL) to implement the progressive steps of job scheduling in the cloud and, to withstand container attacks and errors, formulates a Container Convergence (fault tolerance) scheme with high-level security. The results demonstrate that the optimization algorithm is effective for complete resource allocation and for solving the large-scale optimization problem of resource allocation and its security issues.
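
The abstract names LTC/STC segregation and bottleneck-first ordering but gives no concrete rules. The minimal Python sketch below shows one plausible reading; the runtime cutoff and the dependent-count bottleneck score are assumptions for illustration, not the OTS-DL paper's actual criteria.

```python
# Hypothetical sketch: split tasks into long-/short-term containers and
# run the bottleneck task first. The cutoff and scoring are assumptions,
# not the OTS-DL paper's rules.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    est_runtime_s: float                              # estimated runtime
    dependents: list = field(default_factory=list)    # downstream tasks

def classify_container(task: Task, long_term_cutoff_s: float = 3600.0) -> str:
    """Assign a task to a Long-Term (LTC) or Short-Term (STC) container."""
    return "LTC" if task.est_runtime_s >= long_term_cutoff_s else "STC"

def bottleneck_first(tasks: list) -> list:
    """Order tasks so the one blocking the most dependents runs first."""
    return sorted(tasks, key=lambda t: (len(t.dependents), t.est_runtime_s),
                  reverse=True)

if __name__ == "__main__":
    a = Task("ingest", 120.0)
    b = Task("train", 7200.0)
    a.dependents = [b]        # 'train' waits on 'ingest'
    for t in bottleneck_first([a, b]):
        print(t.name, classify_container(t))
```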

Deep Learning Based Security Model for Cloud based Task Scheduling

  • Devi, Karuppiah; Paulraj, D.; Muthusenthil, Balasubramanian
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.9, pp.3663-3679, 2020
  • Scheduling plays a dynamic role in cloud computing, both in generating and in efficiently distributing the resources of each task. The principal goal of scheduling is to limit resource starvation and to guarantee fairness among the parties using the resources. Because the demand for resources fluctuates dynamically, prearranging resources is a challenging task, and many task-scheduling approaches have been used in the cloud-computing environment. Security is also one of the core issues in distributed computing. We have designed a deep learning-based security model for scheduling tasks in cloud computing and implemented it using the CloudSim 3.0 simulator, written in Java. The results are verified from different perspectives, such as response time with and without security factors, makespan, cost, CPU utilization, I/O utilization, memory utilization, and execution time, and are compared with the Round Robin (RR) and Weighted Round Robin (WRR) algorithms.
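
For context, the two baselines the paper compares against, Round Robin (RR) and Weighted Round Robin (WRR), can be illustrated in a few lines; the VM names and weights below are made-up example values, not the paper's configuration.

```python
# Minimal illustration of the RR and WRR baseline schedulers. VM weights
# are invented example values.
from itertools import cycle

def round_robin(tasks, vms):
    """Assign tasks to VMs in strict rotation."""
    vm_cycle = cycle(vms)
    return {task: next(vm_cycle) for task in tasks}

def weighted_round_robin(tasks, vm_weights):
    """Repeat each VM in the rotation proportionally to its weight."""
    expanded = [vm for vm, w in vm_weights.items() for _ in range(w)]
    vm_cycle = cycle(expanded)
    return {task: next(vm_cycle) for task in tasks}

if __name__ == "__main__":
    tasks = [f"t{i}" for i in range(6)]
    print(round_robin(tasks, ["vm0", "vm1"]))
    print(weighted_round_robin(tasks, {"vm0": 2, "vm1": 1}))
```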

Task Scheduling in Fog Computing - Classification, Review, Challenges and Future Directions

  • Alsadie, Deafallah
    • International Journal of Computer Science & Network Security, v.22 no.4, pp.89-100, 2022
  • With advancements in Internet of Things (IoT) technology and cloud computing, billions of physical devices have been interconnected for sharing and collecting data in different applications. Despite many advancements, some latency-sensitive real-world applications are not feasible due to the existing constraints of IoT devices and the distance between the cloud and IoT devices. To address the issues of latency-sensitive applications, fog computing has been developed, which makes computing and storage resources available at the edge of the network, near the IoT devices. However, fog computing suffers from many limitations, such as heterogeneity and constrained storage, processing capability, and memory. It therefore requires an adequate task scheduling method for utilizing computing resources optimally at the fog layer. This work presents a comprehensive review of task scheduling methods in fog computing. It analyses the methods developed for fog computing environments along multiple dimensions and compares them to highlight their advantages and disadvantages. Finally, it presents promising research directions for fellow researchers in the fog computing environment.

Enhanced Technique for Performance in Real Time Systems

  • Kim, Myung Jun
    • Journal of Information Technology Services, v.16 no.3, pp.103-111, 2017
  • Real-time scheduling is a key research area in high performance computing and has been a source of challenging problems. A periodic task is an infinite sequence of task instances, where each job of a task arrives at a regular period. The RMS (Rate Monotonic Scheduling) algorithm has the advantage of a strong theoretical foundation and holds out the promise of reducing the need for exhaustive testing of schedules. Many real-time systems built in the past based their scheduling on the Cyclic Executive Model because it produces predictable schedules that facilitate exhaustive testing. In this work we propose a hybrid scheduling method that combines features of both scheduling approaches. The original rate monotonic scheduling algorithm did not consider uniform sampling tasks in real-time systems; we enumerate the issues that arise when RMS is applied in our hybrid method and derive the scheduling bound for hard real-time systems that include uniform sampling tasks. The suggested hybrid algorithm turns out to have advantages from the point of view of the real-time system designer and is particularly useful in the context of large critical systems whose hard real-time tasks must be guaranteed.
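
For context, the classical Liu and Layland schedulability bound underlying RMS, U <= n(2^(1/n) - 1), can be checked directly; the paper derives a different bound for task sets that include uniform sampling tasks, and the task set below is an invented example.

```python
# Classical Liu & Layland RMS test for context: a periodic task set is
# schedulable under RMS if total utilization <= n * (2**(1/n) - 1).
# The paper's own bound for uniform sampling tasks differs; this task
# set is an invented example.

def rms_schedulable(tasks):
    """tasks: list of (worst_case_exec_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

if __name__ == "__main__":
    ok, u, b = rms_schedulable([(1, 4), (2, 8), (1, 10)])
    print(f"U = {u:.3f}, bound = {b:.3f}, schedulable: {ok}")
```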

Modified TDS (Task Duplicated based Scheduling) Scheme Optimizing Task Execution Time

  • Jang, Sei-Ie; Kim, Sung-Chun
    • Journal of KIISE: Computer Systems and Theory, v.27 no.6, pp.549-557, 2000
  • A Distributed Memory Machine (DMM) is necessary for the effective computation of data that is complicated and very large. Task scheduling reduces the communication time among tasks in order to reduce the total execution time of an application program, and is very important for improving DMM performance. The Task Duplicated based Scheduling (TDS) method improves execution time by reducing the communication time of tasks, using a clustering method that schedules tasks with large communication times on the same processor. However, TDS cannot optimize the communication time between a task sending data and a task receiving it. Hence, this paper proposes a new method that solves this problem. The Modified Task Duplicated based Scheduling (MTDS) method approximately optimizes the communication time between sending and receiving tasks by checking an optimality condition, minimizing task execution time by reducing the communication time among tasks. System modeling shows that the task execution time of MTDS is about 70% faster than that of TDS in the best case and the same as TDS in the worst case, demonstrating that MTDS is better than TDS.
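
The core trade-off behind task duplication can be sketched briefly: the parent's output reaches the receiving task either over the network or by re-running (duplicating) the parent on the receiver's processor. The simplified cost model below is illustrative and is not the TDS/MTDS papers' exact optimality condition.

```python
# Illustrative sketch of the task-duplication trade-off at the heart of
# TDS/MTDS. This simplified model is not the papers' optimality condition.

def child_input_ready(parent_ready: float, parent_exec: float,
                      comm_time: float, local_free: float) -> float:
    """Earliest availability of the parent's output at the child task.

    Option A: wait for the data from the remote copy of the parent.
    Option B: duplicate the parent locally, starting once the child's
    processor is free (local_free).
    """
    via_network = parent_ready + parent_exec + comm_time
    via_duplicate = max(parent_ready, local_free) + parent_exec
    return min(via_network, via_duplicate)

if __name__ == "__main__":
    # Duplication wins here: 5 s of communication vs. a 1 s local wait.
    print(child_input_ready(parent_ready=0.0, parent_exec=2.0,
                            comm_time=5.0, local_free=1.0))  # -> 3.0
```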

Emotion-aware Task Scheduling for Autonomous Vehicles in Software-defined Edge Networks

  • Sun, Mengmeng; Zhang, Lianming; Mei, Jing; Dong, Pingping
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.11, pp.3523-3543, 2022
  • Autonomous vehicles are gradually being regarded as the mainstream trend of future development of the automobile industry. Autonomous driving networks generate many intensive and delay-sensitive computing tasks. The storage space, computing power, and battery capacity of autonomous vehicle terminals cannot meet the resource requirements of the tasks. In this paper, we focus on the task scheduling problem of autonomous driving in software-defined edge networks. By analyzing the intensive and delay-sensitive computing tasks of autonomous vehicles, we propose an emotion model that is related to task urgency and changes with execution time and propose an optimal base station (BS) task scheduling (OBSTS) algorithm. Task sentiment is an important factor that changes with the length of time that computing tasks with different urgency levels remain in the queue. The algorithm uses task sentiment as a performance indicator to measure task scheduling. Experimental results show that the OBSTS algorithm can more effectively meet the intensive and delay-sensitive requirements of vehicle terminals for network resources and improve user service experience.
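
A minimal sketch of the abstract's idea that a task's "sentiment" grows with its time in the queue, scaled by urgency; the linear form and the rate constant below are assumptions for illustration, not the paper's OBSTS model.

```python
# Minimal sketch: a task's scheduling score ("sentiment") rises with its
# queue waiting time, scaled by urgency. The linear form and the rate
# constant are assumptions, not the paper's OBSTS model.

def sentiment(urgency: float, waited_s: float, rate: float = 0.1) -> float:
    """Higher urgency and longer waits both raise the score."""
    return urgency * (1.0 + rate * waited_s)

def pick_next(queue, now):
    """queue: list of (enqueue_time, urgency, task_id); pick max score."""
    return max(queue, key=lambda item: sentiment(item[1], now - item[0]))

if __name__ == "__main__":
    q = [(90.0, 0.2, "map-update"), (98.0, 0.9, "obstacle-detect")]
    print(pick_next(q, now=100.0))  # the urgent task wins despite waiting less
```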

Energy Aware Scheduling of Aperiodic Real-Time Tasks on Multiprocessor Systems

  • Anne, Naveen; Muthukumar, Venkatesan
    • Journal of Computing Science and Engineering, v.7 no.1, pp.30-43, 2013
  • Multicore and multiprocessor systems with dynamic voltage scaling architectures are being used as one solution to the growing needs of high-performance applications under low power constraints. An important aspect that has propelled this solution is effective task/application scheduling and mapping for multiprocessor systems. This work proposes an energy-aware, offline, probability-based unified scheduling and mapping algorithm for multiprocessor systems that minimizes the number of processors used, maximizes processor utilization, and optimizes the energy consumption of the system. The proposed algorithm is implemented, simulated, and evaluated with synthetic task graphs, and compared with classical scheduling algorithms on the number of processors required, processor utilization, and the energy the processors consume executing the application task graphs.
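
As background for the dynamic voltage scaling these systems rely on, dynamic energy per task scales roughly with V squared times the cycle count, so lowering voltage (with frequency) saves energy when deadlines still hold. The constants below are illustrative, not values from the paper.

```python
# Background sketch of the dynamic-voltage-scaling trade-off: dynamic
# energy scales roughly with V^2 per cycle. Constants are illustrative.

def dynamic_energy_j(c_eff_farads: float, volt_v: float, cycles: float) -> float:
    """E = C_eff * V^2 * cycles (frequency * runtime = cycle count)."""
    return c_eff_farads * volt_v ** 2 * cycles

if __name__ == "__main__":
    cycles = 1e9
    print(dynamic_energy_j(1e-9, 1.2, cycles))  # full voltage: 1.44 J
    print(dynamic_energy_j(1e-9, 0.9, cycles))  # scaled voltage: 0.81 J
```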

Energy-Aware Task Scheduling for Multiprocessors using Dynamic Voltage Scaling and Power Shutdown

  • Kim, Hyun-Jin; Hong, Hye-Jeong; Kim, Hong-Sik; Kang, Sung-Ho
    • Journal of the Institute of Electronics Engineers of Korea SD, v.46 no.7, pp.22-28, 2009
  • As multiprocessors have been widely adopted in embedded systems, task computation energy should be minimized using the low-power techniques the multiprocessors support. This paper proposes an energy-aware task scheduling algorithm that adopts both dynamic voltage scaling and power shutdown in multiprocessor environments. Considering the timing and energy overhead of power shutdown, the proposed algorithm performs iterative task assignment and task ordering; iterative priority-based task scheduling is adopted to obtain the solution with the minimum total energy consumption. Total energy consumption is calculated using a linear programming model and the threshold time of power shutdown. Experimental results for standard task graphs based on real applications were analyzed with respect to resource and timing limitations to maximize energy savings, and the proposed energy-aware task scheduling provided meaningful improvements over existing priority-based task scheduling approaches.
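
The shutdown break-even idea the abstract alludes to can be sketched directly: powering a core down during an idle gap pays off only when the gap exceeds a threshold covering the shutdown/wakeup overhead. All numbers below are illustrative, not the paper's measured values.

```python
# Sketch of the shutdown break-even rule: turning a core off during an
# idle gap saves energy only if the gap exceeds a threshold covering
# the shutdown/wakeup energy overhead. Numbers are illustrative.

def shutdown_threshold_s(idle_power_w: float, overhead_j: float) -> float:
    """Idle duration above which shutting down saves energy."""
    return overhead_j / idle_power_w

def idle_gap_energy_j(gap_s: float, idle_power_w: float, overhead_j: float) -> float:
    """Energy for an idle gap: stay idle, or shut down (assumed ~0 W off)."""
    return min(gap_s * idle_power_w, overhead_j)

if __name__ == "__main__":
    print(shutdown_threshold_s(idle_power_w=0.5, overhead_j=2.0))  # 4.0 s
    print(idle_gap_energy_j(10.0, 0.5, 2.0))  # shutdown wins: 2.0 J vs 5.0 J
```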

A Methodology for Task placement and Scheduling Based on Virtual Machines

  • Chen, Xiaojun; Zhang, Jing; Li, Junhuai
    • KSII Transactions on Internet and Information Systems (TIIS), v.5 no.9, pp.1544-1572, 2011
  • Task placement and scheduling have traditionally been studied in terms of resource utilization, application throughput, and application execution latency and starvation; more recently, studies have focused on application scalability and performance. This paper studies a task-centered methodology for placement and scheduling based on virtual machines, aimed at improving system performance and dynamic adaptability in parallel computing for application development and deployment. For parallel applications with no real-time constraints, we describe a feature model and give a formal description of four layers of task placement and scheduling. To place tasks into the different layers of a virtual computing system, we take the performance of the four layers as the objective function of the placement and scheduling model, and take the designer's personal preference and the application's scalability during development and deployment as its constraints. After discussing the workflow of task placement and scheduling based on virtual machines, we design an algorithm, TPVM, to work out the optimal scheme of the model, and an algorithm, TEVM, to execute the tasks across the four layers. Experiments validate the effectiveness of the time estimation method and the feasibility and rationality of the algorithms, showing that they outperform four other algorithms. The results indicate that the presented methodology offers useful guidance for improving the efficiency of virtual computing systems.

Multi-factor Evolution for Large-scale Multi-objective Cloud Task Scheduling

  • Tianhao Zhao; Linjie Wu; Di Wu; Jianwei Li; Zhihua Cui
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.4, pp.1100-1122, 2023
  • Scheduling user-submitted cloud tasks to the appropriate virtual machine (VM) is critical for cloud providers. However, as the demand for cloud resources from user tasks continues to grow, current evolutionary algorithms (EAs) cannot find optimal solutions to large-scale cloud task scheduling problems. In this paper, we first construct a large-scale multi-objective cloud task problem with time and cost objective functions. Second, we propose a multi-objective optimization algorithm based on multi-factor optimization (MFO) to solve the established problem. The algorithm decomposes the large-scale optimization problem into multiple subproblems, reducing its computational burden, and the MFO strategy provides a parallel evolutionary paradigm in which multiple subpopulations exchange implicit knowledge. Finally, simulation experiments and comparisons are performed on a large-scale task scheduling test set on the CloudSim platform. Experimental results show that our algorithm obtains the best scheduling solution while maintaining good objective function values compared with other optimization algorithms.
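
A hedged sketch of the two objective functions the abstract names (time and cost) for a task-to-VM assignment, using the usual textbook makespan and cost forms; the paper's exact definitions may differ, and all numbers below are invented.

```python
# Hedged sketch of time (makespan) and cost objectives for a task-to-VM
# assignment. Textbook forms; the paper's definitions may differ.

def makespan(assignment, task_len, vm_mips):
    """Finish time of the busiest VM; assignment maps task -> vm."""
    load = {}
    for task, vm in assignment.items():
        load[vm] = load.get(vm, 0.0) + task_len[task] / vm_mips[vm]
    return max(load.values())

def total_cost(assignment, task_len, vm_mips, vm_price):
    """Sum over tasks of execution time on the chosen VM times its price."""
    return sum(task_len[t] / vm_mips[v] * vm_price[v]
               for t, v in assignment.items())

if __name__ == "__main__":
    lens = {"t0": 4000, "t1": 8000}      # task lengths in MI (invented)
    mips = {"vm0": 1000, "vm1": 2000}    # VM speeds (invented)
    price = {"vm0": 0.02, "vm1": 0.05}   # price per second (invented)
    a = {"t0": "vm0", "t1": "vm1"}
    print(makespan(a, lens, mips), total_cost(a, lens, mips, price))
```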