• Title/Summary/Keyword: Multiple Servers

Search Results: 172

The Clustering Method Of Central Control System In New Distribution Automation System (배전자동화시스템 중앙제어장치 이중화 적용방안)

  • Cho, Nam-Hun;Ha, Bok-Nam;Lee, Jung-Ho;Lim, Seong-Il
    • Proceedings of the KIEE Conference
    • /
    • 1999.07c
    • /
    • pp.1120-1122
    • /
    • 1999
  • This paper introduces a clustering method for the Central Control System in the New Distribution Automation System. There are three primary benefits to using clustering: improved availability, easier manageability, and more cost-effective scalability. Availability: clustering can automatically detect the failure of an application or server and quickly restart it on a surviving server, so clients experience only a momentary pause in service. Manageability: clustering lets administrators quickly inspect the status of all cluster resources and easily move workloads onto different servers within a cluster. Scalability: applications can use the clustering services through the MSCS Application Programming Interface (API) to perform dynamic load balancing and scale across multiple servers within a cluster. (A minimal failover sketch follows this entry.)

  • PDF
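
A minimal sketch of the availability mechanism described above, assuming a generic heartbeat-based failover loop rather than the actual MSCS API; the class, timeout value, and method names are hypothetical illustrations.

```python
import time

HEARTBEAT_TIMEOUT = 5.0   # seconds without a heartbeat before a node is declared failed (assumed)

class ClusterMonitor:
    """Toy failover monitor: detect a failed node and restart its workload on a surviving node."""

    def __init__(self, nodes):
        self.last_heartbeat = {n: time.time() for n in nodes}
        self.workloads = {n: [] for n in nodes}     # applications currently hosted on each node

    def heartbeat(self, node):
        self.last_heartbeat[node] = time.time()

    def failover_check(self):
        now = time.time()
        alive = [n for n, t in self.last_heartbeat.items() if now - t < HEARTBEAT_TIMEOUT]
        failed = [n for n in self.last_heartbeat if n not in alive]
        for node in failed:
            apps = self.workloads.pop(node, [])
            self.last_heartbeat.pop(node, None)
            if not alive:
                break                                # no surviving node left to take over
            for app in apps:
                # restart each application on the least-loaded surviving node
                target = min(alive, key=lambda n: len(self.workloads[n]))
                self.workloads[target].append(app)
                print(f"restarted {app} (was on {node}) on {target}")
```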

An approximation of the M/M/s system where customers demand random number of servers (고객(顧客)이 임의수(任意數)의 Server 를 원하는 M/M/s system 의 개산법(槪算法))

  • Kim, Seong-Sik
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.7 no.1
    • /
    • pp.5-11
    • /
    • 1981
  • For numerical implementation, the exact solution method for the M/M/s system in which customers demand multiple servers [2] reveals limitations when the system has a large number of servers or customer types, owing to the huge matrices involved in the calculations. This paper offers an approximation scheme for such cases. Capitalizing on the characteristics of the system's service rate curve, the method approximates the service rate as a piecewise linear function. With the service rates $\mu(n)$ obtained from the linear function for each number of customers n (n = 0, 1, 2, $\ldots$), steady-state probabilities and measures of performance are found by treating the system as an ordinary M/M/s system. The scheme performs well when the traffic intensity of the system is below about 0.8. Some numerical examples are presented. (A numerical sketch of this idea follows this entry.)

  • PDF
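
A minimal numerical sketch of the approximation idea, assuming a birth-death chain with arrival rate lam and a state-dependent aggregate service rate mu(n); the particular piecewise-linear curve and parameter values below are illustrative, not the ones fitted in the paper.

```python
import numpy as np

def piecewise_linear_mu(n, s, mu):
    """Approximate aggregate service rate: ramps up linearly until s servers are busy, then saturates."""
    return mu * min(n, s)   # simplest piecewise-linear curve; the paper fits its own shape

def steady_state(lam, s, mu, n_max=200):
    """Unnormalised birth-death probabilities p_n = p_0 * prod_k lam / mu(k), then normalise."""
    p = np.zeros(n_max + 1)
    p[0] = 1.0
    for n in range(1, n_max + 1):
        p[n] = p[n - 1] * lam / piecewise_linear_mu(n, s, mu)
    p /= p.sum()
    return p

if __name__ == "__main__":
    lam, s, mu = 6.0, 10, 1.0                        # traffic intensity lam/(s*mu) = 0.6 < 0.8
    p = steady_state(lam, s, mu)
    L = sum(n * pn for n, pn in enumerate(p))        # mean number of customers in the system
    print(f"P(empty) = {p[0]:.4f}, mean customers L = {L:.2f}")
```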

A Design Problem of a Two-Stage Cyclic Queueing Network (두 단계로 구성된 순환대기네트워크의 설계)

  • Kim Sung-Chul
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.31 no.1
    • /
    • pp.1-13
    • /
    • 2006
  • In this paper we consider a design problem for a cyclic queueing network with two stages, each with a local buffer of limited capacity. Based on the theory of reversibility and the product-form solution, we derive the throughput function of the network as the key performance measure to maximize. Two cases are considered. When each stage consists of a single server, an optimal allocation of a given buffer capacity and workload between the stages, as well as the optimal number of customers, is identified by exploiting the properties of the throughput function. When each stage consists of multiple servers, the optimal policy developed for the single-server case no longer holds, and an algorithm is developed that allocates, with a small number of computations, a given number of servers, the buffer capacity, the total workload, and the total number of customers. The differences between the optimal policies in the two cases and the implications of the results are also discussed. The results can be applied to support the design of certain manufacturing and computer/communication systems.
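
A minimal sketch of the single-server, product-form setting: with N customers circulating between two stages with service rates mu1 and mu2, and buffers large enough that blocking never occurs (the paper treats limited buffers explicitly), the network reduces to a birth-death chain whose throughput can be read off directly. The parameter values are illustrative only.

```python
def cyclic_throughput(mu1, mu2, n_customers):
    """Throughput of a two-stage cyclic network with single servers and no blocking.

    State n = number of customers at stage 1; p_n is proportional to (mu2/mu1)**n.
    Throughput = mu1 * P(stage 1 is busy).
    """
    rho = mu2 / mu1
    weights = [rho ** n for n in range(n_customers + 1)]
    p0 = weights[0] / sum(weights)          # probability that stage 1 is idle
    return mu1 * (1.0 - p0)

if __name__ == "__main__":
    # Throughput grows with the customer population but saturates at min(mu1, mu2).
    for n in (1, 2, 5, 10, 20):
        print(n, round(cyclic_throughput(mu1=1.0, mu2=0.8, n_customers=n), 4))
```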

A Study on the MMORPG Server Architecture Applying with Arithmetic Server (연산서버를 적용한 MMORPG 게임서버에 관한 연구)

  • Bae, Sung-Gill;Kim, Hye-Young
    • Journal of Korea Game Society
    • /
    • v.13 no.2
    • /
    • pp.39-48
    • /
    • 2013
  • In MMORPGs (Massively Multi-player Online Role-Playing Games), a large number of players actively interact with one another in a virtual world, so MMORPGs must be able to quickly process real-time access requests and job requests from numerous users. A key challenge is that the workload of the game server increases as the number of users grows. To address this problem, many developers adopt distributed server architectures that use dynamic map partitioning and load balancing according to server function. Most MMORPG servers therefore partition the virtual world into zones, each running on multiple game servers. These methods cause players to move frequently between game servers, which imposes high overhead for data updates. In this paper, we propose a new architecture that employs an arithmetic server dedicated to data operations. This architecture enables the existing game servers to process more access and job requests by reducing their load. Through mathematical modeling and experimental results, we show that our scheme yields higher efficiency than existing ones.
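
A minimal sketch of the idea of offloading data operations from game servers to a dedicated arithmetic server; the queue-based interface and the operation names below are hypothetical and not the paper's actual implementation.

```python
import queue
import threading

class ArithmeticServer:
    """Dedicated server that performs data operations so game servers only route requests."""

    def __init__(self):
        self.jobs = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, op, payload, reply):
        self.jobs.put((op, payload, reply))

    def _worker(self):
        while True:
            op, payload, reply = self.jobs.get()
            if op == "damage":                       # hypothetical data operation
                result = max(0, payload["hp"] - payload["attack"] + payload["defense"])
            elif op == "exp_gain":                   # hypothetical data operation
                result = payload["exp"] + payload["monster_level"] * 10
            else:
                result = None
            reply.put(result)

class GameServer:
    """Game server that forwards heavy data operations instead of computing them itself."""

    def __init__(self, arithmetic_server):
        self.arith = arithmetic_server

    def handle_attack(self, hp, attack, defense):
        reply = queue.Queue()
        self.arith.submit("damage", {"hp": hp, "attack": attack, "defense": defense}, reply)
        return reply.get(timeout=1.0)

if __name__ == "__main__":
    gs = GameServer(ArithmeticServer())
    print("remaining hp:", gs.handle_attack(hp=100, attack=30, defense=5))
```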

Migration Agent for Seamless Virtual Environment System in Cloud Computing Network (클라우드 컴퓨팅 네트워크에서 Seamless 가상 환경 시스템 구축을 위한 마이그레이션 에이전트)

  • Won, Dong Hyun;An, Dong Un
    • Smart Media Journal
    • /
    • v.8 no.3
    • /
    • pp.41-46
    • /
    • 2019
  • In an MMORPG, a typical application of virtual environment systems, players commonly want to play in a more realistic environment. However, it is very difficult to provide a latency-free virtual environment to a large user base, mainly because the environment must be configured on multiple servers rather than a single server and data must be shared among servers when users move from one region to another. Continuously experiencing response delays during information synchronization between servers greatly degrades immersion. To solve this problem, the response delay occurring during synchronization between servers must be minimized. In this paper, we propose a Migration Agent for efficient information synchronization between the field servers that provide the virtual environment, minimizing the response delay between a Field Server and a PC (Player Character), and we implement it in a cloud computing network. In the proposed system, the CPU utilization of the field servers increased by 6-13%, and the response time decreased by 5-10 seconds compared with the existing system for 70,000-90,000 PCs.
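
A minimal sketch of a migration agent that pre-stages a player's state so the target field server does not have to wait for a blocking synchronization when the player crosses a region boundary; the class and method names are hypothetical, not the paper's implementation.

```python
class MigrationAgent:
    """Caches player state between field servers to hide synchronization latency."""

    def __init__(self):
        self.cache = {}                              # player_id -> latest known state

    def sync(self, player_id, state):
        # Field servers push state updates continuously, not only at the boundary.
        self.cache[player_id] = state

    def migrate(self, player_id, source, target):
        # The target server starts from the cached state immediately;
        # the authoritative hand-off from the source completes afterwards.
        state = self.cache.get(player_id)
        if state is None:
            state = source.export_state(player_id)   # fallback: blocking transfer
        target.import_state(player_id, state)
        source.release(player_id)

class FieldServer:
    def __init__(self, name):
        self.name, self.players = name, {}

    def export_state(self, player_id):
        return self.players[player_id]

    def import_state(self, player_id, state):
        self.players[player_id] = state

    def release(self, player_id):
        self.players.pop(player_id, None)

if __name__ == "__main__":
    agent, src, dst = MigrationAgent(), FieldServer("field-A"), FieldServer("field-B")
    src.players["p1"] = {"x": 10, "y": 20, "hp": 100}
    agent.sync("p1", src.players["p1"])              # continuous background sync
    agent.migrate("p1", src, dst)                    # boundary crossing without a blocking transfer
    print(dst.players["p1"])
```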

A New Conference Information Data Model in SIP based Distributed Conference Architecture (SIP 기반 분산형 컨퍼런스 구조에서의 새로운 컨퍼런스 정보 데이터 모델)

  • Jang, Choon-Seo;Lee, Ky-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.5
    • /
    • pp.85-91
    • /
    • 2009
  • The centralized conference architecture has limited scalability because performance degrades as the number of conference participants increases. To solve this problem, several distributed conference architectures have been studied recently, in which new conference servers are added dynamically to the conference environment. In this paper, we propose a new conference information data model that can be used in these distributed conference architectures. In the proposed model, several components have been added for exchanging conference information between the primary conference server and multiple secondary conference servers. We also propose a procedure for exchanging conference information between these servers. The management of conference information and the SIP (Session Initiation Protocol) notifications to all conference participants can then be distributed across these servers, so the load on the primary conference server is reduced. The performance of the proposed model has been evaluated by experiments.
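
A minimal sketch of the kind of information such a data model carries between a primary conference server and its secondary servers; the field names below are illustrative stand-ins, not the actual XML schema elements proposed in the paper.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SecondaryServerInfo:
    """Per-secondary-server entry carried in the conference information document."""
    uri: str                  # SIP URI of the secondary conference server
    user_count: int = 0       # participants currently handled by this server

@dataclass
class ConferenceInfo:
    """Simplified conference information document for a distributed conference."""
    conference_uri: str
    primary_server: str
    secondaries: List[SecondaryServerInfo] = field(default_factory=list)

    def least_loaded_secondary(self) -> Optional[SecondaryServerInfo]:
        # The primary can use this to decide where to place a new participant
        # and which server should send SIP NOTIFY messages to its own users.
        return min(self.secondaries, key=lambda s: s.user_count, default=None)

if __name__ == "__main__":
    conf = ConferenceInfo(
        conference_uri="sip:conf123@example.com",
        primary_server="sip:primary.example.com",
        secondaries=[SecondaryServerInfo("sip:s1.example.com", 40),
                     SecondaryServerInfo("sip:s2.example.com", 25)],
    )
    print(conf.least_loaded_secondary().uri)
```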

Collaborative Inference for Deep Neural Networks in Edge Environments

  • Meizhao Liu;Yingcheng Gu;Sen Dong;Liu Wei;Kai Liu;Yuting Yan;Yu Song;Huanyu Cheng;Lei Tang;Sheng Zhang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.7
    • /
    • pp.1749-1773
    • /
    • 2024
  • Recent advances in deep neural networks (DNNs) have greatly improved the accuracy and universality of various intelligent applications, at the expense of increasing model size and computational demand. Since the resources of end devices are often too limited to deploy a complete DNN model, offloading DNN inference tasks to cloud servers is a common approach to bridging this gap. However, due to the limited bandwidth of the WAN and the long distance between end devices and cloud servers, this approach may incur significant data transmission latency. Therefore, device-edge collaborative inference has emerged as a promising paradigm to accelerate DNN inference, in which DNN models are partitioned to be executed sequentially on both end devices and edge servers. Nevertheless, collaborative inference in heterogeneous edge environments with multiple edge servers, end devices, and DNN tasks has been overlooked in previous research. To fill this gap, we investigate the optimization problem of collaborative inference in a heterogeneous system and propose CIS, a collaborative inference scheme that jointly combines DNN partitioning, task offloading, and scheduling to reduce the average weighted inference latency. CIS decomposes the problem into three parts to achieve the optimal average weighted inference latency. In addition, we build a prototype that implements CIS and conduct extensive experiments to demonstrate the scheme's effectiveness and efficiency. Experiments show that CIS reduces the average weighted inference latency by 29% to 71% compared with four existing schemes.
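
A minimal sketch of the partition-point idea underlying device-edge collaborative inference: for each candidate split of a layered DNN, estimate device compute time, transmission time of the intermediate tensor, and edge compute time, and keep the split with the smallest weighted latency. The latency model and all numbers are illustrative assumptions, not the CIS formulation, which additionally handles offloading and scheduling across multiple edge servers.

```python
def best_partition(device_ms, edge_ms, inter_kb, bandwidth_kb_per_s, weight=1.0):
    """device_ms[i], edge_ms[i]: per-layer latency on the device / the edge server.
    inter_kb[k]: size of the data crossing split k (inter_kb[0] = raw input size).
    Split k means layers 0..k-1 run on the device and layers k.. run on the edge."""
    n = len(device_ms)
    best = (float("inf"), -1)
    for k in range(n + 1):
        t_device = sum(device_ms[:k])
        t_tx = inter_kb[k] / bandwidth_kb_per_s * 1000 if k < n else 0.0
        t_edge = sum(edge_ms[k:])
        best = min(best, (weight * (t_device + t_tx + t_edge), k))
    return best

if __name__ == "__main__":
    device = [12, 30, 45, 45, 8]           # ms per layer on the end device (assumed profile)
    edge   = [2, 5, 7, 7, 1]               # ms per layer on an edge server (assumed profile)
    sizes  = [600, 150, 80, 40, 20, 4]     # KB of data crossing each potential split
    latency, split = best_partition(device, edge, sizes, bandwidth_kb_per_s=2000)
    print(f"best split after layer {split}, estimated latency {latency:.1f} ms")
```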

Mutual-Backup Architecture of SIP-Servers in Wireless Backbone based Networks (무선 백본 기반 통신망을 위한 상호 보완 SIP 서버 배치 구조)

  • Kim, Ki-Hun;Lee, Sung-Hyung;Kim, Jae-Hyun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.1
    • /
    • pp.32-39
    • /
    • 2015
  • Voice communications over wireless backbone-based networks are evolving into packet-switched VoIP systems. In such networks, a call processing scheme is required to manage subscribers and establish connections between them. A VoIP service for these systems requires reliable subscriber management and connection establishment, but conventional call processing schemes based on a centralized server lack reliability. A mutual-backup architecture of SIP servers is therefore needed to ensure efficient subscriber management and reliable VoIP call processing, and the synchronization and call processing schemes must change along with the architecture. In this paper, a mutual-backup architecture of SIP servers is proposed for wireless backbone based networks, together with a message format for synchronization and information exchange between SIP servers. This paper also proposes an FSM scheme for fast call processing in unreliable networks that probes multiple servers at a time. The performance analysis shows that the mutual-backup server architecture achieves higher call processing success rates than the conventional centralized server architecture. The FSM scheme also provides shorter call processing times than conventional SIP, and the time does not increase even as the number of SIP servers in the network grows.
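
A minimal sketch of the idea of a client state machine that probes several SIP servers at once and proceeds with whichever responds; the states, timer, and stubbed probe are hypothetical illustrations, not the paper's FSM or real SIP messaging.

```python
import random
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    PROBING = auto()        # requests sent to all known servers in parallel
    REGISTERED = auto()
    FAILED = auto()

class SipClientFSM:
    def __init__(self, servers, probe_timeout=2.0):
        self.servers = servers
        self.probe_timeout = probe_timeout           # assumed per-probe timeout
        self.state = State.IDLE
        self.active_server = None

    def _probe(self, server):
        # Stub: a real client would send a SIP request here and wait for a response.
        return random.random() < 0.7

    def register(self):
        self.state = State.PROBING
        responders = [s for s in self.servers if self._probe(s)]
        if responders:
            self.active_server = responders[0]       # first responder becomes the active server
            self.state = State.REGISTERED
        else:
            self.state = State.FAILED
        return self.state

if __name__ == "__main__":
    fsm = SipClientFSM(["sip:a.example.com", "sip:b.example.com", "sip:c.example.com"])
    print(fsm.register(), fsm.active_server)
```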

Design and Implementation of IP Video Wall System for Large-scale Video Monitoring in Smart City Environments (스마트 시티 환경에서 대규모 영상 모니터링을 위한 IP 비디오 월 시스템의 설계 및 구현)

  • Yang, Sun-Jin;Park, Jae-Pyo;Yang, Seung-Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.9
    • /
    • pp.7-13
    • /
    • 2019
  • Unlike a typical video wall system, video wall systems used for integrated monitoring in smart city environments must be able to display various videos, images, and text simultaneously. In this paper, we propose an Internet Protocol (IP)-based video wall system that has no limit on the number of videos that can be monitored simultaneously and that can arrange the monitor screen layout without restrictions. The proposed system is composed of multiple display servers, a wall controller, and video source providers, which communicate with each other over an IP network. Since each display server receives and decodes video streams directly from the video source devices and displays them on its attached monitor screens, more videos can be displayed simultaneously on the entire video wall. When one video is displayed across several screens attached to multiple display servers, only one display server receives the video stream and relays it to the other display servers using IP multicast, thereby reducing the network load and keeping the video frames synchronized. Experiments show that as the number of videos increases, a system consisting of more display servers achieves better decoding and rendering performance, and there is no performance degradation even as more display servers are added.
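
A minimal sketch of the multicast relay idea: the one display server that receives a stream forwards its chunks to an IP multicast group so the other display servers showing parts of the same video decode identical data. The group address and port are placeholders, and real streams would carry RTP framing, which is omitted here.

```python
import socket
import struct

MCAST_GROUP, MCAST_PORT = "239.1.1.1", 5004    # placeholder multicast address and port

def send_chunk(chunk: bytes):
    """Relay one received stream chunk to the multicast group (run on the receiving display server)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(chunk, (MCAST_GROUP, MCAST_PORT))
    sock.close()

def receive_chunks():
    """Other display servers join the group and hand the same chunks to their decoders."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        chunk, _ = sock.recvfrom(65535)
        yield chunk                                # pass the chunk to the local decoder
```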

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. IT facility failures in particular occur irregularly because of interdependence, and their causes are difficult to determine. Previous studies on predicting data center failures treated each server as a single, independent state without considering that the devices interact. In this study, data center failures are therefore classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring inside servers are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure on one server can cause failures on other servers or be triggered by failures on other servers. In other words, whereas existing studies analyzed failures under the assumption that servers do not affect one another, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within those sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state, was used. In addition, unlike in the single-server setting, a Hierarchical Attention Network model structure was adopted to reflect the fact that the contribution of each server to a complex failure differs; this approach improves prediction accuracy by giving larger weights to servers with larger impact on the failure. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data was treated once as a single-server state and once as a multiple-server state, and the results were compared. The second experiment improved prediction accuracy in the multiple-server case by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to fail. This result supports the hypothesis that servers affect one another. Overall, prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network, under the assumption that each server has a different effect, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance. (A model-shape sketch follows this entry.)
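
A minimal PyTorch-style sketch of the model shape described above: a shared LSTM encodes each server's resource time series, an attention layer weights the servers by their estimated influence on the failure, and a classifier outputs a failure probability. The dimensions, layer sizes, and the exact attention form are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ServerAttentionFailureModel(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)   # shared per-server encoder
        self.attn = nn.Linear(hidden, 1)                               # one attention score per server
        self.classifier = nn.Linear(hidden, 1)                         # failure / no failure

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features) - resource time series for each server
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)                        # (batch, n_servers, hidden)
        weights = torch.softmax(self.attn(h), dim=1)       # larger weight = larger estimated impact
        context = (weights * h).sum(dim=1)                 # attention-weighted summary over servers
        return torch.sigmoid(self.classifier(context)).squeeze(-1)

if __name__ == "__main__":
    model = ServerAttentionFailureModel(n_features=8)
    batch = torch.randn(4, 5, 60, 8)    # 4 samples, 5 servers, 60 time steps, 8 resource metrics
    print(model(batch).shape)           # torch.Size([4]) - one failure probability per sample
```

A per-server decision threshold, as in the paper's second experiment, could then be applied to the attention-weighted or per-server outputs when turning probabilities into alerts.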