• Title/Summary/Keyword: server performance

A Framework for Developing IFC Server for Supporting Construction Product Life Cycle Management (CPLM) (CPLM 지원을 위한 OR-IFC 서버 개발 기초 연구)

  • Kang, Hoon-Sig; Lee, Ghang; Kim, Seon-Woo
    • Proceedings of the Computational Structural Engineering Institute Conference / 2008.04a / pp.458-463 / 2008
  • An IFC Server is a database management system that stores data complying with IFC, a standard data format, and keeps track of data transactions, modifications, and deletions. It acts as an information hub for storing and sharing information among the various parties involved in a construction project. There have been several efforts to develop an IFC Server; however, they suffered from slow performance and long transaction times due to the complex mapping process between IFC files and relational database structures. In this study, we aim to develop an IFC Server using an object-relational database system. Since IFC has an object-flavored data structure, we expect a simpler and faster mapping process from IFC to the IFC Server. This paper reviews existing studies and describes the overall framework of the OR-IFC Server.
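
A minimal sketch of the kind of IFC-to-database mapping the abstract describes, using SQLite in place of a full object-relational DBMS; the IFC snippet, table layout, and parsing regular expression are simplified assumptions for illustration only.

```python
# Simplified illustration of loading IFC (STEP) entity instances into a
# relational store. A real OR-IFC server would map each IFC class to its own
# typed table; here every instance goes into one generic table.
import re
import sqlite3

IFC_SAMPLE = """#1=IFCPROJECT('2x3RootGuid',$,'Sample project',$,$,$,$,$,$);
#2=IFCWALL('2x3WallGuid00',$,'Wall-001',$,$,$,$,$);"""

LINE_RE = re.compile(r"#(\d+)=(\w+)\((.*)\);")

def load_ifc(text, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS ifc_entity ("
        "id INTEGER PRIMARY KEY, type TEXT, attributes TEXT)"
    )
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            eid, etype, attrs = m.groups()
            conn.execute(
                "INSERT INTO ifc_entity VALUES (?, ?, ?)",
                (int(eid), etype, attrs),
            )
    conn.commit()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    load_ifc(IFC_SAMPLE, db)
    for row in db.execute("SELECT id, type FROM ifc_entity"):
        print(row)
```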

High Availability Web Server Cluster Using Self-healing Technique (자가 치유 기법을 이용한 고가용도 웹 서버 클러스터)

  • Chung, Ji Yung; Kim, Young Ro
    • Journal of Korea Society of Digital Industry and Information Management / v.5 no.1 / pp.23-32 / 2009
  • Although the web has become a widely accepted medium, it provides relatively poor performance and low availability. A cluster is a collection of interconnected stand-alone computers working together, and it provides a high-availability solution in application areas such as web services and information systems. Web server clusters require highly available service with proactive and practical fault management; however, as system complexity grows, this requirement is not easy to meet. Therefore, web server clusters must have a self-fault-management capability to achieve high availability. In this paper, we propose a high-availability web server cluster that uses a self-healing technique with minimal human intervention. Our experimental results show that the proposed method can be used to improve the availability of web server clusters.
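
A minimal sketch of the self-healing idea described in the abstract above: a monitor probes each cluster node and, on failure, runs a recovery action instead of waiting for an operator. The node list, probe, and restart command are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a self-healing monitor for a web server cluster: probe each node
# periodically and attempt automatic recovery when a probe fails.
import socket
import subprocess
import time

NODES = [("10.0.0.11", 80), ("10.0.0.12", 80)]  # hypothetical back-end nodes

def is_alive(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def heal(host):
    # Illustrative recovery action; a real cluster might restart the web
    # server daemon over SSH or power-cycle the node via a management board.
    subprocess.run(["ssh", host, "sudo systemctl restart nginx"], check=False)

def monitor(interval=5.0):
    while True:
        for host, port in NODES:
            if not is_alive(host, port):
                print(f"{host} failed health check, attempting self-healing")
                heal(host)
        time.sleep(interval)

if __name__ == "__main__":
    monitor()
```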

Prediction-based Dynamic Thread Pool System for Massively Multi-player Online Game Server

  • Ju, Woo-Suk; Im, Choong-Jae
    • Journal of Korea Multimedia Society / v.12 no.6 / pp.876-881 / 2009
  • Online game servers have usually used a static thread pool system, but this approach does not suit large online game servers because the load constantly fluctuates. Therefore, in this paper we propose a new algorithm for large online game servers based on a prediction-based dynamic thread pool system. The original prediction scheme was developed for web servers: every 0.1 seconds the system predicts the number of threads needed and sets the thread pool size accordingly. Our experiments show that a check interval of 0.4 seconds works best for an online game server; if the number of busy worker threads neither exceeds nor falls below the given thresholds, no prediction is made and the current pool size is kept, otherwise the prediction algorithm is applied and the number of threads is changed. Experimental results show that the proposed algorithm greatly reduces this overhead and improves the performance of a large online game server compared with the static thread pool system.
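
A minimal sketch of the threshold-gated resizing loop described above. The 0.4-second check interval comes from the abstract; the busy-ratio thresholds and the moving-average predictor are illustrative assumptions rather than the paper's exact prediction rule.

```python
# Sketch of a prediction-based dynamic thread pool: every CHECK_INTERVAL the
# controller inspects how many workers are busy; only when usage leaves the
# [LOW, HIGH] band does it predict the next demand and resize the pool.
import collections
import queue
import threading
import time

CHECK_INTERVAL = 0.4   # seconds, per the paper's result for game servers
LOW, HIGH = 0.3, 0.8   # illustrative busy-ratio thresholds

class DynamicThreadPool:
    def __init__(self, initial=4):
        self.tasks = queue.Queue()
        self.busy = 0
        self.size = 0
        self.lock = threading.Lock()
        self.history = collections.deque(maxlen=5)
        for _ in range(initial):
            self._spawn()
        threading.Thread(target=self._controller, daemon=True).start()

    def _spawn(self):
        threading.Thread(target=self._worker, daemon=True).start()
        self.size += 1

    def _worker(self):
        while True:
            job = self.tasks.get()
            if job is None:              # poison pill: one worker exits
                return
            with self.lock:
                self.busy += 1
            job()
            with self.lock:
                self.busy -= 1

    def _controller(self):
        while True:
            time.sleep(CHECK_INTERVAL)
            ratio = self.busy / max(self.size, 1)
            self.history.append(self.busy)
            if LOW <= ratio <= HIGH:
                continue                 # inside the band: keep current size
            predicted = max(2, round(sum(self.history) / len(self.history)) + 1)
            while self.size < predicted:
                self._spawn()
            while self.size > predicted:
                self.tasks.put(None)     # ask one worker to exit
                self.size -= 1

    def submit(self, job):
        self.tasks.put(job)
```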

Adaptable Online Game Server Design

  • Seo, Jintaek
    • Journal of Information and Communication Convergence Engineering / v.18 no.2 / pp.82-87 / 2020
  • This paper discusses how to design a game server that is scalable, adaptable, and re-buildable from components, and explains how various implementation issues were resolved. To support adaptability, the server comprises three layers: network, user, and database. To ensure independence between the layers, each layer communicates with the others only via message queues. In this architecture each layer can have an arbitrary number of threads, so scalability is guaranteed for each layer. The network layer uses input/output completion ports (IOCP), which offer the best performance on the Windows platform; the server can handle up to 5,000 simultaneous connections on a typical entry-level computer, despite being built with a single-threaded user layer. To completely separate the database from the game server, SQL code was not embedded directly in the database layer.
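
A minimal sketch of the three-layer decoupling the abstract describes, with each layer as a thread that talks to its neighbors only through queues. The message format and layer logic are illustrative assumptions; the paper's actual network layer uses IOCP on Windows, which is not shown here.

```python
# Sketch of layers (network -> user -> database) that communicate only via
# message queues, so each layer can run any number of threads independently.
import queue
import threading

net_to_user = queue.Queue()
user_to_db = queue.Queue()

def network_layer():
    # Stand-in for socket I/O: pretend two packets arrived from clients.
    for packet in ({"conn": 1, "cmd": "login", "name": "alice"},
                   {"conn": 2, "cmd": "login", "name": "bob"}):
        net_to_user.put(packet)
    net_to_user.put(None)  # shutdown marker for this sketch

def user_layer():
    while (msg := net_to_user.get()) is not None:
        # Game/user logic runs here, then a request is handed to the DB layer.
        user_to_db.put({"op": "load_character", "name": msg["name"]})
    user_to_db.put(None)

def database_layer():
    while (req := user_to_db.get()) is not None:
        print("DB layer handling:", req)   # a real layer would run SQL here

threads = [threading.Thread(target=f)
           for f in (network_layer, user_layer, database_layer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```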

Implementation of Linux Virtual Server Load Balancing in Cloud Environment (클라우드 환경에서 Linux Virtual Server 로드밸런싱 구현)

  • Seo, Kyung-Seok; Lee, Bon-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.10a / pp.793-796 / 2012
  • Recently, adoption of Green IT has come to be regarded as essential for reducing server heat and saving energy in data centers, because energy consumption and energy prices keep increasing. Consequently, conventional IT infrastructure is being replaced with cloud computing platforms. In this paper, we implement Linux Virtual Server (LVS) load balancing on an open-source cloud platform and analyze the performance of the LVS load balancing.

A Job Scheduling Method for a Digital Contents Delivery Service System Using Idle Computing Resources (유휴 컴퓨팅 자원을 이용한 디지털 콘텐츠 전송 서비스 시스템에서의 작업할당기법)

  • Kim, Jin-Il; Song, Jeong-Yeong
    • The Journal of Engineering Research / v.7 no.1 / pp.29-36 / 2005
  • In a multi-server environment, we need a mechanism that selects an appropriate server for each service request while maximizing server throughput and minimizing the average response time of service requests. In this paper, we propose a job scheduling method for a courseware delivery system that uses idle computing resources on a LAN. The proposed method selects a server based on the client's service request time, the server's available period, and the server's load for the requested service. Experiments comparing our approach with a conventional one show that the proposed approach provides better performance.
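
A minimal sketch of the server-selection idea described above: only machines whose idle window covers the request are candidates, and the least-loaded candidate wins. The data layout, field names, and tie-breaking rule are illustrative assumptions, not the paper's exact scheduling policy.

```python
# Sketch of choosing a delivery server for a content request from a pool of
# idle LAN machines, using request time, availability window, and load.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    idle_from: float   # start of idle window (epoch seconds)
    idle_until: float  # end of idle window
    load: float        # current load, 0.0 (idle) .. 1.0 (saturated)

def select_server(servers, request_time, job_duration):
    candidates = [
        s for s in servers
        if s.idle_from <= request_time
        and request_time + job_duration <= s.idle_until
    ]
    if not candidates:
        return None                      # no idle resource can take the job
    return min(candidates, key=lambda s: s.load)

if __name__ == "__main__":
    pool = [
        Server("lab-pc-01", 0, 3_600, load=0.7),
        Server("lab-pc-02", 0, 3_600, load=0.2),
        Server("lab-pc-03", 0, 600, load=0.1),
    ]
    chosen = select_server(pool, request_time=100, job_duration=900)
    print("dispatch to:", chosen.name if chosen else "queue the request")
```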

A Study on the Implementation of Web-Camera System and the Measurement of Traffic (웹 카메라 시스템의 구현과 트래픽 측정에 관한 연구)

  • 안영민; 진현준; 박노경
    • Proceedings of the Korean Information Science Society Conference / 2001.04a / pp.187-189 / 2001
  • In this study, a web camera system is implemented and simulated on two different architectures. In the first architecture, the web server and the camera server are implemented on the same system, which transfers motion pictures compressed as JPEG files to users on the WWW (World Wide Web). In the second architecture, the web server and the camera server are implemented on separate systems, and the motion pictures are transferred from the camera server to the web server and finally to the users. To compare the performance of the two architectures, data traffic is measured and simulated in units of bytes per second and frames per second.

Design and Implementation of Event Based Message Exchange Architecture between Servers for Server Push (서버 푸시를 위한 이벤트 기반 서버간 메시지 교환 아키텍처의 설계 및 구현)

  • Cho, Dong-Il; Rhew, Sung-Yul
    • Journal of Internet Computing and Services / v.12 no.4 / pp.181-194 / 2011
  • Server push, a technique for sending content from servers to browsers in real time using long-polling requests, enables real-time bidirectional communication between servers and browsers in an HTTP environment. Recently, thanks to the rapid spread of mobile devices capable of full browsing, server push has been applied to various applications. However, because the servers providing such services must deliver distributed content to a large number of users simultaneously, in diverse user environments, they bear the burden of delivering content quickly while distinguishing far more concurrent users than before. Existing methods of message exchange in distributed server environments have difficulty processing simultaneous user requests, identifying users, and delivering content. In this paper, we propose a message exchange architecture between servers for providing server push in a distributed server environment. The proposed architecture enables push-style message exchange between servers based on an event-driven architecture, and it allows flexible identification of event agents and flexible event processing when many users are connected. We designed and implemented the proposed architecture, compared its performance with the previous approach through a performance test, and confirmed its functionality through a case study. The performance test shows that the proposed architecture reduces server thread usage and user response time while increasing simultaneous throughput.
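
A minimal sketch of event-driven, push-style message exchange between servers, using an in-process broker with per-subscriber queues in place of a real network transport; the topic names and message shape are illustrative assumptions, not the paper's protocol.

```python
# Sketch of event-driven message exchange: instead of servers polling each
# other, a publisher pushes an event to a broker, which forwards it to every
# server that subscribed to that event type.
import queue
from collections import defaultdict

class EventBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # event type -> list of queues

    def subscribe(self, event_type):
        q = queue.Queue()
        self.subscribers[event_type].append(q)
        return q

    def publish(self, event_type, payload):
        for q in self.subscribers[event_type]:
            q.put({"type": event_type, "payload": payload})

if __name__ == "__main__":
    broker = EventBroker()
    # Two push servers interested in events for the users they currently hold.
    push_a = broker.subscribe("chat.room42")
    push_b = broker.subscribe("chat.room42")
    broker.publish("chat.room42", {"from": "alice", "text": "hello"})
    print(push_a.get_nowait())
    print(push_b.get_nowait())
```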

A Performance Improvement of Linux TCP Networking by Data Structure Reuse (자료 구조 재사용을 이용한 리눅스 TCP 네트워킹 성능 개선)

  • Kim, Seokkoo; Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems / v.3 no.8 / pp.261-270 / 2014
  • As Internet traffic has increased in recent years, much effort has been put into improving web server performance. In addition to hardware-side solutions, such as replacement with high-end hardware or expansion of the number of servers, there are software-side solutions, which have recently been studied actively. In this paper, we identify performance degradation problems that occur in the conventional TCP networking receive path and propose ways to solve them. We improve performance by combining three existing methods for Linux networking performance improvement with two methods newly proposed in this paper. The three existing methods are: 1) allocating a packet flow to a core in a multi-core environment, 2) the ITR (Interrupt Throttle Rate) method to control excessive interrupt requests, and 3) sk_buff data structure recycling. The two newly proposed methods are fd data structure recycling and epoll_event data structure recycling. Through experiments in a web server environment, we verify the effect of the two proposed methods and of their combination with the three existing methods. We use three kinds of web servers: a simple web server, Lighttpd (commonly used on Linux), and Apache. In the simple web server environment, fd data structure recycling and epoll_event data structure recycling improve performance by about 7% and 6%, respectively; combined with the three existing methods, the total performance improvement reaches 40%. In the Lighttpd and Apache environments, the combination of the five methods improves performance by up to 36% and 20%, respectively.
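
A minimal user-space sketch of the recycling idea: instead of allocating and freeing a structure for every request, finished objects go onto a free list and are reused. The paper itself modifies kernel handling of fd and epoll_event structures; the pool below only illustrates the general data structure reuse concept, with assumed field names.

```python
# Sketch of data structure reuse: a free list hands back previously used
# objects so the per-request allocate/initialize/free cost is avoided.
class ConnState:
    """Stand-in for a per-connection structure such as fd or epoll_event data."""
    __slots__ = ("fd", "buffer")
    def __init__(self):
        self.fd = -1
        self.buffer = bytearray(4096)

class RecyclingPool:
    def __init__(self, prealloc=64):
        self.free = [ConnState() for _ in range(prealloc)]

    def acquire(self, fd):
        obj = self.free.pop() if self.free else ConnState()
        obj.fd = fd                 # re-initialize only the fields that change
        return obj

    def release(self, obj):
        obj.fd = -1
        self.free.append(obj)       # recycle instead of letting it be freed

if __name__ == "__main__":
    pool = RecyclingPool()
    conn = pool.acquire(fd=7)
    # ... handle the request ...
    pool.release(conn)
    reused = pool.acquire(fd=8)
    print(reused is conn)           # True: the structure was reused
```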

Scalable and Dynamically Reconfigurable Internet Service System Based on Clustered System (확장과 동적재구성 가능한 클러스터기반의 인터넷서비스 시스템)

  • Kim, Dong Keun; Park, Se Myung
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1400-1411 / 2004
  • Recently, the explosive growth in the number of Internet users has required fundamental changes in the architecture of web service systems, from single-server systems to clustered server systems, in parallel with efforts to improve the scalability of single Internet server systems. However, current cluster-based server systems are dedicated to a single application, for example the one-IP server system, which consists of clustered computing nodes with identical functions and tries to distribute requests evenly across the clustered nodes based on IP. In this paper, we implemented a more flexible application service platform. It runs on a shared server cluster (back-end servers) together with an application server (front-end server) for a particular service. The application server provides the service by itself under low load, but as the load increases, it reconfigures itself with one or more available servers from the shared cluster and distributes the load evenly across the selected servers. We used PVM for effective management of the clustered servers. We found that the implemented application service platform provides more stable and scalable operation and shows remarkable performance improvement under dynamic load changes.
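
A minimal sketch of the reconfiguration policy described above: a front-end application server watches its own load and borrows or returns back-end nodes from the shared cluster as the load crosses thresholds. The thresholds and node bookkeeping are illustrative assumptions; the paper manages the cluster with PVM, which is not shown here.

```python
# Sketch of dynamic reconfiguration: the front-end keeps a set of active
# back-ends and grows or shrinks that set from a shared pool based on load.
SCALE_OUT_AT = 0.8    # borrow a node when average load exceeds this
SCALE_IN_AT = 0.3     # return a node when average load falls below this

class AppServer:
    def __init__(self, shared_pool):
        self.shared_pool = list(shared_pool)   # idle back-ends shared by all services
        self.active = ["front-end"]            # the application server itself

    def reconfigure(self, avg_load):
        if avg_load > SCALE_OUT_AT and self.shared_pool:
            node = self.shared_pool.pop()
            self.active.append(node)           # start serving requests on this node too
        elif avg_load < SCALE_IN_AT and len(self.active) > 1:
            node = self.active.pop()
            self.shared_pool.append(node)      # give the node back to the shared cluster
        return self.active

if __name__ == "__main__":
    svc = AppServer(["node-1", "node-2", "node-3"])
    for load in (0.5, 0.9, 0.95, 0.4, 0.2):
        print(f"load={load:.2f} -> active={svc.reconfigure(load)}")
```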
