• Title/Summary/Keyword: Server network bandwidth

Utilizing Channel Bonding-based M-VIA and Interval Cache on a Distributed VOD Server (효율적인 분산 VOD 서버를 위한 Channel Bonding 기반 M-VIA 및 인터벌 캐쉬의 활용)

  • Chung, Sang-Hwa;Oh, Soo-Cheol;Yoon, Won-Ju;Kim, Hyun-Pil;Choi, Young-In
    • The KIPS Transactions:PartA
    • /
    • v.12A no.7 s.97
    • /
    • pp.627-636
    • /
    • 2005
  • This paper presents a PC cluster-based distributed video-on-demand (VOD) server that minimizes the load of the interconnection network by adopting channel bonding-based M-VIA and the interval cache algorithm. Video data is distributed to the disks of each server node of the distributed VOD server, and each server node receives the data through the interconnection network and sends it to clients. The load of the interconnection network increases because of the large volume of video data transferred. We adopt two techniques to reduce the load of the interconnection network. First, a channel bonding technique supporting M-VIA is adopted for the interconnection network. M-VIA, a user-level communication protocol that reduces the overhead of the TCP/IP protocol in cluster systems, minimizes the time spent in communication. We increase the bandwidth of the interconnection network using the channel bonding technique with M-VIA; the channel bonding technique expands the bandwidth by sending data concurrently through multiple network cards. Second, the interval cache reduces traffic on the interconnection network by caching, in main memory, the video data transferred from the remote disks. Experiments using the distributed VOD server of this paper showed a maximum performance improvement of 30% compared with a distributed VOD server without channel bonding-based M-VIA and the interval cache, when used with a four-node PC cluster.
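
A minimal Python sketch of the interval-caching idea described in the abstract above. The class, its block-level granularity, and the FIFO eviction are illustrative assumptions rather than the paper's implementation: blocks fetched over the interconnection network for a leading stream stay in main memory so that a closely following stream of the same video can be served without touching the remote disk again.

```python
class IntervalCache:
    """Toy interval cache: keeps recently fetched blocks of a video in main
    memory so a closely following stream of the same video avoids another
    transfer over the interconnection network (FIFO eviction, block counts)."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = {}          # (video_id, block_no) -> data
        self.order = []          # FIFO eviction order of cached keys

    def read_block(self, video_id, block_no, fetch_remote):
        key = (video_id, block_no)
        if key in self.cache:                    # served from main memory:
            return self.cache[key]               # no interconnection traffic
        data = fetch_remote(video_id, block_no)  # remote disk over the network
        self.cache[key] = data
        self.order.append(key)
        if len(self.order) > self.capacity:      # evict the oldest cached block
            del self.cache[self.order.pop(0)]
        return data


# Usage: a trailing stream three blocks behind the leading one hits the cache.
def fake_remote(video_id, block_no):
    return b"x" * 1024                           # stand-in for a remote-disk read

cache = IntervalCache(capacity_blocks=8)
for b in range(10):
    cache.read_block("movie1", b, fake_remote)   # leading stream fetches remotely
for b in range(3, 10):
    cache.read_block("movie1", b, fake_remote)   # trailing stream: all cache hits
```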

Practical Patching for Efficient Bandwidth Sharing in VOD Systems

  • Ha Soak-Jeong
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.12
    • /
    • pp.1597-1604
    • /
    • 2005
  • Recursive Patching is an efficient multicast technique for large-scale video-on-demand systems that recursively shares existing video streams with asynchronous clients. When Recursive Patching initiates a transition stream, it always provisions the transition stream with additional data for the worst-case future request. In order to share a VOD server's limited network bandwidth efficiently, this paper proposes Practical Patching, which removes the unnecessary data included in the transition stream. The proposed Practical Patching dynamically expands ongoing transition streams when a new request actually arrives at the server. As a result, the transition streams never carry unnecessary data. Simulation results confirmed that the proposed technique is better than Recursive Patching in terms of service latency and defection rate.
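
A rough Python sketch of the provisioning difference the abstract describes; the class and function names are assumptions made for illustration. Recursive Patching would reserve the worst-case patch length when a transition stream starts, whereas Practical Patching starts with only what the first late client actually missed and extends the ongoing transition stream when a later request really arrives.

```python
class TransitionStream:
    """A transition (patch) stream carrying the first `length` seconds a late
    client missed from an ongoing full stream."""

    def __init__(self, length):
        self.length = length                 # seconds of prefix data provisioned

    def extend_to(self, needed_length):
        # Practical Patching: grow only when a real request needs more data.
        self.length = max(self.length, needed_length)


def patch_demand(full_stream_start, now, worst_case_window, practical=True):
    """Seconds of transition-stream data provisioned for a request at `now`."""
    missed = now - full_stream_start
    if practical:
        return missed                        # exactly what this request missed
    return worst_case_window                 # worst-case future request up front


# Usage: the full stream started at t=0 and the patch window is 60 s.
ts = TransitionStream(patch_demand(0, 10, 60))       # request at t=10
print(ts.length)                                     # 10 -> no unnecessary data
ts.extend_to(patch_demand(0, 25, 60))                # a later request at t=25
print(ts.length)                                     # 25
print(patch_demand(0, 10, 60, practical=False))      # 60 under worst-case provisioning
```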

Peer-to-Peer Transfer Scheme for Multimedia Partial Stream using Client Initiated with Prefetching (멀티미디어 데이터를 위한 피어-투-피어 전송모델)

  • 신광식;윤완오;정진하;최상방
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.7B
    • /
    • pp.598-612
    • /
    • 2004
  • Client requests have increased with the improvement of network resources on the client side, whereas network resources on the server side could not keep pace with the increased client requests. Efficient use of server-side network resources is therefore a primary factor for QoS. In this paper, we propose a peer-to-peer transfer scheme for partial multimedia streams based on CIWP (Client Initiated with Prefetching), which reduces server network bandwidth by utilizing client disk resources. In particular, adopting the Threshold-Based Multicast scheme guarantees that the transfer between clients never exceeds the service time of the previous peer, by restricting the amount of data transferred from the previous peer to be less than the amount transferred from the server. Peer-to-peer transfer between clients is limited to the same group, classified by ISP. Our analytical result shows that the proposed scheme reduces the network resources required at the server side by utilizing additional client disk resources. Furthermore, we performed various simulation studies demonstrating the performance gain by comparing delay time and the proportion of waiting requesters. As a result, compared to the Threshold-Based Multicast scheme, the proposed scheme reduces server network bandwidth by 35%.
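
A hedged Python sketch of the splitting rule implied by the abstract (the function signature and the half-video cap are assumptions, not the paper's exact formulation): a new client fetches the earlier portion from the previous peer in the same ISP group and the rest from the server, with the peer portion kept strictly smaller than the server portion so the peer-to-peer transfer never outlasts the previous peer's own service time.

```python
def split_stream(video_len, gap, prev_peer_isp, new_client_isp):
    """Decide how many seconds of the video come from the previous peer
    and how many from the server (all quantities in seconds of playback).

    video_len : total video length
    gap       : arrival gap between the previous peer and the new client
    Returns (from_peer, from_server); from_peer is 0 when the ISPs differ.
    """
    if prev_peer_isp != new_client_isp:
        return 0, video_len                  # P2P only inside the same ISP group

    # The previous peer supplies the prefix it has buffered on disk, but the
    # peer share must stay below the server share (a Threshold-Based Multicast
    # style restriction) so the P2P transfer ends before the peer's service does.
    from_peer = min(gap, max(video_len // 2 - 1, 0))
    from_server = video_len - from_peer
    assert from_peer < from_server
    return from_peer, from_server


# Usage: a 3600 s video; the new client arrives 300 s after the previous peer.
print(split_stream(3600, 300, "ISP-A", "ISP-A"))     # (300, 3300)
print(split_stream(3600, 300, "ISP-A", "ISP-B"))     # (0, 3600): different ISP group
```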

Mobile Client Buffer Level-based Scheduling Algorithms for Variable-Bit-Rate Video Stream Transmission (VBR 비디오 스트림 전송을 위한 모바일 클라이언트 버퍼 수준 기반 스케쥴링 알고리즘)

  • Kim, Jin-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.814-826
    • /
    • 2012
  • In this paper, we propose scheduling algorithms for transporting variable-bit-rate video streams over wireless communication networks using the playback buffer levels of the clients. The proposed algorithms attempt to maximize the utilization of the limited bandwidth between the central video server and the clients over a mobile network. Since a video server may serve several video requests at the same time, it is important to allocate and utilize network bandwidth so as to serve them fairly and efficiently. In order to improve the quality of service and the real-time performance of individual video playback, the video server temporarily allocates more network bandwidth to serve a video request with a lower buffer level preferentially. The simulation results demonstrate fair service and load balancing among concurrent mobile clients with different buffer levels, thereby maximizing the number of frames that are transported successfully to the clients prior to their playback times.
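
A minimal Python sketch of the buffer-level policy described above, with invented client records, thresholds, and a fixed per-round bandwidth budget: clients below a low-water mark temporarily receive more than their equal share of the link, and the remainder is split among the well-buffered clients.

```python
def allocate_bandwidth(clients, total_kbps, low_water=5.0, boost=1.5):
    """clients: dict name -> playback buffer level in seconds.
    Returns dict name -> kbps allocated for this scheduling round."""
    equal_share = total_kbps / len(clients)
    # Clients below the low-water mark are closest to underflow ...
    urgent = [n for n, lvl in clients.items() if lvl < low_water]
    normal = [n for n in clients if n not in urgent]
    # ... so they get a boosted share first (capped so the total stays in budget),
    allocation = {n: min(boost * equal_share, total_kbps / max(len(urgent), 1))
                  for n in urgent}
    # and whatever is left is split equally among the well-buffered clients.
    leftover = total_kbps - sum(allocation.values())
    for n in normal:
        allocation[n] = leftover / len(normal)
    return allocation


# Usage: client B is almost empty, so it temporarily gets half of the 3 Mbps link.
print(allocate_bandwidth({"A": 12.0, "B": 1.5, "C": 8.0}, total_kbps=3000))
# -> {'B': 1500.0, 'A': 750.0, 'C': 750.0}
```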

Server Side Solutions For Web-Based Video

  • Biernacki, Arkadiusz
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.4
    • /
    • pp.1768-1789
    • /
    • 2016
  • In contemporary video streaming systems based on HTTP protocol, video players at the client side are responsible for adjusting video quality to network conditions and user expectations. However, when multiple video clips are streamed simultaneously, an intricate application logic implemented in the video players overlays the TCP mechanism which is responsible for a balanced access to a shared network link. As a result, some video players may not obtain a fair share of network throughput and may be vulnerable to an unstable video bit-rate. Therefore, we propose to simplify the algorithms implemented in the video players, which are responsible for the adjustment of video quality and constrain their functionality only to sending feedback to a server about a state of the player buffer. The main logic of the system is shifted to the server, which is now responsible for bit-rate selection and prioritisation of the video streams transmitted to multiple clients. To verify our proposition, we performed several experiments in a laboratory environment which show that when the server cooperates with the clients, the video players experience fewer quality switches and the system achieves better fairness when allocating network throughput among the video players. However, this comes at the cost of worse utilisation of network bandwidth.
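
A small Python sketch in the spirit of the server-side approach above; the bitrate ladder, the buffer threshold, and the feedback format are all assumptions. Each client only reports its buffer state, and the server chooses the next bitrate per client while keeping the sum of the chosen bitrates within the shared link capacity.

```python
BITRATES = [500, 1200, 2500, 5000]        # kbps ladder (an assumption)

def pick_bitrates(buffer_reports, link_kbps):
    """buffer_reports: dict client -> buffered seconds reported to the server.
    Returns dict client -> chosen bitrate, keeping the total within the link."""
    choice = {c: 0 for c in buffer_reports}                 # index into BITRATES
    budget = link_kbps - len(buffer_reports) * BITRATES[0]
    upgraded = True
    while upgraded:                                         # round-robin upgrades:
        upgraded = False                                    # one rung per client per pass
        for client in sorted(buffer_reports, key=buffer_reports.get, reverse=True):
            i = choice[client]
            if i + 1 < len(BITRATES) and buffer_reports[client] > 10:
                step = BITRATES[i + 1] - BITRATES[i]
                if step <= budget:                          # only if capacity remains
                    budget -= step
                    choice[client] = i + 1
                    upgraded = True
    return {c: BITRATES[i] for c, i in choice.items()}


# Usage: three players share a 6 Mbps link; the server, not the players, arbitrates.
print(pick_bitrates({"p1": 25.0, "p2": 12.0, "p3": 4.0}, link_kbps=6000))
# -> {'p1': 2500, 'p2': 2500, 'p3': 500}
```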

Cross-Layer Reduction of Wireless Network Card Idle Time to Optimize Energy Consumption of Pull Thin Client Protocols

  • Simoens, Pieter;Ali, Farhan Azmat;Vankeirsbilck, Bert;Deboosere, Lien;Turck, Filip De;Dhoedt, Bart;Demeester, Piet;Torrea-Duran, Rodolfo;Perre, Liesbet Van der;Dejonghe, Antoine
    • Journal of Communications and Networks
    • /
    • v.14 no.1
    • /
    • pp.75-90
    • /
    • 2012
  • Thin client computing trades local processing for network bandwidth consumption by offloading application logic to remote servers. User input and display updates are exchanged between client and server through a thin client protocol. On wireless devices, the thin client protocol traffic can lead to significantly higher power consumption of the radio interface. In this article, a cross-layer framework is presented that transitions the wireless network interface card (WNIC) to the energy-conserving sleep mode when no traffic from the server is expected. The approach is validated for different wireless channel conditions, such as path loss and available bandwidth, as well as for different network round-trip time values. Using this cross-layer algorithm for a sample scenario with a remote text editor, and through experiments based on actual user traces, a reduction in WNIC energy consumption of up to 36.82% is obtained without degrading the application's reactivity.
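
A hedged Python sketch of the cross-layer idea: in a pull-based thin client protocol the client knows when it has just asked the server for an update, so between the request and the earliest plausible response (roughly one round-trip away) the wireless card can sleep. The sleep call below is a placeholder for driver-level power management, not a real WNIC API.

```python
import time

def wnic_sleep(duration_s):
    # Placeholder for driver-level power management (e.g. 802.11 power-save);
    # here the "card" is simply off for duration_s seconds.
    time.sleep(duration_s)

def pull_update(send_request, receive_update, rtt_estimate_s, guard_s=0.005):
    """Send one pull request and keep the WNIC asleep until a reply is plausible.

    send_request / receive_update are callables supplied by the thin client
    protocol layer; rtt_estimate_s is the measured network round-trip time.
    """
    send_request()
    # No server traffic can arrive earlier than roughly one RTT, so the radio idles.
    wnic_sleep(max(rtt_estimate_s - guard_s, 0.0))
    return receive_update()


# Usage with stubbed protocol callables and a 40 ms round-trip estimate.
reply = pull_update(send_request=lambda: None,
                    receive_update=lambda: b"display-update",
                    rtt_estimate_s=0.040)
print(reply)
```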

Design and Implementation of A Dual CPU Based Embedded Web Camera Streaming Server (Dual CPU 기반 임베디드 웹 카메라 스트리밍 서버의 설계 및 구현)

  • 홍진기;문종려;백승걸;정선태
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.417-420
    • /
    • 2003
  • Most embedded web camera server products currently deployed on the market adopt JPEG for compressing the video data continuously acquired from the cameras. However, JPEG does not compress a continuous video stream efficiently and is not appropriate for the Internet, where transmission bandwidth is not guaranteed. In our previous work, we presented the result of designing and implementing an embedded web camera streaming server using an MPEG-4 codec. However, the server in our previous work did not show good performance, since a single CPU had to handle both compression and network transmission. In this paper, we present our efforts to improve our previous result by using dual CPUs, where a DSP is employed for data compression and a StrongARM is used for network processing. Better performance has been observed, but further work is still needed to optimize the performance.
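
A Python sketch, using two threads and queues as stand-ins for the two processors, of the division of labour the paper describes: one worker plays the role of the DSP compressing frames while the other plays the StrongARM streaming them, so compression and network transmission overlap instead of competing for one CPU. The frame source, the "encoder", and the socketless "sender" are all stubs.

```python
import queue
import threading

frames_in = queue.Queue(maxsize=8)      # raw frames handed to the "DSP"
frames_out = queue.Queue(maxsize=8)     # compressed frames for the "StrongARM"

def dsp_encoder():
    # Stand-in for MPEG-4 compression running on the DSP.
    while True:
        raw = frames_in.get()
        if raw is None:                 # end-of-stream marker
            frames_out.put(None)
            return
        frames_out.put(b"mpeg4:" + raw[:16])      # pretend-compressed frame

def arm_sender(sent):
    # Stand-in for the StrongARM side handing packets to the network stack.
    while True:
        pkt = frames_out.get()
        if pkt is None:
            return
        sent.append(pkt)                # would be a socket send on the device

sent_packets = []
workers = [threading.Thread(target=dsp_encoder),
           threading.Thread(target=arm_sender, args=(sent_packets,))]
for w in workers:
    w.start()
for i in range(5):                      # five captured camera frames
    frames_in.put(bytes(32) + i.to_bytes(2, "big"))
frames_in.put(None)
for w in workers:
    w.join()
print(len(sent_packets), "frames streamed")
```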

OFPT: OpenFlow based Parallel Transport in Datacenters

  • Liu, Bo;Xu, Bo;Hu, Chao;Hu, Hui;Chen, Ming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.10
    • /
    • pp.4787-4807
    • /
    • 2016
  • Although dense interconnection datacenter networks (DCNs) such as FatTree provide multiple paths and high bisection bandwidth for each server pair, the widely used single-path TCP (SPT) and ECMP neither achieve high bandwidth utilization nor provide good load balancing. With only one available transmission path, SPT cannot make full use of all available bandwidth, while ECMP's random hashing results in many collisions. In this paper, we present OFPT, an OpenFlow based Parallel Transport framework, which integrates precise routing and scheduling for better load balancing and higher network throughput. By adopting an OpenFlow based centralized control mechanism, OFPT computes the optimal path and bandwidth provision for each flow according to the global network view. To guarantee high throughput, OFPT dynamically schedules flows with a Seamless Flow Migration Mechanism (SFMM), which avoids packet loss during flow rerouting. Finally, we test OFPT on Mininet and implement it in a real testbed. The experimental results show that the average network throughput in OFPT is up to 97.5% of the bisection bandwidth, which is higher than ECMP by 36%. In addition, OFPT decreases the average flow completion time (AFCT) and achieves better scalability.
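
A simplified Python sketch contrasting ECMP-style hashing with the kind of centralized, load-aware path choice an OFPT-like controller makes; the path table and load counters are invented, and the real system's bandwidth provisioning and Seamless Flow Migration Mechanism are omitted.

```python
import hashlib

PATHS = ["core1", "core2", "core3", "core4"]       # equal-cost paths (assumed)
path_load = {p: 0.0 for p in PATHS}                # bandwidth reserved per path, Gbps

def ecmp_path(flow_5tuple):
    # ECMP: a static hash of the 5-tuple; collisions can pile flows on one path.
    digest = hashlib.md5(repr(flow_5tuple).encode()).hexdigest()
    return PATHS[int(digest, 16) % len(PATHS)]

def controller_path(flow_demand_gbps):
    # Centralized choice from the global view: place the flow on the
    # least-loaded path and record the bandwidth granted to it.
    best = min(PATHS, key=lambda p: path_load[p])
    path_load[best] += flow_demand_gbps
    return best


# Usage: four 1 Gbps flows; hashing may collide, the controller spreads them out.
flows = [("10.0.0.1", "10.0.1.1", 6000 + i, 80, "tcp") for i in range(4)]
print([ecmp_path(f) for f in flows])
print([controller_path(1.0) for _ in flows], path_load)
```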

A Dynamic Bandwidth Tuning Mechanism for DQDB in Client-Server Traffic Environments (클라이언트-서버 트래픽 환경에서 분산-큐 이중-버스의 동적 대역폭 조절 방식)

  • Kim, Jeong-Hong;Kwon, Oh-Seok
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.11
    • /
    • pp.3479-3489
    • /
    • 2000
  • Most studies on fairness control methods for the Distributed-Queue Dual-Bus (DQDB) have been performed under specific load types such as equal-probability or symmetric loads. In Web-based Internet environments, client-server load types are more realistic traffic patterns than such specific load types. In this paper, an effective fairness control method is proposed that distributes DQDB network bandwidth fairly to all stations under a client-server load. In order to implement the dynamic bandwidth tuning capability needed to distribute the bandwidth fairly at heavy loads, the proposed method uses two parameters: an access limit to regulate each station's packet transmission, and the number of extra empty slots that are yielded to downstream stations. From an implementation point of view, this mechanism is simpler and easier than the Bandwidth Tuning Mechanism (BTM), which uses an intermediate pattern and an adaptation function. Simulation results show that it outperforms other mechanisms.
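
A toy Python model of the two parameters the abstract names, with the slot mechanics grossly simplified: each station may seize at most access_limit empty slots per cycle and must let yield_slots additional empty slots pass to downstream stations, which is what shifts bandwidth toward them under heavy load.

```python
def run_bus_cycle(stations, empty_slots):
    """stations: list of dicts with 'queued', 'access_limit' and 'yield_slots',
    ordered from upstream to downstream. Returns segments sent per station."""
    sends = [0] * len(stations)
    for i, st in enumerate(stations):
        reserved = st["yield_slots"]                  # empty slots passed downstream
        can_take = min(st["queued"], st["access_limit"],
                       max(empty_slots - reserved, 0))
        sends[i] = can_take
        st["queued"] -= can_take
        empty_slots -= can_take                       # remaining slots flow downstream
    return sends


# Usage: heavily loaded stations; upstream ones leave slots for downstream ones.
stations = [{"queued": 50, "access_limit": 6, "yield_slots": 4},
            {"queued": 50, "access_limit": 6, "yield_slots": 2},
            {"queued": 50, "access_limit": 6, "yield_slots": 0}]
print(run_bus_cycle(stations, empty_slots=12))        # -> [6, 4, 2]
```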

A Scalable VoD Service Scheme Based on Overlay Multicast Approach (오버레이 멀티캐스트를 적용한 확장성 있는 VoD 서비스 모델)

  • Kim Kyung-Hoon;Son Seung-Chul;Nam Ji-Seung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.11B
    • /
    • pp.784-791
    • /
    • 2005
  • To provide VoD service on the Internet, a VoD server is required to have a large amount of system resources and network bandwidth. Overlay multicast schemes are therefore considered suitable alternatives, but they also have some drawbacks in supporting on-demand media services. In this paper, we propose an overlay multicast based on-demand media service scheme that exploits the server's system resources and network bandwidth efficiently. The proposed scheme uses the shared buffers of clients involved in relaying traffic together with Patching, while imposing no restrictions on the scheme compared with unicast. Our simulation results show that the proposed scheme can support more users than unicast and improve network performance at the same time.
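
A rough Python sketch of the admission logic implied by the abstract (the session structure, buffer window, and relay capacity are assumed): a new request joins an existing overlay session if the missed prefix is still held in a relaying client's shared buffer, in which case only a short Patching-style catch-up stream comes from the server; otherwise the server opens a new full stream.

```python
def serve_request(sessions, video_id, now, buffer_window_s):
    """sessions: list of dicts {'video', 'start', 'relay_capacity'}.
    Returns (mode, session, patch_seconds) for the new client."""
    for s in sessions:
        missed = now - s["start"]
        if (s["video"] == video_id
                and missed <= buffer_window_s     # prefix still in peers' shared buffers
                and s["relay_capacity"] > 0):     # some client can relay the stream
            s["relay_capacity"] -= 1
            # Ongoing data arrives over the overlay; only the missed prefix
            # (a Patching-style catch-up stream) consumes server bandwidth.
            return "overlay+patch", s, missed
    new = {"video": video_id, "start": now, "relay_capacity": 4}
    sessions.append(new)                          # the server opens a full stream
    return "new-server-stream", new, 0


# Usage: the second viewer arrives 60 s late and joins the overlay with a 60 s patch.
sessions = []
print(serve_request(sessions, "movie1", now=0, buffer_window_s=120)[0])    # new-server-stream
print(serve_request(sessions, "movie1", now=60, buffer_window_s=120)[0])   # overlay+patch
```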