• Title/Summary/Keyword: Bandwidth Request


Bandwidth Reservation and Call Admission Control Mechanisms for Efficient Support of Multimedia Traffic in Mobile Computing Environments (이동 컴퓨팅 환경에서 멀티미디어 트래픽의 효율적 지원을 위한 대역폭 예약 및 호 수락 제어 메커니즘)

  • 최창호;김성조
    • Journal of KIISE: Information Networking / v.29 no.6 / pp.595-612 / 2002
  • One of the most important issues in guaranteeing a high degree of QoS in mobile computing is how to reduce hand-off drops caused by a lack of available bandwidth in the new cell. Each cell can request bandwidth reservations from its adjacent cells for hand-off calls; this reserved bandwidth can be used only for hand-offs, not for new calls. It is also important to determine how much bandwidth should be reserved for hand-off calls, because reserving too much increases the probability that a new call is blocked. Therefore, it is essential to develop a mechanism that provides QoS guarantees in a mobile computing environment by reserving an appropriate amount of bandwidth and by controlling call admission. In this paper, bandwidth reservation and call admission control mechanisms are proposed to guarantee consistent QoS for multimedia traffic in a mobile computing environment. For appropriate bandwidth reservation, we propose an adaptive reservation mechanism based on an MPP and a 2-tier cell structure: the former is used to predict the client's next move, while the latter restricts the mechanism to clients with a high hand-off probability. We also propose a call admission control that performs the admission test only on the client's PNC (Predicted Next Cell) and its current cell. To minimize the bandwidth wasted by erroneous predictions of a client's location, we utilize a common pool and a QoS adaptation scheme. To evaluate the performance of our call admission control mechanism, we measure the blocking probability of new calls, the dropping probability of hand-off calls, and bandwidth utilization. The simulation results show that our mechanism outperforms existing mechanisms such as NR-CAT2, FR-CAT2, and AR-CAT2.
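
The abstract above describes the mechanism only at a high level. As a rough illustration (not the paper's actual algorithm), the Python sketch below shows the general shape of hand-off bandwidth reservation plus an admission test on both the current cell and the predicted next cell (PNC). The class layout, the fixed per-cell reserve, and the `predict_next_cell` stub are assumptions made for the example.

```python
class Cell:
    """Toy cell model: total capacity, bandwidth in use, and a hand-off reserve."""
    def __init__(self, capacity, reserved_for_handoff):
        self.capacity = capacity
        self.reserved = reserved_for_handoff  # usable only by hand-off calls
        self.used = 0.0

    def available(self, is_handoff):
        # New calls may not dip into the hand-off reserve.
        headroom = self.capacity - self.used
        return headroom if is_handoff else headroom - self.reserved

    def admit(self, bw, is_handoff):
        if self.available(is_handoff) >= bw:
            self.used += bw
            return True
        return False


def predict_next_cell(client, cells):
    """Placeholder for the paper's MPP-based mobility prediction."""
    return cells[client["likely_next"]]


def admit_new_call(client, bw, current_cell, cells):
    """Admit a new call only if the current cell has room outside its reserve
    and the predicted next cell (PNC) could also accept it on hand-off."""
    pnc = predict_next_cell(client, cells)
    if current_cell.available(is_handoff=False) < bw:
        return False                      # blocked in the current cell
    if pnc.available(is_handoff=True) < bw:
        return False                      # would likely be dropped at hand-off
    return current_cell.admit(bw, is_handoff=False)


if __name__ == "__main__":
    cells = {"A": Cell(100, 10), "B": Cell(100, 10)}
    client = {"likely_next": "B"}
    print(admit_new_call(client, 5.0, cells["A"], cells))  # True while both cells have headroom
```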

Dynamic Bandwidth Allocation Algorithm with Two-Phase Cycle for Ethernet PON (EPON에서의 Two-Phase Cycle 동적 대역 할당 알고리즘)

  • Yoon, Won-Jin;Lee, Hye-Kyung;Chung, Min-Young;Lee, Tae-Jin;Choo, Hyun-Seung
    • The KIPS Transactions: Part C / v.14C no.4 / pp.349-358 / 2007
  • Ethernet Passive Optical Network (EPON), one of the PON technologies for realizing FTTx (Fiber-To-The-Curb/Home/Office), allows optical access networks to be built cost-effectively. EPON can provide transmission rates of up to 10Gbps and is compatible with existing customer devices equipped with Ethernet cards. To control frame transmissions from ONUs to the OLT efficiently, EPON can use the Multi-Point Control Protocol (MPCP), which adds control functions on top of the Media Access Control (MAC) protocol. Many studies on intra- and inter-ONU scheduling algorithms have been performed for EPON. Among the inter-ONU scheduling algorithms, IPS (Interleaved Polling with Stop), which is based on polling, is efficient because the OLT assigns the available time to each ONU based on the request information from all ONUs. However, since IPS needs an idle period on the uplink between two consecutive frame transmission periods, it wastes time in which no frames are transmitted. In this paper, we propose a dynamic bandwidth allocation algorithm that increases uplink channel utilization and evaluate its performance through simulation. The simulation results show that the proposed Two-phase Cycle Dynamic Bandwidth Allocation (TCDBA) algorithm improves throughput by about 15% compared with IPS and Fast Gate Dynamic Bandwidth Allocation (FGDBA). The average transmission time of the proposed algorithm is also lower than that of the other schemes.
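
TCDBA itself is not specified in the abstract; the sketch below only illustrates the generic polling-based DBA step that such schemes refine: the OLT collects per-ONU REPORTs and grants each ONU a bounded share of the cycle, redistributing leftover capacity. The `max_grant_bytes` cap and the redistribution rule are illustrative assumptions, not the TCDBA algorithm.

```python
def dba_grants(reports, cycle_bytes, max_grant_bytes):
    """One DBA round: grant each ONU min(request, cap), then redistribute
    leftover cycle capacity to ONUs whose requests exceeded the cap."""
    grants = {onu: min(req, max_grant_bytes) for onu, req in reports.items()}
    leftover = cycle_bytes - sum(grants.values())
    heavy = [onu for onu, req in reports.items() if req > max_grant_bytes]
    for onu in heavy:
        if leftover <= 0:
            break
        extra = min(reports[onu] - grants[onu], leftover)
        grants[onu] += extra
        leftover -= extra
    return grants


if __name__ == "__main__":
    reports = {"onu1": 4000, "onu2": 15000, "onu3": 800}   # queued bytes reported per ONU
    print(dba_grants(reports, cycle_bytes=16000, max_grant_bytes=6000))
```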

Design and Evaluation of a Reservation-Based Hybrid Disk Bandwidth Reduction Policy for Video Servers (비디오 서버를 위한 예약기반 하이브리드 디스크 대역폭 절감 정책의 설계 및 평가)

  • Oh, Sun-Jin;Lee, Kyung-Sook;Bae, Ihn-Han
    • The KIPS Transactions: Part B / v.8B no.5 / pp.523-532 / 2001
  • A critical issue in the performance of a video-on-demand system is the I/O bandwidth the video server needs to satisfy client requests; it is the crucial resource whose shortage increases delay. Several approaches, such as batching and piggybacking, are used to reduce the I/O demand on the video server through sharing. Batching forms a single I/O request to the storage server by grouping requests for the same object. Piggybacking alters the display rates of in-progress requests for the same object so that their corresponding I/O streams merge into a single stream, which is then served as one group of merged requests. In this paper, we propose a reservation-based hybrid disk bandwidth reduction policy that dynamically reserves the video server's I/O stream capacity for popular videos according to the server load, so that requests for popular videos can be scheduled immediately. The performance of the proposed policy is evaluated through simulations and compared with that of batching and piggybacking. The results show that the reservation-based hybrid disk bandwidth reduction policy provides a better probability of service, average waiting time, and percentage of frames saved than the batching and piggybacking policies.
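
As a minimal sketch of the general idea (reserving part of the server's stream capacity for popular titles and batching the rest), assuming toy stream counts and a static popularity set rather than the paper's load-driven reservation:

```python
import collections

class VideoServer:
    """Toy model: a fixed number of I/O streams, some reserved for popular videos."""
    def __init__(self, total_streams, reserved_streams, popular_titles):
        self.free = total_streams - reserved_streams
        self.reserved_free = reserved_streams
        self.popular = set(popular_titles)
        self.batches = collections.defaultdict(list)   # title -> waiting clients

    def request(self, client, title):
        # Popular titles may use the reserved pool and start immediately.
        if title in self.popular and self.reserved_free > 0:
            self.reserved_free -= 1
            return f"start {title} for {client} (reserved stream)"
        if self.free > 0:
            self.free -= 1
            return f"start {title} for {client}"
        # Otherwise batch the request; everyone waiting for the same title
        # will later share one multicast stream.
        self.batches[title].append(client)
        return f"queue {client} in batch for {title} ({len(self.batches[title])} waiting)"


if __name__ == "__main__":
    srv = VideoServer(total_streams=2, reserved_streams=1, popular_titles={"hit_movie"})
    print(srv.request("c1", "other_movie"))
    print(srv.request("c2", "other_movie"))   # no free stream left -> batched
    print(srv.request("c3", "hit_movie"))     # served from the reserved pool
```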


Design and Evaluation of a Channel Reservation Patching Method for True VOD Systems (True VOD 시스템을 위한 채널 예약 패칭 방법의 설계 및 평가)

  • Lee, Joo-Yung;Ha, Sook-Jeong;Bae, Ihn-Han
    • The KIPS Transactions: Part B / v.9B no.6 / pp.835-844 / 2002
  • The number of channels available to a video server is limited, since it is determined by the server's communication bandwidth. Several approaches, such as batching, piggybacking, and patching, have been proposed to reduce the I/O demand on the video server by sharing multicast data. Patching has been shown to be cost-efficient for VOD systems. Unlike conventional multicast techniques, patching is a dynamic multicast scheme that enables a new request to join an ongoing multicast; true VOD can therefore be achieved, since a new request can be served immediately without waiting for the next multicast. In this paper, we propose two channel reservation patching algorithms: fixed channel reservation patching and variable channel reservation patching. To schedule requests for popular videos immediately, these algorithms reserve video server channels either for a fixed number of popular videos or for a variable number of popular videos determined dynamically according to the server load. The performance of the proposed algorithms is evaluated through simulations and compared with that of simple patching. Our performance measures are the average defection rate, average latency, service fairness, and the amount of buffered data as functions of the video server load. Simulation results show that the proposed channel reservation patching algorithms provide better performance than the simple patching algorithm.
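
A minimal sketch of patching with reserved channels may help make the admission decision concrete. It assumes a fixed patching window, toy channel counts, and a static popular-video set; the paper's fixed and variable reservation policies would adjust the reserved count dynamically.

```python
class PatchingServer:
    """Toy patching model: a new request within the patching window joins the
    ongoing multicast and only needs a short patch channel; otherwise it needs
    a full regular channel.  A few channels are reserved for popular videos."""
    def __init__(self, channels, reserved, patching_window, popular):
        self.free = channels - reserved
        self.reserved_free = reserved
        self.window = patching_window          # seconds
        self.popular = set(popular)
        self.ongoing = {}                      # title -> start time of regular stream

    def _take_channel(self, title):
        if title in self.popular and self.reserved_free > 0:
            self.reserved_free -= 1
            return True
        if self.free > 0:
            self.free -= 1
            return True
        return False

    def request(self, title, now):
        start = self.ongoing.get(title)
        if start is not None and now - start <= self.window:
            # Patch: receive the missed prefix on a patch channel while
            # buffering the ongoing multicast.
            if self._take_channel(title):
                return f"patch {title}, catch up {now - start:.0f}s"
            return "rejected (no channel for patch)"
        if self._take_channel(title):
            self.ongoing[title] = now          # new regular multicast stream
            return f"regular stream for {title}"
        return "rejected (no channel)"


if __name__ == "__main__":
    srv = PatchingServer(channels=3, reserved=1, patching_window=60, popular={"hit"})
    print(srv.request("hit", now=0))     # regular stream
    print(srv.request("hit", now=30))    # within the window -> patch
    print(srv.request("hit", now=200))   # outside the window -> new regular stream
```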

An Efficient P2P Based Proxy Patching Scheme for Large Scale VOD Systems (대규모 VOD 시스템을 위한 효율적인 P2P 기반의 프록시 패칭 기법)

  • Kwon, Chun-Ja;Choi, Hwang-Kyu
    • The KIPS Transactions: Part A / v.12A no.5 s.95 / pp.341-354 / 2005
  • The main bottleneck in large-scale VOD systems is the storage or network I/O bandwidth consumed by the large number of simultaneous client requests, so efficient techniques are required to relieve it. Patching, which exploits multicast, is one of the most efficient techniques for overcoming this bottleneck. In this paper, we propose a new patching scheme, called P2P proxy patching, which improves typical patching by jointly using prefix caching and a P2P proxy. In the proposed scheme, each client acts as a proxy that multicasts the regular stream to other clients requesting the same video. Thanks to the P2P proxy and prefix caching, client requests that fall outside the patching window can receive the regular stream from clients in the previous patching group, without the VOD server allocating new regular channels to them. In the performance study, we show that our patching scheme reduces the server bandwidth requirement by about 33% compared with the existing patching technique, across a range of prefix sizes and request intervals.
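
The sketch below only illustrates the source-selection decision the abstract describes: a request outside the patching window takes the regular stream from a peer in the previous patching group and the missed prefix from a prefix cache, rather than from a new server channel. All function and parameter names are assumptions for the example.

```python
def choose_sources(now, stream_start, patching_window, peers_in_prev_group, prefix_cache_len):
    """Decide where a new client gets its data in a P2P proxy patching scheme.
    Returns (prefix_source, regular_source) as rough labels."""
    offset = now - stream_start
    prefix_source = "prefix cache (proxy)" if offset <= prefix_cache_len else "server patch channel"
    if offset <= patching_window:
        # Normal patching: join the server's ongoing multicast.
        regular_source = "server multicast"
    elif peers_in_prev_group:
        # Outside the window: a peer that buffered the regular stream relays it,
        # so no new regular channel is allocated at the server.
        regular_source = f"peer proxy {peers_in_prev_group[0]}"
    else:
        regular_source = "new server regular channel"
    return prefix_source, regular_source


if __name__ == "__main__":
    print(choose_sources(now=90, stream_start=0, patching_window=60,
                         peers_in_prev_group=["clientA"], prefix_cache_len=30))
```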

Address Auto-Resolution Network System for Neutralizing ARP-Based Attacks (ARP 기반 공격의 무력화를 위한 주소 자동 결정 네트워크 시스템)

  • Jang, RhongHo;Lee, KyungHee;Nyang, DaeHun;Youm, HeungYoul
    • KIPS Transactions on Computer and Communication Systems / v.6 no.4 / pp.203-210 / 2017
  • The Address Resolution Protocol (ARP) is used to bind a logical address to a physical address in many network technologies. However, since ARP is a stateless protocol, it is frequently abused to mount ARP-based attacks. Researchers have presented many techniques to improve ARP, but most of them require a high implementation cost or sacrifice network performance to improve ARP security. In this paper, we present an address auto-resolution (AAR) network system that neutralizes ARP-based attacks. AAR turns off the communication role of ARP messages (e.g., request and reply) but does not disable the ARP table. In our system, the destination MAC address is derived from the destination IP address, so the ARP table can be managed statically without prior knowledge (e.g., IP-MAC address pairs). AAR is therefore safe from ARP-based attacks, since it disables ARP messages, and it also saves network traffic for the same reason.
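
The abstract states that the destination MAC address is derived from the destination IP address; the exact mapping is not given, so the sketch below shows one plausible derivation (embedding the IPv4 address in a locally administered MAC) purely as an illustration, not the AAR design itself.

```python
import ipaddress

def mac_from_ip(ip_str, prefix=0x02):
    """Derive a locally administered MAC address from an IPv4 address.
    The 0x02 prefix sets the 'locally administered' bit; the real AAR mapping
    may differ -- this only illustrates a static IP->MAC function."""
    ip = ipaddress.IPv4Address(ip_str)
    octets = ip.packed                      # 4 bytes
    mac = bytes([prefix, 0x00]) + octets    # 6 bytes total
    return ":".join(f"{b:02x}" for b in mac)


if __name__ == "__main__":
    # Any host can fill its ARP table without ever sending an ARP request:
    print(mac_from_ip("192.168.1.20"))      # 02:00:c0:a8:01:14
```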

Analysis on the GPU Performance according to Hierarchical Memory Organization (계층적 메모리 구성에 따른 GPU 성능 분석)

  • Choi, Hongjun;Kim, Jongmyon;Kim, Cheolhong
    • The Journal of the Korea Contents Association / v.14 no.3 / pp.22-32 / 2014
  • Recently, GPGPU has been widely used for general-purpose processing as well as graphics processing, by providing hardware optimized for parallel processing. The memory system has a large effect on the performance of parallel processing units such as the GPU. In the GPU, a hierarchical memory architecture is implemented for high memory bandwidth, and both memory address coalescing and memory request merging techniques are widely used. This paper analyzes GPU performance under various memory organizations. According to our simulation results, GPU performance improves by 15.5%, 21.5%, 25.5%, and 30.9% when an 8KB, 16KB, 32KB, or 64KB L1 cache is added, respectively, compared with the case without an L1 cache. However, the experiments also show that some benchmarks lose performance, because the number of memory transactions increases due to data dependency. Moreover, the average memory access latency increases with the depth of the cache hierarchy when cache misses occur frequently.
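
As a side illustration of the memory address coalescing mentioned above (not the paper's simulation setup), the sketch below groups a warp's per-thread addresses into aligned segments; the 128-byte segment size and 32-thread warp are common GPU values assumed here.

```python
def coalesce(addresses, segment=128):
    """Group a warp's per-thread byte addresses into aligned memory segments.
    Fewer distinct segments means fewer memory transactions."""
    return sorted({addr // segment * segment for addr in addresses})


if __name__ == "__main__":
    warp = 32
    unit_stride = [4 * i for i in range(warp)]       # consecutive 4-byte words
    large_stride = [256 * i for i in range(warp)]    # one segment per thread
    print(len(coalesce(unit_stride)))    # 1 transaction
    print(len(coalesce(large_stride)))   # 32 transactions
```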

Performance Evaluation and Design of Upstream Scheduling Algorithms To Support Channel Bonding (채널 결합 기반 상향스트림 스케줄링 알고리즘 설계와 성능평가)

  • Roh, Sun-Sik
    • Journal of the Institute of Electronics Engineers of Korea TC / v.46 no.5 / pp.8-18 / 2009
  • CableLabs published the DOCSIS 3.0 specifications to supply broadband access to homes and small businesses. The primary technique of the DOCSIS 3.0 specification is channel bonding, which gives cable operators a flexible way to significantly increase upstream/downstream speeds. In this paper, we propose an upstream scheduler that supports channel bonding. The proposed scheduler consists of two sub-schedulers: a bonding group scheduler and a channel scheduler. We also propose three scheduling algorithms for allocating a CM's requested bandwidth to each bonded channel: an equivalent scheduling algorithm, a current-request-based scheduling algorithm, and a last-grant-based scheduling algorithm. To evaluate the performance of these algorithms and of the DOCSIS 3.0 MAC protocol, we develop a DOCSIS 3.0 simulator on top of the OPNET network simulator, modeling the DOCSIS network, CMTS, and CMs. Our results show that the equivalent scheduling algorithm is superior to the others in terms of transmission delay and throughput, and that the DOCSIS 3.0 protocol provides higher throughput than pre-DOCSIS 3.0 protocols.
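
The abstract names three allocation policies without detailing them; the sketch below shows an even ("equivalent") split of one CM's request across its bonded channels, plus a generic weighted split standing in for the current-request-based and last-grant-based variants. Function names and the remainder handling are assumptions.

```python
def equivalent_allocation(request_bytes, bonded_channels):
    """Split one cable modem's bandwidth request evenly across its bonded
    upstream channels (the 'equivalent' policy); remainder bytes go to the
    first channels so nothing is lost."""
    n = len(bonded_channels)
    base, rem = divmod(request_bytes, n)
    return {ch: base + (1 if i < rem else 0) for i, ch in enumerate(bonded_channels)}


def weighted_allocation(request_bytes, channel_weights):
    """Generic weighted split -- e.g. weights could reflect each channel's
    current load (current-request-based) or the last grant size (last-grant-based)."""
    total = sum(channel_weights.values())
    alloc = {ch: request_bytes * w // total for ch, w in channel_weights.items()}
    leftover = request_bytes - sum(alloc.values())   # hand out rounding leftovers
    for ch in list(alloc)[:leftover]:
        alloc[ch] += 1
    return alloc


if __name__ == "__main__":
    print(equivalent_allocation(10000, ["u1", "u2", "u3"]))
    print(weighted_allocation(10000, {"u1": 1, "u2": 2, "u3": 2}))
```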

On Optimizing Route Discovery of Topology-based On-demand Routing Protocols for Ad Hoc Networks

  • Seet, Boon-Chong;Lee, Bu-Sung;Lau, Chiew-Tong
    • Journal of Communications and Networks / v.5 no.3 / pp.266-274 / 2003
  • One of the major issues in current on-demand routing protocols for ad hoc networks is the high resource consumption of route discovery traffic. In these protocols, flooding is typically used by the source to broadcast a route request (RREQ) packet in search of a route to the destination. Such network-wide flooding disturbs many nodes unnecessarily by querying more nodes than needed, leading to rapid exhaustion of valuable network resources such as wireless bandwidth and battery power. In this paper, a simple optimization technique for efficient route discovery is proposed. The technique is location-based and can be used in conjunction with the existing Location-Aided Routing (LAR) scheme to further reduce route discovery overhead. A unique feature of our technique, not found in LAR and most other protocols, is the selective use of unicast instead of broadcast for route request/query transmission, made possible by a novel reuse of routing and location information. We refer to this new optimization as the UNIQUE (UNIcast QUEry) technique. This paper studies the efficacy of UNIQUE by applying it to the route discovery of the Dynamic Source Routing (DSR) protocol and compares it with a DSR protocol optimized with LAR only. The results show that UNIQUE can further reduce the overall routing overhead by as much as 58% under highly mobile conditions. With less congestion caused by routing traffic, data packet delivery also improves in terms of end-to-end delay and the number of data packets successfully delivered to their destinations.
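
As an illustration of the selective-unicast idea (a sketch under assumed data structures, not the UNIQUE protocol itself), a source that still holds a fresh-enough cached route or location entry for the destination can unicast its query instead of flooding:

```python
import time

def send_route_request(dest, route_cache, location_table, max_age=30.0, now=None):
    """Choose between unicasting and broadcasting a route request (RREQ).
    A fresh-enough cached route or location entry lets the source unicast the
    query toward the destination instead of flooding the whole network."""
    now = time.time() if now is None else now
    route = route_cache.get(dest)
    if route and now - route["timestamp"] <= max_age:
        return ("unicast", route["next_hop"])          # query along the (possibly stale) route
    loc = location_table.get(dest)
    if loc and now - loc["timestamp"] <= max_age:
        return ("unicast-toward", loc["position"])     # geographic forwarding of the query
    return ("broadcast", None)                         # fall back to (LAR-style) flooding


if __name__ == "__main__":
    routes = {"nodeD": {"next_hop": "nodeB", "timestamp": 100.0}}
    locations = {"nodeE": {"position": (120, 45), "timestamp": 110.0}}
    print(send_route_request("nodeD", routes, locations, now=120.0))  # ('unicast', 'nodeB')
    print(send_route_request("nodeE", routes, locations, now=120.0))  # ('unicast-toward', (120, 45))
    print(send_route_request("nodeF", routes, locations, now=120.0))  # ('broadcast', None)
```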

An Efficient MAC Protocol for Supporting Multimedia Services in APON (APON에서 멀티미디어 전송을 위한 효율적인 MAC 프로토콜)

  • 은지숙;이호숙;윤현정;소원호;김영천
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.1A / pp.132-141 / 2000
  • In this paper, we propose a MAC protocol for APON that supports multi-class traffic such as CBR/VBR, ABR, and UBR, in order to guarantee the required QoS of each service. To this end, we analyze the performance of various request mechanisms and employ a different request mechanism for each traffic class. Upstream and downstream frame structures that minimize transmission overhead are proposed based on our request mechanism. The proposed MAC protocol applies different priorities in the permit distribution process. CBR/VBR traffic, with stringent requirements on CDV or delay, is allocated before any other class. ABR traffic, which has non-strict CDV or delay criteria, flexibly uses the available bandwidth but is guaranteed a minimum cell rate (MCR). UBR traffic is allocated the remaining capacity with the lowest priority. The performance of the proposed protocol is evaluated in terms of transfer delay and 1-point CDV under various offered loads. The simulation results show that the proposed protocol guarantees the required QoS of the corresponding category while using the available resources in an efficient and dynamic way.
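
A minimal sketch of strict-priority permit distribution over one upstream frame, assuming placeholder slot counts and demands: CBR/VBR is served first, ABR is guaranteed its MCR and may then use leftover capacity, and UBR receives whatever remains.

```python
def distribute_permits(slots, cbr_vbr_demand, abr_demand, abr_mcr, ubr_demand):
    """Allocate upstream permits per frame by class priority:
    CBR/VBR first, then ABR (its MCR guaranteed before any extra), then UBR."""
    grants = {}
    grants["cbr_vbr"] = min(cbr_vbr_demand, slots)
    slots -= grants["cbr_vbr"]

    abr_guaranteed = min(abr_mcr, abr_demand, slots)        # MCR portion
    slots -= abr_guaranteed
    abr_extra = min(abr_demand - abr_guaranteed, slots)     # flexible use of leftover bandwidth
    slots -= abr_extra
    grants["abr"] = abr_guaranteed + abr_extra

    grants["ubr"] = min(ubr_demand, slots)                  # lowest priority, remaining capacity
    return grants


if __name__ == "__main__":
    print(distribute_permits(slots=50, cbr_vbr_demand=30, abr_demand=25, abr_mcr=10, ubr_demand=20))
    # {'cbr_vbr': 30, 'abr': 20, 'ubr': 0}
```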
