• Title/Summary/Keyword: queuing technology


Incentive-Compatible Priority Pricing and Transfer Analysis in Database Services

  • Kim, Yong J.
    • The Journal of Information Technology and Database
    • /
    • v.4 no.2
    • /
    • pp.21-32
    • /
    • 1998
  • A primary concern of physical database design has been efficient retrieval and update of records, because predictable performance of a DBMS is indispensable to time-critical missions. To maintain such performance, database managers often spend more than, or as much as, the goals of an organization can warrant. The motivation of this research stems from the fact that even the predictable performance of a physical database can be hampered by stochastic query processing times, the physical configuration of the database, and random arrival processes of queries, all of which together affect the overall performance of a DBMS. In particular, when queuing delays arise from limited capacity or peak-hour congestion, this paper suggests prioritizing database services. A surprising finding is that the transition from a non-priority system to a corresponding priority-based system can be Pareto-improving, in the sense that no user in the system is worse off after the transition. Prioritizing database services can thus be a viable option for efficient database management.
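
The Pareto-improvement claim is easiest to see numerically. The sketch below compares mean queueing delays in a plain FCFS M/M/1 system and a two-class non-preemptive priority system using the standard Cobham formulas; the arrival and service rates are assumed values for illustration, not figures from the paper.

```python
# Illustrative comparison: FCFS M/M/1 vs. two-class non-preemptive priority queue.
# The rates below are assumed for the example; they are not taken from the paper.

def mm1_fcfs_wait(lam, mu):
    """Mean queueing delay (excluding service) in an M/M/1 FCFS queue."""
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    return rho / (mu - lam)

def mm1_priority_waits(lam_hi, lam_lo, mu):
    """Mean queueing delays (high, low) under non-preemptive priority
    with exponential service at rate mu (Cobham's formulas)."""
    rho_hi, rho_lo = lam_hi / mu, lam_lo / mu
    assert rho_hi + rho_lo < 1, "queue must be stable"
    residual = (lam_hi + lam_lo) / mu**2          # mean residual work in service
    w_hi = residual / (1 - rho_hi)
    w_lo = residual / ((1 - rho_hi) * (1 - rho_hi - rho_lo))
    return w_hi, w_lo

if __name__ == "__main__":
    mu = 10.0                  # queries served per unit time (assumed)
    lam_hi, lam_lo = 3.0, 5.0  # arrival rates of the two user classes (assumed)
    print(f"FCFS wait for everyone : {mm1_fcfs_wait(lam_hi + lam_lo, mu):.3f}")
    w_hi, w_lo = mm1_priority_waits(lam_hi, lam_lo, mu)
    print(f"Priority wait, class 1 : {w_hi:.3f}")
    print(f"Priority wait, class 2 : {w_lo:.3f}")
```

Under these assumed rates the high-priority class waits far less and the low-priority class waits longer than under FCFS, which is the gap the paper's priority pricing and transfer scheme is meant to compensate so that no class ends up worse off.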


A Study for a Methodology to Analyze Container Delays versus Security (보안대비 컨테이너 지연분석을 위한 방법론적 연구)

  • Yoon, Dae-Gwun
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.13 no.1 s.28
    • /
    • pp.47-54
    • /
    • 2007
  • After September 11, 2001, the United States Customs and Border Protection (CBP) set up inspection stations in seaport terminals. These inspection stations, however, may directly and indirectly increase delay times in the seaports, especially under high and severe security levels. This paper studies a methodology for analyzing container delays versus security as incurred by the various layouts of inspection stations in the United States.
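
As a rough illustration of how the number of inspection lanes in a layout drives container delay, the sketch below evaluates the mean queueing delay at a bank of parallel inspection lanes with the Erlang-C (M/M/c) formula. The arrival and inspection rates are assumptions for illustration only, not data from the paper.

```python
import math

def erlang_c_wait(lam, mu, c):
    """Mean wait in queue for an M/M/c inspection station:
    lam arrivals/hr, mu inspections/hr per lane, c parallel lanes."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c
    assert rho < 1, "station must be stable"
    erlang_term = a**c / (math.factorial(c) * (1 - rho))
    p_wait = erlang_term / (sum(a**k / math.factorial(k) for k in range(c)) + erlang_term)
    return p_wait / (c * mu - lam)     # mean queueing delay in hours

if __name__ == "__main__":
    lam = 40.0   # containers per hour selected for inspection (assumed)
    mu = 6.0     # inspections per hour per lane (assumed)
    for lanes in (7, 8, 10):
        print(f"{lanes} lanes -> mean wait {erlang_c_wait(lam, mu, lanes) * 60:.1f} min")
```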


Development of a Gateway System for Social Network Services

  • Kwon, Dongwoo;Jung, Insik;Lee, Shinho;Kim, Hyeonwoo;Ju, Hongtaek
    • Journal of Communications and Networks
    • /
    • v.17 no.2
    • /
    • pp.118-125
    • /
    • 2015
  • In this paper, we propose a method to reduce mobile social network service (SNS) traffic using a mobile integrated SNS gateway (MISG), improving network communication performance between the mobile client and SNS servers. The gateway connects the client and the SNS servers through a contents adapter and a web service adapter, and helps improve communication performance with its cache engine. An integrated SNS application, the user's client, communicates with the gateway server using an integrated SNS protocol. In addition, the gateway can alert the client to new SNS content through a broker server implemented with the message queuing telemetry transport (MQTT) protocol. We design and develop the modules of the gateway server and the integrated SNS application, then measure the performance of MISG in terms of content response time and describe the results of the experiment.
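
A minimal sketch of the notification path is shown below: a client subscribes to the gateway's broker over MQTT and receives a small alert payload when new SNS content is available. It is written against the paho-mqtt 1.x callback API; the broker address and the topic layout ("misg/notify/<user>") are assumptions for illustration, not the paper's actual protocol.

```python
# Minimal sketch of receiving new-content alerts from an MQTT broker,
# in the spirit of the MISG broker server described above.
# Written against the paho-mqtt 1.x client API; names are hypothetical.
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.org"   # hypothetical gateway broker address
TOPIC = "misg/notify/alice"          # hypothetical per-user notification topic

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    client.subscribe(TOPIC, qos=1)   # ask the broker for at-least-once delivery

def on_message(client, userdata, msg):
    # The gateway would publish a small payload describing the new SNS content;
    # the client can then fetch the content itself through the gateway cache.
    print(f"new content alert on {msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_forever()
```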

Modeling and Analysis of Burst Switching for Wireless Packet Data (무선 패킷 데이터를 위한 Burst switching의 모델링 및 분석)

  • Park, Kyoung-In;Lee, Chae Young
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.28 no.2
    • /
    • pp.139-146
    • /
    • 2002
  • Third-generation mobile communication needs to provide multimedia services at increased data rates, so efficient allocation of radio and network resources is very important. This paper models 'burst switching' as an efficient radio resource allocation scheme and compares its performance with circuit and packet switching. In burst switching, a radio resource is allocated to a call for the duration of a data burst rather than for an entire session or a single packet, as in circuit and packet switching respectively. After a stream of data bursts, if no packet arrives during the timer2 value ($\tau_{2}$), the physical-layer channel is released and the call enters a suspended state. If no packet arrives for the timer1 value ($\tau_{1}$) while in the suspended state, the upper layer is also released. Thus, the two timer values that minimize the sum of access delay and queuing delay need to be determined. In this paper, we focus on the choice of $\tau_{2}$ that minimizes the access and queuing delay, under the assumption that traffic arrivals follow a Poisson process. The simulation, however, is performed with a Pareto distribution, which describes bursty traffic well. The computational results show that the delay and the packet loss probability of burst switching are dramatically reduced compared to packet switching.
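
The role of $\tau_{2}$ can be illustrated with a tiny Monte Carlo sketch, assuming exponential inter-arrival gaps (Poisson arrivals) and a fixed channel re-access delay: a larger $\tau_{2}$ means fewer re-access events but a longer idle hold of the physical channel. The rates and the access delay below are assumed values, not the paper's parameters.

```python
# Monte Carlo sketch of the timer trade-off in burst switching: a larger tau2
# keeps the channel longer (wasted idle hold time) but triggers fewer re-access
# procedures (less access delay). All numbers are assumptions for illustration.
import random

def timer_tradeoff(lam, tau2, access_delay, n_packets=200_000, seed=1):
    random.seed(seed)
    total_access, total_idle_hold = 0.0, 0.0
    for _ in range(n_packets):
        gap = random.expovariate(lam)        # Poisson arrivals -> exponential gaps
        if gap > tau2:
            # timer expired: channel released, next packet pays the access delay
            total_access += access_delay
            total_idle_hold += tau2           # channel was held idle until expiry
        else:
            total_idle_hold += gap            # channel held idle until next packet
    return total_access / n_packets, total_idle_hold / n_packets

if __name__ == "__main__":
    lam, access_delay = 5.0, 0.05             # packets/s and re-access delay in s (assumed)
    for tau2 in (0.05, 0.2, 0.5, 1.0):
        acc, idle = timer_tradeoff(lam, tau2, access_delay)
        print(f"tau2={tau2:4.2f}s  mean access delay={acc * 1000:5.1f} ms  "
              f"mean idle hold={idle * 1000:5.1f} ms")
```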

Adaptive Bandwidth Control System with Incoming Traffic in Home Network

  • Shin Hye Min;Kim Hyoung Yuk;Lee Ho Chan;Kim Hong Seok;Park Hong Seong
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.147-151
    • /
    • 2004
  • QoS is a subject of high interest for the successful deployment of various services in a home gateway. The gateway can support QoS by installing existing queuing disciplines, but these control only the outgoing traffic and guarantee only its QoS. In the home gateway it is also important to guarantee QoS of the incoming traffic. This paper proposes an adaptive traffic control that guarantees QoS of traffic incoming to the home gateway. In the proposed method, the upper limit on the available sending-rate bandwidth varies with the receiving rate, narrowing the gap between the allocated rate and the actual service rate of the traffic. Experiments on a test bed show that the proposed method is valid.
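
Purely as an illustration of letting the sending-rate cap follow the observed receiving rate, the sketch below applies a simple smoothed update rule. The control law, smoothing factor, margin, and rates are all assumptions; the paper's actual mechanism may differ.

```python
# Minimal sketch of an adaptive sending-rate cap that tracks the measured
# incoming (receiving) rate. Constants and rates are assumptions only.
def update_cap(current_cap, measured_recv_rate, alpha=0.3, margin=0.9, link_capacity=100.0):
    """Move the outgoing-traffic cap toward a fraction of the capacity left
    over by the observed receiving rate, so incoming traffic keeps headroom
    (all rates in Mbit/s)."""
    target = margin * (link_capacity - measured_recv_rate)   # leave room for incoming flows
    new_cap = (1 - alpha) * current_cap + alpha * target     # exponential smoothing
    return max(1.0, min(new_cap, link_capacity))

if __name__ == "__main__":
    cap = 80.0
    for recv in (10, 30, 60, 60, 20):   # observed incoming-rate samples (Mbit/s, assumed)
        cap = update_cap(cap, recv)
        print(f"recv={recv:5.1f} Mbit/s -> sending cap={cap:5.1f} Mbit/s")
```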


SQUIRREL SEARCH PID CONTROLLER ALGORITHM BASED ACTIVE QUEUE MANAGEMENT TECHNIQUE FOR TCP COMMUNICATION NETWORKS

  • Keerthipati.Kumar;R.A. KARTHIKA
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.4
    • /
    • pp.123-133
    • /
    • 2023
  • Active queue management (AQM) is a leading congestion control scheme that can maintain smaller queuing delays and less packet loss, with better network utilization and throughput, by intentionally dropping packets at intermediate nodes in TCP/IP (transmission control protocol/Internet protocol) networks. To accelerate the responsiveness of the AQM framework, proportional-integral-derivative (PID) controllers are used. Despite its simplicity, the PID controller can effectively handle a range of complex problems; however, finding optimal PID parameters with conventional procedures is complicated. Several new strategies have recently been developed to tune the PID controller parameters. In this paper, we therefore develop a squirrel-search-based PID controller that dynamically finds the controller gain parameters for AQM. The gain parameters are chosen by minimizing the integrated absolute error (IAE) in order to ensure less packet loss, high link utilization, and a stable queue length for TCP networks.
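
The PID-plus-IAE objective can be sketched with a toy fluid queue: the drop probability is driven by the queue-length error, and the IAE accumulated over the run is the quantity a metaheuristic such as squirrel search would minimize over the gains. The queue dynamics and the gains below are assumptions, not the optimized values from the paper.

```python
# Toy sketch of a PID-driven AQM loop and its IAE tuning objective.
# Queue dynamics, rates, and gains are assumptions for illustration.
def simulate_pid_aqm(kp, ki, kd, q_ref=100.0, steps=2000, dt=0.01):
    q, integ, prev_err, iae = 0.0, 0.0, 0.0, 0.0
    arrival_rate, service_rate = 1500.0, 1200.0     # packets/s (assumed)
    for _ in range(steps):
        err = q - q_ref
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        p_drop = min(1.0, max(0.0, kp * err + ki * integ + kd * deriv))
        # toy fluid model: accepted arrivals minus service; queue stays non-negative
        q = max(0.0, q + (arrival_rate * (1.0 - p_drop) - service_rate) * dt)
        iae += abs(err) * dt                         # integrated absolute error
    return iae

if __name__ == "__main__":
    # A metaheuristic would search (kp, ki, kd) to minimize IAE;
    # here we simply compare two hand-picked gain sets.
    for gains in [(0.002, 0.001, 0.0), (0.005, 0.002, 0.0001)]:
        print(gains, "-> IAE =", round(simulate_pid_aqm(*gains), 2))
```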

SPMLD: Sub-Packet based Multipath Load Distribution for Real-Time Multimedia Traffic

  • Wu, Jiyan;Yang, Jingqi;Shang, Yanlei;Cheng, Bo;Chen, Junliang
    • Journal of Communications and Networks
    • /
    • v.16 no.5
    • /
    • pp.548-558
    • /
    • 2014
  • Load distribution is vital to the performance of multipath transport. The task becomes more challenging in real-time multimedia applications (RTMA), which impose stringent delay requirements. Two key issues must be addressed: 1) how to minimize end-to-end delay, and 2) how to alleviate the packet reordering that incurs additional recovery time at the receiver. In this paper, we propose sub-packet based multipath load distribution (SPMLD), a new model that splits traffic at the granularity of sub-packets. SPMLD aims to minimize total packet delay by effectively aggregating multiple parallel paths into a single virtual path. First, we formulate packet splitting over multiple paths as a constrained optimization problem and derive its solution with a progressive approximation method. Second, within the solution, we analyze queuing delay by introducing a D/M/1 model and obtain the expression of the dynamic packet splitting ratio for each path. Third, to describe SPMLD's scheduling policy, we propose two distributed algorithms implemented in the source and destination nodes, respectively. We evaluate the performance of SPMLD through extensive simulations in QualNet using real-time H.264 video streaming. Experimental results demonstrate that SPMLD outperforms previous flow- and packet-based load distribution models in terms of video peak signal-to-noise ratio, total packet delay, end-to-end delay, and risk of packet reordering. In addition, SPMLD's extra overhead is negligible compared to the input video stream.
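
The D/M/1 queueing-delay term mentioned above has a closed form once the root $\sigma$ of $\sigma = e^{-(1-\sigma)/\rho}$ is found; the sketch below obtains it by fixed-point iteration and compares the resulting wait with the M/M/1 value at the same load. The path rates are assumed values for illustration.

```python
# Sketch of the D/M/1 mean queueing delay (deterministic arrivals, exponential
# service) via fixed-point iteration for sigma. Rates are assumed values.
import math

def dm1_wait(lam, mu, iters=200):
    """Mean queueing delay of a D/M/1 queue with arrival rate lam and
    exponential service rate mu."""
    rho = lam / mu
    assert rho < 1, "path must not be overloaded"
    sigma = 0.5
    for _ in range(iters):
        sigma = math.exp(-(1.0 - sigma) / rho)   # fixed point of sigma = e^{-(1-sigma)/rho}
    return sigma / (mu * (1.0 - sigma))

if __name__ == "__main__":
    mu = 1000.0                        # packets/s the path can serve (assumed)
    for lam in (400.0, 700.0, 900.0):
        w = dm1_wait(lam, mu)
        mm1 = lam / (mu * (mu - lam))
        print(f"load {lam / mu:.1f} -> D/M/1 wait {w * 1000:.3f} ms "
              f"(M/M/1 would be {mm1 * 1000:.3f} ms)")
```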

Energy-Efficient Routing Protocol based on Interference Awareness for Transmission of Delay-Sensitive Data in Multi-Hop RF Energy Harvesting Networks (다중 홉 RF 에너지 하베스팅 네트워크에서 지연에 민감한 데이터 전송을 위한 간섭 인지 기반 에너지 효율적인 라우팅 프로토콜)

  • Kim, Hyun-Tae;Ra, In-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.3
    • /
    • pp.611-625
    • /
    • 2018
  • With innovative advances in wireless communication technology, much research on maximizing network lifetime through energy harvesting has been actively performed in the areas of network resource optimization, QoS-guaranteed transmission, and energy-intelligent routing. As is well known, it is very hard to guarantee end-to-end network delay in multi-hop RF (radio frequency) energy harvesting wireless networks because the amount of harvested energy is uncertain. To minimize end-to-end delay in such networks, this paper proposes an interference-aware, energy-efficient routing metric and protocol that takes into account the various delays caused by co-channel interference, energy harvesting time, and queuing in a relay node. The proposed method maximizes end-to-end throughput by avoiding the packet congestion that causes load imbalance, reducing the waiting time due to energy exhaustion, and restraining the delay caused by co-channel interference. Simulation results using the ns-3 simulator show that the proposed method outperforms existing methods in terms of throughput, end-to-end delay, and energy consumption.
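
One plausible way to read such a metric, sketched below, is a per-hop cost that sums queuing delay, an expected wait to harvest missing energy, and an interference-induced deferral, minimized over a path with Dijkstra's algorithm. The three terms, their simple addition, and all numbers are assumptions for illustration; the paper's actual metric may weight or model them differently.

```python
# Hypothetical per-hop delay cost combined over a path with Dijkstra's algorithm,
# in the spirit of the interference- and energy-aware metric described above.
import heapq

def hop_cost(queue_len, service_rate, deficit_energy, harvest_rate, interferers, slot):
    queuing_delay = queue_len / service_rate                   # backlog drained at service_rate
    harvest_delay = max(0.0, deficit_energy) / harvest_rate    # wait to harvest missing energy
    interference_delay = interferers * slot                    # deferral per co-channel neighbor
    return queuing_delay + harvest_delay + interference_delay

def shortest_delay_path(graph, src, dst):
    """graph: {node: [(neighbor, cost), ...]} -> (total delay, path)."""
    best, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        for nxt, c in graph.get(node, []):
            nd = d + c
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    # hypothetical 4-node relay topology with per-hop costs from hop_cost()
    g = {
        "S": [("A", hop_cost(20, 500, 0.0, 2.0, 1, 0.01)),
              ("B", hop_cost(5, 500, 3.0, 2.0, 0, 0.01))],
        "A": [("D", hop_cost(10, 500, 0.0, 2.0, 2, 0.01))],
        "B": [("D", hop_cost(2, 500, 0.0, 2.0, 1, 0.01))],
    }
    print(shortest_delay_path(g, "S", "D"))
```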

Design of Dynamic Buffer Assignment and Message model for Large-scale Process Monitoring of Personalized Health Data (개인화된 건강 데이터의 대량 처리 모니터링을 위한 메시지 모델 및 동적 버퍼 할당 설계)

  • Jeon, Young-Jun;Hwang, Hee-Joung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.6
    • /
    • pp.187-193
    • /
    • 2015
  • The ICT healing platform sets goals that include preventing chronic diseases and issuing early disease warnings based on personal information such as bio-signals and life habits. The two-step open system (TOS) was designed as a relay between the healing platform and the storage of personal health data, and it adopted a publish/subscribe (pub/sub) service over large-scale connections to transmit (monitor) the data-processing process in real time. In the early TOS pub/sub design, however, the same buffer was allocated to every connection, regardless of connection idling and message type, when encoding connection messages with the deflate algorithm. The dynamic buffer allocation proposed in this study works as follows. The message transmission types of each connection are first queued; each queue's features are extracted, computed, and converted into a vector through tf-idf; the vectors are fed into k-means clustering to form clusters; connections assigned to a cluster re-allocate resources according to that cluster's resource table; and the centroid of each cluster selects, in advance, a queuing pattern that represents the cluster and presents it as a resource reference table (encoding efficiency by buffer size). The proposed design trades off the computational resources needed for clustering and feature calculation against network bandwidth, so that the encoding buffer resources of TOS are allocated efficiently across network connections, increasing the tps (the number of real-time data processing and monitoring connections per unit time) of TOS.
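
The clustering step can be sketched compactly: treat each connection's recent message-type queue as a token sequence, vectorize it with tf-idf, cluster with k-means, and look up a per-cluster buffer size. The message-type names, cluster count, and buffer sizes below are assumptions for illustration.

```python
# Sketch of the tf-idf + k-means step for per-cluster buffer allocation.
# Message types, cluster count, and buffer sizes are hypothetical.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# hypothetical per-connection queues of recent message types
connection_queues = {
    "conn-1": "bio bio bio habit bio bio",
    "conn-2": "habit habit report habit habit",
    "conn-3": "bio bio bio bio bio bio",
    "conn-4": "report report habit report",
}

vectors = TfidfVectorizer().fit_transform(connection_queues.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# hypothetical resource table: deflate encoding buffer size (bytes) per cluster
buffer_table = {0: 4096, 1: 16384}
for conn, label in zip(connection_queues, labels):
    print(f"{conn}: cluster {label} -> deflate buffer {buffer_table[label]} bytes")
```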

Ethernet-Based Avionic Databus and Time-Space Partition Switch Design

  • Li, Jian;Yao, Jianguo;Huang, Dongshan
    • Journal of Communications and Networks
    • /
    • v.17 no.3
    • /
    • pp.286-295
    • /
    • 2015
  • Avionic databuses fulfill a critical function in the connection and communication of aircraft components and functions such as flight control, navigation, and monitoring. Ethernet-based avionic databuses have become the mainstream for large aircraft owing to their advantages of full-duplex communication with high bandwidth, low latency, low packet loss, and low cost. As a new-generation aviation network communication standard, avionics full-duplex switched Ethernet (AFDX) adopted concepts from the telecom standard asynchronous transfer mode (ATM). In this technology, the switches are the key devices influencing overall performance. This paper reviews the avionic databus with emphasis on switch architecture classifications. Based on a comparison, analysis, and discussion of the different switch architectures, we propose a new avionic switch design based on a time-division switch fabric for high flexibility and scalability, merging the design concept of a space-partitioned switch fabric to achieve reliability and predictability. The new switch architecture, called space partitioned shared memory switch (SPSMS), isolates the memory space for each output port. This reduces competition for resources and avoids conflicts, decreases the packet forwarding latency through the switch, and reduces the packet loss rate. A simulation of the architecture with optimized network engineering tools (OPNET) confirms the efficiency and significant performance improvement over a classic shared memory switch in terms of overall packet latency, queuing delay, and queue size.
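
The motivation for partitioning can be seen in a toy comparison of buffer admission policies, sketched below: with one shared pool, a single congested output port can consume all cells, while per-port partitions confine the loss to that port. Cell counts and port numbers are assumptions for illustration, not the OPNET model's parameters.

```python
# Toy contrast between a fully shared packet buffer and per-output-port
# partitioning in the spirit of SPSMS. Sizes and port counts are assumed.
from collections import deque

class SharedMemorySwitch:
    def __init__(self, total_cells):
        self.free = total_cells
        self.queues = {}

    def enqueue(self, port, pkt):
        if self.free == 0:
            return False                  # any hot port can use up the whole pool
        self.queues.setdefault(port, deque()).append(pkt)
        self.free -= 1
        return True

class SpacePartitionedSwitch:
    def __init__(self, total_cells, ports):
        self.quota = {p: total_cells // len(ports) for p in ports}
        self.queues = {p: deque() for p in ports}

    def enqueue(self, port, pkt):
        if len(self.queues[port]) >= self.quota[port]:
            return False                  # only the congested port's traffic is dropped
        self.queues[port].append(pkt)
        return True

if __name__ == "__main__":
    ports = [0, 1, 2, 3]
    shared, spsms = SharedMemorySwitch(64), SpacePartitionedSwitch(64, ports)
    for i in range(100):                  # port 0 is congested
        shared.enqueue(0, i)
        spsms.enqueue(0, i)
    print("shared pool cells left for other ports:", shared.free)
    print("SPSMS cells still reserved for port 1:",
          spsms.quota[1] - len(spsms.queues[1]))
```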