• Title/Summary/Keyword: shared buffer memory

Implementation of a Shared Buffer ATM Switch Embedded Scalable Pipelined Buffer Memory (가변형 파이프라인방식 메모리를 내장한 공유버퍼 ATM 스위치의 구현)

  • 정갑중
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.5 / pp.703-717 / 2002
  • This paper illustrates the implementation of a scalable shared buffer asynchronous transfer mode (ATM) switch. The designed switch has a shared buffer built from a pipelined memory with an access time of 4 ns. This high-speed buffer access time makes it feasible to implement a shared buffer ATM switch with a large switching capacity. The architecture provides flexible switching performance and port-size scalability by keeping queue address control independent of buffer memory control, so the switch size and buffer size can be reconfigured without serious circuit redesign. The prototype chip has a 128-cell shared buffer and a 4 × 4 switch size. It is integrated in a 0.6 µm, double-metal, single-poly CMOS technology, operates at 80 MHz, and supports 640 Mbps per port.

Design of a shared buffer memory switch with a linked-list architecture for ATM applications (Linked-list 구조를 갖는 ATM용 공통 버퍼형 메모리 스위치 설계)

  • 이명희;조경록
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.11 / pp.2850-2861 / 1996
  • This paper describes the design of an ATM switch LSI of the shared buffer type that uses a linked-list architecture to control memory access. The proposed switch LSI consists of buffer memory, controller, and FIFO memory blocks, plus two special circuits to avoid cell blocking. The first is a new address control scheme with a linked-list architecture, which keeps the buffer memory addresses serially ordered from write address to read address; all addresses are linked in a chain and operated like a FIFO. The second is a slip-flag register that holds the address chain when the read address misses the reading of data. These circuits control the buffer memory efficiently and reduce the cell loss rate. The designed chip operates with a 33 ns cycle time and occupies 2.7 × 2.8 mm² in 0.8 µm CMOS technology.
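
A minimal C sketch of the linked-list address control described in this abstract, under assumed parameters (128 cells, 4 output ports) and with illustrative names such as `sb_enqueue`; it models the address chains only, not the slip-flag register or the actual LSI:

```c
#include <stdio.h>

#define NUM_CELLS 128          /* cells in the shared buffer           */
#define NUM_PORTS 4            /* output ports, one logical queue each */
#define NIL       -1

static int next_addr[NUM_CELLS];        /* linked-list pointers between cell addresses */
static int free_head;                   /* head of the free-address chain              */
static int q_head[NUM_PORTS], q_tail[NUM_PORTS];

/* Build the initial free-address chain: 0 -> 1 -> ... -> NUM_CELLS-1. */
static void sb_init(void)
{
    for (int i = 0; i < NUM_CELLS - 1; i++) next_addr[i] = i + 1;
    next_addr[NUM_CELLS - 1] = NIL;
    free_head = 0;
    for (int p = 0; p < NUM_PORTS; p++) q_head[p] = q_tail[p] = NIL;
}

/* Cell arrival: take an address from the free chain and append it to the
 * destination port's chain.  Returns the write address, or NIL when the
 * shared buffer is full (cell loss). */
static int sb_enqueue(int port)
{
    if (free_head == NIL) return NIL;             /* buffer full */
    int addr = free_head;
    free_head = next_addr[addr];
    next_addr[addr] = NIL;
    if (q_tail[port] == NIL) q_head[port] = addr;
    else                     next_addr[q_tail[port]] = addr;
    q_tail[port] = addr;
    return addr;                                  /* cell payload is written here */
}

/* Cell departure: pop the head of the port's chain (FIFO order) and
 * return the address to the free chain. */
static int sb_dequeue(int port)
{
    int addr = q_head[port];
    if (addr == NIL) return NIL;                  /* queue empty */
    q_head[port] = next_addr[addr];
    if (q_head[port] == NIL) q_tail[port] = NIL;
    next_addr[addr] = free_head;                  /* recycle the address */
    free_head = addr;
    return addr;                                  /* cell payload is read from here */
}

int main(void)
{
    sb_init();
    int a = sb_enqueue(2);                        /* cell arrives for port 2 */
    printf("wrote at %d, read from %d\n", a, sb_dequeue(2));
    return 0;
}
```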

A Study on Buffer and Shared Memory Optimization for Multi-Processor System (다중 프로세서 시스템에서의 버퍼 및 공유 메모리 최적화 연구)

  • Kim, Jong-Su;Mun, Jong-Uk;Im, Gang-Bin;Jeong, Gi-Hyeon;Choe, Gyeong-Hui
    • The KIPS Transactions:PartA / v.9A no.2 / pp.147-162 / 2002
  • A multi-processor system with fast I/O devices improves processing performance and reduces the bottleneck caused by I/O concentration. In such a system, the performance impact of the shared memory used for exchanging data between processors varies with its configuration and utilization. This paper suggests a prediction model for buffer and shared memory optimization under an interrupt recognition method using a mailbox. Ethernet (IEEE 802.3) packets are used as the system input, and the amount of memory used is measured for different network bandwidths and degrees of burstiness. Empirical studies show that the amount of buffer and shared memory varies with the packet concentration rate as well as the I/O bandwidth, and they also show the correlation between the two memories.
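
For readers unfamiliar with the mailbox-style exchange the study measures, the following is a hypothetical C sketch of a shared-memory mailbox with an interrupt-style doorbell flag; the layout, slot count, and names (`mailbox_t`, `mbox_post`, `mbox_fetch`) are assumptions for illustration, not the system evaluated in the paper:

```c
#include <stdint.h>
#include <string.h>

#define MBOX_SLOTS    16          /* buffer slots in the shared region (assumed) */
#define MBOX_SLOT_LEN 1518        /* one Ethernet (IEEE 802.3) frame per slot    */

/* Hypothetical mailbox layout placed in the shared memory region.  The
 * producer (I/O processor) fills slots and raises `doorbell`; the
 * consumer (host processor) drains slots when it sees the interrupt. */
typedef struct {
    volatile uint32_t head;                    /* next slot the producer writes          */
    volatile uint32_t tail;                    /* next slot the consumer reads           */
    volatile uint32_t doorbell;                /* set by producer to signal an interrupt */
    uint8_t  slot[MBOX_SLOTS][MBOX_SLOT_LEN];  /* packet buffers                         */
    uint16_t len[MBOX_SLOTS];                  /* valid bytes in each slot               */
} mailbox_t;

/* Producer side: copy a received packet into the next free slot. */
static int mbox_post(mailbox_t *mb, const void *pkt, uint16_t len)
{
    uint32_t next = (mb->head + 1) % MBOX_SLOTS;
    if (next == mb->tail || len > MBOX_SLOT_LEN) return -1;   /* full or oversized */
    memcpy(mb->slot[mb->head], pkt, len);
    mb->len[mb->head] = len;
    mb->head = next;
    mb->doorbell = 1;              /* on real hardware this would assert an interrupt */
    return 0;
}

/* Consumer side: called from the interrupt handler, drains one packet. */
static int mbox_fetch(mailbox_t *mb, void *out, uint16_t *len)
{
    if (mb->tail == mb->head) { mb->doorbell = 0; return -1; }  /* empty */
    *len = mb->len[mb->tail];
    memcpy(out, mb->slot[mb->tail], *len);
    mb->tail = (mb->tail + 1) % MBOX_SLOTS;
    return 0;
}

int main(void)
{
    static mailbox_t mb;                         /* stands in for the shared region */
    uint8_t frame[64] = {0}, out[MBOX_SLOT_LEN];
    uint16_t n;
    mbox_post(&mb, frame, sizeof frame);
    return mbox_fetch(&mb, out, &n);             /* 0 on success */
}
```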

High-Speed Pipelined Memory Architecture for Gigabit ATM Packet Switching (Gigabit ATM Packet 교환을 위한 파이프라인 방식의 고속 메모리 구조)

  • Gab Joong Jeong;Moon Key Lee
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.11 / pp.39-47 / 1998
  • This paper describes a high-speed pipelined memory architecture for a shared buffer ATM switch. The architecture provides high speed and scalability: it eliminates the memory cycle time restriction in a shared buffer ATM switch and offers versatile switching performance through its scalability. It consists of a 2-D array of small memory banks, and enlarging the array configuration enlarges the total memory capacity. The maximum cycle time of the designed pipelined memory is 4 ns at a 5 V supply and 25 °C. It is embedded in the prototype chip of a scalable shared buffer ATM switch as a 4 × 4 array of 4160-bit SRAM memory banks and is integrated in 0.6 µm 2-metal 1-poly CMOS technology.
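
One way to picture the 2-D bank organization is the address interleaving sketched below: a word address is split into a (row, column) bank index plus an offset within the bank, so consecutive accesses fall in different banks and can overlap in the pipeline. This is an illustrative C model under assumed parameters (a 4 × 4 bank array, 64 words per bank), not the chip's actual address decoder:

```c
#include <stdio.h>

/* Assumed organization: a 4 x 4 array of small memory banks, as in the
 * prototype described above; the words-per-bank value is illustrative. */
#define BANK_ROWS      4
#define BANK_COLS      4
#define WORDS_PER_BANK 64

typedef struct {
    unsigned row;      /* which bank row the word lives in */
    unsigned col;      /* which bank column                */
    unsigned offset;   /* word offset inside that bank     */
} bank_addr_t;

/* Interleave consecutive addresses across banks so that back-to-back
 * accesses hit different banks and can be pipelined. */
static bank_addr_t map_address(unsigned addr)
{
    bank_addr_t b;
    b.col    = addr % BANK_COLS;
    b.row    = (addr / BANK_COLS) % BANK_ROWS;
    b.offset = addr / (BANK_COLS * BANK_ROWS);
    return b;
}

int main(void)
{
    for (unsigned a = 0; a < 8; a++) {           /* 8 consecutive accesses */
        bank_addr_t b = map_address(a);
        printf("addr %2u -> bank(%u,%u) offset %u\n", a, b.row, b.col, b.offset);
    }
    return 0;
}
```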

A New Buffer Management Scheme using Weighted Dynamic Threshold for QoS Support in Fast Packet Switches with Shared Memories (공유 메모리형 패킷 교환기의 QoS 기능 지원을 위한 가중형 동적 임계치를 이용한 버퍼 관리기법에 관한 연구)

  • Kim Chang-Won;Kim Young-Beom
    • Journal of the Institute of Convergence Signal Processing / v.7 no.3 / pp.136-142 / 2006
  • Existing buffer management schemes for shared-memory output queueing switches can be classified into two types. In the first type, some constant amount of memory space is guaranteed to each virtual queue using static queue thresholds; the static threshold (ST) method belongs to this type. The second type tries to maximize buffer utilization when allocating buffer memory; the complete sharing (CS) method is of this type. With CS it is very hard to protect regular traffic from misbehaving traffic flows, while with ST the thresholds cannot be adjusted to varying traffic conditions. In this paper, we propose a new buffer management method called weighted dynamic thresholds (WDT), which can process packet flows based on loss priorities for quality-of-service (QoS) support while keeping memory utilization fairly high. We verified the performance of the proposed scheme through computer simulations.
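
The middle ground between complete sharing and static thresholds is usually a dynamic-threshold test in which a queue may grow only up to a multiple of the currently unused buffer space, with per-priority weights giving the "weighted" variant. The C sketch below illustrates that generic idea with assumed sizes and weights; it is not the authors' exact WDT rule:

```c
#include <stdbool.h>

#define NUM_QUEUES   8
#define NUM_PRIO     2           /* loss-priority classes (assumed)         */
#define BUFFER_SIZE  4096        /* total shared memory, in cells (assumed) */

static int queue_len[NUM_QUEUES];   /* current occupancy of each virtual queue   */
static int total_used;              /* cells currently held in the shared buffer */

/* Per-priority weights: high-priority traffic is allowed a larger share
 * of the unused space (values are illustrative). */
static const double weight[NUM_PRIO] = { 1.0, 0.25 };

/* Dynamic-threshold admission test: accept a cell for queue q with
 * priority p only if the queue is below weight[p] * (free space). */
static bool admit_cell(int q, int p)
{
    int free_space = BUFFER_SIZE - total_used;
    double threshold = weight[p] * (double)free_space;
    if (total_used >= BUFFER_SIZE || queue_len[q] >= threshold)
        return false;                /* drop: over this queue's dynamic share */
    queue_len[q]++;
    total_used++;
    return true;
}

/* Departure bookkeeping when a cell is transmitted from queue q. */
static void depart_cell(int q)
{
    if (queue_len[q] > 0) { queue_len[q]--; total_used--; }
}

int main(void)
{
    bool ok = admit_cell(0, 0);      /* first high-priority cell is accepted */
    depart_cell(0);
    return ok ? 0 : 1;
}
```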

A Branch Target Buffer Using Shared Tag Memory with TLB (TLB 태그 공유 구조의 분기 타겟 버퍼)

  • Lee, Yong-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.899-902 / 2005
  • Pipeline hazards due to branch instructions are a major factor in the performance degradation of microprocessors. A branch target buffer predicts whether a branch will be taken or not and supplies the address of the next instruction on the basis of that prediction. If the branch target buffer predicts correctly, the instruction flow is not stalled, which leads to better microprocessor performance. In this paper, the architecture of a tag memory that a branch target buffer and a TLB can share is presented. Because the two separate tag memories used by the branch target buffer and the TLB are replaced by a single shared tag memory, a smaller chip size and faster prediction can be expected. This shared tag architecture is more advantageous for microprocessors that use more address bits and exploit more instruction-level parallelism.
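
As a rough illustration of the sharing idea, the deliberately simplified C model below uses one direct-mapped table whose single tag comparison serves both the address translation (TLB side) and a branch prediction for that page (BTB side); the field widths, table size, and the one-prediction-per-page simplification are assumptions, not the architecture proposed in the paper:

```c
#include <stdint.h>
#include <stdbool.h>

#define ENTRIES    64                      /* entries in the shared tag array (assumed) */
#define PAGE_BITS  12                      /* 4 KB pages */

/* One shared entry indexed by virtual page number: the TLB payload
 * (physical page) and the BTB payload (predicted target) both hang off
 * the same stored tag. */
typedef struct {
    bool     valid;
    uint32_t vpn_tag;                      /* virtual page number tag            */
    uint32_t ppn;                          /* TLB side: physical page number     */
    uint32_t branch_target;                /* BTB side: predicted target address */
    bool     predict_taken;                /* BTB side: direction prediction     */
} shared_entry_t;

static shared_entry_t table[ENTRIES];

/* One lookup services both structures: a single tag compare tells us
 * whether the translation and the branch prediction are valid. */
static bool lookup(uint32_t vaddr, uint32_t *paddr, uint32_t *target, bool *taken)
{
    uint32_t vpn = vaddr >> PAGE_BITS;
    shared_entry_t *e = &table[vpn % ENTRIES];
    if (!e->valid || e->vpn_tag != vpn)
        return false;                      /* miss in the shared tag array */
    *paddr  = (e->ppn << PAGE_BITS) | (vaddr & ((1u << PAGE_BITS) - 1));
    *target = e->branch_target;
    *taken  = e->predict_taken;
    return true;
}

int main(void)
{
    /* Install one entry, then look it up. */
    table[1] = (shared_entry_t){ true, 1, 0x400, 0x1F00, true };
    uint32_t pa, tgt; bool taken;
    return lookup(0x1234, &pa, &tgt, &taken) ? 0 : 1;
}
```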

An Analysis of Multi-processor System Performance Depending on the Input/Output Types (입출력 형태에 따른 다중처리기 시스템의 성능 분석)

  • Moon, Wonsik
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.4 / pp.71-79 / 2016
  • This study proposes a performance model of a shared-bus multi-processor system and analyzes the effect of input/output types on system performance and on the load placed on shared resources. The model reflects the memory reference time, in relation to the effect of input/output types on shared resources, and the input/output processing time, in relation to the input/output processor, disk buffer, and device standby places. In addition, it shows the contribution of input/output types to system performance for a comprehensive analysis. Using the concept of workload from probability theory, the model is operated and analyzed under various conditions of processor capability, cache miss ratio, page fault ratio, disk buffer hit ratio (input/output processor and controller), memory access time, and input/output block size, and a simulation is conducted to verify the analysis results.
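
As background, models of this kind typically combine the listed parameters into an effective access time; the C sketch below shows one generic and purely illustrative combination of cache miss ratio, page fault ratio, and disk buffer hit ratio with assumed timing constants, and is not the paper's actual model:

```c
#include <stdio.h>

/* Illustrative timing constants of the kind such models vary (all assumed). */
#define T_CACHE        5e-9     /* cache hit time, seconds           */
#define T_MEM        100e-9     /* shared-memory access time         */
#define T_DISK_BUF    50e-6     /* read satisfied by the disk buffer */
#define T_DISK        10e-3     /* read that goes to the disk itself */

/* Generic effective access time: a cache miss costs a memory access,
 * a page fault costs an I/O that may or may not hit the disk buffer. */
static double effective_access_time(double miss_ratio,
                                    double page_fault_ratio,
                                    double disk_buf_hit_ratio)
{
    double t_io = disk_buf_hit_ratio * T_DISK_BUF
                + (1.0 - disk_buf_hit_ratio) * T_DISK;
    return T_CACHE
         + miss_ratio * T_MEM
         + page_fault_ratio * t_io;
}

int main(void)
{
    /* Example point: 5 % cache misses, 0.01 % page faults, 90 % buffer hits. */
    printf("t_eff = %.3g s\n", effective_access_time(0.05, 1e-4, 0.90));
    return 0;
}
```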

Mutual exclusion of shared memory access in the simulation software of the midclass commuter (중형항공기 시뮬레이션 소프트웨어의 작업간 공유메모리 사용의 상호배제)

  • 이인석;이해창;이상혁
    • 제어로봇시스템학회:학술대회논문집 / 1996.10b / pp.207-209 / 1996
  • The software for the mid-class commuter flight simulation runs in a multiprocessor/multitasking environment. It consists of tasks that run periodically at a given interval, and the tasks communicate via shared memory. The data shared by the tasks is divided into several blocks. Only one task, called the producer, can produce data for a given block, but several tasks, called consumers, can read data from it. A double buffer and a conditional flag are used to implement the mutual exclusion that prevents the producer and consumers from accessing the same data block simultaneously.
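
A minimal C sketch of the double-buffer-plus-flag idea described here, assuming the periodic-task discipline stated in the abstract (consumers finish within one period, so the producer never overwrites the copy being read); the type and function names are illustrative, and the original runs as multiprocessor/multitasking simulator tasks rather than this toy:

```c
#include <stdatomic.h>
#include <string.h>

#define BLOCK_WORDS 256                    /* size of one shared data block (assumed) */

/* Double-buffered shared data block: the producer fills the inactive
 * copy, then publishes it by flipping `active`, so consumers never see
 * a half-written block. */
typedef struct {
    double        copy[2][BLOCK_WORDS];
    _Atomic int   active;                  /* which copy consumers should read */
} shared_block_t;

/* Producer: write the new data into the non-active copy, then flip. */
static void block_publish(shared_block_t *blk, const double *data)
{
    int inactive = 1 - atomic_load(&blk->active);
    memcpy(blk->copy[inactive], data, sizeof blk->copy[inactive]);
    atomic_store(&blk->active, inactive);  /* publish: consumers switch copies */
}

/* Consumer: snapshot the currently published copy. */
static void block_read(shared_block_t *blk, double *out)
{
    int current = atomic_load(&blk->active);
    memcpy(out, blk->copy[current], sizeof blk->copy[current]);
}

int main(void)
{
    static shared_block_t blk;             /* stands in for the shared memory region */
    double in[BLOCK_WORDS] = {0}, out[BLOCK_WORDS];
    block_publish(&blk, in);
    block_read(&blk, out);
    return 0;
}
```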

VLSI design of a shared multibuffer ATM Switch for throughput enhancement in multicast environments (멀티캐스트 환경에서 향상된 처리율을 갖는 공유 다중 버퍼 ATM스위치의 VLSI 설계)

  • Lee, Jong-Ick;Lee, Moon-Key
    • Proceedings of the IEEK Conference / 2001.06a / pp.383-386 / 2001
  • This paper presents a novel multicast architecture for a shared multibuffer ATM switch, tailored for throughput enhancement in multicast environments. The address queues for multicast cells are separated from those for unicast cells so that multicast cells can be arbitrated independently. Three read cycles are carried out during each cell slot, and multicast cells can be read from the shared buffer memory (SBM) in the third read cycle provided that the shared memory is not being accessed to read a unicast cell. In this architecture, at most two cells are queued at each fabric output port per time slot, and the output mask chooses only one. Extensive simulations show that the proposed architecture achieves higher throughput than other multicast schemes for shared multibuffer switch architectures.
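
A drastically simplified C sketch of the priority rule described above (unicast cells are served first; a multicast copy gets the shared buffer memory only in the spare third read cycle); the per-port state and names are illustrative, and the model ignores the physical read cycles and the output mask:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_PORTS 4

/* Simplified per-slot state: does each port have a unicast cell and/or a
 * multicast copy pending in its address queues? (illustrative only)    */
typedef struct {
    bool unicast_pending;
    bool multicast_pending;
} port_queues_t;

/* One cell slot of the (simplified) read arbitration: unicast cells are
 * served first; a multicast copy is read only when the shared buffer
 * memory was not needed for a unicast cell on that port. */
static void arbitrate_slot(port_queues_t q[NUM_PORTS])
{
    for (int p = 0; p < NUM_PORTS; p++) {
        if (q[p].unicast_pending) {
            q[p].unicast_pending = false;          /* early read cycles: unicast wins      */
            printf("port %d: unicast cell read\n", p);
        } else if (q[p].multicast_pending) {
            q[p].multicast_pending = false;        /* spare third cycle: multicast read    */
            printf("port %d: multicast cell read\n", p);
        } else {
            printf("port %d: idle\n", p);
        }
    }
}

int main(void)
{
    port_queues_t q[NUM_PORTS] = {
        { true,  true  },   /* unicast and multicast both pending */
        { false, true  },   /* only a multicast copy pending      */
        { true,  false },
        { false, false },
    };
    arbitrate_slot(q);
    return 0;
}
```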

Construction Methods of Switching Network for a Small and a Large Capacity ATM Switching System (소용량 및 대용량의 ATM시스템에 적합한 스위칭 망의 구성 방안)

  • Yang, Chung-Ryeol;Kim, Jin-Tae
    • The Transactions of the Korea Information Processing Society / v.3 no.4 / pp.947-960 / 1996
  • The primary goal in developing high-performance ATM switching systems is to minimize the probability of cell loss, cell delay, and throughput deterioration. The ATM switching element most suitable for this purpose is the shared buffer memory switch, implemented with common random access memory and control logic. Since it is difficult to manufacture a VLSI (Very Large Scale Integrated circuit) as the number of input ports increases, the switching module method, which realizes a 32×32, 150 Mb/s switch using 8×8, 600 Mb/s or 16×16, 150 Mb/s unit switches, is the latest ATM switching technology for small and large scales. In this paper, the buffer capacity that achieves the total-memory-reduction effect of buffer sharing in a shared buffer memory switch is analytically evaluated and simulated by computer with respect to the cell loss level under various traffic conditions, and the features of switching networks using the switching module method in small- and large-capacity ATM switching systems are analyzed. Based on these results, an outline structure for the 32×32 (4.9 Gb/s throughput), 150 Mb/s switches under research in many countries is proposed, and a switching network structure for small- and large-capacity ATM switching systems satisfying the above goals is suggested.
