• Title/Summary/Keyword: LPM(Longest Prefix Matching)

Search Results: 8

Coded and Scalar Prefix Trees: Prefix Matching Using the Novel Idea of Double Relation Chains

  • Behdadfar, Mohammad;Saidi, Hossein;Hashemi, Massoud Reza;Lin, Ying-Dar
    • ETRI Journal / v.33 no.3 / pp.344-354 / 2011
  • In this paper, a model named double relation chains (DRC) is introduced based on ordered sets. It is proved that, using DRC and special relationships among the members of an alphabet, vectors of this alphabet can be stored and searched in a tree. The idea is general; however, one particular application of DRC is the longest prefix matching (LPM) problem in an IP network. Applying DRC to the LPM problem makes prefixes comparable like numbers, using a pair of w-bit vectors to store at least one and at most w prefixes, where w is the IP address length. This leads to good compression performance. Based on this, two recently introduced structures called coded prefix trees and scalar prefix trees are shown to be specific applications of DRC. They are implementable on balanced trees, which makes the node access complexity of the prefix search and update procedures O(log n), where n is the number of prefixes. As another advantage, the number of node accesses for these procedures does not depend on w. Additionally, they need fewer node accesses than recent range-based solutions. These structures are applicable to both IPv4 and IPv6 and can be implemented in software or hardware.
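
The abstract's key claim is that DRC makes prefixes comparable like numbers. As a minimal sketch of that comparability (using the common interval view of a prefix rather than the paper's DRC encoding, and a linear scan rather than the paper's O(log n) balanced tree), each w-bit prefix can be treated as a numeric address range, so matching reduces to integer comparisons; all names below are illustrative:

```python
# Minimal sketch: treating prefixes as numeric intervals so they can be
# compared like numbers (illustrative only; not the paper's DRC encoding).
W = 32  # IP address length in bits (IPv4)

class Prefix:
    def __init__(self, bits: int, length: int, nexthop: str):
        self.length = length
        self.nexthop = nexthop
        # A prefix of length l covers the address interval [low, high].
        self.low = bits << (W - length)
        self.high = self.low | ((1 << (W - length)) - 1)

    def matches(self, addr: int) -> bool:
        # Containment test is just a pair of integer comparisons.
        return self.low <= addr <= self.high

def lpm_lookup(prefixes, addr: int):
    """Return the next hop of the longest prefix containing addr.
    Linear scan for clarity; the paper stores comparable prefixes in a
    balanced tree to achieve O(log n) node accesses."""
    best = None
    for p in prefixes:
        if p.matches(addr) and (best is None or p.length > best.length):
            best = p
    return best.nexthop if best else None

# Example: 10.0.0.0/8 and 10.1.0.0/16
table = [Prefix(0x0A, 8, "A"), Prefix(0x0A01, 16, "B")]
print(lpm_lookup(table, 0x0A010203))  # -> "B" (10.1.2.3 matches the /16)
print(lpm_lookup(table, 0x0A7F0001))  # -> "A" (10.127.0.1 matches only the /8)
```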

Enhanced Bitmap Lookup Algorithm for High-Speed Routers (고속 라우터를 위한 향상된 비트맵 룩업 알고리즘)

  • Lee, Kang-woo;Ahn, Jong-suk
    • The KIPS Transactions:PartA / v.11A no.2 / pp.129-142 / 2004
  • As the Internet gets faster, the demand for high-speed routers capable of forwarding more than gigabits of data per second keeps increasing. In previous research, the Bitmap Trie algorithm was developed to rapidly execute the LPM (longest prefix matching) process, which is well known as a severe performance bottleneck. In this paper, we introduce a novel algorithm that drastically enhances the performance of the Bitmap Trie algorithm by applying three techniques. First, a new table called the Count Table was devised. Owing to this table, we successfully eliminated the shift operations that were the main cause of performance degradation in the Bitmap Trie algorithm. Second, memory utilization was improved by removing redundant forwarding information from the Transfer Table. Lastly, the range of prefix lookup was diversified to optimize data accesses. The processing delays were classified into three categories according to their causes and were measured through execution-driven simulation, which provides higher-quality results than other simulation techniques. We assured the reliability of the experimental results by comparing them with those collected from a real system. Finally, the Enhanced Bitmap Trie algorithm reduced the time spent in the previous algorithm by 82%.
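
The abstract credits much of the speedup to the Count Table, which removes per-bit shift operations from Bitmap Trie lookups. A rough sketch of the general popcount-indexing technique this suggests (the node layout, stride, and names below are assumptions, not the paper's exact structures): each node stores a child bitmap, and a child's position in the compact child array is the count of set bits below its branch bit, read from a precomputed count table.

```python
# Sketch of popcount-indexed bitmap nodes (stride-4 multibit trie).
# Precomputing set-bit counts lets lookups avoid per-bit shift/mask loops.
# Illustrative layout only, not the paper's exact Count/Transfer Tables.
STRIDE = 4  # bits consumed per node -> 16 possible branches per node

# Precomputed popcount for every byte value (a tiny "count table").
POPCNT8 = [bin(i).count("1") for i in range(256)]

def popcount16(x: int) -> int:
    return POPCNT8[x & 0xFF] + POPCNT8[(x >> 8) & 0xFF]

class BitmapNode:
    def __init__(self):
        self.child_bitmap = 0   # bit i set => branch i has a child node
        self.children = []      # compact array, one entry per set bit
        self.nexthop = None     # next hop valid at this node, if any

    def child_for(self, branch: int):
        bit = 1 << branch
        if not (self.child_bitmap & bit):
            return None
        # Index of the child = number of set bits below 'branch'.
        idx = popcount16(self.child_bitmap & (bit - 1))
        return self.children[idx]

def insert(root: BitmapNode, bits: int, length: int, nexthop: str):
    """Insert a prefix whose length is a multiple of STRIDE (kept simple
    here; real bitmap tries expand other lengths within a node)."""
    node, shift = root, length
    while shift > 0:
        shift -= STRIDE
        branch = (bits >> shift) & (2 ** STRIDE - 1)
        child = node.child_for(branch)
        if child is None:
            child = BitmapNode()
            bit = 1 << branch
            idx = popcount16(node.child_bitmap & (bit - 1))
            node.child_bitmap |= bit
            node.children.insert(idx, child)  # keep children in branch order
        node = child
    node.nexthop = nexthop

def lookup(root: BitmapNode, addr: int, addr_bits: int = 32):
    best, node, shift = None, root, addr_bits
    while node is not None:
        if node.nexthop is not None:
            best = node.nexthop           # longest match seen so far
        shift -= STRIDE
        if shift < 0:
            break
        node = node.child_for((addr >> shift) & (2 ** STRIDE - 1))
    return best

root = BitmapNode()
insert(root, 0x0A, 8, "A")        # 10.0.0.0/8
insert(root, 0x0A01, 16, "B")     # 10.1.0.0/16
print(lookup(root, 0x0A010203))   # -> "B"
print(lookup(root, 0x0A7F0001))   # -> "A"
```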

O(1) IP Lookup Scheme (O(1) IP 검색 방법)

  • 이주민;안종석
    • Proceedings of the Korean Information Science Society Conference / 2002.10e / pp.1-3 / 2002
  • The schemes that have been actively studied to improve the speed of longest prefix matching (LPM) in backbone routers trade computation for memory usage. To improve performance, these schemes compress the large forwarding table into a small index table that fits in the cache, reducing the number of slow-memory accesses at the cost of more fast-cache accesses and more computation [1]. This paper introduces FPLL (Fixed Prefix Length Lookup), a scheme that reduces both the frequency of slow-memory accesses and the amount of computation at the cost of increased slow-memory usage. FPLL divides the forwarding entries into groups by the upper bits of their prefixes and aligns the entries in each group to the same length. An LPM search in FPLL first determines the length of the longest prefix to look up by computing the lengths of the groups to which the destination address belongs, and then performs an exact-match search, using the prefix of that length as the key, in the forwarding table organized as a hash table. Aligning the entries of a group for exact-match searching creates extra forwarding-table entries. For the Mae-West table [2] with about 30,000 entries, FPLL generates roughly 120,000 extra entries, but in return performs an LPM search with one cache access and a hash-table lookup of O(1) complexity.
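
A rough reconstruction of the FPLL idea as the abstract describes it, with the grouping and expansion details guessed for illustration: entries are grouped by the upper bits of their prefixes, every entry is expanded to its group's single fixed length (the source of the extra entries), a small group-to-length table decides which length to look up, and the forwarding table itself is an exact-match hash table. Function and variable names are assumptions.

```python
# Rough sketch of a fixed-prefix-length lookup (FPLL-style) scheme as
# described in the abstract; grouping/expansion details are assumptions.
W = 32
GROUP_BITS = 8  # group = upper 8 bits of the prefix (assumed)

def build_tables(prefixes):
    """prefixes: list of (bits, length, nexthop) with length >= GROUP_BITS.
    Returns (group_len, fwd): group_len maps a group to its fixed prefix
    length; fwd is an exact-match hash table keyed by fixed-length prefixes."""
    group_len = {}
    for bits, length, _ in prefixes:
        g = bits >> (length - GROUP_BITS)
        group_len[g] = max(group_len.get(g, 0), length)
    fwd = {}
    for bits, length, nexthop in prefixes:
        g = bits >> (length - GROUP_BITS)
        L = group_len[g]
        # Expand this prefix to the group's fixed length L: one entry per
        # combination of the (L - length) free bits -> the "extra" entries.
        for tail in range(1 << (L - length)):
            key = (bits << (L - length)) | tail
            # Entries coming from longer original prefixes must win.
            if key not in fwd or fwd[key][0] < length:
                fwd[key] = (length, nexthop)
    return group_len, fwd

def fpll_lookup(group_len, fwd, addr):
    g = addr >> (W - GROUP_BITS)          # which group the address falls in
    L = group_len.get(g)
    if L is None:
        return None
    entry = fwd.get(addr >> (W - L))      # exact match on the fixed length
    return entry[1] if entry else None

table = [(0x0A, 8, "A"), (0x0A01, 16, "B")]   # 10.0.0.0/8, 10.1.0.0/16
gl, fwd = build_tables(table)
print(fpll_lookup(gl, fwd, 0x0A010203))  # -> "B"
print(fpll_lookup(gl, fwd, 0x0A7F0001))  # -> "A"
```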


The Optimal pipelining architecture for PICAM (PICAM에서의 최적 파이프라인 구조)

  • 안희일;조태원
    • The Journal of Korean Institute of Communications and Information Sciences / v.26 no.6A / pp.1107-1116 / 2001
  • High-speed IP address lookup is a key factor that determines the performance of high-speed Internet routers. LPM (longest prefix matching) search is the most time-consuming part of IP address lookup. PICAM is a pipelined CAM architecture for high-speed LPM search; compared to conventional CAM (content addressable memory) based methods, it updates the lookup table faster while achieving a higher LPM search rate. PICAM consists of a three-stage pipeline. Pipeline performance, and hence the LPM search rate, depends on the number of key-field partitions in stages 1 and 2 and on the distribution of matching points. This paper presents a pipeline performance model of PICAM and derives the optimal PICAM architecture through discrete event simulation. For IP version 4, the optimal architecture uses 8 key-field partitions and duplicates the heavily loaded key-field block; for IP version 6, the optimal architecture uses 16 key-field blocks.
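
The conclusion that the heavily loaded key-field block should be duplicated follows from a general property of pipelines: throughput is limited by the slowest stage, and replicating the bottleneck halves its per-copy load. The toy model below is a generic illustration with made-up stage times, not the paper's performance model or its discrete event simulation:

```python
# Generic pipeline model (not the paper's simulation): throughput is set
# by the busiest stage; duplicating that stage's block halves the load
# each copy sees.
def pipeline_throughput(stage_times):
    """Lookups per unit time for a pipeline with the given per-stage
    service times (one lookup in flight per stage)."""
    return 1.0 / max(stage_times)

# Hypothetical stage times: stage 2 (key-field matching) is the bottleneck.
stages = [1.0, 2.5, 1.0]
print(pipeline_throughput(stages))        # 0.4 lookups per time unit

# Duplicating the stage-2 block lets two lookups alternate between copies,
# so its effective service time per lookup is halved.
stages_dup = [1.0, 2.5 / 2, 1.0]
print(pipeline_throughput(stages_dup))    # 0.8 lookups per time unit
```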


A Bit-Map Trie for the High-Speed Longest Prefix Search of IP Addresses (고속의 최장 IP 주소 프리픽스 검색을 위한 비트-맵 트라이)

  • 오승현;안종석
    • Journal of KIISE:Information Networking / v.30 no.2 / pp.282-292 / 2003
  • This paper proposes an efficient data structure for forwarding IPv4 and IPv6 packets at gigabit speed in backbone routers. The LPM (longest prefix matching) search becomes a bottleneck of router performance since the LPM complexity grows in proportion to the forwarding table size and the address length. To speed up the forwarding process, this paper introduces a data structure named BMT (Bit-Map Trie) that minimizes frequent main memory accesses. All the search computations in BMT are done over a small index table stored in cache. To build the small index table from the trie representation of the forwarding table, BMT represents a link pointer to a child node and a node pointer to the corresponding forwarding-table entry with one bit each. To overcome the poor performance of conventional tries when their height grows with the address length, BMT adopts a binary search algorithm for determining the appropriate trie level at which to start. Simulation experiments show that BMT compacts an IPv4 backbone router's forwarding table into a small table of less than 512 Kbytes and achieves an average speed of 250 ns/packet on a Pentium II processor, which is almost the same performance as the fastest conventional lookup algorithms.
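
BMT's use of binary search to pick the trie level to start from is related in spirit to the classic binary search on prefix lengths, which needs marker entries and precomputed best matches to stay correct. The sketch below shows that classic technique, not the paper's exact BMT structure; the tables, markers, and names are illustrative.

```python
# Sketch of the classic "binary search on prefix lengths" technique
# (markers + precomputed best match). Not the paper's BMT structure.
W = 32

def build(prefixes):
    """prefixes: list of (bits, length, nexthop). Returns (lengths, tables),
    where tables[L] maps an L-bit key to the next hop of the longest real
    prefix of length <= L that matches that key (None if there is none)."""
    lengths = sorted({length for _, length, _ in prefixes})
    tables = {L: {} for L in lengths}

    def best_match(key, L):
        # Longest real prefix of length <= L that is a prefix of 'key'.
        best = None
        for bits, plen, nh in prefixes:
            if plen <= L and (key >> (L - plen)) == bits:
                if best is None or plen > best[0]:
                    best = (plen, nh)
        return best[1] if best else None

    for bits, plen, _ in prefixes:
        # Install the prefix itself plus a marker at every shorter length
        # in the length set (a superset of the binary-search path).
        for L in lengths:
            if L > plen:
                break
            key = bits >> (plen - L)
            tables[L][key] = best_match(key, L)
    return lengths, tables

def bsearch_lookup(lengths, tables, addr):
    lo, hi, best = 0, len(lengths) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        L = lengths[mid]
        key = addr >> (W - L)
        if key in tables[L]:
            bmp = tables[L][key]
            if bmp is not None:
                best = bmp      # longest real match of length <= L so far
            lo = mid + 1        # a longer prefix may still match
        else:
            hi = mid - 1        # no prefix of length >= L matches
    return best

table = [(0x0A, 8, "A"), (0x0A01, 16, "B"), (0x0A0102, 24, "C")]
lengths, tables = build(table)
print(bsearch_lookup(lengths, tables, 0x0A010203))  # -> "C"
print(bsearch_lookup(lengths, tables, 0x0A01FF01))  # -> "B"
print(bsearch_lookup(lengths, tables, 0x0A7F0001))  # -> "A"
```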

A Design of the IP Lookup Architecture for High-Speed Internet Router (고속의 인터넷 라우터를 위한 IP 룩업구조 설계)

  • 서해준;안희일;조태원
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.7B / pp.647-659 / 2003
  • LPM (longest prefix matching) search in IP address lookup is a major bottleneck of IP packet processing in high-speed routers. In a conventional lookup table for LPM searching in CAM (content addressable memory), a fast update takes O(1) time. In this paper, we designed a pipeline architecture that provides O(1)-cycle fast updates of the lookup table together with high throughput and low area complexity for LPM searching. The lookup-table architecture was designed as a CAM array that uses 1-bit RAM (random access memory) cells. It has three pipeline stages. Its LPM search rate is affected by the number of key-field blocks in stages 1 and 2 and by the distribution of matching points. The RTL (register transfer level) design was carried out using Verilog-HDL. Functional verification was thoroughly done at the gate level using a 0.35 μm CMOS SEC standard cell library.

An Efficient Updating Algorithm for IPv6 Lookup based on Priority-TCAM (IPv6 Lookup을 위한 효율적인 Priority TCAM Table 운영 알고리즘)

  • Hong, Seung-Woo;Noh, Sung-Kee;Hong, Sung-Back;Kim, Sang-Ha
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.10 / pp.162-168 / 2007
  • TCAM (ternary content-addressable memory) has been widely used to perform fast routing lookup. It is able to accomplish the LPM (longest prefix matching) search in O(1) time regardless of the number of prefixes and their lengths. Compared to software-based solutions, especially for IPv6, TCAM can offer sustained throughput and a simple system architecture. However, there has been no research on priority-TCAM, which can assign a priority to each memory block. This paper addresses the differences of priority-TCAM compared to existing TCAM and proposes the CAO-PA algorithm to manage the lookup table efficiently.
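
A TCAM reports the first matching (value, mask) entry, so LPM correctness depends on keeping longer prefixes ahead of shorter ones; maintaining that order under updates is what table-management algorithms such as the paper's CAO-PA optimize (its details are not reproduced here). A small software emulation of the TCAM matching semantics, with illustrative names:

```python
# Software emulation of TCAM longest-prefix-matching semantics.
# A TCAM compares all (value, mask) entries in parallel and reports the
# first match; keeping entries sorted by decreasing prefix length makes
# that first match the longest one. Illustrative only.
W = 32

class TcamTable:
    def __init__(self):
        self.entries = []  # (prefix_len, value, mask, nexthop), sorted

    def insert(self, bits, length, nexthop):
        value = bits << (W - length)
        mask = ((1 << length) - 1) << (W - length)  # 1s on the prefix bits
        # Maintain the ordering invariant: longer prefixes first.
        # In a real TCAM, preserving this invariant on update is the
        # expensive part that update-management algorithms try to minimize.
        i = 0
        while i < len(self.entries) and self.entries[i][0] >= length:
            i += 1
        self.entries.insert(i, (length, value, mask, nexthop))

    def lookup(self, addr):
        # "Parallel" compare; first match wins, as in hardware priority
        # encoding. Because of the ordering, it is also the longest match.
        for _, value, mask, nexthop in self.entries:
            if addr & mask == value:
                return nexthop
        return None

t = TcamTable()
t.insert(0x0A, 8, "A")       # 10.0.0.0/8
t.insert(0x0A01, 16, "B")    # 10.1.0.0/16
print(t.lookup(0x0A010203))  # -> "B"
print(t.lookup(0x0A7F0001))  # -> "A"
```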

Design of Hybrid Parallel Architecture for Fast IP Lookups (고속 IP Lookup을 위한 병렬적인 하이브리드 구조의 설계)

  • 서대식;윤성철;오재석;강성호
    • Journal of the Institute of Electronics Engineers of Korea SD / v.40 no.5 / pp.345-353 / 2003
  • When network processors are designed or network equipment such as routers is implemented, IP lookup operations have a major impact on performance. The simpler the organization of the IP address space, the faster IP lookup operations can be. However, since efficient management of IP addresses is unavoidable given the increasing number of network users, the address organization has to become more complex. Therefore, for both IPv4 (IP version 4) and IPv6 (IP version 6), IP lookup operations are inherently difficult and costly. Many studies on improving the performance of IP lookups have been presented, but no definitive solution has emerged. Software approaches reduce memory usage but are slow in terms of search speed when performing an IP lookup. Hardware approaches, on the other hand, are fast, but have the disadvantages of hardware overhead and high memory usage. In this paper, conventional research on IP lookups is surveyed and its advantages and disadvantages are explained. In addition, by combining two representative structures, a new hybrid parallel architecture for fast IP lookups is proposed. The performance evaluation shows that the proposed architecture provides better performance with lower memory usage.