TY - GEN
T1 - Scalable, high-speed on-chip-based NDN name forwarding using FPGA
AU - Saxena, Divya
AU - Mahar, Suyash
AU - Raychoudhury, Vaskar
AU - Cao, Jiannong
PY - 2019/1/4
Y1 - 2019/1/4
N2 - Named Data Networking (NDN) is the most promising candidate among the proposed content-based future Internet architectures. In NDN, the Forwarding Information Base (FIB) maintains name prefixes and their corresponding outgoing interface(s) and forwards incoming packets by computing the longest prefix match (LPM) of their content names (CNs). A CN in NDN is of variable length and is maintained using a hierarchical structure; therefore, performing name lookup for packet forwarding at wire speed is a challenging task. GPUs can achieve much higher lookup speeds than CPUs, but they are often limited by CPU-GPU transfer latencies. In this paper, we exploit the massive parallel processing power of FPGA technology and propose a scalable, high-speed on-chip SRAM-based NDN name forwarding scheme for the FIB (OnChip-FIB) using Field-Programmable Gate Arrays (FPGAs). OnChip-FIB scales well as the number of prefixes grows, owing to its low storage complexity and low resource utilization. Extensive simulation results show that the OnChip-FIB scheme achieves a measured lookup latency of 1.06 µs with 26% on-chip block memory usage in a single Xilinx UltraScale FPGA for a 50K-name dataset.
AB - Named Data Networking (NDN) is the most promising candidate among the proposed content-based future Internet architectures. In NDN, the Forwarding Information Base (FIB) maintains name prefixes and their corresponding outgoing interface(s) and forwards incoming packets by computing the longest prefix match (LPM) of their content names (CNs). A CN in NDN is of variable length and is maintained using a hierarchical structure; therefore, performing name lookup for packet forwarding at wire speed is a challenging task. GPUs can achieve much higher lookup speeds than CPUs, but they are often limited by CPU-GPU transfer latencies. In this paper, we exploit the massive parallel processing power of FPGA technology and propose a scalable, high-speed on-chip SRAM-based NDN name forwarding scheme for the FIB (OnChip-FIB) using Field-Programmable Gate Arrays (FPGAs). OnChip-FIB scales well as the number of prefixes grows, owing to its low storage complexity and low resource utilization. Extensive simulation results show that the OnChip-FIB scheme achieves a measured lookup latency of 1.06 µs with 26% on-chip block memory usage in a single Xilinx UltraScale FPGA for a 50K-name dataset.
KW - FIB
KW - Forwarding
KW - Name lookup
KW - Named Data Networking
KW - NDN
UR - http://www.scopus.com/inward/record.url?scp=85060906842&partnerID=8YFLogxK
U2 - 10.1145/3288599.3288613
DO - 10.1145/3288599.3288613
M3 - Conference article published in proceeding or book
AN - SCOPUS:85060906842
T3 - ACM International Conference Proceeding Series
SP - 81
EP - 89
BT - ICDCN 2019 - Proceedings of the 2019 International Conference on Distributed Computing and Networking
PB - Association for Computing Machinery
T2 - 20th International Conference on Distributed Computing and Networking, ICDCN 2019
Y2 - 4 January 2019 through 7 January 2019
ER -