[c-nsp] BGP convergence with jumbo frames
Tantsura, Jeff
jeff.tantsura at capgemini.com
Mon Aug 2 08:30:17 EDT 2004
Path MTU and the ip tcp path-mtu-discovery Command
All TCP sessions are bounded by a limit on the number of bytes that can
be transported in a single packet. This limit, known as the Maximum
Segment Size (MSS), is 536 bytes by default. In other words, TCP breaks
up the data in a transmit queue into 536-byte chunks before passing
them down to the IP layer. Use the show ip bgp neighbors | include
max data command to display the MSS of BGP peers:
Router# show ip bgp neighbors | include max data
Datagrams (max data segment is 536 bytes):
Datagrams (max data segment is 536 bytes):
Datagrams (max data segment is 536 bytes):
Datagrams (max data segment is 536 bytes):
The advantage of a 536-byte MSS is that packets are unlikely to be
fragmented at an IP device along the path to the destination, since most
links use an MTU of at least 1500 bytes. The disadvantage is that
smaller packets increase the proportion of bandwidth consumed by header
overhead. Since BGP builds a TCP connection to every peer, a 536-byte MSS
affects BGP convergence times.
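To put rough numbers on that overhead: with 20 bytes each of IP and TCP
header per segment, a full 536-byte segment is about 576 bytes on the wire
(roughly 7% header), whereas a full 1460-byte segment is about 1500 bytes
(under 3% header), and the same table transfer needs close to a third as
many packets.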
The solution is to enable the Path MTU discovery (PMTU) feature with the
ip tcp path-mtu-discovery command. This feature dynamically determines
how large the MSS can be without creating packets that need to be
fragmented: PMTU allows TCP to learn the smallest MTU among all links in
the path of a TCP session, and TCP then uses that MTU, minus room for the
IP and TCP headers, as the MSS for the session. If a TCP session only
traverses Ethernet segments, the MSS will be 1460 bytes; if it only
traverses Packet over SONET (POS) segments, the MSS will be 4430 bytes.
The increase in MSS from 536 to 1460 or 4430 bytes reduces TCP/IP
overhead, which helps BGP converge faster.
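For reference, the feature is enabled in global configuration; a minimal
sketch (the router prompt is illustrative):

Router# configure terminal
Router(config)# ip tcp path-mtu-discovery
Router(config)# end

The arithmetic behind the numbers above: a 1500-byte Ethernet MTU minus
20 bytes of IP header and 20 bytes of TCP header leaves a 1460-byte MSS,
and a 4470-byte POS MTU minus the same 40 bytes leaves 4430. Note that
BGP sessions established before the change typically keep their old MSS
until they are reset.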
After enabling PMTU, again use the show ip bgp neighbors | include max
data command to see the MSS value per peer:
Router# show ip bgp neighbors | include max data
Datagrams (max data segment is 1460 bytes):
Datagrams (max data segment is 1460 bytes):
Datagrams (max data segment is 1460 bytes):
Datagrams (max data segment is 1460 bytes):
Jeff
-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net
[mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of
lee.e.rian at census.gov
Sent: Monday, August 02, 2004 1:15 PM
To: Pete Kruckenberg
Cc: cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] BGP convergence with jumbo frames
Is the IP MTU size also 9000 bytes?
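(That is, whether the jumbo interface MTU is matched by the IP MTU on the
link. On IOS the two are set separately, along the lines of:

interface GigabitEthernet3/0
 mtu 9000
 ip mtu 9000

with the interface numbering here purely for illustration.)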
From: Pete Kruckenberg <pete at kruckenberg.com>
Sent by: cisco-nsp-bounces at puck.nether.net
Date: 08/02/2004 01:42 AM
To: <cisco-nsp at puck.nether.net>
Subject: [c-nsp] BGP convergence with jumbo frames
Spent some time recently trying to tune BGP to get convergence down as
far as possible. Noticed some peculiar behavior.
I'm running 12.0.28S on GSR12404 PRP-2.
I'm measuring, from when the BGP session first opens, the time to transmit
the full table (~128K routes) from one router to another across a
jumbo-frame (9000-byte) GigE link, using 4-port ISE line cards (the
routers are about 20 miles apart over dark fiber).
I noticed that the xmit time decreases from ~35 seconds with a 536-byte
MSS to ~22 seconds with a 2500-byte MSS.
From there it stays about the same until I get to 4000, when it begins
increasing dramatically, until at 8636 bytes it takes over 2 minutes.
I had expected that larger frames would decrease the BGP convergence
time. Why would the convergence time increase (and so significantly) as
the MSS increases?
Is there some tuning tweak I'm missing here?
Pete.
_______________________________________________
cisco-nsp mailing list cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/