[c-nsp] BGP convergence with jumbo frames
Pete Kruckenberg
pete at kruckenberg.com
Mon Aug 2 12:00:56 EDT 2004
On Mon, 2 Aug 2004, Tony Li wrote:
> How are your huge processor buffers set up?
Defaults right now (5 permanent). I wasn't getting any
misses or failures during the tests, so I didn't think it
was an issue.
I just tried bumping this to 20 permanent on each end.
Doesn't seem to make a difference.
> I would not expect a larger MTU/MSS to have much of an
> effect, if at all. BGP is typically not constrained by
> throughput. In fact, what you may be seeing is that
> with really large MTUs and without a bigger TCP window,
> you're turning TCP into a stop and wait protocol.
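Doing the arithmetic on that theory (16384 bytes is an assumed window for illustration; the actual advertised window shows up in "show ip bgp neighbors"):

```python
# Back-of-the-envelope check of the stop-and-wait theory: how many
# full-size segments fit in the TCP window before the sender has to
# stop and wait for an ACK? The 16384-byte window is an assumption,
# not a measured value from these routers.
window = 16384          # assumed TCP window, bytes
mss_jumbo = 8936        # max data segment reported at 9000-byte MTU
mss_default = 1460      # typical MSS with a 1500-byte MTU

print(window // mss_jumbo)    # segments in flight with jumbo frames
print(window // mss_default)  # segments in flight at the default MTU
```

With only one jumbo segment fitting in the window, every segment would have to be ACKed before the next one goes out, which matches the stop-and-wait description.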
Is there a way to set the initial TCP window larger?
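(The only knob I've found so far is the global "ip tcp window-size" command; I'm assuming it applies to the BGP sessions too, e.g.:)

```
router(config)# ip tcp window-size 65535
```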
I tried "ip tcp selective-ack" and it seemed to decrease the
number of retransmits (I wonder why there are so many
retransmits...).
This is a greenfield network, so right now there is nothing
happening on these routers.
Some concerning stats at 9000-byte MTU, from "show ip
bgp neighbors" output:
Sending router (the one with the full BGP table):
Datagrams (max data segment is 8936 bytes):
Rcvd: 290 (out of order: 0), with data: 6, total data bytes: 178
Sent: 312 (retransmit: 73, fastretransmit: 0), with data: 309, total data bytes: 2517293
Receiving router (starts with an empty BGP table):
Datagrams (max data segment is 8936 bytes):
Rcvd: 314 (out of order: 71), with data: 311, total data bytes: 2518290
Sent: 291 (retransmit: 0, fastretransmit: 0), with data: 6, total data bytes: 178
Isn't 71 out of 314 packets a lot of out-of-orders (or 73
out of 312 a lot of retransmits) on a point-to-point link
with nothing else on it?
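Doing the division, using the counters straight from the output above:

```python
# Retransmit and reordering rates from the "show ip bgp neighbors"
# counters quoted above.
retransmit_rate = 73 / 312    # sender: retransmitted / sent datagrams
out_of_order_rate = 71 / 314  # receiver: out-of-order / received datagrams

print(f"{retransmit_rate:.1%}")    # roughly 23% of sent datagrams
print(f"{out_of_order_rate:.1%}")  # roughly 23% arriving out of order
```

Nearly a quarter of the datagrams retransmitted on an otherwise idle point-to-point link seems far too high to be normal.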
"ip tcp selective-ack" doesn't seem to noticeably affect
these numbers.
Pete.