[c-nsp] Re: BGP convergence with jumbo frames
Pete Kruckenberg
pete at kruckenberg.com
Fri Aug 6 14:21:25 EDT 2004
On Sun, 1 Aug 2004, Pete Kruckenberg wrote:
> Spent some time recently trying to tune BGP to get
> convergence down as far as possible. Noticed some
> peculiar behavior.
>
> I'm running 12.0.28S on GSR12404 PRP-2.
>
> I measured, from when the BGP session first opens, the
> time to transmit the full (~142K-route) table from one
> router to another, across a jumbo-frame (9000-byte)
> GigE link, using 4-port ISE line cards (the routers are
> about 20 miles apart over dark fiber).
>
> I noticed that the transmit time decreases from ~35
> seconds with a 536-byte MSS to ~22 seconds with a
> 2500-byte MSS. From there it stays about the same until
> I get to 4000 bytes, when it begins increasing
> dramatically; at an 8636-byte MSS it takes over 2 minutes.
>
> I had expected that larger frames would decrease the BGP
> convergence time. Why would the convergence time instead
> increase (and so significantly) as the MSS increases?
>
> Is there some tuning tweak I'm missing here?
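[For anyone reproducing the MSS sweep: on IOS, BGP's TCP session falls back to a 536-byte MSS unless path-MTU discovery is enabled, so the MSS variation above hinges on this knob. A sketch; exact syntax may vary by release:

```
! Without this, IOS TCP (and thus BGP) defaults to a 536-byte
! MSS; with it, the session can negotiate an MSS up to what the
! path MTU allows (here, the 9000-byte jumbo link).
ip tcp path-mtu-discovery
```
]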
With some further research, I have been able to narrow this
down to an issue specifically with confederation eBGP
convergence.
Various convergence times (all times approximate) I've
measured with 8960-byte MSS (9000-byte MTU), for 142K
routes:
iBGP: 6 seconds
non-confederation eBGP: 33 seconds
confederation eBGP with "next-hop-unchanged": 39 seconds
confederation eBGP: 90 seconds
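For reference, the confederation scenarios above correspond to configuration along these lines. The ASNs and the neighbor address below are placeholders, not my actual config:

```
router bgp 65501
 bgp confederation identifier 100
 bgp confederation peers 65502
 neighbor 192.0.2.2 remote-as 65502
 ! The "next-hop-unchanged" scenario adds this line:
 neighbor 192.0.2.2 next-hop-unchanged
```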
These results are the same for GSRs directly connected (same
PoP) as for GSRs over the GigE (20-mile) fiber WAN.
So this looks to be something with the interaction of large
frames with confederation eBGP, AS path manipulation, and
next-hop setting.
It's unusual that confederation eBGP performs significantly
better with smaller packets. If this were purely a matter of
processing time (lookups, AS-path manipulation, next-hop
setting, etc.), I'd expect convergence to be much worse with
more, smaller packets. Yet at a 536-byte MSS, confederation
eBGP converges in about 27 seconds.
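To put rough numbers on the packet-count intuition, here is a quick ceiling-division comparison of how many TCP segments the same transfer needs at each MSS. The table size is an illustrative guess, not a measured value:

```python
# Compare segment counts for shipping the same table at different
# MSS values. TABLE_BYTES is a hypothetical figure for ~142K
# routes' worth of UPDATE data, not a measurement.
TABLE_BYTES = 10 * 1024 * 1024  # assume ~10 MB of UPDATE data

for mss in (536, 2500, 8960):
    segments = -(-TABLE_BYTES // mss)  # ceiling division
    print(f"MSS {mss:5d}: ~{segments} segments")
```

So the small-MSS case moves over 16x as many segments, yet converges faster here, which is what makes a per-packet processing explanation hard to credit.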
As noted previously, I see ~22% retransmits and
out-of-orders, but only with large-MTU confederation eBGP.
The other scenarios have negligible or no
retransmits/out-of-orders.
Does this ring a bell with anyone? Is anyone on this list
from the IOS BGP group who knows why I'm getting these
results?
Thanks.
Pete.