[c-nsp] Any BGP fine tuning recommendation while Peering in IX
Nick Hilliard
nick at foobar.org
Thu May 9 05:46:11 EDT 2013
On 08/05/2013 09:50, arulgobinath emmanuel wrote:
> What are the common BGP fine-tuning best practices when peering with more
> than 200-300 peers, apart from Path MTU discovery and peer groups?
> I'm observing that when the RR flaps, CPU goes high (GSR 12406 / PRP-2 /
> 12.0(33)S10), and because of that the input queue on the interface fills
> up, which causes random flaps on almost half of the peers.
If your IX sessions are flapping when your RR sessions go down, the likely
reason is that the BGP process is failing to issue keepalives because it's
too busy with either best-path processing or network I/O between peers.
There are a couple of things to look out for here:
- check to see what's chewing the CPU (show proc cpu sorted). If it's "IP
Input", then you might be able to help this problem with "scheduler
allocate", and by doing the things that Adam Vitkovsky suggested separately.
- if you're not already using them, use peer-groups to cut down on cpu usage
- soft-reconfiguration inbound should be disabled on all sessions on the box
- make sure that none of your route-map statements contain any as-path
regex statements or anything else that will cause the cpu to spin
- does 12.0S support slow peer detection? I don't think it does, but if it
does, put your slow peers into a separate peer-group
- does this box need full routes? If you could cut down to IXP prefixes +
local prefixes + default, it might work a lot better. This may not be
feasible though.
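To illustrate the CPU check and "scheduler allocate": a rough sketch only,
since the allocate values here are purely illustrative and safe ranges vary
by platform:

```
! See which process is consuming CPU -- e.g. "IP Input" vs. "BGP Router"
show processes cpu sorted 5sec

! Illustrative values only: microseconds reserved for interrupt-level
! switching vs. process-level work. Bad values can make things worse,
! so change this carefully.
scheduler allocate 8000 1000
```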
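The peer-group and soft-reconfiguration points might look like this; the AS
numbers and addresses are documentation examples, not from the original post:

```
router bgp 64500
 ! One update group for all IX peers sharing the same outbound policy,
 ! so updates are generated once rather than per-neighbor
 neighbor IX-PEERS peer-group
 neighbor IX-PEERS route-map IX-OUT out
 neighbor 203.0.113.10 remote-as 64496
 neighbor 203.0.113.10 peer-group IX-PEERS
 neighbor 203.0.113.20 remote-as 64497
 neighbor 203.0.113.20 peer-group IX-PEERS
 ! Rely on the route-refresh capability instead of storing a per-peer
 ! copy of received routes in memory
 no neighbor IX-PEERS soft-reconfiguration inbound
```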
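For the route-map point, matching on communities or prefix-lists is much
cheaper than an as-path regex scan; a sketch, with the community value being
illustrative:

```
! Expensive -- forces a regex match against every path:
!   ip as-path access-list 1 permit _64496_
! Cheaper -- tag routes on ingress and match the community on egress:
ip community-list standard CUST-ROUTES permit 64500:100
route-map IX-OUT permit 10
 match community CUST-ROUTES
```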
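On trains that do support slow-peer handling (12.2SR/15.x era rather than
12.0S, as far as I recall), the knobs look roughly like this; the threshold
value is illustrative:

```
router bgp 64500
 address-family ipv4
  ! Flag peers that lag this many seconds behind the update group
  bgp slow-peer detection threshold 360
  ! Move a detected slow peer into its own dynamic update group
  neighbor 203.0.113.30 slow-peer split-update-group dynamic
```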
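And for the last point, cutting a transit feed down to just a default route
(IXP and local prefixes arriving on other sessions) could be sketched like
this; the transit neighbor address is illustrative:

```
ip prefix-list DEFAULT-ONLY permit 0.0.0.0/0
route-map TRANSIT-IN permit 10
 match ip address prefix-list DEFAULT-ONLY
router bgp 64500
 ! Accept only 0.0.0.0/0 from the transit session
 neighbor 198.51.100.1 route-map TRANSIT-IN in
```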
You could tweak your keepalive intervals upwards, but there are serious
side effects to this. You could also look at the "scheduler allocate"
command, but it's unlikely to help, and it would take a lot of work to
figure out what good and bad values would be.
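If you did experiment with the timers, the per-neighbor form looks like
this; values and address are illustrative, and a 120s keepalive means up to
360s before a dead peer is detected, which is part of the side effects
mentioned above:

```
router bgp 64500
 ! keepalive 120s, holdtime 360s -- fewer keepalives to generate, but
 ! much slower failure detection
 neighbor 203.0.113.10 timers 120 360
```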
I don't know whether any of these things will actually help much. The
PRP-2 uses a Freescale MPC7457 CPU, a chip that is about 10 years old.
You may be better off getting a more modern router at such a large IXP.
Nick