[c-nsp] Long list of route-maps
Arie Vayner (avayner)
avayner at cisco.com
Thu Mar 11 13:02:21 EST 2010
This is true, but the grouping of the policies should be done for the
egress policies, while the ingress policies can stay as they are.
Also, as this is a peering point, I do not think that whatever is received
should be sent on to the other peers, so I would say that the first entry
in any outgoing policy should block that kind of advertisement (set a
community per peer on ingress, and match on it to block towards the other
peers).
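Something along these lines, as a rough sketch (the community value, AS
number and route-map names below are just placeholders):

  ip community-list standard FROM-TRANSIT permit 65000:666

  route-map TRANSIT-IN permit 10
   set community 65000:666 additive

  route-map IX-PEER-OUT deny 10
   match community FROM-TRANSIT
  route-map IX-PEER-OUT permit 20
   ! ... the rest of that peer's existing egress policy ...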
I think increasing the MTU used by the BGP sessions (enabling PMTUD) would
help, as it would reduce the number of packets to be processed.
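For example, something like this on IOS (the AS number and neighbor address
are placeholders; the per-neighbor knob only exists on code that supports
it):

  ip tcp path-mtu-discovery
  !
  router bgp 65000
   neighbor 192.0.2.1 transport path-mtu-discovery

You can then check the negotiated MSS in the output of
"show ip bgp neighbors 192.0.2.1".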
If nothing else helps, it could be a good idea to try a CoPP policy that
polices the incoming BGP updates to a sensible rate, so that at link-up
events you pace the updates instead of overloading the CPU, which should
actually end up giving faster convergence...
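A rough sketch of what I mean (the rate and names are made up; you would
have to size the policer to your own peak update load and test it first):

  ip access-list extended BGP-CONTROL
   permit tcp any eq bgp any
   permit tcp any any eq bgp

  class-map match-all COPP-BGP
   match access-group name BGP-CONTROL

  policy-map COPP
   class COPP-BGP
    police 4000000 conform-action transmit exceed-action drop

  control-plane
   service-policy input COPP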
Another point to check is that you do not overflow the TCAM by installing
too many routes; once it is full, traffic starts being punted to the CPU...
(I do not think that is the case here, though.)
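On the 6500 you can get a rough idea with, for example:

  show mls cef maximum-routes
  show mls cef summary
  show mls cef exception status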
Arie
-----Original Message-----
From: Sven Huster [mailto:sven at huster.me.uk]
Sent: Thursday, March 11, 2010 19:46
To: Arie Vayner (avayner)
Cc: Paul Stewart; Andy B.; cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] Long list of route-maps
If any of the received updates leads to a new best path, surely those
updates now get processed by the 150 outbound route-maps.
So the statement that the grouping into update groups doesn't help is not
necessarily true here.
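You can check how the peers are actually being grouped (and whether the
outbound updates get replicated or generated per peer) with something like:

  show ip bgp update-group summary
  show ip bgp replication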
--
Sven
On 11 Mar 2010, at 17:33, Arie Vayner (avayner) wrote:
> This would not actually help much, as each received update still has to
> be analyzed separately.
> The grouping is important for egress policies - all BGP peers with the
> same egress policy are placed into the same BGP update group, dramatically
> reducing the processing of outgoing updates.
>
> Arie
>
> -----Original Message-----
> From: cisco-nsp-bounces at puck.nether.net
> [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Paul Stewart
> Sent: Thursday, March 11, 2010 19:27
> To: 'Andy B.'; cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] Long list of route-maps
>
> Why a route-map PER peer? Can you not group them under the same
> conditions and simplify things a bit?
>
> This may not be the problem... sounds like something else, possibly.
>
> Paul
>
>
> -----Original Message-----
> From: cisco-nsp-bounces at puck.nether.net
> [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Andy B.
> Sent: Thursday, March 11, 2010 12:19 PM
> To: cisco-nsp at puck.nether.net
> Subject: [c-nsp] Long list of route-maps
>
> I feel desperate: I just turned up a new Transit Session with an
> upstream and my router goes nuts and is dropping other BGP sessions on
> it: 4/0 (hold time expired) 0 bytes
>
> The situation is like this:
>
> The router is peering on a public IX with approximately 150 members.
> Each BGP session has its own route-map, so the list is really BIG!
>
> When I turned up my transit about an hour ago, CPU went to 100% and is
> still at 100% right now, and it drops BGP peers and brings them back,
> and drops them and brings them back, ... I'm in a loop, and I think the
> only way to get out of that loop is to bring up each BGP peer step by
> step - really not an option.
>
> CPU utilization for five seconds: 100%/5%; one minute: 99%; five minutes: 99%
>  PID Runtime(ms)   Invoked     uSecs   5Sec   1Min   5Min TTY Process
>  442    56982884  32932073      1730 83.93% 85.54% 82.20%   0 BGP Router
>  329     1639012   1857164       882  3.35%  2.01%  3.28%   0 IP RIB Update
>  403     6686764   2462837      2715  1.91%  0.71%  0.81%   0 BGP Scheduler
>  273     7514324  63409992       118  1.51%  1.55%  1.44%   0 IP Input
>  340      421908   2376861       177  1.35%  0.63%  0.96%   0 XDR mcast
>  553     3487144  30648800       113  0.87%  1.21%  1.28%   0 BGP I/O
>    9    35017808   1752692     19979  0.79%  0.54%  0.50%   0 Check heaps
>  550     7132896  84539324        84  0.47%  0.30%  0.32%   0 IPv6 Input
>   12    20351908 188141150       108  0.15%  0.54%  0.36%   0 ARP Input
>  493      465108   4375354       106  0.07%  0.05%  0.03%   0 Port manager per
>   66        7916    202448        39  0.07%  0.00%  0.00%   0 BGP Open
>  333      284012  26233730        10  0.07%  0.16%  0.15%   0 TCP Timer
>  402       41152  73291500         0  0.07%  0.00%  0.00%   0 RADIUS
>   51      155388   2479245        62  0.07%  0.05%  0.05%   0 Per-Second Jobs
>   95       45504   2448923        18  0.07%  0.00%  0.00%   0 Heartbeat Proces
>   24     9277560  71169033       130  0.00%  0.10%  0.11%   0 IPC Seat Manager
>   52     1725428     43152     39984  0.00%  0.08%  0.05%   0 Per-minute Jobs
>  328     5111328     41990    121727  0.00%  0.23%  0.18%   0 IP Background
>  341      723640    487509      1484  0.00%  0.01%  0.00%   0 IPC LC Message H
>  353      851584   3607096       236  0.00%  0.03%  0.04%   0 CEF: IPv4 proces
>  371       20720   2473579         8  0.00%  0.01%  0.00%   0 OSPF-1 Router
>  372      475340   1244448       381  0.00%  0.03%  0.02%   0 HIDDEN VLAN Proc
>  546       33060    117785       280  0.00%  0.00%  0.03%   0 IPv6 RIB Redistr
>  551       77312  12335517         6  0.00%  0.03%  0.04%   0 IPv6 ND
>  560       49732     17676      2813  0.00%  0.01%  0.10%   0 SNMP Traps
>  562       41720       903     46201  0.00%  0.00%  0.23%   0 Collection proce
>  563    55952820    420289    133129  0.00%  0.88%  1.80%   0 BGP Scanner
>  565         784       200      3920  0.00%  0.15%  0.13%   1 SSH Process
>
>
> 6500 box with SXI3
>
> What is eating my router's CPU?
> Is it the big list of route-maps?
> _______________________________________________
> cisco-nsp mailing list cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>