[j-nsp] l2circuit communities

Richard A Steenbergen ras at e-gerbil.net
Mon May 24 15:48:13 EDT 2010


On Mon, May 24, 2010 at 09:01:05PM +0800, Mark Tinka wrote:
> On Monday 24 May 2010 02:33:08 am Richard A Steenbergen 
> wrote:
> 
> > Oh and a word of warning before anybody runs out and
> >  tries this, doing this kind of forwarding-table policy
> >  to select specific LSPs seems to SIGNIFICANTLY increase
> >  cpu use, to the point of almost never being < 100%:
> 
> Might you know why?

Looking at rtsockmon -t, there is a huge and constant flood of rpd route
changes after applying the install-nexthop LSP-mapping policy to the
forwarding table.
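
For reference, this is the general shape of the policy involved; the
community and LSP names below are just placeholders rather than the
exact config on this box:

policy-options {
    community CUST-LSP members 65000:100;
    policy-statement lsp-map {
        term mapped {
            from community CUST-LSP;
            then {
                install-nexthop lsp-regex ".*-cust-.*";
                accept;
            }
        }
    }
}
routing-options {
    forwarding-table {
        export lsp-map;
    }
}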

I picked one prefix at random and followed its updates in rtsockmon with
and without the install-nexthop policy enabled. With install-nexthop
enabled, the prefix saw 107 route change messages over the course of 5
hours, like so:

[00:00:35] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1048955 altfwdnhidx=0 filtidx=0
[00:00:36] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1048955 altfwdnhidx=0 filtidx=0
[00:10:27] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049583 altfwdnhidx=0 filtidx=0
[00:10:28] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049583 altfwdnhidx=0 filtidx=0
[00:11:18] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049828 altfwdnhidx=0 filtidx=0
[00:11:19] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049828 altfwdnhidx=0 filtidx=0
[00:40:05] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049079 altfwdnhidx=0 filtidx=0
[00:40:06] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049079 altfwdnhidx=0 filtidx=0
[00:40:52] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049799 altfwdnhidx=0 filtidx=0
[00:40:52] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049799 altfwdnhidx=0 filtidx=0
[00:40:53] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049799 altfwdnhidx=0 filtidx=0
[00:50:03] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1048947 altfwdnhidx=0 filtidx=0
[00:50:04] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1048947 altfwdnhidx=0 filtidx=0
[00:59:59] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049204 altfwdnhidx=0 filtidx=0
[01:00:00] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049204 altfwdnhidx=0 filtidx=0
[01:00:48] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049306 altfwdnhidx=0 filtidx=0
[01:00:49] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049306 altfwdnhidx=0 filtidx=0
[01:09:51] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049482 altfwdnhidx=0 filtidx=0
[01:09:51] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049482 altfwdnhidx=0 filtidx=0
[01:10:43] rpd      P    route      change  inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049720 altfwdnhidx=0 filtidx=0
[01:10:44] rpd      P    route      change  inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049720 altfwdnhidx=0 filtidx=0
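
(The messages above were grabbed from the Junos shell with something
along these lines; the grep just isolates the one prefix I picked.)

start shell
rtsockmon -t | grep 190.152.178.0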

Without the install-nexthop policy enabled, it never saw a single
message. BGP for the route was also rock solid: nothing ever changed in
the "show route" last-updated timestamp, and there was no network churn
at the time. Note the roughly 10-minute intervals between events, which
line up perfectly with my adjust-interval of 600 seconds. My guess is
that with install-nexthop enabled, every time the LSPs are resignaled to
update their bandwidth reservations, rpd ends up touching every route
that is using that LSP as a nexthop, driving CPU through the roof in the
process. This is a 9.5R4 system with all LSPs configured adaptive and
indirect nexthops enabled too.
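
For completeness, the LSPs themselves are just the usual adaptive plus
auto-bandwidth setup, something along these lines (name and destination
made up):

protocols {
    mpls {
        label-switched-path example-lsp {
            to 10.0.0.1;
            adaptive;
            auto-bandwidth {
                adjust-interval 600;
            }
        }
    }
}
routing-options {
    forwarding-table {
        indirect-next-hop;
    }
}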

-- 
Richard A Steenbergen <ras at e-gerbil.net>       http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)

