[j-nsp] l2circuit communities
Phil Bedard
philxor at gmail.com
Mon May 24 19:46:17 EDT 2010
A slightly different scenario, but I'm using CBF with a CoS next-hop-map to set specific lsp-next-hops per CoS class, also with auto-bandwidth, and I'm not seeing similar behavior.
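The setup looks roughly like this (the map, policy, and LSP names are just
placeholders):

class-of-service {
    forwarding-policy {
        next-hop-map cbf-map {
            forwarding-class expedited-forwarding {
                lsp-next-hop to-pe1-ef;    /* pin EF traffic to this LSP */
            }
            forwarding-class best-effort {
                lsp-next-hop to-pe1-be;    /* BE rides a separate LSP */
            }
        }
    }
}
policy-options {
    policy-statement cbf-export {
        then cos-next-hop-map cbf-map;     /* attach the map to matching routes */
    }
}
routing-options {
    forwarding-table {
        export cbf-export;                 /* applied as routes go into the FIB */
    }
}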
One thing I noticed is that while the router is doing the MBB (make-before-break) re-signal, the "ActiveRoute" count in "show mpls lsp" will drop to 0 but then immediately go back up to the prior value, which makes you wonder what's going on behind the scenes.
I'm using 9.3R3.8; hopefully this isn't something introduced later...
Are your paths actually changing output interfaces/paths, or is it just a bandwidth re-signal?
Phil
On May 24, 2010, at 3:48 PM, Richard A Steenbergen wrote:
> On Mon, May 24, 2010 at 09:01:05PM +0800, Mark Tinka wrote:
>> On Monday 24 May 2010 02:33:08 am Richard A Steenbergen
>> wrote:
>>
>>> Oh and a word of warning before anybody runs out and
>>> tries this: doing this kind of forwarding-table policy
>>> to select specific LSPs seems to SIGNIFICANTLY increase
>>> CPU use, to the point of it almost never being < 100%:
>>
>> Might you know why?
>
> Looking at rtsockmon -t, there is a huge and constant flood of rpd route
> changes after applying the LSP-mapping install-nexthop action to the
> forwarding-table export policy.
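>
> The policy is of this general shape (the community and LSP names are
> anonymized placeholders):
>
> policy-options {
>     policy-statement lsp-map {
>         term gold {
>             from community gold;                  /* tag set on the routes to steer */
>             then {
>                 install-nexthop lsp to-pe1-gold;  /* force this LSP as the nexthop */
>                 accept;
>             }
>         }
>     }
>     community gold members target:65000:100;
> }
> routing-options {
>     forwarding-table {
>         export lsp-map;                           /* evaluated per route at FIB install */
>     }
> }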
>
> I picked one prefix at random and followed its updates in rtsockmon with
> and without the install-nexthop policy enabled. With install-nexthop
> enabled, over the course of 5 hours this prefix saw route change
> messages 107 times, like so:
>
> [00:00:35] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1048955 altfwdnhidx=0 filtidx=0
> [00:00:36] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1048955 altfwdnhidx=0 filtidx=0
> [00:10:27] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049583 altfwdnhidx=0 filtidx=0
> [00:10:28] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049583 altfwdnhidx=0 filtidx=0
> [00:11:18] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049828 altfwdnhidx=0 filtidx=0
> [00:11:19] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049828 altfwdnhidx=0 filtidx=0
> [00:40:05] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049079 altfwdnhidx=0 filtidx=0
> [00:40:06] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049079 altfwdnhidx=0 filtidx=0
> [00:40:52] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049799 altfwdnhidx=0 filtidx=0
> [00:40:53] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049799 altfwdnhidx=0 filtidx=0
> [00:50:03] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1048947 altfwdnhidx=0 filtidx=0
> [00:50:04] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1048947 altfwdnhidx=0 filtidx=0
> [00:59:59] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049204 altfwdnhidx=0 filtidx=0
> [01:00:00] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049204 altfwdnhidx=0 filtidx=0
> [01:00:48] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049306 altfwdnhidx=0 filtidx=0
> [01:00:49] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049306 altfwdnhidx=0 filtidx=0
> [01:09:51] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049482 altfwdnhidx=0 filtidx=0
> [01:09:51] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049482 altfwdnhidx=0 filtidx=0
> [01:10:43] rpd P route change inet 190.152.178.0 tid=0 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049720 altfwdnhidx=0 filtidx=0
> [01:10:44] rpd P route change inet 190.152.178.0 tid=5 plen=24 type=user flags=0x0 nh=indr nhflags=0x84 nhidx=1049720 altfwdnhidx=0 filtidx=0
>
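> (For reference, a capture like the above can be taken from a root shell
> with something along the lines of "rtsockmon -t | grep 190.152.178.0";
> rtsockmon prints every routing-socket message, so the grep just follows
> the one prefix.)
>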
> Without the install-nexthop policy enabled, the prefix never saw a single
> message. BGP for the route was also rock solid: nothing ever changed in
> the "show route" last-updated timestamp, and there was no network churn
> at the time. Note the roughly 10-minute intervals between events, which
> line up perfectly with my adjust-interval of 600 seconds. My guess is
> that with install-nexthop enabled, every time the LSPs are re-signaled
> to update their bandwidth reservations, rpd ends up touching every route
> that is using those LSPs as a nexthop, driving CPU through the roof in
> the process. This is a 9.5R4 system with all LSPs adaptive and indirect
> nexthops enabled, too.
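>
> The relevant knobs, roughly (the LSP name is a placeholder):
>
> protocols {
>     mpls {
>         label-switched-path to-pe1-gold {
>             adaptive;                    /* make-before-break on re-signal */
>             auto-bandwidth {
>                 adjust-interval 600;     /* recompute reservation every 600s */
>             }
>         }
>     }
> }
> routing-options {
>     forwarding-table {
>         indirect-next-hop;               /* shared indirect nexthops in the FIB */
>     }
> }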
>
> --
> Richard A Steenbergen <ras at e-gerbil.net> http://www.e-gerbil.net/ras
> GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)