[j-nsp] auto b/w mpls best practice -- cpu spikes

adamv0025 at netconsultings.com adamv0025 at netconsultings.com
Sun Sep 16 05:29:48 EDT 2018


> From: tim tiriche
> Sent: Wednesday, September 12, 2018 6:04 AM
> 
> Hi,
> 
> Attached is my MPLS auto-bandwidth configuration, and I see frequent path
> changes and CPU spikes.  I have a small network and wanted to know if
> there are any optimizations/best practices I could follow to reduce the
> churn.
> 
> protocols {
>     mpls {
>         statistics {
>             file mpls.statistics size 1m files 10;
>             interval 300;
>             auto-bandwidth;
>         }
>         log-updown {
>             syslog;
>             trap;
>             trap-path-down;
>             trap-path-up;
>         }
>         traffic-engineering mpls-forwarding;
> 
>         rsvp-error-hold-time 25;
>         smart-optimize-timer 180;
>         ipv6-tunneling;
>         optimize-timer 3600;
>         label-switched-path <*> {
>             retry-timer 600;
>             random;
>             node-link-protection;
>             adaptive;
>             auto-bandwidth {
>                 adjust-interval 7200;
>                 adjust-threshold 20;
>                 minimum-bandwidth 1m;
>                 maximum-bandwidth 9g;
>                 adjust-threshold-overflow-limit 2;
>                 adjust-threshold-underflow-limit 4;
>             }
>             primary <*> {
>                 priority 5 5;
>             }
>         }
>
My advice in short: Integrated Services (IntServ) QoS sucks; use
Differentiated Services (DiffServ) QoS instead and keep RSVP-TE solely for
TE purposes. It will make your life so much easier.
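A minimal sketch of what that DiffServ-style split might look like (the
interface name, LSP name, destination, and values below are hypothetical,
not taken from the original post): per-class queuing lives under
class-of-service on each link, while the LSP carries at most a static
reservation used only for path placement:

```
/* Hypothetical example: queuing handled by CoS, RSVP-TE used only for TE */
class-of-service {
    interfaces {
        ge-0/0/0 {
            scheduler-map core-map;    /* per-class queuing on the link */
        }
    }
}
protocols {
    mpls {
        label-switched-path to-pe2 {
            to 192.0.2.2;              /* hypothetical egress */
            bandwidth 500m;            /* static reservation, no auto-bw */
            node-link-protection;
            adaptive;
        }
    }
}
```

With queuing decoupled from reservations, the LSP bandwidth no longer has
to track traffic in real time, so there is nothing for auto-bandwidth to
churn on.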

The problem with IntServ QoS is twofold:
1) TE tunnels need to have their BW adjusted.
2) Nodes in the network need to know the available BW (per class) on each
link.
Changes in (1) induce changes in (2); that's just not meant to scale.

Yes, you can make the setup scale, but it becomes very stiff:
1) Reduce the sampling frequency and/or lengthen the adjust interval, and
disable the underflow/overflow thresholds or make them large.
2) Populate link BW thresholds more sparsely.
The side effect is that you stop reacting to changes in time, and you'll
likely run into the BW trailing effect.
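If you do stay with auto-bandwidth, the damping above maps onto a handful
of knobs. A sketch against the poster's config (the values are illustrative
assumptions, not recommendations):

```
protocols {
    mpls {
        statistics {
            interval 600;              /* (1) sample less frequently */
        }
        label-switched-path <*> {
            auto-bandwidth {
                adjust-interval 14400; /* (1) adjust less frequently */
                adjust-threshold 30;   /* ignore smaller swings */
                /* omitting adjust-threshold-overflow-limit and
                   adjust-threshold-underflow-limit disables early,
                   off-cycle adjustments entirely */
            }
        }
    }
    rsvp {
        interface all {
            update-threshold 15;       /* (2) re-flood link BW to the IGP
                                          only on larger changes */
        }
    }
}
```

The RSVP update-threshold is what makes the link-BW advertisements sparser;
the MPLS knobs slow down how often the tunnels themselves resize.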

I guess I could come up with some use cases for IntServ, but those are
rather corner cases.
Thinking about it, I believe even those could be solved by adding more
static tunnels to increase the granularity of load distribution.

But I'd be very interested to hear about cases where IntServ is the only
remedy.
 
adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::
