[j-nsp] Best practice for igp/bgp metrics

Pavel Lunin plunin at gmail.com
Thu Oct 26 19:05:10 EDT 2017


Well, in fact I meant something different: setting role-based costs
rather than treating real link bandwidth as 1/cost.

>For LAG you should set minimum links to a number which allows you to
>carry traffic you need.

OK, good point. Though I am not sure that all vendors allow you to do it
this way.
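
On Junos this is a one-liner (a minimal sketch, assuming a hypothetical
four-member bundle ae0 that needs at least three members up to be worth
keeping in the topology):

    set interfaces ae0 aggregated-ether-options minimum-links 3

If fewer than three members are up, the whole bundle goes down and the
IGP reroutes around it instead of squeezing the traffic through the
survivors.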

Basically I agree with your concept, but it's worth noting that you assume
the current traffic is "given" and that links can't be saturated. This
assumption only holds when there is a great number of short-lived,
low-bandwidth sessions, like residential broadband subscribers' traffic,
and enough bandwidth to accommodate all of it. In this case, a saturated
link and the consequent drops are equivalent to "no service". However, in
some environments (many enterprise networks, some DCs) high-traffic
periods are observed during backups/replications/etc. This normally
implies a relatively low number of high-bandwidth, long-lasting TCP flows.
When forwarded through a saturated link, TCP slows down and the traffic
"adapts" to network conditions. Your backup will take 4 hours instead of
2. This is where real bandwidth might be a meaningful metric. If I
understand correctly, this is how it was supposed to work in the era when
you always had more traffic than available bandwidth.



On 25 Oct 2017 at 10:50 PM, "Saku Ytti" <saku at ytti.fi> wrote:

> I disagree. Either traffic fits on the SPT path or it does not;
> bandwidth is irrelevant.
>
> For LAG you should set minimum links to a number which allows you to
> carry traffic you need.
>
> Ideally you have capacity redundancy at the SPT level: if the best path
> goes down, you know the redundant path, and you know it can carry 100%
> of the demand. If this is something you cannot commercially promise, you
> need strategic TE to move traffic onto the next-best path when the SPT
> is full.
>
>
> On 25 October 2017 at 23:25, Pavel Lunin <plunin at gmail.com> wrote:
> > Reference bandwidth might however be useful for LAGs, where you may
> > want the cost of a link to adjust automatically if some members go down
> > (though I prefer ECMP in the core for most cases).
> >
> > And you can combine the role/latency approach with automatic reference
> > bandwidth-based cost if you configure the 'bandwidth' parameter on the
> > interfaces instead of a static IGP cost.
> >
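> > A minimal Junos sketch of that combination (interface name and values
> > are hypothetical, and the unit-level 'bandwidth' statement is
> > informational only, feeding the metric calculation rather than shaping
> > traffic):
> >
> >     set protocols isis reference-bandwidth 1000g
> >     set interfaces ge-0/0/0 unit 0 bandwidth 10g
> >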
> > On 25 Oct 2017 at 10:08 PM, "Saku Ytti" <saku at ytti.fi> wrote:
> >
> >> Hey,
> >>
> >> This only matters if you are letting the system assign the metric
> >> automatically based on bandwidth. The whole notion of preferring
> >> interfaces with the most bandwidth is fundamentally broken. If you are
> >> using this design, you might as well assign the same number to every
> >> interface and use strict hop count.
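> >>
> >> On Junos that flat-metric design is a one-liner per interface (a
> >> sketch; interface name hypothetical):
> >>
> >>     set protocols isis interface ge-0/0/0.0 level 2 metric 10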
> >>
> >> On 25 October 2017 at 22:41, Luis Balbinot <luis at luisbalbinot.com> wrote:
> >> > Never underestimate your reference-bandwidth!
> >> >
> >> > We recently set all our routers to 1000g (1 Tbps) and it was not a
> >> > trivial task. And now I feel like I'm going to regret that in a
> >> > couple of years. Even if you work with smaller circuits, having
> >> > larger numbers will give you more range to play with.
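> >> >
> >> > For example, with the reference at 1000g, a 100G link computes to
> >> > metric 1000g/100g = 10, a 10G link to 100 and a 1G link to 1000;
> >> > with a 100g reference those would compress to 1, 10 and 100.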
> >> >
> >> > Luis
> >> >
> >> > On Tue, Oct 24, 2017 at 8:50 AM, Alexander Dube <nsp at layerwerks.net> wrote:
> >> >> Hello,
> >> >>
> >> >> we're currently redesigning our backbone with multiple datacenters
> >> >> and PoPs, and we are looking for a best practice or a recommendation
> >> >> for configuring the metrics.
> >> >> What we have for now is a full-mesh backbone with underlying IS-IS.
> >> >> iBGP exports routes without any metric. LSPs are in loose mode and
> >> >> use the IS-IS metric for path calculation.
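> >> >>
> >> >> (For illustration, a loose-hop LSP on Junos looks roughly like this;
> >> >> names and addresses are hypothetical:
> >> >>
> >> >>     set protocols mpls label-switched-path to-pop1 to 192.0.2.1
> >> >>     set protocols mpls label-switched-path to-pop1 primary via-core
> >> >>     set protocols mpls path via-core 203.0.113.1 loose
> >> >>
> >> >> CSPF expands the loose hop using the IS-IS metrics in the TED.)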
> >> >>
> >> >> Do you have a recommendation for metrics/TE (IS-IS and BGP) to
> >> >> include values like path length (kilometers), bandwidth, maybe
> >> >> latency, etc. in the path calculation?
> >> >>
> >> >> Kind regards
> >> >> Alex
> >>
> >>
> >>
> >> --
> >>   ++ytti
>
>
>
> --
>   ++ytti
>

