[c-nsp] ASR9k Bundle QoS in 6.0.1

Robert Williams Robert at CustodianDC.com
Thu Jun 16 12:18:00 EDT 2016

> Your customers are running MPLS between their sites - across L2 MPLS provider Links?
> This is something that I also want to do as an enterprise, but was always worried about MTU etc.
> Just so I understand - this also causes a hashing issue for the ISPs, as the sources and destinations are many labels deep - and you only see the /30 IPs I have chosen for the links across your backbone?

Yes, that's correct. The 9K parses into the frame down to the src/dst IP and port, then hashes on that. However, it's unknowingly hashing the 'customer' MPLS adjacency endpoints, not the underlying real 'end-user' flows (for want of a better term), so the hash input is always the same.
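A minimal sketch of the effect (using a generic CRC32 stand-in; the real ASR9k hash polynomial is Cisco-proprietary, and the IPs below are hypothetical): when the only addresses visible to the hash are the two /30 endpoints of the customer adjacency, every inner end-user flow collapses onto one hash value, and therefore one bundle member.

```python
import zlib

def member_for(src_ip, dst_ip, src_port, dst_port, n_members):
    """Pick a bundle member from a flow key.
    Illustrative CRC32 hash, NOT the actual ASR9k algorithm."""
    key = f"{src_ip},{dst_ip},{src_port},{dst_port}".encode()
    return zlib.crc32(key) % n_members

# Thousands of distinct end-user flows exist inside the labels, but the
# router only ever sees the outer /30 pair of the customer's adjacency:
outer = ("10.0.0.1", "10.0.0.2", 0, 0)  # hypothetical /30 endpoints
members = {member_for(*outer, 4) for _ in range(1000)}
print(members)  # a single member index, whatever the inner traffic is
```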

The most interesting effect we see from this is after a port flap or configuration change: the MPLS-TE tunnels re-optimise without any awareness of whether the traffic inside them can be etherchannel-balanced successfully. Thus we have some 10G tunnels that traverse 4 x 10G physicals, but sometimes a whole 'tunnel' containing one customer adjacency lands on just one physical member of the bundle.

Alternatively, you can try to 'motivate' the customer to build 2 or 4 (or more) adjacencies between their edge boxes and use ECMP, giving you a reasonable variety of IPs to hash with. However, you still can't guarantee that the 9K will balance these out nicely for you; they may still finish up 3:1:0:0 across the 4 x 10G physical members.
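A rough illustration of why even four adjacencies can land 3:1:0:0 (again with a CRC32 stand-in hash and made-up /30s, not Cisco's real algorithm): with only four flow keys hashed into four buckets, collisions are the norm rather than the exception.

```python
import zlib
from collections import Counter

def member_for(src_ip, dst_ip, n_members=4):
    # Stand-in hash; the real ASR9k polynomial is proprietary.
    return zlib.crc32(f"{src_ip},{dst_ip}".encode()) % n_members

# Four hypothetical customer ECMP adjacencies, one /30 each:
adjacencies = [(f"10.0.{i}.1", f"10.0.{i}.2") for i in range(4)]
counts = Counter(member_for(s, d) for s, d in adjacencies)
print(sorted(counts.values(), reverse=True))
# For a uniform hash, a perfect 1:1:1:1 split of 4 keys into 4 buckets
# occurs only 4!/4**4 (about 9%) of the time, so skews like 3:1:0:0
# are entirely plausible.
```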

As an aside, in one case we finished up mapping a customer's 10G site-to-site service to its own 10G port pair on our DWDM core, for precisely this reason. It does seem a little crazy with today's technology, but no crazier than my original 'limit a bundle to a single total rate per class for all the member ports' issue :)


Robert Williams
Custodian Data Centre
Email: Robert at CustodianDC.com
