[c-nsp] Pseudowire and load-balancing - revisit

James Bensley jwbensley at gmail.com
Fri Apr 5 03:52:56 EDT 2019


On Tue, 19 Mar 2019 at 17:41, <adamv0025 at netconsultings.com> wrote:
> Interesting point you raised there,
> According to
> https://community.cisco.com/t5/service-providers-documents/asr9000-xr-load-balancing-architecture-and-characteristics/ta-p/3124809#field
> ASR9k can parse 0x8847 for entropy  MPLS - IP Payload, with < 4 labels (or
> looking for entropy label)
> -but I guess that is constrained only to MPLS enabled interfaces and the
> problem here is that the interface in question is just L2 access port.

Yes, 9K as P/LSR can parse MPLS label stacks for entropy but the
PE/LER node here is only parsing layer 2 headers on ingress to source
entropy...
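To make the distinction concrete, here is a toy Python sketch of hash-key selection (this is an illustration, not actual ASR9K microcode; all field names are made up). A P/LSR that parses the label stack gets per-flow keys, but a PE/LER that only looks at layer 2 on an L2 access port is left hashing on MACs:

```python
import zlib

def hash_key(frame, parse_mpls_payload=False):
    """Pick the fields fed into the load-balancing hash."""
    if frame["ethertype"] == 0x0800:  # IPv4: hash on the IP pair
        return (frame["src_ip"], frame["dst_ip"])
    if frame["ethertype"] == 0x8847 and parse_mpls_payload:
        return tuple(frame["labels"])  # P/LSR-style label-stack parsing
    # Plain L2 access port: only the MACs are visible, which on a
    # two-router p2p link means one key for all traffic.
    return (frame["src_mac"], frame["dst_mac"])

def pick_link(key, n_links=2):
    """Map a hash key onto one member link."""
    return zlib.crc32(repr(key).encode()) % n_links
```

Two MPLS frames with different label stacks but the same MACs produce identical keys when only layer 2 is parsed, so they always land on the same member link.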


On Fri, 15 Mar 2019 at 19:52, James Jun <james at towardex.com> wrote:
...
> But the feature problem here is that anytime EtherType is not IP, entropy isn't
> generated for FAT-PW as it won't see the payload IP headers after ENET?  If the
> customer turns off MPLS or IGP shortcuts and switches to pure IP forwarding, problem
> would go away.
...
> Yea, but ENET headers don't do any good for me.  I can't expect passenger traffic to be
> load balanced between various MACs, it's like expecting a LAG interface to generate
> decent balance on a two-router p2p link relying on src/dst MACs as entropy.

The feature request is a nice idea: have the ASR9K match 0x8847 on
ingress of a layer 2 interface that is the AC of a pseudowire.
However, you still have the same problem:

Your customer's two PEs are back-to-back, so the transport label for
any traffic between their PEs may always be the same (if there is one
at all, as they may be using PHP). If they are in turn running a
pseudowire between those PEs to carry their own customers' traffic
between the MX PEs, the service label may also always be the same, so
there is no additional entropy there; it's turtles all the way down.
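The "constant label stack" failure mode can be sketched in a few lines of Python (a toy model only; the label values below are hypothetical, e.g. a transport label of 24001 and a PW service label of 16):

```python
import zlib

def bucket(label_stack, n_links=2):
    """Hash an MPLS label stack onto one of n LAG/ECMP member links."""
    key = ",".join(str(label) for label in label_stack)
    return zlib.crc32(key.encode()) % n_links

# Every frame on the customer's pseudowire carries the same two labels...
frames = [[24001, 16] for _ in range(1000)]
used_links = {bucket(stack) for stack in frames}
# ...so all 1000 frames hash to a single member link: a constant input
# gives a constant key, no matter how deep the stack is parsed.
```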

The best solution, in my opinion, is to never have customer links as
large as any core/backbone link, as you have already mentioned.
Operationally this is easier said than done. Failing that, you need a
way to *reliably* introduce more entropy, which the requested feature
doesn't guarantee.

Opt 1. If your PE and your customer's PEs are in the same rack or
neighbouring racks, some people on this list have had good success
with per-packet load-balancing when the conditions are tightly
controlled (e.g. the cable length between your PE and theirs is the
same for all links between the devices, the runs are short, etc.).

Opt 2. Find a way to introduce more entropy between you and your
customer directly. Could they do something crazy like set up a bunch
of /31s between their PEs using sub-interfaces and ECMP across them,
assuming they have better visibility into their customers' traffic?
This doesn't solve the issue that they might still have just one
source and destination though, so it's not guaranteed to work (it
just shifts the problem domain).
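A quick Python model of why Opt 2 only shifts the problem (interface names and addresses below are invented for illustration): each /31 sub-interface gives the hash a distinct IP pair to see, so many inner flows spread out, but flow-based ECMP still pins any single flow to exactly one path.

```python
import zlib

def ecmp_pick(src, dst, paths):
    """Flow-based ECMP: the same (src, dst) pair always picks the same path."""
    return paths[zlib.crc32(f"{src}->{dst}".encode()) % len(paths)]

# Hypothetical /31 sub-interfaces between the customer's PEs.
paths = ["Hu0/0/0/1.101", "Hu0/0/0/1.102", "Hu0/0/0/1.103", "Hu0/0/0/1.104"]

# Many distinct inner flows spread across the /31s...
many_flows = {ecmp_pick(f"10.0.0.{i}", "192.0.2.1", paths) for i in range(256)}
# ...but a single source/destination pair is pinned to one path forever.
one_flow = {ecmp_pick("10.0.0.1", "192.0.2.1", paths) for _ in range(256)}
```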

I agree these aren't the best ideas in the world. If the entropy isn't
there, it isn't there; we can't shoehorn it in. I think the ratio of
customer link size to core link size is the real issue here.

Cheers,
James.

