[c-nsp] Seamless MPLS interacting with flat LDP domains
adamv0025 at netconsultings.com
Mon May 6 05:12:25 EDT 2019
> Robert Raszuk
> Sent: Friday, May 3, 2019 3:16 AM
>
> Radu,
>
> MPLS in a modern DC is a non-starter purely from a technology PoV.
>
> In modern DCs the compute nodes are your tenant PEs, all talking to the rest
> of the fabric at L3. So if you want to roll out MPLS you would need to do
> that on the compute nodes. That means that with exact match you will see, in
> MSDCs, millions of FECs and millions of underlay routes which you cannot
> summarize. Plus, on top of that, an overlay (say L3VPNs) for tenant/pod
> reachability.
>
Well, I guess that wherever summarization is used in a pure IP underlay, a
seamless-MPLS boundary would be used in an MPLS underlay, so the underlay
routes/FECs would then be contained to each domain's own compute nodes rather
than carried fabric-wide.
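For illustration only, a back-of-envelope sketch in Python of the state-count
argument. All numbers are made up, and the assumption that a compute node would
hold only its local domain's FECs plus labelled BGP routes for the remote PEs
it needs is mine, not something stated in the thread:

# Hypothetical MSDC-scale figures, purely illustrative.
COMPUTE_NODES = 1_000_000        # compute nodes acting as PEs across the fabric
LOCAL_DOMAIN_NODES = 500         # assumed size of one pod / access domain
REMOTE_PE_ROUTES = 2_000         # assumed labelled BGP routes a node actually needs

# Flat LDP domain with exact-match FECs: one /32 FEC per remote node, everywhere.
flat_ldp_fecs = COMPUTE_NODES

# Seamless-MPLS style boundary: local-domain FECs plus BGP-LU routes at the edge.
seamless_fecs = LOCAL_DOMAIN_NODES + REMOTE_PE_ROUTES

print(f"flat LDP underlay:      ~{flat_ldp_fecs:,} exact-match FECs per compute node")
print(f"seamless-MPLS boundary: ~{seamless_fecs:,} FECs/labelled routes per compute node")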
> Good luck with operating at that scale with MPLS forwarding. Besides, while
> some host NIC vendors claim support for MPLS, they do that only on ppt. In
> real life, take a very popular NIC vendor and you will find that MPLS
> packets do not get round-robin queuing to the kernel like IPv4 or IPv6 but
> all line up in a single buffer.
>
> Only by hacking the firmware of a NIC from some other NIC vendor (which out
> of the box was also far from decent) was I able to spread those flows
> around, so that the performance of MPLS streams arriving at the compute
> node was acceptable.
>
Hmm, good to know; I wasn't aware of this.
I guess this was specific to a certain setup, right?
https://www.netronome.com/blog/ovs-offload-models-used-nics-and-smartnics-pros-and-cons/
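As a toy illustration of the single-buffer behaviour described above (not any
real NIC's firmware; the queue count and packet fields are invented): with an
IPv4 ethertype the 5-tuple gives an RSS-style hash something to spread across
RX queues, while an MPLS ethertype the firmware cannot parse past leaves every
packet in one default queue.

import random
from collections import Counter

NUM_QUEUES = 8
random.seed(1)

def rss_queue(pkt: dict) -> int:
    """Pick an RX queue the way a simple RSS hash would."""
    if pkt["ethertype"] == 0x0800:
        # IPv4: hash the 5-tuple so distinct flows land on distinct queues.
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        return hash(key) % NUM_QUEUES
    # MPLS (0x8847): firmware that cannot look past the label stack has no
    # entropy to hash on, so everything falls back to queue 0.
    return 0

def flow(ethertype: int) -> dict:
    # The inner 5-tuple exists either way; for MPLS the NIC just never sees it.
    return {"ethertype": ethertype,
            "src": random.randrange(2**32), "dst": random.randrange(2**32),
            "sport": random.randrange(2**16), "dport": random.randrange(2**16),
            "proto": 6}

ipv4_spread = Counter(rss_queue(flow(0x0800)) for _ in range(10_000))
mpls_spread = Counter(rss_queue(flow(0x8847)) for _ in range(10_000))
print("IPv4 flows per queue:", dict(ipv4_spread))
print("MPLS flows per queue:", dict(mpls_spread))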
adam