[c-nsp] OSPF equal cost load balancing
James Bensley
jwbensley at gmail.com
Thu Aug 31 04:12:05 EDT 2017
On 31 August 2017 at 01:35, CiscoNSP List <CiscoNSP_list at hotmail.com> wrote:
>
> Aah - thank you James! So the ASR920 will not ECMP over 2 links, it
> requires 4... that would explain the difference between egress/ingress
> (and why the 920 is not working particularly well!)
I'm not 100% sure, but that is what the docs indicate (and, as we
know, Cisco docs aren't the best):
https://www.cisco.com/c/en/us/td/docs/routers/asr920/configuration/guide/mpls/mp-l3-vpns-xe-3s-asr920-book/mp-l3-vpns-xe-3s-asr920-book_chapter_0100.html#reference_EDE971A94BE6443995432BE8D9E82A25
Restrictions for ECMP Load Balancing
-Both 4 ECMP and 8 ECMP paths are supported.
-Load balancing is supported on global IPv4 and IPv6 traffic. For
global IPv4 and IPv6 traffic, the traffic distribution can be equal
among the available 8 links.
-Per packet load balancing is not supported.
-Label load balancing is supported.
> And yes, we are running MPLS over these links (but not a LAG, as
> mentioned) - so does your comment re MPLS hashing still apply to our
> setup, or only to a LAG?
Hmm, OK, well see above: "Label load balancing is supported." Although
it's not clear, I assume that means MPLS labels? So it seems ECMP
should support MPLS labelled paths and recognise different labelled
paths with the same IGP cost as separate "ECMP" paths.
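If you want to check which path a given flow actually hashes onto once
labels are in play, "show ip cef exact-route" should show the chosen
next hop - the addresses below are placeholders for a real
source/destination pair, not from your output:

show ip cef exact-route <src-ip> <dst-ip>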
> #sh ip cef YYY.YYY.229.193 internal
> YYY.YYY.229.192/30, epoch 2, flags [rnolbl, rlbls], RIB[B], refcnt 6, per-destination sharing
> sources: RIB
> feature space:
> IPRM: 0x00018000
> Broker: linked, distributed at 4th priority
> ifnums:
> GigabitEthernet0/0/22(29): XXX.XXX.67.152
> GigabitEthernet0/0/23(30): YYY.YYY.230.102
> path list 3C293988, 35 locks, per-destination, flags 0x26D [shble, hvsh, rif, rcrsv, hwcn, bgp]
> path 3C292714, share 1/1, type recursive, for IPv4
> recursive via XXX.XXX.76.211[IPv4:Default], fib 3C9AE64C, 1 terminal fib, v4:Default:XXX.XXX.76.211/32
> path list 3D583FF0, 13 locks, per-destination, flags 0x49 [shble, rif, hwcn]
> path 3D4A221C, share 0/1, type attached nexthop, for IPv4, flags [has-rpr]
> MPLS short path extensions: MOI flags = 0x21 label explicit-null
> nexthop YYY.YYY.230.102 GigabitEthernet0/0/23 label [explicit-null|explicit-null], IP adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3C287540
> repair: attached-nexthop XXX.XXX.67.152 GigabitEthernet0/0/22 (3D4A44A4)
> path 3D4A44A4, share 1/1, type attached nexthop, for IPv4, flags [has-rpr]
> MPLS short path extensions: MOI flags = 0x21 label explicit-null
> nexthop XXX.XXX.67.152 GigabitEthernet0/0/22 label [explicit-null|explicit-null], IP adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3CC74980
> repair: attached-nexthop YYY.YYY.230.102 GigabitEthernet0/0/23 (3D4A221C)
> output chain:
> loadinfo 3D43D410, per-session, 2 choices, flags 0103, 21 locks
> flags [Per-session, for-rx-IPv4, indirection]
> 16 hash buckets
> < 0 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51B980)
> <primary: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <repair: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> < 1 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51BA40)
> <primary: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <repair: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> < 2 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51B980)
> <primary: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <repair: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> < 3 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51BA40)
> <primary: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <repair: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> < 4 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51B980)
> <primary: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <repair: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> < 5 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51BA40)
> <primary: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <repair: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> < 6 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51B980)
> <primary: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <repair: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> < 7 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51BA40)
> <primary: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <repair: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> < 8 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51B980)
> <primary: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <repair: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> < 9 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51BA40)
> <primary: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <repair: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <10 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51B980)
> <primary: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <repair: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <11 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51BA40)
> <primary: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <repair: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <12 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51B980)
> <primary: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <repair: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <13 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51BA40)
> <primary: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <repair: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <14 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51B980)
> <primary: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> <repair: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <15 > label [explicit-null|explicit-null]
> FRR Primary (0x3D51BA40)
> <primary: TAG adj out of GigabitEthernet0/0/23, addr YYY.YYY.230.102 3CC74300>
> <repair: TAG adj out of GigabitEthernet0/0/22, addr XXX.XXX.67.152 3D643CE0>
> Subblocks:
> None
From the output above it looks like everything should work: the ASR920
has filled 8 of the 16 hash buckets with path 1 and the other 8 with
path 2, so even though only "4" or "8" are configurable when using
"platform loadbalance max-paths ...", it looks like it should be OK
(in that both paths are visible and equal in CEF).
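For reference, setting the path limit would look something like the
below - I'm assuming the "platform loadbalance max-paths" syntax from
the 920 doc quoted above, with 4 and 8 being the only valid values per
the restrictions, so double-check it on your box:

conf t
 platform loadbalance max-paths 8
end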
Is there a difference between the software CEF tables and what's
programmed in hardware? I think these commands will check the hardware
for you (although I'm not 100% sure):
show platform hardware pp active feature cef database ipv4 YYY.YYY.229.193/32
show platform hardware pp active feature cef database ipv4 YYY.YYY.229.194/32
show platform hardware pp active feature cef database ipv4 YYY.YYY.229.192/30
So given the above, it looks like you have 2 (and, based on your most
recent post, 4) labelled paths available in CEF - so why isn't the
traffic being evenly distributed?
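A crude way to measure the actual split is to compare the
load-interval output rates on the two member links (interface names
taken from your CEF output above):

show interfaces GigabitEthernet0/0/22 | include rate
show interfaces GigabitEthernet0/0/23 | include rate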
The link that Waris shared to the list a while back indicates that 4
ECMP paths are supported and that labelled traffic is supported for
hashing:
https://drive.google.com/drive/folders/0B5Q6qCRMe89_ZThNbWdDUWpyR2c
"Equal Cost Multi-Path (ECMP) Support - P router - 1. L3VPN traffic ,
default it will load balance on src/dst ip hash"
"Equal Cost Multi-Path (ECMP) Support - PE router - 1. L3VPN traffic –
based on src/ds tip hash"
> #ip cef load-sharing algorithm ?
> include-ports Algorithm that includes layer 4 ports
> original Original algorithm
> tunnel Algorithm for use in tunnel only environments
> universal Algorithm for use in most environments
I assume the options above are related to what Waris provided in that
PDF; "tunnel" is not wanted here due to the lack of GTP, and probably
not "universal" either, as you need to configure a hash offset
manually, so I would have thought that either "original" (just src/dst
IP) or "include-ports" would provide the hashing entropy you need.
When using "original" or "include-ports" with MPLS labelled traffic,
based on my quote from Waris' PDF above, I assume the hash would apply
to the L3VPN src/dst IPs or ports.
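If you want to try the L4-aware hash, I believe the standard IOS-XE
syntax below applies (verify it's accepted on the 920):

conf t
 ip cef load-sharing algorithm include-ports source destination
end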
On 31 August 2017 at 04:13, CBL <alandaluz at gmail.com> wrote:
> What if you were to set up four BDIs running OSPF/MPLS across these
> two physical interfaces - two BDIs per physical interface. Would that
> make ECMP work correctly on an ASR920?
>
> We're going to be in the same boat soon too: ASR920s on both sides
> with OSPF across two physical paths, and worried about load sharing.
> Most of our traffic is MPLS xconnects traversing these links
> (licensed backhauls).
This doesn't sound like a good idea to me (depending on your traffic
requirements). I have had mixed results when using BDIs/SVIs for core
MPLS-facing interfaces; as an example, PPPoE frames wouldn't forward
over a pseudowire when the ASR920 used a BDI for the core-facing
interface:
https://null.53bits.co.uk/index.php?page=mpls-over-phy-vs-eff-bdi-svi
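That said, for completeness, I assume the four-BDI idea would look
roughly like the below per physical interface - EVC/BDI syntax from
memory, with made-up VLAN IDs and RFC 5737 addressing, so treat it as
a sketch and test it first:

interface GigabitEthernet0/0/22
 service instance 10 ethernet
  encapsulation dot1q 10
  rewrite ingress tag pop 1 symmetric
  bridge-domain 10
 service instance 20 ethernet
  encapsulation dot1q 20
  rewrite ingress tag pop 1 symmetric
  bridge-domain 20
!
interface BDI10
 ip address 192.0.2.1 255.255.255.252
 ip ospf network point-to-point
 mpls ip
!
interface BDI20
 ip address 192.0.2.5 255.255.255.252
 ip ospf network point-to-point
 mpls ip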
Cheers,
James.