[j-nsp] MPLS VPN Load-balancing

Christian Martin christian.martin at teliris.com
Tue Aug 11 16:57:39 EDT 2009


Steven,

Thanks for the response.  I was unaware of this limitation in the ABC
chip, but I am still curious as to why the incoming traffic to the PE,
which should be hashed on IP only, is not properly balanced across the
outbound (MPLS) links.  The lookup is done on the IP header only,
which should have enough entropy to create a reasonably balanced
modulus.  Unless the outbound FIB entries play a role somehow?  I
could see this if it were a P router with MPLS coming in and out, but
here it is IP ---> push, push ---> forward...

Also note that the outer label is of course different on the two links  
(learned via LDP).
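To illustrate the intuition: a hash over the IP pair, reduced modulo the number of links, should spread distinct destinations across both circuits.  This is a generic sketch, not Juniper's actual (undisclosed) algorithm; the source address 172.16.1.1 is made up for the example, and the link names are taken from the forwarding-table output later in the thread.

```python
import zlib

# Outbound links from the forwarding table shown later in the thread
LINKS = ["t3-0/0/0.1000", "t3-0/0/1.1000"]

def pick_link(src_ip: str, dst_ip: str) -> str:
    """Hash the IP pair and reduce it modulo the link count.

    Generic illustration only - not the vendor's real hash.
    """
    key = f"{src_ip}->{dst_ip}".encode()
    return LINKS[zlib.crc32(key) % len(LINKS)]

# Flows to hosts .10-.13 in the subnet from the thread; each flow
# deterministically maps to one of the two links.
for dst in ("10.160.2.10", "10.160.2.11", "10.160.2.12", "10.160.2.13"):
    print(dst, "->", pick_link("172.16.1.1", dst))
```

With only four destinations a hash can still land unevenly, but every flow stays pinned to one link, which is the per-flow behavior being asked about.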

Cheers,
Chris





On Aug 11, 2009, at 4:08 PM, Steven Brenchley wrote:

> Hi Christian,
>      The problem you're hitting is a limitation of the M10i chip set.
> It can only look at the top two labels, and since both top labels are
> the same for all this traffic, it all looks like one flow and gets
> sent across the same link.  The only way I've been able to get a
> semblance of load balancing is by creating multiple LSPs between the
> same endpoints and manually pushing different traffic across the
> different LSPs.  It's really clunky, but there are no switches that
> will work around this limitation on the current M10i CFEB.
>       If you were using a T-series, M320, M120, or MX router you
> wouldn't have this limitation.  They can all look deeper into the
> packet to determine the load balance.
>       On a slightly brighter side, there are some new I-chip-based
> CFEBs on the horizon which will not have this limitation.  I don't
> recall when those will be available, but you could probably get hold
> of your SE and get a timetable from them.
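The limitation described above can be sketched as follows: if the hash key is built from only the top two labels, every stack that shares those labels falls into a single bucket, regardless of the IP payload underneath.  The label values and hash function here are hypothetical, purely for illustration.

```python
import zlib

def two_label_hash(labels, n_links=2):
    """Hash key built from only the top two labels of the stack,
    mimicking the chip limitation described above (illustrative)."""
    key = b"".join(l.to_bytes(3, "big") for l in labels[:2])
    return zlib.crc32(key) % n_links

# Hypothetical stacks: identical LDP (outer) and VPN (inner) labels,
# differing only in the unhashed payload beneath them.
stacks = [[299776, 16, payload] for payload in range(50)]
buckets = {two_label_hash(s) for s in stacks}
print(buckets)  # every stack lands in the same bucket
```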
>
> Steven Brenchley
>
> ===============================
>
> On Tue, Aug 11, 2009 at 3:16 PM, Christian Martin <christian.martin at teliris.com> wrote:
> NSP-ers,
>
> I have a Cisco---Juniper pair connected over a pair of T3 links.  The
> Juniper acts as a PE and is pushing two labels for a specific route
> learned on the PE, destined to a single remote PE well beyond the
> Cisco P.  The traffic is destined to several IP addresses clustered
> in this subnet (sort of like .10, .11, .12, .13), and the forwarding
> table shows two correctly installed next-hops - same VPN label,
> different LDP label (we have applied several different types of
> hashing and of course have our forwarding-table export policy in
> place).  Nevertheless, the Juniper is doing a very poor job of
> load-balancing the traffic, while the Cisco is splitting it almost
> evenly.  There is in fact a larger number of routes being shared
> across this link (about 20 or so VPN routes in different VRFs and
> thus different VPN labels - all sharing the same 2 LDP labels - but
> one particular subnet pair is exchanging quite a bit of traffic).
> All of the addresses are unique within our domain.
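For reference, the forwarding-table export policy mentioned above usually amounts to something like the following on M-series Junos.  The policy name is illustrative, and the available hash-key knobs vary by platform and release:

```
policy-options {
    policy-statement load-balance-policy {
        then {
            load-balance per-packet;   /* effectively per-flow on these PFEs */
        }
    }
}
routing-options {
    forwarding-table {
        export load-balance-policy;
    }
}
forwarding-options {
    hash-key {
        family inet {
            layer-3;
            layer-4;
        }
    }
}
```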
>
> Has anyone had issues with load-balancing a single subnet across an
> MPLS VPN link pair?  Note again that this is a PE-P (J--C) problem
> and that the IP addresses are all arranged locally.  I know Juniper
> is secretive about its hashing algorithm (can't lose any hero tests,
> can we?), but we are getting maybe a 5:1 load share if we are lucky,
> and we are bumping up against the T3's capacity.  The box is an
> M10i.
>
> As always, any help would be appreciated.
>
> Cheers,
> C
>
> show route forwarding-table destination 10.160.2.0/24
>
> Routing table: foo.inet
> Internet:
> Destination        Type RtRef Next hop           Type Index NhRef Netif
> 10.160.2.0/24      user     0                    indr 262175     2
>                                                  ulst 262196     2
>                                                  Push 74    600     1 t3-0/0/0.1000
>                                                  Push 74    632     1 t3-0/0/1.1000
>
>
> PE-P next-hop count (all showing load-balancing in effect)
>
> show route next-hop 172.16.255.11 terse | match > | count
> Count: 106 lines
>
>
> monitor interface traffic
>
> Interface    Link  Input bytes          (bps)   Output bytes         (bps)
>  t3-0/0/0      Up  541252651233   (25667208)    691166913860   (35611752)
>  t3-0/0/1      Up  279149587856    (8737568)     24893605598       (20112)
>
>
> Note that the Cisco side is splitting roughly 26/9 Mbps inbound,
> while the Juniper is sending 35 Mbps vs. 0.02 Mbps outbound.
>
>
>
>
>
>
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
> -- 
> Steven Brenchley
> -------------------------------------
> There are 10 types of people in the world: those who understand
> binary and those who don't.


