[c-nsp] multipath BGP not balancing equally.

Kevin Loch kloch at kl.net
Fri Aug 7 00:59:06 EDT 2009


This sounds like the unequal multipath distribution is a quirk (feature?)
of the Sup720's default load-sharing behavior.  It happens to any multipath
routes (static, OSPF, BGP) installed in the FIB:

http://cisco.cluepon.net/index.php/Sup720_load_balancing

shows different ratios than the OP reported, but that might be due to
differing behavior across IOS versions or hardware revisions.
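
If you want to see exactly where a given flow lands, you can ask the
box directly, something like this (just a sketch -- exact syntax varies
a bit by PFC/IOS version, and the addresses are placeholders):

  ! which adjacency does this src/dst pair hash to in hardware?
  show mls cef exact-route 192.0.2.10 198.51.100.20

  ! and the software CEF answer for comparison
  show ip cef exact-route 192.0.2.10 198.51.100.20

Walking a handful of src/dst pairs through that shows how the hash
buckets are actually spread across your paths.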

"mls ip cef load-sharing simple" works well for me
but "mls ip cef load-sharing full simple" should also work
if you also want layer4 hashes involved.
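
For reference, the change is just a global config line, e.g. (sketch
only -- check "mls ip cef load-sharing ?" on your train for the exact
keywords available):

  conf t
   ! L3 src/dst hash only, no per-box unique ID mixed in
   mls ip cef load-sharing simple
   ! or, to fold L4 ports into the hash as well:
   ! mls ip cef load-sharing full simple
  end

Expect flows to reshuffle across the paths when the hash changes.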

- Kevin

David Hughes wrote:
> 
> Hi
> 
> But seeing as the OP indicated that one of the circuits was 2GB 
> *underutilised*, you'd be looking for 3 src/dst pairs that were all doing 
> 2GB to get this situation.  It's looking pretty unlikely that this is a 
> hashing issue.
> 
> 
> David
> ...
> 
> On 06/08/2009, at 6:23 AM, Rodney Dunn wrote:
> 
>> Ah...good one.  If the sources were not random enough and it's NAT'ed 
>> to one external IP, you could really be multiplexing flows with NAT. ;)
>>
>>
>>
>> Dean Smith wrote:
>>> Would agree that volume is rare between two IP addresses, but we have 
>>> something similar, although not quite on the same scale.
>>> We NAT a very large organisation to the Internet.  They have a large 
>>> number of disparate sites that all do their own AV updates.  All the 
>>> PCs download at the same time in the evening, and we generate about 
>>> 0.75 Gb/s of traffic between our external PAT address and the AV 
>>> download site for a good couple of hours.  If we had a bigger internet 
>>> pipe it would be a higher figure (for less time, of course).
>>> Dean
>>> ----- Original Message ----- From: "Rodney Dunn" <rodunn at cisco.com>
>>> To: "Mikael Abrahamsson" <swmike at swm.pp.se>
>>> Cc: "Cisco" <cisco-nsp at puck.nether.net>
>>> Sent: Wednesday, August 05, 2009 2:19 PM
>>> Subject: Re: [c-nsp] multipath BGP not balancing equally.
>>>> For small flow combinations you are right.  BTW, it would be just L3 
>>>> src/dst flows by default unless the L4 port option is enabled.
>>>>
>>>> I thought about a single flow causing the difference by hashing down 
>>>> one of the paths.  But 2G, while not impossible, typically isn't seen 
>>>> between two IP addresses.  It's something to check, though, for sure.
>>>>
>>>> Rodney
>>>>
>>>>
>>>>
>>>> Mikael Abrahamsson wrote:
>>>>> On Tue, 4 Aug 2009, Rodney Dunn wrote:
>>>>>
>>>>>> That's usually caused by routes not being the same on the paths.
>>>>>
>>>>> It was my understanding that this is usually caused by not having 
>>>>> enough L4 flows to load-share on...?  I.e. if you have 100 TCP flows 
>>>>> and 4 paths, that's not enough flows to get a good spread, but if 
>>>>> you instead have 10k flows and all of them are low-speed, then the 
>>>>> odds of them being equally load-shared are much better?
>>>>>
> 
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/


