[e-nsp] x650 FDB table full ???

Marcin Kuczera marcin at leon.pl
Tue Mar 19 06:55:22 EDT 2013


On 2013-03-18 18:25, Robert Kerr wrote:
> On 18/03/13 15:00, Marcin Kuczera wrote:
>> On 2013-03-18 15:03, Robert Kerr wrote:
>>> These two commands are probably relevant:
> [snip output]
>
>> Doesn't look too heavy ;)
> It doesn't - I was surprised to see only 1 'Local IPv4 Host' as I had
> expected this to be close to the size of your fdb table. I now realise
> this is in fact the ARP table, and as you are doing L2 only it is
> correct to be empty. It seems these commands only show memory in use for
> layer 3 stuff and not layer 2.

Have a look at the links to the dump files below.


0xfff seems to be the last bucket of "some FDB hardware space" - "some" 
because I see three tables here, which is strange.
0xfff means 16x256 = 4096 buckets; if the capacity of each bucket is 8, 
that gives 32k entries.
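Quick sanity check of that arithmetic in Python (just the numbers 
guessed above - 4096 buckets of depth 8 - nothing confirmed by Extreme):

# Back-of-the-envelope FDB capacity check. The bucket count and depth
# are the values guessed above, not confirmed by any Extreme docs.
LAST_BUCKET = 0xfff            # highest bucket index seen in the dump
NUM_BUCKETS = LAST_BUCKET + 1  # 0x000..0xfff -> 16*256 = 4096 buckets
BUCKET_DEPTH = 8               # assumed entries per bucket

print(NUM_BUCKETS * BUCKET_DEPTH)  # 32768, i.e. the advertised ~32k

(A toy model of the crc32/TCAM spill-over described in the quoted 
thread below is at the end of this message.)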


I'm lost - this FDB has entries. In short:
Slot-1 SummitStack-GZE.3 # debug hal show fdb
TIG-24X SW FDB table:
Software-learned entries:
... (entries)
Software-learned In-use count: 15681

Software-learned entries for "c"-series:
...(entries)
Software-learned for "c"-series In-use count: 15661

Software-learned entries for "a"-series and "original"-series:
...(entries)
Software-learned for "a"-series and "original"-series In-use count: 13237

Software-learned entries for "e"-series:
...(entries)
Software-learned for "e"-series  In-use count: 7455

Hardware-learned entries:
MAC               VlanId    Flags Port  HIT    VPLS
===================================================
Hardware-learned In-use count: 0

   Slot-1 SummitStack-GZE.3 #

Hardware-learned entries = 0.
Funny - the x450 (not the 'a' model, the old one) FDB dump does show hardware entries...

Dump locations:
http://noc.leon.pl/~marcin/fdb-dumps/x650-fdb.log
http://noc.leon.pl/~marcin/fdb-dumps/x450-fdb.log
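If anyone wants to tally per-bucket occupancy in those dumps, something 
along these lines should do it - note the regex is only my guess at the 
entry format, adjust it to match what the dump lines actually look like:

# Rough per-bucket occupancy tally for the FDB dump logs above.
# The regex is a guess at the dump format (a bucket index somewhere
# on each entry line) - adapt it to the real lines.
import re
from collections import Counter

buckets = Counter()
with open("x650-fdb.log") as f:
    for line in f:
        m = re.search(r"bucket[:\s]+(0x[0-9a-fA-F]+|\d+)", line, re.I)
        if m:
            buckets[int(m.group(1), 0)] += 1

print(len(buckets), "buckets seen")
print(sum(1 for n in buckets.values() if n >= 8), "at/over depth 8")
for idx, n in buckets.most_common(10):
    print("bucket 0x%03x: %d entries" % (idx, n))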


Some Extreme developer is needed here ;)

Marcin




>
>>> It all depends on how the memory is split up - there are quite a lot of
>>> configuration options here. The 32k number probably assumes you have no
>>> L3 routes, no snooped multicast entries, and no ACLs taking up memory.
>> So - I have just 3 L3 routes, <1000 multicast groups, and almost no ACLs...
>> Then I suppose I should redistribute resources.
> As you have a lot of multicast groups, it is probably worth setting
> things up to let multicast use the route table as well as the hash,
> and making sure compression is enabled:
>
> http://ethernation.net/Default.aspx?tabid=84&forumid=8&threadid=2509&scope=posts
>
> I don't think this is going to fix your problem though... as IGMP
> snooping appears to only use the L3 table which is otherwise empty.
>
>>> 'Extended IPv4 Host Cache' in chapter 31 of the XOS concepts guide is
>>> worth a read. If you're not doing much in the way of L3 stuff on the
>>> stack it looks like you can reduce the number of entries reserved for
>>> routes in order to free up more for hosts.
>> Here:
>> http://ethernation.net/Default.aspx?tabid=84&forumid=8&postid=706&scope=posts
>> I found this:
>> The Summit switches use a hash algorithm to populate the internal
>> memory. The memory is divided into buckets, and each bucket can hold
>> up to 8 entries. The hash algorithm used should be crc32, which is
>> intended to give the best distribution of addresses across the
>> entries; however, it is possible for more than 8 addresses to hash
>> to the same bucket. When that happens the switch should use its TCAM
>> memory, which is usually used for the route table, to handle those
>> entries. The TCAM memory is one-to-one, so only one entry is placed
>> in each memory location. Note that the amount of TCAM available
>> depends on how many routes you are using.
>> This is exactly what I felt about FDB table construction.
>> My question is - how do I force the x650 and x450 switches to use TCAM?
> The concepts guide talks about how to use the route table when the L3
> hash buckets are full, but nothing about layer 2. As far as I can tell
> 'debug hal show forwarding distributions' only shows the layer 3 hash.
>
> It is probably true the FDB is also a hash divided into various buckets,
> but I am not sure how to view the layer 2 hash information... It looks
> like there is a 'debug hal show fdb' which will list every fdb entry,
> and does show which bucket each one is in. The output is very odd
> though, giving the same tables multiple times for different switch
> series. It also doesn't give a nice summary of how much is in use in
> each bucket.
>
>> Maybe it is possible to extend the number of entries per bucket to, e.g., 16?
> Maybe... but you would need to borrow memory from somewhere or you just
> end up with half as many buckets.
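
To make the crc32/bucket/TCAM behaviour quoted above concrete, here is 
a toy model in Python. This is my own sketch of the scheme as 
described, not Extreme's actual implementation - bucket count, depth 
and TCAM size are illustrative guesses only:

# Toy model of the scheme quoted above: crc32 of the MAC picks one of
# 4096 buckets (depth 8); on overflow the entry spills into a
# one-entry-per-slot TCAM region. All sizes are illustrative guesses.
import zlib

NUM_BUCKETS, BUCKET_DEPTH, TCAM_SLOTS = 4096, 8, 512

buckets = {i: [] for i in range(NUM_BUCKETS)}
tcam = []

def learn(mac):
    idx = zlib.crc32(mac) % NUM_BUCKETS
    if len(buckets[idx]) < BUCKET_DEPTH:
        buckets[idx].append(mac)
        return "hash bucket 0x%03x" % idx
    if len(tcam) < TCAM_SLOTS:   # one entry per TCAM location
        tcam.append(mac)
        return "tcam"
    return "dropped - table full"

# example: learn a few sequential MACs
for i in range(5):
    mac = (0x020000000000 + i).to_bytes(6, "big")
    print(mac.hex(":"), "->", learn(mac))

In a model like this the hash table fills unevenly, which would explain 
hitting "FDB full" well below the nominal 32k if the TCAM spill-over is 
not happening on the x650.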
>



