[e-nsp] x650 FDB table full ???

Marcin Kuczera marcin at leon.pl
Mon Mar 18 11:00:26 EDT 2013


On 2013-03-18 15:03, Robert Kerr wrote:
> On 15/03/13 20:16, Marcin Kuczera wrote:
>> hello,
>>
>> recently my switch shows:
>> 03/15/2013 15:33:54.64 <Warn:HAL.FDB.L2SlotTblFull> Slot-1: FDB entry
>> not added on slot 1. Hardware Table full.
>>
>>
>> Slot-1 SummitStack-GZE.12 # show fdb stats
>>
>> Total: 15731 Static: 0 Perm: 0 Dyn: 15731 Dropped: 0
>> FDB Aging time: 300
>>
>> 2 switches in stack.
>>
>> The specification says 32k L2 addresses...
>>
>> What's wrong?
> These two commands are probably relevant:
>
>   show iproute reserved-entries statistics

Slot-1 SummitStack-GZE.1 # show iproute reserved-entries statistics
                        |-----In HW Route Table----|   |--In HW L3 Hash Table--|
                        # Used Routes   # IPv4 Hosts   IPv4   IPv4  IPv6   IPv4
Slot  Type              IPv4   IPv6    Local Remote   Local  Rem.   Loc.  MCast
----  ---------------- ------  -----  ------ ------   -----  -----  ----  -----
1     X650-24x(SS)          3      0       1      0       0      0     0    502
2     X650-24x(SS)          3      0       1      0       0      0     0    502
3                           -      -       -      -       -      -     -      -
4                           -      -       -      -       -      -     -      -
5                           -      -       -      -       -      -     -      -
6                           -      -       -      -       -      -     -      -
7                           -      -       -      -       -      -     -      -
8                           -      -       -      -       -      -     -      -

Theoretical maximum for each resource type:
       X440                 32     16      64     64     509    512   256  * 256
       "e"-series          480    240     512    512    2045   2048  1024  *2048
       "a"-series        12256   6128    8189  12288    8189   8192  4096  *5000
       X650, E4G-200     12256   6128    8189  12288    8189   8192  4096  *6000
       X460, E4G-400     12256   6128   12288  12288   16381  16384  8192  *6000
       X670              16352   8176    8189  16384    8189   8192  4096  *4096
       X480             262112   8192   16381  40960   16381  16384  8192  *6000
       X480(40G4X)       16352   8176    8189  16384    8189   8192  4096  *4096

Flags: (!) Indicates all reserved route entries in use.
        (d) Indicates only direct IPv4 routes are installed.
        (>) Some IPv6 routes with mask > 64 bits are installed and do not
            use entries in HW Route Table.
        (*) Assumes IP Multicast compression is on.
Slot-1 SummitStack-GZE.2 #

>   debug hal show forwarding distributions

Slot-1 SummitStack-GZE.2 # debug hal show forwarding distributions
Current hash table forwarding algorithm: crc32

Forwarding entries occupying L3 Hash Table and/or L3 Route Table:
   Resource Type          Total    Hardware Table
   -------------------   ------    ----------------------
   IPv4 Unicast Local         1    L3 Hash or Route Table (see Note)
   IPv4 Unicast Remote        0    L3 Hash or Route Table (see Note)
   IPv4 Multicast           493    L3 Hash Only
   IPv6 Unicast               0    L3 Hash Only
   IPv4 Routes                3    Route Table Only
   IPv6 Routes                0    Route Table Only
Note: Refer to documentation of:
         "configure iproute reserved-entries <num_routes_needed>"
         "show iproute reserved-entries statistics"
       IPv4 Hosts may not be on all slots since each slot removes unused hosts.

Table simulation statistics:
Hash    Hardware   Resource       In L3    Route Table    Failed
Algo.   Type       Type            Hash    # In/Avail.    No Room
-----   ---------  -----------    -----    -----------    -------
crc32   a-series   IPv4 UC Loc        0        1/12253          0
                    IPv4 UC Rem        0        0/12253          0
                    IPv4 MC          493      n/a                0
                    IPv6 UC            0      n/a                0

crc32   e-series   IPv4 UC Loc        1        0/    0          0
                    IPv4 UC Rem        0        0/    0          0
                    IPv4 MC          493      n/a                0
                    IPv6 UC            0      n/a                0

crc16   a-series   IPv4 UC Loc        0        1/12253          0
                    IPv4 UC Rem        0        0/12253          0
                    IPv4 MC          493      n/a                0
                    IPv6 UC            0      n/a                0

crc16   e-series   IPv4 UC Loc        1        0/    0          0
                    IPv4 UC Rem        0        0/    0          0
                    IPv4 MC          493      n/a                0
                    IPv6 UC            0      n/a                0

Hash bucket distribution simulation:

For a-series:
crc16[ 0 entries]:  625 buckets         crc32[ 0 entries]:  643 buckets
crc16[ 1   entry]:  319 buckets         crc32[ 1   entry]:  281 buckets
crc16[ 2 entries]:   67 buckets         crc32[ 2 entries]:   89 buckets
crc16[ 3 entries]:   12 buckets         crc32[ 3 entries]:   10 buckets
crc16[ 4 entries]:    1 bucket          crc32[ 4 entries]:    1 bucket
crc16[ 5 entries]:    0 buckets         crc32[ 5 entries]:    0 buckets
crc16[ 6 entries]:    0 buckets         crc32[ 6 entries]:    0 buckets
crc16[ 7 entries]:    0 buckets         crc32[ 7 entries]:    0 buckets
crc16[ 8 entries]:    0 buckets         crc32[ 8 entries]:    0 buckets
crc16[>8 entries]:    0 buckets         crc32[>8 entries]:    0 buckets
Hardware newer than a- and e-series allows configuration of dual-hash via:
     configure forwarding hash-algorithm [crc16 | crc32] {dual-hash [on | off]}

   If dual-hash is "off", utilization is equal to "a"-series.
   If dual-hash is "on" (default), utilization is typically better.

For e-series:
crc16[ 0 entries]:   38 buckets         crc32[ 0 entries]:   35 buckets
crc16[ 1   entry]:   65 buckets         crc32[ 1   entry]:   73 buckets
crc16[ 2 entries]:   80 buckets         crc32[ 2 entries]:   75 buckets
crc16[ 3 entries]:   43 buckets         crc32[ 3 entries]:   39 buckets
crc16[ 4 entries]:   17 buckets         crc32[ 4 entries]:   23 buckets
crc16[ 5 entries]:   10 buckets         crc32[ 5 entries]:    7 buckets
crc16[ 6 entries]:    0 buckets         crc32[ 6 entries]:    2 buckets
crc16[ 7 entries]:    2 buckets         crc32[ 7 entries]:    1 bucket
crc16[ 8 entries]:    1 bucket          crc32[ 8 entries]:    1 bucket
crc16[>8 entries]:    0 buckets         crc32[>8 entries]:    0 buckets

Note: To display the current hash bucket distribution for one slot, use:
       "debug hal show forwarding distributions slot <slot>"
Slot-1 SummitStack-GZE.3 #


Doesn't look too heavy ;)

>
> It all depends how the memory is split up - there are quite a lot of
> configuration options here. The 32k number probably assumes you have no
> L3 routes, no snooped multicast entries, and no ACLs taking up memory.

So - I have just 3 L3 routes, fewer than 1000 multicast groups, and almost no ACLs...

Then I suppose I should redistribute resources.

> 'Extended IPv4 Host Cache' in chapter 31 of the XOS concepts guide is
> worth a read. If you're not doing much in the way of L3 stuff on the
> stack it looks like you can reduce the number of entries reserved for
> routes in order to free up more for hosts.
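
So if I understand Robert correctly, with only 3 IPv4 routes in use here I 
could hand back most of the reserved route entries with something like 
"configure iproute reserved-entries 100" and then re-check with 
"show iproute reserved-entries statistics" - the value 100 is just an 
arbitrary illustration on my side, and I'd want to read the Extended IPv4 
Host Cache chapter first to be sure what it actually frees up for hosts.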

Here:
http://ethernation.net/Default.aspx?tabid=84&forumid=8&postid=706&scope=posts

I found this:
The Summit switches use a hash algorithm to populate the internal memory. 
The memory is divided into buckets, and each bucket can hold up to 8 
entries. The hash algorithm in use should be crc32, which is intended to 
give the best distribution of addresses across the buckets; however, it 
is still possible for more than 8 addresses to hash to the same bucket. 
When that happens the switch should use its TCAM memory, which is 
normally used for the route table, to hold the overflow entries. The TCAM 
is a one-to-one memory, so only one entry is placed in each location. One 
thing to note is that the amount of TCAM available depends on how many 
routes you are using.

This is exactly what I suspected about how the FDB table is built.
My question is: how do I force the x650 and x450 switches to use the TCAM?

Or maybe it is possible to extend the number of entries per bucket, e.g. to 16?
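
To sanity-check the bucket/overflow explanation quoted above, here is a 
minimal conceptual simulation in Python (my own sketch only - the 
4096-bucket count, the tiny 16-entry overflow area, and the use of random 
MAC addresses are assumptions, not the real ASIC layout; only the 8 
entries per bucket and the crc32 hash come from the text above):

import random
import zlib

# Conceptual model only - the sizes below are assumptions, not the real
# hardware layout.
NUM_BUCKETS = 4096        # 4096 buckets * 8 entries = 32k nominal L2 capacity
BUCKET_SIZE = 8           # per the quoted explanation
TCAM_OVERFLOW_SLOTS = 16  # assumed small one-to-one overflow area

def random_mac():
    return bytes(random.randrange(256) for _ in range(6))

def simulate(num_macs):
    buckets = [0] * NUM_BUCKETS
    tcam_used = 0
    failed = 0
    for _ in range(num_macs):
        b = zlib.crc32(random_mac()) % NUM_BUCKETS   # crc32 hash -> bucket index
        if buckets[b] < BUCKET_SIZE:
            buckets[b] += 1                          # fits in the hashed table
        elif tcam_used < TCAM_OVERFLOW_SLOTS:
            tcam_used += 1                           # spills into one-to-one TCAM
        else:
            failed += 1                              # would log "Hardware Table full"
    return tcam_used, failed

for n in (8000, 12000, 16000, 24000):
    tcam, failed = simulate(n)
    print(f"{n:6d} MACs -> overflow slots used: {tcam:3d}, failed inserts: {failed}")

Even with perfectly random addresses, the small overflow area in this toy 
model is exhausted somewhere in the low tens of thousands of MACs, well 
before the nominal 32k - at least in the same ballpark as the ~15.7k at 
which my table reports full. Real MACs are not uniformly distributed, so 
the real switch may do better or worse than this.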

Regards,
Marcin
