[c-nsp] ASR 1002-X FIB scalability

Beck, Andre cisco-nsp at ibh.net
Tue May 28 15:03:00 EDT 2013


On Tue, May 28, 2013 at 12:06:32PM -0400, Pete Lumbis wrote:
> Since this is hardware based* you'll also need to look at how the FIB fit
> down into TCAM with "show plat hard qfp act tcam resource-manager usage"
> 
> *CPP is a network processor not an ASIC like 6k, but it does rely on
> similar TCAM

According to my "research" so far, the ASR1k does use TCAM but *not*
for the actual FIB. It's used for ACLs and QoS stuff, though. I'm
currently at

asr1002-x#sh ip cef summary 
IPv4 CEF is enabled for distributed and running
VRF Default
 3342651 prefixes (3342651/0 fwd/non-fwd)
 Table id 0x0
 Database epoch:        2 (3342651 entries at this epoch)

but the TCAM is essentially unused:

asr1002-x#show plat hard qfp act tcam resource-manager usage
QFP TCAM Usage Information

80 Bit Region Information
--------------------------
Name                                : Leaf Region #0
Number of cells per entry           : 1
Current 80 bit entries used         : 0
Current used cell entries           : 0
Current free cell entries           : 0

160 Bit Region Information
--------------------------
Name                                : Leaf Region #1
Number of cells per entry           : 2
Current 160 bits entries used       : 4
Current used cell entries           : 8
Current free cell entries           : 4088

320 Bit Region Information
--------------------------
Name                                : Leaf Region #2
Number of cells per entry           : 4
Current 320 bits entries used       : 0
Current used cell entries           : 0
Current free cell entries           : 0


Total TCAM Cell Usage Information
----------------------------------
Name                                : TCAM #0 on CPP #0
Total number of regions             : 3
Total tcam used cell entries        : 8
Total tcam free cell entries        : 524280
Threshold status                    : below critical limit
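
As a sanity check on the cell arithmetic in that output (everything below is taken straight from the numbers above, nothing Cisco-internal assumed): an 80-bit cell is the base unit, so a 160-bit entry costs 2 cells and a 320-bit entry costs 4.

```python
# Verify the QFP TCAM cell accounting from the output above.
# An 80-bit cell is the base unit; wider entries consume multiple cells.
regions = [
    # (name, cells_per_entry, entries_used) -- copied from the output
    ("Leaf Region #0 (80 bit)",  1, 0),
    ("Leaf Region #1 (160 bit)", 2, 4),
    ("Leaf Region #2 (320 bit)", 4, 0),
]

used_cells = sum(cells * entries for _, cells, entries in regions)
total_cells = 8 + 524280  # "used" + "free" from the totals section

print(f"used cells: {used_cells}")                 # 8, matching the totals
print(f"free cells: {total_cells - used_cells}")   # 524280
```

So the four 160-bit entries account for exactly the 8 used cells out of 512K, i.e. the TCAM really is idle as far as the FIB is concerned.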


Given IOSd is nearly maxed out now

asr1002-x#sh mem sum
                Head    Total(b)     Used(b)     Free(b)   Lowest(b)  Largest(b)
Processor  7F1BA8969010   7056007904   6868929988   187077916   186832504   185968684

the published limit of approx. 3.5M IPv4 prefixes sounds very realistic.
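
To put a rough number on that: back-of-the-envelope arithmetic on the figures above, assuming (optimistically) that all free processor memory would go to CEF and that the average cost per prefix stays flat. The "average bytes per prefix" is an upper bound, since the processor pool holds far more than just CEF state.

```python
# Back-of-the-envelope: how much further can IOSd grow per prefix?
# Figures from "sh mem sum" and "sh ip cef summary" above.
prefixes   = 3_342_651      # current IPv4 CEF prefixes
used_bytes = 6_868_929_988  # Processor pool, Used(b)
free_bytes =   187_077_916  # Processor pool, Free(b)

avg_per_prefix = used_bytes / prefixes   # upper bound, pool holds more than CEF
headroom = free_bytes / avg_per_prefix   # additional prefixes at that rate

print(f"avg bytes/prefix (upper bound): {avg_per_prefix:.0f}")   # ~2055
print(f"estimated additional prefixes:  {headroom:.0f}")         # ~91k
print(f"estimated total capacity:       {prefixes + headroom:.0f}")
```

That lands around 3.4M total, which is in the same ballpark as the published ~3.5M figure (and the real per-prefix marginal cost is certainly lower than this crude average, which would push the estimate up).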
As for the remainder of the RAM:

asr1002-x#show platform software status control-processor   
RP0: online, statistics updated 4 seconds ago
Load Average: healthy
  1-Min: 0.09, status: healthy, under 8.00
  5-Min: 0.07, status: healthy, under 8.00
  15-Min: 0.04, status: healthy, under 10.00
Memory (kb): healthy
  Total: 16337628
  Used: 14100464 (86%)
  Free: 2237164 (14%)
  Committed: 12774296 (78%), status: healthy, under 95%
Per-core Statistics
CPU0: CPU Utilization (percentage of time spent)
  User:  2.69, System:  6.88, Nice:  0.00, Idle: 90.31
  IRQ:  0.00, SIRQ:  0.09, IOwait:  0.00
CPU1: CPU Utilization (percentage of time spent)
  User:  0.09, System:  0.19, Nice:  0.00, Idle: 99.70
  IRQ:  0.00, SIRQ:  0.00, IOwait:  0.00
CPU2: CPU Utilization (percentage of time spent)
  User:  0.00, System:  0.00, Nice:  0.00, Idle:100.00
  IRQ:  0.00, SIRQ:  0.00, IOwait:  0.00
CPU3: CPU Utilization (percentage of time spent)
  User:  0.10, System:  0.00, Nice:  0.00, Idle: 99.89
  IRQ:  0.00, SIRQ:  0.00, IOwait:  0.00

That shows nicely that we have 86% of the total 16GB of DRAM in use. It
also shows (by the conspicuous absence of several sections) what exactly
is meant by the statement that the ASR1002-X shares memory between the
RP and the ESP: there is in fact just one single general-purpose CPU in
the box. It runs the RP load (RP base OS stuff, IOSd), but also the FECP
(Forwarding Engine Control Processor, the part that controls the QFPs
and turns them into the ESP) and even the SIP load for the single fixed
SIP that can be considered hidden in the box (so no dedicated IO control
processor either). That makes the process list look a little crowded,
but it's also a seemingly quite efficient reduction of the ASR1000
architecture to its essentials, fitting nicely into a 2RU fixed box.

What I'm still looking for is more insight into QFP resource usage,
given that the FIB has to live in the 1GB of QFP RAM. I think this
is it:

asr1002-x#show platform hardware qfp active infrastructure exmem statistics 
QFP exmem statistics

Type: Name: DRAM, QFP: 0
  Total: 1073741824
  InUse: 439699456
  Free: 634042368
  Lowest free water mark: 634042368
Type: Name: IRAM, QFP: 0
  Total: 134217728
  InUse: 6660096
  Free: 127557632
  Lowest free water mark: 127557632
Type: Name: SRAM, QFP: 0
  Total: 0
  InUse: 0
  Free: 0
  Lowest free water mark: 0
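
A quick ratio on those totals (just arithmetic on the output above):

```python
# Fraction of the 1GB QFP DRAM in use, from the exmem totals above.
total_dram  = 1_073_741_824
in_use_dram =   439_699_456

print(f"QFP DRAM in use: {in_use_dram / total_dram:.1%}")  # ~41%
```

So with 3.34M prefixes loaded, the QFP DRAM is only about 41% occupied.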


and for the full details, that:

asr1002-x#show platform hardware qfp active infrastructure exmem statistics user 
Type: Name: IRAM, QFP: 0
  Allocations  Bytes-Alloc  Bytes-Total  User-Name
  ---------------------------------------------------------------------------
  1            115200       115712       CPP_FIA
Type: Name: GLOBAL, QFP: 0
  Allocations  Bytes-Alloc  Bytes-Total  User-Name
  ---------------------------------------------------------------------------
  7            16040        19456        P/I
  1            4384         5120         EPC
  1            4            1024         MMON
  1            4            1024         CFT
  1            4            1024         CVLA
  7            270704       274432       CEF
  1            512          1024         B2B HA
  3            4819680      4820992      QM RM
  1            16384        16384        Qm 16
  1            32768        32768        ING_EGR_UIDB
  1            4194304      4194304      TCAM
  1            16384        16384        ING EGR OUTPUT CHUNK_Queue_0
  1            16384        16384        ING-EGR_IfMap_0
  4            25856        28672        GIC
  1            1048576      1048576      PLU Mgr_CEF_0_0
  205          214958080    214958080    PLU Mgr_CEF_0_3
  1            1572864      1572864      PLU Mgr_CEF_0_8
  1            1048576      1048576      PLU Mgr_CEF_0_9
  16           16777216     16777216     PLU Mgr_PLU_GLOBAL_0_0
  6            6291456      6291456      PLU Mgr_PLU_GLOBAL_0_1
  2            1572864      1572864      PLU Mgr_PLU_GLOBAL_0_2
  3            3145728      3145728      PLU Mgr_PLU_GLOBAL_0_3
  4            5242880      5242880      PLU Mgr_PLU_GLOBAL_0_4
  9            7077888      7077888      PLU Mgr_PLU_GLOBAL_0_5
  9            8257536      8257536      PLU Mgr_PLU_GLOBAL_0_6
  8            8388608      8388608      PLU Mgr_PLU_GLOBAL_0_7
  13           20447232     20447232     PLU Mgr_PLU_GLOBAL_0_8
  3            3145728      3145728      PLU Mgr_PLU_GLOBAL_0_9
  1            1310720      1310720      PLU Mgr_PLU_GLOBAL_0_10
  1            1572864      1572864      PLU Mgr_PLU_GLOBAL_0_11
  1            1835008      1835008      PLU Mgr_PLU_GLOBAL_0_12
  1            1048576      1048576      PLU Mgr_PLU_GLOBAL_0_13
  1            1310720      1310720      PLU Mgr_PLU_GLOBAL_0_14
  1            1572864      1572864      PLU Mgr_PLU_GLOBAL_0_15
  3            2752512      2752512      PLU Mgr_PLU_GLOBAL_0_16
  1            1048576      1048576      PLU Mgr_PLU_GLOBAL_0_17
  4            4718592      4718592      PLU Mgr_PLU_GLOBAL_0_18
  4            34772        37888        SSLVPN
  1            16           1024         cpp_epc_sbs_client
  5            1336184      1339392      BFD
  3            4400         7168         LI
  1            64           1024         cpp_li_sbs_client
  1            4096         4096         SMI
  1            40           1024         cpp_smi_sbs_client
  2            512          2048         TFC
  3            48000        49152        TUNNEL
  1            4384         5120         ERSPAN
  1            112          1024         cpp_erspan_sbs_client
  12           1790272      1793024      ESS
  2            32           2048         ICMP
  1            32000        32768        cpp_icmp_sb_chunk
  1            524288       524288       QoS 1024
  3            8240         9216         cpp_punt_sbs_client
  1            320          1024         punt path chunk 0
  1            32000        32768        punt subblock chunk
  25           9600         25600        punt policer chunk
  22           719476       736256       PKTLOG
  1            512          1024         queue info chunk 0
  1            16           1024         CPP IPHC
  7            1286432      1288192      IPFRAG
  1            16000        16384        cpp_ipfrag_sb_chunk
  10           26048        34816        cpp_ipfrag_sbs_client
  1            32000        32768        cpp_ipreass_sb_chunk
  1            16000        16384        cpp_ipreass_cur_dgram_cnt_chunk
  1            64000        64512        cpp_ipv6reass_sb_chunk
  1            6528         7168         sbs_cef
  2            8388608      8388608      ING_EGR_UIDB
  1            868352       868352       ING EGR INPUT CHUNK_Config_0
  1            16384        16384        ING EGR INPUT CHUNK_Sm_Name_0
  1            32768        32768        ING EGR INPUT CHUNK_Lg_Name_0
  1            802816       802816       ING EGR OUTPUT CHUNK_Config_0
  1            16384        16384        ING EGR OUTPUT CHUNK_Sm_Name_0
  1            32768        32768        ING EGR OUTPUT CHUNK_Lg_Name_0
  1            4096         4096         SPAMARMOT
Type: Name: LOCAL_PVT, QFP: 0
  Allocations  Bytes-Alloc  Bytes-Total  User-Name
  ---------------------------------------------------------------------------
  2            3262688      3263488      QM RM


That would mean we're using just 40% of the QFP DRAM for a FIB of this
size, and the box is apparently more limited by the RP/ESP RAM than by
QFP RAM (the QFP part is specced identical to or better than the ESP40
at 1GB QFP RAM, 40MB TCAM and 512MB packet buffer). That explains why
the real ESP40, with its dedicated 8GB of FECP RAM, can be specced for
4M routes in the FIB, while the 1002-X hits the ceiling a little
earlier. But is the 3.3M-prefix FIB really condensed into those 316MB
of PLU Mgr entries (and what does PLU stand for)?

Thanks,
Andre.
--
                    Cool .signatures are so 90s...

-> Andre Beck    +++ ABP-RIPE +++      IBH IT-Service GmbH, Dresden <-

