[c-nsp] GSR E2 linecard performance woes

David Freedman david.freedman at uk.clara.net
Wed Mar 16 07:07:40 EST 2005


Well,

#exec slot 5 sh contr psa bund
========= Line Card (Slot 5) =========


global_vars
   current bundle: BUNDLE_ge_input_acl_128               enabled
   features bv: 0x400407

features_array[]
   index name                                           en       bv
   ----- ----                                           --       --
   [1]   FEAT_basic_ip                                  1        0x1
   [2]   FEAT_basic_mpls                                1        0x2
   [3]   FEAT_mpls_vpn                                  1        0x4
   [4]   FEAT_cr_mpls                                   0        0x8
   [5]   FEAT_fr_switching_2byte                        0        0x10
   [6]   FEAT_fr_switching_4byte                        0        0x20
   [7]   FEAT_per_packet_loadbal                        0        0x40
   [8]   FEAT_multicast                                 0        0x80
   [9]   FEAT_uti_ingress_card                          0        0x100
   [10]  FEAT_mpls_aware_sampled_netflow                0        0x200
   [11]  FEAT_sampled_netflow                           1        0x400
   [12]  FEAT_carrier_support_carrier                   0        0x800
   [13]  FEAT_vrf_selection                             0        0x1000
   [14]  FEAT_fr_traffic_policing                       0        0x2000
   [15]  FEAT_frame_over_mpls                           0        0x4000
   [16]  FEAT_ppp_hdlc_over_mpls                        0        0x8000
   [17]  FEAT_bgp_policy_accounting                     0        0x10000
   [18]  FEAT_cos_transparency                          0        0x20000
   [19]  FEAT_ip_coloring                               0        0x40000
   [20]  FEAT_link_bundling                             0        0x80000
   [21]  FEAT_pirc                                      0        0x100000
   [22]  FEAT_urpf                                      0        0x200000
   [23]  FEAT_input_acl_128                             1        0x400000
   [24]  FEAT_output_acl_128                            0        0x800000
   [25]  FEAT_input_acl_448                             0        0x1000000
   [26]  FEAT_output_acl_448                            0        0x2000000
   [27]  FEAT_acl_debug                                 0        0x4000000
   [28]  FEAT_server_card                               0        0x8000000
   [29]  FEAT_eompls                                    0        0x10000000

bundles_array[]
   index name                                           fbv
   ----- ----                                           ---
   [0]   BUNDLE_pos_vanilla                             0xFC7
   [1]   BUNDLE_pos_pirc                                0x120083
   [2]   BUNDLE_pos_urpf                                0x200603
   [3]   BUNDLE_pos_bgp_pa                              0x10083
   [4]   BUNDLE_pos_cos_transparency                    0x20CC7
   [5]   BUNDLE_pos_ip_coloring                         0x40683
   [6]   BUNDLE_pos_input_acl_128                       0x440607
   [7]   BUNDLE_pos_output_acl_128                      0x840603
   [8]   BUNDLE_pos_input_acl_128_debug                 0x4400003
   [9]   BUNDLE_pos_output_acl_128_debug                0x4800003
   [10]  BUNDLE_pos_input_acl_448                       0x1000003
   [11]  BUNDLE_pos_output_acl_448                      0x2000003
   [12]  BUNDLE_pos_input_acl_448_debug                 0x5000003
   [13]  BUNDLE_pos_output_acl_448_debug                0x6000003
   [14]  BUNDLE_pos_server_card                         0x8000003
   [15]  BUNDLE_pos_fr_traffic_policing                 0x2733
   [16]  BUNDLE_pos_vrf_selection                       0x1007
   [17]  BUNDLE_pos_link_bundle_snf                     0x80407
   [18]  BUNDLE_pos_link_bundle_acl                     0x480007
   [19]  BUNDLE_pos_frame_over_mpls                     0x4003
   [20]  BUNDLE_pos_ppp_hdlc_over_mpls                  0x8003
   [21]  BUNDLE_ge_vanilla                              0x7C7
   [22]  BUNDLE_ge_pirc                                 0x120083
   [23]  BUNDLE_ge_bgp_pa                               0x10083
   [24]  BUNDLE_ge_cos_transparency                     0x204C7
   [25]  BUNDLE_ge_input_acl_128                        0x400603
   [26]  BUNDLE_ge_output_acl_128                       0x800603
   [27]  BUNDLE_ge_input_acl_448                        0x1000003
   [28]  BUNDLE_ge_output_acl_448                       0x2000003
   [29]  BUNDLE_ge_eompls                               0x10000087
   [30]  BUNDLE_ge_link_bundle_snf                      0x80403
   [31]  BUNDLE_ge_link_bundle_acl                      0x480003
   [32]  BUNDLE_ge_csc                                  0xCC7
   [33]  BUNDLE_ge_vrf_selection                        0x1007
   [34]  BUNDLE_ge_urpf                                 0x200603
   [35]  BUNDLE_atm_inuit_vanilla                       0x807
   [36]  BUNDLE_atm_inuit_output_acl                    0x800003
   [37]  BUNDLE_atm_taz_vanilla                         0x80F
   [38]  BUNDLE_atm_taz_input_acl_128                   0x400003
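
Aside, for anyone decoding this output: the "fbv" column is just the
bitwise OR of the per-feature bit values in features_array, so you can
work out which features a given bundle supports in hardware. A minimal
sketch in Python, with the bit values copied from the output above
(this is only my reading of the table, nothing official from Cisco):

# Decode a PSA bundle feature bitvector (fbv) into feature names.
# Bit values copied from features_array above; remaining bits elided.
FEATURES = {
    0x1:      "FEAT_basic_ip",
    0x2:      "FEAT_basic_mpls",
    0x4:      "FEAT_mpls_vpn",
    0x200:    "FEAT_mpls_aware_sampled_netflow",
    0x400:    "FEAT_sampled_netflow",
    0x400000: "FEAT_input_acl_128",
    0x800000: "FEAT_output_acl_128",
}

def decode(fbv):
    """Return the names of the features whose bits are set in fbv."""
    return [name for bit, name in FEATURES.items() if fbv & bit]

# The bundle currently loaded on my card:
print(decode(0x400603))   # BUNDLE_ge_input_acl_128
# -> basic_ip, basic_mpls, mpls_aware_sampled_netflow,
#    sampled_netflow, input_acl_128 (but not output_acl_128)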


The following "bad" event counters can be seen incrementing:


- "Packets punt to RP", by a lot (does "RP" here mean the RP CPU or the
  LC CPU?)
- some "HW engine reject" counters (though not by much)


Where can I find a list of which features appear as PSA bundles for this 
card in different IOS releases?

(trying to avoid parsing *all* the release notes)

For instance, I note that "output_acl_128" is not enabled;
do I need it to maintain performance with egress ACLs?
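
From the bundles_array above, no single ge_* bundle seems to carry both
FEAT_input_acl_128 (0x400000) and FEAT_output_acl_128 (0x800000). If my
assumption is right that a bundle is only usable when its fbv covers
every configured feature bit, that would mean one ACL direction always
falls back to the LC CPU on this card. A quick check of that reading,
with the fbv values copied from the table above:

# Which ge_* bundles cover all the feature bits we need?  The
# "fbv must contain every needed bit" rule is my assumption.
GE_BUNDLES = {
    "BUNDLE_ge_vanilla":        0x7C7,
    "BUNDLE_ge_input_acl_128":  0x400603,
    "BUNDLE_ge_output_acl_128": 0x800603,
    # ... other ge_* bundles elided, see bundles_array above
}

need = 0x400407 | 0x800000   # currently enabled features + output_acl_128

matches = [n for n, fbv in GE_BUNDLES.items() if fbv & need == need]
print(matches or "no single bundle covers both ACL directions")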

Dave.




Oliver Boehmer (oboehmer) wrote:
> David,
> 
> E2 generally forwards in hardware, so LC-CPU usage is not a direct
> indication of LC utilization. The LC CPU is only involved when the
> hardware punts traffic to it, so you want to find out what is being
> punted here. Does your loaded PSA bundle support all the features you
> have configured on the LC? (The PSA ASIC doing the forwarding is
> loaded with a microcode bundle chosen according to the features you
> have enabled; if the loaded bundle does not support all of them,
> packets are punted to the LC CPU for switching.)
> 
> "exec slot 5 sh contr psa bund" 
> "exec slot 5 sh contr events" (several times to see which counter
> increases)
> 
> might provide more info..
> 
> 	oli
> 
> 
> David Freedman <> wrote on Wednesday, March 16, 2005 12:32 PM:
> 
>> Hiya,
>> 
>> AFAIK, the Cisco "marketing" packet-rate figure for the GSR 120xx E2
>> linecards is 4 Mpps.
>> 
>> I'm running a 3GE E2 linecard in a 12012 with the following features:
>> 
>> 
>> Global -
>> 
>> access-list compiled
>> 
>> 
>> Port 1 -
>> 
>> - Ingress ACL (7 lines)
>> - Egress ACL (7 lines)
>> - netflow sampling (interval = 1/1000)
>> - sparse pim
>> - 120 Mbit/s ingress
>> - 190 Mbit/s egress
>> - 39 Kpps ingress
>> - 34 Kpps egress
>> 
>> 
>> Port 2 -
>> 
>> - Ingress ACL (7 lines)
>> - Egress ACL (7 lines)
>> - netflow sampling (interval = 1/1000)
>> - sparse pim
>> - 320 Mbit/s ingress
>> - 320 Mbit/s egress
>> - 49 Kpps ingress
>> - 77 Kpps egress
>> 
>> Port 3 -
>> 
>> - Ingress ACL (7 lines)
>> - Egress ACL (7 lines)
>> - netflow sampling (interval = 1/1000)
>> - sparse pim
>> - MPLS (tag-switching) with NO TE
>> - 532 Mbit/s ingress
>> - 513 Mbit/s egress
>> - 120 Kpps ingress
>> - 97 Kpps egress
>> 
>> 
>> The same ACL is used both ingress and egress on all ports.
>> 
>> 
>> Now look at this:
>> 
>> #execute-on slot 5 sh proc cpu | exc 0.00
>> ========= Line Card (Slot 5) =========
>> 
>> CPU utilization for five seconds: 94%/73%; one minute: 85%; five
>> minutes: 84%
>>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>>    4   172497132   1297743     132922  1.53%  0.51%  0.55%   0 BFLC PSA ACL
>>   10    59944212  62202985        963  0.07%  0.14%  0.13%   0 CEF LC IPC Backg
>>   21   114470916  13418571       8530  0.23%  0.23%  0.22%   0 Per-Second Jobs
>>   46    43622128  66352970        657  0.23%  0.29%  0.29%   0 Queue Mgr
>>   59  2177980408   6086199     357858 18.19%  7.89%  7.70%   0 TAG Stats Backgr
>> 
>> 
>> Why is the card working so hard?
>> Is the marketing figure that far off,
>> or am I doing something known to degrade the performance of the card?
>> 
>> Removing the netflow and PIM configuration doesn't really affect the
>> CPU interrupt percentage much.
>> 
>> Here is the sh diag for the slot:
>> 
>> 
>> 
>> SLOT 5  (RP/LC 5 ): 3 Port Gigabit Ethernet
>>    MAIN: type 68,  800-6376-05 rev B0
>>          Deviation: 0
>>          HW config: 0x00    SW key: 00-00-00
>>    PCA:  73-4775-07 rev C0 ver 2
>>          Design Release 2.0  S/N XXXXXXXXXX
>>    MBUS: Embedded Agent
>>          Test hist: 0x00    RMA#: 00-00-00    RMA hist: 0x00
>>    DIAG: Test count: 0x00000003    Test results: 0x00000000
>>    FRU:  Linecard/Module: 3GE-GBIC-SC=
>>          Route Memory: MEM-GRP/LC-256=
>>          Packet Memory: MEM-LC1-PKT-256=
>>    L3 Engine: 2 - Backbone OC48 (2.5 Gbps)
>>    MBUS Agent Software version 1.86 (RAM) (ROM version is 2.32)
>>    ROM Monitor version 16.12
>>    Fabric Downloader version used 9.3 (ROM version is 9.3)
>>    Primary clock is CSC 0
>>    Board is analyzed
>>    Board State is Line Card Enabled (IOS  RUN )
>>    Insertion time: 00:09:58 (21w6d ago)
>>    DRAM size: 268435456 bytes
>>    FrFab SDRAM size: 134217728 bytes, SDRAM pagesize: 8192 bytes
>>    ToFab SDRAM size: 134217728 bytes, SDRAM pagesize: 8192 bytes
>>    0 crashes since restart
>> 
>> 
>> 
>> System is running 12.0(25)S4, and all linecards have had their firmware
>> upgraded via the "upgrade all" command.
>> 
>> Processor is a fully expanded GRP-B.
>> 
>> Any help (or even confirmation that I am running the card hot) would
>> be appreciated.
>> 
>> 
>> Thanks,
>> 
>> Dave.
>> 
>> 
>> _______________________________________________
>> cisco-nsp mailing list  cisco-nsp at puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
> 
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
> 


