[c-nsp] Using DFC cards on a L2 6500/7600 system

Tim Stevenson tstevens at cisco.com
Fri Jun 15 12:53:38 EDT 2007


At 07:18 PM 6/15/2007 +0300, Tassos Chatzithomaoglou observed:
>Hi Tim,
>
>Thank you very much for the answers. I hope you
>don't mind my asking some other questions too....
>
>You said that each DFC-equipped module has its
>own MAC address table. So I can have 9 x 65536 (or 64000) MACs per 6509?

No, sorry to mislead you. While you could
actually have as many as 17 unique L2 engines in
the system (the DFC3B/BXL has 2 L2 engines, one for
each half of the card), each with its own copy of
the MAC table, there are hardware mechanisms
(and software mechanisms, with the sync command I
mentioned) in place to try to keep them in sync.
As such, you can't scale to n * 64K MAC entries;
we still advertise 64K for the system.

The reason is that an FE on one card needs to know
about all the other MACs in the system in case it
gets a frame destined to one of those MACs;
otherwise, you'll get flooding. Yes, there are certain
cases where in theory you could scale the MACs (a VLAN
confined to only one FE, etc.), but these have never been developed.

So for now, for all practical purposes, the MAC table is synched on all cards.
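
To make the flooding point concrete, here is a toy model (an
illustration only, not Cisco code) of two independent per-card MAC
tables: the engine that never learned the destination has to flood,
which is exactly what the sync mechanisms avoid.

    # Toy model: per-forwarding-engine MAC tables and unknown-unicast flooding.
    class ForwardingEngine:
        def __init__(self, name):
            self.name = name
            self.mac_table = {}  # (vlan, mac) -> egress port

        def learn(self, vlan, mac, port):
            self.mac_table[(vlan, mac)] = port

        def forward(self, vlan, dst_mac):
            port = self.mac_table.get((vlan, dst_mac))
            if port is None:
                return f"{self.name}: flood vlan {vlan}"  # table miss -> flood
            return f"{self.name}: forward out {port}"

    dfc1, dfc8 = ForwardingEngine("slot1"), ForwardingEngine("slot8")
    dfc1.learn(10, "0000.1111.2222", "Gi1/1")
    print(dfc1.forward(10, "0000.1111.2222"))  # slot1: forward out Gi1/1
    print(dfc8.forward(10, "0000.1111.2222"))  # slot8: flood vlan 10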

>What is the maximum possible pps number in the following output?

I guess you are asking about the per-FE forwarding
capacity: 30 Mpps for the PFC; 48 Mpps for the DFC.
Since most of your packets probably aren't 64 bytes,
you will likely not get close to that number;
and, if you have DFCs, the PFC will never see
30 Mpps, since it is primarily handling the sup
uplinks & inband ports, which is 4G max.


>6509#sh platform hardware capacity forwarding | beg engine load
>  Forwarding engine load:
>                     Module       pps   peak-pps                     peak-time
>                     6         125429     480462  04:27:11 EET Sun Feb 25 2007  <== SUP720-3BXL
>                     8        1194652    5272556  15:38:10 EET Thu Feb 15 2007  <== X6724-SFP
>
>30 Mpps per module, if all modules use DFCs?
>30 Mpps total, if all modules are fabric-enabled but non-DFC?
>
>15 Mpps for the sum of all classic modules and 30 Mpps per DFC module?
>15 Mpps for the sum of all classic modules and 30
>Mpps for the sum of all fabric non-DFC modules?

If there are classic cards, yes, the PFC handles
those lookups, and the max for the PFC becomes 15 Mpps.


>The 400 Mpps forwarding performance written in 
>technical datasheets comes from 13 slots x 30 Mpps (=390 Mpps)?

The number comes from the EANTC test results, 
which had 6748s & 6724s with DFCs in a 6513.
http://www.eantc.com/fileadmin/eantc/downloads/test_reports/2003-2005/EANTC-Summary-Report-Cisco-GigE-Catalyst6500-Supervisor720.pdf


>PS: Is there somewhere on CCO a "calculator" 
>where you can enter your modules' details and 
>get some info about the L2/L3 performance per module or per chassis?

No, afraid not. But it's relatively simple:
a DFC gives 48 Mpps @ 64 bytes for that card
the PFC gives 30 Mpps for the system with all fabric cards
the PFC gives 15 Mpps for the system when classic cards are present

So your system forwarding engine capacity is approximately
30 + (48 * n), or 15 + (48 * n) with classic cards present,
where n is the number of DFC-equipped cards.
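
Expressed as a throwaway calculation (my sketch of the rule of thumb
above, not a Cisco tool; helper name is made up):

    # Rule-of-thumb system forwarding capacity from the formula above.
    def system_capacity_mpps(dfc_cards, classic_present=False):
        pfc = 15 if classic_present else 30
        return pfc + 48 * dfc_cards

    print(system_capacity_mpps(1, classic_present=True))   # 63  (the 6509 in this thread)
    print(system_capacity_mpps(8))                         # 414 (fully DFC'd chassis, ~400 Mpps class)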

Tim


>Regards,
>Tassos
>
>Tim Stevenson wrote on 15/6/2007 6:05 PM:
>>At 03:35 PM 6/15/2007 +0300, Tassos Chatzithomaoglou observed:
>>
>>>Watching the latest emails about DFC cards, I
>>>was wondering if the addition of
>>>WS-F6700-DFC3BXL cards to WS-X67xx modules would
>>>help a 6500/7600 in any case, when used
>>>exclusively as an L2 switch (plus 802.1q tunneling/QoS/ACLs).
>>>
>>>
>>>According to CCO:
>>>
>>>The Cisco® Catalyst® 6500 Series Distributed Forwarding Card 3
>>>(DFC3), including WS-F6700-DFC3A (DFC3A), WS-F6700-DFC3B (DFC3B),
>>>and WS-F6700-DFC3BXL (DFC3BXL), is an optional daughter card for
>>>CEF720-based line cards such as the WS-X6704-10GE, WS-X6724-SFP,
>>>WS-X6748-SFP, and WS-X6748-GE-TX. The DFC3 provides localized
>>>forwarding decisions for each line card and scales the aggregate
>>>system performance to reach up to 400 Mpps. The new DFC3B and
>>>DFC3BXL offer enhancements to support Multiprotocol Label Switching
>>>(MPLS) and Access Control Entry (ACE) counters on the Cisco 6700
>>>Series line cards. The DFC3BXL also has improved scalability to
>>>support one million IPv4 routes and 256K NetFlow entries.
>>>
>>>
>>>Does the 400 Mpps forwarding performance have
>>>any relationship with L2 switching?
>>Yes, it is the same.
>>
>>>Can you enable and use netflow on a L2 switch?
>>Yes, you can enable bridged NetFlow. There are some
>>limitations to this; the most commonly annoying
>>one is that the input and output ifIndex are the same (the
>>VLAN ID where the packet was bridged), i.e., the L2 switchports are NOT reported.
>>Also, there are negative interactions between
>>NetFlow and other features, like microflow policing.
>>Lastly, in current software you must have an SVI
>>with an IP configured and in the up/up state to
>>do bridged NDE, which is obviously undesired
>>(at best). This should be fixed in future software.
>>
>>>Something i could think of is probably "better" QoS characteristics.
>>>
>>>6509#sh int gi1/1 capabilities | inc Model|QOS
>>>    Model:                 WS-X6724-SFP
>>>    QOS scheduling:        rx-(1q8t), tx-(1p3q8t)       <- CFC
>>>
>>>6509#sh int gi8/1 capabilities | inc Model|QOS
>>>    Model:                 WS-X6724-SFP
>>>    QOS scheduling:        rx-(2q8t), tx-(1p3q8t)       <- DFC3BXL
>>Yes, adding a DFC changes the ingress queueing capability.
>>
>>>Also, when using non-fabric & fabric modules
>>>together (yep, I know that's a no), the fabric ones
>>>with DFCs use "dCEF/flow through" as a
>>>switching mode, while the fabric ones without
>>>DFCs use "Crossbar/CEF/truncated". Is there
>>>any advantage of this for L2 switching?
>>Well, yes, this indicates the card is isolated
>>from whatever happens on the bus, so it
>>operates with maximum efficiency regardless of
>>whether a classic card is present or not.
>>
>>>6509#sh platform hardware capacity system
>>>System Resources
>>>    PFC operating mode: PFC3BXL
>>>    Supervisor redundancy mode: administratively sso, operationally sso
>>>    Switching resources: Module   Part number        Series       CEF mode
>>>                         1        WS-X6724-SFP       CEF720            CEF
>>>                         2        WS-X6724-SFP       CEF720            CEF
>>>                         6        WS-SUP720-3BXL     supervisor        CEF
>>>                         8        WS-X6724-SFP       CEF720           dCEF
>>>                         9        WS-X6408A-GBIC     classic           CEF
>>>
>>>6509#sh fabric switching-mode
>>>Global switching mode is Truncated
>>>dCEF mode is not enforced for system to operate
>>>Fabric module is not  required for system to operate
>>>Modules are allowed to operate in bus mode
>>>Truncated mode is allowed, due to presence of DFC, CEF720 module
>>>
>>>Module Slot     Switching Mode
>>>      1                 Crossbar
>>>      2                 Crossbar
>>>      6                      Bus
>>>      8                     dCEF
>>>      9                      Bus
>>>
>>>
>>>6509#sh platform hardware capacity fabric
>>>Switch Fabric Resources
>>>    Bus utilization: current: 0%, peak was 67% at 15:39:57 EET Thu Feb 15 2007
>>>    Fabric utilization:     Ingress                    Egress
>>>      Module  Chanl  Speed  rate  peak                 rate  peak
>>>      1       0        20G    0%    1% @09:35 15Feb07    1%   26% @16:36 15Feb07
>>>      2       0        20G    1%    3% @23:13 13Jun07    1%    1% @15:40 24May07
>>>      6       0        20G    0%   11% @06:41 30Mar07    0%   26% @16:36 15Feb07
>>>      8       0        20G    0%   26% @16:36 15Feb07    0%   11% @06:41 30Mar07
>>>    Switching mode: Module       Switching mode
>>>                    1                 truncated
>>>                    2                 truncated
>>>                    6              flow through
>>>                    8                   compact
>>>
>>>
>>>
>>>Is there something else i'm missing?
>>Not really. DFCs give you fully distributed
>>forwarding for every type of traffic, including L2.
>>Most of the tables are symmetric across all
>>forwarding engines (MAC & NetFlow tables excepted).
>>A couple of potential downsides with DFCs:
>>- Layer 2 distributed EtherChannels (an EC
>>configured as an L2 port/trunk with member ports on
>>different linecards): there have been a variety
>>of problems with this configuration, so if you
>>have it, run the latest SXF rebuild to make
>>sure you have all the fixes, and turn on the
>>MAC table sync process as well
>>(mac-address-table synchronize); it's off by default.
>>- Policing: each forwarding engine does
>>aggregate policing independently, so VLAN-based
>>aggregate policing (ingress or egress) is enforced by
>>each forwarding engine at the configured rate,
>>potentially letting n * rate packets pass
>>through the system, where n is the number of forwarding engines in the box.
>>- CoPP/CPU rate limiters: CoPP & the CPU rate limiters
>>essentially suffer from the same limitation as above
>>for policing; the configured rates are enforced
>>per forwarding engine, so the CPU inband port can be presented with n * rate packets.
>>HTH,
>>Tim
>>
>>>--
>>>Tassos
>>



Tim Stevenson, tstevens at cisco.com
Routing & Switching CCIE #5561
Technical Marketing Engineer, Data Center BU
Cisco Systems, http://www.cisco.com
IP Phone: 408-526-6759
********************************************************
The contents of this message may be *Cisco Confidential*
and are intended for the specified recipients only.

