[c-nsp] do i *need* DFCs on the 6500?

Ben Steele illcritikz at gmail.com
Thu Sep 3 07:42:01 EDT 2009


On Thu, Sep 3, 2009 at 7:35 PM, Phil Mayers <p.mayers at imperial.ac.uk> wrote:

> Ben Steele wrote:
>
>> Unless you are hitting a CAM limit on any of your resources on your SUP
>> (very possible if you are exporting netflow) OR you are congesting the
>> crossbar fabric (sh fabric util), which is pretty unlikely when you are
>> talking a 24G linecard on a 40G fabric connection, then you probably
>> won't see any difference putting a DFC on a 6724.
>>
>
> That depends completely on what other cards are on the box, what their
> offered forwarding load is, and whether they have DFCs.


Hence my asking him to check those values, or at least implying from
that sentence that he should :)


>
>
>
>> Remember these chassis are a hardware-only forwarding solution, so all
>> you're doing with a DFC is moving CAM/ASIC resources off the sup. In
>> regard to your specific questions: unless you have filled all your QoS
>> queues on the sup, you are going to see nothing more on the DFC. Also,
>> the sup does (from memory) up to 100-200 Mpps in IPv6; I don't believe
>> for a moment
>>
>
> No. The PFC3 does 30Mpps IPv4 (and 15Mpps IPv6 I think). A DFC3 does 48Mpps
> IPv4 (and 24Mpps IPv6).
>
>
> http://www.cisco.com/en/US/products/hw/switches/ps708/products_qanda_item09186a00809a7673.shtml
>
> A fully-populated and fully-DFCed 6509 does 400Mpps IPv4 or 200Mpps IPv6
> (well, actually 192Mpps - 24x8 linecards). In this configuration, the PFC
> does very little.
>

OK, my bad: my memory was of the full chassis, not the individual PFC.
I should read the docco next time before posting! I'm still quite certain
our OP isn't doing 15Mpps of IPv6; if he is, he must be the IPv6 hub of
the world.


>
> It's worth noting that a 6724 doing 64-byte packets on all ports offers
> ~47Mpps forwarding load - well in excess of the PFC capacity. A chassis full
> of 6724s without DFCs at 10% load with 64-byte packets also exceeds the PFC
> capacity.
>
> Obviously these are worst-case numbers but illustrative of the problems you
> can get yourself into if you don't capacity plan well.
>

I think it's safe to say our OP is nowhere near these limits, or he would
definitely know about it. In fact, I doubt anyone in the world has hit 47Mpps
on any 6500 linecard (in a real-world situation, not a lab); if someone
has, please feel free to let me know about it.

But yes capacity planning is very important.
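(For anyone following along, Phil's worst-case numbers above can be
sanity-checked with some quick back-of-the-envelope arithmetic. The sketch
below is mine, not from Phil's mail; the ~47Mpps figure only works out if
you count the bare 64-byte frame with no preamble/IFG overhead, and the
8-linecard-slot assumption for a 6509 is mine as well.)

```python
# Offered-load arithmetic for a 6724 (24 x 1G ports) with 64-byte packets.
# Counting only the 64-byte frame itself (no preamble/inter-frame gap),
# which is how the ~47Mpps figure appears to be derived.

PORT_GBPS = 1          # per-port line rate, Gbps
PORTS = 24             # ports on a WS-X6724
FRAME_BYTES = 64       # minimum Ethernet frame size
PFC3_IPV4_MPPS = 30    # centralized PFC3 IPv4 capacity, per Phil's numbers

card_mpps = PORTS * PORT_GBPS * 1e9 / (FRAME_BYTES * 8) / 1e6
print(f"one 6724 at 64B line rate: {card_mpps:.1f} Mpps")  # ~46.9 Mpps > 30

# A chassis full of 6724s (assuming 8 linecard slots) without DFCs,
# at only 10% load, still exceeds the central PFC capacity:
chassis_10pct = 8 * card_mpps * 0.10
print(f"8 cards at 10% load: {chassis_10pct:.1f} Mpps")    # ~37.5 Mpps > 30
```

which is the point: even a modest average load across a full chassis of
CFC-only cards can push past the central forwarding engine on small packets.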


>
> It's worth noting that some linecards have different (i.e. more flexible)
> rx & tx queueing methods with a DFC versus the CFC.
>

True, but keep in mind the OP already has some DFC-enabled linecards, so I
would assume he is familiar with what QoS he can and can't schedule on the
CFC vs the DFC. His particular comment related to performance and offloading
of QoS - not features. The same goes for different linecards in general,
though: the 4- and 8-port 10Gb linecards, for example, have totally
different buffering capabilities, so you need to choose your linecard
wisely. Our OP already has his in place.


> There's also the bus-stall issues, which go away (supposedly) with a DFC
> installed since they're not connected to the bus.


Interesting... I'll take your word for that. I can't say I've seen much in
the way of bus stalls when working with them (at least in recent times),
except the standard OIR one. I'll assume this is an actual
performance-impacting stall you are referring to - does this apply even if
the chassis is in compact mode?


>
>
>> you are even remotely close to this, and the global IPv6 routing table is
>> nowhere near the CAM limit for that either. By the way, is your SUP an
>> XL? Do the DFCs on the 10G cards match the sup, or have they fallen back
>> to the lowest common configuration?
>>
>
> I'm not sure why you mention CAM limits, but it's worth noting that DFCs do
> not help with FIB CAM at all, since they hold a copy of the PFC FIB.
>

Yeah, my IPv6 FIB CAM statement was pretty irrelevant; it was more me typing
and then realising I'm not sure whether we are even talking about an XL
here. Not the greatest sentence.


>
> Personally we get DFCs on everything since we're using plain -3B (or -3C
> now) rather than XL, and the cost of the DFC is a pretty minimal percentage
> of the linecard cost for the future-proofing.
>

No doubt it's better to have a DFC than not, but some companies are tight
with money, and justifying even a few thousand for something you don't
*really* need can be hard. While a non-XL upgrade might seem trivial, I
think you'll find that upgrading a 6724 from stock to a 3CXL DFC costs
around the price of the linecard itself. That said, neither of us knows
what PFC the OP is running :)


>
> We've also seen software bugs manifest on CFC cards in the past; this
> implies to me that Cisco "prefer" DFC chassis. Similarly, some of the new
> linecards, e.g. the 6708/6716, are DFC-only. I suspect that will be the
> case going forward.
>

Well, from a performance point of view it makes sense, but it all equals $$,
and companies are being stingier than ever with the GFC in everyone's head.

I still get the feeling the OP doesn't need the DFC. Generally you know when
you need one; it's not something where you think "ooo, that might be nice to
have, I'll take 5!" You are normally saying "I need X because Y doesn't
work, or Y will stop working in the near future." Maybe the OP has a valid
reason for needing it; I just didn't see it, and I don't want him (or his
company) to buy something they don't need.

