[c-nsp] BFD expectations
Tassos Chatzithomaoglou
achatz at forthnet.gr
Thu Sep 23 03:28:16 EDT 2010
Probably only CWAN cards will be able to offload the CPU.
DFCs, as far as I remember, cannot generate packets.
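
A quick way to see what a given box is actually doing with its BFD sessions
(standard IOS CLI, so it should apply here too):

show bfd neighbors details

That lists the registered client protocols and the negotiated timers for each
neighbor, which makes it obvious which sessions the RP is maintaining.
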
--
Tassos
Pete Lumbis wrote on 23/09/2010 05:11:
> The forwarding on the 6k can be distributed, but as of today I believe that
> BFD is still a centralized process. That is, it is punted to the CPU, and
> control-plane issues can give false positives, as Phil mentioned.
>
> I think there are plans to make BFD distributed in the future, but I have no
> idea what that timeline is.
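>
> (A quick way to confirm that on a given box is to watch the BFD-related
> processes while sessions are up, e.g.
>
> show processes cpu sorted | include BFD
>
> and see whether they climb with the session count; exact process names vary
> by release.)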
>
> -Pete
>
> On Wed, Sep 22, 2010 at 7:19 PM, Chris Evans <chrisccnpspam2 at gmail.com> wrote:
>
>
>> Phil, you bring up a great point. Until SXI, the BFD code was crap on the
>> 6500. We have done extensive testing at the ECATS lab and concluded that
>> 450ms is a good number on this platform, given its centralized architecture.
>> We tested this with approximately 35 peers and had no issues under heavy
>> CPU load.
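>>
>> For reference, 450ms of detection with the standard IOS knobs looks roughly
>> like this (the interface is just an example):
>>
>> interface GigabitEthernet1/1
>>  bfd interval 150 min_rx 150 multiplier 3
>>
>> Detection time is approximately interval x multiplier, i.e. 3 x 150ms = 450ms.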
>>
>> As stated before, BFD is a triggering mechanism; it still doesn't fix
>> overall protocol reconvergence issues.
>>
>>> On 09/22/2010 03:22 PM, Jason Lixfeld wrote:
>>>
>>>> It's my understanding that BFD can provide failure detection and
>>>> recovery similar to that found in POS. To that end, I'd like to use
>>>> BFD with ISIS to design an L3 network whose failure detection and
>>>> recovery mechanisms rival L2 mechanisms like REP, G.8032, STP's
>>>> various incarnations, etc.
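>>>>
>>>> For clarity, by "BFD with ISIS" I mean the standard hooks, roughly:
>>>>
>>>> interface GigabitEthernet0/0
>>>>  bfd interval 300 min_rx 300 multiplier 3
>>>>  isis bfd
>>>>
>>>> (or "bfd all-interfaces" under "router isis"); the interface and timer
>>>> values above are just placeholders.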
>>>>
>>> Wouldn't we all?
>>>
>>> AFAICT, you will have to try very, very hard to get <200msec failover
>>> using available layer-3 mechanisms. It can be done, but it's difficult,
>>> and the configurations are highly topology-specific. Certainly, achieving
>>> 50msec layer-2 failover times seems to be all but impossible in the
>>> general case.
>>>
>>> If you search the archives, you'll get posts from the helpful Cisco guys
>>> on the list saying "contact your account manager and we can help you
>>> tune X to get 100msec failover".
>>>
>>> Have you tuned your IGP? There is a lot of stuff to tweak on this, and
>>> without it, BFD will not help you overmuch.
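>>>
>>> To give an idea of the ISIS knobs involved (the values below are only to
>>> show the shape of the commands, not a recommendation for your network):
>>>
>>> router isis
>>>  spf-interval 5 50 200
>>>  prc-interval 5 50 200
>>>  lsp-gen-interval 5 50 200
>>>  fast-flood 10
>>>
>>> For the first three, the first number is the maximum wait in seconds and
>>> the next two are the initial and incremental waits in milliseconds; the
>>> defaults are measured in whole seconds, so they will dominate no matter
>>> how fast BFD signals the failure.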
>>>
>>>
>>>> I've labbed BFD+ISIS between a 7301 and an ME3600, run MTR between
>>>> test hosts connected to each of the two devices, and yanked one of the
>>>> two links connecting the 7301 and the ME. I lose about 2-3 seconds'
>>>> worth of packets. Those results seem a little inconsistent with the
>>>> claims about BFD's timing, unless there's something I'm missing and,
>>>> even with the BFD hooks, ISIS isn't able to react at near-POS speeds.
>>>>
>>>> Anyone have any perspective from the real world?
>>>>
>>> For us, BFD was useless. It triggered false positives all the time, and
>>> then Cisco removed SVI support in later 12.2SX IOS. It didn't seem to be
>>> "distributed", so anything that loaded the sup RP/SP CPUs caused it to
>>> crap out.
>>>
>>> We gained far more from simply:
>>>
>>> router ospf 1
>>> timers throttle spf 10 100 5000
>>> timers throttle lsa all 10 100 5000
>>> timers lsa arrival 80
>>>
>>> ...on all our boxes.
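>>>
>>> (Those throttle values are all in milliseconds: a 10ms initial delay,
>>> 100ms hold time, and a 5000ms maximum; "timers lsa arrival 80" sets the
>>> minimum gap accepted between copies of the same LSA.)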
>>>
>>> YMMV, but I would not believe the marketing hype around BFD.