[c-nsp] Multiple flow-masks

Tóth András diosbejgli at gmail.com
Sun Dec 9 09:47:19 EST 2012


The outputs you pasted suggest that you're using the "interface-full"
flowmask. The workaround is to use the "full" flowmask instead of
"interface-full", as mentioned in my last email. IPv6 entries consume more
TCAM space than IPv4 entries, so any comparison between the two should take
this into account.
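
In config terms, the change would be something along these lines, assuming
it is the IPv6 NDE flow mask (rather than the policer itself) that needs to
move:

  no mls flow ipv6 interface-full
  mls flow ipv6 full

After that, re-apply your service-policy and check whether the
FLOWMASK_CONFLICT message still appears.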

Best regards,
Andras


On Sun, Dec 9, 2012 at 2:52 PM, Robert Williams <Robert at custodiandc.com> wrote:

>  Hi,
>
> Thanks very much for that. I’ll have to run through everything in that
> document, because the tests I’m doing suggest that it ‘should’ work. For
> example:
>
> With both my policy and “mls flow ipv6 interface-full” disabled I see
> one entry, which is presumably there because ‘mls qos’ is enabled:
>
> IPv6:       1   Intf Ful    FM_IPV6_QOS
> IPv6:       2   Null
>
> Then if I enable only “mls flow ipv6 interface-full”, the
> FM_IPV6_GUARDIAN ‘feature’ appears:
>
> IPv6:       1   Intf Ful    FM_IPV6_GUARDIAN FM_IPV6_QOS
> IPv6:       2   Null
>
> As you can see there is still a gap for the policy to take the second flow
> mask and use whatever type of mask it wants.
>
> As a test - if I disable the ipv6 flow and just enable my policy by
> itself, it goes in the second slot correctly - and uses the Destination
> Only mask:
>
> IPv6:       1   Intf Ful    FM_IPV6_QOS
> IPv6:       2   Dest onl    FM_IPV6_QOS
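>
> For reference, the micro-flow part of the policy boils down to a dest-only
> flow policer, something along these lines (the names and rates are
> placeholders, and the real policy matches specific classes rather than
> class-default):
>
>  policy-map PM-V6-LIMIT
>   class class-default
>    police flow mask dest-only 32000 8000 conform-action transmit exceed-action drop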
>
> So, I was assuming that a combination of these two features would be
> acceptable to it; since they operate in different mask slots when enabled
> separately anyway, I didn’t see why they should collide.
>
> I am correct as far as its operation in IPv4 goes because for v4 there is
> no conflict warning (and the policy works in hardware perfectly!). However,
> in IPv6 that does not appear to be the case.
>
> Looks like yet another unhappy IPv6 feature on the Sup-720, unless anyone
> can see a way around it that I’m missing?
>
> As an aside, does anybody know why it is called FM_IPV6_GUARDIAN instead
> of FM_IPV6_QOS (like in v4)? I’m wondering if this difference is the
> reason for its inability to combine the two masks successfully…
>
> Cheers!
>
> From: Tóth András [mailto:diosbejgli at gmail.com]
> Sent: 08 December 2012 21:09
> To: Robert Williams
> Cc: cisco-nsp NSP
> Subject: Re: [c-nsp] Multiple flow-masks
>
> Hi Robert,
>
> A few things to keep in mind.
>
> With Release 12.2(33)SXI4 and later releases, when appropriate for the
> configuration of the policer, microflow policers use the interface-full
> flow mask, which can reduce flowmask conflicts. Releases earlier than
> Release 12.2(33)SXI4 use the full flow mask.
>
> The flowmask requirements of QoS, NetFlow, and NetFlow data export (NDE)
> might conflict, especially if you configure microflow policing.
>
> http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/qos.html
>
> To add to this, note the following restrictions and recommendations:
>
> The micro-flow policing full flow mask is compatible with NDE’s flow masks
> that are shorter than or equal to full flow (except for destination source
> interface).
> With any micro-flow policing partial mask, an error message is displayed
> and either the micro-flow policer or NDE might get disabled.
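>
> For example, on the IPv4 side a full-flow micro-flow policer together with
> the full NDE flow mask should be an acceptable combination (names and
> rates below are placeholders):
>
>  mls flow ip full
>  !
>  policy-map PM-V4-LIMIT
>   class class-default
>    police flow mask full-flow 32000 8000 conform-action transmit exceed-action drop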
>
> Best regards,
> Andras
>
>
> On Sat, Dec 8, 2012 at 3:50 PM, Robert Williams <Robert at custodiandc.com>
> wrote:
> Hi,
>
> Unfortunately we use NetFlow for an automated system we have (it doesn't
> need to record everything accurately, just the highest number of flows /
> packets etc.), so I cannot just remove it. However, I have made some
> progress.
>
> I've tracked the problem down to the IPv6 NetFlow flow masks. With all
> NetFlow removed I am able to add my policy-map and it works. Then, by
> adding the NetFlow commands back in, I can get everything back except the
> command:
>
>  mls flow ipv6 <any command>
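>
> (The pieces that do go back in cleanly are roughly the usual NDE set, with
> the export target below being a placeholder:
>
>  mls netflow
>  mls nde sender version 5
>  mls flow ip interface-full
>  ip flow-export version 5
>  ip flow-export destination 192.0.2.10 9996
>
> It is only the IPv6 flow-mask line that refuses to go in.)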
>
> So even if I specify:
>
>  mls flow ipv6 destination
>
> I still get:
>
> %FM-2-FLOWMASK_CONFLICT: Features configured on interface <name> have
> conflicting flowmask requirements, traffic may be switched in software
>
> At this point in time, with my policy attached and working I'm showing:
>
>                  Flowmasks:   Mask#   Type        Features
>                       IPv4:       0   reserved    none
>                       IPv4:       1   Intf Ful    FM_QOS Intf NDE L3 Feature
>                       IPv4:       2   Dest onl    FM_QOS             <--- My policy (V4)
>                       IPv4:       3   reserved    none
>
>                       IPv6:       0   reserved    none
>                       IPv6:       1   Intf Ful    FM_IPV6_QOS
>                       IPv6:       2   Dest onl    FM_IPV6_QOS        <--- My policy (V6)
>                       IPv6:       3   reserved    none
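>
> (For anyone wanting to compare on their own box, that table is the
> flowmask section of the feature-manager output, i.e.:
>
>  show fm fie flowmask
>
> trimmed down to the relevant lines.)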
>
> The command "mls flow ipv6 <anything>" just plain refuses to go active in
> the config, so if I re-send it I get the error shown above every time.
>
> The flowmasks are correctly showing "Intf Full" and "Dest only" in slots 1
> and 2 respectively. So why does my NetFlow request not attach alongside
> either one of them, when it is asking for the same mask that is already
> active in those slots?
>
> The policy itself is working correctly at this point, but I cannot enable
> IPv6 netflow.
>
> Can anyone help?
>
> Robert Williams
> Backline / Operations Team
> Custodian DataCentre
> tel: +44 (0)1622 230382
> email: Robert at CustodianDC.com
> http://www.custodiandc.com/disclaimer.txt
>
>
> Robert Williams
> Backline / Operations Team
> Custodian DataCentre
> tel: +44 (0)1622 230382
> email: Robert at CustodianDC.com
> http://www.custodiandc.com/disclaimer.txt
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
>
>

