[c-nsp] Multiple flow-masks

Tóth András diosbejgli at gmail.com
Sun Dec 9 14:51:10 EST 2012


Hi Robert,

This limitation applies to IPv4 and IPv6 alike; it does not go away if
you're not using IPv6. I double-checked on my device and even without any
ipv6 config or ipv6 NDE, if you have a dest-only QoS microflow policer and
NDE enabled, there's a flowmask conflict. I can send you the outputs
off-list (unicast) if you're interested.

You have the following options if you would like to use UBRL:
- Disable NetFlow only on the interface where the microflow QoS policy
should be applied, by removing 'ip flow ingress'.
- Use the 'full-flow' mask instead of 'dest-only' in the QoS policy-map.
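
As a rough sketch of both options (the interface and policy-map names
below are placeholders, not from your config):

! Option 1: keep the dest-only policer, disable NetFlow on that interface
interface GigabitEthernet1/1
 no ip flow ingress
 service-policy input UBRL-POLICY

! Option 2: keep NetFlow, move the policer to the full-flow mask
policy-map UBRL-POLICY
  class class-default
     police flow mask full-flow 200000000 512000 conform-action transmit exceed-action drop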

Feel free to check it yourself with the 'sh fm fie int gi x/y' command:
the "Flowmask conflict status for protocol IP" line should state
FIE_FLOWMASK_STATUS_SUCCESS. If it shows something else, there's a
conflict.
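
For example (the interface number is a placeholder and the output is
abbreviated from memory, so the exact formatting may differ):

sh fm fie int gi 1/1 | include Flowmask conflict
  Flowmask conflict status for protocol IP: FIE_FLOWMASK_STATUS_SUCCESS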

Best regards,
Andras


On Sun, Dec 9, 2012 at 7:03 PM, Robert Williams <Robert at custodiandc.com> wrote:

> Hi Andras,
>
> Yes, that has indeed applied without error; however, as you point out, it
> does not meet the requirements I have from the customer. In short, the
> customer needs a limit applied per IP (of his clients) to restrict
> inbound traffic in the direction Internet->Client. This is because,
> downstream of us, the client has several links of only 1 Gb/s serving
> large numbers of his clients. I know there are several ways this could be
> resolved (not least a switch upgrade within the client’s network), but
> they are not currently options, so we are trying to come up with
> something for them which we can deploy ahead of their connection with us.
>
> My understanding of the full-flow mask is that it limits each flow
> individually, so two transfers to the same IP will count as 2 x the
> limit, which makes it inappropriate in this scenario, unfortunately.
>
> So, it looks like we will have to deploy it for IPv4 only and skip IPv6,
> which is a shame because the client is a growing IPv6 user, so I can see
> the problem surfacing again soon.
>
> Interestingly, can anyone confirm whether the Sup-2T can do what we are
> trying to do without this IPv6 oddness?
>
> Once again, my thanks to you!
>
> From: Tóth András [mailto:diosbejgli at gmail.com]
> Sent: 09 December 2012 17:16
> To: Robert Williams
> Cc: cisco-nsp NSP
> Subject: Re: [c-nsp] Multiple flow-masks
>
> Hi Robert,
>
> Thanks for the clarification. What I meant is that the QoS microflow
> policer needs to use the full-flow mask, otherwise you will likely have a
> conflict with NDE. I understand that destination-based policing is a
> requirement for you.
> In addition to that, if the NDE flowmask is interface-full-flow, it will
> not work with QoS microflow policing.
>
> Try configuring the following:
>
> mls flow ip interface-destination-source
> mls flow ipv6 full
>
> policy-map test-policy
>   class class-default
>      police flow mask full-flow 200000000 512000 conform-action transmit exceed-action drop
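>
> Then attach it on the ingress interface as before (interface name is a
> placeholder):
>
> interface xx
>  service-policy input test-policy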
>
> This should work, but it will use the full-flow mask for QoS.
>
> Best regards,
> Andras
>
> On Sun, Dec 9, 2012 at 4:36 PM, Robert Williams <Robert at custodiandc.com>
> wrote:
>
> Hi Andras,
>
> Many thanks for that – sorry the outputs were misleading, but I have
> already tried all 6 variants of the 'mls flow ipv6 <…>' command, including
> "mls flow ipv6 full" – they all produce the same results, culminating in
> the %FM-2-FLOWMASK_CONFLICT error.
>
> I have removed all other policies from the device to rule them out, so a
> clean test / demonstration of the issue is as follows:
>
> mls commands currently set:
>
> mls ipv6 acl compress address unicast
> mls netflow interface                      <- (there is no non-interface version of this command)
> mls nde sender
> mls qos
> mls flow ip interface-destination-source   <- (there is no version of this command without the ‘interface’ keyword for IPv4)
> mls flow ipv6 full
>
> flow commands currently set:
>
> ip flow-export source xx
> ip flow-export version 9
> ip flow-export destination x.x.x.x xxxx
> ip flow-top-talkers
>
> With all the above in place, I do get the Full Flow mask as you say:
>
>                       IPv6:       0   reserved    none
>                       IPv6:       1   Full Flo    FM_IPV6_GUARDIAN
>                       IPv6:       2   Null
>                       IPv6:       3   reserved    none
>
> Plus there is a space in slot 2 - however, when I try to apply my policy:
>
> policy-map test-policy
>   class class-default
>     police flow mask dest-only 200000000 512000 conform-action transmit exceed-action drop
>
> interface xx
>  service-policy input test-policy
>
> I still get: %FM-2-FLOWMASK_CONFLICT: Features configured on interface xx
> have conflicting flowmask requirements, traffic may be switched in software
>
> My policy has to be destination-based so I cannot change that, but apart
> from that I think I'm trying what you are describing – my apologies if I'm
> still missing the point!
>
> Cheers!
>
> From: Tóth András [mailto:diosbejgli at gmail.com]
> Sent: 09 December 2012 14:47
> To: Robert Williams
> Cc: cisco-nsp NSP
> Subject: Re: [c-nsp] Multiple flow-masks
>
> The outputs you pasted suggest that you're using the "interface-full"
> flowmask. The workaround is to use the "full" flowmask instead of
> "interface-full", as mentioned in my last email. IPv6 entries consume more
> TCAM space than IPv4, so comparing the two should take that into account.
>
> Best regards,
> Andras
>
> On Sun, Dec 9, 2012 at 2:52 PM, Robert Williams <Robert at custodiandc.com>
> wrote:
>
> Hi,
>
> Thanks very much for that. I’ll have to run through everything in that
> document, because the tests I’m doing suggest that it ‘should’ work. For
> example:
>
> With both my policy and ‘mls ipv6 flow interface-full’ disabled I see one
> entry, which is presumably because ‘mls qos’ is enabled:
>
>       IPv6:       1   Intf Ful    FM_IPV6_QOS
>       IPv6:       2   Null
>
> Then if I enable only “mls ipv6 flow interface-full”, the FM_IPV6_GUARDIAN
> ‘feature’ appears:
>
>       IPv6:       1   Intf Ful    FM_IPV6_GUARDIAN FM_IPV6_QOS
>       IPv6:       2   Null
>
> As you can see, there is still a gap for the policy to take the second
> flowmask slot and use whatever type of mask it wants.
>
> As a test, if I disable the IPv6 flow and just enable my policy by
> itself, it goes into the second slot correctly and uses the Destination
> Only mask:
>
>       IPv6:       1   Intf Ful    FM_IPV6_QOS
>       IPv6:       2   Dest onl    FM_IPV6_QOS
>
> So, I was assuming that a combination of these two features would be
> acceptable, since they operate in different mask slots when enabled
> separately; I didn’t see why they should collide.
>
> That holds as far as IPv4 goes: for v4 there is no conflict warning (and
> the policy works in hardware perfectly!). However, in IPv6 that does not
> appear to be the case.
>
> Looks like yet another unhappy IPv6 feature on the Sup-720, unless anyone
> can see a way around it that I’m missing?
>
> As an aside, does anybody know why it is called FM_IPV6_GUARDIAN instead
> of FM_IPV6_QOS (like in v4)? I’m wondering if this difference is the
> reason for its inability to combine the two masks successfully…
>
> Cheers!
>
> From: Tóth András [mailto:diosbejgli at gmail.com]
> Sent: 08 December 2012 21:09
> To: Robert Williams
> Cc: cisco-nsp NSP
> Subject: Re: [c-nsp] Multiple flow-masks
>
> Hi Robert,
>
> A few things to keep in mind.
>
> With Release 12.2(33)SXI4 and later releases, when appropriate for the
> configuration of the policer, microflow policers use the interface-full
> flow mask, which can reduce flowmask conflicts. Releases earlier than
> Release 12.2(33)SXI4 use the full flow mask.
>
> The flowmask requirements of QoS, NetFlow, and NetFlow data export (NDE)
> might conflict, especially if you configure microflow policing.
>
> http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/qos.html
>
> To add to this, note the following restrictions/recommendations:
>
> The micro-flow policing full-flow mask is compatible with NDE flow masks
> that are shorter than or equal to full flow (except for
> destination-source interface).
> With any micro-flow policing partial mask, an error message is displayed
> and either the micro-flow policer or NDE might get disabled.
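>
> For instance, pairing a full NDE flow mask with a full-flow micro-flow
> policer should satisfy those rules (a sketch only; the policy name and
> rate values are illustrative):
>
> mls flow ipv6 full
> policy-map test-policy
>   class class-default
>      police flow mask full-flow 200000000 512000 conform-action transmit exceed-action drop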
>
> Best regards,
> Andras
>
> On Sat, Dec 8, 2012 at 3:50 PM, Robert Williams <Robert at custodiandc.com>
> wrote:
>
> Hi,
>
> Unfortunately we use NetFlow for an automated system we have (it doesn't
> need to accurately record everything, just the highest number of flows /
> packets etc.), so I cannot just remove it. However, I have made some
> progress.
>
> I've tracked the problem down to the IPv6 netflow / masks. With all
> netflow removed, I am able to add my policy-map and it works. Then, by
> adding the netflow commands back in, I can get everything back except the
> command:
>
>  mls flow ipv6 <any command>
>
> So even if I specify:
>
>  mls flow ipv6 destination
>
> I still get:
>
> %FM-2-FLOWMASK_CONFLICT: Features configured on interface <name> have
> conflicting flowmask requirements, traffic may be switched in software
>
> At this point in time, with my policy attached and working, I'm showing:
>
>                  Flowmasks:   Mask#   Type        Features
>                       IPv4:       0   reserved    none
>                       IPv4:       1   Intf Ful    FM_QOS Intf NDE L3 Feature
>                       IPv4:       2   Dest onl    FM_QOS             <--- My policy (V4)
>                       IPv4:       3   reserved    none
>
>                       IPv6:       0   reserved    none
>                       IPv6:       1   Intf Ful    FM_IPV6_QOS
>                       IPv6:       2   Dest onl    FM_IPV6_QOS        <--- My policy (V6)
>                       IPv6:       3   reserved    none
>
> The command "mls flow ipv6 <anything>" just plain refuses to go active in
> the config, so if I re-send it I get the error shown above every time.
>
> The flowmasks are correctly showing "Intf Full" and "Dest only" in slots 1
> and 2 respectively. So why does my netflow request not attach alongside
> either one of them, when it's looking for the same mask that is already
> active in those slots?
>
> The policy itself is working correctly at this point, but I cannot enable
> IPv6 netflow.
>
> Can anyone help?
>
> Robert Williams
> Backline / Operations Team
> Custodian DataCentre
> tel: +44 (0)1622 230382
> email: Robert at CustodianDC.com
> http://www.custodiandc.com/disclaimer.txt
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/