[c-nsp] Nexus 9300 sflow performance

Satish Patel satish.txt at gmail.com
Tue Apr 13 12:13:16 EDT 2021


Folks,

I know this thread is old, but I am having some strange issues that you
folks may be able to help me out with.

This is my config on a Cisco Nexus 9396PX (NX-OS 7.0(3)I7(8)):

hardware access-list tcam region span-sflow 256
!
feature sflow
sflow counter-poll-interval 30
sflow collector-ip 10.30.0.91 vrf management
sflow collector-port 9995
sflow agent-ip 172.30.0.26
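
Once the data-source question is sorted out I also plan to set an explicit
sampling rate, probably something like the line below (the exact value is
still to be decided, per the rate-limiter discussion quoted further down):

sflow sampling-rate 4096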

When I try to set a data-source interface, I am not able to use the 40G ports.

sflow data-source interface e2/1

The above command is accepted silently, but when I run "show run sflow"
to verify, I do not see data-source interface e2/1 in the config. If I
use a 10G interface such as e1/1, it does work.

The official Cisco documentation says:

Make sure that the sFlow and SPAN ACL TCAM region sizes are configured
for any uplink ports that are to be configured as an sFlow data source
on the following devices: Cisco Nexus 9332PQ, 9372PX, 9372TX, and
93120TX switches and Cisco Nexus 9396PX, 9396TX, and 93128TX switches
with the N9K-M6PQ or N9K-M12PQ generic expansion module (GEM).

I can't find any SPAN ACL region to carve, so what and where do I need
to set the SPAN ACL size? (My rough guess is sketched after the output
below.) This is what I have currently:

(config)# show hardware access-list tcam region | exclude 0
                               IPV4 PACL [ifacl] size =  512
                             IPV4 Port QoS [qos] size =  256
                                IPV4 VACL [vacl] size =  512
                                IPV4 RACL [racl] size =  512
                         Egress IPV4 VACL [vacl] size =  512
                       Egress IPV4 RACL [e-racl] size =  256
                                  Ingress System size =  256
                                   Egress System size =  256
                             Ingress COPP [copp] size =  256
                             Redirect [redirect] size =  512
                       NS IPV4 Port QoS [ns-qos] size =  256
                      NS IPV4 VLAN QoS [ns-vqos] size =  256
                       NS IPV4 L3 QoS [ns-l3qos] size =  256
 VPC Convergence/ES-Multi Home [vpc-convergence] size =  256
                       ranger+ IPV4 QoS [rp-qos] size =  256
                  ranger+ IPV6 QoS [rp-ipv6-qos] size =  256
                    ranger+ MAC QoS [rp-mac-qos] size =  256
                               sFlow ACL [sflow] size =  256
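
If the doc is referring to the [span] region (that's my guess; I haven't
found it spelled out anywhere), I assume I would have to free 256 entries
from some other region first and then carve it, followed by a save and a
reload, roughly along these lines (shrinking racl here is just an example;
any region with spare entries would do, since the total TCAM is fixed):

(config)# hardware access-list tcam region racl 256
(config)# hardware access-list tcam region span 256
(config)# end
# copy running-config startup-config
# reload

Can anyone confirm whether that is the right region to carve, and whether
it is actually required for the 40G GEM ports?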

On Mon, May 13, 2019 at 2:08 PM Tim Stevenson (tstevens) via cisco-nsp
<cisco-nsp at puck.nether.net> wrote:
>
>
>
>
> ---------- Forwarded message ----------
> From: "Tim Stevenson (tstevens)" <tstevens at cisco.com>
> To: Lasse Birnbaum Jensen <lasse at sdu.dk>, "cisco-nsp at puck.nether.net" <cisco-nsp at puck.nether.net>
> Cc:
> Bcc:
> Date: Mon, 13 May 2019 18:06:58 +0000
> Subject: RE: [c-nsp] Nexus 9300 sflow performance
> First gen n9k does not support Netflow at all, only sflow. 2nd gen (EX/FX/FX2) support both, but there is the SPAN+SFlow limitation (we are working on fixing that for FX2, which can theoretically support these concurrently).
>
> For recommended sampling value, we set the rate limiters at values that we feel the switch can handle. So you should select a sampling value where you do NOT see HWRL drops, as then it's truly 1:n. If you drop samples, then your sampling is 1:n up to the point where you tail drop excess packets, and that will skew the samples you actually process and export and reduce the statistical validity of the sample.
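> As a rough illustration only (using figures quoted elsewhere in this thread - roughly 24Mpps of traffic and a 40Kpps sflow rate limiter - not a recommendation):
>
>     n >= 24,000,000 pps / 40,000 pps = 600
>
> So a sampling rate of 1:1000 or coarser should keep you clear of the limiter with some headroom.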
>
> Using mgmt0 should be fine for sflow export.
>
> Hope that helps,
> Tim
>
>
> -----Original Message-----
> From: Lasse Birnbaum Jensen <lasse at sdu.dk>
> Sent: Monday, May 13, 2019 10:58 AM
> To: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] Nexus 9300 sflow performance
>
> I'm not totally sure about the N9300 architecture, but normally the mgmt interface is connected "directly" to the control-plane CPU, so having it process a lot of packets will consume CPU resources and might impact control-plane protocols and jobs. Netflow is performed in the ASICs, and I think it would be better to use an ASIC-bound interface if possible.
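> If an ASIC-bound path is preferred, my understanding (untested on my end) is that it mostly comes down to which VRF you point the collector at, e.g. something like:
>
>     sflow collector-ip <collector-ip> vrf default
>
> with the collector reachable via a front-panel L3 interface in that VRF.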
>
> Best regards
>
> Lasse Birnbaum Jensen
> Network Architect
> IT-services
>
> T  +45 65 50 28 73
> M  +45 60 11 28 73
> lasse at sdu.dk
> http://www.sdu.dk/ansat/lbje
>
> University of Southern Denmark
> Campusvej 55
> DK-5230 Odense M
> www.sdu.dk
>
> On 20/03/2019 at 18.27, "cisco-nsp on behalf of Satish Patel" <cisco-nsp-bounces at puck.nether.net on behalf of satish.txt at gmail.com> wrote:
>
>     Thanks Tim,
>
>     Here is the output of 'show hardware rate-limiter' below (I believe the
>     sflow limit is 40k).
>
>     This is my first time dealing with sFlow. It would be great if you could
>     share some best-practice configuration parameters. And what is a 1-in-N
>     sample, actually?
>
>     I am planning to use the mgmt0 interface for sFlow; it's 1G, so I assume
>     it will handle all the flows. Do you see any concern there?
>
>
>     # show hardware rate-limiter
>
>     Units for Config: packets per second
>     Allowed, Dropped & Total: aggregated since last clear counters
>
>
>     Module: 1
>       R-L Class           Config           Allowed         Dropped            Total
>      +------------------+--------+---------------+---------------+-----------------+
>       L3 glean                 100               0               0                 0
>       L3 mcast loc-grp        3000               0               0                 0
>       access-list-log          100               0               0                 0
>       bfd                    10000               0               0                 0
>       exception                 50               0               0                 0
>       fex                     3000               0               0                 0
>       span                      50               0               0                 0
>       dpss                    6400               0               0                 0
>       sflow                  40000     25134089890               0       25134089890
>
>     On Wed, Mar 20, 2019 at 12:07 PM Tim Stevenson (tstevens)
>     <tstevens at cisco.com> wrote:
>     >
>     > Yes, this is 1st gen. The SFLOW/SPAN restriction should not apply there.
>     >
>     > Re: 60Gbps/24Mpps and SFLOW, SFLOW does not do aggregation of stats for flows in the switch like netflow does - it's just 1-in-n packet sampling. As such, the value of "n" should be high enough that both the switch & the collector are not overburdened. Note that we will rate limit SFLOW copies to the CPU so that's the first 'bottleneck'. If you end up tail-dropping samples, the statistical validity of your sampled set goes out the window, so you want to ensure that 1-in-n is a number that does not hit that rate limiter.
>     >
>     > I don't have a 1st gen switch handy to see what the defaults are for that value. It should show up in 'sh hardware rate-limiter'. In 9300-EX with 9.2.2 it's 40Kpps.
>     >
>     > Beyond that, you also want to make sure the collector is able to consume everything coming from all sflow enabled switches without dropping, for the same reason mentioned above.
>     >
>     > Hope that helps,
>     > Tim
>     >
>     >
>     > -----Original Message-----
>     > From: Satish Patel <satish.txt at gmail.com>
>     > Sent: Wednesday, March 20, 2019 8:40 AM
>     > To: Nick Cutting <ncutting at edgetg.com>
>     > Cc: Tim Stevenson (tstevens) <tstevens at cisco.com>; cisco-nsp at puck.nether.net
>     > Subject: Re: [c-nsp] Nexus 9300 sflow performance
>     >
>     > We have a Cisco Nexus9000 C9396PX.
>     >
>     > 60 Gbps is the data traffic, at about 24 Mpps (packets per second); I'm
>     > not sure how to convert that into flows. Could you please share your
>     > sflow configuration, if you don't mind?
>     >
>     > I had nfsen in the past on 8 CPUs / 4 GB of memory, but it was damn
>     > slow :( though that could just be me. I will set it up again and see
>     > whether it's worth it or not.
>     >
>     > On Wed, Mar 20, 2019 at 11:34 AM Nick Cutting <ncutting at edgetg.com> wrote:
>     > >
>     > > Good point. We waited for the second gen.
>     > >
>     > > Regarding 60 Gbps, isn't that the data traffic rather than the flow or sampled-flow rate?
>     > >
>     > > Our NFSen box is CentOS:
>     > >
>     > > 4 vCPUs and 4 GB of RAM,
>     > >
>     > > collecting flows from maybe only 30 devices, about 20 Gbps and 3k flows per second.
>     > >
>     > > -----Original Message-----
>     > > From: Tim Stevenson (tstevens) <tstevens at cisco.com>
>     > > Sent: Wednesday, March 20, 2019 11:20 AM
>     > > To: Nick Cutting <ncutting at edgetg.com>; Satish Patel <satish.txt at gmail.com>; cisco-nsp at puck.nether.net
>     > > Subject: RE: [c-nsp] Nexus 9300 sflow performance
>     > >
>     > > Make sure you distinguish between N9300 (1st generation) and N9300-EX/FX/FX2 (2nd generation). The SFLOW + SPAN limitation applies only to the latter. It's also on the latter that Netflow is supported, which can run concurrently with SPAN sessions.
>     > >
>     > > Tim
>     > >
>     > > -----Original Message-----
>     > > From: cisco-nsp <cisco-nsp-bounces at puck.nether.net> On Behalf Of Nick Cutting
>     > > Sent: Wednesday, March 20, 2019 6:19 AM
>     > > To: Satish Patel <satish.txt at gmail.com>; cisco-nsp at puck.nether.net
>     > > Subject: Re: [c-nsp] Nexus 9300 sflow performance
>     > >
>     > > We use sflow on 9300s - no performance hit - but you cannot use SPAN sessions at the same time.
>     > >
>     > > Newer code revisions support netflow, without the SPAN session limitation, although we have not tried netflow on the 9300 yet.
>     > >
>     > > For a collector we use NFSen - open source, with quite a big install base, and it seems to handle a lot of flows.
>     > >
>     > > It supports both sflow and netflow, as we have a mix; just make sure you add the sflow option at build time, as it's a bit of funky old Linux to add it afterwards.
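>     > >
>     > > (For reference, if memory serves the sflow option is a configure flag in nfdump, roughly:
>     > >
>     > >     ./configure --enable-sflow
>     > >
>     > > which builds the sfcapd collector alongside nfcapd - but please double-check the nfdump docs, I am going from memory.)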
>     > >
>     > >
>     > >
>     > > -----Original Message-----
>     > > From: cisco-nsp <cisco-nsp-bounces at puck.nether.net> On Behalf Of Satish Patel
>     > > Sent: Wednesday, March 20, 2019 8:21 AM
>     > > To: cisco-nsp at puck.nether.net
>     > > Subject: [c-nsp] Nexus 9300 sflow performance
>     > >
>     > > Folks,
>     > >
>     > > I have an L3 Nexus 9300 switch running 60 Gbps of traffic on an ISP interface, so I'm planning to run sflow on that specific interface to get flows.
>     > >
>     > > Is it going to create any performance issues on the switch?
>     > >
>     > > Can I run sflow on a Layer 3 LACP interface?
>     > >
>     > > Can anyone suggest a free, open-source sflow collector?
>     > >
>     > > Sent from my iPhone
>     > > _______________________________________________
>     > > cisco-nsp mailing list  cisco-nsp at puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp
>     > > archive at http://puck.nether.net/pipermail/cisco-nsp/
>     > >
>     _______________________________________________
>     cisco-nsp mailing list  cisco-nsp at puck.nether.net
>     https://puck.nether.net/mailman/listinfo/cisco-nsp
>     archive at http://puck.nether.net/pipermail/cisco-nsp/
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/


More information about the cisco-nsp mailing list