[c-nsp] Nexus 7k with F2e line cards and egress queuing

Curtis Piehler cpiehler2 at gmail.com
Sat Dec 14 11:17:18 EST 2019


I am hoping some of you Cisco Nexus veterans out there could shed some
light on this issue or provide some insight if this has been encountered
before.

Has anyone had egress VQ congestion issues on the Nexus 7k using F2e line
cards causing input discards?  Traffic has intentionally been shifted onto
these units over the past few months (primarily VoIP traffic from devices
such as SBCs).  These SBCs are mostly 1G interfaces, with a 10G uplink to
the core router.  At some point during a traffic shift, the switch
interfaces facing the SBCs accrue egress VQ congestion, and input discards
start dropping packets entering the switches from the core router uplinks.

We have opened a Cisco TAC ticket, and they go through the whole thing about
the Nexus design and dropping packets on ingress if the destination port is
congested, etc., and I get all that.  They also say going from a 10G uplink
to a 1G downlink is not appropriate; however, those downstream devices are
not capable of 10G.  The amount of traffic influx isn't that much (we're
talking 20-30 Mbps of VoIP at most).  We have removed Cisco FabricPath from
all VDCs and even upgraded our code from 6.2.16 to 6.2.20a on the SUP-2E
supervisors.

I understand the N7K-F248XP-23E/25E have 288 KB/port and 256 KB/SoC, and I
would think that would be more than sufficient.  I know the F3248XP-23/25
have 295 KB/port and 512 KB/SoC, but I can't see the need to pay 7x the
price for line cards when the F2e should be able to handle this traffic.
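Some back-of-the-envelope math on those buffer sizes, assuming the worst case of a line-rate microburst arriving on the 10G uplink destined to a single 1G port (a hypothetical scenario, but it illustrates why input discards can appear even at 20-30 Mbps average load, and why the F3's per-port buffer barely changes the picture):

```python
# Time for a 10G -> 1G speed mismatch to overflow the per-port buffer.
# Fill rate is arrival rate minus drain rate (10 Gbps in, 1 Gbps out).

def fill_time_us(buffer_bytes, in_bps, out_bps):
    """Microseconds for a sustained line-rate burst to fill the buffer."""
    net_bps = in_bps - out_bps            # net fill rate, bits/sec
    return buffer_bytes * 8 / net_bps * 1e6

F2E_PORT = 288 * 1024                     # F2e per-port buffer, bytes
F3_PORT = 295 * 1024                      # F3 per-port buffer, bytes

f2e = fill_time_us(F2E_PORT, 10e9, 1e9)   # roughly a quarter millisecond
f3 = fill_time_us(F3_PORT, 10e9, 1e9)     # only marginally longer
print(f"F2e fills in {f2e:.0f} us, F3 in {f3:.0f} us")
```

So a line-rate burst fills either card's per-port buffer in well under a millisecond; the F3 mainly buys the larger shared SoC pool, not meaningfully more per-port headroom.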

We have recently taken the approach of moving the 1G SBCs down to an N5596
vPC stack linked via a 20G port-channel (per 5596) from the parent 7ks, as I
understand the 5596 has a different egress queue structure and may be more
suited to this type of application.

Any insight would be appreciated.

Thanks
