[c-nsp] Exactly how bad is the 6704-10GE?
Saku Ytti
saku at ytti.fi
Thu Oct 9 02:40:06 EDT 2014
On (2014-10-09 01:16 +0100), Simon Lockhart wrote:
> From talking to people on IRC, etc, I'm told that the 6704 runs out of steam
> around 24-26Gbps of throughput when handling imix traffic. I'm also told that
> this is largely driven by pps, rather than bps.
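A quick back-of-the-envelope on that claim (a sketch in Python; the
simple-IMIX mix of 7x40B + 4x576B + 1x1500B is my assumption, the claim
above does not say which imix was used, and wire overhead is ignored):

# What aggregate pps would a 24-26Gbps imix ceiling imply?
imix = [(40, 7), (576, 4), (1500, 1)]
avg_bytes = sum(size * n for size, n in imix) / sum(n for _, n in imix)
for gbps in (24, 26):
    pps = gbps * 1e9 / (avg_bytes * 8)
    print("%d Gbps at %.0fB average -> %.1f Mpps" % (gbps, avg_bytes, pps / 1e6))

At a ~340B average that works out to roughly 8.8-9.5Mpps aggregate, which
gives a concrete pps figure to compare the bps claim against.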
Here's my testing from 2006 on WS-X6704-10GE with WS-F6700-CFC
3. Topologies used
I used four different topologies:
a) anritsu --darkfibre-- ten7/1:7600:ten7/3 --darkfibre-- anritsu
b) anritsu --darkfibre-- ten7/1:7600:ten4/1 --darkfibre-- anritsu
c) anritsu --darkfibre-- ten7/1:7600:ten7/2 --darkfibre-- anritsu
d) anr -dark- ten9/3:7600:ten9/2 -dwdm- ten4/1:7600:ten7/1 -dark- anr
4. Pure IP performance
4.1 no features configured, plain IP routing
a) 67bytes and above is linerate in both directions
b) 65bytes and above is linerate in both directions
c) 64bytes does 87.5% of linerate, rate approaches 100% as size grows,
but it is both bps and pps bound, so no combination of packet size
and interval reached 100%.
d) 67bytes and above is linerate in both directions
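For reference, 10GE line rate in pps at these sizes (a sketch; I'm
assuming the byte counts above are Ethernet frame sizes incl. FCS, plus
20B of preamble/SFD/IFG per frame on the wire):

# 10GE line-rate pps per port, and aggregate for a bidirectional test
def linerate_pps(frame_bytes, link_bps=10e9, overhead=20):
    return link_bps / ((frame_bytes + overhead) * 8)

for size in (64, 65, 67):
    pps = linerate_pps(size)
    print("%3dB: %.2f Mpps/port, %.2f Mpps bidirectional" % (size, pps / 1e6, 2 * pps / 1e6))

That gives 29.76Mpps bidirectional at 64B, 29.41Mpps at 65B and 28.74Mpps
at 67B, so the 65-67B thresholds above would be consistent with an
aggregate forwarding ceiling just under the 29.55Mpps peak noted in 4.3.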
4.2 input ACL, output ACL, input policer, output policer.
I had ~200 lines of ACLs in both directions, with L4 matches and
fragment matches. The policer was imposing explicit-null and policing
traffic to ~25 destinations.
Performance matched that of 4.1.
4.3 RPF, loose or strict, with or without ACL/policer configured
a) 101 bytes and above is linerate in both directions
b) 101 bytes and above is linerate in both directions
c) 64 bytes does 71.8% of linerate, rate approaches 100% as size grows,
but it is both bps and pps bound, so no combination of packet size
and interval reached 100%.
d) 101 bytes and above is linerate in both directions
Fabric utilization during a, b and d was ~65%. During c it was 0%, as the
7600 supports local switching when ingress and egress are in the same
fabric channel. Local switching is supported regardless of DFC.
Peak-pps in forwarding engine was 29.55Mpps during the tests.
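Plugging the RPF results into the same line-rate arithmetic (same
20B/frame wire-overhead assumption as above):

def linerate_pps(frame_bytes, link_bps=10e9, overhead=20):
    return link_bps / ((frame_bytes + overhead) * 8)

# a/b/d: 101B is the smallest size still at line rate, bidirectional
print("101B bidirectional line rate: %.1f Mpps" % (2 * linerate_pps(101) / 1e6))
# c: 64B reached 71.8% of bidirectional line rate
print("71.8%% of 64B bidirectional:  %.1f Mpps" % (0.718 * 2 * linerate_pps(64) / 1e6))

Both work out to roughly 20-21Mpps, so with RPF enabled the numbers point
to an aggregate ceiling around 20-21Mpps, well below the ~29.55Mpps
forwarding-engine peak mentioned above.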
5. MPLS performance
MPLS performance was only measured in setup d), with MPLS running
only over the DWDM between the 7600s.
5.1 MPLS without explicit-null
No performance degradation could be observed when comparing to IP
forwarding; however, no labels were ever actually sent.
5.2 MPLS with explicit-null
About 7.5Mpps of 64-byte packets full duplex was lossless.
It appears that this is both pps bound (centralized MPLS performance
is 20Mpps according to Cisco) and bps bound (I believe explicit-null
is always recirculated, while I think the platform could support it
without recirculation).
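To put numbers on the recirculation theory (a sketch; I'm assuming "full
duplex" means 7.5Mpps in each direction, and that a recirculated packet
costs a second pass through the forwarding engine and internal path):

mpps_per_dir = 7.5
frame_bytes = 64
forwarded = 2 * mpps_per_dir      # Mpps actually forwarded
passes = 2 * forwarded            # two lookup passes per packet if recirculated
internal_gbps = passes * 1e6 * frame_bytes * 8 / 1e9
print("forwarded %.1f Mpps, %.1f Mpps of passes, ~%.1f Gbps internal" %
      (forwarded, passes, internal_gbps))

That is 15Mpps of traffic but ~30Mpps of passes (right around the
~29.55Mpps peak seen in the IP tests) and ~15.4Gbps of internal bandwidth
at 64B, which would fit both the pps-bound and bps-bound observations.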
Notice that in my topology, microbursts would not be a problem, as ingress
capacity is not greater than egress capacity, which might be a woefully
optimistic scenario.
--
++ytti