[c-nsp] GSR12008|GRP-B|4OC12/ATM-MM-SC|3GE-GBIC-SC throughput?
Lamar Owen
lowen at pari.edu
Thu Apr 16 22:45:52 EDT 2009
On Wednesday 15 April 2009 11:34:34 Lamar Owen wrote:
> On Tuesday 14 April 2009 18:22:03 Jason Lixfeld wrote:
> > For the life of us, we can't seem to get any more than 60Mbps
> > sustained across the ATM link when testing with iperf, so we're trying
> > to figure out whether the GSR just can't push any more than it's
> > doing, or whether something else is afoot.
> I have a 12012 here in production and some of the kit necessary to test
> point-to-point ATM connections (including a Catalyst 8540MSR with OC12,
> ARM, and gigabit cards), plus a 4xOC12/ATM/MM, but it will be a few days
> before I have the time to set up a test to see whether the 12012 is
> limited.
OK, I had a little time today, so I got some data. Setup:
Dell Inspiron 600m w/ Gigabit Ethernet, running Fedora 10's iperf against a
server, which is a CentOS 4 VM on an eight-way Opteron VMware ESX system (Dell
PowerEdge 6950).
GSR has a 4xOC12 MM ATM card.
The other ATM OC12 endpoint is a Catalyst 8540CSR with an OC12 ATM MM uplink
card and a dual GigE card. (While I have an 8540MSR, the setup is more complex
with the MSR than with the CSR plus the ATM uplink, and I wanted the simplest
possible setup to see whether the GSR was the limiter.)
As the server is in production, I left it attached to the server farm core
Extreme Summit1i's, which are GigE-attached to the 12012 GSR. In the topology
below, I only list one Summit1i, but there are two in an ESRP setup.
Topology:
600m <--> 8540CSR GigabitEthernet10/0/0 via 1000Base-T GBIC
8540CSR ATM0/0/0.1 (VPI/VCI 1/17 PVC) <--> 12012 ATM7/0.1 (VPI/VCI 1/17 PVC)
12012 GigabitEthernet4/0 <--> Extreme Summit1i port 8
Extreme Summit1i port 1 <--> Dell 6950 ESX server GE1.
The 12012 and the Summit1i are in production (the 12012 is the working side of
our APS-protected OC3 WAN link, and the Summit1i is half of the server farm
core), so both carried other traffic, and the load on the VM varied during the
test. I'm pretty happy with how much traffic the Dell 600m laptop generated,
by the way!
12012 ATM7 is a 4xOC12 ATM MM line card; 8540CSR ATM0/0/0 is a Catalyst 8540
OC12 ATM uplink module. IOS on the 12012 is 12.0(32)S12; on the 8540CSR it's
12.1(27b)E3. The 12012 has two GRP-Bs.
Data:
12012 throughput at peak:
pari-gsr-12#sh int atm7/0 load
Interface                 bits/sec   pack/sec
--------------------  ------------ ----------
AT7/0             Tx     206605000      24617
                  Rx     354535000      34717
pari-gsr-12#
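As a back-of-the-envelope sanity check on those load counters (my own
arithmetic, not anything the GSR reports), dividing bits/sec by packets/sec
gives the average packet size in each direction. Each direction mixes
full-size iperf segments with ACKs for the opposite stream, which is why the
averages land below the 1500-byte Ethernet MTU:

def avg_pkt_bytes(bits_per_sec, pkts_per_sec):
    # average bytes per packet on the wire in one direction
    return bits_per_sec / pkts_per_sec / 8

print(f"Tx avg: {avg_pkt_bytes(206605000, 24617):.0f} bytes")  # ~1049
print(f"Rx avg: {avg_pkt_bytes(354535000, 34717):.0f} bytes")  # ~1277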
8540CSR throughput at peak:
sr1-8540c-1>sh int atm0/0/0 summ
*: interface is up
IHQ: pkts in input hold queue IQD: pkts dropped from input queue
OHQ: pkts in output hold queue OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec) RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec) TXPS: tx rate (pkts/sec)
TRTL: throttle count
Interface        IHQ  IQD  OHQ  OQD  RXBS       RXPS   TXBS       TXPS   TRTL
------------------------------------------------------------------------------
* ATM0/0/0         0    0    0    0  207281000  24491  353708000  34530
* ATM0/0/0.1       -    -    -    -  -          -      -          -      -
NOTE:No separate counters are maintained for subinterfaces
Hence Details of subinterface are not shown
sr1-8540c-1>
Output of iperf at the client (Dell Inspiron 600m, Pentium M 1.8GHz, Fedora
10), slightly sanitized:
[root@localhost ~]# iperf --client esx-host -t 720 --dualtest
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to esx-host, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 5] local 10.250.132.30 port 46676 connected with esx-host port 5001
[ 4] local 10.250.132.30 port 5001 connected with esx-host port 45629
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-719.9 sec 18.0 GBytes 215 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-720.0 sec 31.3 GBytes 374 Mbits/sec
[root@localhost ~]#
Output of iperf on the server (a 2-vCPU VM running CentOS 4, on a four-way
dual-core 2.8GHz Opteron Dell 6950 with ESX 3.5U3):
[root@esx-host ~]# iperf --server
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local esx-host port 5001 connected with 10.250.132.30 port 46676
------------------------------------------------------------
Client connecting to 10.250.132.30, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 5] local esx-host port 45629 connected with 10.250.132.30 port 5001
Waiting for server threads to complete. Interrupt again to force quit.
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-720.0 sec 18.0 GBytes 215 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-720.1 sec 31.3 GBytes 374 Mbits/sec
[root@esx-host ~]#
Port configs:
GSR:
interface ATM7/0
 no ip address
 no ip directed-broadcast
 atm clock INTERNAL
 no atm enable-ilmi-trap
 no atm ilmi-keepalive
!
interface ATM7/0.1 point-to-point
 ip address 10.250.132.25 255.255.255.252
 no ip directed-broadcast
 no atm enable-ilmi-trap
 snmp trap link-status
 pvc 1/17
 !
!
Catalyst 8540CSR:
interface ATM0/0/0
 no ip address
 atm clock INTERNAL
 sonet ais-shut
 arp timeout 900
!
interface ATM0/0/0.1 point-to-point
 ip address 10.250.132.26 255.255.255.252
 pvc 1/17
 !
!
That is pretty good throughput for a single workstation, attached over GigE
and throttled through an ATM OC12 with AAL5 overhead (SAR, the VPI/VCI cell
tax, etc.), talking to a fairly busy server.
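For perspective, here is a rough sketch (Python, my own arithmetic) of what an
OC12c can carry once SONET overhead, the 48-in-53 cell tax, and AAL5 padding
are accounted for. The LLC/SNAP encapsulation and 1500-byte IP MTU are
assumptions on my part; the exact ceiling shifts with the encapsulation in
use:

import math

CELL_RATE_BPS = 599.04e6  # STS-12c payload rate available to ATM cells
CELL, PAYLOAD = 53, 48    # bytes per cell / payload bytes per cell

def aal5_ip_goodput(ip_bytes, snap=8, trailer=8):
    """Max IP-level goodput for a given packet size over AAL5 on OC12c."""
    pdu = ip_bytes + snap + trailer    # AAL5 PDU before padding
    cells = math.ceil(pdu / PAYLOAD)   # padded up to whole cells
    return CELL_RATE_BPS * ip_bytes / (cells * CELL)

print(f"IP max:  {aal5_ip_goodput(1500) / 1e6:.0f} Mbps")              # ~530
print(f"TCP max: {aal5_ip_goodput(1500) * 1460/1500 / 1e6:.0f} Mbps")  # ~516

By that math, each direction (374 and 215 Mbits/sec) stays well under the
roughly 516 Mbits/sec per-direction TCP ceiling, which points at the hosts'
TCP stacks rather than the GSR or the ATM link as the limit.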
You might find
http://www.osti.gov/bridge/servlets/purl/764365-05obbP/native/764365.pdf and
http://www-didc.lbl.gov/Talks/GBN.final.pdf to be interesting reading.
In light of LBNL's experience, detailed in those two papers, I'm very happy
indeed with the results of the laptop test.
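Those papers largely come down to the bandwidth-delay product: a single TCP
stream can move at most one window per round trip, so throughput is bounded by
window / RTT. A minimal sketch follows; the 1 ms LAN RTT is purely my
assumption, and note that the "default" windows iperf prints are only initial
buffer sizes (a modern Linux stack autotunes beyond them, and iperf's -w flag
can raise them explicitly):

def tcp_ceiling_mbps(window_bytes, rtt_sec):
    # at most one window per round trip, expressed in Mbits/sec
    return window_bytes * 8 / rtt_sec / 1e6

for win_kb in (16.0, 85.3):  # the two defaults iperf printed above
    print(f"{win_kb:5.1f} KB window @ 1 ms RTT -> "
          f"{tcp_ceiling_mbps(win_kb * 1024, 0.001):6.1f} Mbits/sec ceiling")

# window needed to fill ~516 Mbits/sec of AAL5 goodput at a 1 ms RTT
print(f"needed: {516e6 * 0.001 / 8 / 1024:.0f} KB")  # ~63 KB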
Hope that helps.