[f-nsp] MLX throughput issues

nethub at gmail.com nethub at gmail.com
Fri Feb 13 12:41:02 EST 2015


We have three switch fabrics installed; all of them are under 1% utilized.

 

 

From: Jeroen Wunnink | Hibernia Networks [mailto:jeroen.wunnink at atrato.com] 
Sent: Friday, February 13, 2015 12:27 PM
To: nethub at gmail.com; 'Jeroen Wunnink | Hibernia Networks'
Subject: Re: [f-nsp] MLX throughput issues

 

How many switch fabrics do you have in that MLX, and how high is the
utilization on them?

On 13/02/15 18:12, nethub at gmail.com wrote:

We also tested with a spare Quanta LB4M we have and are seeing about the
same speeds as with the FLS648 (around 20 MB/s, or 160 Mbps).

 

I also reduced the number of routes we are accepting to about 189K, and
that did not make a difference.

 

 

From: foundry-nsp [mailto:foundry-nsp-bounces at puck.nether.net] On Behalf Of
Jeroen Wunnink | Hibernia Networks
Sent: Friday, February 13, 2015 3:35 AM
To: foundry-nsp at puck.nether.net
Subject: Re: [f-nsp] MLX throughput issues

 

The FLS switches do something weird with packets. I've noticed they somehow
interfere with the TCP window/MSS being adjusted dynamically, resulting in
destinations further away getting very poor speed results compared to
destinations close by.

We got rid of those a while ago.
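
For context on why distance matters here: a single TCP flow is roughly
bounded by its effective window divided by the round-trip time, so anything
that clamps the advertised window (or breaks window scaling) hurts distant
destinations far more than nearby ones. A minimal back-of-the-envelope
sketch in Python; the window sizes and RTTs below are illustrative
assumptions, not measurements from this network:

# Rough per-flow TCP bound: throughput <= window / RTT.
# Window sizes and RTTs are illustrative assumptions only.

def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on a single TCP flow, in Mbit/s."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

for label, window, rtt_ms in [
    ("64 KB window,  5 ms RTT (nearby)",        64 * 1024,    5),
    ("64 KB window, 80 ms RTT (distant)",       64 * 1024,   80),
    ("1 MB window,  80 ms RTT (scaling works)", 1024 * 1024, 80),
]:
    print(f"{label}: ~{max_throughput_mbps(window, rtt_ms):.0f} Mbit/s")

With a clamped 64 KB window the distant path collapses to a few Mbit/s while
the nearby path still looks fine, which matches the pattern described above;
a packet capture showing a small receive window or a missing window-scaling
option on the slow path would point at the switch rather than the MLX.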


On 12/02/15 17:37, nethub at gmail.com wrote:

We are having a strange issue on our MLX running code 5.6.00c: we are
encountering throughput problems that seem to randomly impact specific
destination networks.

 

We use the MLX to handle both external BGP and internal VLAN routing.  Each
FLS648 is used for Layer 2 VLANs only.

 

From a server connected by a 1 Gbps uplink to a Foundry FLS648 switch, which
is in turn connected to the MLX on a 10 Gbps port, a speed test to an
external network gets 20 MB/s.

 

Connecting the same server directly to the MLX gets 70 MB/s.

 

Connecting the same server to a customer's Juniper EX3200 (which peers with
the MLX via BGP) also gets 70 MB/s.

 

Testing against another external network, all three scenarios get 110 MB/s.

 

The path to both test network locations goes through the same IP transit
provider.
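
As a vendor-neutral way to repeat the same comparison over each path
(through the FLS648, directly on the MLX, and via the EX3200), a small
single-stream TCP probe can be run between the server and a box on the far
side. This is a rough sketch only; the port number is an arbitrary
placeholder, and it is no substitute for a proper tool such as iperf:

#!/usr/bin/env python3
"""Minimal single-stream TCP throughput probe.

Run "python3 tput.py server" on the receiving box and
"python3 tput.py client <host>" on the sending box.
The port below is an arbitrary placeholder.
"""
import socket
import sys
import time

PORT = 5001          # placeholder test port
DURATION = 10        # seconds to transmit
CHUNK = 64 * 1024    # bytes per send

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            total = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.monotonic() - start
            print(f"received {total / 1e6:.1f} MB from {addr[0]} in "
                  f"{elapsed:.1f} s = {total * 8 / elapsed / 1e6:.0f} Mbit/s")

def client(host: str) -> None:
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        start = time.monotonic()
        while time.monotonic() - start < DURATION:
            conn.sendall(payload)
            sent += len(payload)
        elapsed = time.monotonic() - start
    print(f"sent {sent / 1e6:.1f} MB in {elapsed:.1f} s = "
          f"{sent * 8 / elapsed / 1e6:.0f} Mbit/s")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: tput.py server | tput.py client <host>")

Running the same probe from each attachment point against both the slow and
the fast external destination helps separate a per-flow TCP limitation
(window/MSS along the path) from an actual forwarding problem on the MLX.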

 

We are running an NI-MLX-MR management module with 2 GB RAM. An NI-MLX-10Gx4
connects to the Foundry FLS648 via XFP-10G-LR, and an NI-MLX-1Gx20-GC was
used for directly connecting the server. A separate NI-MLX-10Gx4 connects to
our upstream BGP providers. The customer's Juniper EX3200 connects to the
same NI-MLX-10Gx4 as the FLS648. We take default routes plus full tables
from three providers via BGP, but filter out most of the routes.

 

The fiber and optics on everything look fine. CPU usage is under 10% on the
MLX and all of its line cards, and around 1% on the FLS648. The ARP table on
the MLX has about 12K entries, and the BGP table is about 308K routes.
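
Independent of the CPU numbers, per-port load can be cross-checked from two
samples of the 64-bit interface octet counters (e.g. IF-MIB ifHCInOctets /
ifHCOutOctets via SNMP). The sample values below are made-up placeholders,
not readings from this MLX:

# Average utilization from two samples of a 64-bit octet counter.
# The sample values are placeholders, not readings from this network.

def utilization_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, link_bps: float) -> float:
    """Average link utilization over the sampling interval, in percent."""
    return (octets_t1 - octets_t0) * 8 / (interval_s * link_bps) * 100

# Example: 1.5 GB received in 60 s on a 10 Gbit/s port -> ~2% utilized.
print(f"{utilization_pct(0, 1_500_000_000, 60, 10e9):.1f}% of 10G")

If every link in the path sits in the low single digits like this,
congestion on the MLX-facing ports is unlikely to explain the 20 MB/s
result.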

 

Any assistance would be appreciated.  I suspect there is a setting that
we're missing on the MLX that is causing this issue.







_______________________________________________
foundry-nsp mailing list
foundry-nsp at puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp







-- 
 
Jeroen Wunnink
IP NOC Manager - Hibernia Networks
Main numbers (Ext: 1011): USA +1.908.516.4200 | UK +44.1704.322.300 
Netherlands +31.208.200.622 | 24/7 IP NOC Phone: +31.20.82.00.623
jeroen.wunnink at hibernianetworks.com
www.hibernianetworks.com







