[f-nsp] MLX throughput issues

Chris Hellkvist chris.hellkvist at googlemail.com
Fri Feb 13 05:44:05 EST 2015


Hey,

this sounds like a good tip. We are seeing an issue very similar to the one
reported in this thread.
Speed to local servers is fine, but to remote servers throughput drops as the
latency to them increases (with no overloaded links or anything like that in
the path); a rough back-of-the-envelope calculation of why that pattern shows
up is below.
In our case the core devices are also MLX(e) boxes, but the servers do not
terminate directly on the MLX; the path to them includes a Cisco with a Sup720
for routing and HP switches on the L2 path to the servers.
Jeroen, could you share a bit more insight on the issue you had with
dynamic MSS adjustments? Have you been able to find a way to change the
behaviour of the switches? Have you seen such an issue with the other
Brocade equipment you have at Hibernia?
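
In case it is useful for comparing notes: one way to check whether a device in
the path rewrites TCP options is to capture SYNs on both sides of it and
compare the MSS and window-scale values. A small scapy sketch along those
lines (the interface name is just a placeholder; sniffing needs root):

# Sketch: print the MSS and window-scale options from TCP SYN packets, so
# captures taken on either side of the suspect switch can be compared.
# "eth0" is a placeholder interface name.
from scapy.all import sniff, TCP

def show_syn_options(pkt):
    # TCP options come back as (name, value) tuples, e.g. ('MSS', 1460).
    opts = {name: value for name, value in pkt[TCP].options}
    print(f"{pkt.summary()}  MSS={opts.get('MSS')}  WScale={opts.get('WScale')}")

# Capture SYNs (and SYN/ACKs) only; stop after 20 packets.
sniff(iface="eth0",
      filter="tcp[tcpflags] & tcp-syn != 0",
      prn=show_syn_options,
      count=20)

If the SYN leaves the server with one MSS and window scale but shows up on the
far side of the switch with different values (or with the scale option
stripped), that points straight at the box in between.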

Thanks,
Chris

On Friday, 13 February 2015, Jeroen Wunnink | Hibernia Networks wrote:

>  The FLS switches do something weird with packets. I've noticed they
> somehow interfere with changing the MSS window size dynamically, resulting
> in destinations further away having very poor speed results compared to
> destinations close by.
>
> We got rid of those a while ago.
>
>
> On 12/02/15 17:37, nethub at gmail.com wrote:
>
>  We are having a strange issue on our MLX running code 5.6.00c: throughput
> problems that seem to randomly impact specific networks.
>
>
>
> We use the MLX to handle both external BGP and internal VLAN routing.
> Each FLS648 is used for Layer 2 VLANs only.
>
>
>
> From a server connected by a 1 Gbps uplink to a Foundry FLS648 switch, which
> is in turn connected to the MLX on a 10 Gbps port, a speed test to an
> external network gets 20 MB/s.
>
>
>
> Connecting the same server directly to the MLX is getting 70MB/s.
>
>
>
> Connecting the same server to one of my customer's Juniper EX3200 (which
> BGP peers with the MLX) also gets 70MB/s.
>
>
>
> Testing to another external network, all three scenarios get 110MB/s.
>
>
>
> The path to both test network locations goes through the same IP transit
> provider.
>
>
>
> We are running an NI-MLX-MR with 2 GB RAM.  An NI-MLX-10Gx4 connects to the
> Foundry FLS648 via XFP-10G-LR, and an NI-MLX-1Gx20-GC was used for directly
> connecting the server.  A separate NI-MLX-10Gx4 connects to our upstream BGP
> providers.  The customer’s Juniper EX3200 connects to the same NI-MLX-10Gx4
> as the FLS648.  We take default routes plus full tables from three providers
> via BGP, but filter out most of the routes.
>
>
>
> The fiber and optics on everything look fine.  CPU usage is under 10% on the
> MLX and all line cards, and around 1% on the FLS648.  The ARP table on the
> MLX has about 12K entries, and the BGP table holds about 308K routes.
>
>
>
> Any assistance would be appreciated.  I suspect there is a setting that
> we’re missing on the MLX that is causing this issue.
>
>
> _______________________________________________
> foundry-nsp mailing list
> foundry-nsp at puck.nether.net
> http://puck.nether.net/mailman/listinfo/foundry-nsp
>
>
>
> --
>
> Jeroen Wunnink
> IP NOC Manager - Hibernia Networks
> Main numbers (Ext: 1011): USA +1.908.516.4200 | UK +44.1704.322.300
> Netherlands +31.208.200.622 | 24/7 IP NOC Phone: +31.20.82.00.623
> jeroen.wunnink at hibernianetworks.com
> www.hibernianetworks.com
>
>

