Hey,

this sounds like a good tip. We are seeing an issue very similar to the one reported in this thread. Speed to local servers is fine, but to remote servers the throughput drops the higher the latency to them gets (with no overloaded links or anything like that).

In our case the core devices are also MLX(e) boxes, but the servers do not terminate directly on the MLX; the path to them includes a Cisco with a Sup720 for routing and HP switches on the Layer 2 path to the servers.

Jeroen, could you share a bit more insight into the issue you had with dynamic MSS adjustments? Have you been able to find a way to change the behaviour of the switches? Have you seen such an issue with the other Brocade equipment you have at Hibernia as well?

Thanks,
Chris
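
P.S. For anyone comparing numbers: "fine locally, worse the further away the destination" with no congestion is exactly the shape you get when a single flow's TCP window cannot grow (e.g. because window-scaling/MSS options get mangled in the path). A rough sketch of the arithmetic, with made-up window and RTT values purely for illustration:

# Rough TCP throughput ceiling for one flow: window / RTT.
# The window size and RTTs below are illustrative assumptions, not
# measurements from this thread.

def max_throughput_mbytes_per_s(window_bytes: int, rtt_ms: float) -> float:
    """One window per round trip is the best a single TCP flow can do."""
    return window_bytes / (rtt_ms / 1000.0) / 1e6

window = 64 * 1024  # 64 KB: roughly what you get if window scaling is lost
for rtt in (1, 10, 40, 100):  # ms: LAN, regional, cross-country, intercontinental
    print(f"RTT {rtt:>3} ms -> ceiling {max_throughput_mbytes_per_s(window, rtt):6.1f} MB/s")

With a window stuck at 64 KB that works out to roughly 65 MB/s at 1 ms but under 1 MB/s at 100 ms, which matches the latency-dependent slowdown described here.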
On Friday, 13 February 2015, Jeroen Wunnink | Hibernia Networks wrote:
The FLS switches do something weird with packets. I've noticed they somehow interfere with changing the MSS/window size dynamically, resulting in destinations further away having very poor speed results compared to destinations close by.

We got rid of those a while ago.
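
One cheap way to see whether a device in the path is rewriting the MSS is to compare what the kernel ends up negotiating over the suspect path versus a known-clean one. A minimal sketch (Linux; the hostnames are placeholders, not hosts from this thread):

# Compare the effective MSS on connections over two different paths.
import socket

def effective_mss(host: str, port: int = 80) -> int:
    s = socket.create_connection((host, port), timeout=5)
    try:
        # TCP_MAXSEG reports the segment size the kernel settled on after
        # SYN/SYN-ACK option negotiation (Linux).
        return s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
    finally:
        s.close()

print("MSS via suspect path:", effective_mss("remote.example.net"))
print("MSS via direct path: ", effective_mss("nearby.example.net"))

If the value over the suspect path comes back clamped or otherwise different, whatever sits in that path is a reasonable suspect.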
On 12/02/15 17:37, nethub@gmail.com wrote:
<p class="MsoNormal">We are having a strange issue on our MLX
running code 5.6.00c. We are encountering some throughput
issues that seem to be randomly impacting specific networks.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">We use the MLX to handle both external BGP
and internal VLAN routing. Each FLS648 is used for Layer 2
VLANs only.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">From a server connected by 1 Gbps uplink to
a Foundry FLS648 switch, which is then connected to the MLX on
a 10 Gbps port, running a speed test to an external network is
getting 20MB/s.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Connecting the same server directly to the
MLX is getting 70MB/s.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Connecting the same server to one of my
customer's Juniper EX3200 (which BGP peers with the MLX) also
gets 70MB/s.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Testing to another external network, all
three scenarios get 110MB/s.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">The path to both test network locations
goes through the same IP transit provider.<u></u><u></u></p>
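
One way to separate "the path is limited" from "the window negotiation is broken" is to run the same transfer twice, once with OS defaults and once with the receive buffer deliberately pinned small; if the pinned run matches the ~20 MB/s seen through the FLS648, a capped window is the likely culprit. A rough sketch, with a placeholder host and test file rather than anything from this thread:

# Crude single-flow throughput probe with an optionally pinned receive buffer.
import socket, time

def timed_download(host, path, rcvbuf=None):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if rcvbuf is not None:
        # Must be set before connect() so the advertised window is capped.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)
    s.connect((host, 80))
    s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    total, start = 0, time.time()
    while chunk := s.recv(65536):
        total += len(chunk)
    s.close()
    return total / (time.time() - start) / 1e6  # MB/s, headers included (crude)

for buf in (None, 64 * 1024):
    print(buf, "->", round(timed_download("speedtest.example.net", "/100MB.bin", buf), 1), "MB/s")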
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">We are running NI-MLX-MR with 2GB RAM,
NI-MLX-10Gx4 connect to the Foundry FLS648 by XFP-10G-LR,
NI-MLX-1Gx20-GC was used for directly connecting the server.
A separate NI-MLX-10Gx4 connects to our upstream BGP
providers. Customer’s Juniper EX3200 connects to the same
NI-MLX-10Gx4 as the FLS648. We take default routes plus full
tables from three providers by BGP, but filter out most of the
routes.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">The fiber and optics on everything look
fine. CPU usage is less than 10% on the MLX and all line
cards and CPU usage at 1% on the FLS648. ARP table on the MLX
is about 12K, and BGP table is about 308K routes.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Any assistance would be appreciated. I
suspect there is a setting that we’re missing on the MLX that
is causing this issue.<u></u><u></u></p>
_______________________________________________
foundry-nsp mailing list
foundry-nsp@puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp
<pre cols="72">--
Jeroen Wunnink
IP NOC Manager - Hibernia Networks
Main numbers (Ext: 1011): USA +1.908.516.4200 | UK +44.1704.322.300
Netherlands +31.208.200.622 | 24/7 IP NOC Phone: +31.20.82.00.623
<a href="javascript:_e(%7B%7D,'cvml','jeroen.wunnink@hibernianetworks.com');" target="_blank">jeroen.wunnink@hibernianetworks.com</a>
<a href="http://www.hibernianetworks.com" target="_blank">www.hibernianetworks.com</a>
</pre>