<html><body>
<p>Hi Ben,<br>
<br>
Thanks for your suggestion.<br>
<br>
I have performed the iPerf UDP tests below. Do you think these results are normal for a 100Mbps link?<br>
<br>
Host A (10.16.xx.58) <-----> EX 4200 Switch <------> J6350 (MTU 9018) <----- Fiber circuit 100Mbps (from West to East coast ~80ms) ------> (MTU 9018) J6350 <------ EX 4200 Switch ------> Host B (10.26.xx.60)<br>
<br>
<br>
[root@xxxxxxx bin]# ./iperf -c 10.26.xx.60 -t 60 -u -b 100M<br>
------------------------------------------------------------<br>
Client connecting to 10.26.xx.60, UDP port 5001<br>
Sending 1470 byte datagrams<br>
UDP buffer size: 126 KByte (default)<br>
------------------------------------------------------------<br>
[ 3] local 10.16.xx.58 port 48543 connected with 10.26.xx.60 port 5001<br>
[ ID] Interval Transfer Bandwidth<br>
[ 3] 0.0-60.0 sec 719 MBytes 101 Mbits/sec<br>
[ 3] Sent 512816 datagrams<br>
[ 3] Server Report:<br>
[ 3] 0.0-60.0 sec 656 MBytes 91.7 Mbits/sec 0.206 ms 45108/512815 (8.8%)<br>
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order<br>
<br>
[root@xxxxxxx bin]# ./iperf -c 10.26.xx.60 -t 60 -u -b 70M<br>
------------------------------------------------------------<br>
Client connecting to 10.26.xx.60, UDP port 5001<br>
Sending 1470 byte datagrams<br>
UDP buffer size: 126 KByte (default)<br>
------------------------------------------------------------<br>
[ 3] local 10.16.xx.58 port 25968 connected with 10.26.xx.60 port 5001<br>
[ ID] Interval Transfer Bandwidth<br>
[ 3] 0.0-60.0 sec 501 MBytes 70.0 Mbits/sec<br>
[ 3] Sent 357143 datagrams<br>
[ 3] Server Report:<br>
[ 3] 0.0-60.0 sec 501 MBytes 70.0 Mbits/sec 0.276 ms 0/357142 (0%)<br>
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order<br>
<br>
[root@xxxxxxx bin]# ./iperf -c 10.26.xx.60 -t 60 -u -b 80M<br>
------------------------------------------------------------<br>
Client connecting to 10.26.xx.60, UDP port 5001<br>
Sending 1470 byte datagrams<br>
UDP buffer size: 126 KByte (default)<br>
------------------------------------------------------------<br>
[ 3] local 10.16.xx.58 port 31085 connected with 10.26.xx.60 port 5001<br>
[ ID] Interval Transfer Bandwidth<br>
[ 3] 0.0-60.0 sec 572 MBytes 80.0 Mbits/sec<br>
[ 3] Sent 408164 datagrams<br>
[ 3] Server Report:<br>
[ 3] 0.0-60.0 sec 568 MBytes 79.4 Mbits/sec 0.221 ms 2961/408163 (0.73%)<br>
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order<br>
<br>
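For what it is worth, a rough back-of-the-envelope suggests that -b 100M slightly overruns the circuit once the UDP/IP/Ethernet framing overhead is added -- a sketch, assuming the 100Mbps is the wire rate and there is no extra provider encapsulation:<br>
<br>
<pre>
# Rough estimate of the on-the-wire load for iperf's "-b 100M" UDP test.
# Assumes plain Ethernet framing and a circuit clocked at 100 Mbps on the
# wire; any extra provider encapsulation (VLAN tags, MPLS, etc.) is unknown
# here and would only make the overrun worse.

payload  = 1470                       # iperf UDP datagram size (bytes)
overhead = 8 + 20 + 14 + 4 + 8 + 12   # UDP + IP + Ethernet header + FCS + preamble + IFG
wire     = payload + overhead         # bytes occupied on the wire per datagram

offered_wire_mbps = 100.0 * wire / payload   # what "-b 100M" of payload costs on the wire
max_payload_mbps  = 100.0 * payload / wire   # best-case UDP payload through a 100 Mbps pipe

print("on-the-wire rate for -b 100M : %.1f Mbps" % offered_wire_mbps)   # ~104.5
print("max UDP payload through pipe : %.1f Mbps" % max_payload_mbps)    # ~95.7
</pre>
If that assumption holds, the delivered rate would top out around 95-96 Mbits/sec even with zero queueing, so some loss at -b 100M is expected; the remainder of the 8.8% could be policing or shaping on the circuit.<br>
<br>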
Thanks<br>
- Harris<br>
<br>
<img width="16" height="16" src="cid:1__=C7BBFD21DFA2A8058f9e8a93df93@hk1.ibm.com" border="0" alt="Inactive hide details for Ben Dale ---04/10/2010 07:56:24 AM---Hi Harris, However, increasing the MTU size on both the J6350s m"><font color="#424282">Ben Dale ---04/10/2010 07:56:24 AM---Hi Harris, However, increasing the MTU size on both the J6350s may not be able to get a better TCP throughput, because the Host</font><br>
<br>
<table width="100%" border="0" cellspacing="0" cellpadding="0">
<tr valign="top"><td width="1%"><font size="2" color="#5F5F5F">From:</font></td><td width="100%"><font size="2">Ben Dale &lt;bdale@comlinx.com.au&gt;</font></td></tr>
<tr valign="top"><td width="1%"><font size="2" color="#5F5F5F">To:</font></td><td width="100%"><font size="2">Harris Hui/Hong Kong/IBM@IBMHK</font></td></tr>
<tr valign="top"><td width="1%"><font size="2" color="#5F5F5F">Cc:</font></td><td width="100%"><font size="2">juniper-nsp@puck.nether.net</font></td></tr>
<tr valign="top"><td width="1%"><font size="2" color="#5F5F5F">Date:</font></td><td width="100%"><font size="2">04/10/2010 07:56 AM</font></td></tr>
<tr valign="top"><td width="1%"><font size="2" color="#5F5F5F">Subject:</font></td><td width="100%"><font size="2">Re: [j-nsp] J6350 Jumbo frame MTU and OSPF setting</font></td></tr>
</table>
<hr width="100%" size="2" align="left" noshade style="color:#8091A5; "><br>
<br>
<br>
<font size="4">Hi Harris,</font>
<ul>
<ul><font size="4">However, increasing the MTU size on both the J6350s may not be able to get a better TCP throughput, because the Host NICs and Switchport are also using MTU 1500 right? Should I change the MTU size on Host NICs and Juniper EX switches to MTU 9018 in order to prevent the frame fragmentation happened below 9018?</font></ul>
</ul>
<font size="4">There should be no more drops if your end devices are 1500 MTU and the "core" network is 9018. As for your throughput, that is a little harder to calculate, but the figures you are quoting seem quite low even with 80 ms latency. </font><br>
<br>
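<font size="4">If it is the TCP figure that is low, the usual suspect at 80 ms is the TCP window rather than the routers. A rough sketch of the maths, assuming a classic 64 KByte window with no window scaling (your hosts' actual settings may well differ):</font><br>
<br>
<pre>
# TCP throughput is capped at roughly window / RTT, regardless of link speed.
# The values below are assumptions - substitute your hosts' real window size
# and the measured RTT.

rtt    = 0.080          # seconds, from the ~80 ms quoted for the coast-to-coast circuit
window = 64 * 1024      # bytes - classic default without TCP window scaling

cap_mbps      = window * 8 / rtt / 1e6      # throughput ceiling with that window
needed_window = 100e6 / 8 * rtt             # window needed to fill a 100 Mbps pipe

print("throughput cap with 64 KB window : %.1f Mbps" % cap_mbps)              # ~6.6
print("window needed to fill 100 Mbps   : %.0f KB" % (needed_window / 1024))  # ~977
</pre>
<font size="4">So with a small window, even a perfectly clean 100Mbps circuit will only carry a few Mbits/sec of TCP at this latency, no matter what the MTU is.</font><br>
<br>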
<font size="4">Latency aside, you should be able to easily saturate a 100Mbps pipe with 1500 byte frames on a J6350 without issue (in terms of PPS). I don't believe adjusting the MTU size is going to make that much difference, but it is worth trying. I would be inclined to kick off iperf with a UDP test with 1500 byte frames to see what throughput you can get out of the pipe first, then start investigating TCP/MSS issues.</font><br>
<br>
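<font size="4">As a starting point, something along these lines would do -- a minimal sketch that steps the offered UDP rate up until loss appears (it assumes iperf v2 on both ends, an "iperf -s -u" server already running on the far side, and uses the far-end address from your diagram purely as a placeholder):</font><br>
<br>
<pre>
#!/usr/bin/env python
# Minimal sketch: sweep iperf UDP rates towards the far-end host to find the
# point where loss starts. Assumes iperf (v2) is on the PATH on this host and
# that "iperf -s -u" is already running on the target.

import subprocess

TARGET = "10.26.xx.60"                  # far-end host (placeholder)
RATES  = ["70M", "80M", "90M", "100M"]  # offered UDP rates to try

for rate in RATES:
    print("### offered rate: " + rate)
    # Each run prints the server report (loss %, jitter) at the end.
    subprocess.call(["iperf", "-c", TARGET, "-u", "-b", rate, "-t", "30", "-l", "1470"])
</pre>
<br>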
<font size="4">Cheers,<br>
<br>
Ben</font><br>
<br>
<br>
</body></html>