[c-nsp] Prove it's not the network!

David Freedman david.freedman at uk.clara.net
Mon May 19 08:13:54 EDT 2008


With regard to TCP MSS, I've had to do this multiple times before; the 
problem is almost always down to Microsoft Windows (TM):

- Having a ridiculous MSS (larger than the MTU)

- Not doing PMTUD by default

- Not honouring a registry-set MSS despite repeated reboots

I've had to go out of my way a few times now to demonstrate this to 
customers who are baffled by the fact that a product they pay good money 
for could possibly perform in such a suboptimal way :)
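The first point above is easy to show with some arithmetic: the MSS can never sensibly exceed the MTU minus the IP and TCP headers. A minimal Python sketch of that sanity check (assuming IPv4 with no IP or TCP options, so 20-byte IP + 20-byte TCP headers; not from any particular tool):

```python
# Sanity-check an advertised TCP MSS against the link MTU.
# Assumes IPv4 with no IP or TCP options: 20-byte IP header plus
# 20-byte TCP header, so the largest sane MSS is MTU - 40.

IP_HEADER = 20
TCP_HEADER = 20

def expected_mss(mtu: int) -> int:
    """Largest sane MSS for a given MTU (IPv4, no options)."""
    return mtu - IP_HEADER - TCP_HEADER

def mss_is_sane(mss: int, mtu: int) -> bool:
    """An MSS larger than MTU - 40 cannot fit in a single packet."""
    return 0 < mss <= expected_mss(mtu)

if __name__ == "__main__":
    print(expected_mss(1500))        # 1460 on standard Ethernet
    print(mss_is_sane(1460, 1500))   # True
    print(mss_is_sane(1600, 1500))   # False: MSS larger than the MTU
```

Anything above MTU - 40 guarantees fragmentation or, with DF set and broken PMTUD, silent blackholing.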

A good way of demonstrating network capability, I find, is using iperf to 
send a stream of UDP packets across the network with the IP packet size 
all the way up to the MTU (UDP + IP headers = 28 bytes, so a UDP payload 
size of MTU - 28 should suffice) and the DF bit set (provided nothing 
along the path interferes with DF, as some DSL implementations like to 
these days).
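The MTU - 28 arithmetic can be sketched like this (a Python illustration, not an iperf invocation; the DF socket option is Linux-only, and the host/port are placeholders):

```python
# Send a single UDP datagram whose IP packet size equals the MTU,
# with the DF bit set: UDP payload = MTU - 28 (20-byte IP header
# plus 8-byte UDP header). If this is dropped mid-path while smaller
# payloads get through, the path MTU is lower than advertised.
import socket

UDP_IP_OVERHEAD = 28  # 20-byte IP header + 8-byte UDP header

def max_udp_payload(mtu: int) -> int:
    """Largest UDP payload that fits in one unfragmented packet."""
    return mtu - UDP_IP_OVERHEAD

def send_df_probe(host: str, port: int, mtu: int = 1500) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # IP_MTU_DISCOVER / IP_PMTUDISC_DO sets DF on outgoing packets (Linux).
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                 socket.IP_PMTUDISC_DO)
    payload = b"\x00" * max_udp_payload(mtu)
    s.sendto(payload, (host, port))  # OSError if the local stack rejects it
    s.close()
```

On standard 1500-byte Ethernet that gives a 1472-byte payload; iperf achieves the same thing with its UDP mode and a matching datagram length.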

Saturating links is also generally a good idea, to demonstrate that 
traffic can rise to such a level.


Dave.



Chris Riling wrote:
> Last time I had to solve a similar problem, it ended up being related to one
> application not honoring the TCP window size in the OS. Turns out the
> application would only use X K regardless of what you set the window to in
> the OS. It took many webex school bus sessions demonstrating the differences
> in iperf before they understood. Essentially it was proving that the
> network itself was capable of pushing the data, and that the problem must
> lie at an upper layer... Still had to go way above and beyond normal duties;
> I'm not even remotely a systems admin... :)
> 
> Chris
> 
> 
> On 5/14/08, Joe Loiacono <jloiacon at csc.com> wrote:
>>
>> NetQoS SA is an appliance. It can be placed anywhere but typically
>> connects to a data center switch and aggregate ports are SPAN'd to it.
>> Among other graphs which are also valuable, the key ones for exonerating
>> the network fall into the Server Response Time group. Here you will get
>> four individual graphs and one composite of the four. The transaction is
>> broken down into four components:
>>
>> Network RTT
>> Retransmission time
>> Data Transfer time
>> Server Response time
>>
>> In a particular problem we were looking at, the Data Transfer and Server
>> Response times radically dominated the composite graph. From this
>> information, the problem was isolated to the internal client-server
>> interaction of a web-portal load balancing application. The network was
>> exonerated :-)
>>
>> Might be a similar situation for the Outlook configuration as an earlier
>> post mentioned.
>>
>> Joe
>>
>> "Aaron R" <aaronis at people.net.au> wrote on 05/14/2008 07:04:34 AM:
>>
>> > I have heard of NetQoS. Is this an appliance or a piece of software?
>> Where
>> > does it run? The site does not give much away.
>> >
>> > Cheers,
>> >
>> > Aaron.
>> >
>> > -----Original Message-----
>> > From: cisco-nsp-bounces at puck.nether.net
>> > [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Joe Loiacono
>> > Sent: Tuesday, May 13, 2008 11:56 PM
>> > To: Rick Martin
>> > Cc: cisco-nsp-bounces at puck.nether.net; cisco-nsp at puck.nether.net
>> > Subject: Re: [c-nsp] Prove it's not the network!
>> >
>> > Two things might help.
>> >
>> > 1) Active performance monitoring
>> >
>> > Set up iperf on both ends of your link. Periodically (e.g., for 30
>> seconds
>> > every hour) burst as high as you can (large windows, etc.). Graph this
>> > continually. That will show the actual capacity achievable. You can
>> even
>> > set up multiple client-server iperf pairs and use comparisons between
>> them
>> > to isolate problems to different network segments. See, for example:
>> > http://ensight.eos.nasa.gov (this is custom, so you'd have to develop your
>>
>> > own :-)
>> >
>> > 2) Application performance monitoring
>> >
>> > NetQoS has a sharp tool called SuperAgent (SA). SA installs in your data
>>
>> > center and can track performance from all clients to any specified
>> > application (e.g., Outlook). What is neat about it is you don't have to
>> > instrument the clients to be able to understand their performance - it
>> is
>> > all determined by examining the TCP traffic flow traversing the single
>> point
>> > where SA is installed. The reports break the performance down into
>> several
>> > segments, one of which is the network. This can eliminate the network as
>> a
>> > source of performance problems (if that is the case.)
>> >
>> > I don't work for NetQoS, and there are other similar products.
>> >
>> > Joe
>> >
>> >
>> >
>> >
>> >
>> >
>> > "Rick Martin" <rick.martin at arkansas.gov>
>> > Sent by: cisco-nsp-bounces at puck.nether.net
>> > 05/13/2008 11:15 AM
>> >
>> > To
>> > <cisco-nsp at puck.nether.net>
>> > cc
>> >
>> > Subject
>> > [c-nsp] Prove it's not the network!
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >  I know this is not really a Cisco specific question but it is
>> > definitely in support of Cisco hardware.
>> >
>> >  How do most of you folks prove that "the problem" is not the network?
>> > We utilize CA Spectrum and eHealth for availability and statistical
>> > analysis but in some instances that does not cut it. We don't typically
>> > have much trouble proving that a T1 is serving up 1.5 meg of bandwidth.
>> > Customers complain that their access is slow, we show that they are
>> > using all available bandwidth and eventually sell them more bandwidth
>> > and the problem is resolved.
>> >
>> >  The more difficult effort is when there is plenty of available
>> > bandwidth and a particular application is slow (Outlook in the case I am
>> > involved in now). This is a very high level political official and we
>> > must come to a resolution. All tools we have available to us today
>> > indicate that there is not a problem with the network. Typical
>> > utilization on the T1 is about 500 to 600K peak during the day. Certain
>> > management continues to point the finger at the network. We have used
>> > Internet-based speed tests that at times show less than 1.5 Meg download
>> > speeds. I explain the variables in the Internet, the particular tool in
>> > use, and local contention for the bandwidth, to no avail; once they see
>> > less than 1.5 Meg the finger points to the network.
>> > I still must somehow "prove" that the network is not the issue.
>> >
>> >  I am interested in an Internet speed test like tool to install at the
>> > core of our network that would provide a sustained upload or download
>> > test that would run for longer periods of time than a regular speed
>> > test. I would like to fill the pipe while graphing in Ehealth or as part
>> > of the selected tool to prove that the contracted bandwidth is available
>> > in both directions.
>> >
>> >  Any recommendations for products would be appreciated. We are currently
>> > looking at SolarWinds WAN Killer and a traffic generator from Omnicore
>> > LanTraffic V2. I am also open to different "types" of solutions to point
>> > to where the problem is actually located.
>> >
>> > Thanks in advance for any suggestions
>> >
>> > Rick Martin
>> > Network Engineer
>> > State of Arkansas, Department of Information Systems
>> > _______________________________________________
>> > cisco-nsp mailing list  cisco-nsp at puck.nether.net
>> > https://puck.nether.net/mailman/listinfo/cisco-nsp
>> > archive at http://puck.nether.net/pipermail/cisco-nsp/
>> >
> 
