[j-nsp] Miercom Competitive Performance Testing Results: Cisco ASR9000 vs Juniper MX960

Kevin Oberman oberman at es.net
Mon Sep 28 11:41:59 EDT 2009


> From: Mark Tinka <mtinka at globaltransit.net>
> Date: Mon, 28 Sep 2009 13:47:30 +0800
> Sender: juniper-nsp-bounces at puck.nether.net
> 
> On Monday 28 September 2009 12:25:43 am Derick Winkworth 
> wrote:
> 
> > 2) We put no stock in vendor testing from anyone,
> > including Juniper.  When you start poking and prodding
> > for details, you start hearing.. "Well this is the
> > thing..." and "About that, yeah, basically that isn't
> > exactly..." and then you realize in every case that these
> > tests are total bullsh*t.  Indeed, they rig the tests to
> > make their products appear more favorable.
> 
> Agree - same here.
> 


Just for a "real world" example of how this can work: a couple of
years ago, we were evaluating routers for our next round of upgrades.
We received a note from a sales rep that one of the candidates to win
the procurement had a major issue with dropping packets at far below
line rate. This was completely contrary to our experience, so we asked
for details and were provided with one of these "independent" studies.

It had lots of detail on the testing and lots of pretty graphs, and,
sure enough, it showed the other guy's product dropping packets at a
high rate well before the line was saturated.

We went to the lab and tried to reproduce the results. Lo and behold, we
saw the same issue, except...

The sales guy's unit was configured for tail drop out of the box,
while the other one defaulted to RED with some fairly conservative
defaults on when to start discarding. Once we configured the second
unit for tail drop as well, it ran at full line rate, exactly as
advertised.
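
For anyone curious what that kind of default looks like, here is a
rough Junos-style sketch (purely illustrative -- the profile name and
thresholds are mine, not the actual config from either box in the
test) of a conservative RED drop profile applied to a scheduler:

    class-of-service {
        drop-profiles {
            /* hypothetical profile: begin discarding at 25% queue
               fill, ramping up well before the queue is full */
            conservative-red {
                fill-level 25 drop-probability 1;
                fill-level 50 drop-probability 10;
                fill-level 75 drop-probability 50;
            }
        }
        schedulers {
            be-sched {
                drop-profile-map loss-priority any protocol any
                    drop-profile conservative-red;
            }
        }
    }

A box shipping with something like that starts discarding long before
the interface is saturated, which on a blind throughput graph looks
exactly like "drops far below line rate". Remove the drop-profile-map
(or point it at a profile that only drops at 100% fill) and you are
back to plain tail drop.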

To their credit, when I pointed this out to the sales guy, he pushed it
up the chain and, within a couple of days, the study in question was
removed from their web site.

I am not naming either party in this, and I really do believe the
sales guy was being honest; it is even possible that everyone at the
vendor in question thought the test was valid. But the vendor paid for
the performance tests, and the testing company was either lazy or
deliberately tilting the field to make its customer happy, and it
produced a "study" that was clearly bogus. This happened at least
three years ago, so none of this is new.

In all cases, when a paper like this is paid for by either side, it
is probably not worth the time to read. If something looks too
one-sided, you are best off testing for yourself.
-- 
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman at es.net			Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4  EADA 927D EBB3 987B 3751

