[j-nsp] RFC2544 on Juniper MX960 10G ports
Chris Evans
chrisccnpspam2 at gmail.com
Sun Mar 14 09:48:58 EDT 2010
So we obtained an MX480 to eval in our lab with the 2x10GigE + 20xGigE
DPC-R card. I ran a simple 64-byte line-rate test and got somewhat similar
results: I could only push about 94.75% of line rate (using an Ixia
appliance) with full-duplex flows at a 64-byte frame size. In an L2
configuration it was even worse; I could only get about 53% of line rate
before drops started.
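
For reference, here's the back-of-the-envelope math on what those
percentages work out to in packets per second (just a rough sketch using
standard Ethernet framing overhead; the percentages are simply what we
measured):

# 10GbE line rate for 64-byte frames: each frame also occupies 8 bytes of
# preamble and 12 bytes of inter-frame gap on the wire.
FRAME = 64
WIRE_OVERHEAD = 20              # preamble + IFG, bytes per frame
LINK_BPS = 10e9

line_rate_pps = LINK_BPS / ((FRAME + WIRE_OVERHEAD) * 8)
print("theoretical 64-byte line rate: %.2f Mpps" % (line_rate_pps / 1e6))  # ~14.88
print("L3 test at 94.75%%: %.2f Mpps" % (0.9475 * line_rate_pps / 1e6))    # ~14.10
print("L2 test at 53%%:    %.2f Mpps" % (0.53 * line_rate_pps / 1e6))      # ~7.89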
I have a JTAC case open on it, and they state that this is a known PR that
should have been fixed in 9.6R3, which I am testing now. The PR# is 469135.
I ran the same test on an EX4200 and had no packet loss. Now it's just a
matter of getting JTAC to believe the issue is there.
On Thu, Feb 18, 2010 at 9:55 PM, Judah Scott <judah.scott.iam at gmail.com> wrote:
> Yes, what you see is correct behavior for those MX DPCs. I doubt it's a
> cell-size issue, or you would see a saw-tooth pattern. Instead, what you
> can infer is that each of the 4 PFEs is limited in the packets per second
> it can process, depending on the transport type involved; i.e. VPLS is
> really bad at low packet sizes, but pure L3 is good.
>
> -J Scott
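
That matches what we're seeing. If you assume some per-PFE forwarding
budget (the 10 Mpps figure below is purely hypothetical, not a published
Juniper number), you can back out the frame size at which 10G line rate
becomes sustainable:

# Hypothetical per-PFE forwarding ceiling, used only to illustrate the
# pps-limit argument. Substitute whatever figure JTAC gives you.
PFE_PPS_BUDGET = 10.0e6
LINK_BPS = 10e9
WIRE_OVERHEAD = 20              # Ethernet preamble + IFG, bytes per frame

def line_rate_pps(frame_bytes):
    """Frames per second needed to fill 10GbE at a given frame size."""
    return LINK_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)

# Smallest frame size the assumed budget can forward at full line rate:
for size in range(64, 1519):
    if line_rate_pps(size) <= PFE_PPS_BUDGET:
        print("line rate sustainable from about %d-byte frames up" % size)
        break

With that made-up budget the crossover lands just above 100 bytes; plug in
the real per-PFE number (if JTAC will give one) to see where a given card
should stop dropping.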
>
>
> On Thu, Feb 18, 2010 at 5:08 PM, OBrien, Will <ObrienH at missouri.edu> wrote:
>
> > We have been running 10G R cards exclusively in our pair of MX960s. So
> > far we have had no issues with VPN tunnels coming in and out, and we
> > have many of them. We don't run VoIP over that particular connection
> > either. In fact, we've really seen no problems with traffic going
> > through them at all. We do run them exclusively at the edge of our
> > network, as border routers for I1 and I2 traffic.
> >
> > Typical I1 load is near a gigabit, and I2 usually carries a few.
> >
> > Will
> >
> > On Feb 18, 2010, at 6:28 PM, Serge Vautour wrote:
> >
> > > Hello,
> > >
> > > We recently used a traffic generator to run RFC2544 tests against a
> > > Juniper MX960. The 1G ports work flawlessly. 0% packet loss at all
> > > frame sizes.
> > >
> > > The 10G ports (4x10G "R" card) didn't do as well. They dropped up to
> > > 25% of packets with certain small frames (e.g. 70-byte frames). The
> > > packet loss goes away almost completely for frames larger than 100
> > > bytes. Our SE tells us this is normal and is due to how the MX chops
> > > frames up into 64-byte cells inside the PFE. The 4x10G card has 4
> > > separate PFEs (1 per 10G port), and each of them has 10G of
> > > bandwidth, so 10G of small frames essentially creates more than 10G
> > > of traffic inside the PFE. That explanation may not be 100% correct,
> > > but I think it paints the right picture.
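
To put rough numbers on that cell-chopping explanation: assuming the
64-byte cell size the SE described and ignoring any per-cell header
overhead (so this is only a sketch of the argument, not Juniper's actual
fabric accounting), the internal bandwidth needed to carry 10G of a given
frame size looks like this:

import math

CELL = 64                       # cell size described above
WIRE_OVERHEAD = 20              # Ethernet preamble + IFG, bytes per frame
LINK_BPS = 10e9

def fabric_load(frame_bytes):
    """Approximate internal bits/sec needed to carry 10G of this frame size."""
    pps = LINK_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)
    cells = math.ceil(frame_bytes / CELL)   # last cell is padded out
    return pps * cells * CELL * 8

for size in (64, 70, 100, 128, 512, 1518):
    print("%5d-byte frames -> ~%.1f Gbps inside the PFE" %
          (size, fabric_load(size) / 1e9))
# A 70-byte frame needs two cells, so 10G on the wire becomes roughly
# 14 Gbps internally, in the same ballpark as the 25% drops described above.

Interestingly, a 64-byte frame fits in a single cell, so cell inflation
alone doesn't explain the drops I saw at 64 bytes; that fits Judah's point
above about a per-PFE pps limit rather than a cell-size issue.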
> > >
> > > Now the questions. Is this a problem on production networks with
> > > real-world traffic? What about on VPN networks with a lot of small
> > > frames, like VoIP? Has anyone seen this problem rear its head in
> > > production?
> > >
> > > It seems very unlikely to me that a maxed-out 10Gbps link would
> > > carry 7.5Gbps of frames smaller than 100 bytes. I would expect
> > > larger frames to make up the majority of the bandwidth. Can anyone
> > > correlate this with real-world traffic?
> > >
> > > As usual, the help received on this distribution list is invaluable.
> > > Thanks in advance to anyone who replies.
> > >
> > > Serge
> > >
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>