[j-nsp] RFC2544 on Juniper MX960 10G ports

Chris Evans chrisccnpspam2 at gmail.com
Sun Mar 14 17:13:53 EDT 2010


Alex,

Thanks for your input. While what you say is true, there are still issues
with the MX platform. As stated in my previous postings, I can only push
94.78% of line rate at most before drops start. That figure is with full
duplex streams: if I shut down one of the streams I can push 100% line rate,
so the packet loss only appears with full duplex flows. It comes down to
either a bug or the PFE simply not being able to forward the PPS. If you take
the frame size up to 69 bytes, you can do 100% line rate full duplex.
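
To put rough numbers on that, here is a quick back-of-envelope sketch in
Python. It assumes the standard 20 bytes of per-frame overhead on the wire
(7-byte preamble, 1-byte SFD, 12-byte inter-frame gap); nothing here is
MX-specific.

# Line-rate packets per second on 10GE for a given L2 frame size.
LINE_RATE_BPS = 10_000_000_000
WIRE_OVERHEAD = 7 + 1 + 12   # preamble + SFD + inter-frame gap, in bytes

def line_rate_pps(frame_bytes):
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD) * 8
    return LINE_RATE_BPS / bits_per_frame

print(line_rate_pps(64))   # ~14.88 Mpps
print(line_rate_pps(69))   # ~14.04 Mpps, about 5.6% fewer packets/sec

If the PFE tops out somewhere around 14.1 Mpps per direction, that would be
consistent with both the ~95% ceiling at 64 bytes and the clean line rate at
69 bytes, though that is only my guess.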

Putting the MX into a Layer 2 mode (meaning setting it up as a typical
Ethernet switch) unfortunately brings another issue, with an even larger
percentage of packet loss.

I am using an Ixia XM12 appliance and IxNetwork.

On Sun, Mar 14, 2010 at 5:04 PM, Alex <alex.arseniev at gmail.com> wrote:

> Chris,
> Line rate on one 10G port could be different from line-rate on another 10G
> port because Ethernet is not bit-synchronous.
> LAN-PHY 10GBASE-S allowed transmitter clock tolerance is 10.3125Gbd +/-
> 100ppm (parts per million) and LAN-PHY 10GBASE-S allowed receiver clock
> tolerance is also 10.3125Gbd +/- 100ppm. See IEEE 802.3-2008 spec sections
> 52.5.1 and 52.5.2.
> So in reality the Rx on the ingress port could run at 10.31353125 Gbd and
> the Tx on the egress port at 10.31146875 Gbd. This 0.0020625 Gbd = 2.0625 Mbd
> difference causes the ingress buffer to grow until there is no more room,
> eventually overspilling and dropping packets. Please run your tests at 99%
> line rate; I am sure there will be no packet loss at all.
> Rgds
> Alex
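
For reference, the worst-case numbers above work out as follows. This is a
small Python sketch; the 8 MB buffer at the end is an assumed figure, there
only to show the order of magnitude, not an actual MX buffer size.

# Worst-case clock offset between two 10GBASE-R ports at +/- 100 ppm,
# per the IEEE 802.3-2008 sections 52.5.1 and 52.5.2 quoted above.
NOMINAL_BAUD = 10.3125e9              # 10GBASE-R serial line rate
PPM = 100e-6

rx_fast = NOMINAL_BAUD * (1 + PPM)    # 10.31353125 Gbd
tx_slow = NOMINAL_BAUD * (1 - PPM)    # 10.31146875 Gbd
delta_baud = rx_fast - tx_slow        # 2.0625 Mbd worst case

# With 64b/66b coding this is roughly 2 Mbit/s of excess data that has to
# queue somewhere at 100% offered load.
excess_bps = delta_baud * 64 / 66
buffer_bits = 8 * 1024 * 1024 * 8     # hypothetical 8 MB of ingress buffer
print(delta_baud, excess_bps, buffer_bits / excess_bps)   # ~33 s to overflow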
>
>
> ----- Original Message ----- From: "Chris Evans" <chrisccnpspam2 at gmail.com>
> To: "Joerg Staedele" <js at tnib.de>
> Cc: "juniper-nsp" <juniper-nsp at puck.nether.net>
> Sent: Sunday, March 14, 2010 7:28 PM
>
> Subject: Re: [j-nsp] RFC2544 on Juniper MX960 10G ports
>
>
>> Joerg,
>>
>> The hardware we have in our lab is the 20xSFP + 2x10Gig card. JTAC says
>> this 'should' work but obviously it doesn't. I tested it on an EX switch
>> and it had no issues. In a simple L2 mode the MX lost about 47% of packets
>> at 64-byte 10Gig line rates; in L3 mode it lost about 5.2%. This is when
>> testing full-duplex flows. This was with 9.6R3.8. There is a known PR
>> related to this issue.
>>
>> Hope to have some resolution sometime this week..
>>
>> On Sun, Mar 14, 2010 at 3:14 PM, Joerg Staedele <js at tnib.de> wrote:
>>
>>> Hi,
>>>
>>> So this means that this linecard is not able to do line-rate forwarding
>>> with small frame sizes? What about other cards (20xSFP + 2x10G)? I guess
>>> they use exactly the same PFE hardware, so they would have this limitation
>>> as well?
>>>
>>> I am really confused now, because every document you read says that the
>>> DPCEs are able to do line rate at any frame size?
>>>
>>> Regards,
>>>  Joerg
>>>
>>> -----Original Message-----
>>> From: juniper-nsp-bounces at puck.nether.net [mailto:
>>> juniper-nsp-bounces at puck.nether.net] On Behalf Of Jonathan Lassoff
>>> Sent: Sunday, March 14, 2010 6:55 PM
>>> To: Serge Vautour
>>> Cc: juniper-nsp
>>> Subject: Re: [j-nsp] RFC2544 on Juniper MX960 10G ports
>>>
>>> Excerpts from Serge Vautour's message of Thu Feb 18 16:28:44 -0800 2010:
>>> > Hello,
>>> >
>>> > We recently used a traffic generator to run RFC2544 tests against a
>>> > Juniper MX960. The 1G ports work flawlessly: 0% packet loss at all frame
>>> > sizes.
>>> >
>>> > The 10G ports (4x10G "R" card) didn't do as well. They dropped up to 25%
>>> > of packets with certain small frames (e.g. 70-byte frames). The packet
>>> > loss goes away almost completely for frames larger than 100 bytes. Our SE
>>> > tells us this is normal and is due to how the MX chops the frames up into
>>> > 64-byte cells inside the PFE. The 4x10G cards have 4 separate PFEs (1 per
>>> > 10G port) and each of them has 10G of bandwidth. 10G of small frames
>>> > essentially creates more than 10G of traffic inside the PFE. That
>>> > explanation may not be 100% correct but I think it paints the right
>>> > picture.
>>> >
>>> > Now the questions: Is this a problem on production networks with
>>> > real-world traffic? What about on VPN networks with a lot of small frames
>>> > like VoIP? Has anyone seen this problem rear its head in production?
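
A rough way to see the "more than 10G inside the PFE" effect Serge describes
is to count 64-byte cells per frame. The sketch below is only an illustration
of the trend; the real MX cell format and per-cell headers are not public, so
the exact expansion factors are assumptions.

import math

# Internal bandwidth expansion if each frame is segmented into fixed
# 64-byte cells and the last cell is padded out (a simplification; the
# real MX cell format and headers are not public).
CELL_BYTES = 64

def internal_expansion(frame_bytes):
    cells = math.ceil(frame_bytes / CELL_BYTES)
    return cells * CELL_BYTES / frame_bytes

for size in (64, 70, 100, 128, 1500):
    print(size, round(internal_expansion(size), 2))

# A 70-byte frame would expand to two cells, roughly 1.83x its size inside
# the fabric, while larger frames amortise the padded last cell much better.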
>>>
>>> Isn't the minimum Ethernet frame size 64 bytes? I think Ethernet II /
>>> Ethernet 802.3 requires this.
>>>
>>> Wouldn't this make the problem moot if you're just running Ethernet?
>>>
>>> Might be a problem with small ATM cells?
>>>
>>> Cheers,
>>> jof

