[j-nsp] MX960 with 3 RE's?

Kevin Wormington kworm at sofnet.com
Fri Jan 15 08:24:49 EST 2016


So what, if any, are the bandwidth limitations of an all-DPCE-R system with the original SCB?

Thanks,

Kevin


> On Jan 15, 2016, at 6:59 AM, Christopher E. Brown <chris.brown at acsalaska.net> wrote:
> 
> 
> The MPC2-Q is an advanced per-unit queueing card and has QX; it also
> runs against the lower/original fabric rate.
> 
> The 16XGE is a port-mode card with no QX, and it supports the first
> of the fabric speed increases.
> 
> The published capacity of the MPC2-Q is 30G per MIC, with a Juniper-supplied
> actual worst-case figure of 31.7G, climbing to about 39G with larger
> frames.
> 
> This matches my own test results, and it does not change with the SCB model.
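
A rough way to see why the usable per-MIC figure climbs with frame size is
fabric-cell padding: frames are segmented into fixed-size cells, and small
frames waste a larger share of the last cell. The Python sketch below only
illustrates the shape of that effect; the cell geometry and raw rate in it
are illustrative assumptions, not Juniper-published values, so it will not
reproduce the exact 31.7G/39G numbers above.

    # Back-of-envelope: effective goodput vs. frame size when frames are
    # segmented into fixed-size fabric cells. Cell geometry and raw rate are
    # assumed values for illustration only.
    CELL_PAYLOAD_BYTES = 48    # assumed usable bytes per cell
    CELL_TOTAL_BYTES   = 64    # assumed bytes carried on the fabric per cell
    RAW_GBPS           = 40.0  # assumed raw per-Trio fabric rate

    def goodput_gbps(frame_bytes):
        cells = -(-frame_bytes // CELL_PAYLOAD_BYTES)          # ceiling division
        efficiency = frame_bytes / (cells * CELL_TOTAL_BYTES)  # payload / fabric bytes
        return RAW_GBPS * efficiency

    for size in (64, 128, 512, 1500, 9000):
        print(f"{size:>5}B frames -> ~{goodput_gbps(size):.1f}G usable")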
> 
> 
> On 1/15/16 03:47, Adam Vitkovsky wrote:
>>> From: Saku Ytti [mailto:saku at ytti.fi]
>>> Sent: Friday, January 15, 2016 10:18 AM
>>> On 15 January 2016 at 03:13, Christopher E. Brown
>>> <chris.brown at acsalaska.net> wrote:
>>>> When the same folks were asked about the 16XGE card and the 120G (and
>>>> later 160G) performance, it was indicated that there was an additional
>>>> layer of logic/ASICs used to tie all 4 Trios in the 16XGE to the bus,
>>>> and that these ASICs offloaded some of the bus-related overhead
>>>> handling from the Trios, freeing up enough capacity to allow each Trio
>>>> in the 16XGE to provide a full 40G duplex after jcell/etc. overhead.
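
As a quick sanity check of the per-slot figures quoted above: four Trios at a
full 40G duplex each accounts for the 160G number, and four Trios at roughly
the MPC2-Q's ~30G per-Trio rate lines up with the earlier 120G. Tying 120G to
the original fabric rate is my reading of the thread, not a quoted spec.

    # Per-slot arithmetic for the 16XGE figures cited above. The tie of 120G
    # to the original fabric rate is an inference, not a quoted spec.
    TRIOS_PER_16XGE = 4
    PER_TRIO_GBPS   = 40.0   # full duplex per Trio after overhead
    print("upgraded fabric:", TRIOS_PER_16XGE * PER_TRIO_GBPS, "G per slot")  # 160G
    print("original fabric:", TRIOS_PER_16XGE * 30.0, "G per slot")           # 120G (assumed ~30G/Trio)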
>>> 
>>> 
>>> Sorry Christopher for being suspicious, but I think you must have made some
>>> mistake in your testing.
>>> 
>>> The only difference that I can think of, on top of the multicast replication, is
>>> that the 16XGE does not have TCAM. But that does not matter, as the TCAM
>>> isn't used for anything in MPC1/MPC2; it's just sitting there.
>>> MPC1/MPC2 can be bought without QX. You specifically mention '3D-Q'.
>>> If you were testing with QX enabled, then it's a wholly different thing.
>>> QX was never dimensioned to push all traffic on every port via QX; it's very,
>>> very much underdimensioned for this. If MQ can do maybe ~70Gbps of
>>> memory BW, QX can't do anywhere near 40Gbps. So if you enable QX on
>>> ingress+egress, you're going to have very, very limited performance.
>>> 
>> Saku is right; if you do the math, 40.960Gbps is the theoretical maximum for QX as a whole.
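
To put that ceiling against the line-card capacity: if QX as a whole tops out
around 40.96Gbps, then queueing all traffic through QX on both ingress and
egress uses two memory passes per bit, and the queued-traffic budget falls
well below a fully loaded MPC2-Q. A rough sketch follows; the passes-per-bit
model and the comparison baseline are my assumptions, for illustration only.

    # Rough budget check: aggregate QX memory bandwidth vs. a fully loaded MPC2-Q.
    # Figures and the passes-per-bit model are assumptions for illustration only.
    QX_MAX_GBPS   = 40.96   # theoretical QX ceiling cited above
    MPC2_MIC_GBPS = 30.0    # published per-MIC capacity; two MICs per MPC2

    def qx_budget_gbps(passes_per_bit):
        # If every bit crosses QX memory 'passes_per_bit' times, this is the
        # aggregate traffic rate QX can actually queue.
        return QX_MAX_GBPS / passes_per_bit

    offered = 2 * MPC2_MIC_GBPS           # both MICs fully loaded
    for passes in (1, 2):                 # egress-only vs. ingress+egress (assumed)
        print(f"{passes} pass(es): QX budget ~{qx_budget_gbps(passes):.1f}G "
              f"vs. offered {offered:.0f}G")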
>> 
>> adam
>> 
>> 
>> 
>>        Adam Vitkovsky
>>        IP Engineer
>> 
>> T:      0333 006 5936
>> E:      Adam.Vitkovsky at gamma.co.uk
>> W:      www.gamma.co.uk
>> 
> 
> 
> -- 
> ------------------------------------------------------------------------
> Christopher E. Brown   <chris.brown at acsalaska.net>   desk (907) 550-8393
>                                                     cell (907) 632-8492
> IP Engineer - ACS
> ------------------------------------------------------------------------

