[j-nsp] MX960 with 3 RE's?

Colton Conor colton.conor at gmail.com
Thu Jan 14 20:52:31 EST 2016


Thanks for the replies, but I am not interested in the 16XGE card. The
configuration I am referencing would be this:
http://www.ebay.com/itm/Juniper-MX960BASE-AC-10x-DPC-R-4XGE-XFP-MS-DPC-1yrWrnty-Free-Ship-40x-10G-MX960-/351615023814?hash=item51dde37ac6:g:WUcAAOSwLVZViHc~

Should I run into any problems with that configuration? I believe
everything would run at line rate.

On Thu, Jan 14, 2016 at 7:13 PM, Christopher E. Brown <
chris.brown at acsalaska.net> wrote:

>
> We actually spent a couple of years spooling up for a greenfield build of
> a network required to provide very tight SLAs on a per-service basis.
>
> We were working with the MX BU directly for almost 2 years, constantly
> exchanging test results and loaner/testing cards.
>
> Per specs provided by the designers of the MPC2, and verified by our
> testing, the worst-case throughput per Trio for an MPC2-3D or MPC2-3D-Q is
> 31.7G. That is full duplex, with every packet going to or from the bus and
> every packet being minimum length, and it is actual packet throughput
> after all internal overhead.
>
> Larger packets bring this closer to, but not all the way to, 40G, and
> traffic hairpinning on the same Trio can take it all the way to 40G.
>
> We did not investigate any impact of MCAST replication on this, as MCAST
> replication is not part of the use case for us.
>
>
> When the same folks were asked about the 16XGE card and its 120G (and
> later 160G) performance, they indicated that an additional layer of
> logic/ASICs is used to tie all 4 Trios in the 16XGE to the bus, and that
> these ASICs offload some of the bus-related overhead handling from the
> Trios, freeing up enough capacity to allow each Trio in the 16XGE to
> provide a full 40G duplex after J-cell and other overhead.
>
>
> On 1/14/2016 3:27 PM, Saku Ytti wrote:
> > On 15 January 2016 at 01:39, Christopher E. Brown
> > <chris.brown at acsalaska.net> wrote:
> >> The 30Gbit nominal (actual 31.7 or greater) limit per Trio applies to
> >> the MPC1 and MPC2 cards, but the quad-Trio interconnect in the 16XGE is
> >> wired up differently, with additional helpers, and can do the full 40G
> >> per bank.
> >
> >
> > Pretty sure this is not true. There are three factors in play:
> >
> > a) memory bandwidth of trio (mq)
> > b) lookup performance of trio (lu)
> > c) fabric capacity
> >
> > With SCBE the 16XGE isn't limited by the fabric, but it's still limited
> > by memory bandwidth and lookup performance.
> >
> > Memory bandwidth is tricky, due to how packets are split into cells: if
> > you have unlucky packet sizes, you're wasting memory bandwidth sending
> > padding, and you can end up at maybe that 30Gbps figure, perhaps even
> > less. If you have lucky packet sizes, you can do 40Gbps. I work with a
> > half-duplex figure of 70Gbps, and that's been a safe bet for me in real
> > environments.
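
To make the cell-padding point concrete, here is a rough Python sketch of
the arithmetic. The 64-byte cell payload is my assumption for illustration,
not a figure from this thread; only the shape of the effect matters.

    # Packets are chopped into fixed-size fabric cells and the last cell
    # is padded, so "unlucky" packet sizes waste memory/fabric bandwidth.
    import math

    CELL_PAYLOAD = 64    # bytes per fabric cell (assumed for illustration)
    RAW_GBPS = 40.0      # nominal per-Trio figure discussed above

    def effective_gbps(pkt_bytes):
        cells = math.ceil(pkt_bytes / CELL_PAYLOAD)
        efficiency = pkt_bytes / (cells * CELL_PAYLOAD)  # useful bytes vs bytes moved
        return RAW_GBPS * efficiency

    for size in (64, 65, 128, 129, 1500):
        print(f"{size:5d}B packets -> ~{effective_gbps(size):4.1f} Gbps of goodput")

A 65-byte packet needs two cells, so it roughly halves the usable
bandwidth; sizes just over a cell boundary are the unlucky ones.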
> >
> > Then there is lookup performance, which is around 50-55Mpps out of a
> > maximum possible 60Mpps.
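
The 60Mpps ceiling lines up with plain Ethernet arithmetic for the 4x10GE
that a single Trio bank fronts on the 16XGE (my framing, not Saku's): a
minimum-size frame occupies 64 bytes plus 20 bytes of preamble and
inter-frame gap on the wire.

    # Minimum-size line rate per 10GE port, and per 4x10GE Trio bank.
    MIN_FRAME = 64        # bytes
    WIRE_OVERHEAD = 20    # preamble (8) + inter-frame gap (12), bytes
    LINK_BPS = 10e9
    PORTS_PER_BANK = 4

    pps_per_port = LINK_BPS / ((MIN_FRAME + WIRE_OVERHEAD) * 8)
    print(f"{pps_per_port / 1e6:.2f} Mpps per 10GE port")              # ~14.88
    print(f"{PORTS_PER_BANK * pps_per_port / 1e6:.1f} Mpps per bank")  # ~59.5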
> >
> >
> >
> > Why DPCE=>MPC is particularly bad is that even though the MQ could
> > accept 20Gbps+20Gbps on the fabric planes the DPCE is sending on, it
> > doesn't; it only accepts 13+13, and the DPCE isn't using the last fabric
> > plane to send the remaining 13Gbps.
> > So if your traffic flows from a single DPCE card (say 4x10GE LACP) out
> > to a single Trio, you'll be limited to just 26Gbps out of 40Gbps.
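
Worked out as a small sketch. This is just my reading of the numbers above:
the DPCE spreads its 40G over three fabric planes, but the receiving MQ
only accepts traffic from two of them.

    PLANES_USED_BY_DPCE = 3
    PLANES_ACCEPTED_BY_MQ = 2
    OFFERED_GBPS = 40.0                   # e.g. 4x10GE LACP towards one Trio

    per_plane = OFFERED_GBPS / PLANES_USED_BY_DPCE
    delivered = per_plane * PLANES_ACCEPTED_BY_MQ
    # ~13.3G per plane, ~26.7G delivered: the roughly 26Gbps ceiling above
    print(f"~{per_plane:.1f}G per plane, ~{delivered:.1f}G delivered")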
> >
> > If your chassis has SCBE, or just MPC1/MPC2 cards, you can 'set chassis
> > fabric redundancy-mode redundant', which causes the MPC to use only two
> > of the fabric planes, in which case it'll accept 20+20 from the DPCE,
> > allowing you to get that full 40Gbps from the DPCE.
> > But this also means that if you also have a 16XGE and plain SCB, you can
> > pretty much use just half of the ports of the 16XGE.
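
A sketch of that trade-off, using figures from this thread. The ~40G per
slot per plane for a plain SCB is my inference from the 120G-over-three-
planes number mentioned earlier, so treat it as an assumption.

    SCB_GBPS_PER_SLOT_PER_PLANE = 40.0   # assumed: ~120G per slot / 3 planes
    PORTS_GBPS_16XGE = 160.0

    for planes in (3, 2):        # 3 = all planes, 2 = redundancy-mode redundant
        fabric = planes * SCB_GBPS_PER_SLOT_PER_PLANE
        usable = min(fabric, PORTS_GBPS_16XGE)
        print(f"{planes} planes: ~{fabric:.0f}G of fabric, ~{usable:.0f}G of the "
              f"16XGE's {PORTS_GBPS_16XGE:.0f}G of ports usable")

Meanwhile the DPCE->MPC flow gets its 20+20 = 40G, since both sides now use
the same two planes.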
> >
> > If this is confusing, just don't run a mixed box, or buy SCBE and run
> > redundant-mode.
> >
> >
> >
>
>
> --
> ------------------------------------------------------------------------
> Christopher E. Brown   <chris.brown at acsalaska.net>   desk (907) 550-8393
>                                                      cell (907) 632-8492
> IP Engineer - ACS
> ------------------------------------------------------------------------
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>


More information about the juniper-nsp mailing list