[j-nsp] MX960 with 3 RE's?

Christopher E. Brown chris.brown at acsalaska.net
Thu Jan 14 18:03:57 EST 2016


On 1/14/2016 1:48 PM, Jeff wrote:
> Am 14.01.2016 um 23:19 schrieb Christopher E. Brown:
>>
>>
>> Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
>> with, but nobody in their right mind mixes DPCs and MPCs.
>>
> 
> Why is that? The mentioned 16x 10G card actually sounds interesting but is still quite
> expensive compared to the older DPCEs with 4x 10GE so we had them in mind for our growth
> and are planning to use them together with the existing DPCEs. Why wouldn't you do it?
> 
> 
> Thanks,
> Jeff
> 

DPCs and MPCs use different packet-forwarding engines with different feature sets.  Running
even one DPC prevents the chassis from running in enhanced mode, which limits the features
available on the MPCs.  Historically, mixing DPCs and MPCs was also asking to get bitten by
the "bug of the week".
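As a concrete illustration, enhanced mode is enabled via the chassis network-services knob.
This is a minimal sketch, not a full config; exact behavior varies by release, but the
general point is that with enhanced-ip set, installed DPCs will not come online, and
conversely leaving the default mode to accommodate DPCs withholds the enhanced features
from the MPCs:

```
# Sketch only -- verify against the Junos docs for your release.
# With this set, DPC line cards are not supported and stay offline:
set chassis network-services enhanced-ip
# Default (DPC-compatible) mode, which limits MPC features:
# set chassis network-services ip
```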


I worked with and tested all MX hardware, but downchecked the MX for CarrierE use until the
Trio-based MPC2-Q cards were available, due to limited traffic control/queueing features.


I deployed with and still use the 16XGE cards.  The initial build (early 2012) was with
RE2000/SCB, and we used 3 of the 4 10G ports in each bank.  The NxXGE dist boxes were later
re-fitted with SCBE cards, opening up all 16 XGE ports per card.
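The 3-of-4-ports-per-bank figure follows from simple fabric arithmetic.  A back-of-the-
envelope sketch (the per-slot bandwidth numbers here are illustrative assumptions, not
official Juniper specs; the point is the ratio):

```python
# Oversubscription check for the 16x10GE card against per-slot fabric capacity.
# Assumed figures: ~120 Gbps usable per slot with the standard SCB,
# ~160 Gbps with the SCBE (illustrative, not from a datasheet).

PORT_SPEED_GBPS = 10
PORTS_PER_CARD = 16
BANKS_PER_CARD = 4

def usable_ports(slot_capacity_gbps: int) -> int:
    """How many 10G ports the fabric can carry at line rate."""
    return min(PORTS_PER_CARD, slot_capacity_gbps // PORT_SPEED_GBPS)

# Standard SCB: 12 of 16 ports, i.e. 3 per bank of 4.
print(usable_ports(120), usable_ports(120) // BANKS_PER_CARD)
# SCBE: all 16 ports active.
print(usable_ports(160))
```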



If you are comfortable with DPCs for, say, normal Inet backbone use (not CarrierE service
termination and similar, where the MPC2/MPC3 features can be critical) and just want
capacity, I guess you would be fine, barring any remaining "mix bugs".  The 16XGE is just
a basic port-mode card without a whole bunch of extra features.

If on the other hand you are doing edge feeding with hundreds to thousands of units per
card, lots of filters, etc...  Mixing MPC and DPC is likely still a very bad idea.

-- 
------------------------------------------------------------------------
Christopher E. Brown   <chris.brown at acsalaska.net>   desk (907) 550-8393
                                                     cell (907) 632-8492
IP Engineer - ACS
------------------------------------------------------------------------

