[j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistence of old DPCs and new Cards in same chassis -- looking for experience feedback

Emanuel Popa emanuel.popa at gmail.com
Fri Jan 21 06:06:12 EST 2011


Hi,

This is a pretty old thread, but the input might be useful.

In the last couple of months we've been trying to mix DPCE-R-4XGEXFP
with MPC-3D-16XGE-SFPP-R-B inside an MX960 chassis. The reason for
using MPCs was scalability. The conclusion we reached together with
Juniper was to populate the MX series platform with either DPCs or
MPCs exclusively when expecting wire-rate throughput on all FPCs:
40Gbps/slot for the DPCs and 120Gbps/slot for the MPCs. Even mixing
the different linecards at low traffic levels proved to be dangerous
and ruined one of our maintenance windows.

We are curious to find out what the actual performance limit on the
fixed MPC is. Our highest recorded throughput is almost 60Gbps one way
and 55Gbps the other way on a single MPC.

Regards,
Manu


On Mon, Aug 30, 2010 at 1:56 AM, Richard A Steenbergen <ras at e-gerbil.net> wrote:
> On Sun, Aug 29, 2010 at 12:00:01PM -0700, Derick Winkworth wrote:
>> so the possibility does exist that with a combination of newer fabric
>> and newer line card (a line card with better MQ memory bandwidth),
>> that MX might be able to push more traffic per slot...
>
> Sure, the chassis backplane is electrically capable of quite a bit, so
> if you keep upgrading the fabric and the cards you should be able to
> keep increasing bandwidth without forklifting the chassis for a long
> time.
>
> BTW, one more point of clarification on the MQ bandwidth limit. The 70G
> limit is actually for the bandwidth crossing the PFE in any direction,
> so the previously mentioned "35G fabric 35G wan" example is actually
> based on the assumption of bidirectional traffic. To calculate the MQ
> usage you only want to count the packet ONCE per PFE crossing (so, for
> example, you can just count every ingress packet), but you need to
> include traffic coming from the fabric interfaces too.
>
> So for example:
>
> * A single 10Gbps stream coming in one port on the PFE counts as 10Gbps
> of MQ usage, regardless of whether it is destined for the fabric or for
> a local port.
>
> * A bidirectional 10Gbps stream between two ports on the same PFE counts
> as 20Gbps of MQ usage, since you have 2 ports each receiving 10Gbps.
>
> * 30Gbps of traffic coming in over the fabric (ignoring any fabric
> limitations for the moment, as they are a separate calculation) and
> going out the WAN interfaces counts as 30G of MQ usage, which means you
> still have another 40G available to receive packets from the WAN
> interfaces and locally or fabric switch them.
>
> This is quite a bit more flexible than just thinking about it as "35G
> full duplex", since you're free to use the 70G in any direction you see
> fit.
>
> --
> Richard A Steenbergen <ras at e-gerbil.net>       http://www.e-gerbil.net/ras
> GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
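For reference, Richard's accounting rule can be sketched in a few lines of
Python. The 70G constant and the function names here are just my own
illustration of the rule as described above, not anything official from
Juniper:

```python
# Back-of-the-envelope MQ bandwidth accounting for a single Trio PFE,
# following the "count each packet once per PFE crossing" rule:
# every packet is counted once on ingress, whether it arrives from a
# WAN port or from the fabric.

MQ_LIMIT_GBPS = 70.0  # approximate MQ ceiling per PFE (illustrative)

def mq_usage_gbps(wan_ingress_gbps, fabric_ingress_gbps):
    """Total MQ usage: WAN ingress plus fabric ingress, each
    counted exactly once."""
    return wan_ingress_gbps + fabric_ingress_gbps

def mq_headroom_gbps(wan_ingress_gbps, fabric_ingress_gbps):
    """Remaining MQ capacity before the PFE oversubscribes."""
    return MQ_LIMIT_GBPS - mq_usage_gbps(wan_ingress_gbps, fabric_ingress_gbps)

# The three examples from the post:
# 1) One 10G stream in one port: 10G of MQ usage.
assert mq_usage_gbps(10, 0) == 10
# 2) Bidirectional 10G between two local ports: each port receives
#    10G, so 20G of MQ usage in total.
assert mq_usage_gbps(10 + 10, 0) == 20
# 3) 30G arriving over the fabric: 30G used, 40G of headroom left
#    for WAN ingress.
assert mq_headroom_gbps(0, 30) == 40
```

This also makes the "more flexible than 35G full duplex" point concrete:
the 70G can be split between WAN and fabric ingress in any ratio.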
