[j-nsp] New 16-port 10G Card and new MPC with 4x10G MIC Cards - coexistence of old DPCs and new Cards in same chassis -- looking for experience feedback

Richard A Steenbergen ras at e-gerbil.net
Sun Aug 29 02:39:18 EDT 2010


On Sun, Aug 29, 2010 at 02:29:29AM +0400, Pavel Lunin wrote:
> My hypothesis is that the MQ can actually do twice as much: 65 Mpps 
> from the interfaces to the backplane and 65 back. Otherwise you'd 
> never get 30 Gbps FD with the MPC1. But this knowledge is too 
> burdensome for sales people, because if you don't know it, you can 
> just multiply 65 by the number of chips in a box and get the right 
> pps number. One could hardly understand that each MQ actually does 
> twice as much work, but each packet passes through two MQs, so you 
> need to multiply and then divide by 2 accordingly.
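
To make the arithmetic in that hypothesis concrete, here is a minimal 
sketch (Python; the 65 Mpps figure is Pavel's, the accounting model is 
just my reading of his wording):

    # Hypothesis: each MQ forwards ~65 Mpps in each direction, but a
    # fabric-switched packet traverses two MQs (ingress and egress).
    MPPS_PER_MQ_PER_DIRECTION = 65

    def chassis_mpps(num_mq_chips):
        # Each MQ does 2 x 65 Mpps of work, but every packet consumes
        # work on two MQs, so the factors of two cancel.
        total_work = num_mq_chips * 2 * MPPS_PER_MQ_PER_DIRECTION
        return total_work / 2

    print(chassis_mpps(4))  # 260.0 -- same as naively multiplying 65 by 4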

I got some replies off-list which helped shed some light on the Trio 
capabilities, so with their permission I will summarize the major points 
for the archives:

* Each Trio PFE is composed of the following ASICs:

  - MQ: Handles the packet memory, talks to the chassis fabric and the 
    WAN ports, handles port-based QoS, and punts the first part of the 
    packet to the LU chip for routing lookups.
  - LU: Lookup ASIC which does all IP routing lookups, MAC lookups, 
    label switching, firewall matching, policing, accounting, etc.
  - QX: (optional) Implements the fine-grained queueing/HQoS features.
    NOT included on the 16-port 10GE MPC.
  - IX: (optional) Sits in front of the MQ chip to handle GigE ports.
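
As a rough memory aid, the division of labor might be modeled like this 
(the structure below is mine, not Juniper's, and where the optional 
chips sit on the path is my loose interpretation):

    # Roles paraphrased from the list above. IX and QX only exist on
    # some MPCs, as noted.
    TRIO_ASIC_ROLES = {
        "IX": "optional GigE aggregation in front of the MQ",
        "MQ": "packet memory, fabric/WAN interfaces, port-based QoS",
        "LU": "route/MAC lookups, labels, firewall, policing, accounting",
        "QX": "optional fine-grained queueing / HQoS",
    }

    def packet_path(has_ix=False, has_qx=False):
        # The MQ punts the packet header to the LU and gets it back;
        # chip ordering here is a guess for illustration only.
        path = ["MQ", "LU", "MQ"]
        if has_ix:
            path.insert(0, "IX")
        if has_qx:
            path.append("QX")
        return path

    print(packet_path(has_qx=True))  # ['MQ', 'LU', 'MQ', 'QX']
    print(TRIO_ASIC_ROLES["LU"])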

* The Trio PFE is good for around 55Mpps of lookups, give or take, 
  depending on the exact operations being performed.

* The MQ chip can do around 70Gbps, give or take depending on the 
  packet size. Certain packet sizes can make it all the way to 80Gbps, 
  inconvenient packet sizes can bring it down below 70G by the time you 
  figure in overhead, but the gist is around 70Gbps. This limit is set 
  by the bandwidth of the packet memory. The quoted literature capacity 
  of 60Gbps is intended to be a "safe" number that can always be met.
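
A back-of-the-envelope way to see why the number moves with packet size 
(the 80Gbps ceiling is from above; the per-packet overhead value here is 
purely an illustrative assumption, not a published figure):

    RAW_MEMORY_BW_GBPS = 80  # best-case figure quoted above

    def effective_gbps(pkt_bytes, overhead_bytes=16):
        # Per-packet overhead is paid against memory bandwidth but never
        # reaches the wire, so small packets lose a larger fraction.
        return RAW_MEMORY_BW_GBPS * pkt_bytes / (pkt_bytes + overhead_bytes)

    for size in (64, 256, 1500):
        print(size, round(effective_gbps(size), 1))
    # 64 -> 64.0, 256 -> 75.3, 1500 -> 79.2 (illustrative numbers only)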

* The 70G of MQ memory bandwidth is shared between the fabric facing 
  and WAN facing ports, giving you a bidirectional max of 35Gbps each 
  if you run 100% fabric<->wan traffic. If you do locally switched 
  wan->wan traffic, you can get the full 70Gbps. On a fabricless 
  chassis like the MX80, that is how you get the entire amount.
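
A quick sketch of that accounting, as I understand it (the model is my 
interpretation of the description above, not a Juniper formula):

    MQ_BW_GBPS = 70  # approximate memory bandwidth per Trio PFE

    def mq_load(fabric_in_gbps, fabric_out_gbps, local_gbps):
        # Fabric-bound traffic touches this PFE's memory once per
        # direction; locally switched traffic never crosses the fabric
        # and is charged only once.
        return fabric_in_gbps + fabric_out_gbps + local_gbps

    print(mq_load(35, 35, 0) <= MQ_BW_GBPS)  # True: 35G each way, all-fabric case
    print(mq_load(0, 0, 70) <= MQ_BW_GBPS)   # True: pure local switching gets 70G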

* The MX960 can only provide around 10Gbps per SCB to each PFE, so it 
  needs to run all 3 SCBs actively to get to 30Gbps. If you lose an SCB, 
  it drops to 20Gbps, etc. This is before cell overhead, so the actual 
  bandwidth is less (for example, around 28Gbps for 1500 byte packets).

* The MX240 and MX480 provide 20Gbps of bandwidth per SCB to each PFE, 
  and will run both actively to get to around 40Gbps (minus the above 
  overhead). Of course the aforementioned 35Gbps memory limit still 
  applies, so even though you have 40Gbps of fabric on these chassis 
  you'll still top out at 35Gbps if you do all fabric<->wan traffic.
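
For the fabric numbers, a padding-only model of cell overhead gets close 
to the quoted figures (the 64-byte cell size is my assumption for 
illustration; real cells also carry headers, which is why the post 
quotes ~28G for 1500-byte packets rather than the ~29.3G this model 
gives):

    import math

    GBPS_PER_SCB = {"MX960": 10, "MX240/MX480": 20}  # per PFE, from above

    def fabric_goodput_gbps(chassis, active_scbs, pkt_bytes=1500, cell_bytes=64):
        raw = GBPS_PER_SCB[chassis] * active_scbs
        # The fabric moves fixed-size cells, so the last cell of every
        # packet is padded out and that padding is wasted bandwidth.
        cells = math.ceil(pkt_bytes / cell_bytes)
        return raw * pkt_bytes / (cells * cell_bytes)

    print(round(fabric_goodput_gbps("MX960", 3), 1))        # 29.3
    print(round(fabric_goodput_gbps("MX960", 2), 1))        # 19.5 after an SCB failure
    print(round(fabric_goodput_gbps("MX240/MX480", 2), 1))  # 39.1, capped at 35G by the MQ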

* Anything that is locally switched counts against the LU capacity and 
  the MQ capacity, but not the fabric capacity. As long as you don't 
  exhaust the MQ/fabric, you can get line rate out of the WAN 
  interfaces. For example, 30Gbps of fabric switched + 10Gbps of locally 
  switched traffic on a MX240 or MX480 will not exceed the MQ or fabric 
  capacity and will give you bidirectional line rate.
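
Putting the two limits together, the 30G + 10G example checks out like 
this (same assumed accounting model as the sketches above):

    MQ_BW_GBPS = 70

    def line_rate_possible(fabric_cap_gbps, fabric_gbps, local_gbps):
        # Fabric traffic is charged twice against the MQ (WAN side plus
        # fabric side) but only once against the fabric itself; local
        # traffic is charged once and is fabric-free.
        mq_ok = 2 * fabric_gbps + local_gbps <= MQ_BW_GBPS
        fabric_ok = fabric_gbps <= fabric_cap_gbps
        return mq_ok and fabric_ok

    # MX240/MX480 (~40G fabric per PFE): 30G fabric + 10G local
    print(line_rate_possible(40, 30, 10))  # True -> bidirectional line rate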

* I'm still hearing mixed information about egress filters affecting 
  local switching, but the latest and most authoritative answer is that 
  it DOESN'T actually affect local switching. Everything that can be 
  locally switched supposedly is, including tunnel encapsulation, so if 
  you receive a packet, tunnel it, and send it back out locally, you get 
  100% free tunneling with no impact to your other capacity.

I think that was everything. And if they aren't planning to add it 
already, please join me in asking them to add a way to view fabric 
utilization, as it would really make managing the local vs fabric 
capacities a lot easier.

-- 
Richard A Steenbergen <ras at e-gerbil.net>       http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)

