[j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistence of old DPCs and new Cards in same chassis -- looking for experience feedback
Derick Winkworth
dwinkworth at att.net
Sun Aug 29 15:00:01 EDT 2010
So the possibility does exist that with a combination of a newer fabric and
newer line cards (cards with better MQ memory bandwidth), the MX might be able
to push more traffic per slot...
________________________________
From: Richard A Steenbergen <ras at e-gerbil.net>
To: Derick Winkworth <dwinkworth at att.net>
Cc: "juniper-nsp at puck.nether.net" <juniper-nsp at puck.nether.net>
Sent: Sun, August 29, 2010 1:34:00 PM
Subject: Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards -
coexistence of old DPCs and new Cards in same chassis -- looking for experience
feedback
On Sun, Aug 29, 2010 at 07:03:59AM -0700, Derick Winkworth wrote:
> Has this always been the case with the SCBs? Will there not be newer
> SCBs that can run faster? I've always heard that the MX series could
> potentially run 240Gbps per slot but would require an SCB upgrade and
> newer line cards... We're not there yet, but I'm wondering if it's
> true. It sounds like below we are talking about existing SCBs,
> which means the MX is limited to 120G per slot.
Until now each PFE has only needed 10G total bandwidth (per I-chip, * 4
per DPC), so the fabric has been more than sufficient while still
providing N+1. My understanding is that even with a new fabric card
you'll still be limited to the 35G from the MQ memory bandwidth limit
(just like you are with MX240/MX480), so the only difference will be a)
you'll get fabric redundancy back, and b) you'll get support for future
cards (like 100GE, etc.).
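To put rough per-slot numbers on that, here's a back-of-the-envelope sketch in
Python; the per-PFE figures are just the ones quoted above, and the
4-PFE-per-card count is an assumption on my part, not a datasheet value:

    # Back-of-the-envelope per-slot math using the figures above
    # (assumptions, not datasheet numbers).

    ICHIP_PFE_BW_G = 10     # ~10G of fabric bandwidth per I-chip PFE
    TRIO_MQ_LIMIT_G = 35    # MQ memory bandwidth ceiling per Trio PFE
    PFES_PER_CARD = 4       # 4 PFEs per DPC; assume the same for the 16x10GE MPC

    dpc_slot_g = ICHIP_PFE_BW_G * PFES_PER_CARD     # 40G per DPC slot
    trio_slot_g = TRIO_MQ_LIMIT_G * PFES_PER_CARD   # 140G MQ-bound ceiling per Trio slot

    print("DPC slot demand:              %dG" % dpc_slot_g)
    print("Trio slot ceiling (MQ-bound): %dG" % trio_slot_g)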
Another thing I forgot to mention is that the old ADPC I-chip cards can
still only talk to the same number of SCBs that they did originally (2x
on MX960, 1x on MX240/480). This means that when you're running mixed
I-chip and Trio cards in the same chassis, say for example an MX960,
all traffic going to/from an I-chip card will stay on 2 out of 3 SCBs,
and only the Trio-to-Trio traffic will be able to use the 3rd SCB. If
all of your traffic is going between a Trio card and other I-chip cards,
this will obviously bottleneck your Trio capacity at 20G per PFE (minus
overhead). Supposedly there is an intelligent fabric request/grant
system, so hopefully the Trio PFEs are smart enough to use more capacity
on the 3rd SCB for Trio-to-Trio traffic if the first 2 are being loaded
up with I-chip card traffic.
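To make that arithmetic explicit, here's a sketch; the ~10G-per-plane-per-PFE
figure and the even split across planes are my assumptions, working backwards
from the 20G/2-plane number above:

    # Sketch of the mixed-chassis bottleneck on an MX960: I-chip cards can
    # only reach 2 of the 3 SCBs, so Trio<->I-chip traffic is confined to
    # 2 fabric planes. Assumes roughly 10G of usable fabric capacity per
    # plane per PFE (my assumption, consistent with the 20G figure above).

    PER_PLANE_G = 10
    PLANES_TOTAL = 3     # SCBs in an MX960 with the 3rd plane in use
    PLANES_ICHIP = 2     # planes an I-chip DPC can actually reach

    trio_to_ichip_g = PER_PLANE_G * PLANES_ICHIP    # ~20G per Trio PFE
    trio_to_trio_g = PER_PLANE_G * PLANES_TOTAL     # ~30G per Trio PFE

    print("Trio -> I-chip ceiling: %dG per PFE" % trio_to_ichip_g)
    print("Trio -> Trio ceiling:   %dG per PFE" % trio_to_trio_g)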
You can also use the hidden command "show chassis fabric statistics" to
monitor fabric utilization and drops. The output is pretty difficult to
parse: you have to look at it per-plane, and it isn't in XML, so you
can't even easily write an op script for it, but it's still better than
nothing.
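If you want to at least trend it from the outside, here's a rough sketch of
the kind of wrapper I mean (Python; the line patterns are a guess at the
output format, so adjust them to whatever your box actually prints, and how
you capture the CLI text is up to you):

    import re
    import sys

    # Hedged sketch: pull per-plane counters out of pasted
    # "show chassis fabric statistics" text. The patterns below are guesses
    # at the output format -- adapt them to your actual output.

    PLANE_RE = re.compile(r'[Pp]lane\s+(\d+)')
    COUNTER_RE = re.compile(r'^\s*(?P<name>[A-Za-z][A-Za-z /-]*?)\s*:?\s+(?P<value>\d+)\s*$')

    def parse_fabric_stats(text):
        """Return {plane: {counter_name: value}} from raw CLI text."""
        stats, plane = {}, None
        for line in text.splitlines():
            m = PLANE_RE.search(line)
            if m:
                plane = int(m.group(1))
                stats.setdefault(plane, {})
                continue
            m = COUNTER_RE.match(line)
            if m and plane is not None:
                stats[plane][m.group('name').strip()] = int(m.group('value'))
        return stats

    if __name__ == '__main__':
        parsed = parse_fabric_stats(sys.stdin.read())
        for plane, counters in sorted(parsed.items()):
            drops = sum(v for k, v in counters.items() if 'drop' in k.lower())
            print("plane %d: %d counters, total drops %d" % (plane, len(counters), drops))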
Hopefully Juniper will add a better fabric utilization command, ideally
with something that tracks the peak rate ever seen too (like Cisco
does), for example:
cisco6509#show platform hardware capacity fabric
Switch Fabric Resources
Bus utilization: current: 13%, peak was 54% at 08:47:31 UTC Fri Jun 25 2010
Fabric utilization:     Ingress                       Egress
  Module  Chanl  Speed  rate  peak                    rate  peak
  1       0       20G     1%    6% @21:14 06Apr10       1%   10% @20:14 13Feb10
  2       0       20G    10%   33% @21:15 21Mar10       0%   31% @20:10 24May10
  2       1       20G     2%   52% @03:48 30Apr10      14%   98% @10:20 09Jun10
  3       0       20G    19%   40% @20:38 21Mar10      14%   25% @01:02 09Jul10
  3       1       20G     4%   37% @10:42 09Jan10       1%   61% @02:52 20Dec09
  4       0       20G    27%   51% @20:30 14Jul10       1%    9% @17:04 03May10
  4       1       20G     2%   60% @12:12 13May10      34%   82% @01:33 29Apr10
  5       0       20G     0%    5% @18:51 14Feb10       0%   21% @18:51 14Feb10
  6       0       20G     2%   17% @03:07 29Jun10      19%   52% @17:50 14Jul10
  6       1       20G     0%   42% @10:22 20Apr10       0%   73% @02:25 28Mar10
  7       0       20G     6%   33% @10:20 09Jun10      26%   58% @02:25 19Aug10
  7       1       20G    35%   51% @19:38 14Jul10       1%    6% @16:55 03May10
Or at least expose and XML-ify the current one so we can script up
something decent.
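In the meantime, a small external poller can approximate the Cisco-style "peak
ever seen" tracking. A rough sketch, where get_plane_utilization() is a
hypothetical placeholder for however you actually collect per-plane numbers
(SNMP, CLI scraping, whatever), not a real Junos API:

    import time
    from datetime import datetime

    # Sketch of Cisco-style "peak ever seen" tracking done from outside the box.
    # get_plane_utilization() is a placeholder, not a real Junos or PyEZ call.

    def get_plane_utilization():
        """Return {plane: utilization_percent}; plug in your own collector."""
        raise NotImplementedError("plug in your own collection method here")

    def track_peaks(interval=60):
        peaks = {}  # plane -> (peak_percent, timestamp string)
        while True:
            stamp = datetime.utcnow().strftime('%H:%M %d%b%y')
            for plane, pct in sorted(get_plane_utilization().items()):
                if pct > peaks.get(plane, (-1, None))[0]:
                    peaks[plane] = (pct, stamp)
                peak_pct, peak_at = peaks[plane]
                print("plane %d: %d%% (peak %d%% @ %s)" % (plane, pct, peak_pct, peak_at))
            time.sleep(interval)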
--
Richard A Steenbergen <ras at e-gerbil.net> http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)