[j-nsp] M10i FPC PIC Throughput Questions

Christopher E. Brown chris.brown at acsalaska.net
Sat Feb 23 23:37:28 EST 2013


The bus is _shared_.  With the CFEB you get a guaranteed 3.2Gbit shared
by up to 4 PICs.  With the E-CFEB it is a non-issue: the single-PIC
limit is 1G, and the E-CFEB will do the full 1G per PIC no matter what.

If you try to handle more than 3.2Gbit on a CFEB bus (X-0/X/X or
X-1/X/X) you may see bus contention depending on packet size.

Load 4xGE and maybe you hit it.  Load 3xGE + 4xDS3 and you are pushing
the limit, but OK.
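
For reference, taking each GE PIC at its 1Gbit per-PIC cap and a DS3 at
the standard 44.736 Mbit/s line rate:

        4 x 1000              = 4000.0 Mbit  (over the 3200 limit if all run hot)
        3 x 1000 + 4 x 44.736 = 3178.9 Mbit  (just under the 3200 limit)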

With the E-CFEB, it is a non-issue.

With the CFEB, sum the bandwidth and make sure it is 3200Mbit or less;
that 3200 is shared by all 4 PICs.
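
As a quick sanity check, a few lines of Python; a sketch only, using the
3200Mbit worst-case bus figure and the 1Gbit per-PIC cap from this
thread (not anything Junos will report for you):

# cfeb_budget.py -- rough CFEB bus budget check (sketch only)

CFEB_BUS_MBIT = 3200.0   # worst-case small-packet capacity per CFEB bus
PIC_CAP_MBIT  = 1000.0   # per-PIC limit discussed above

def bus_load_mbit(pic_rates_mbit):
    """Sum offered load across the (up to 4) PICs on one bus,
    clamping each PIC to its 1Gbit cap."""
    return sum(min(rate, PIC_CAP_MBIT) for rate in pic_rates_mbit)

# Example: three GE PICs at full rate plus a 4-port DS3 PIC
# (44.736 Mbit/s per DS3 port).
pics = [1000.0, 1000.0, 1000.0, 4 * 44.736]
load = bus_load_mbit(pics)
status = "OK" if load <= CFEB_BUS_MBIT else "possible bus contention"
print("offered load %.1f of %.0f Mbit -> %s" % (load, CFEB_BUS_MBIT, status))

With the E-CFEB the same sum can go to 4000Mbit (4 PICs x 1G each),
which is within its after-overhead capacity, so the bus check drops out.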


On 2/23/13 6:51 PM, Matt Bentley wrote:
> Thanks!  So it would be correct to say you should NEVER see
> oversubscription on a channelized DS3 card, right?  Obviously, you can
> overdrive a single T1, but you'd never see drops due to the PIC itself?
>  I guess what I'm asking is whether the bandwidth available on an
> FPC is allocated equally per PIC, or if everyone sort of shares it.
> 
> On Sat, Feb 23, 2013 at 8:13 PM, Christopher E. Brown
> <chris.brown at acsalaska.net> wrote:
> 
> 
>     With the std CFEB, per-bus capacity after internal overhead is 3.2Gbit
>     of traffic; this is the worst case (minimum-size small packets, etc.).
> 
>     Raw bus capacity is IIRC ~4Gbit; the difference is overhead.
> 
>     Unless you are doing all small packets, the actual limit is higher than 3.2.
> 
>     Enhanced CFEB bumps the raw bus capacity to something around 5Gbit, and
>     the after-overhead forwarding capacity to 4Gbit (based on the 1G
>     per-PIC limit).
> 
>     Summary...
> 
>     CFEB
>             Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small packet_
> 
>     E-CFEB
>             Up to 1Gbit per PIC
> 
> 
>     These figures are worst case.
>     On 2/23/13 6:01 PM, Matt Bentley wrote:
>     > OK - so there has been a lot of discussion around this that I've
>     seen, but
>     > I've searched for hours and still can't find concrete answers.
>      Can someone
>     > help?
>     >
>     > 1.  Does the 3.2 Gbps throughput limitation include overhead?  In
>     other
>     > words, is the "raw" throughput 4 Gbps with effective throughput of 3.2
>     > Gbps?  Or is it 3.2 Gbps of raw throughput with effective
>     throughput of 2.5
>     > Gbps?
>     > 2.  Is this throughput per PIC on the FPC?  So let's say I have
>     three 4x
>     > GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get
>     bandwidth
>     > allocated equally between them?  So is it 800 Mbps per PIC, and
>     the PICs
>     > can't "steal" bandwidth from another one?
>     > 3.  Where, and based on what, is traffic dropped with Juniper
>     head-of-line
>     > blocking (i.e. where multiple high-speed input interfaces try to go
>     out the
>     > same lower-speed exit interface)?
>     >
>     > Thanks very much!
>     > _______________________________________________
>     > juniper-nsp mailing list juniper-nsp at puck.nether.net
>     > https://puck.nether.net/mailman/listinfo/juniper-nsp
>     >
> 
> 
>     --
>     ------------------------------------------------------------------------
>     Christopher E. Brown   <chris.brown at acsalaska.net>   desk (907) 550-8393
>                                                          cell (907) 632-8492
>     IP Engineer - ACS
>     ------------------------------------------------------------------------
>     _______________________________________________
>     juniper-nsp mailing list juniper-nsp at puck.nether.net
>     https://puck.nether.net/mailman/listinfo/juniper-nsp
> 
> 


-- 
------------------------------------------------------------------------
Christopher E. Brown   <chris.brown at acsalaska.net>   desk (907) 550-8393
                                                     cell (907) 632-8492
IP Engineer - ACS
------------------------------------------------------------------------

