[f-nsp] BigIron 4k with JetCore
Jeroen Wunnink
jeroen at easyhosting.nl
Mon Jul 14 06:16:44 EDT 2008
You might want to take a look at the free CAM entries and check whether
they're running very low ('sh cam-partition detail').
We've had some issues with the L3 CAM getting very low and the CPU
spiking. Re-assigning some L2 CAM space to L3 CAM space resolved
this ('cam-partition l2 7 l3 68 l4 25', for example, is what worked
out well for us).
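For reference, the rough sequence we used looked something like this
(from memory, and the partition percentages are just what suited our
traffic mix; if I remember right the new partition only takes effect
after a reload, so plan a maintenance window):

    show cam-partition detail          <- check the free L2/L3/L4 CAM entries first
    configure terminal
    cam-partition l2 7 l3 68 l4 25     <- shift some L2 CAM space to L3
    exit
    write memory
    reload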
Also check whether you're running low on the maximum number of IP routes
or hitting the maximum number of IP cache entries ('sh ip route' and
'sh ip cache'). The default is set to something like 140000; raising
those two to 400000 helps a lot ('system-max ip-cache 400000' and
'system-max ip-route 400000'). This can be another cause of high CPU spikes.
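Raising the system-max values follows the same pattern (again from
memory, and as far as I recall these also only become active after a
write mem and reload):

    show ip route                      <- compare the current route count against the limit
    show ip cache
    configure terminal
    system-max ip-route 400000
    system-max ip-cache 400000
    exit
    write memory
    reload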
Take note that every group of 4 gigabit ports on a BigIron JetCore module
shares the same CAM space, so if you're running low on CAM on a
certain set of ports it may indeed be a good idea to spread the load
across more port groups.
From the Foundry CAM whitepaper:
JetCore ASIC:
The JetCore products are composed of the JetCore "building blocks,"
the IGC and the IPC. Each IGC supports four Gigabit Ethernet ports,
and each IPC supports 24 Fast Ethernet ports plus one Gig port. As
an illustration of how the IGC and IPC are used, the FWS4802 consists
of two IPCs linked together. An 8-port Gigabit module has two IGCs,
and the FI2404 module has one IGC and one IPC. Each IGC or IPC has
its own CAM space.
Each IPC or IGC has 1Mbit of CAM for FastIron modules or 2Mbit for
BigIron modules. Therefore, a J-BxG has 4Mbit total, a J-FI48E has
2Mbit, and a J-B16GC has 8Mbit.
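To tie that back to your setup (my reading of the whitepaper, so
double-check against your actual module): the 16-port copper gig blade
is four IGCs of 4 ports each, and each group only ever sees its own slice:

    J-B16GC: 4 IGCs x 2Mbit per IGC = 8Mbit CAM total,
    but each group of 4 ports is limited to its own 2Mbit.

So with all 5 uplinks plugged in next to each other, at least two of
them land on the same IGC and all the customer VLANs/routes behind them
compete for that single 2Mbit block, which is essentially what the
Foundry SE means by one uplink per group of 4.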
At 00:18 14-7-2008, you wrote:
>Hello,
>
>I am running into an issue with my BI4k CPU spiking and hold timers
>expiring on my BGP sessions with the border routers. My network
>consists of 2 Juniper M7i's acting as border routers, each with a
>transit attached and receiving full routes. Both M7is are connected
>to the BI4k and run IBGP. Then I have a 16-port copper GigE blade,
>also JetCore. All of my top of rack switches connect to the 16 port
>card. The top of rack switches are just L2 with tagged uplink ports.
>Each customer gets their own VLAN and a /29 or larger block, so I
>have a bunch of ve's and the BI4k acts as the default gateway for the
>customers' IP blocks. My normal aggregate traffic is about 50
>Mbps or so at most, but last night I had a 50 Mbps traffic spike and
>exactly at that time I lost the session between the M7i and the BI4k.
>
>I have no idea what limitation I am hitting, but this box should be
>good for much more than this.
>
>The local Foundry SE says I should spread my uplink ports from my L2
>switches across the 16 port blade. He seems to think that because I have
>all 5 switches plugged in right next to each other, I am exhausting the
>CAM resources of the card. The card has 4 groups of 4 ports, and he
>wants me to plug one uplink per group of 4. But I can't believe that's
>all this card can take.
>
>Can anyone shed any light on this?
>
>Thanks,
>
>Brendan
>_______________________________________________
>foundry-nsp mailing list
>foundry-nsp at puck.nether.net
>http://puck.nether.net/mailman/listinfo/foundry-nsp
Kind regards,

Jeroen Wunnink,
EasyHosting B.V. System Administrator
systeembeheer at easyhosting.nl
phone: +31 (035) 6285455
fax: +31 (035) 6838242
Postbus 48, 3755 ZG Eemnes
http://www.easyhosting.nl
http://www.easycolocate.nl