[c-nsp] C3560 as CPE, possible TCAM contention
Tassos Chatzithomaoglou
achatz at forthnet.gr
Tue Apr 29 11:42:42 EDT 2008
Hi Peter,
I usually use the following:
sh controllers cpu-interface
sh platform ip unicast counts
sh platform ip unicast failed
sh ip cef switching statistics feature
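For what it's worth, the global stats you already pasted look telling: CPUAdj (150183381) is in the
same order of magnitude as HWFwdSec (194077183), and if I read those counters right that means a
very large share of your traffic is being forwarded via CPU adjacencies rather than in hardware,
which matches the symptoms.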
But from your TCAM output, I see the "IPv4 unicast indirectly-connected routes" entries are close
to the maximum (1921/2176), and you also mentioned around 2300 prefixes.
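Incidentally, if you add up the per-table totals from your "failed route" output
(2+2+5+115+29+34+128+94+96 = 505) and the 1921 entries that did get programmed, you end up around
2400 routes -- more than the 2176 the default template can hold, and right in line with your ~2300
prefixes plus connected/local routes. So the indirect-route region of the TCAM is simply full and
the overflow falls back to software.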
The routing template should help you, as you said:
3750#sh sdm prefer default | i indirect
number of indirect IPv4 routes: 2K
3750#sh sdm prefer routing | i indirect
number of indirect IPv4 routes: 8K
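If you go with the template change, it should just be the following (from memory, so double-check
on your IOS version; the new template only takes effect after a reload, so plan a maintenance
window):

3750(config)#sdm prefer routing
3750(config)#end
3750#copy running-config startup-config
3750#reload

and then verify with "sh sdm prefer" after it comes back up.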
Of course, the best solution (if you're not in a hurry) would be to open a TAC case.
--
Tassos
Peter Rathlev wrote on 29/4/2008 5:10 PM:
> Hi,
>
> I'm looking at some C3560s acting as CPEs. One of them has 13 VRFs in a VRF
> Lite configuration, 36 BGP neighbors and around 2300 prefixes. (It's not
> a pretty design, but that's out of my hands.)
>
> It has started doing software switching, with very degraded performance
> of course. I can see the following:
>
> CPE_1#show platform tcam utilization
>
> CAM Utilization for ASIC# 0                        Max          Used
>                                               Masks/Values  Masks/values
>
> Unicast mac addresses:                          784/6272       23/110
> IPv4 IGMP groups + multicast routes:            144/1152        6/26
> IPv4 unicast directly-connected routes:         784/6272       23/110
> IPv4 unicast indirectly-connected routes:       272/2176      252/1921
> IPv4 policy based routing aces:                     0/0          0/0
> IPv4 qos aces:                                   528/528        31/31
> IPv4 security aces:                            1024/1024        27/27
>
> Note: Allocation of TCAM entries per feature uses
> a complex algorithm. The above information is meant
> to provide an abstract view of the current TCAM utilization
>
> CPE_1#show platform ip unicast statistics
> Global Stats:
> HWFwdLoc:0 HWFwdSec:194077183 UnRes:0 UnSup:0 NoAdj:0
> EncapFail:0 CPUAdj:150183381 Null:0 Drop:0
>
> Prev Global Stats:
> HWFwdLoc:0 HWFwdSec:194077183 UnRes:0 UnSup:0 NoAdj:0
> EncapFail:0 CPUAdj:150183381 Null:0 Drop:0
>
> CPE_1#show platform ip unicast table
> Platform unicast IPv4 Table dump (# of entries 14)
> Name             ID   Label   Mask
> IPv4:Default      0     0     0x7F
> IPv4:VRF01281     1    64     0x7F
> IPv4:VRF02401     2    65     0x7F
> IPv4:VRF02402     3    66     0x7F
> IPv4:VRF02403     4    67     0x7F
> IPv4:VRF02404     5    68     0x7F
> IPv4:VRF02405     6    69     0x7F
> IPv4:VRF02406     7    70     0x7F
> IPv4:VRF02419     8    71     0x7F
> IPv4:VRF02433     9    72     0x7F
> IPv4:VRF02434    10    73     0x7F
> IPv4:VRF02436    11    74     0x7F
> IPv4:VRF02438    12    75     0x7F
> IPv4:VRF02439    13    76     0x7F
> CPE_1#
> CPE_1#show platform ip unicast failed route
> Total of 0 covering fib entries
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 2 entries covered by 0.0.0.0/0 Tbl:2
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 2 entries covered by 0.0.0.0/0 Tbl:3
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 5 entries covered by 0.0.0.0/0 Tbl:5
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 115 entries covered by 0.0.0.0/0 Tbl:6
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 29 entries covered by 0.0.0.0/0 Tbl:9
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 34 entries covered by 0.0.0.0/0 Tbl:10
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 128 entries covered by 0.0.0.0/0 Tbl:11
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 94 entries covered by 0.0.0.0/0 Tbl:12
> Entries covered by Actual default route(0.0.0.0/0)
> <cut>
> Total of 96 entries covered by 0.0.0.0/0 Tbl:13
> CPE_1#
>
> (I've left out the specific prefixes and changed the CPE name.)
>
> It's running the "desktop default" SDM template, and the best option so far
> seems to be changing to the "routing" template. (That should have been done
> from the beginning; it's only doing routing, with customer L3 equipment on
> the LAN side.)
>
> The problem is: How can I _know_ if TCAM contention is the problem? It
> doesn't give me any log messages or anything, it just starts building
> CPU adjacencies. TCAM utilisation is high, but not 99-100%, only around
> 90% for IPv4 unicast indirect. I'm not sure what to debug -- performance
> is bad now, and it'll probably only get worse while debugging, so I'd
> rather not do anything random...
>
> Thank you,
> Peter
>
>
> _______________________________________________
> cisco-nsp mailing list cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>