[c-nsp] Best practice - Core vs Access Router

Manu Chao linux.yahoo at gmail.com
Tue Feb 9 08:51:04 EST 2010


It may certainly be possible to reduce/optimise the routing table,

but in any case you will hit the platform limit ;(

Full Internet routing costs a lot
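The platform limit Manu refers to can be sketched as back-of-the-envelope arithmetic. The TCAM totals and usage below are taken from the `sh platform hardware capacity forwarding` output quoted later in the thread; the yearly growth rate is a hypothetical assumption, not a number from the thread:

```python
# Rough FIB TCAM headroom estimate for a PFC3BXL-class box.
# TCAM figures come from the thread; growth rate is a hypothetical assumption.

TCAM_72BIT_TOTAL = 524288   # 512k IPv4/MPLS/EoM entries (PFC3BXL default split)
used_ipv4 = 315005          # IPv4 prefixes installed per the quoted output

headroom = TCAM_72BIT_TOTAL - used_ipv4
utilization = used_ipv4 / TCAM_72BIT_TOTAL

GROWTH_PER_YEAR = 40000     # hypothetical global-table growth, prefixes/year
years_left = headroom / GROWTH_PER_YEAR

print(f"utilization: {utilization:.0%}")       # ~60%, matching the output below
print(f"headroom: {headroom} prefixes, ~{years_left:.1f} years at assumed growth")
```

The point is that even at 60% utilization, a full table plus internal routes leaves limited headroom before the 512k boundary, at which point prefixes get software-switched.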

On Tue, Feb 9, 2010 at 2:43 PM, Church, Charles
<Charles.Church at harris.com> wrote:

> Is it possible the NDE on the SP is the issue?  I assume it's configured to
> export?  What does a 'sh proc cpu hist' tell you on the RP and SP?
>
> Chuck
>
> -----Original Message-----
> From: cisco-nsp-bounces at puck.nether.net
> [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Andy B.
> Sent: Tuesday, February 09, 2010 8:09 AM
> To: Phil Mayers
> Cc: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] Best practice - Core vs Access Router
>
>
> On Tue, Feb 9, 2010 at 1:50 PM, Phil Mayers <p.mayers at imperial.ac.uk>
> wrote:
> >> CPU load is fairly normal at 20-30%
> >
> > Is this average or during a performance event? What about the SP and any
> > DFC CPUs?
>
> This is average. CPU load goes up to 99% when the BGP scanner is
> busy, but that does not happen very often.
>
> >
> > What linecards do you have in the box?
>
> #sh mod
> Mod Ports Card Type                              Model              Serial No.
> --- ----- -------------------------------------- ------------------ -----------
>  2   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6748-GE-TX     SAD082XXXXX
>  5    2  Supervisor Engine 720 (Active)         WS-SUP720-3BXL     SAD084XXXXX
>  8    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE      SAD114XXXXX
>  9    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE      SAL110XXXXX
>
> Mod MAC addresses                       Hw    Fw           Sw           Status
> --- ---------------------------------- ------ ------------ ------------ -------
>  2  0012.435e.07f8 to 0012.435e.0827   2.0   12.2(14r)S5  12.2(18)SXF1 Ok
>  5  0011.21b9.ba54 to 0011.21b9.ba57   4.1   8.1(3)       12.2(18)SXF1 Ok
>  8  0001.0002.0003 to 0001.0002.0006   1.6   12.2(14r)S5  12.2(18)SXF1 Ok
>  9  001a.6c97.d074 to 001a.6c97.d077   2.5   12.2(14r)S5  12.2(18)SXF1 Ok
>
> Mod  Sub-Module                  Model              Serial       Hw     Status
> ---- --------------------------- ------------------ ----------- ------- -------
>  2  Centralized Forwarding Card WS-F6700-CFC       SAL083XXXXX  2.0    Ok
>  5  Policy Feature Card 3       WS-F6K-PFC3BXL     SAD084XXXXX  1.4    Ok
>  5  MSFC3 Daughterboard         WS-SUP720          SAD084XXXXX  2.2    Ok
>  8  Centralized Forwarding Card WS-F6700-CFC       SAL114XXXXX  2.0    Ok
>  9  Centralized Forwarding Card WS-F6700-CFC       SAL110XXXXX  2.1    Ok
>
> Mod  Online Diag Status
> ---- -------------------
>  2  Pass
>  5  Pass
>  8  Pass
>  9  Pass
>
>
>
> >
> >
> > sh mls cef maximum-routes
> > sh mls cef summary
>
> #sh mls cef maximum-routes
> FIB TCAM maximum routes :
> =======================
> Current :-
> -------
>  IPv4 + MPLS         - 512k (default)
>  IPv6 + IP Multicast - 256k (default)
>
>
> #sh mls cef summary
>
> Total routes:                     317940
>    IPv4 unicast routes:          315089
>    IPv4 Multicast routes:        3
>    MPLS routes:                  0
>    IPv6 unicast routes:          2848
>    IPv6 multicast routes:        59
>    EoM routes:                   0
>
> >
> > You say "so that the new router can handle these many MAC addresses"; do
> > you have any reason to believe that MAC or adjacency table size is the
> > problem? The 6500 can handle 64k MAC addresses at layer2 and variable
> > numbers of ARP/layer3 adjacencies.
>
> No, I have no reason. It is just a desperate measure, because despite
> plenty of research I could not find out what is causing my core to
> become so unresponsive at the management and BGP/OSPF level.
>
>
> > It could be ICMP redirects, or layer2 loops downstream.
>
> How would I detect that?
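[A couple of standard IOS counters can spot-check both suspicions. A sketch, assuming these commands behave the same on this 12.2(18)SXF release:

```
! ICMP redirects the router has generated appear as cumulative counters:
show ip traffic | include ICMP|redirect

! A downstream layer-2 loop typically shows up as ongoing topology changes:
show spanning-tree detail | include topology|occurred

! ...and as a churning or abnormally full MAC table:
show mac-address-table count
```

Whether the redirect counter climbs *during* a bad period is more telling than its absolute value.]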
>
> >
> > How often are these performance problems occurring? Is anything logged on
> > the router at the time? What does the output of:
>
> It's at peak times, usually in the evening hours when there is a lot
> of traffic. It never happens in the afternoon or late at night -
> really only once we reach a certain amount of traffic or packets.
>
> > sh proc cpu | ex 0.00
> > remote command switch sh proc cpu | ex 0.00
> > sh platform hardware capacity forwarding
> >
> > ...say after a window of poor performance? How long do the events last?
>
> It's not peak time yet, but here are the current results:
>
> #sh proc cpu sort | e 0.00
> CPU utilization for five seconds: 19%/7%; one minute: 35%; five minutes: 32%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>  286    91421068  67890635       1346  0.71%  4.54%  3.63%   0 BGP Router
>  322       27520     15152       1816  0.71%  0.33%  0.27%   1 SSH Process
>  281    84729936 609049960        139  0.55%  0.20%  0.21%   0 Port manager per
>  175    83539116  11590722       7207  0.47%  0.27%  0.25%   0 IPC LC Message H
>  169    98408344   5822966      16900  0.31%  0.31%  0.31%   0 Adj Manager
>  180    64247088  51118007       1256  0.23%  0.21%  0.19%   0 CEF process
>   9    92311304 220943432        417  0.15%  0.29%  0.35%   0 ARP Input
>  320    18664520 124379650        150  0.15%  2.57%  1.67%   0 BGP I/O
>
> #remote command switch sh proc cpu | ex 0.00
>
> CPU utilization for five seconds: 56%/16%; one minute: 45%; five minutes: 51%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>  42   1165828700 2654567248        439  3.51%  3.87%  3.84%   0 slcp process
>  102   575122192  14545925      39538  1.75%  1.87%  1.93%   0 Vlan Statistics
>  106   184036308  36158906       5089  1.19%  0.62%  0.61%   0 FIB Control Time
>  127    37679084 135489087        278  0.07%  0.10%  0.11%   0 Spanning Tree
>  187    12308164   3092196       3980  0.07%  0.03%  0.05%   0 v6fib stat colle
>  232    60786688  23931437       2540  0.15%  0.16%  0.17%   0 Env Poll
>  243    11847844   2874615       4121  0.07%  0.04%  0.05%   0 Const MPLS Stats
>  248  3799960368 673218956       5644 12.23% 13.87% 16.79%   0 NDE - IPV4
>  254    10876832 145705655         74  0.07%  0.06%  0.06%   0 DiagCard9/-1
>  257    79331296  46446985       1707  0.23%  0.19%  0.21%   0 CEF process
>
> #sh platform hardware capacity forwarding
> L2 Forwarding Resources
>           MAC Table usage:   Module  Collisions  Total       Used   %Used
>                              5                0  65536       3386      5%
>
>             VPN CAM usage:                       Total       Used   %Used
>                                                    512          0      0%
> L3 Forwarding Resources
>             FIB TCAM usage:                     Total        Used   %Used
>                  72 bits (IPv4, MPLS, EoM)     524288      315005     60%
>                 144 bits (IP mcast, IPv6)      262144        2911      1%
>
>                     detail:      Protocol                    Used   %Used
>                                  IPv4                      315005     60%
>                                  MPLS                           0      0%
>                                  EoM                            0      0%
>
>                                  IPv6                        2849      1%
>                                  IPv4 mcast                     3      1%
>                                  IPv6 mcast                    59      1%
>
>            Adjacency usage:                     Total        Used   %Used
>                                               1048576        5045      1%
>
>     Forwarding engine load:
>                     Module       pps   peak-pps   peak-time
>                     5        4440416   10849623   12:44:28 CEST Mon Dec 21 2009
>
>
>
>
>
> Thanks!
>
> Andy
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
>


More information about the cisco-nsp mailing list