[c-nsp] 6500 SUP720 High Latency and Jitter issues

Anthony D Cennami acennami at neupath.com
Tue May 24 12:49:58 EDT 2005


Well, the 40% BGP spike would appear to be "something" to me.

Are your IGPs stable and your neighbors reachable during this time?

My initial thought would be an unreachable neighbor (no static or IGP route
left to reach it), and the subsequent invalidation and reconvergence of the
routes received through that peer.

The overall increase in latency might be explained by traffic still having a
valid next hop through a stable neighbor, while the extra CPU load of the BGP
process invalidating and re-propagating routes drives the delay up.

Are you seeing any invalidated or unreachable neighbors, BGP flaps, etc.?
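
If it helps, a few quick checks for that from the CLI (the peer address below
is just a placeholder, substitute your own):

  show ip bgp summary
  show ip bgp neighbors <peer-ip> | include BGP state|Last reset|dropped
  show ip route <peer-ip>
  show logging | include BGP|OSPF

The summary shows session state and prefix counts per peer, the neighbor
output shows the last reset reason and how many times the session has
dropped, and the route lookup confirms you still have a path back to the peer
while the latency is happening.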


> Nothing seems to be occurring on the processor when my latency hits.
> Please see below:
>
> DCA-BV-RTR#sho processes cpu | exclude 0.00
> CPU utilization for five seconds: 70%/28%; one minute: 40%; five minutes: 39%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>   42       39388     19169       2054  0.77%  0.39%  0.77%   1 SSH Process
>  116   140093140 524624769        267  1.08%  2.62%  2.64%   0 IP Input
>  243     5750964  26741621        215  0.69%  0.19%  0.17%   0 Port manager per
>  286     3242496   9389779        345  0.07%  0.20%  0.13%   0 BGP Router
>  288    89258168    505910     176438 39.50%  4.92%  3.53%   0 BGP Scanner
> DCA-BV-RTR#sho processes cpu | exclude 0.00
> CPU utilization for five seconds: 35%/24%; one minute: 39%; five minutes: 39%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>    4    20002060   1015216      19702  7.51%  1.01%  0.75%   0 Check heaps
>   42       39408     19175       2055  0.23%  0.38%  0.76%   1 SSH Process
>  116   140093420 524626794        267  1.43%  2.53%  2.62%   0 IP Input
>  286     3242516   9389823        345  0.23%  0.20%  0.13%   0 BGP Router
>  288    89258304    505911     176438  1.59%  4.66%  3.50%   0 BGP Scanner
> DCA-BV-RTR#sho processes cpu | exclude 0.00
> CPU utilization for five seconds: 30%/26%; one minute: 37%; five minutes: 38%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>    4    20002184   1015223      19702  1.91%  0.94%  0.74%   0 Check heaps
>  116   140094004 524630072        267  1.83%  2.36%  2.57%   0 IP Input
>  286     3242532   9389906        345  0.23%  0.17%  0.13%   0 BGP Router
> DCA-BV-RTR#sho processes cpu | exclude 0.00
> CPU utilization for five seconds: 32%/28%; one minute: 36%; five minutes: 38%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>  116   140094584 524631994        267  3.75%  2.54%  2.60%   0 IP Input
>  243     5751020  26742179        215  0.63%  0.17%  0.16%   0 Port manager per
>  286     3242536   9389952        345  0.07%  0.15%  0.12%   0 BGP Router
> DCA-BV-RTR#sho processes cpu | exclude 0.00
> CPU utilization for five seconds: 32%/28%; one minute: 36%; five minutes: 38%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>   42       39488     19193       2057  0.23%  0.28%  0.70%   1 SSH Process
>  116   140094876 524632948        267  3.43%  2.61%  2.62%   0 IP Input
>  276     1897896  15219954        124  0.15%  0.02%  0.01%   0 IP SNMP
>  281     3642248   7947776        458  0.23%  0.09%  0.10%   0 SNMP ENGINE
> DCA-BV-RTR#sho processes cpu | exclude 0.00
> CPU utilization for five seconds: 35%/31%; one minute: 36%; five minutes: 38%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>   42       39504     19199       2057  0.31%  0.29%  0.69%   1 SSH Process
>  116   140095204 524634004        267  3.19%  2.66%  2.63%   0 IP Input
>  243     5751056  26742365        215  0.63%  0.19%  0.17%   0 Port manager per
> DCA-BV-RTR#
>
> My spanning tree looks good, and I am not seeing any BGP or OSPF
> withdrawals during this period.  Thanks for the thoughts.  //db
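
One thing that stands out in the first snapshot: the 70%/28% figure means
roughly 28% of that CPU is being burned at interrupt level, i.e. on traffic
being handled by the RP rather than switched in hardware, with BGP Scanner
adding almost 40% on top while it runs (it normally walks the table about
once a minute to validate next hops).  Two commands that make it easier to
line those spikes up with your latency windows, assuming your image supports
them:

  show processes cpu history
  show processes cpu sorted | exclude 0.00

The history output graphs per-second CPU for the last minute, so a hit every
10 to 15 seconds should be visible there even between your snapshots.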
>
>
> Anthony D Cennami wrote:
>
>>Are you seeing any BGP or OSPF route withdrawals during this period?
>>
>>Have you validated that your spanning-tree configuration is loop-free?
>>Are you using RSTP?
>>
>>Are you using any non-standard timers on BGP or OSPF?
>>
>>Could you paste a 'sh proc cpu' during the latency increase?
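
To expand on those a bit now: the topology-change counters and the protocol
timers are quick to read off the box (the VLAN ID and peer address below are
placeholders):

  show spanning-tree summary
  show spanning-tree vlan <vlan-id> detail | include ieee|occurr|from
  show ip ospf interface | include Timer intervals
  show ip bgp neighbors <peer-ip> | include hold

The spanning-tree detail lines show how many topology changes have occurred,
how long ago the last one was, and which port it came from; the other two
show the hello/dead and keepalive/holdtime values actually in use.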
>>
>>
>>
>>
>>
>>
>>>Hello all, I am in dire need of some assistance with a new problem that
>>>has started occurring in one of my networks.  We are seeing very high
>>>latency and jitter traversing one of our 6509's that has a Sup720 with
>>>the PFC3BXL and a gig of RAM.  These periods occur every 10 to 15 seconds
>>>and last for about 2 seconds.  This is causing a massive amount of
>>>trouble for our customers and I am at a loss as to what could be the
>>>cause.
>>>
>>>Hardware:
>>>
>>> Cisco 6509, FAN2, Sup720, PFC3BXL, 1 GB of RAM, 1 x 8-port GBIC GigE,
>>>1 x 48-port FE.
>>>
>>>Config:
>>>
>>> IOS version 12.2(18)SXD3, running 2 full BGP routing tables to ISPs
>>>via FE and GigE.  OSPF running for the core network with 5 neighbors.  2
>>>VLANs with ports assigned to them.  6 GRE tunnels for private access to
>>>other offsite POPs.  NAT running with overload to a public VLAN
>>>interface.  Router CPU load averages 50% over 24 hours, peaking around
>>>60% and dipping to around 20%.  Memory usage is at 20%.
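
Given that mix (two full tables plus GRE tunnels plus NAT overload), it is
worth checking how much of the transit traffic is actually staying in
hardware versus being handled by the RP; NAT misses and any tunnel traffic
that falls out of the CEF path will show up there.  Rough checks, with the
interface name below just a placeholder:

  show ip cef summary
  show ip nat statistics
  show interfaces GigabitEthernet1/1 stats
  show ibc

The interface stats show how much traffic the RP has handled in the software
switching paths (processor / route cache), and show ibc, where available,
shows the inband channel that punted packets ride to the RP.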
>>>
>>>Issue:
>>>
>>>Every 5 to 15 seconds we are seeing pings like this traversing the box:
>>>
>>>64 bytes from 147.135.0.16: icmp_seq=2099 ttl=63 time=1.000 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2100 ttl=63 time=1.000 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2101 ttl=63 time=1.033 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2102 ttl=63 time=1.037 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2103 ttl=63 time=1.001 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2104 ttl=63 time=86.345 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2105 ttl=63 time=179.171 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2106 ttl=63 time=178.301 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2107 ttl=63 time=108.800 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2108 ttl=63 time=33.387 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2109 ttl=63 time=1.018 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2110 ttl=63 time=1.014 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2111 ttl=63 time=1.064 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2112 ttl=63 time=1.023 ms
>>>64 bytes from 147.135.0.16: icmp_seq=2113 ttl=63 time=1.042 ms
>>>
>>>I am used to the 60-second BGP update CPU load de-prioritizing ICMP when
>>>pinging the box directly, but I have never seen it affect devices on the
>>>far side of the router.  Any input on this is greatly appreciated.  //db
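
That distinction is the interesting part: ICMP aimed at the box itself is
expected to suffer when the CPU is busy, but traffic crossing the box should
stay in hardware and not notice.  If the pings above cross one of the GRE
tunnels or the NAT boundary they may be taking the software path, and punted
traffic on the Sup720 is also subject to the hardware rate limiters, so these
are worth a look as well:

  show mls rate-limit
  show mls statistics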


