[c-nsp] 6500 SUP720 High Latency and Jitter issues
Phil Rosenthal <pr at isprime.com>
Tue May 24 11:37:02 EDT 2005
Just a guess, but if the CPU is running that high, it's possible it's
spiking to 100% without you noticing it, and any packets that are
CPU-forwarded would be delayed during those spikes. With CPU load that
high, I would guess a lot of your packets are being process switched.
Do you know what is using up all of your CPU?
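If you haven't looked yet, these two usually answer that (stock IOS
commands; the exact output format varies a bit between trains):

   show processes cpu sorted
   show processes cpu history

The number after the slash on the "CPU utilization" line is
interrupt-level CPU, which on a Sup720 is mostly punted traffic being
software switched; a hot "IP Input" process means packets are being
process switched outright, and the history graph will catch short
spikes even if your 5-minute polling misses them.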
As a "for-example" I have some Sup720-pfc3bxl boxes doing >10gbits
and >1mpps (vague numbers, I know) of L3 forwarding with multiple
full tables, each with cpu below 5% measured on a 5 minute average.
We don't use any GRE tunnels or NAT, however I was under the belief
that both were "hardware accelerated".
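It might be worth confirming that the GRE and NAT traffic is actually
staying on the PFC on your box and not being punted (certain tunnel
options and NAT corner cases end up on the RP). Something along these
lines should show it -- commands from memory, substitute one of your
real tunnel interfaces for Tunnel0:

   show interfaces Tunnel0 stats    (lots of "Processor" packets = punted)
   show mls statistics              (hardware switching counters on the PFC)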
Also, what are your traffic levels? If you're doing more than a few
gigabits, it's possible you're maxing out the 8 Gbit shared bus. Which
8-port gig and 48-port FastE modules are you using?
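If I remember right, the box will tell you how loaded the shared bus is
directly, and the fabric too if any of your cards are fabric attached:

   show catalyst6000 traffic-meter
   show fabric utilization all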
-P
On May 24, 2005, at 10:16 AM, Dan Benson wrote:
> Hello all, I am in dire need of some assistance with a new problem that
> has started occurring in one of my networks. We are seeing very high
> latency and jitter traversing one of our 6509s that has a Sup720 with
> the PFC3BXL and a gig of RAM. These episodes occur every 10 to 15
> seconds and last for about 2 seconds. This is causing a massive amount
> of trouble for our customers and I am at a loss as to what could be the
> cause.
>
> Hardware:
>
> Cisco 6509, Fan2, Sup720, PFC3BXL, 1 GB of RAM, 1x 8-port GBIC GigE,
> 1x 48-port FE.
>
> Config:
>
> IOS Version 12.2(18)SXD3, running 2 full BGP routing tables to ISPs
> via FE and GigE. OSPF running for the core network with 5 neighbors.
> 2 VLANs with ports assigned to them. 6 GRE tunnels for private access
> to other offsite POPs. NAT running with overload to a public VLAN
> interface. Router CPU load averages 50% over 24 hours, peaking around
> 60% with lows around 20%. Memory usage is at 20%.
>
> Issue:
>
> Every 5 to 15 seconds we are seeing pings like this traversing the
> box:
>
> 64 bytes from 147.135.0.16: icmp_seq=2099 ttl=63 time=1.000 ms
> 64 bytes from 147.135.0.16: icmp_seq=2100 ttl=63 time=1.000 ms
> 64 bytes from 147.135.0.16: icmp_seq=2101 ttl=63 time=1.033 ms
> 64 bytes from 147.135.0.16: icmp_seq=2102 ttl=63 time=1.037 ms
> 64 bytes from 147.135.0.16: icmp_seq=2103 ttl=63 time=1.001 ms
> 64 bytes from 147.135.0.16: icmp_seq=2104 ttl=63 time=86.345 ms
> 64 bytes from 147.135.0.16: icmp_seq=2105 ttl=63 time=179.171 ms
> 64 bytes from 147.135.0.16: icmp_seq=2106 ttl=63 time=178.301 ms
> 64 bytes from 147.135.0.16: icmp_seq=2107 ttl=63 time=108.800 ms
> 64 bytes from 147.135.0.16: icmp_seq=2108 ttl=63 time=33.387 ms
> 64 bytes from 147.135.0.16: icmp_seq=2109 ttl=63 time=1.018 ms
> 64 bytes from 147.135.0.16: icmp_seq=2110 ttl=63 time=1.014 ms
> 64 bytes from 147.135.0.16: icmp_seq=2111 ttl=63 time=1.064 ms
> 64 bytes from 147.135.0.16: icmp_seq=2112 ttl=63 time=1.023 ms
> 64 bytes from 147.135.0.16: icmp_seq=2113 ttl=63 time=1.042 ms
>
> I am used to ICMP getting de-prioritized during the 60-second BGP
> update CPU load when pinging the box directly, but I have never seen
> this affect devices on the far side of the router. Any input on this
> is greatly appreciated. //db
>