[c-nsp] 1 gbit/sec limit on cat6k vlan interfaces?
Church, Chuck
cchurch at multimax.com
Fri Oct 6 22:25:17 EDT 2006
My first thought would be the EtherChannel load-balancing algorithm. Are
there many source/destination MAC address combinations involved, or just a
few? What do the individual physical member interfaces look like? A rough
sketch of what I'd check is below, plus a note on the bandwidth question
at the bottom.
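
A minimal sketch, assuming a default hash and placeholder member interfaces
(Gi1/1 and Gi1/2 are not from your post, so substitute your actual channel
members):

    ! What fields is the channel currently hashing on?
    show etherchannel load-balance
    show etherchannel summary

    ! Compare the 5-minute rates on the individual member links.
    ! If one member is pegged while the other sits nearly idle,
    ! the hash is polarizing on this traffic mix.
    show interfaces GigabitEthernet1/1 | include rate
    show interfaces GigabitEthernet1/2 | include rate

    ! If most traffic is between a small number of MAC pairs
    ! (e.g. router-to-router), hashing on IP addresses usually
    ! spreads the load better.
    configure terminal
     port-channel load-balance src-dst-ip
    end

If one member is carrying nearly all of the traffic, the hash is the place
to start; if the members are roughly even, it's probably not a
channel-balancing issue.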
Chuck Church
Network Engineer
CCIE #8776, MCNE, MCSE
Multimax, Inc.
Enterprise Network Engineering
Home Office - 864-335-9473
Cell - 864-266-3978
cchurch at multimax.com
> -----Original Message-----
> From: cisco-nsp-bounces at puck.nether.net
> [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Darrell Root
> Sent: Friday, October 06, 2006 9:26 PM
> To: cisco-nsp at puck.nether.net
> Cc: Darrell Root
> Subject: [c-nsp] 1 gbit/sec limit on cat6k vlan interfaces?
>
>
> cisco-nsp,
>
> I've got a pair of cat6k's with sup720-3b running
> s72033-ipservicesk9_wan-vz.122-18.SXF5 working as an L2/L3
> distribution router. Uplinks are 2x2gig
> L3 etherchannels.
> Downlinks to switches are 2x2gig L2 etherchannels. We route
> downstream on vlan interfaces.
>
> The "show run int" and "show int" from one of our downstream
> vlan interfaces are below. During peak time we hit a 1-gig
> input rate (according to "show int").
> I believe we would be exceeding 1-gig if we could. We're
> showing significant drops/flushes. In addition, the bandwidth
> metric is set to 1 gig (default).
>
> Are we dropping packets due to a 1-gig limit on a vlan
> interface? If yes, what can we do to get more than 1-gig
> routing capability on a vlan interface in native-ios? Would
> changing the bandwidth parameter improve things, or is that
> just for routing protocol metrics (as I believe)?
>
> Thanks!
>
> Darrell Root
> darrellrootjunk at nospam.mac.com
>
> interface Vlan300
> ip address 10.2.2.2 255.255.252.0 secondary ip address
> 10.1.1.2 255.255.254.0 no ip redirects no ip unreachables no
> ip proxy-arp mls rp vtp-domain censored mls rp ip standby 10
> ip 10.2.2.1 standby 10 preempt standby 90 ip 10.1.1.1 standby
> 90 preempt end
>
> mac0#sh int vl300
> Vlan300 is up, line protocol is up
> Hardware is EtherSVI, address is 000a.421f.0000 (bia 000a.421f.0000)
> Internet address is 10.1.1.2/23
> MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
> reliability 255/255, txload 36/255, rxload 135/255
> Encapsulation ARPA, loopback not set
> Keepalive not supported
> ARP type: ARPA, ARP Timeout 04:00:00
> Last input 00:00:00, output 00:00:00, output hang never
> Last clearing of "show interface" counters never
> Input queue: 0/75/216102/187872 (size/max/drops/flushes); Total output drops: 0
> Queueing strategy: fifo
> Output queue: 0/40 (size/max)
> 5 minute input rate 901905000 bits/sec, 119162 packets/sec
> 5 minute output rate 144555000 bits/sec, 51058 packets/sec
> L2 Switched: ucast: 8667753272 pkt, 3752889966020 bytes - mcast: 5448507 pkt, 423127315 bytes
> L3 in Switched: ucast: 7129066713 pkt, 6608608952977 bytes - mcast: 0 pkt, 0 bytes mcast
> L3 out Switched: ucast: 2986770591 pkt, 1065254245050 bytes mcast: 0 pkt, 0 bytes
> 7134340235 packets input, 6609029700244 bytes, 0 no buffer
> Received 5226971 broadcasts (927 IP multicasts)
> 0 runts, 0 giants, 1043 throttles
> 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
> 2987090260 packets output, 1065321310317 bytes, 0 underruns
> 0 output errors, 0 interface resets
> 0 output buffer failures, 0 output buffers swapped out
>
>
> _______________________________________________
> cisco-nsp mailing list
> cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
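
On the bandwidth question in the quoted message: as far as I know, the
interface-level "bandwidth" command only feeds routing-protocol metrics and
the txload/rxload figures in "show interface"; it is not a cap on what the
SVI can forward. A minimal sketch (the 2000000 Kbit value is just an example
sized to a 2-gig uplink, not a recommendation):

    configure terminal
     interface Vlan300
      ! Informational only: influences OSPF/EIGRP metrics and the
      ! load/reliability calculation, not the hardware forwarding rate.
      bandwidth 2000000
     end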