[c-nsp] Problems with bandwidth on interface tunnel

Saso Pirnat saso.pirnat at amis.net
Sat Mar 3 14:40:25 EST 2007


Church, Charles wrote:
> Sasa,
>
> 	The router can do IPSec at high speed, but you're also wrapping an IPinIP tunnel around it.  That's most likely what's killing it, since that's done in software.  Try just native IPSec, not inside a tunnel. 
>
>
> Chuck Church
> Network Engineer
> CCIE #8776, MCNE, MCSE
> Multimax, Inc.
> Enterprise Network Engineering
> Home Office - 864-335-9473 
> Cell - 864-266-3978
> cchurch at multimax.com
>
> -----Original Message-----
> From: cisco-nsp-bounces at puck.nether.net [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Saso Pirnat
> Sent: Saturday, March 03, 2007 12:29 PM
> To: cisco-nsp at puck.nether.net
> Subject: [c-nsp] Problems with bandwidth on interface tunnel
>
> Does anybody know what the bandwidth limitation of a tunnel interface is
> on Cisco 1841 routers? I have three different locations connected
> together in a VPN network with IPsec, using tunnel interfaces on three
> Cisco 1841 routers. The WAN connections are 100 Mb/s optical fiber, but
> I can't get more than 8 Mb/s on the VPN connections, even if I increase
> the tunnel bandwidth to 40 Mb/s - the maximum IPsec throughput for this
> VPN module.
>
> default tunnel configuration:
>
> interface Tunnel0
>  description VPN site1
>  ip address 192.168.78.2 255.255.255.252
>  no ip split-horizon
>  tunnel source FastEthernet0/0
>  tunnel destination xxx.xxx.xxx.xxx
>  tunnel mode ipip
>  crypto map do-centrale
> end
>
>
> Tunnel0 is up, line protocol is up
>   Hardware is Tunnel
>   Description: VPN site1
>   Internet address is 192.168.78.2/30
>   MTU 1514 bytes, BW 9 Kbit, DLY 500000 usec,
>      reliability 255/255, txload 112/255, rxload 81/255
>   Encapsulation TUNNEL, loopback not set
>   Keepalive not set
>   Tunnel source xxx.xxx.xxx.xxx (FastEthernet0/0), destination xxx.xxx.xxx.xxx
>   Tunnel protocol/transport IP/IP
>   Tunnel TTL 255
>   Fast tunneling enabled
>   Tunnel transmit bandwidth 8000 (kbps)
>   Tunnel receive bandwidth 8000 (kbps)
>   Last input 00:00:02, output 00:00:02, output hang never
>   Last clearing of "show interface" counters never
>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 22466
>   Queueing strategy: fifo
>   Output queue: 0/0 (size/max)
>   5 minute input rate 46000 bits/sec, 30 packets/sec
>   5 minute output rate 44000 bits/sec, 19 packets/sec
>      898355732 packets input, 4135541064 bytes, 0 no buffer
>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>      742454820 packets output, 1498580302 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 output buffer failures, 0 output buffers swapped out
>
>
> increased bandwidth configuration:
>
> interface Tunnel0
>  description VPN site1
>  bandwidth 40000
>  ip address 192.168.78.2 255.255.255.252
>  no ip split-horizon
>  tunnel source FastEthernet0/0
>  tunnel destination xxx.xxx.xxx.xxx
>  tunnel mode ipip
>  tunnel bandwidth transmit 40000
>  tunnel bandwidth receive 40000
>  crypto map do-centrale
> end
>
>
> Tunnel0 is up, line protocol is up
>   Hardware is Tunnel
>   Description: VPN site1
>   Internet address is 192.168.78.2/30
>   MTU 1514 bytes, BW 40000 Kbit, DLY 500000 usec,
>      reliability 255/255, txload 1/255, rxload 1/255
>   Encapsulation TUNNEL, loopback not set
>   Keepalive not set
>   Tunnel source xxx.xxx.xxx.xxx (FastEthernet0/0), destination xxx.xxx.xxx.xxx
>   Tunnel protocol/transport IP/IP
>   Tunnel TTL 255
>   Fast tunneling enabled
>   Tunnel transmit bandwidth 40000 (kbps)
>   Tunnel receive bandwidth 40000 (kbps)
>   Last input 00:00:00, output 00:00:01, output hang never
>   Last clearing of "show interface" counters never
>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 22463
>   Queueing strategy: fifo
>   Output queue: 0/0 (size/max)
>   5 minute input rate 21000 bits/sec, 17 packets/sec
>   5 minute output rate 41000 bits/sec, 14 packets/sec
>      898343729 packets input, 4132936756 bytes, 0 no buffer
>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>      742446090 packets output, 1496397797 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 output buffer failures, 0 output buffers swapped out
>
>
>
> br, saso
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>   
Thanks for the advice. I was already thinking along those lines, but I
wasn't sure whether this is a limitation of the tunnel interface or
something else. I have now tested it by lowering the bandwidth
parameters, and if I set the limit to 6 Mb/s, that is exactly what I
get. So I presume that 8 Mb/s is both the upper limit for bandwidth on a
tunnel interface and the default setting.
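
I will probably try your suggestion of plain IPsec without the IPinIP
tunnel next. A rough sketch of what I have in mind is below - the LAN
subnets, pre-shared key and transform-set name are only placeholders,
not my real configuration:

crypto isakmp policy 10
 encr aes 256
 authentication pre-share
 group 2
crypto isakmp key MyPreSharedKey address xxx.xxx.xxx.xxx
!
crypto ipsec transform-set TS-SITE1 esp-aes 256 esp-sha-hmac
!
! match the LAN-to-LAN traffic directly, no IPinIP encapsulation
ip access-list extended VPN-SITE1
 permit ip 192.168.10.0 0.0.0.255 192.168.20.0 0.0.0.255
!
crypto map do-centrale 10 ipsec-isakmp
 set peer xxx.xxx.xxx.xxx
 set transform-set TS-SITE1
 match address VPN-SITE1
!
interface FastEthernet0/0
 crypto map do-centrale

If the encryption then stays on the VPN module instead of going through
the software tunnel path, the throughput should show whether the IPinIP
encapsulation really was the bottleneck.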

br saso




