[c-nsp] SA-VAM2+ usage problem?

Luan Nguyen luan at netcraftsmen.net
Tue Sep 30 15:21:23 EDT 2008


Oh yeah,
Fragmentation definitely is problematic.  The worst case is when a packet
has to be split into two fragments to fit a smaller interface MTU, and one
of those fragments is still large enough that it must be fragmented again
after it has been encrypted.  The IPSec peer then has to reassemble the
packet before decryption.  This "double fragmentation" increases latency
and lowers throughput.  Also, reassembly is process-switched, so there is a
CPU hit on the receiving router whenever this happens.
I usually put ip mtu 1420 on the tunnel interface to compensate for the
GRE + IPSec tunnel-mode overhead, and that seems to work great.  But one of
my senior engineers, Marty, told me that ip tcp adjust-mss works better
because it also compensates when the host implements PMTUD (sets DF) but
then ignores the ICMP packet-too-big response from the router.  And only
the TCP SYN packets have to be modified, not every packet.  Moreover, you
don't have to worry much about UDP-based apps, since almost all of them
select a segment size much smaller than a 1500-byte MTU.  The old default
was 512 bytes of payload (a 576-byte IP packet).  Some apps improve
throughput by upping that to 1024 bytes.
The same byte-size trade-off applies to TCP: the smaller the packet size,
the worse the throughput gets.  If your traffic is around 100 - 200 bytes
or less, you are lucky to get 20Mbps at 90% CPU :)
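To illustrate, a minimal sketch of both knobs on a GRE-over-IPSec tunnel
(the interface name and MSS value are illustrative, not from Laszlo's
config; 1380 = 1420 minus 40 bytes of TCP/IP headers):

```
interface Tunnel0
 ! 1500 - 24 (GRE: 20 outer IP + 4 GRE) minus roughly 56 bytes of
 ! ESP tunnel-mode overhead leaves ~1420 as a safe pre-encryption MTU
 ip mtu 1420
 ! rewrite the MSS option in TCP SYNs so hosts never send segments
 ! that would need fragmentation, even if they ignore ICMP too-big
 ip tcp adjust-mss 1380
```

The adjust-mss value only affects TCP; UDP apps are expected to stay under
the limit on their own, per the sizes above.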

Luan

----------------------------------------------------------------------------
Luan Nguyen
Senior Network Engineer
Chesapeake NetCraftsmen, LLC.
www.NetCraftsmen.net
----------------------------------------------------------------------------


-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net
[mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Peter Rathlev
Sent: Tuesday, September 30, 2008 2:07 PM
To: Nemeth Laszlo
Cc: cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] SA-VAM2+ usage problem?

Hi Laszlo,

On Tue, 2008-09-30 at 15:55 +0200, Nemeth Laszlo wrote:
> I have two 7201 (c7200p-advipservicesk9-mz.124-15.T3.bin) routers with 
> SA-VAM2+ modules.
> 
> I have a tunnel interface between these routers. If I push ~24Mbit/sec 
> of traffic into this tunnel, the routers' CPUs go to 90%. The 
> performance was the same without the VAM2+ too. So isn't the VAM2+ 
> module being used?

We currently have an NPE-G1 with SA-VAM2 (not +) doing more or less the
same thing, and it uses ~20% CPU doing about 20 Mbit/s through the
tunnel. As far as I can see it's 50/50 interrupt and process switching,
probably the GRE part that's handled in the slow path. I'm not sure, but
a GRE configuration like this and CEF might not be best friends.

When you send the 24 Mbit/s of traffic, what does your "show processes
cpu" say? The 7201 should be NPE-G2 class, so you shouldn't get worse
results than the above.

We use 12.4 mainline (IP IPSEC 3DES) by the way, that may make a
difference.
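To check whether the crypto hardware is actually taking the load, the
standard IOS crypto engine commands should tell you (exact output format
varies by IOS version):

```
! shows which crypto engine (onboard software vs. the VAM2+) is active
show crypto engine configuration
! lists active IPSec SAs and which engine is processing them
show crypto engine connections active
```

If the SAs show up against the software engine rather than the module in
slot 1, that would explain the CPU load.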

> Our routers' configs are the same, only the IP addresses differ. The 
> Tunnel interface is very important, because I run OSPF over it.
> 
> vpn0# sh pas vam interface
> VPN Acceleration Module Version II+ in slot : 1
> 	Statistics for Hardware VPN Module since the last clear
> 	of counters 4294967 seconds ago
>     988980327 packets in                   988980327 packets out
> 302199518411 bytes in                  318057273220 bytes out
>           230 paks/sec in                        230 paks/sec out
>           562 Kbits/sec in                       592 Kbits/sec out
>             0 pkts compressed                      0 pkts not compressed
>             0 bytes before compress                0 bytes after compress
>         1.0:1 compression ratio                1.0:1 overall
>        526096 commands out                    526096 commands acknowledged
> 	Last 5 minutes:
>          2854900 packets in                     2854900 packets out 
>             9516 paks/sec in                       9516 paks/sec out 
>         24058078 bits/sec in                   25240088 bits/sec out 
> 
> In this last line the 24058078 bit/s traffic is normal, it is the 
> aggregated traffic on my Tunnel0 interface. But the "562 Kbits/sec in" 
> and "592 Kbits/sec out" are too small, I think they should be ~24000 
> Kbit/sec.

I think the small numbers are the averages since you last cleared
counters. Are they still too small?

> interface Tunnel0
>   description VPN0-VPN1
>   ip address 10.0.0.1 255.255.255.252
>   ip ospf cost 100
>   load-interval 30
>   keepalive 2 2
>   tunnel source 192.168.0.1
>   tunnel destination 192.168.1.1
> !
> interface GigabitEthernet0/1.2
>   description VPN1
>   encapsulation dot1Q 2
>   ip address 192.168.0.1
>   no ip redirects
>   no ip proxy-arp
>   ip nat outside
>   no ip virtual-reassembly
>   crypto map vpnmap
> !

Fragmentation could be problematic too, so we use "ip tcp adjust-mss" on
both the inside interface and the tunnel interface to compensate for the
GRE + IPSec overhead.
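Something like this, roughly (interface names and the MSS value are
illustrative, not taken from our config):

```
interface Tunnel0
 ip tcp adjust-mss 1380
!
! clamping on the LAN-facing interface as well catches the SYNs
! in both directions
interface GigabitEthernet0/0
 ip tcp adjust-mss 1380
```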

Regards,
Peter


_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/