[c-nsp] How to terminate 100.000 IPsec VPN clients?

P C pc50000 at gmail.com
Tue Sep 6 20:01:14 EDT 2011


Off topic:  anyone have a VPN load generator?  I've always had a use
for such a thing.

Anyway, if you use Cisco products and you need RA VPN, your best bet
is probably a Cisco ASA 5540/5580, which is good for either 5k or 10k
sessions per unit.  If you need stateful failover, buy two and run an
active/standby stateful pair.  If you don't, save the money and let
the clients connect to an alternate box.
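
If you do go the stateful-failover route, the active/standby pair is
quick to stand up.  A minimal sketch, with the interface name and
addresses as made-up placeholders (not from any real deployment):

  ! illustrative only: pick a spare interface for the combined
  ! failover/state link
  failover lan unit primary
  failover lan interface FOLINK GigabitEthernet0/3
  failover link FOLINK GigabitEthernet0/3
  failover interface ip FOLINK 192.168.255.1 255.255.255.252 standby 192.168.255.2
  failover key <shared-secret>
  failover

Mirror the same thing on the secondary (with "failover lan unit
secondary") and it will sync the config and the IPsec/session state.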

Run 8.0(5) (interim ~16) code or later.  I've submitted, and had
corrected, a fair number of "scalability" issues with IPsec (mostly
CPU hogs and timer issues with 4k+ connections), and now that they're
all fixed it runs great.

With that number of connections I assume we're looking at some sort of
machine-to-machine connectivity rather than users, and therefore I'm
assuming your limit is sessions, not bandwidth.  These boxes will hold
a hell of a lot more sessions if Cisco would unlock the session limit
a bit (I've got 5580s with 6k VPNs using ~3% CPU):
                           Active : Cumulative : Peak Concurrent : Inactive
  IPsec Remote Access   :    5808 :    2110153 :            5843
CPU utilization for 5 seconds = 3%; 1 minute: 3%; 5 minutes: 3%

So don't worry about CPU usage...
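
(If you want to pull the same numbers off your own boxes, those
figures are the sort of thing you get from the usual show commands;
exact output format varies a bit by release:

  show vpn-sessiondb summary
  show cpu usage
)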

But I digress... the Cisco BU wants to sell more hardware on this one.
 I used to terminate ~1500 per PIX 515E (stack 'em cheap; it worked
great for low-bandwidth applications), and then the ASA 5510 (its
replacement) was license-locked to ~150 or so.  Cost per VPN shot up
600% overnight.  Nasty.

Clients authenticate via RADIUS (aaa-server) based on the tunnel-group
they hit.  You can download IP assignments via RADIUS.  Alternatively,
you can use certificates.  Those are really your only two scalable
options.
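
A minimal sketch of the RADIUS + tunnel-group side; all of the names,
addresses, and pool ranges below are placeholder examples, not from an
actual config:

  aaa-server RA-RADIUS protocol radius
  aaa-server RA-RADIUS (inside) host 192.0.2.10
   key <radius-shared-secret>

  ip local pool RA-POOL 10.10.0.1-10.10.15.254 mask 255.255.240.0

  tunnel-group RA-TUNNEL type remote-access
  tunnel-group RA-TUNNEL general-attributes
   authentication-server-group RA-RADIUS
   address-pool RA-POOL
  tunnel-group RA-TUNNEL ipsec-attributes
   pre-shared-key <group-key>

Per-client static addresses can then come back from RADIUS
(Framed-IP-Address) instead of the local pool if you need them.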

With your quantity of sessions I would advise against using the active
VPN load balancing built into the ASA to balance between units.  It'll
melt if you put it in a scenario with thousands of concurrent Phase 1
SAs in flight.  Been there, done that, won't do it again.  The ASAs
themselves, however, will handle quite a few simultaneous negotiations
(debug menu ike 28 1).  This is where the ASA platform really shines;
IOS, on the other hand, does this crypto negotiation on its slow MIPS
CPU, and while it handles the crypto traffic just fine once the SAs
are established, it melts when they all try to establish at once, such
as after an outage or maintenance, because it's a CPU matter.  This is
more of an issue if your clients are high-latency and run a "long"
negotiation (e.g. GPRS), but it was a significant problem with IOS in
the deployment I worked on during our trials.

# debug menu ike 28 1
IKE simultaneous P1 negotiations Stats:
  current negotiation count   = 0
  device current limit        = 1000 (via debug override)
  device default limit        = 2000
  highwater negotiation count = 1178

Load-distribute clients with a dedicated load balancer, or better yet,
if your deployment permits it, use round-robin DNS or client-side
randomization and ditch the LB entirely.  Consider IP numbering issues
as well.  Your life is made much easier for IP distribution if you can
assign a summarizable IP pool to each ASA and pull dynamic addresses
out of that pool.  If you run statics, then the client must connect to
the ASA that holds that summary (or the alternate ASAs must be
configured for RRI in case the client lands on them).  Reverse-route
injecting 100,000 /32 routes doesn't scale, either.
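
A rough sketch of the per-box pool idea (prefixes and next-hop
addresses are made-up examples): give each ASA its own summarizable
block and point one static at it from the inside, instead of letting
RRI inject a /32 per client:

  ! on ASA-1: its own dedicated /20 pool
  ip local pool ASA1-POOL 10.20.0.1-10.20.15.254 mask 255.255.240.0

  ! on the inside/core router: one summary static per ASA instead
  ! of 100k RRI /32s
  ip route 10.20.0.0  255.255.240.0 10.0.0.11
  ip route 10.20.16.0 255.255.240.0 10.0.0.12

Statics that fall outside a box's summary are where RRI (the
set reverse-route knob on the crypto map) comes back into play, and
that's exactly the part that doesn't scale to 100k routes.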

Those are the major issues.  It does work, but it will take a lot of
boxes to do it.  Maybe talk to the BU about getting the VPN limit
raised to 20k or more on those 5580s?  I know they'd handle it just
fine in this application.

Good luck!

On Fri, Sep 2, 2011 at 7:55 AM, Florian Bauhaus
<f.bauhaus at portrix-systems.de> wrote:
> Hello,
>
> What would be the best way to terminate 100k IPsec VPN clients?
>
> Use a 6500/7600 with appropriate modules? Put 10 ASA5580-20 in a rack?
> How to manage the whole thing?
> The clients won't make a lot of traffic so throughput isn't really a matter.
>
> I already got a few ideas on how to do this but I would like to know if
> someone else got experience with this and could help me out a bit.
>
>
> Best regards,
> Florian
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>


