[c-nsp] DMVPN scalability question on the 28XX ISRs

Engelhard engel.labiro at gmail.com
Mon Apr 19 20:06:06 EDT 2010


Any suggestions for 2000+ spokes with 4 headends? The headends will be
ASR100x boxes. We are thinking of putting a load balancer (ACE) in front
of the ASRs to spread the DMVPN traffic across them. Is that a sound
design?
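For what it's worth, each headend terminates its own mGRE tunnel
regardless of what sits in front of it. A minimal sketch of one
headend's tunnel is below; the addresses, interface name, tunnel key,
and IPsec profile name are illustrative assumptions only:

  interface Tunnel0
   description DMVPN headend 1 of 4
   ip address 10.0.0.1 255.255.252.0
   no ip redirects
   ! let spokes register dynamically and receive multicast replication
   ip nhrp map multicast dynamic
   ip nhrp network-id 1
   ip nhrp holdtime 600
   tunnel source GigabitEthernet0/0/0
   tunnel mode gre multipoint
   tunnel key 1
   ! assumes an IPsec profile named DMVPN-PROF is defined elsewhere
   tunnel protection ipsec profile DMVPN-PROF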


Sent from my iPhone

On 2010/04/19, at 23:28, Rodney Dunn <rodunn at cisco.com> wrote:

> My suggestion is to run code that supports dynamic BGP neighbors at
> the hub and run BGP over the mGRE to the spokes, with EIGRP as the
> next choice.
>
> Rodney
>
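A minimal sketch of the dynamic-neighbor setup Rodney describes, with
the tunnel subnet (10.0.0.0/22), AS number, and session limit all
assumed for illustration:

  router bgp 65000
   ! accept BGP sessions from any spoke whose source address falls in
   ! the tunnel subnet -- no per-spoke neighbor statements at the hub
   bgp listen range 10.0.0.0/22 peer-group SPOKES
   bgp listen limit 2000
   neighbor SPOKES peer-group
   neighbor SPOKES remote-as 65000
   !
   address-family ipv4
    neighbor SPOKES activate

Spokes still point an ordinary neighbor statement at the hub's tunnel
address; only the hub side is freed from per-spoke configuration.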
>
> On 4/18/10 7:14 AM, Anton Kapela wrote:
>>
>> On Apr 17, 2010, at 8:54 PM, Erik Witkop wrote:
>>
>>> We are considering DMVPN for a WAN network with (92) Cisco 870
>>> remote routers and (2) Cisco 2851 headend routers. My concern is
>>> the scalability of 92 connections to each 2851. Assuming we have
>>> AIM modules in each 2851 router, do you think that would be sized
>>> properly?
>>
>> While you have the chance, it'd be wise to toss in as much DRAM as
>> the 2851 can take. The reasons are many, but mostly you'll want
>> plenty (i.e. 20+ megabytes) of free RAM to cover your needs during
>> transient conditions -- i.e. when all the IPsec endpoints flap, time
>> out, then re-establish, or when 400 OSPF "spoke" neighbors time out,
>> flap, and re-establish. If memory serves, advipservices 12.4T and
>> 15.0 on the 28xx leave a bit less than 100 megs free after booting
>> (on a 256M box); expect another 20 to 30M consumed once protocols +
>> IPsec endpoints + full config are up and active. You're probably
>> safe with 256, but it's not worth risking a surprise reload that
>> more DRAM could have prevented.
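If you want to keep an eye on that headroom in practice, the stock
memory commands are enough (exact output format varies by release):

  ! processor-pool totals: used vs. free once everything is up
  show memory statistics
  ! largest consumers, sorted -- worth watching while endpoints
  ! re-register after a flap
  show processes memory sorted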
>>
>> My overall experience with DMVPN (i.e. mGRE + IPsec tunnel
>> protection) has been positive, and I find that the AIM-VPN modules
>> or SAs (on 7200s I've used the SA-VAM and its cousins) are often the
>> first 'wall' you hit -- i.e. the maximum number of concurrent crypto
>> sessions is reached *well before* the platform's IDB limit. So the
>> first thing to investigate is how many sessions your installed AIM
>> can support -- it may be far fewer than you expected, and fewer than
>> you require.
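To find that wall on a specific box, something along these lines is
the usual starting point (the 150-session cap is only a placeholder
value -- use whatever your AIM's actual limit is):

  ! what crypto hardware the router is actually using
  show crypto engine brief
  ! hardware crypto engine session usage on boxes with AIM/onboard
  ! accelerators
  show crypto eli
  ! current IKE/IPsec sessions
  show crypto session
  ! optionally refuse new IKE negotiations before the hardware chokes
  crypto call admission limit ike sa 150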
>>
>> As for GRE and encaps processing on the 28xx, it performs nearly
>> the same (fragment processing aside) as native IP forwarding on the
>> box. In practice, I see 80+ Mbit/s usable (or 9 to 12 kpps) out of
>> an 1841 doing GRE or IPIP encaps without crypto -- and a 2851 will
>> usually push 100 Mbit/s+ doing the same. Again, per-session crypto
>> performance and the max session count will be determined by the
>> AIM, so YMMV, etc.
>>
>> Generally, the Cisco guidelines for DMVPN are sane, and my
>> experience so far doesn't run counter to them. One definite wall
>> I'd recommend you find before deployment is how many protocol
>> neighbors (i.e. OSPF, IS-IS, or EIGRP) you can bring up, flap, and
>> re-establish in a timeframe you're happy with. That is to say, I
>> highly recommend labbing up a config that emulates 100, 200, 300,
>> etc. OSPF neighbor sessions between the 28xxs -- you'll want to
>> know for certain that your routers can both hold up the number of
>> neighbors you need *and* recover in a timely fashion after they
>> flap. So, while your platform may be more than adequate for your
>> WAN-facing bandwidth needs to the spoke sites, you may find that
>> the 2851 CPU is underwhelming when endpoints flap/register/converge
>> -- depending, again, on the scale you're taking things to.
>>
>> -Tk
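If you do lab that up, the knobs that matter most on the hub's mGRE
interface are usually the OSPF network type and timers. A sketch,
assuming OSPF process 1, area 0, and hub-and-spoke-only traffic:

  interface Tunnel0
   ! point-to-multipoint avoids DR election over the NBMA cloud; one
   ! common choice for hub-and-spoke DMVPN
   ip ospf network point-to-multipoint
   ! slower hellos cut the steady-state per-neighbor work at the hub
   ! when you have hundreds of spokes
   ip ospf hello-interval 30
   ip ospf dead-interval 120
   ip ospf 1 area 0

Then flap the lab spokes and time how long the hub takes to bring
every neighbor back to FULL.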

