[c-nsp] ARP behavior with HSRP and static NAT
Anton Kapela
tkapela at gmail.com
Thu Oct 4 21:46:11 EDT 2012
On Thu, Oct 4, 2012 at 4:17 PM, <evan at kisbey.net> wrote:
> The routers involved have HSRP on both the WAN and the LAN-side
> interfaces, and NAT across the pair with identical NAT statements on each.
> To force a full failover in case a link is lost on a single interface,
> there's a track running for each HSRP interface on its opposite (LAN
> versus WAN) on the primary router, decrementing its priority and letting
> the standby router know that it's time to preempt when the primary goes
[snip]
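(For reference, the tracked-HSRP arrangement described above usually takes this shape -- a sketch with made-up interface names, group numbers, and decrements; the decrement just has to exceed the priority gap between the routers:)

```
interface GigabitEthernet0/0
 description LAN side
 standby 1 ip 10.0.0.1
 standby 1 priority 110
 standby 1 preempt
 ! if the WAN interface drops, shed 20 priority so the peer preempts
 standby 1 track GigabitEthernet0/1 20
!
interface GigabitEthernet0/1
 description WAN side
 standby 2 ip 192.0.2.1
 standby 2 priority 110
 standby 2 preempt
 ! likewise, track the LAN side from the WAN group
 standby 2 track GigabitEthernet0/0 20
```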
I've tried as much as you describe, and never got it to work right.
Having de-sync'd NAT state tables is never any fun, ever -- for either
inbound- or outbound-originated sessions. I wouldn't recommend anyone
roll two autonomous hosts doing NAT in such a fashion.
I'd recommend checking out something else: IOS SNAT (stateful NAT).
We've used this in active-active configs, with routing protocols, BFD,
etc. cranked up, and it was mostly great. Details here:
http://www.cisco.com/en/US/products/sw/iosswrel/ps1839/products_white_paper09186a0080118b04.shtml
http://www.cisco.com/en/US/docs/ios/12_3t/12_3t7/feature/guide/gtsnatay.html
http://www.cisco.com/en/US/docs/ios/12_4/12_4_mainline/snatsca.html
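A minimal SNAT pairing in HSRP-redundancy mode looks roughly like the
below -- names, addresses, and the route-map are made up for
illustration; check the docs above for the exact syntax on your
release (the standby peer mirrors this with its own addressing):

```
interface GigabitEthernet0/1
 ip address 10.1.1.1 255.255.255.0
 standby 1 ip 10.1.1.254
 standby 1 name SNATHSRP
!
! tie the stateful NAT process to the HSRP group by name
ip nat Stateful id 1
 redundancy SNATHSRP
  mapping-id 10
!
ip nat pool SNATPOOL 192.0.2.10 192.0.2.20 prefix-length 24
! mapping-id marks these translations for replication to the peer
ip nat inside source route-map NAT-MAP pool SNATPOOL mapping-id 10 overload
```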
In my use case, we did HSRP floating addr facing 'ISP' side, and
originated 0/0 via various protocols towards 'inside' gear/links/etc
-- we did not use HSRP in any capacity facing the 'inside.' There's no
real issue in doing HSRP on inside + outside, but it's jankier than it
needs to be. If your ISP can do eBGP/private-AS stuff and let you
originate a given bit of address space, I'd strongly suggest that
over HSRP entirely.
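(If they will, the customer side is just a plain eBGP session
originating your block -- a sketch using documentation ASNs and
prefixes, not real values:)

```
! anchor the prefix so the network statement has a matching route
ip route 198.51.100.0 255.255.255.0 Null0
!
router bgp 64512
 neighbor 203.0.113.1 remote-as 65000
 network 198.51.100.0 mask 255.255.255.0
```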
One takeaway from our lab/test work is worth special mention: the SNAT
state sync traffic seems to have higher cpu priority than HSRP, but
not higher than BGP, OSPF, and BFD. That is, if one had to 'order' the
relative CPU priority, it looked like: bfd, ospf, bgp, snat, hsrp --
which is kinda 'eh.' We tested 15.0, 15.1M and T, and 15.2T on a broad
set of hardware (isr 2800/2900 g1's, g2's, npe-g1, and the 7201).
Net result -- when 'flow dense' traffic (i.e., icmp/tcp/etc. scans of
the entire internet) or other abusive levels of state-inducing traffic
was sourced from test systems on the 'inside,' the SNAT replication
activity would consume appreciable CPU, but would block reception and
processing of HSRP hellos between the active/standby routers.
As all the IGP's would stay up, everything looked ok 'inside,' and so
the routing topology was stable.
Of course, bouncing HSRP active/standby events facing the 'ISP'
outside network had a pretty horrible result. During such abuse
tests, there would be rolling/cycling instability while both routers
'claimed' they were 'the active master' towards the ISP gear; this
caused the usual nonsense one might see with >1 host claiming ARP
responses for a given layer 3 address. YMMV, AMFYOY, etc.
Some relief was found with the misleadingly named "Rate Limiting NAT
Translation" feature -- it's not a rate at all, just a simple cap on
the max translations allowed for a given "inside" or "outside"
source IP:
http://www.cisco.com/en/US/docs/ios/12_3t/12_3t4/feature/guide/gt_natrl.html#wp1027129
In practice, a limit of a few tens of k flows per source IP kept
things reasonably stable under high-rate nat churn.
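(That per-source cap looks roughly like this on releases with the
feature -- exact keyword forms vary by release, so check the guide
above:)

```
! cap translations per inside host, and overall table size
ip nat translation max-entries all-host 20000
ip nat translation max-entries 100000
```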
Perhaps if there were a "nat table miss packets per second per source
IP" knob (like a microflow exceptions policer in CoPP or a policy
map), we'd have had better luck under abusive workloads with SNAT, but
alas, we're not offering to pay for one, and it would seem nobody else
has yet.
All in all, for the typical case, SNAT is pretty great -- nothing
breaks when a link/box/route is down, and nobody has to know that
state migrated between border devices.
Best,
-Tk