[c-nsp] GRE and NHRP; avoiding routing through the hub

Bob Tinkelman bob at tink.com
Tue Apr 15 09:19:30 EDT 2008


> Why not nail up a separate GRE tunnel between the two spokes and let the
> spoke routers handle the routing, completely separate from your MP GRE?

Sorry, I should have mentioned this.  Unlike my lab environment,
the real-life spokes will have dynamic IP addresses.  I need a
way for them to find each other.  That's why I was using the
multipoint tunnel with NHRP to a hub with a fixed IP address.
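
(For reference, the hub end of that multipoint tunnel is just the
usual mGRE/NHRP interface -- roughly the following; the hub's
source interface is only illustrative here, and the key/network-id
are masked the same way as in the configs below:)

  | interface Tunnel202
  |  description Dynamic multi-point tunnel (hub side)
  |  ip address 69.48.189.1 255.255.255.0
  |  ip nhrp authentication xxxxxxxxxxxxxxxx
  |  ip nhrp map multicast dynamic
  |  ip nhrp network-id xxxxxxxxxxxxxxxxx
  |  ip nhrp holdtime 300
  |  tunnel source FastEthernet0/0
  |  tunnel mode gre multipoint
  |  tunnel key xxxxxxxxxxxxxxxxxxx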

- Bob




> Bob Tinkelman wrote:
> > This is a cry for some design advice.
> >
> > I have an existing configuration using multipoint GRE
> > tunnels and NHRP to implement backup connections from
> > customer sites through DSL and cable-modem networks.
> >
> > It's straightforward and has been fulfilling its purpose.
> > There is no encryption or IPsec involved.  There is almost
> > no spoke-to-spoke traffic.
> >
> >
> > I have a new requirement that I'm trying to figure out how
> > to implement.  My first two tries failed, and so I decided
> > to ask for advice.
> >
> >
> > I have a customer with two sites, each with (among other
> > things) a LAN with private IP addresses, and we'd like to
> > route between these nets "behind the firewalls", and without
> > any NAT-ing.
> >
> > A standard solution for this type of configuration seems to
> > be to use IPsec, as in:
> >   http://www.cisco.com/warp/public/471/dcmvpn.html
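> >
> > (As I read that example, the relevant piece is the same kind of
> > multipoint GRE tunnel, with an IPsec profile applied to it --
> > very roughly, with illustrative names, plus the usual isakmp
> > policy and pre-shared key:)
> >
> >   | crypto ipsec transform-set TS esp-3des esp-sha-hmac
> >   |  mode transport
> >   | !
> >   | crypto ipsec profile DMVPN-PROF
> >   |  set transform-set TS
> >   | !
> >   | interface Tunnel0
> >   |  tunnel mode gre multipoint
> >   |  tunnel protection ipsec profile DMVPN-PROF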
> >
> > This is almost what I want.  Traffic from spoke to spoke
> > will go over a dynamic tunnel set up between the spokes,
> > when the hub recognizes the need.
> >
> > That's great.  But it depends on the hub's knowing where to
> > route these packets, which it does by having a copy of the
> > routing table being used by the spokes.  In the referenced
> > example it participates in the OSPF routing.  That's a non-
> > starter; my hub router shouldn't get involved in customer-
> > private routing.
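> >
> > (That is, in the referenced example the hub itself runs OSPF over
> > the multipoint tunnel -- something like
> >
> >   | router ospf 1
> >   |  network 69.48.189.0 0.0.0.255 area 0
> >
> > with illustrative numbers -- and so it ends up carrying the
> > spokes' private 10.x prefixes, which is just what I want to
> > avoid.)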
> >
> > I thought of having a separate multi-point tunnel for each
> > customer, possibly each in its own VRF-lite routing
> > table, but that seems like a lot of work, and I'd still
> > prefer to keep the customer private routes off my router,
> > even if they're segregated in separate tables.
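> >
> > (Very roughly, that would mean something like this on the hub,
> > per customer -- all names and numbers below are illustrative:)
> >
> >   | ip vrf CUST-A
> >   |  rd 65000:1
> >   | !
> >   | interface Tunnel501
> >   |  description Per-customer multipoint tunnel, VRF-lite
> >   |  ip vrf forwarding CUST-A
> >   |  ip address 10.255.1.1 255.255.255.0
> >   |  ip nhrp authentication yyyyyyyy
> >   |  ip nhrp map multicast dynamic
> >   |  ip nhrp network-id yyyyy
> >   |  tunnel source FastEthernet0/0
> >   |  tunnel mode gre multipoint
> >   |  tunnel key yyyyy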
> >
> >
> >
> >
> > In a lab environment, I connected two 2651XM routers to DSL/
> > Cable-modem services and configured a multipoint tunnel
> > using a central server elsewhere on our net.
> >
> > Spoke 151
> >
> >   | interface Tunnel202
> >   |  description Dynamic multi-point tunnel
> >   |  ip address 69.48.189.151 255.255.255.0
> >   |  ip nhrp map 69.48.189.1 165.254.97.2
> >   |  ip nhrp authentication xxxxxxxxxxxxxxxx
> >   |  ip nhrp map multicast 165.254.97.2
> >   |  ip nhrp map multicast 165.254.147.2
> >   |  ip nhrp map 69.48.189.1 165.254.97.2
> >   |  ip nhrp map 69.48.189.2 165.254.147.2
> >   |  ip nhrp network-id xxxxxxxxxxxxxxxxx
> >   |  ip nhrp holdtime 300
> >   |  ip nhrp nhs 69.48.189.1
> >   |  ip nhrp nhs 69.48.189.2
> >   |  ip nhrp server-only
> >   |  ip virtual-reassembly
> >   |  no ip route-cache cef
> >   |  no ip route-cache
> >   |  no ip mroute-cache
> >   |  delay 1000
> >   |  tunnel source FastEthernet0/1
> >   |  tunnel mode gre multipoint
> >   |  tunnel key xxxxxxxxxxxxxxxxxxx
> >   ...
> >   | ip route 165.254.97.2 255.255.255.255 FastEthernet0/1 dhcp
> >   | ip route 165.254.147.2 255.255.255.255 FastEthernet0/1 dhcp
> >
> > Spoke 152
> >
> >   | interface Tunnel202
> >   |  description Dynamic multi-point tunnel
> >   |  ip address 69.48.189.152 255.255.255.0
> >   .... the rest is almost the same ...
> >
> >
> > The above tunnel works.  I can ping from 69.48.189.151 to .152.
> >
> > Then I added LANs for testing
> >
> > Spoke 151:
> >
> >   | interface FastEthernet0/0.151
> >   |  description Test LAN - 10.151.0.0/24
> >   |  encapsulation dot1Q 151
> >   |  ip address 10.151.0.1 255.255.255.0
> >
> > Spoke 152:
> >
> >   | interface FastEthernet0/0.152
> >   |  description Test LAN - 10.152.0.0/24
> >   |  encapsulation dot1Q 152
> >   |  ip address 10.152.0.1 255.255.255.0
> >
> >
> > Note that traceroute from one spoke to the other actually
> > goes through the hub:
> >
> >   | test-151-westel#tr 69.48.189.152
> >   |
> >   | Type escape sequence to abort.
> >   | Tracing the route to test-152.tink.com (69.48.189.152)
> >   |
> >   |   1 tu-202.gw1.nycmnycz.ispnetinc.net (69.48.189.1) 32 msec 32 msec 32 msec
> >   |   2 test-152.tink.com (69.48.189.152) 48 msec *  48 msec
> >   | test-151-westel#
> >
> > So, the idea of using static routes of the form
> >    ip route 10.152.0.0 255.255.255.0 Tunnel202 69.48.189.152
> > won't work, as packets destined for the 10-networks would go
> > to the hub, which wouldn't know what to do with them.
> >
> >
> > I tried to solve the problem by encapsulating this traffic
> > in packets which the hub would know how to handle.  I
> > figured I could use either IPsec or GRE, running over the
> > existing GRE tunnel.
> >
> > Here's my (failed) attempt using GRE.
> >
> > I defined a new tunnel to run inside the existing tunnel:
> >
> > Spoke 151:
> >
> >   | interface Tunnel303
> >   |  description Point-to-Point tunnel between Sites
> >   |  ip address 10.255.255.1 255.255.255.252
> >   |  tunnel source Tunnel202
> >   |  tunnel destination 68.49.189.152
> >
> >   | ip route 10.152.0.0 255.255.255.0 Tunnel303 10.255.255.2
> >
> > Spoke 152:
> >
> >   | interface Tunnel303
> >   |  description Point-to-Point tunnel between Sites
> >   |  ip address 10.255.255.2 255.255.255.252
> >   |  tunnel source Tunnel202
> >   |  tunnel destination 68.49.189.151
> >
> >   | ip route 10.151.0.0 255.255.255.0 Tunnel303 10.255.255.1
> >
> > And right there I was stymied.  The above doesn't work.
> >
> > The tunnel looks OK,
> >
> >   | test-151-westel#sho int t303
> >   | Tunnel303 is up, line protocol is up
> >   |   Hardware is Tunnel
> >   |   Description: Point-to-Point tunnel between Sites
> >   |   Internet address is 10.255.255.1/30
> >   |   MTU 1514 bytes, BW 100 Kbit, DLY 500000 usec,
> >   |      reliability 255/255, txload 1/255, rxload 1/255
> >   |   Encapsulation TUNNEL, loopback not set
> >   |   Keepalive not set
> >   |   Tunnel source 69.48.189.151 (Tunnel202), destination 68.49.189.152
> >   |   Tunnel protocol/transport GRE/IP
> >   |     Key disabled, sequencing disabled
> >   |     Checksumming of packets disabled
> >   |   Tunnel TTL 255
> >   |   Fast tunneling enabled
> >
> > but I can't even ping from one end to the other, let alone
> > between the 10.*.0.0 nets.
> >
> >   Ping my end
> >
> >   | test-151-westel#ping 10.255.255.1
> >   |
> >   | Type escape sequence to abort.
> >   | Sending 5, 100-byte ICMP Echos to 10.255.255.1, timeout is 2 seconds:
> >   | !!!!!
> >   | Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
> >
> >   Ping the other end
> >
> >   | test-151-westel#ping 10.255.255.2
> >   |
> >   | Type escape sequence to abort.
> >   | Sending 5, 100-byte ICMP Echos to 10.255.255.2, timeout is 2 seconds:
> >   | .....
> >   | Success rate is 0 percent (0/5)
> >
> >
> >
> >
> > So, the usual questions:
> >
> >  o  Did I just make some stupid mistake in detail ?
> >  o  Am I going about this in some totally flawed way ?
> >     (e.g., tunnel-in-tunnel)
> >  o  Am I totally off the wall ?  :-)
> >  o  What's the right way to do this?  And, is there a
> >     published example :-)  ?
> >
> >
> > --
> > Bob Tinkelman          <bob at tink.com>
> > ISPnet, Inc.  http://www.ispnetinc.net
> >
> > +1 (718) 464-4747  office
> > +1 (800) 806-NETS  toll free
> > +1 (718) 217-9407  fax
> > _______________________________________________
> > cisco-nsp mailing list  cisco-nsp at puck.nether.net
> > https://puck.nether.net/mailman/listinfo/cisco-nsp
> > archive at http://puck.nether.net/pipermail/cisco-nsp/
> >


