[c-nsp] GRE tunnel to do span vlan across two datacenters?

Tony Varriale tvarriale at comcast.net
Wed Jul 6 13:31:22 EDT 2011


On 7/6/2011 11:08 AM, Jason Gurtz wrote:
> A firm has proposed creating a GRE tunnel between the two datacenters
> (using a 3750X stack at each) to carry the spanned VLANs needed for a
> VMware failover application.
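>
> As I understand it, spanning a VLAN over GRE means transparent bridging
> over the tunnel, classic-IOS style; roughly this on each end (interface
> names and addresses made up, and I haven't verified the 3750X supports
> any of it):
>
>    bridge 1 protocol ieee
>    !
>    interface Tunnel0
>     tunnel source Loopback0
>     tunnel destination 192.0.2.2
>     bridge-group 1
>    !
>    interface GigabitEthernet1/0/1
>     bridge-group 1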
>
> Clearly there is tunnel overhead, but I sense there are other failure
> modes here that aren't so clear to me--I am familiar with GRE tunnels in
> concept but don't have a heck of a lot of operational experience with
> them. Can anyone share more insight on the merit (or lack thereof) of
> this proposed design? I am aware (via this list, thanks!) of several
> shortcomings surrounding 3750-based stacks, but the Cisco alternatives
> seem pricier still or too big. There is dark fiber available; what about
> a VPLS w/ LDP or L2TP solution?
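>
> For comparison, an L2TPv3 pseudowire on a platform that supports it
> would look roughly like this (peer address and VC ID made up; this is
> router IOS syntax, not 3750X):
>
>    pseudowire-class DC-PW
>     encapsulation l2tpv3
>     ip local interface Loopback0
>    !
>    interface GigabitEthernet0/1
>     xconnect 192.0.2.2 100 pw-class DC-PW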
>
> Current network is L3 at the access layer w/ OSPF (4507-sup6 access, 4900M
> cores):
>
>       A1
>       /\
>     /    \
>   C1------C2
>     \    /
>       \/
>       A2
>
> Maybe it is better to just overlay STP back onto the network, with root
> and alt-root at C1/C2 (V1 and V2 are the proposed 3750X stacks)? Scary
> to me, but an argument can be made for less complexity versus a
> tunneling/VPN-based approach.
>
>       A1     .V1
>       /\ . ' /
>     /. ' \ /
>   C1------C2
>     \` . / \
>       \/ ' . \
>       A2     'V2
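>
> Config-wise that overlay would be roughly (VLAN IDs made up):
>
>    ! on C1, root for the spanned VLANs
>    spanning-tree vlan 100,200 root primary
>    ! on C2, backup root
>    spanning-tree vlan 100,200 root secondary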
>
> OTOH, by the time this actually gets done maybe TRILL will be out ;)
> Hopefully this enterprisey topic is not too OT!
>
> ~JasonG
>
I do not believe that GRE is supported on the 3750X.  It wasn't on the
3560/3750 either; those boxes can't switch GRE in hardware, so anything
tunneled gets punted to the CPU.

Previously you were able to configure it, but around 500 pps would send
the box to 100% CPU.
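
If you want to verify on your own gear, push some test traffic through a
tunnel and watch the interrupt CPU, e.g.:

   show processes cpu sorted | exclude 0.00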

tv

