[j-nsp] Best design can fit DC to DC

Ben Dale bdale at comlinx.com.au
Tue Nov 4 21:48:00 EST 2014


I recommend you take a good long look at E-VPN:

http://www.juniper.net/documentation/en_US/junos13.2/topics/concept/evpns-overview.html

https://conference.apnic.net/data/37/2014-02-24-apricot-evpn-presentation_1393283550.pdf

http://www.cisco.com/c/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/whitepaper_c11-731864.html

It is supported in one form or another on MXs, ASRs and ALU boxes and presents P2MP services just like VPLS, only with control-plane MAC learning over BGP, active/active multi-homing and a bunch of other goodness.  That will probably spell the end for OTV (which under the hood is essentially MPLS over GRE with some smarts), and may even replace VPLS and L2VPN/EoMPLS longer term.
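
For a sense of what the Junos side of that looks like, here is a minimal sketch of a single EVPN instance on an MX with BGP EVPN signalling (Junos 13.2+).  The instance name, interface and addresses are made up, and it assumes the usual IGP/MPLS underlay between the PEs is already in place:

routing-instances {
    /* hypothetical name - one EVI per stretched VLAN */
    EVPN-100 {
        instance-type evpn;
        vlan-id 100;
        interface ge-0/0/1.100;
        route-distinguisher 192.0.2.1:100;
        vrf-target target:65000:100;
        protocols {
            evpn;
        }
    }
}
protocols {
    bgp {
        /* iBGP to the far-end MX loopback, carrying EVPN routes */
        group ibgp-evpn {
            type internal;
            local-address 192.0.2.1;
            family evpn signaling;
            neighbor 192.0.2.2;
        }
    }
}

MAC addresses learned on ge-0/0/1.100 are then advertised to the remote PE as EVPN routes rather than flood-and-learned in the data plane, which is where the fast convergence and active/active multi-homing come from.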

OTV limits you to a single platform (N7K only last I looked) from a single vendor and you have to anchor your tunnels on your Data Centre switch.  Now your interconnect is tied to your DC fabric.  How inconvenient.

Stretching L2 domains over 1000km wouldn't be my first suggestion either.

I've had a bit of a play with VCF and it seems to work as advertised, but be aware of the mixed-mode limitations if you need to bring in lots of 1GE ports (e.g. EX4300) - mixed-mode fabrics don't support TISSU.  Putting 1GE SFPs into a QFX5100 avoids that, though it's a slightly more expensive workaround.
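
If you do go down the VCF path with EX4300s for 1GE, the fabric has to be converted to mixed mode before they'll join - roughly along these lines.  On the QFX5100s:

    user@qfx5100> request virtual-chassis mode fabric mixed reboot

and on the EX4300s:

    user@ex4300> request virtual-chassis mode mixed reboot

followed by a preprovisioned fabric config (member numbers and serials here are made up):

virtual-chassis {
    preprovisioned;
    /* QFX5100 spines */
    member 0 {
        role routing-engine;
        serial-number TA0000000001;
    }
    member 1 {
        role routing-engine;
        serial-number TA0000000002;
    }
    /* EX4300 leaf for the 1GE ports */
    member 2 {
        role line-card;
        serial-number PE0000000001;
    }
}

Once it's mixed-mode the whole fabric loses TISSU, so upgrades mean a reboot window - which is the trade-off I was getting at above.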

Ben

On 5 Nov 2014, at 12:06 pm, Oliver Garraux <oliver at g.garraux.net> wrote:

> I don't think L2 extension is a good idea, but if you have to, OTV is the
> way to do it IMO.  Particularly since you have to support non-virtualized
> infrastructure, and may have > 2 sites in the future.  OTV isn't going to
> have sub-second convergence (though I think it might support BFD in the
> future?)
> 
> 35km isn't too bad latency-wise.  1000km will be much trickier.  Things
> like storage replication will be more difficult.  And while OTV can easily
> isolate FHRP's to send outbound traffic out the local default gateway,
> traffic tromboning with inbound traffic might be an issue.  (If inbound
> traffic is routed to the "wrong" data center, the extra latency from having
> to go across 1000km to get to the datacenter where the box actually is will
> suck).  I think LISP in theory could help with this, but would involve
> changes to how traffic gets *to* your data centers.
> 
> Also, in general, don't overlook the integration of storage / network /
> system stuff.  I.e., if you lose all the links between the data centers,
> storage / networking / systems all need to fail over to the same place.  If
> you can, do lots of disruptive testing before going into production.  A
> previous organization I was with discovered (and fixed) many issues
> with L2 extension through pre-production testing of everything.
> 
> Oliver
> 
> -------------------------------------
> 
> Oliver Garraux
> Check out my blog:  blog.garraux.net
> Follow me on Twitter:  twitter.com/olivergarraux
> 
> On Fri, Oct 31, 2014 at 9:41 AM, R LAS <dim0sal at hotmail.com> wrote:
> 
>> Hi all
>> a customer of mine is planning to renew their DC infrastructure and the
>> interconnection between the main (DC1) and secondary (DC2) sites, with the
>> possibility of adding another (DC3) in the future.
>> 
>> The main goals are: sub-second convergence on any single component failure
>> between DC1 and DC2 (not DC3), the ability to extend L2 and L3 between DCs,
>> STP isolation between DCs, and server-facing Ethernet ports at 1/10 Gb/s.
>> 
>> DC1 and DC2 are 35 km away, DC3 around 1000 km away from DC1 and DC2.
>> 
>> The customer would like to see both a Cisco and a Juniper design and decide
>> at the end.
>> 
>> On the Juniper side, my idea was to build an MPLS interconnection with
>> MX240s or MX104s in VC between DC1 and DC2 (making it easy to add DC3 later)
>> and to use QFX switches in a Virtual Chassis Fabric configuration.
>> 
>> Does anybody run this kind of config?
>> Is the QFX stable?
>> Any other suggestions/improvements?
>> 
>> And if you would go with Cisco, what do you propose in this scenario ?
>> 
>> Rgds
>> 



