[c-nsp] Latest iteration of core upgrade - questions

Mark Tinka mtinka at globaltransit.net
Thu Oct 29 07:48:34 EDT 2009


On Thursday 29 October 2009 03:50:19 pm Rick Ernst wrote:

> Recap/summary: border/core/aggregation design with A/B
> redundancy/multi-homing at each device.

You might want to add (paid/unpaid) peering in there, either 
public or private, in case you have such a scenario. 

We've found having a separate router for that makes life a 
lot easier as compared to doing "kinky" things on your 
border routers for the same.

> 7206VXR/G1 on the
> border as media converters and BGP end-points,

I'd say just go with an NPE-G2, full DRAM, full flash.

But if you're going to forward more than, say, an odd 
700Mbps or so, consider something meatier, e.g., an ASR1002 
or Juniper M7i.

> dual
> 7507/RSP16 as the core...

I think you might be better off using an NPE-G2 as your core 
router :-).

Seriously, I'm not so sure about the 7500 for a core job 
here.

> and route-reflectors...

Would recommend running separate routers as route 
reflectors. The 7201s are great for this; we've been happy 
with them (if only they supported RFC 4684, but that's 
hopefully just a code upgrade away).
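
For illustration, a dedicated reflector on IOS boils down to 
something like this (the ASN and client loopback addresses 
below are placeholders, not from this thread):

```
router bgp 64500
 ! Peer-group for all reflector clients; sessions ride loopbacks
 neighbor RR-CLIENTS peer-group
 neighbor RR-CLIENTS remote-as 64500
 neighbor RR-CLIENTS update-source Loopback0
 neighbor RR-CLIENTS route-reflector-client
 ! One line per client loopback
 neighbor 192.0.2.11 peer-group RR-CLIENTS
 neighbor 192.0.2.12 peer-group RR-CLIENTS
```

Since the box carries no transit traffic, it only needs 
control-plane horsepower, which is why a 7201 fits.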

If you're really kinky, you can run MPLS and keep your core 
BGP-free, but just for v4. You'll still need BGP for v6 in 
your core since there currently isn't any support from the 
vendors for a v6 control plane for MPLS.

But MPLS may be added complexity for some networks. Know 
your considerations.
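
If you do go that route, the BGP-free core is mostly just 
LDP running on top of your IGP; a rough sketch (interface 
name is a placeholder):

```
! Pin the LDP router-id to the loopback
mpls ldp router-id Loopback0 force
!
interface GigabitEthernet0/1
 ! Enable LDP label distribution on the core-facing link
 mpls ip
```

The P routers then need only the IGP plus LDP; iBGP stays on 
the edges, and v4 transit traffic is label-switched across 
the core.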

> surrounded by
> a dual layer-2 sandwich, with various devices for
> customer aggregation below that.

What are you looking at for the core switches? We've been 
happy with the 6506/6509 with SUP720-3BXL and WS-X6724-SFP 
line cards for Gig-E connectivity.

As a pure Layer 2 platform, they're rock-solid. With Layer 
3, search the archives for numerous horror stories.

> The 7507s and layer-2
> glue would be replaced by a pair of 7600s.

So what would your border routers connect to for Layer 2 
aggregation? A separate VLAN on the 7600s?

We've been happy delineating these functions both at Layer 3 
(border, peering, core, edge) as well as at Layer 2 (which 
boxes do IP forwarding, which boxes do Ethernet 
aggregation).

The only time we've considered integrating Layer 2 Ethernet 
aggregation with IP/MPLS forwarding is when we need a big 
Ethernet box at the edge, e.g., a Juniper MX960 or Cisco 
7609-S. But this is for the edge, not the core.

> OSPF as the
> IGP.

We like IS-IS, but this is a matter of choice and comfort. 
The whole "must connect to the backbone Area" requirement in 
OSPF is a limitation for us. But again, this might not be a 
problem for you.

At any rate, we've been happy using our pure Layer 2 core 
switches as IS-IS DISes, since there's not much else they're 
doing with those (rather slow) CPUs. Again, rock-solid 
DISes.
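
For what it's worth, nailing the DIS role down is just an 
interface priority tweak; something like this (the NET, 
process tag and VLAN number are made-up examples):

```
interface Vlan10
 ip router isis CORE
 ! 127 is the maximum; guarantees this switch wins DIS election
 isis priority 127
!
router isis CORE
 net 49.0001.1921.6800.2001.00
 is-type level-2-only
```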

You may want to consider using your core switches as a DR + 
BDR pair for OSPF, as they are central in your network. 
Watch out for IGP metric choices; default metrics don't 
scale well if you're thinking long-term and toward a large 
network. Customize as appropriate.
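
Roughly, on the core switches that would look like this 
(values are examples only; tune to your own topology):

```
interface Vlan100
 ! 255 is the maximum priority; forces this switch to win the
 ! DR election on the segment
 ip ospf priority 255
 ! Explicit link cost rather than the bandwidth-derived default
 ip ospf cost 10
!
router ospf 1
 ! Raise the reference bandwidth (here 100 Gbps, in Mbps) so
 ! 1G and 10G links don't all collapse to cost 1
 auto-cost reference-bandwidth 100000
```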

Furthermore, how do you plan to aggregate all your devices 
into the core switches? Copper, fibre? I'd say, if you can 
afford to, go for 850nm 50um multi-mode fibre. In the 
future, if 10Gbps Ethernet becomes your reality, there's kit 
out there that can re-use this fibre. Of course, this 
assumes internal cabling :-).

> A new wrinkle has been added to the mix and that is a
> local ("across the parking lot") facilities expansion and
> a remote facility that is turning into a POP as well. 

Sounds good :-).

> The local facility has dozens of strands of fiber
> available. The remote facility  has 4 strands (A/B, as
> well) and also lands an upstream provider, backhauled to
> our existing facility.  As part of the redesign, I need
> to make at least the new/local facility able to stand on
> its own for DR purposes.

Makes sense.

> The consensus I've seen for core routing/switching
> equipment is 7600s with Sup720-3BXL and various line
> cards.  I'm curious how integrated the switching fabric
> and routing engine are; e.g. if the switch fabric is
> provisioned and there is a Sup failure/failover, will the
> switch fabric continue to forward layer-2 traffic?

This is a question I've always asked myself, but never 
Cisco. I've asked Juniper the same question about their new 
EX8200 series switches, which follow Cisco's path on the 
6500/7600 by integrating the control and data planes into a 
single module as part of a 3-stage switch fabric.

Our 6500s have never suffered this fate, so no experience 
here. Perhaps others can comment.

Also, it makes a lot more sense for us to have two core 
switches, each with a single supervisor module, than one or 
two switches with two supervisor modules each. But YMMV in 
your particular case.

> Additionally, if there are a group of ports provisioned
> in a VLAN, will the VLAN continue to forward layer-2
> traffic even if the SVI is down?

Maybe others can comment - we don't have this scenario. In 
our case, all Layer 3 stuff is done on routers.

The only SVIs we have on our 6500 core switches are for IS-
IS. And that's always up for as long as we have at least one 
active port in that VLAN (each core switch deals with only 
one VLAN - yes, there's an 802.1Q trunk between both 
switches, but since all traffic between different VLANs is 
handled at the IP layer, it's never used).

> From a design perspective, I could extend layer-2 to the
> new local facility and use the existing facility for all
> routing and transit.  This doesn't give any stand-alone
> survivability to the new building, though.

True, that'd be your limitation.

As you grow, if a new PoP is being used solely for border 
access to another upstream, you can run the routers there as 
collapsed core/border routers.

But in your case, you're looking for more now...

> I can swing telco/upstream
> entrance for one provider to the new building, but still
> need to integrate the layer-3 and IGP.  Ideally, I'd like
> to slow-start the new building without redundant cores
> and use the existing building for redundancy. I'd also
> like to use the new build as a template for future POPs
> where "lots of fiber" may not be available.

If budgets are tight, you can start off with a linear 
design, i.e., 1x border router, 1x core router, 1x core 
switch, 1x edge router, 1x route reflector, etc.

Alternatively, you can start off with collapsed functions, 
e.g., a collapsed core + border + route reflector, but a 
dedicated edge router and core switch.

It's up to what your pockets can handle. The permutations 
are endless.

> I've considered having each building/POP as a BGP
> confederation, and also iBGP peering at either/both the
> core and border layers (with appropriate meshing and/or
> route-reflectors).

I'd recommend standard iBGP route reflection instead of BGP 
confederations.
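
The nice thing about plain route reflection is that each 
client only needs iBGP sessions to the reflectors, not a 
full mesh; something like this on every client (ASN and RR 
loopbacks are placeholders):

```
router bgp 64500
 ! Sessions only to the two route reflectors, sourced from
 ! the loopback so they survive individual link failures
 neighbor 192.0.2.1 remote-as 64500
 neighbor 192.0.2.1 update-source Loopback0
 neighbor 192.0.2.2 remote-as 64500
 neighbor 192.0.2.2 update-source Loopback0
```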

> Am I going down the right path?  Pointers to additional
> information? Additional considerations I haven't
> mentioned?   Cisco's _Internet Routing Architectures_ and
> some other Cisco Press books are getting a workout, but
> I'm not getting a good feel for my particular situation.

You probably also want to design a comprehensive routing 
policy that suits the products and services you plan to 
offer your customers, whether they are eBGP or non-eBGP 
customers.

To the same end, use your IGP only for your Loopback 
addresses, and BGP for everything else. Makes implementing 
your routing policy easier and much more fun. Also, keeps 
things nice and lean.
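
Concretely, that tends to mean next-hop-self on your iBGP 
sessions (so eBGP next hops never have to appear in the 
IGP), with the IGP carrying little beyond loopbacks and 
infrastructure links; a sketch, addresses and ASN being 
placeholders:

```
router bgp 64500
 ! Rewrite eBGP-learned next hops to our own loopback, so the
 ! IGP only ever needs to resolve internal loopbacks
 neighbor 192.0.2.1 next-hop-self
!
router isis CORE
 ! Advertise the loopback prefix without running IS-IS on it
 passive-interface Loopback0
```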

Think about security - protect your control planes, block 
RFC 1918 space, block RFC 3330 bogons, use uRPF (strict and 
loose mode, as appropriate), and ensure your routing policy 
has the correct filters to/from upstreams, peers and 
customers.
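
A rough per-interface sketch (interface names are 
placeholders, and the ACL is illustrative, not a complete 
bogon list):

```
interface GigabitEthernet0/1
 description Customer-facing
 ! Strict uRPF: source must be reachable via this interface
 ip verify unicast source reachable-via rx
!
interface GigabitEthernet0/2
 description Upstream transit
 ! Loose uRPF: source just has to exist in the routing table
 ip verify unicast source reachable-via any
 ip access-group BOGON-IN in
!
ip access-list extended BOGON-IN
 deny ip 10.0.0.0 0.255.255.255 any
 deny ip 172.16.0.0 0.15.255.255 any
 deny ip 192.168.0.0 0.0.255.255 any
 permit ip any any
```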

Definitely implement v6 from Day One! This cannot be over-
emphasized. And keep as much parity as possible between your 
v4 and v6 configurations in terms of security, routing 
policies, features, etc.

There's probably tons more :-).

Cheers,

Mark.