[c-nsp] Latest iteration of core upgrade - questions

Rick Ernst cnsp at shreddedmail.com
Thu Oct 29 12:21:02 EDT 2009


On Thu, Oct 29, 2009 at 4:48 AM, Mark Tinka <mtinka at globaltransit.net> wrote:

> On Thursday 29 October 2009 03:50:19 pm Rick Ernst wrote:
>
> > Recap/summary: border/core/aggregation design with A/B
> > redundancy/multi-homing at each device.
>
> You might want to add (paid/unpaid) peering in there, either
> public or private, in case you have such a scenario.
>
> We've found having a separate router for that makes life a
> lot easier as compared to doing "kinky" things on your
> border routers for the same.
>

- We do have some peering, but it was originally designed at the
customer/aggregation layer.  Making it an individual border/"upstream"
service may make more sense.

>
> > 7206VXR/G1 on the
> > border as media converters and BGP end-points,
>
> I'd say just go with an NPE-G2, full DRAM, full flash. But
> if you're going to forward more than, say, an odd
> 700Mbps or so, consider something meatier, e.g., an ASR1002
> or Juniper M7i.
>

- The idea for the 7206s is as "lightbulb" devices.  One upstream. One 7206.
Two downlinks to the core.  The single point of failure remains within the
individual upstreams.  This keeps the maximum possible traffic within the
CPU/performance envelope, and it allows us to grow horizontally as
additional upstreams come in.  I'm looking at moving to 7201s (the 1U
NPE-G2 equivalent, as I understand it) as bandwidth needs increase.
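
As a sketch of the shape I have in mind (the AS numbers, addresses, and
interfaces below are made up for illustration, not our real config):

  ! One upstream, one 7206, two iBGP sessions down to the core
  router bgp 64512
   neighbor 192.0.2.1 remote-as 64500         ! the single upstream
   neighbor 10.0.0.1 remote-as 64512          ! iBGP to core A
   neighbor 10.0.0.1 update-source Loopback0
   neighbor 10.0.0.5 remote-as 64512          ! iBGP to core B
   neighbor 10.0.0.5 update-source Loopback0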


>
> > dual
> > 7507/RSP16 as the core...
>
> I think you might be better off using an NPE-G2 as your core
> router :-).
>
> Seriously, not so sure about the 7500 as a core job here.
>

- You probably caught this later, but just to be explicit: the 7500s are
what are going away. :)

>
> > and route-reflectors...
>
> Would recommend running separate routers as route
> reflectors. The 7201's are great for this, we've been happy
> with them (if only they supported RFC 4684, but that's just
> a code upgrade away, hopefully).
>
> If you're really kinky, you can run MPLS and keep your core
> BGP-free, but just for v4. You'll still need BGP for v6 in
> your core since there currently isn't any support from the
> vendors for a v6 control plane for MPLS.
>
> But MPLS may be added complexity for some networks. Know
> your considerations.
>

- I was originally looking at an MLS-style router-on-a-stick connected to
core switching, but I haven't found any switching devices that would
allow that topology other than the old Catalyst 5500s.  The hardware CEF
on the 6500s/7600s appears to be tied to the routing engine on the
Supervisors.
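
- On the MPLS/BGP-free-core option above: for reference, the basic LDP
enablement is only a few lines per core-facing interface.  A minimal
sketch (the interface name is hypothetical):

  mpls label protocol ldp
  mpls ldp router-id Loopback0 force
  !
  interface GigabitEthernet0/1
   mpls ip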

>
> > surrounded by
> > a dual layer-2 sandwich, with various devices for
> > customer aggregation below that.
>
> What are you looking at for the core switches? We've been
> happy with the 6506|9/SUP720-3BXL with WS-X6724-SFP line
> cards in there for Gig-E connectivity.
>
> As a pure Layer 2 platform, they're rock-solid. With Layer
> 3, search the archives for numerous horror stories.
>

- The 7600/Sup720-3BXL is the top (currently only) contender for core
routing/switching.  Two concerns that keep showing up in threads about
them are NetFlow and uRPF.  I use uRPF in conjunction with a BGP
route-injector on the border for real-time blackholing.  That's not
really needed in the core, but the functionality would be nice to have.
NetFlow is the bigger concern: I do a lot of traffic analysis with
NetFlow, and we are currently pushing ~800Mbps aggregate (total in/out
across all upstreams) at 200Kpps.  Increasing demand for >50Mbps access
is driving the upgrade.
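
For reference, the blackhole and NetFlow pieces mentioned above follow the
standard RTBH pattern; roughly (addresses, tags, ports, and interfaces here
are illustrative placeholders):

  ! On the route-injector: redistribute tagged statics with the
  ! blackhole next-hop; borders route that next-hop to Null0
  route-map RTBH permit 10
   match tag 666
   set ip next-hop 192.0.2.66
   set community no-export
  !
  router bgp 64512
   redistribute static route-map RTBH
  !
  ip route 203.0.113.99 255.255.255.255 Null0 tag 666   ! victim /32

  ! On each border: the Null0 route drops traffic *to* the victim;
  ! loose-mode uRPF also drops traffic sourced *from* blackholed space
  ip route 192.0.2.66 255.255.255.255 Null0
  ip flow-export version 5
  ip flow-export destination 10.1.1.10 2055
  interface GigabitEthernet0/0
   ip verify unicast source reachable-via any
   ip flow ingress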

>
> > The 7507s and layer-2
> > glue would be replaced by a pair of 7600s.
>
> So what would your border routers connect to for Layer 2
> aggregation? A separate VLAN on the 7600's?
>
> We've been happy delineating these functions both at Layer 3
> (border, peering, core, edge) as well as at Layer 2 (which
> boxes do IP forwarding, which boxes do Ethernet
> aggregation).
>

- I was planning on having a "core/border" and a "core/aggregation" VLAN on
the 7600s.  Our customer TDM needs are drying up and everything is moving
to Ethernet.  New customer aggregation is Catalyst 4948s with local-only
BGP and OSPF.  Customers requiring BGP get ebgp-multihop sessions to
devices that are full-table capable.
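
The multihop piece is plain eBGP-multihop to a loopback, roughly (all
addresses and AS numbers below are hypothetical):

  ! On a full-table-capable router, peering with a customer that
  ! attaches behind a 4948
  router bgp 64512
   neighbor 198.51.100.7 remote-as 64999
   neighbor 198.51.100.7 ebgp-multihop 2
   neighbor 198.51.100.7 update-source Loopback0
  !
  ip route 198.51.100.7 255.255.255.255 10.20.30.1   ! route to peer loopback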

>
> The only time where we've considered integrating Layer 2
> Ethernet aggregation with IP/MPLS forwarding is when we need
> a big Ethernet box at the edge, e.g., Juniper MX960, Cisco
> 7609-S, e.t.c. But this is for the edge, not core.
>
> > OSPF as the
> > IGP.
>
> We like IS-IS, but this is a matter of choice and comfort.
> The whole "must connect to the backbone Area" requirement in
> OSPF is a limitation for us. But again, this might not be a
> problem for you.
>
> At any rate, we've been happy using our pure Layer 2 core
> switches as IS-IS DIS's, since there's not much else they're
> doing with those (rather slow) CPU's. Again, rock-solid
> DIS's.
>
> You may want to consider using your core switches as a DR +
> BDR pair for OSPF, as they are central in your network.
> Watch out for IGP metric choices; defaults are not scalable
> if you're thinking long-term, and large networks. Customize
> as appropriate.
>
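
- Noted on the metrics.  The sort of thing I have in mind is bumping the
OSPF reference bandwidth so gig and 10G links cost out sensibly (values
below are illustrative only):

  router ospf 1
   auto-cost reference-bandwidth 100000   ! in Mbps; makes a 10G link cost 10
  !
  interface GigabitEthernet0/2
   ip ospf cost 50                        ! explicit override where needed
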
> Furthermore, how do you plan to aggregate all your devices
> into the core switches? Copper, fibre? I'd say, if you can
> afford to, go for 850nm 50um multi-mode fibre. In the
> future, if 10Gbps Ethernet becomes your reality, there's kit
> out there that can re-use this fibre. Of course, this
> assumes internal cabling :-).
>

- I'm actually trying to clean up our fiber plant and GBICs by moving to
1000BaseT.  Future growth will likely be 10G fiber for "infrastructure"
(hub-and-spoke to aggregation devices) with 10/100/1000 copper handoff to
customers.


> > A new wrinkle has been added to the mix and that is a
> > local ("across the parking lot") facilities expansion and
> > a remote facility that is turning into a POP as well.
>
> Sounds good :-).
>
> > The local facility has dozens of strands of fiber
> > available. The remote facility  has 4 strands (A/B, as
> > well) and also lands an upstream provider, backhauled to
> > our existing facility.  As part of the redesign, I need
> > to make at least the new/local facility able to stand on
> > its own for DR purposes.
>
> Makes sense.
>
> > The consensus I've seen for core routing/switching
> > equipment is 7600s with Sup720-3BXL and various line
> > cards.  I'm curious how integrated the switching fabric
> > and routing engine are; e.g. if the switch fabric is
> > provisioned and there is a Sup failure/failover, will the
> > switch fabric continue to forward layer-2 traffic?
>
> This is a question I've always asked myself, but never
> Cisco. I've asked the same question to Juniper about their
> new EX8200 series switches, which are following Cisco's path
> on the 6500/7600 and integrating their control and data
> planes into a single module as part of their 3-stage switch
> fabric.
>
> Our 6500's have never suffered this fate, so no experience
> here. Perhaps others can comment.
>
> Also, it makes a lot of sense for us to have two core
> switches each with a single supervisor module, than one or
> two with two supervisor modules each. But YMMV in your
> particular case.
>
> > Additionally, if there are a group of ports provisioned
> > in a VLAN, will the VLAN continue to forward layer-2
> > traffic even if the SVI is down?
>
> Maybe others can comment - we don't have this scenario. In
> our case, all Layer 3 stuff is done on routers.
>
> The only SVI's we have on our 6500 core switches are for IS-
> IS. And that's always up for as long as we have at least one
> port active for that VLAN (and one core switch deals with
> only one VLAN - yes, there's an 802.1Q trunk between both
> switches, but since all traffic between different VLAN's is
> handled at the IP layer, it's never used).
>
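
- Good to know.  For my own planning, the OSPF analogue of that SVI setup
would be something like this on each core switch (VLAN and addressing
hypothetical):

  ! SVI stays up while at least one port in VLAN 100 is up/up
  interface Vlan100
   ip address 10.30.0.2 255.255.255.0
  !
  router ospf 1
   network 10.30.0.0 0.0.0.255 area 0
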
> > From a design perspective, I could extend layer-2 to the
> > new local facility and use the existing facility for all
> > routing and transit.  This doesn't give any stand-alone
> > survivability to the new building, though.
>
> True, that'd be your limitation.
>
> As you grow, if a new PoP is being used solely for border
> access to another upstream, you can run the routers there as
> collapsed core/border routers.
>
> But in your case, you're looking for more now...
>
> > I can swing telco/upstream
> > entrance for one provider to the new building, but still
> > need to integrate the layer-3 and IGP.  Ideally, I'd like
> > to slow-start the new building without redundant cores
> > and use the existing building for redundancy. I'd also
> > like to use the new build as a template for future POPs
> > where "lots of fiber" may not be available.
>
> If budgets are tight, you can start off with a linear
> design, i.e., 1x border router, 1x core router, 1x core
> switch(es), 1x edge router, 1x route reflector e.t.c.
>
> Alternatively, you can start off with collapsed functions,
> e.g., collapsed core + border + route reflector but a unique
> edge and core switch, e.t.c.
>
> It's up to what your pockets can handle. The permutations
> are endless.
>

> > I've considered having each building/POP as a BGP
> > confederation, and also iBGP peering at either/both the
> > core and border layers (with appropriate meshing and/or
> > route-reflectors).
>
> I'd recommend standard iBGP route reflection instead of BGP
> confederations.
>
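
- Agreed, and the route-reflector side is only a couple of knobs per
client; a minimal sketch (cluster-id, AS, and addresses hypothetical):

  router bgp 64512
   bgp cluster-id 1
   neighbor 10.0.0.10 remote-as 64512
   neighbor 10.0.0.10 update-source Loopback0
   neighbor 10.0.0.10 route-reflector-client
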
> > Am I going down the right path?  Pointers to additional
> > information? Additional considerations I haven't
> > mentioned?   Cisco's _Internet Routing Architectures_ and
> > some other Cisco Press books are getting a workout, but
> > I'm not getting a good feel for my particular situation.
>
> You probably also want to design a comprehensive routing
> policy that suits the products and services you plan to
> offer your customers, whether they are eBGP or non-eBGP
> customers.
>

- Something the redesign/reimplementation will allow is a "core is glue
only; customers attach at the aggregation layer, and everything is a
customer" model.

>
> To the same end, use your IGP only for your Loopback
> addresses, and BGP for everything else. Makes implementing
> your routing policy easier and much more fun. Also, keeps
> things nice and lean.
>

- I'm using the IGP for loopback addresses, but also for local routing.
Not all devices can handle BGP, or full tables.  That is a different
upgrade project, but I need to keep existing/legacy services running as I
go forward.
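
The end state I'm aiming for looks roughly like this (addresses and AS
hypothetical):

  ! IGP carries loopbacks and infrastructure links only; everything
  ! else rides iBGP with next-hop-self
  router ospf 1
   passive-interface default
   no passive-interface GigabitEthernet0/1   ! core-facing link
   network 10.0.0.0 0.0.255.255 area 0
  !
  router bgp 64512
   neighbor 10.0.0.1 remote-as 64512
   neighbor 10.0.0.1 next-hop-self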

>
> Think about security - protect your control planes, block
> RFC 1918, block RFC 3330, use uRPF (strict and loose mode,
> as appropriate), and ensure your routing policy has the
> correct filters to/from upstreams, peers and customers.
>

- Yup. Hardware control-plane policing is definitely on my list of required
features.  uRPF and various peer groups are already in place to protect as
much as I can with the current platforms.
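
The CoPP piece will presumably be shaped something like this (classes,
ACLs, and rates below are placeholders, not tuned values):

  ip access-list extended COPP-MGMT
   permit tcp 10.99.0.0 0.0.255.255 any eq 22
  !
  class-map match-all COPP-MGMT
   match access-group name COPP-MGMT
  !
  policy-map COPP
   class COPP-MGMT
    police 512000 conform-action transmit exceed-action drop
   class class-default
    police 256000 conform-action transmit exceed-action drop
  !
  control-plane
   service-policy input COPP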

>
> Definitely implement v6 from Day One! This cannot be over-
> emphasized. And keep as much parity in configuration both
> for your v4 and v6 in terms of security, routing policies,
> features, e.t.c.
>

- I'm on the fence with IPv6.  Of our current "name brand" providers, only
one of them even sort of supports v6.  v6 is on my feature-requirements
list, but I'm planning on going dual-stack later rather than earlier; both
to change as little as possible while upgrading, and to give me more
time to digest how v6 really works and what it means.
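
When I do get there, the dual-stack starting point is at least small; a
sketch (prefixes and peer addresses are documentation examples):

  ipv6 unicast-routing
  !
  interface GigabitEthernet0/0
   ipv6 address 2001:DB8:0:1::1/64
  !
  router bgp 64512
   neighbor 2001:DB8::2 remote-as 64500
   address-family ipv6
    neighbor 2001:DB8::2 activate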



> There's probably tons more :-).
>
> Cheers,
>
> Mark.
>

Thanks!

