[c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
Mark Tinka
mark.tinka at seacom.mu
Sun Mar 11 06:39:13 EDT 2018
On 5/Mar/18 14:22, adamv0025 at netconsultings.com wrote:
> No, a hierarchical RR infrastructure is a bad idea altogether. It was not
> needed way back when c7200s served as RRs in Tier-1 SP backbones, and it's
> certainly not needed now.
> Just keep it simple.
> You don't need full mesh between all your regional RRs.
>
> Think of it as two separate iBGP infrastructures:
> 1)
> Clients within a region peer with a particular Regional-RR-1 (to
> disseminate prefixes within a single region).
> And all Regional-RR-1s peer with each other, i.e. a full mesh between
> RR-1s (to disseminate prefixes between all regions). See the sketch
> after this list.
> 2)
> A completely separate infrastructure using the same model as the above,
> just built from Regional-RR-2s (this is purely for redundancy).
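>
> A minimal IOS-XR sketch of plane 1 as seen from one Regional-RR-1 (the
> ASN, group names and addresses are made up for illustration, not taken
> from anyone's real deployment):
>
>  ! hypothetical values throughout
>  router bgp 64500
>   bgp cluster-id 192.0.2.1
>   address-family ipv4 unicast
>   !
>   neighbor-group RR1-MESH
>    remote-as 64500
>    update-source Loopback0
>    address-family ipv4 unicast
>    !
>   !
>   neighbor-group RR1-CLIENTS
>    remote-as 64500
>    update-source Loopback0
>    address-family ipv4 unicast
>     route-reflector-client
>    !
>   !
>   neighbor 192.0.2.2
>    use neighbor-group RR1-MESH
>    description Regional-RR-1 in another region (part of the full mesh)
>   !
>   neighbor 192.0.2.11
>    use neighbor-group RR1-CLIENTS
>    description RR client within this region
>   !
>
> Plane 2 is the same configuration again, just on the Regional-RR-2s.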
>
>
> If you run out of memory or CPU cycles on the RRs, then I'd say migrate
> to a vRR solution.
> Or, if you insist on physical boxes (or don't want all your eggs in one
> basket, region-wise), you can scale out by dividing regions into smaller
> pieces (addressing the per-RR CPU/session limit) and by adding planes to
> the simple 1) and 2) infrastructure above (addressing the per-RR
> memory/prefix limit).
So we have plenty of major core PoPs across 2 continents. Since 2014,
we've been running the CSR1000v on top of ESXi on x86 boxes, quite
successfully, as our RRs at each of those PoPs.
Each major PoP has been configured with its own unique, global Cluster-ID.
This has been scaling very well for us.
I think the Multiple Cluster-ID feature is overkill.
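For illustration, the per-PoP arrangement on one CSR1000v boils down to
something like this minimal IOS-XE sketch (the ASN, Cluster-ID and client
address are made up; each PoP simply carries a different "bgp cluster-id"
value):

 ! hypothetical values throughout
 router bgp 64500
  bgp cluster-id 198.51.100.1
  neighbor 192.0.2.11 remote-as 64500
  neighbor 192.0.2.11 update-source Loopback0
  !
  address-family ipv4
   neighbor 192.0.2.11 activate
   neighbor 192.0.2.11 route-reflector-client
  exit-address-family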
> Also, if you carry a mix of full Internet prefixes and VPN prefixes
> across your backbone, then I suggest you carve out one iBGP
> infrastructure for VPN prefixes and a separate one for the Internet
> prefixes (for stability and security reasons, and it helps with scaling
> too if that's a concern).
We use the same RRs for all address families. Resources are plentiful
with a VM-based RR deployment.
The RRs are out-of-path, so they are not part of our IP/MPLS data plane.
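As a sketch (again with made-up ASN and addresses), serving all address
families from one RR just means activating and reflecting the same client
in each AF:

 ! hypothetical values throughout
 router bgp 64500
  neighbor 192.0.2.11 remote-as 64500
  neighbor 192.0.2.11 update-source Loopback0
  !
  address-family ipv4
   neighbor 192.0.2.11 activate
   neighbor 192.0.2.11 route-reflector-client
  exit-address-family
  !
  address-family vpnv4
   neighbor 192.0.2.11 activate
   neighbor 192.0.2.11 route-reflector-client
  exit-address-family

Since the RRs don't set next-hop-self, reflected routes keep their
original next hops and the RRs stay out of the forwarding path.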
Mark.