[c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

Saku Ytti saku at ytti.fi
Mon Mar 12 06:43:28 EDT 2018


Hey,


RR1---RR2
 |     |
PE1----+


1) PE1 sends 1M routes to RR1 and RR2

CaseA) Same clusterID
1) RR1 and RR2 have 1M entries

CaseB) Unique clusterID
1) RR1 and RR2 have 2M entries



A cluster is a promise that every client peers with exactly the same set of
RRs, so there is no need for the RRs to share client routes inside the
cluster, as they have already received them directly from the clients.
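
A rough sketch of the mechanism, assuming plain RFC 4456 behaviour (a
reflecting RR prepends its cluster-id to CLUSTER_LIST and ignores any route
that already carries its own cluster-id); topology, addresses and route
counts below are made up for illustration:

def reflect(route, rr_cluster_id):
    # Reflecting RR prepends its cluster-id to the route's CLUSTER_LIST.
    return {"prefix": route["prefix"],
            "cluster_list": [rr_cluster_id] + route["cluster_list"]}

def accepts(route, rr_cluster_id):
    # RFC 4456 loop check: ignore a route already carrying our cluster-id.
    return rr_cluster_id not in route["cluster_list"]

def simulate(rr1_cid, rr2_cid, n_routes=5):
    # PE1 announces n_routes to both RR1 and RR2 (client routes, empty CLUSTER_LIST).
    from_pe1 = [{"prefix": i, "cluster_list": []} for i in range(n_routes)]
    rib_rr1 = list(from_pe1)          # copies received directly from PE1
    rib_rr2 = list(from_pe1)
    # Each RR reflects PE1's routes to the other RR (a non-client iBGP peer).
    for r in from_pe1:
        via_rr2 = reflect(r, rr2_cid)
        if accepts(via_rr2, rr1_cid):
            rib_rr1.append(via_rr2)
        via_rr1 = reflect(r, rr1_cid)
        if accepts(via_rr1, rr2_cid):
            rib_rr2.append(via_rr1)
    return len(rib_rr1), len(rib_rr2)

print("same cluster-id  :", simulate("1.1.1.1", "1.1.1.1"))  # (5, 5)   -> 1M/1M at scale
print("unique cluster-id:", simulate("1.1.1.1", "2.2.2.2"))  # (10, 10) -> 2M/2M at scale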


Of course, if client1 loses its connection to RR2 and client2 loses its
connection to RR1, client1<->client2 do not see each other's routes.
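
The same assumed rules make that corner case concrete (names and prefix are
made up):

SHARED_CID = "1.1.1.1"   # cluster-id configured on both RR1 and RR2

def rr_accepts(cluster_list, own_cid):
    # RFC 4456 loop check: drop a route already carrying our own cluster-id.
    return own_cid not in cluster_list

# client1 has lost its session to RR2, client2 has lost its session to RR1,
# so client2's route reaches RR2 only, and RR2 reflects it towards RR1.
route_from_client2 = {"prefix": "10.2.0.0/16", "cluster_list": []}
reflected = {"prefix": route_from_client2["prefix"],
             "cluster_list": [SHARED_CID] + route_from_client2["cluster_list"]}

if rr_accepts(reflected["cluster_list"], SHARED_CID):
    print("RR1 installs client2's route and reflects it on to client1")
else:
    print("RR1 drops the reflected copy, so client1 never sees client2's route")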

For the same reason, you're not free to choose 'my nearest two RRs' with the
same cluster-id, as you must always peer with every box in the same
cluster-id. So you lose topological flexibility, increase operational
complexity and increase failure modes. But you do save that sweet sweet
DRAM.


Most blogs I read, and even some vendor documents, propose the clusterID as
a way to avoid loops. I think the real reason people use it is that when the
RR was set up, people didn't know what the clusterID is for, and later they
stayed committed to that initial false rationale and invented new rationales
to justify their position.
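
Loop avoidance does not need a configured clusterID at all: per RFC 4456 the
cluster-id defaults to the RR's BGP router-id, so the CLUSTER_LIST check
catches reflection loops anyway. A small sketch under that assumption
(router-ids made up):

RR1_CID = "192.0.2.1"   # default cluster-id == RR1's router-id
RR2_CID = "192.0.2.2"   # default cluster-id == RR2's router-id

def reflect(cluster_list, own_cid):
    # Reflecting RR prepends its cluster-id to the CLUSTER_LIST.
    return [own_cid] + cluster_list

def accepts(cluster_list, own_cid):
    # An RR ignores any route already carrying its own cluster-id.
    return own_cid not in cluster_list

# A client route reflected by RR1, then by RR2, now offered back to RR1.
looped = reflect(reflect([], RR1_CID), RR2_CID)   # ['192.0.2.2', '192.0.2.1']
print("RR1 accepts the looped copy?", accepts(looped, RR1_CID))   # False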

Premature optimisation is the source of a great many evils. Optimise for
simplicity when you can, increase complexity when you must.



On 12 March 2018 at 12:34,  <adamv0025 at netconsultings.com> wrote:
>> Job Snijders
>> Sent: Sunday, March 11, 2018 12:21 PM
>>
>> Folks - I'm gonna cut this short here: by sharing the cluster-id across
>> multiple devices, you lose in topology flexibility, robustness, and
>> simplicity.
>>
>
> Gents, I have no idea what you're talking about.
> How can one save or burn RAM by using or not using shared cluster-IDs,
> respectively???
> The only scenario I can think of is if your two RRs, say RR1 and RR2, in a
> POP serving a set of clients (by definition a cluster, btw) have an iBGP
> session to each other - which is a big NO-NO when you are using out-of-band
> RRs, no, seriously.
> Remember my previous example about separate iBGP infrastructures: one
> formed out of all clients connecting to RR1 in the local POP, with all RR1s
> in all POPs peering with each other in a full mesh, and then the same
> infrastructure involving the RR2s?
> Well, these two iBGP infrastructures should work as ships in the night. If
> one infrastructure breaks at some point, you still get all your prefixes to
> the clients/RRs in the affected POPs via the other infrastructure.
> That said, both of these iBGP infrastructures need to carry the same set of
> prefixes, so the memory and CPU resources needed are proportional only to
> the amount of information carried
> - but neither of them needs to carry the set of prefixes twice, see below.
>
> Yes, you could argue that if A loses its session to RR1 and B loses its
> session to RR2, then A and B can't communicate, but the point is that PEs
> just don't lose sessions to RRs - these are iBGP sessions that can route
> around - so the only scenario where this happens is misconfiguration, and
> trust me, you'll know right away that you broke something.
> Then you can argue: OK, what if I have A to RR1-pop1 to RR1-pop2 to B AND
> A to RR2-pop1 to RR2-pop2 to B, AND say RR1-pop1 as well as RR2-pop2 fail
> at the same time - then A and B can't communicate.
> Fair point, that will certainly happen, but what is the likelihood of it
> happening? Well, it's the MTBF of RR1-pop1 times the MTBF of RR2-pop2,
> which is fine for me and, I bet, for most folks out there.
>
>
> adam
>
> netconsultings.com
> ::carrier-class solutions for the telecommunications industry::
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/



-- 
  ++ytti

