[c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

adamv0025 at netconsultings.com
Tue Mar 13 07:04:32 EDT 2018


Keeping RR1s separate from RR2s is all about memory efficiency.

I work from the premise that a full mesh between RR1s is all that is needed to distribute all the routing information across the whole backbone.
The RR2s form an exact mirror of the RR1 infrastructure (same topology, same set of prefixes) and are there just in case something happens to the RR1 infrastructure (merely a backup).
This model is *50% more memory-efficient on every RR in comparison with the model where RR1s and RR2s are in one full mesh.
*This holds when Type 1 RDs (unique per PE) are used, so no state is lost on RR-to-RR sessions.
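To illustrate that footnote: a minimal IOS-XR sketch of a per-PE Type 1 RD (IP-address:number format), with the VRF name, AS number and addresses invented for the example. Because each PE's RD embeds its own loopback, the same customer prefix from different PEs stays distinct on the RRs:

vrf CUST-A
 address-family ipv4 unicast
 !
!
router bgp 64512
 vrf CUST-A
  ! Type 1 RD: local PE loopback (10.0.2.1) plus an assigned number,
  ! so every PE originates the prefix with a distinct RD and the RRs
  ! retain each path as separate state.
  rd 10.0.2.1:100
  address-family ipv4 unicast
  !
 !
!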
The same rationale lies behind configuring the sessions between RRs as non-client sessions.
- In combination with separate RR1 and RR2 infrastructures, each RR (RR1 or RR2) in the network learns only one path for any single-homed prefix.
- If RR1s and RR2s were all mixed in one full mesh, then each RR would get 2 paths.
That is double the amount of state to keep on every RR in comparison with separate RR1 and RR2 infrastructures.
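For concreteness, a minimal IOS-XR sketch of one RR1 in this design; the AS number, addresses and group names are invented for the example. The key point is simply that the RR1-to-RR1 mesh sessions lack route-reflector-client while the PE-facing sessions have it:

router bgp 64512
 bgp router-id 10.0.0.1
 bgp cluster-id 10.0.0.1            ! unique per RR in this sketch
 address-family vpnv4 unicast
 !
 neighbor-group RR1-MESH            ! full mesh among RR1s only
  remote-as 64512
  update-source Loopback0
  address-family vpnv4 unicast      ! no route-reflector-client,
  !                                 ! i.e. a non-client session
 !
 neighbor-group PE-CLIENTS
  remote-as 64512
  update-source Loopback0
  address-family vpnv4 unicast
   route-reflector-client
  !
 !
 neighbor 10.0.0.2                  ! another RR1; RR2s are not
  use neighbor-group RR1-MESH       ! peered from here at all
 !
 neighbor 10.0.2.1                  ! a PE client
  use neighbor-group PE-CLIENTS
 !
!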

If sessions between RRs are configured as client sessions AND
- you keep RR1s separate from RR2s, then each RR learns N-1 paths for each single-homed prefix, where N is the number of RR1s;
- you have a full mesh between RR1s and RR2s, then each RR learns N-1 paths for each single-homed prefix, where N is the number of all RRs (RR1s + RR2s).
This clearly does not scale, as the amount of state on each RR grows with the number of RRs in the network.
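To put illustrative numbers on it (my own example): with four RR1s and four RR2s, a client-session full mesh of all eight leaves each RR holding 8-1 = 7 paths per single-homed prefix; separate client-session planes still leave 4-1 = 3 paths on each RR1; the separated non-client design keeps it at exactly 1.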

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::



