[c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

adamv0025 at netconsultings.com
Thu Mar 15 06:18:45 EDT 2018


Actually, fully-meshed RR's do "reflect" routes to each other.
They are still breaking the iBGP-to-iBGP rule; they are just exercising the "client to non-client" part of rule 2) when relaying a route from a local cluster to RRs in other clusters, and in turn those RRs in other clusters employ the "non-client to client" part of rule 1) to relay the route to their own local-cluster clients:
1) A route from a Non-Client IBGP peer:
         Reflect to all the Clients.
2) A route from a Client peer:
         Reflect to all the Non-Client peers and also to the Client
         peers.  (Hence the Client peers are not required to be fully
         meshed.)
But I see what you mean, it's not client-to-client reflection; that happens only between clients within each individual local cluster.
      CtoNC    NCtoC
C---->RR----->RR----->C
      |
      | CtoC
      v
      C
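
For illustration, a minimal IOS-XR sketch of the two roles (the AS number, cluster ID and neighbor addresses below are made up): the local-cluster client gets route-reflector-client under its address-family, while the RR in the other cluster is configured as a plain iBGP (non-client) neighbor, and the reflection rules above then decide what each of them is sent:

router bgp 64500
 bgp cluster-id 0.0.0.1
 address-family ipv4 unicast
 !
 ! pe1-cluster1, a client in the local cluster
 neighbor 192.0.2.11
  remote-as 64500
  update-source Loopback0
  address-family ipv4 unicast
   route-reflector-client
 !
 ! rr1-cluster2, an RR in another cluster, peered as a non-client
 neighbor 192.0.2.21
  remote-as 64500
  update-source Loopback0
  address-family ipv4 unicast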


Anyway, I understand your design choice and it makes sense in your environment: your iBGP infrastructure currently carries a single copy of the DFZ routing table, there is most likely some path-hiding going on as the RRs perform best-path selection, and on top of that there is a plethora of resources on the RRs. So I agree there's no need to deploy any scaling tuning; it would just be added complexity with no perceived benefit.
Maybe you'll start looking at some scaling techniques once you need to transport multiple paths per prefix for load-sharing or primary-backup use cases, say to reduce Internet convergence times from 2 minutes down to less than 1 ms (MX with 2M prefixes).
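If that day comes, BGP add-path is one such technique. A rough IOS-XR sketch (the AS number and policy name are made up, and the add-path capability still has to be negotiated with the neighbors) of installing and advertising a backup path alongside the best path:

route-policy ADD-PATH
  set path-selection backup 1 install
end-policy
!
router bgp 64500
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy ADD-PATH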


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

From: Mark Tinka [mailto:mark.tinka at seacom.mu] 
Sent: Tuesday, March 13, 2018 11:29 PM
To: adamv0025 at netconsultings.com; 'Saku Ytti'
Cc: 'Job Snijders'; 'Cisco Network Service Providers'
Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)


On 13/Mar/18 18:47, adamv0025 at netconsultings.com wrote:
Ok, you’re still missing the point; let me try with the following example.

Now suppose we both have: 
pe1-cluster1 sending prefix X to rr1-cluster1 and rr2-cluster1, which then reflect it further to the RRs in cluster2

Okay, so just to be pedantic, fully-meshed RR's don't "reflect" routes to each other (they can, but it's redundant). 

But I get what you're trying to say...



Now in your case: 
rr1-cluster2 receives prefix X from both rr1-cluster1 and rr2-cluster1, so how many paths will it keep? Yes, 2.
rr2-cluster2 receives prefix X from both rr1-cluster1 and rr2-cluster1, so how many paths will it keep? Yes, 2.

In my case:
rr1-cluster2 receives prefix X from rr1-cluster1 only, so how many paths will it keep? Yes, 1.
rr2-cluster2 receives prefix X from rr2-cluster1 only, so how many paths will it keep? Yes, 1.

Yes, understood.

So in our case, we are happy to hold several more paths this way within our RR infrastructure in exchange for a standard, simple configuration, i.e., a full-mesh amongst all RR's.

Having to design RR's such that RR1-Cluster-A only peers with RR1-Cluster-B through RR1-Cluster-Z, rinse and repeat for RR2-*, is just operational complexity that requires too much tracking. Running the network is hard enough as it is.
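
For what it's worth, the full mesh amongst the RR's templates down to very little IOS-XR configuration; a sketch with made-up AS number and loopback addresses, where every RR simply points the same neighbor-group at every other RR's loopback:

router bgp 64500
 neighbor-group RR-MESH
  remote-as 64500
  update-source Loopback0
  address-family ipv4 unicast
 !
 neighbor 192.0.2.21
  use neighbor-group RR-MESH
 neighbor 192.0.2.22
  use neighbor-group RR-MESH
 neighbor 192.0.2.23
  use neighbor-group RR-MESH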

We are taking full advantage of the processing and memory power we have on our x86 platforms to run our RR's. We don't have the typical constraints associated with purpose-built routers configured as RR's. It's been a long time since I had dedicated Juniper M120's running as RR's - I'm never going back to those days :-).

Mark.


