[j-nsp] BGP full mesh or route reflector

Saku Ytti saku at ytti.fi
Fri Dec 5 11:46:38 EST 2025


Agree on 3, since there is a significant risk of an operator mistake
taking out the remaining RR during investigation of the organic problem
that took out the first one.

But of course Mark didn't suggest any specific design; he just
mentioned that with >1 clients he'd deploy RRs.
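
For a rough sense of the scaling difference under discussion, here is a
minimal back-of-the-envelope sketch in Python (purely illustrative; the
~100 PEs come from Johan's question and the 3 RRs from Aaron's plan,
and the function names are made up for the example):

# Rough iBGP session counts: full mesh vs. route reflectors.
# Illustrative only -- figures taken from the thread (~100 PEs, 3 RRs).

def full_mesh_sessions(n_routers: int) -> int:
    """Every router peers with every other router: n*(n-1)/2 sessions."""
    return n_routers * (n_routers - 1) // 2

def rr_sessions(n_clients: int, n_rrs: int) -> int:
    """Each client peers with every RR, plus a full mesh among the RRs."""
    return n_clients * n_rrs + full_mesh_sessions(n_rrs)

pes, rrs = 100, 3
print("full iBGP mesh, total sessions:", full_mesh_sessions(pes))  # 4950
print("3 RRs, total sessions:         ", rr_sessions(pes, rrs))    # 303
print("sessions per PE, full mesh:    ", pes - 1)                  # 99
print("sessions per PE, RR design:    ", rrs)                      # 3

Either way the scaling argument is visible in the numbers: the
full-mesh session count grows quadratically with router count, while
the RR design keeps per-PE sessions constant as routers are added.
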

On Fri, 5 Dec 2025 at 18:33, Aaron1 via juniper-nsp
<juniper-nsp at puck.nether.net> wrote:
>
> I’m planning a migration to move off my dual-RR hub architecture, which is on aging ASR9Ks, to newer MX960s… and during my planning I’m thinking of adding a 3rd. Just makes me feel better.
>
> Aaron
>
> > On Dec 5, 2025, at 9:12 AM, Mark Tinka via juniper-nsp <juniper-nsp at puck.nether.net> wrote:
> >
> > 
> >
> >> On 05/12/2025 14:31, Johan Borch via juniper-nsp wrote:
> >>
> >> Hi!
> >>
> >> In an SR/MP-BGP underlay, will using a full iBGP mesh instead of route
> >> reflectors have a significant impact on device performance, or any other
> >> drawbacks? Let’s say we will end up with around 100 PE routers. These
> >> routers will not carry an excessive number of prefixes (no full tables).
> >> We can ignore the configuration part, as configuration is auto-generated.
> >
> > I'd say if you have 2 or more routers, go with RRs. You will always grow into more routers, and RR-based iBGP routing will ease your scaling logistics.
> >
> > It certainly won't complicate your life.
> >
> > Mark.



-- 
  ++ytti

