[j-nsp] BGP full mesh or route reflector

James Bensley lists+junipernsp at bensley.me
Wed Jan 21 02:30:13 EST 2026



On Friday, 5 December 2025 at 13:35, Johan Borch via juniper-nsp <juniper-nsp at puck.nether.net> wrote:
> Hi!
>
> In an SR/MP-BGP underlay, will it have a significant impact on device
> performance if we use a full iBGP mesh instead of route reflectors or other
> drawbacks? Let’s say we will end up with around 100 PE routers. These
> routers will not carry an excessive number of prefixes (no full tables).
> We can ignore the configuration part as configuration is auto-generated.

Hi Johan,

There are a lot of variables to consider, but it can certainly work. To give you some data points from working setups...

Based on your description, a private MPLS network carrying only internal/private routes, 100 PEs in a full iBGP mesh is fine. I've run a roughly 100-PE network on Cisco ME3600s/ME3800s in a full iBGP mesh: no public routing, just a fully private MPLS network providing private L3 VPNs, with maybe 1000 routes.
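For a sense of scale: 100 PEs in a full mesh is 100 x 99 / 2 = 4,950 iBGP sessions across the whole network, but each individual router only holds 99 of them, which is trivial for any modern control plane. If it helps to picture it, below is a rough Junos-flavoured sketch of what a full-mesh PE group looks like (addresses made up, and obviously you'd template this rather than type it, as you say your config is auto-generated):

    protocols {
        bgp {
            group IBGP-MESH {
                type internal;
                local-address 10.255.0.1;      /* this PE's loopback */
                family inet-vpn {
                    unicast;                   /* L3 VPN routes */
                }
                neighbor 10.255.0.2;           /* one neighbor per remote PE */
                neighbor 10.255.0.3;
                /* ...and so on, 99 neighbors on a 100-PE mesh */
            }
        }
    }

The only thing that grows is the neighbor list; the session and RIB load per router stays modest at your route scale.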

That's going back some years (Cisco ME3600/3800!) and they had/have tiny little single-core PPC CPUs (if memory serves). CPU usage was consistently low, though, due to the low route scale and minimal route churn. We also ran a full mesh of RSVP LSPs back then, which was likewise fine at such low scale and low churn.

Later I remember running a similarly sized network, again full mesh, on ASR9001s with quad-core PPC CPUs. Absolutely rubbish CPUs by today's standards, but when you have so few routes and so little churn, it's no problem.

At my current employer we run an ISP/carrier network with 9M paths in the BGP RIB. This is also nearly 100 PEs in a full iBGP mesh. We wanted to deploy this network with RRs from day one, but we run everything in EVPN and wanted to use ORR, so we had to wait for our vendor to support ORR for EVPN. They have now implemented it, and our migration to virtual RRs using EVPN ORR is nearly done (all PE-to-RR iBGP sessions are up, RR-to-RR sessions are up). The last step is to remove the band-aid that keeps the PE-to-RR BGP sessions less preferred, and then start stripping away the iBGP full mesh.
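Since this is the Juniper list: the RR side of that design looks roughly like the Junos-flavoured sketch below. The addresses and group names are made up, and our boxes are actually Arista, so treat it as an illustration of the idea rather than our exact config. The cluster ID is what turns the neighbors into RR clients, family evpn signaling carries the EVPN routes, and optimal-route-reflection tells the RR to run best-path selection from the client's position in the IGP topology rather than from the RR's own position.

    protocols {
        bgp {
            group RR-CLIENTS {
                type internal;
                local-address 10.255.1.1;          /* RR loopback */
                cluster 10.255.1.1;                /* makes these neighbors RR clients */
                family evpn {
                    signaling;
                }
                optimal-route-reflection {
                    igp-primary 10.255.0.1;        /* pick best paths from this vantage point */
                }
                neighbor 10.255.0.1;               /* PE loopbacks */
                neighbor 10.255.0.2;
            }
        }
    }

Without ORR, a centrally placed RR reflects its own idea of the closest exit to every client, which is exactly what you don't want when clients are spread over a large geography.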

But up until now, the devices have been fine. It's 2026 (just about); routers these days have 64 GB of RAM and a 6-core / 12-thread AMD Ryzen at > 3 GHz. They eat BGP routes for breakfast (we're an Arista shop; these are 7280R3 "-A" models).

With the number of paths rapidly increasing, and the number of PEs also rapidly increasing, in our case the full mesh isn't going to scale for much longer. We knew from day one we'd need to get to RRs, so RRs were always planned; we just had to wait for vendor support. But it's been fine until now. In the previous networks I mentioned, RRs were never planned, due to the low route scale and low route churn, and those networks were also fine.

Some other factors to consider: I have worked on networks handling emergency call routing, air traffic control, and other sensitive traffic. With a full iBGP mesh, convergence is at its quickest (depending on the update groups): an interface goes down on router A, and it informs router B directly, immediately. With RRs, a withdraw propagates only as fast as your slowest RR, because every RR in the path needs to process the withdraw before the PE finally drops the route. On the network I currently work on, due to the large geographical scale (and exhausting the number of ORR groups our vendor supports) we have multiple RR groups peered together, and with millions of paths, convergence won't be fast.

There are always pros and cons, just gotta find the gotchas (or turn your pager off!).

Cheers,
James.