[j-nsp] Hardware configuration for cRPD as RR

Vincent Bernat bernat at luffy.cx
Fri Feb 9 14:07:34 EST 2024


Juniper does not have a lot of guidelines on this. This is a bit 
surprising to us too. I would have expected some guidance about IRQ and 
CPU pinning. It seems they think this does not matter much for an RR.
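
As a rough sketch (not official Juniper guidance), a plain Docker 
deployment can at least pin cRPD to dedicated cores and cap its memory 
with standard Docker flags; the container name, image tag, core range 
and volume mounts below are only illustrative assumptions:

  docker run --rm --detach --name crpd-rr1 \
    --cpuset-cpus 2-5 --memory 16g \
    --privileged --network host \
    -v crpd-rr1-config:/config -v crpd-rr1-varlog:/var/log \
    crpd:23.4R1.9

IRQ affinity stays a host-level matter (irqbalance, or writing to 
/proc/irq/*/smp_affinity) and mostly matters if the RR terminates many 
BGP sessions on a busy NIC.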

However, cRPD offers better performance than vRR, and therefore 
Juniper is pushing cRPD instead of vRR.

On 2024-02-08 08:50, Roger Wiklund via juniper-nsp wrote:
> Hi
> 
> I'm curious, when moving from vRR to cRPD, how do you plan to manage/setup
> the infrastructure that cRPD runs on?
> 
> BMS with basic Docker or K8s? (kind of an appliance approach)
> VM in hypervisor with the above?
> Existing K8s cluster?
> 
> I can imagine that many networking teams would like an AIO cRPD appliance
> from Juniper, rather than giving away the "control" to the server/container
> team.
> 
> What are your thoughts on this?
> 
> Regards
> Roger
> 
> 
> On Tue, Feb 6, 2024 at 6:02 PM Mark Tinka via juniper-nsp <
> juniper-nsp at puck.nether.net> wrote:
> 
>>
>>
>> On 2/6/24 18:53, Saku Ytti wrote:
>>
>>> Not just opinion, fact. If you see everything, ORR does nothing but add
>>> cost.
>>>
>>> You only need AddPath and ORR when seeing everything is too expensive,
>>> but you still need good choices.
>>>
>>> But even if you have the resources to see everything, you may not
>>> actually want a lot of useless signalling and overhead, as it adds
>>> convergence time and increases the risk of rare bugs surfacing. In the
>>> case where I deployed it, seeing everything was simply not realistic:
>>> it would have meant the network upgrade cycle was dictated by how many
>>> peers were added, with RIB scale forcing a full upgrade cycle despite
>>> not yet having sold the ports already paid for.
>>> You shouldn't need to upgrade your boxes because your RIB/FIB doesn't
>>> scale; you should only need to upgrade them when you have no holes
>>> left to stick paying fiber into.
>>
>> I agree.
>>
>> We started with 6 paths to see how far the network could go, and how
>> well ECMP would work across customers who connected to us in multiple
>> cities/countries with the same AS. That was exceedingly successful:
>> customers were very happy that they could increase their capacity
>> through multiple, multi-site links without paying anything extra, while
>> improving performance all around.
>>
>> Same for peers.
>>
>> But yes, it does cost a lot of control plane for anything less than 32GB
>> on the MX. The MX204 played well if you unleashed its "hidden memory"
>> hack :-).
>>
>> This was not a massive issue for the RRs, which were running on CSR1000v
>> (now replaced with Cat8000v). But it certainly did test the 16GB
>> Juniper REs we had.
>>
>> The next step, before I left, was to work out how far we could reduce
>> the number of paths from 6 without losing the gains we had made for our
>> customers and peers. That would have lowered pressure on the control
>> plane, but I'm not sure how it would have impacted the improvement in
>> multi-site load balancing.
>>
>> Mark.
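
For readers less familiar with the knobs discussed above, this is 
roughly what AddPath and optimal route reflection look like on a Junos 
route reflector; the group name, cluster ID, path count and addresses 
are placeholders, and exact statements may differ between releases:

  set protocols bgp group IBGP-RR type internal
  set protocols bgp group IBGP-RR cluster 192.0.2.1
  set protocols bgp group IBGP-RR family inet unicast add-path send path-count 6
  set protocols bgp group IBGP-RR optimal-route-reflection igp-primary 192.0.2.10

Clients would configure "add-path receive" to accept the extra paths. 
Every additional path advertised multiplies RIB state on the clients, 
which is the control-plane cost Mark describes, while the ORR statement 
makes the RR select paths from the clients' IGP vantage point rather 
than from its own position.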

