[rbak-nsp] CGNAT performance issues
Mariusz K. Grzeca
mgrzeca at jmdi.pl
Mon Jan 27 07:30:09 EST 2020
Slot 1, Ingress:
Microblock counters:
used count : 7186
unassigned count : 48148
free count : 10202
Slot 3, Ingress:
Microblock counters:
used count : 6906
unassigned count : 48191
free count : 10439
Slot 5, Ingress:
Microblock counters:
used count : 5703
unassigned count : 49470
free count : 10363
Slot 6, Ingress:
Microblock counters:
used count : 7824
unassigned count : 47772
free count : 9940
Slot 9, Ingress:
Microblock counters:
used count : 6557
unassigned count : 48701
free count : 10278
Slot 10, Ingress:
Microblock counters:
used count : 3915
unassigned count : 51661
free count : 9960
Slot 11, Ingress:
Microblock counters:
used count : 6539
unassigned count : 48708
free count : 10289
Slot 12, Ingress:
Microblock counters:
used count : 6295
unassigned count : 48943
free count : 10298
Slot 13, Ingress:
Microblock counters:
used count : 6393
unassigned count : 48752
free count : 10391
There are currently no more than 3k NAT subscribers per line card, and fewer than 15k microblocks are used during the evening traffic peaks.
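For reference, this is a minimal sketch of how I summarize these counters (Python; the capture filename is just an example, and the 65,536 total per card is simply the sum of the three counters in every block pasted above):

#!/usr/bin/env python3
# Minimal sketch: parse "sh card <slot> nat allocation" output captured to a
# plain-text file and print per-slot microblock usage. Assumes the exact
# layout pasted above; used + unassigned + free adds up to 65536 on every
# slot in this thread, so usage is reported against that total.
import re
import sys

slot_re = re.compile(r"Slot (\d+), Ingress:")
counter_re = re.compile(r"(used|unassigned|free) count\s*:\s*(\d+)")

def parse(lines):
    slots, current = {}, None
    for line in lines:
        m = slot_re.search(line)
        if m:
            current = int(m.group(1))
            slots[current] = {}
            continue
        m = counter_re.search(line)
        if m and current is not None:
            slots[current][m.group(1)] = int(m.group(2))
    return slots

if __name__ == "__main__":
    # Hypothetical usage: python3 microblocks.py nat_allocation.txt
    with open(sys.argv[1]) as fh:
        slots = parse(fh)
    for slot, c in sorted(slots.items()):
        total = c["used"] + c["unassigned"] + c["free"]  # 65536 in the dump above
        print(f"Slot {slot:2}: used {c['used']:6} of {total} microblocks "
              f"({100.0 * c['used'] / total:.1f}% used)")

Against the dump above that works out to roughly 6-12% of microblocks used per ingress slot, versus ~53% in the single-slot sample quoted below.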
Mariusz
> Message written by Grzegorz Czarnota - Beskid Media Sp. z o.o. <grzegorz.czarnota at beskidmedia.pl> on 27.01.2020 at 13:24:
>
> Hello,
> check usage of microblock on linecard:
>
> sh card 1 nat allocation
>
> Slot 1, Ingress:
> Microblock counters:
> used count : 34492
> unassigned count : 25062
> free count : 5982
>
>
> On 27.01.2020 at 12:54, Mariusz K. Grzeca wrote:
>> Hi,
>>
>> We are currently experiencing some major performance issues with one of our SEs.
>>
>> Our platform is an SE1200 with 2xXCRP4 and nine 10ge-4-port cards, running SEOS-12.1.1.12p15-Release. It has 7 BGP4 peers with a total of ~40Gbps of throughput during the evening traffic peaks and around 30k active CLIPS subscribers, of which ~25k have a NAT policy attached (enhanced NAT with logging and paired mode).
>>
>> Initially we had 2 line cards intended for BGP sessions only, 5 cards for CLIPS sessions and 2 cards reserved for other low-throughput purposes. The first symptom we noticed was reduced bandwidth for subscribers with 1Gbps service plans (500-600 Mbps instead of the usual 941 Mbps), affecting NAT clients only. A couple of weeks later the BGP sessions started flapping. At first it looked like we were hitting the 20Gbps-per-card limit, and rising tail drop counters on the BGP cards seemed to confirm it.
>>
>> So we re-cabled and came up with a different setup - 7 cards with at most 2 ports connected each, one for a BGP peer and the other for CLIPS sessions. That actually made things worse: rising tail drop counters on every card, even lower bandwidth during the evening traffic peaks, and the BGP sessions kept flapping. To stop the flapping we moved most of the BGP peers to another SmartEdge router and added 2 more 10ge-4-port cards for CLIPS sessions. The result: no more flapping, but no change in either bandwidth or tail drop counters.
>>
>> I would be grateful for any suggestions as to the possible causes of this situation.
>>
>>
>> Thanks.
>>