[j-nsp] Cut through and buffer questions

james list jameslist72 at gmail.com
Fri Nov 19 04:49:54 EST 2021


Hi
I mentioned both MX and QFX (the output relates to a QFX5100) in the first
email because the traffic pattern spans both.
I never mentioned the Internet.

I also understand that cut-through cannot help, but obviously I cannot
replace the QFX switches just because we lose a few UDP packets for a
single application. The idea would be to take shared buffer space away
from the unused queues and add it to the one in use, correct?
Based on the output provided, what would you suggest changing?
I also understand that this kind of change is traffic affecting.

I also need to understand how the shared buffer pools on the QFX map to
the CoS queues.
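
To make the question concrete: is this roughly the right area to touch?
A sketch of what I believe are the relevant knobs on a QFX5100 (the
partition names and percentages are my assumptions from the shared-buffer
documentation, not taken from our running config):

  set class-of-service shared-buffer ingress buffer-partition lossless percent 5
  set class-of-service shared-buffer ingress buffer-partition lossless-headroom percent 10
  set class-of-service shared-buffer ingress buffer-partition lossy percent 85
  set class-of-service shared-buffer egress buffer-partition lossless percent 5
  set class-of-service shared-buffer egress buffer-partition multicast percent 10
  set class-of-service shared-buffer egress buffer-partition lossy percent 85
  show class-of-service shared-buffer

My understanding is that the lossy partition backs the best-effort queues
and the lossless partition backs the PFC/no-loss forwarding classes, but
that mapping is exactly what I would like confirmed.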

Thanks, cheers
James



On Fri, 19 Nov 2021 at 10:07, Saku Ytti <saku at ytti.fi> wrote:

> On Fri, 19 Nov 2021 at 10:49, james list <jameslist72 at gmail.com> wrote:
>
> Hey,
>
> > I will try to rephrase the question you did not understand: if I
> > enable cut-through or change the buffers, is it traffic affecting?
>
> There is no cut-through, and I was hoping that after reading the
> previous email you'd understand why it won't help you at all, nor is
> it desirable. Changing the QoS config may be traffic affecting, but
> you likely do not have the monitoring capability to observe it.
>
> > Regarding the drops, here are the outputs (15h after clearing statistics):
>
> You talked about MX, so I answered from MX perspective. But your
> output is not from MX.
>
> The device you actually show has exceedingly tiny buffers and is not
> meant for Internet WAN use; that is, it does not expect the sender
> rate to be significantly higher than the receiver rate combined with a
> high RTT. It is meant for datacenter use, where RTT is low and the
> speed delta is small.
>
> On the real-life Internet you need larger buffers because of this path:
> senderPC => internets => receiverPC
>
> Let's imagine an RTT of 200 ms, a 10GE receiver and a 100GE sender.
> - 10 Gbps * 200 ms = 250 MB TCP window needed to fill the path
> - as TCP windows grow exponentially in the absence of loss, you could
> see growth from 128 MB to 250 MB
> - this means senderPC might serialise 128 MB of data at 100 Gbps
> - this 128 MB can only be delivered at 10 Gbps, so the rest has to be
> absorbed by the buffers
> - this is an intentionally pathological example
> - the 'easy' fix is that the sender does not burst the data at its own
> rate, but estimates the receiver rate and sends the window growth at
> that rate; this practically removes the buffering need entirely
> - the 'easy' fix is not standard behaviour, but thankfully some
> cloudyshops configure their Linux hosts like this (Linux already does
> bandwidth estimation, and you can ask 'tc' to pace the session to the
> estimated bandwidth; see the sketch below)
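>
> As a minimal sketch of that pacing, assuming a Linux sender with
> 'eth0' as its egress interface (the interface name and the 10 Gbps
> figure are placeholders for this example, not your numbers):
>
>   # pace every flow and cap it near the receiver's rate with sch_fq
>   tc qdisc replace dev eth0 root fq maxrate 10gbit
>   # or let BBR estimate the bottleneck bandwidth and pace to it
>   sysctl -w net.ipv4.tcp_congestion_control=bbr
>
> Either way the burst leaves the host already smoothed to roughly the
> receiver rate, so the switch in the middle no longer has to absorb it.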
>
> What you need to do is change the device to one that is intended for
> the application you have.
> If you can do anything at all, it is to ensure that you have the
> minimum number of QoS classes and that those classes get the maximum
> amount of buffer, so that unused queues aren't holding empty memory
> while the used queue is starving. But even this will have only a
> marginal benefit.
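>
> As a sketch only of that direction, assuming a single best-effort
> class carries the UDP traffic and 'xe-0/0/10' is the congested egress
> (names and percentages are placeholders; on QFX you may need to attach
> the scheduler via a traffic-control-profile instead):
>
>   set class-of-service schedulers BIG-BE buffer-size percent 90
>   set class-of-service schedulers BIG-BE transmit-rate percent 95
>   set class-of-service scheduler-maps ONE-Q forwarding-class best-effort scheduler BIG-BE
>   set class-of-service interfaces xe-0/0/10 scheduler-map ONE-Q
>
> The point is only to stop reserving buffer for queues that never carry
> traffic, not to tune anything finely.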
>
> Cut-through does nothing, because your egress is congested; you can
> only benefit from cut-through when the egress is not congested.
>
>
>
> --
>   ++ytti
>

