[j-nsp] ex4500 best-effort drops nowhere near congested
joel jaeggli
joelja at bogus.com
Thu May 2 17:22:41 EDT 2013
On 5/2/13 1:24 PM, Benny Amorsen wrote:
> joel jaeggli <joelja at bogus.com> writes:
>
>> There's literally no options in between. so a 1/10Gb/s TOR like the
>> force10 s60 might have 2GB of shared packet buffer, while an
>> arista 7050s-64 would have 9MB for all the ports, assuming you run it
>> as all 10Gb/s rather than 100/1000/10000/40000 mixes of ports it can
>> cut-through-forward to every port which goes a long way toward
>> ameliorating your exposure to shallow buffers.
> Why does cut-through help so much? In theory it should save precisely
> one packets worth of memory, i.e. around 9kB per port. 500kB extra
> buffer for the whole 50-port switch does not seem like a lot.
Until there's contention on the output side, you should only have one
packet in the output queue at a time for each port on a cut-through
switch, which is ~96KB of buffer for 1500-byte frames on a 64-port switch.
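That worst case is simple arithmetic; a quick sketch (assuming the 64
ports and 1500-byte frames from the example above):

```python
# Worst-case in-flight buffering on a cut-through switch: roughly one
# frame per output port at a time (assumption: 64 ports, 1500-byte
# frames, as in the example above).
PORTS = 64
FRAME_BYTES = 1500

buffer_bytes = PORTS * FRAME_BYTES
print(buffer_bytes)  # 96000 bytes, i.e. ~96KB
```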
Store-and-forward means you hold onto the packet a lot longer
mechanically, even if nominally you are able to forward at line rate so
long as there's always a packet in the output queue to put on the wire.
Consider that the fastest cut-through 10Gb/s switches now have around
0.4usec of latency, while your 1500-byte packet takes ~1.2usec to arrive.
When adding rate conversion, consider that for a flow going from a
10Gb/s port to a 1Gb/s port, another 1500-byte packet can arrive every
~1.2usec but you can only clock them back out every ~12usec. Jumbos just
push the opportunities to queue for rate conversion out that much further.
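The serialization numbers above fall straight out of frame size over
link rate; a quick sketch (assuming 1500-byte frames and the 10Gb/s and
1Gb/s rates discussed above):

```python
# Serialization delay = frame size (bits) / link rate (bits/sec).
FRAME_BITS = 1500 * 8   # 1500-byte frame

t_10g = FRAME_BITS / 10e9   # time for one frame to arrive at 10Gb/s
t_1g = FRAME_BITS / 1e9     # time to clock one frame out at 1Gb/s

print(f"arrives every {t_10g * 1e6:.1f} usec at 10Gb/s")  # ~1.2 usec
print(f"drains every {t_1g * 1e6:.1f} usec at 1Gb/s")     # ~12.0 usec
```

So during a sustained 10Gb/s-to-1Gb/s burst, roughly nine of every ten
arriving frames have to sit in the buffer waiting for the slow port.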
> Lots of people say that cut-through helps prevents packet loss due to
> lack of buffer, so something more complicated must be happening.
>
>
> /Benny
>