[j-nsp] ex4500 best-effort drops nowhere near congested

ryanL ryan.landry at gmail.com
Tue May 7 16:01:52 EDT 2013


good discussion. the tl;dr - nothing i can do about it. right?

On Thu, May 2, 2013 at 2:51 PM, Michael Loftis <mloftis at wgops.com> wrote:
> I was finally able to get this explained via a third party who designs
> these things ...
>
> Basically in S&F you have an input and output queue, per port.  When
> port 1 sends to port 2 frames are moved from 1's input queue to 2's
> output queue. If 2's out queue fills, it blocks back into 1's input
> queue.  This causes drops not only for frames destined for port 2 but
> unrelated frames as well.  In CT mode they get rid of the input queue,
> and use that space for the output.  When a port's output queue fills,
> drops for that port still happen, but drops for other, unaffected
> ports, now do not happen.  CT mode also means the frame is transmitted
> much earlier in the 1G-1G and 10G-1G modes (as soon as the ethernet
> header has arrived) when there's no congestion.  So frames without an
> interframe gap aren't as problematic either (sometimes the cause of
> microburst drops: insufficient interframe gap for the CRC computation
> and the switching to complete before buffers fill).
>
> At least now I understand how/why it improves things more than just
> deeper buffers.  Basically unrelated traffic is unaffected, whereas
> with S&F mode, unrelated traffic can get backed up and lots of frames
> get dropped that have nothing to do with the actual bottleneck.
>
>
>
>
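
to make michael's head-of-line-blocking point concrete, here's a minimal
python sketch (not EX4500 code; the queue depth and frame pattern are
made up) contrasting the two modes. port 1 receives frames for a
congested port 2 and an idle port 3; only the store-and-forward model
drops port-3 frames.

from collections import deque

DEPTH = 4                             # hypothetical input-queue depth, frames
CONGESTED = 2                         # port 2's output queue is full throughout
arrivals = [2, 2, 3, 3, 2, 3, 3, 3]   # destination port of each frame on port 1

def store_and_forward(arrivals):
    """One FIFO input queue on the ingress port: a frame for the
    congested port blocks the head, so frames for the idle port
    behind it are dropped once the input queue fills."""
    in_q, sent, dropped = deque(), [], []
    for dst in arrivals:
        if len(in_q) >= DEPTH:
            dropped.append(dst)       # input queue full: collateral drop
        else:
            in_q.append(dst)
        while in_q and in_q[0] != CONGESTED:
            sent.append(in_q.popleft())   # unblocked head moves on
    return sent, dropped

def cut_through(arrivals):
    """No input queue: each frame lands directly on its destination's
    output queue, so only frames for the congested port are dropped."""
    sent, dropped = [], []
    for dst in arrivals:
        (dropped if dst == CONGESTED else sent).append(dst)
    return sent, dropped

print("S&F:", store_and_forward(arrivals))   # port-3 frames dropped too
print("CT :", cut_through(arrivals))         # only port-2 frames dropped
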
> On Thu, May 2, 2013 at 2:22 PM, joel jaeggli <joelja at bogus.com> wrote:
>> On 5/2/13 1:24 PM, Benny Amorsen wrote:
>>>
>>> joel jaeggli <joelja at bogus.com> writes:
>>>
>>>> There are literally no options in between. so a 1/10Gb/s TOR like
>>>> the force10 s60 might have 2GB of shared packet buffer, while
>>>> something like an arista 7050s-64 would have 9MB for all the ports.
>>>> assuming you run it as all 10Gb/s rather than mixes of
>>>> 100/1000/10000/40000 ports, it can cut-through-forward to every
>>>> port, which goes a long way toward ameliorating your exposure to
>>>> shallow buffers.
>>>
>>> Why does cut-through help so much? In theory it should save precisely
>>> one packet's worth of memory, i.e. around 9kB per port. 500kB extra
>>> buffer for the whole 50-port switch does not seem like a lot.
>>
>>
>> Until there's contention on the output side, you should only have one
>> packet in the output queue at a time for each port on a cut-through
>> switch, which is like 96K of buffer for 1500-byte frames on a 64-port
>> switch.
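
joel's 96K figure is just ports times one full-size frame. a quick
sanity check in python (64 ports and 1500-byte frames, per his numbers):

PORTS = 64
FRAME = 1500                      # bytes, one full-size ethernet frame
# cut-through, no contention: at most one frame resident per output queue
print(PORTS * FRAME, "bytes")     # 96000 bytes, i.e. ~96K
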
>>
>> Store and forward means you mechanically hold onto the packet a lot
>> longer, even if nominally you are able to forward at line rate so long
>> as there's always a packet in the output queue to put on the wire.
>> consider that the fastest cut-through 10Gb/s switches now are around
>> .4usec port-to-port and your 1500-byte packet takes ~1.2usec just to
>> arrive.
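
the ~1.2usec figure is just serialization delay; the .4usec cut-through
latency is joel's quoted number, not computed here. a quick check in
python:

def wire_time_us(frame_bytes, rate_bps):
    # serialization delay: time for the whole frame to arrive on the wire
    return frame_bytes * 8 / rate_bps * 1e6

print(wire_time_us(1500, 10e9))   # ~1.2 us for a 1500-byte frame at 10Gb/s
# store-and-forward must wait the full 1.2 us before it can start
# forwarding; a fast cut-through switch has already finished in ~0.4 us
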
>>
>> when you add rate conversion, consider a flow going from a 10Gb/s port
>> to a 1Gb/s port: another 1500-byte packet can arrive every ~1.2usec
>> but you can only clock them back out every 12usec. jumbos just push
>> the queueing for rate conversion out that much further.
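
spelling out the rate-conversion math in python (the 100-frame burst is
an arbitrary example, not from the thread):

FRAME_BITS = 1500 * 8
arrive_us = FRAME_BITS / 10e9 * 1e6   # ~1.2 us between arrivals at 10Gb/s
drain_us = FRAME_BITS / 1e9 * 1e6     # ~12 us to clock one frame out at 1Gb/s
print(arrive_us, drain_us)            # 1.2 12.0

# during a back-to-back burst the 1G port drains one frame for every ten
# that arrive, so the queue grows by ~9 of every 10 frames
burst = 100
print(burst * (1 - arrive_us / drain_us), "frames queued")   # 90.0
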
>>
>>
>>
>>> Lots of people say that cut-through helps prevents packet loss due to
>>> lack of buffer, so something more complicated must be happening.
>>>
>>>
>>> /Benny
>>>
>>
>
>
>
> --
>
> "Genius might be described as a supreme capacity for getting its possessors
> into trouble of all kinds."
> -- Samuel Butler

