[j-nsp] Cut through and buffer questions
Thomas Bellman
bellman at nsc.liu.se
Fri Nov 19 05:58:18 EST 2021
On 2021-11-19 09:49, james list via juniper-nsp wrote:
> Let me rephrase the question you did not understand: if I enable
> cut-through or change the buffers, is it traffic-affecting?
On the QFX 5xxx series and (at least) EX 46xx series, the forwarding
ASIC needs to reset in order to change between store-and-forward and
cut-through, and traffic will be lost until the reprogramming has been
completed. Likewise, changing buffer config will need to reset the
ASIC. When I have tested it, this has taken at most one second, though,
so for many people it will be a non-event.
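For reference, the knob itself is a one-line change, and the brief
ASIC reset happens at commit time, so it is worth pairing it with
"commit confirmed" as a safety net (the five-minute rollback timer
here is just an example):

    set forwarding-options cut-through
    commit confirmed 5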
One thing to remember when using cut-through forwarding is that packets
that have suffered bit errors or truncation, so that the frame CRC is
incorrect, will still be forwarded rather than discarded by the switch.
This is usually not a problem in itself, but if you are not aware of it,
it is easy to get confused when troubleshooting bit errors: you see
ingress errors on one switch and think the link to that switch has
problems, when in reality the switch at the other end may just be
forwarding broken packets that *it* received.
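In practice that means following the trail of CRC/framing errors hop
by hop, back towards the source, rather than trusting the first switch
that reports them; something like this on each switch along the path
(the interface name is just an example):

    show interfaces xe-0/0/0 extensive | match "CRC|errors"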
> Regarding the drops, here are the outputs (15h after clearing statistics):
[...abbreviated...]
> Queue: 0, Forwarding classes: best-effort
> Transmitted:
> Packets : 6929684309 190446 pps
> Bytes : 4259968408584 761960360 bps
> Total-dropped packets: 1592 0 pps
> Total-dropped bytes : 2244862 0 bps
[...]
> Queue: 7, Forwarding classes: network-control
> Transmitted:
> Packets : 59234 0 pps
> Bytes : 4532824 504 bps
> Total-dropped packets: 0 0 pps
> Total-dropped bytes : 0 0 bps
> Queue: 8, Forwarding classes: mcast
> Transmitted:
> Packets : 6553704 88 pps
> Bytes : 5102847425 663112 bps
> Total-dropped packets: 279 0 pps
> Total-dropped bytes : 423522 0 bps
These drop figures don't immediately strike me as excessive. We
certainly have much higher drop percentages, and don't see many
practical performance problems. But it will very much depend on
your application. The one thing I note is that you have much
more multicast traffic than we do, and you are seeing drops in
that forwarding class.
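To put rough numbers on that, here is a quick back-of-the-envelope
check of the quoted queue 0 and queue 8 counters:

```python
# Drop rate per queue, from the "show interfaces queue" counters
# quoted earlier in this thread.
def drop_pct(dropped, transmitted):
    """Dropped packets as a percentage of transmitted packets."""
    return 100.0 * dropped / transmitted

print(f"best-effort: {drop_pct(1592, 6929684309):.6f}%")   # queue 0
print(f"mcast:       {drop_pct(279, 6553704):.6f}%")       # queue 8
```

So best-effort is dropping on the order of 0.00002% of packets, while
mcast is dropping around 0.004% -- roughly two orders of magnitude
more, which is why the multicast drops stand out.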
I didn't quite understand whether you are actually seeing
application or performance problems.
> show class-of-service shared-buffer
> Ingress:
> Total Buffer : 12480.00 KB
> Dedicated Buffer : 2912.81 KB
> Shared Buffer : 9567.19 KB
> Lossless : 861.05 KB
> Lossless Headroom : 4305.23 KB
> Lossy : 4400.91 KB
This looks like a QFX5100 or EX4600, with the 12 Mbyte buffer in the
Broadcom Trident 2 chip. You probably want to read this page, to
understand how to configure buffer allocation for your needs:
https://www.juniper.net/documentation/us/en/software/junos/traffic-mgmt-qfx/topics/concept/cos-qfx-series-buffer-configuration-understanding.html
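As a sanity check on those numbers: the quoted figures correspond
exactly to the shared pool being split 9% / 45% / 46% between
lossless, lossless-headroom and lossy, which, if I remember
correctly, is the default allocation on these chips:

```python
# Reconstruct the quoted ingress shared-buffer split (values in KB,
# from the "show class-of-service shared-buffer" output above).
total     = 12480.00           # total packet buffer on the chip
dedicated = 2912.81            # per-port dedicated buffer
shared    = total - dedicated  # 9567.19 KB shared pool

# Apparent default split of the shared pool (9/45/46 percent):
for name, pct in [("lossless", 9), ("lossless-headroom", 45), ("lossy", 46)]:
    print(f"{name:18s}: {shared * pct / 100:8.2f} KB")
```

This reproduces the 861.05 / 4305.23 / 4400.91 KB figures above, to
within rounding.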
In my network, we only have best-effort traffic, and very little
multi- or broadcast traffic (basically just ARP/Neighbour discovery,
DHCP, and OSPF), so we use these settings on our QFX5100 and EX4600
switches:
forwarding-options {
    cut-through;
}
class-of-service {
    /* Max buffers to best-effort traffic, minimum for lossless ethernet */
    shared-buffer {
        ingress {
            percent 100;
            buffer-partition lossless { percent 5; }
            buffer-partition lossless-headroom { percent 0; }
            buffer-partition lossy { percent 95; }
        }
        egress {
            percent 100;
            buffer-partition lossless { percent 5; }
            buffer-partition lossy { percent 75; }
            buffer-partition multicast { percent 20; }
        }
    }
}
(On our QFX5120 switches, I have moved even more buffer space to
the "lossy" classes.) But you need to tune this to *your* needs;
the above is what works for ours.
/Bellman