[j-nsp] Cut through and buffer questions
james list
jameslist72 at gmail.com
Fri Nov 19 03:49:01 EST 2021
Hi Ytti
Let me rephrase the question you did not understand: if I enable cut-through
or change the buffer allocation, is that action traffic-affecting?
Regarding the drops, here are the outputs (15 hours after clearing statistics):
Physical interface: xe-0/0/19, Enabled, Physical link is Up
Interface index: 939, SNMP ifIndex: 626, Generation: 441
Description: xxx
Link-level type: Ethernet, MTU: 1514, MRU: 0, Speed: 10Gbps, BPDU Error:
None, MAC-REWRITE Error: None, Loopback: Disabled, Source filtering:
Disabled, Flow control: Disabled,
Media type: Fiber
Device flags : Present Running
Interface flags: SNMP-Traps Internal: 0x4000
Link flags : None
CoS queues : 12 supported, 12 maximum usable queues
Hold-times : Up 0 ms, Down 0 ms
Current address: 5c:45:27:a5:c6:36, Hardware address: 5c:45:27:a5:c6:36
Last flapped : 2021-03-21 02:40:21 CET (34w5d 06:49 ago)
Statistics last cleared: 2021-11-18 18:26:13 CET (15:03:31 ago)
Traffic statistics:
Input bytes : 3114439584439 746871624 bps
Output bytes : 4196208682119 871170072 bps
Input packets: 6583209468 204576 pps
Output packets: 6821793016 203445 pps
IPv6 transit statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Input errors:
Errors: 0, Drops: 0, Framing errors: 0, Runts: 0, Bucket drops: 0,
Policed discards: 0, L3 incompletes: 0, L2 channel errors: 0, L2 mismatch
timeouts: 0, FIFO errors: 0,
Resource errors: 0
Output errors:
Carrier transitions: 0, Errors: 0, Drops: 1871, Collisions: 0, Aged
packets: 0, FIFO errors: 0, HS link CRC errors: 0, MTU errors: 0, Resource
errors: 0, Bucket drops: 0
Egress queues: 12 supported, 5 in use
Queue counters:       Queued packets  Transmitted packets      Dropped packets
    0                              0           6810956602                 1592
    3                              0                    0                    0
    4                              0                    0                    0
    7                              0                58647                    0
    8                              0              6505305                  279
Queue number: Mapped forwarding classes
0 best-effort
3 fcoe
4 no-loss
7 network-control
8 mcast
show interfaces queue xe-0/0/19
Physical interface: xe-0/0/19, Enabled, Physical link is Up
Interface index: 939, SNMP ifIndex: 626
Description:
Forwarding classes: 16 supported, 5 in use
Egress queues: 12 supported, 5 in use
Queue: 0, Forwarding classes: best-effort
Queued:
Packets : 0 0 pps
Bytes : 0 0 bps
Transmitted:
Packets : 6929684309 190446 pps
Bytes : 4259968408584 761960360 bps
Tail-dropped packets : Not Available
RL-dropped packets : 0 0 pps
RL-dropped bytes : 0 0 bps
Total-dropped packets: 1592 0 pps
Total-dropped bytes : 2244862 0 bps
Queue: 3, Forwarding classes: fcoe
Queued:
Packets : 0 0 pps
Bytes : 0 0 bps
Transmitted:
Packets : 0 0 pps
Bytes : 0 0 bps
Tail-dropped packets : Not Available
RL-dropped packets : 0 0 pps
RL-dropped bytes : 0 0 bps
Total-dropped packets: 0 0 pps
Total-dropped bytes : 0 0 bps
Queue: 4, Forwarding classes: no-loss
Queued:
Packets : 0 0 pps
Bytes : 0 0 bps
Transmitted:
Packets : 0 0 pps
Bytes : 0 0 bps
Tail-dropped packets : Not Available
RL-dropped packets : 0 0 pps
RL-dropped bytes : 0 0 bps
Total-dropped packets: 0 0 pps
Total-dropped bytes : 0 0 bps
Queue: 7, Forwarding classes: network-control
Queued:
Packets : 0 0 pps
Bytes : 0 0 bps
Transmitted:
Packets : 59234 0 pps
Bytes : 4532824 504 bps
Tail-dropped packets : Not Available
RL-dropped packets : 0 0 pps
RL-dropped bytes : 0 0 bps
Total-dropped packets: 0 0 pps
Total-dropped bytes : 0 0 bps
Queue: 8, Forwarding classes: mcast
Queued:
Packets : 0 0 pps
Bytes : 0 0 bps
Transmitted:
Packets : 6553704 88 pps
Bytes : 5102847425 663112 bps
Tail-dropped packets : Not Available
RL-dropped packets : 0 0 pps
RL-dropped bytes : 0 0 bps
Total-dropped packets: 279 0 pps
Total-dropped bytes : 423522 0 bps
{master:0}
show class-of-service shared-buffer
Ingress:
Total Buffer : 12480.00 KB
Dedicated Buffer : 2912.81 KB
Shared Buffer : 9567.19 KB
Lossless : 861.05 KB
Lossless Headroom : 4305.23 KB
Lossy : 4400.91 KB
Lossless Headroom Utilization:
Node Device Total Used Free
0 4305.23 KB 0.00 KB 4305.23 KB
1 4305.23 KB 0.00 KB 4305.23 KB
2 4305.23 KB 0.00 KB 4305.23 KB
3 4305.23 KB 0.00 KB 4305.23 KB
4 4305.23 KB 0.00 KB 4305.23 KB
Egress:
Total Buffer : 12480.00 KB
Dedicated Buffer : 3744.00 KB
Shared Buffer : 8736.00 KB
Lossless : 4368.00 KB
Multicast : 1659.84 KB
Lossy : 2708.16 KB
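As a quick sanity check on the counters above (standalone Python; all figures are copied from the CLI output earlier in this mail), the output drop rate is tiny and the shared-buffer partitions add up to the shared pool sizes:

```python
# Sanity checks on the xe-0/0/19 and shared-buffer output above.

# Output drops vs. packets sent over the ~15 h window since clearing stats.
drops = 1871
output_packets = 6821793016
drop_rate = drops / output_packets
print(f"drop rate: {drop_rate:.2e}")  # prints "drop rate: 2.74e-07"

# Ingress shared-buffer partitions (KB) should sum to the shared pool.
ingress_shared = 9567.19
assert abs((861.05 + 4305.23 + 4400.91) - ingress_shared) < 0.01

# Same for egress: lossless + multicast + lossy = shared buffer.
egress_shared = 8736.00
assert abs((4368.00 + 1659.84 + 2708.16) - egress_shared) < 0.01
```

So roughly one packet in 3.6 million is being discarded; the question is whether those bursts matter for the UDP application.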
Cheers
James
On Fri, 19 Nov 2021 at 08:36 Saku Ytti <saku at ytti.fi>
wrote:
> On Thu, 18 Nov 2021 at 23:20, james list via juniper-nsp
> <juniper-nsp at puck.nether.net> wrote:
>
> > 1) does the MX family switch in cut-through or store-and-forward mode by
> > default? I was not able to find clear information
>
> Store and forward.
>
> > 2) in general (on MX or QFX), does enabling cut-through or changing the
> > buffer allocation jeopardize traffic?
>
> I don't understand the question.
>
> > I have some output discards on an interface (best-effort class) and some
> > UDP packets are lost, hence I am tuning to find a solution.
>
> I don't see how this relates to cut-through at all.
>
> Cut-through works when ingress can start writing a frame to egress while
> still reading it; this is ~never the case in multistage ingress+egress
> buffered devices. And even in devices where it is the case, it only
> works if the egress interface happens not to be serialising a packet at
> that moment, so the percentage of frames actually getting cut-through
> behaviour in cut-through devices is low in typical applications;
> applications where it is high could likely have been replaced by a
> direct connection.
> Modern multistage devices have low-single-digit microseconds of internal
> latency and nanoseconds of jitter. One microsecond is about 200 m in
> fiber, so that gives you the scale of how much distance you could save
> by removing the delay incurred by a multistage device.
>
> Now, having said that, what actually is the problem? What are 'output
> discards', and which counter are you looking at? Have you modified the QoS
> configuration, and can you share it? By default JNPR is 95% BE, 5% NC
> (unlike Cisco, which is 100% BE, which I think is a better default), and
> the buffer allocation follows the same split. So if you are actually
> tail-dropping in the default JNPR QoS configuration, you're creating
> massive delays, because the buffer allocation is huge. Your problem is
> rather simply that you're offering too much to the egress, and the best
> you can do is reduce the buffer allocation to limit the collateral damage.
>
> --
> ++ytti
>
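For reference, on the QFX5k line the two knobs discussed in this thread are configured roughly as below. This is a sketch, not a tested configuration: the exact hierarchy depends on platform and Junos release, and the percentages are made-up examples, not recommendations (partitions must still sum within the shared pool).

```
# Enable cut-through switching (QFX5100/5200-style; verify for your platform):
set forwarding-options cut-through

# Shrink the lossy egress shared-buffer partition (example percentages only):
set class-of-service shared-buffer egress buffer-partition lossy percent 40
set class-of-service shared-buffer egress buffer-partition lossless percent 35
```

Whether committing either change is traffic-affecting is exactly the open question in this thread, so it would be prudent to apply them in a maintenance window.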