[c-nsp] MTU and PMTUD

Saku Ytti saku at ytti.fi
Thu Dec 8 02:34:08 EST 2022


On Wed, 7 Dec 2022 at 22:20, Marcin Kurek <md.kurek at gmail.com> wrote:

> > I've seen Cisco presentations in the 90s and early 00s showing
> > significant benefit from it. I have no idea how accurate it is
> > today, nor why it would have made a difference in the past, like was
> > the CPU interrupt rate constrained?
>
> I'm sorry, I didn't get that part about constrained CPU interrupt rate?
> My simple way of looking into that is that if we bump up the MTU, we end
> up with fewer packets on the wire, so less processing on both sides.

To handle packets received by the NIC you can do one of two things:

a) the CPU can take an interrupt and handle the packet in the
interrupt handler
b) interrupts can be disabled, and the CPU can poll to see if there
are packets to process

Mechanism a) is the norm and mechanism b) is the more modern approach;
it improves PPS performance under heavy load, at the cost of increased
jitter and latency, because it takes a variable amount of time to pick
up each packet. In software-based routers, like the VXR, if you had
precise enough (thanks Creanord!) measurements of network performance,
you could observe jitter during rancid (thanks Heas!) collections,
because 'show run' and 'write' raise interrupts, which stops packet
forwarding.
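
Purely as an illustration of the difference, a minimal C sketch of the
two mechanisms; struct packet and the nic_*/forward_packet() functions
are hypothetical placeholders, not any real driver or IOS API:

struct packet;                          /* opaque packet handle */
struct packet *nic_read_packet(void);   /* hypothetical: next RX packet, or NULL */
void forward_packet(struct packet *p);  /* hypothetical: hand packet to forwarding */
void nic_ack_interrupt(void);           /* hypothetical: re-arm the RX interrupt */

/* a) interrupt-driven: the NIC raises an interrupt, the CPU stops what
 *    it is doing and runs this handler to drain the received packets. */
void rx_interrupt_handler(void)
{
    struct packet *p;
    while ((p = nic_read_packet()) != NULL)
        forward_packet(p);
    nic_ack_interrupt();
}

/* b) poll mode: interrupts stay disabled and a loop keeps checking the
 *    NIC; per-packet overhead drops at high PPS, but at lower rates a
 *    packet may sit in the ring for a variable time before the loop
 *    gets to it, which is where the extra jitter/latency comes from. */
void rx_poll_loop(void)
{
    for (;;) {
        struct packet *p = nic_read_packet();
        if (p != NULL)
            forward_packet(p);
    }
}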

So fewer PPS, hence fewer interrupts, might be one contributing factor.
I don't know what the per-packet processing overhead is, but
intuitively I don't expect much improvement from large-MTU BGP packets.
And at any rate, going above 4k would mean relying on newish features
you don't have. But I don't have high confidence in being right.
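
To put a rough number on the 'fewer packets' part, a back-of-the-envelope
C sketch; the 10 MB transfer size and the MSS values are assumptions for
illustration, not measurements:

#include <stdio.h>

int main(void)
{
    const long bytes     = 10L * 1000 * 1000;  /* assumed bulk transfer, e.g. a table of UPDATEs */
    const long mss_1500  = 1460;               /* MSS on a 1500-byte MTU path */
    const long mss_jumbo = 8960;               /* MSS on a ~9000-byte MTU path */

    /* ceiling division: segments (and very roughly interrupts) needed */
    printf("segments @1460: %ld\n", (bytes + mss_1500 - 1) / mss_1500);   /* ~6850 */
    printf("segments @8960: %ld\n", (bytes + mss_jumbo - 1) / mss_jumbo); /* ~1117 */
    return 0;
}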

> Testing using XR 7.5.2 and older IOS XE, resulting MSS depends on who is
> passive/active.

MSS is 'negotiated' down to the smaller of the two offered values, much
like BGP timers are 'negotiated' to the smallest (so your customer
controls your BGP timers, not you). Does this help explain what you
saw?
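
In code terms both 'negotiations' are just a minimum; a sketch with
assumed example values, not actual IOS/IOS-XR behaviour:

#include <stdio.h>

static long min_of(long a, long b) { return a < b ? a : b; }

int main(void)
{
    /* assumed example values offered by each side */
    long mss_local  = 9176, mss_peer  = 1460;   /* TCP MSS options */
    long hold_local = 180,  hold_peer = 30;     /* BGP hold times, seconds */

    /* both end up at the smaller of the two offers */
    printf("effective MSS:       %ld\n", min_of(mss_local, mss_peer));    /* 1460 */
    printf("effective hold time: %ld\n", min_of(hold_local, hold_peer));  /* 30 */
    return 0;
}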



-- 
  ++ytti

