[j-nsp] Limit on interfaces in bundle
Adam.Vitkovsky at gamma.co.uk
Tue Nov 3 10:19:49 EST 2015
> From: Jesper Skriver [mailto:jesper at skriver.dk]
> Sent: Tuesday, November 03, 2015 10:34 AM
> On Mon, Nov 02, 2015 at 06:30:00PM +0000, Adam Vitkovsky wrote:
> > > From: Jesper Skriver [mailto:jesper at skriver.dk]
> > > Sent: Monday, November 02, 2015 5:14 PM
> > >
> > >
> > > Right, on those types of platforms it can be done - assuming there
> > > are spare bits in the meta data that goes with the packet, enough
> > > free instruction space etc - but it will come at a performance
> > > impact as it will require more cycles per packet, unless on the
> > > particular platform there is still headroom for adding more cycles without
> affecting the ability to be linerate ...
> > Talking about lookup performance, isn't the number of cycles negligible
> > compared to memory access times?
> Different architectures behave differently, but if you can arrange the new
> data such that it will fit the same cache line as data you are already reading,
> the additional read of the new data will come from the CPU data cache, and
> is typically very fast. You are right an uncached read is often 100's of
Just to make sure: I mean the memory accesses needed to perform the trie lookup in SDRAM/SRAM/DRAM (not the instructions, which I agree are very important to fit into the on-chip memory).
Although I see there are several options for performing multiple read operations per cycle, which makes the coupling of features non-deterministic with regard to lookup performance (like next-table vs. FBF: I'm pretty sure next-table needs two lookups where FBF needs only one, yet both have the same performance impact).
Please let me know your thoughts.
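The point that lookup cost is counted in memory reads rather than instruction cycles can be illustrated with a toy multibit trie, where each node visited models one off-chip memory access. This is a minimal sketch (8-bit stride, prefix lengths assumed to be multiples of the stride, invented next-hop names), not any vendor's actual data structure:

```python
STRIDE = 8  # bits consumed per trie level (assumption for this sketch)

class Node:
    __slots__ = ("children", "nexthop")
    def __init__(self):
        self.children = {}   # stride-sized key -> child Node
        self.nexthop = None  # set if a prefix terminates here

def insert(root, prefix, plen, nexthop):
    # Simplification: only prefix lengths that are multiples of STRIDE.
    node = root
    for i in range(plen // STRIDE):
        key = (prefix >> (32 - (i + 1) * STRIDE)) & 0xFF
        node = node.children.setdefault(key, Node())
    node.nexthop = nexthop

def lookup(root, addr):
    # Returns (best-matching next hop, number of node reads).
    # Each node visit models one memory access to SDRAM/SRAM/DRAM.
    node, best, reads = root, None, 0
    for i in range(32 // STRIDE):
        reads += 1
        if node.nexthop is not None:
            best = node.nexthop
        key = (addr >> (32 - (i + 1) * STRIDE)) & 0xFF
        node = node.children.get(key)
        if node is None:
            return best, reads
    reads += 1
    if node.nexthop is not None:
        best = node.nexthop
    return best, reads

root = Node()
insert(root, 0x0A000000, 16, "nh-A")   # 10.0.0.0/16
nh, reads = lookup(root, 0x0A000102)   # 10.0.1.2 -> three node reads:
                                       # root, the /8 node, the /16 node
```

In this model, a feature that requires a second table walk (like next-table in the message above) doubles the read count, which is exactly why its cost tracks memory latency rather than instruction count.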
> > Even the high-end platforms of all vendors will cripple performance when
> features are enabled.
> Some more than others ...
I agree. I guess it all boils down to how memory-efficient the lookup algorithm is, and of course how many read operations per cycle can be done.
What do you think?
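Jesper's cache-line point above can be made concrete: if a new per-packet field is packed into the same 64-byte line as data the lookup already fetches, reading it costs no extra off-chip access. A hedged sketch, where the field names, sizes, and the 64-byte line size are assumptions for illustration:

```python
import struct

CACHE_LINE = 64  # bytes; typical on current CPUs (assumption)

# Hypothetical forwarding-entry layout. The existing lookup already reads
# nexthop_id and egress_port; the new mpls_subtype byte is squeezed into
# spare padding of the same line, so it arrives with the same memory read.
ENTRY_FMT = "<I H B B 56s"  # nexthop_id, egress_port, flags, mpls_subtype, pad

def pack_entry(nexthop_id, egress_port, flags, mpls_subtype):
    return struct.pack(ENTRY_FMT, nexthop_id, egress_port, flags,
                       mpls_subtype, b"")

entry = pack_entry(42, 7, 0, 3)
assert struct.calcsize(ENTRY_FMT) == CACHE_LINE  # whole entry fits one line
```

If the entry instead spilled past 64 bytes, the extra field would land on a second line and could cost another uncached read per packet.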
> > > Which is why I in my original reply said that it would not be
> > > practical. For it to be useful all routers in the path needs to
> > > support it, otherwise as soon as it hits one that doesn't all the
> > > various MPLS sub-types will get mixed together again.
> > > Features which require network wide support are quite hard to get
> > > off the ground.
> > Maybe this could get shoehorned along with segment routing.
> Doubt it, SR specifications are very far along, and several vendors have
> working implementations.
Yeah I guess that ship has sailed a long time ago.
But I like your example with the Entropy label.
What if the topmost label value, or better yet the VPN/VC label, could indicate how to parse the remainder of the packet?
And if a midpoint node, or the end node (in the case of a VPN/VC label), doesn't understand the encoding, it would just label-switch the packet blindly.
It should be very simple to implement, and also to adjust the label space so that it doesn't conflict with anything else.
So all nodes that support the feature would start allocating with:
And so forth..
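The behaviour proposed above (the concrete allocation values are elided in the archive) could work roughly as follows; the label block, offsets, and payload names here are purely hypothetical, not any real reservation:

```python
# Hypothetical: supporting nodes reserve a small block near the top of the
# 20-bit label space, and the offset within the block encodes the payload
# type carried under the label.
SUBTYPE_BASE = 0xFFFF0                                     # invented value
PAYLOAD_TYPES = {0: "IPv4", 1: "IPv6", 2: "Ethernet PW"}   # invented mapping

def handle(label):
    """What a node does with the top (or VPN/VC) label: parse the payload
    if the label encodes a known sub-type, otherwise just label-switch
    the packet blindly, as the proposal suggests for non-supporting nodes."""
    offset = label - SUBTYPE_BASE
    if offset in PAYLOAD_TYPES:
        return ("parse-as", PAYLOAD_TYPES[offset])
    return ("label-switch-blindly", None)
```

A node that has never heard of the scheme falls through to the second branch for every label, which is what keeps the feature safe to deploy incrementally.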