[c-nsp] ASR9k Bundle QoS in 6.0.1
Adam Vitkovsky
Adam.Vitkovsky at gamma.co.uk
Fri Jun 10 08:00:12 EDT 2016
> Saku Ytti [mailto:saku at ytti.fi]
> Sent: Wednesday, June 08, 2016 1:18 PM
>
> On 8 June 2016 at 15:13, Adam Vitkovsky <Adam.Vitkovsky at gamma.co.uk>
> wrote:
>
> > I think that most of this 10us budget is consumed on r/w from/to
> > memories and then a tiny bit for lookup (if you are doing just pure IP
> lookup).
> > If the fabric is meant to be non-blocking, the arbitration (fabric
> > routing) has to be negligible (buzz term: line-rate), that is, within
> > one tick of the fabric clock-rate the maximal solution has to be found
> > and the crossbar programmed (I actually think the programming is done
> > hand in hand with each found match) and data transmitted.
>
> Let's assume this is true, that fabric query/grant is negligible part of the
> overall budget. Then clearly LC can requery for grant if it didn't receive it in X
> ns, and it will add negligible delay to overall delay-through-chassis?
> Giving us full redundancy over failover, without any between-fabric
> intelligence or duplicate requests.
>
Ok you got me there :).
I've done some more research, and now I'm confident that in systems with distributed arbiters a missing message doesn't really matter, since the arbitration runs several iterations anyway in order to find a maximal match. So if a request/grant/accept message is lost here and there, it's no problem at all; it only means that a somewhat suboptimal maximal match may be found within the finite number of iterations in that particular cycle.
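To make that concrete, here's a minimal Python sketch of iSLIP-style iterative request/grant/accept matching (the function name iterative_match, the drop_prob knob and the fixed-priority choices are my own illustrative assumptions, not any vendor's fabric logic). Losing some messages only shrinks the match found in that cycle; the algorithm still completes within its fixed number of iterations:

import random

def iterative_match(demand, iterations=3, drop_prob=0.0, seed=0):
    """demand[i][j] is True if input i has cells queued for output j."""
    rng = random.Random(seed)
    n = len(demand)
    in_matched = [None] * n   # output matched to each input
    out_matched = [None] * n  # input matched to each output

    for _ in range(iterations):
        # Request phase: unmatched inputs send requests for outputs they
        # have traffic for (some requests may be lost during failover).
        requests = [[] for _ in range(n)]   # per-output list of requesting inputs
        for i in range(n):
            if in_matched[i] is not None:
                continue
            for j in range(n):
                if demand[i][j] and out_matched[j] is None:
                    if rng.random() >= drop_prob:    # request survived
                        requests[j].append(i)

        # Grant phase: each unmatched output grants one requesting input.
        grants = [[] for _ in range(n)]     # per-input list of granting outputs
        for j in range(n):
            if out_matched[j] is None and requests[j]:
                i = min(requests[j])             # simple fixed-priority choice
                if rng.random() >= drop_prob:    # grant may also be lost
                    grants[i].append(j)

        # Accept phase: each input accepts at most one grant.
        for i in range(n):
            if in_matched[i] is None and grants[i]:
                j = min(grants[i])
                in_matched[i] = j
                out_matched[j] = i

    return in_matched

demand = [[True, True, False],
          [True, False, True],
          [False, True, True]]
print(iterative_match(demand, drop_prob=0.0))  # full match: [0, 2, 1]
print(iterative_match(demand, drop_prob=0.4))  # possibly a smaller (suboptimal) match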
In systems with a central arbiter I think request and grant messages are exchanged only once per cycle, because that single exchange gives the central arbiter all the information it needs to run the matching algorithm. So the failure of a central arbiter could ruin it for everyone, and that's why there has to be a backup arbiter doing the same arbitration in parallel.
The question, though, is for how long these messages would be lost while the fabric is failing over. For every input/output pair whose messages got lost, some cells would have to be buffered in the to-fabric queues (VOQs) waiting for the next cycle. So nothing is dropped outright; the cells are simply not transported and have to sit in the ingress VOQs. The risk is that those queues fill up and start tail-dropping if the failover takes too long or the queues are already too full.
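For a feel of the time budget, a rough back-of-envelope sketch (the line rate, cell size and VOQ depth below are purely illustrative assumptions, not ASR9k figures):

LINE_RATE_GBPS = 100        # assumed ingress line rate
CELL_SIZE_BYTES = 256       # assumed fabric cell size
VOQ_DEPTH_CELLS = 64_000    # assumed per-VOQ buffer depth

cells_per_sec = LINE_RATE_GBPS * 1e9 / 8 / CELL_SIZE_BYTES
survivable_failover_s = VOQ_DEPTH_CELLS / cells_per_sec

print(f"arrival rate ~{cells_per_sec / 1e6:.1f} Mcells/s")
print(f"VOQ absorbs up to ~{survivable_failover_s * 1e6:.0f} us of failover "
      f"(assuming no grants at all during that time) before tail drops start")

With these made-up numbers the VOQ rides through roughly a millisecond of lost grants, which is why the failover duration matters more than the loss of any individual message.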
So, to your question of what we gain with a centralised arbiter:
I think it's simplicity and better efficiency.
I did a couple of quick case studies comparing the distributed and centralized arbitration procedures, and it's apparent straight away that distributed arbitration is very chatty.
A centralized arbiter can get to the same conclusion without all the chit-chat that precedes every incremental iteration.
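As a rough illustration of the chattiness difference (the per-port message counts below are simplified assumptions of mine, not taken from any datasheet):

def distributed_msgs(n_ports, iterations):
    # roughly request + grant + accept per port, per iteration
    return n_ports * 3 * iterations

def centralized_msgs(n_ports):
    # one request sent and one grant received per port, per cycle
    return n_ports * 2

for n in (8, 32, 72):
    print(n, distributed_msgs(n, iterations=3), centralized_msgs(n))

Even with only three iterations the distributed scheme exchanges several times more messages per cycle, which is the "chit-chat" I mean.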
adam
Adam Vitkovsky
IP Engineer
T: 0333 006 5936
E: Adam.Vitkovsky at gamma.co.uk
W: www.gamma.co.uk