[c-nsp] ASR920 Opinions

Jason Lixfeld jason at lixfeld.ca
Tue Dec 19 21:43:01 EST 2017


Hi,

> On Dec 19, 2017, at 8:52 PM, James Jun <james at towardex.com> wrote:
> 
> Hey,
> 
> We have about 40 ASR920s, mostly 24SZ-M and 24SZ-IM variants.  We're running mainly 03.16.04S and 03.16.05S.

…

> For layer-2 services, we use LDP-signalled L2CKT and VPLS.  We tried testing a layer-3 use case, but the last time
> we tested (it was on early SW versions though, 3.14.x something), control-plane protection didn't even work as
> we expected.

Are you saying that whatever L3 issues you had have been resolved in the versions you cited above?
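
In particular, I'm curious whether plain MQC-style CoPP behaves as expected on that newer code.  Just so we're
talking about the same thing, this is roughly the shape of policy I have in mind -- only a sketch, with made-up
class/policy names and rates, an ACL I haven't shown, and no ASR920-specific CoPP caveats verified:

  ! Classify control-plane traffic from known routing peers (ACL not shown)
  class-map match-any CM-CPP-ROUTING
   match access-group name ACL-ROUTING-PEERS
  !
  policy-map PM-CPP
   class CM-CPP-ROUTING
    police 512000 conform-action transmit exceed-action drop
   class class-default
    police 128000 conform-action transmit exceed-action drop
  !
  ! Apply inbound on the control plane
  control-plane
   service-policy input PM-CPP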

> My overall experience with the ASR920 is as follows.

...

> The Bad Stuff:
> - Weird behavior with 10G ports and optics:  Sometimes when upgrading SW, some of the SFPs (e.g. BiDi ones)
>   fail to come up.  Bouncing the interface with shut/no shut does nothing; dispatching a field service crew to
>   remove and re-insert the optic solves the problem.
> 
>   We also had issues with 10G ports going admin-down upon upgrade.  Long story short, OOB access is highly
>   desirable if a SW upgrade is required on this platform.

Did you happen to catch a bug ID for either of these two 10G port issues?

> - Shallow buffers - 12MB for the whole box, and the default values are ridiculously small.
>   I'm not sure what Cisco was thinking regarding buffers on this box.  ASIC speed has nothing to do with
>   buffering requirements when you're stepping down from 10G to 1G -- you either have buffers to make up for
>   the Tx/Rx rate difference or you tail drop; it's as simple as that.

Are you referring partly to this?

https://www.cisco.com/c/dam/en/us/td/docs/routers/asr920/design/Cisco-ASR920-Microburst-whitepaper.pdf
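
FWIW, the way I think about the 10G-to-1G step-down: a wire-rate 10G burst into a 1G port backs up at a ~9Gb/s
net rate, so each millisecond of burst you want to ride out needs on the order of 1.1MB of queue for that one
port -- which, against a 12MB shared pool, would explain why the defaults feel so tight.  (Back-of-envelope only;
I may be missing per-packet overhead or how the 920 carves that pool.)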

>   We applied 100% shared buffers with a policy-map, but we still ran into buffer exhaustion when several customers
>   were doing heavy inbound traffic.  It's fine for typical end-user / retail subscribers, but for placing a lot
>   of enterprise 1GE internet customers on the box, I don't know...  We ended up configuring a fixed 512KB queue
>   on every 1GE port (so we don't really oversubscribe the 12MB buffer space) to absorb up to ~2ms worth of burst,
>   but this now brings back a lot of tail drops on long-distance TCP flows.  So we're now having the upstream IP
>   transit routers at head-end sites provide traffic shaping with very low burst on customer EVCs terminating
>   on ASR920s.  It's not ideal, as it means I'll need an -SE line card on the upstream side to deal with the
>   increased queueing requirements, but it is a decent compromise.
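
That head-end arrangement is interesting.  Just to make sure I follow, is the gist roughly the below?  This is
only a sketch with invented names and numbers, written in generic IOS-XE MQC syntax -- I realize the -SE side is
likely a different platform and syntax, and I haven't verified queue-limit granularity on the 920:

  ! ASR920 side: pin each 1GE customer port to a fixed ~512KB egress queue
  policy-map PM-CUST-1G-EGRESS
   class class-default
    queue-limit 524288 bytes
  !
  interface GigabitEthernet0/0/5
   service-policy output PM-CUST-1G-EGRESS

  ! Head-end side: shape the customer EVC to 1G with a deliberately small Bc
  ! so bursts get smoothed before they reach the 920 (Bc value illustrative)
  policy-map PM-CUST-1G-SHAPE
   class class-default
    shape average 1000000000 100000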


