[c-nsp] Cisco ASR920-24SZ-IM BVI Feature Limitations

Tassos Chatzithomaoglou achatz at forthnet.gr
Mon Jan 18 09:51:38 EST 2016


Not exactly related to BVI, but we have many cases where the ASR920
(on various software releases) stops forwarding or responding to
packets without any apparent trigger, and a reboot is required to
return it to normal operation. The installation environment is 10G
access rings using mostly EoMPLS/VPLS and various Carrier Ethernet
features.

Cisco is still trying to figure out the root cause, so far without
success.

--
Tassos

Darin Herteen wrote on 18/1/2016 3:34 PM:
> Thanks, everyone, for the responses; they have been quite informative.
>
> QoS strategy and testing were the next (and last) items on my list to hammer out this week, so the behavior mentioned below is especially helpful.
>
> Regards,
>
> Darin
>
> ________________________________________
> From: cisco-nsp <cisco-nsp-bounces at puck.nether.net> on behalf of James Jun <james at towardex.com>
> Sent: Saturday, January 16, 2016 10:15 AM
> To: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] Cisco ASR920-24SZ-IM BVI Feature Limitations
>
> On Sat, Jan 16, 2016 at 03:54:01PM +0200, Mark Tinka wrote:
>> On 16/Jan/16 13:57, Eric Van Tol wrote:
>>
>>> We've been pretty happy with the ASR, especially the models with 4x10G on-board. The cost is significantly less than an ME3600, even with a full suite of licenses (Advanced IP Metro, all 10G ports, all GE ports), and the footprint is much smaller (well, more shallow).
>> +1.
>>
>> We've been rolling them out since last December, and so far, so good.
>>
> +1 also; we have several ASR920-24SZ-IMs and 24SZ-Ms out in the field, and we're very happy with them.
>
> Aside from LAG limitations (our workaround was simply not to use them :-S), the only other issue I've run into is that the default port buffer/queue sizes (48KB?) are rather small.  This is a slight annoyance, since a typical 920 deployment has at least 2x 10GE feeding the 1GE revenue ports on the box.  As I understand it, the 920 only has 12MB of shared buffer space, which probably explains it; but at the default queue sizes, almost every 1GE end-user port (no traffic-shaping on user ports, just a full-rate 1G port with a 10G uplink) collects excessive output drops on even the most trivial IMIX usage.
>
> For example, a FreeBSD box with a 1GE port behind the ASR920, just doing a wget from a download mirror 50ms away, records output drops on the 920, whereas a 1GE port off of an ASR9K or MX80 would not collect output drops for this type of usage.  Sure, it is reasonable to expect an end-user running Speedtest.net or watching Netflix (spamming multiple flows) to cause output drops, but not a single download flow.
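>
> To see where this shows up, the standard IOS XE counters are enough; a minimal check (the interface name here is hypothetical, not from the original post):
>
> show interfaces GigabitEthernet0/0/1 | include Total output drops
> show policy-map interface GigabitEthernet0/0/1 output
>
> The second command gives per-queue statistics once a service-policy such as the one below is attached.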
>
> As a workaround, raising the queue-limit to 512KB per 1G port dramatically reduces output drops for trivial traffic.  You will still see drops when long-haul bursty traffic overwhelms a 1GE interface on the step-down from the 10G uplink, but that is reasonable congestion at that point, so dropping packets is the right behavior.
>
> 512KB seems reasonable: 24x 1GE * 512KB ≈ 12.3MB, so we don't oversubscribe the global buffer space, and it's roughly ~4ms of output buffer per port.
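>
> (To spell out the arithmetic: 24 ports x 512,000 bytes = 12,288,000 bytes ≈ 12.3MB of worst-case buffer demand, and a full 512,000-byte queue drains at 1Gb/s in 512,000 * 8 / 10^9 s ≈ 4.1ms, hence the ~4ms per-port figure.)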
>
> !
> ! classify all traffic (every CoS value) into one class
> class-map match-any cos_all
>  match cos 0 1 2 3 4 5 6 7
> ! give the class the full port bandwidth and a 512KB output queue
> policy-map MC_1G_512kb
>  class cos_all
>   bandwidth percent 100
>   queue-limit 512000 bytes
> !
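>
> The policy-map then needs to be applied in the egress direction of each 1GE user port; a minimal sketch, again with a hypothetical interface name:
>
> !
> interface GigabitEthernet0/0/1
>  service-policy output MC_1G_512kb
> !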
>
>
> James
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>


