[c-nsp] CRS PRP management eth interface limits

Vitkovský Adam adam.vitkovsky at swan.sk
Tue Aug 12 10:39:12 EDT 2014


Hello Valeriu,

I think that even for the Management Ethernet ports these limits are controlled by the LPTS process.

However, on the ASR9k I'm not able to view or change the policers for the Management Ethernet ports (Route Switch Processor location).

You can try to check yours with "show lpts pifib hardware police location <node-id>" and look for the PRP CPU.
And you can try to change them in config mode with "lpts pifib hardware police location <node-id>".
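
For example, something along these lines on IOS XR (the node-id and rate here
are illustrative only -- substitute your actual PRP location, and use "?" to
list the flow names your release supports):

  show lpts pifib hardware police location 0/RP0/CPU0

  configure
   lpts pifib hardware police location 0/RP0/CPU0
    flow icmp local rate 10000
   commit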

Alternatively, you can try setting the limits globally per flow with "lpts pifib hardware police flow icmp" and maybe that will get applied to the PRP as well.
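
Again just a sketch with made-up rates; since your test traffic is UDP, the
"udp default" flow is probably the one that matters here:

  configure
   lpts pifib hardware police
    flow udp default rate 50000
    flow icmp local rate 10000
   commit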

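Also, a quick sanity check on the numbers below: 40,000 pkt/s x (1460 B of
payload + ~42 B of UDP/IP/Ethernet overhead) x 8 bits = ~480.6 Mb/s, which
matches the reported input rate almost exactly. A hard ceiling at a round
packet rate, no matter how much load is offered, smells like a packet-rate
policer rather than a pure bandwidth limit. And ~15 Gbps on the link from
~0.48 Gbps injected means each packet survives roughly 31 traversals of the
loop before its TTL expires, so raising the TTL of the injected packets (if
the tooling allows it) might buy some extra amplification.
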
adam
> -----Original Message-----
> From: cisco-nsp [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of
> Valeriu Vraciu
> Sent: Tuesday, August 12, 2014 1:47 PM
> To: cisco-nsp at puck.nether.net
> Subject: [c-nsp] CRS PRP management eth interface limits
> 
> Hello,
> 
> Are there any limitations (rate limits) applied to traffic on the management
> Ethernet interface of a CRS-3 PRP (Performance Route Processor)? Temporarily
> changing those limits, if possible, would be great for our experiment. I was
> not able to find related information while searching (Cisco, Google), so any
> hint is appreciated.
> 
> What I am trying to achieve is to fill a 100 Gbps circuit between two CRSs to
> 50% or more. Using MGEN on a laptop with gigabit eth and a routing loop, this
> can probably be done. The problem is that each of the two routers currently
> has only 100G interfaces, so the only way to inject traffic is through the
> management eth.
> What happens is that the traffic on this interface does not exceed the
> following values (bps and pkts/s), no matter how far I increase the MGEN
> packet rate above 40000 UDP pkts/s (each packet 1460 bytes):
> 
> input:  480634000 bps, 40000 pkts/s
> output:    880000 bps,  1000 pkts/s (these are merely ICMP unreachables)
> 
> 
> MGEN was run like this:
> 
> mgen event "ON 1 UDP DST 192.168.255.1/5000 PERIODIC [PKTS 1460]"
> 
> where PKTS was 10K, 20K, 40K, 60K and 80K. Traffic on the 100G link grew
> until it reached and stayed at about 15 Gbps for 40K and above. The maximum
> traffic achieved was 30 Gbps (15 Gbps for each PRP eth interface, 2 x PRP on
> each router).
> 
> 
> Regards.
> --
> Valeriu Vraciu
> RoEduNet Iasi