[c-nsp] CRS PRP management eth interface limits

Aaron dudepron at gmail.com
Fri Aug 29 10:21:36 EDT 2014


The management Ethernet port is for management, not traffic.


On Fri, Aug 29, 2014 at 3:06 AM, Valeriu Vraciu <vvraciu at iasi.roedu.net>
wrote:

>
> Hello Adam, all
>
> On 12/08/14 17:39, Vitkovský Adam wrote:
> > Hello Valeriu,
> >
> > I think that even for the Management Ethernet ports these limits
> > are controlled by the LPTS process.
> >
> > However, on the ASR9k I am not able to view or change the policers
> > for the Management Ethernet ports (Route Switch Processor location).
> >
> > You can try to check yours with "sh lpts pifib hardware police
> > location" and look for the PRP CPU, and you can try to change them
> > with "lpts pifib hardware police location", again looking for the
> > PRP CPU.
> >
> > Alternatively, you can try setting the limits globally per flow
> > with "lpts pifib hardware police flow icmp" and maybe it will get
> > applied to the PRP as well.
> >
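> > For reference, the whole sequence would look roughly like this;
> > the node name and rate are illustrative placeholders, and the
> > "flow udp default" class is an assumption on my part, not tested:
> >
> >   show lpts pifib hardware police location 0/RP0/CPU0
> >   configure
> >   lpts pifib hardware police location 0/RP0/CPU0
> >    flow udp default rate 100000
> >   commit
> >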
>
> Thank you for the ideas.
> I did not find any clue in LPTS related to the management Ethernet
> ports, but:
>
> Regarding the resulting traffic on the 100G circuit, two stupid
> mistakes of mine:
> - although I used 2 machines with different OSes, I did not check
> the default IP TTL they used; it was 64 on both of them (Linux and
> MacOS)
> - I did not estimate in advance how much traffic the routing loop
> should produce on the 100G interface.
> After increasing the default TTL to 255, traffic on the 100G
> interface jumped to ~60 Gbps.
> So my goal is accomplished, but I am still digging into the limits
> on the management interface.
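>
> The back-of-the-envelope math I skipped: each looped packet bounces
> between the two routers, so it crosses the circuit in one direction
> roughly every second TTL hop, i.e.
>
>   per-direction traffic ~= injection rate x TTL / 2
>   ~480 Mbps x  64 / 2 ~= 15 Gbps   (what each stream gave before)
>   ~480 Mbps x 255 / 2 ~= 61 Gbps   (the ~60 Gbps observed now)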
>
> Regards,
> valeriu.
>
> > adam
> >> -----Original Message-----
> >> From: cisco-nsp [mailto:cisco-nsp-bounces at puck.nether.net]
> >> On Behalf Of Valeriu Vraciu
> >> Sent: Tuesday, August 12, 2014 1:47 PM
> >> To: cisco-nsp at puck.nether.net
> >> Subject: [c-nsp] CRS PRP management eth interface limits
> >>
> > Hello,
> >
> > Are there any limitations (rate limits) applied to traffic on the
> > management Ethernet interface of a CRS-3 PRP (Performance Route
> > Processor)? Temporarily changing those limits, if possible, would
> > be great for our experiment. I was not able to find related
> > information while searching (Cisco, Google), so any hint is
> > appreciated.
> >
> > What I am trying to achieve is to fill a 100 Gbps circuit between
> > two CRSs to 50% or more. Using MGEN on a laptop with gigabit
> > Ethernet and a routing loop (sketched after the figures below),
> > this can probably be done. The problem is that each of the two
> > routers has, at this moment, only 100G interfaces, so the only way
> > to inject traffic is through the management Ethernet. What happens
> > is that the traffic on this interface does not exceed the following
> > values (bps and pkts/s), no matter how far I increase the MGEN rate
> > above 40000 UDP pkts/s (each packet 1460 bytes):
> >
> > input:  480634000 bps, 40000 pkts/s
> > output:    880000 bps,  1000 pkts/s (merely ICMP unreachables)
> >
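> > One way to build the loop, for reference: two static routes for
> > the MGEN target prefix pointing at each other across the 100G
> > circuit (the addresses are illustrative placeholders):
> >
> >   ! on CRS-A
> >   router static
> >    address-family ipv4 unicast
> >     192.168.255.0/24 <CRS-B 100G address>
> >
> >   ! on CRS-B
> >   router static
> >    address-family ipv4 unicast
> >     192.168.255.0/24 <CRS-A 100G address>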
> >
> > MGEN was run like this:
> >
> > mgen event "ON 1 UDP DST 192.168.255.1/5000 PERIODIC [PKTS 1460]"
> >
> > where PKTS (the packet rate) was 10K, 20K, 40K, 60K and 80K.
> > Traffic on the 100G link was growing until it reached, and stayed
> > at, about 15 Gbps for 40K and above. The maximum traffic achieved
> > was 30 Gbps (15 Gbps for each PRP eth interface, 2 x PRP on each
> > router).
> >
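> > To be concrete, the 40K run was (MGEN's PERIODIC pattern takes the
> > rate in messages/s and the payload size in bytes):
> >
> >   mgen event "ON 1 UDP DST 192.168.255.1/5000 PERIODIC [40000 1460]"
> >
> > i.e. an offered load of 40000 x 1460 x 8 ~= 467 Mbps of payload,
> > which lines up with the ~480 Mbps (with headers) counted on input.
> >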
> >
> > Regards.
>
> --
> Valeriu Vraciu
> RoEduNet Iasi

