[j-nsp] delay-buffer in Juniper
Harry Reynolds
harry at juniper.net
Wed Oct 17 13:04:07 EDT 2012
My testing on MPC and MPC-Q circa 11.4 indicates that exact is supported, as a shaper, but in general shaping-rate is the preferred way to shape, as it lets you specify both a CIR and a PIR shaped rate.
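For example, a minimal scheduler sketch along those lines (scheduler name and rates are placeholders, not a tested config):

class-of-service {
    schedulers {
        example-sched {
            /* guaranteed rate for the queue (CIR-like) */
            transmit-rate 2m;
            /* maximum rate the queue may use (PIR-like cap) */
            shaping-rate 5m;
            buffer-size percent 10;
            priority low;
        }
    }
}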
HTHs.
http://www.juniper.net/techpubs/en_US/junos12.1/topics/reference/general/hw-cos-pic-schedulers-reference-cos-config-guide.html
PS: Doc PR 779052 has been raised to have rate-limit included for MPC in the table at the above link.
Best regards and HTHs
-----Original Message-----
From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Serge Vautour
Sent: Wednesday, October 17, 2012 9:51 AM
To: juniper-nsp at puck.nether.net
Subject: Re: [j-nsp] delay-buffer in Juniper
All of my testing was on "Q" cards (DPCE-R-Q-40GE-SFP). I never tried it on a pure R card. And yes, on this type of card "rate-limit" acts exactly like a policer. -Serge
________________________________
From: Johannes Resch <jr at xor.at>
To: Serge Vautour <serge at nbnet.nb.ca>
Cc: Serge Vautour <sergevautour at yahoo.ca>; Huan Pham <drie.huanpham at gmail.com>; Stefan Fouant <sfouant at shortestpathfirst.net>; "juniper-nsp at puck.nether.net" <juniper-nsp at puck.nether.net>
Sent: Wednesday, October 17, 2012 10:13:58 AM
Subject: Re: [j-nsp] delay-buffer in Juniper
Hi,
> My testing has shown the following with regards to queue commands:
>
> DPC:
> -Rate-limit: works as expected, queue is policed.
Note that support for "rate-limit" on "regular" DPCs (non-EQ types) is relatively new (added in 10.x?).
> -Exact: Not supported
Did you test this on a "regular" DPC (non-EQ type)?
We did successfully test this in our setup on non-EQ DPCs (specifically DPCE-R-40GE-SFP, DPCE-R-4XGE-XFP, DPCE-R-20GE-2XGE) and found that it behaves like a shaper, so it is mandatory to combine it with "buffer-size temporal x" to keep the buffer size (and the resulting jitter) in an acceptable range for low-latency queues.
On EQ-type cards (DPCE-R-Q-xxxx), "exact" is not supported, but rate-limit works (from our tests, it looks like a strict policer).
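For reference, a minimal sketch of the "exact" plus temporal-buffer combination described above (rates and the buffer value are placeholders, not the ones from our tests):

class-of-service {
    schedulers {
        low-latency-sched {
            transmit-rate {
                1m;
                /* behaves like a shaper on the non-EQ DPCs listed above */
                exact;
            }
            /* temporal buffer in microseconds; caps queueing delay to keep jitter bounded */
            buffer-size temporal 25000;
            priority high;
        }
    }
}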
cheers,
-jr
PS: and yes, maybe JNPR could have designed this with fewer config option/HW compatibility permutations :-)
> -Shaping-Rate: The command is accepted but has no effect. The queue isn't
> capped and traffic can reach the sub-interface or interface shaped rate.
>
> MPC:
> -Rate-limit: The command is accepted but has no effect. The queue isn't
> capped and traffic can reach the sub-interface or interface shaped rate.
> -Exact: The result looks like a shaper.
> -Shaping-Rate: Results look exactly like the "exact" command. Traffic
> looks shaped.
>
> The summary is that DPC cards can only "police" queues and MPC cards can only "shape" queues. I agree that the docs aren't very clear on what's supported. I had to run my own tests to figure it out.
>
> Serge
>
>
>
>
> ________________________________
> From: Huan Pham <drie.huanpham at gmail.com>
> To: Stefan Fouant <sfouant at shortestpathfirst.net>
> Cc: "juniper-nsp at puck.nether.net" <juniper-nsp at puck.nether.net>
> Sent: Saturday, October 13, 2012 8:45:58 PM
> Subject: Re: [j-nsp] delay-buffer in Juniper
>
>>
>>
>> Honestly, that's 9.5 documentation. Perhaps something changed at some
>> point, but in current versions of code this is not how it works.
>>
>>
>
>
> Hi Stefan,
>
> You are right, I should pay attention to the version, as the
> behaviour may have changed. I am new to Juniper (moving from a Cisco
> background), so I am still not familiar with navigating the Juniper
> documentation. I relied on the first Google hit :). I will try to navigate the Juniper documentation instead.
>
> However, for this feature in the newest OS, I do not think the
> behaviour is as you described. Although worded differently, the current
> documentation basically says the same thing as the text I quoted:
>
> Here's the link to the 12.1 OS:
>
> http://www.juniper.net/techpubs/en_US/junos12.1/topics/reference/configuration-statement/transmit-rate-edit-cos.html
>
> rate-limit—(Optional) Limit the transmission rate to the
> rate-controlled amount. In contrast to the exact option, the scheduler
> with the rate-limit option shares unused bandwidth above the rate-controlled amount.
>
>
> I do not always trust the documentation, especially since there may be
> limitations with the hardware platform, OS version, etc. that are not well
> documented, so I set up a quick lab (running version 11.4R5.5) to test
> this feature. The behaviour in this case is the same as documented.
>
>
> When I use transmit-rate rate-limit, I can send more traffic than the
> rate-controlled amount: the transmit-rate is 1m, but I can send traffic at 8Mbps.
>
> When I use transmit-rate exact, I cannot send more traffic than the
> rate-controlled amount.
>
> One side note though: when I set the rate to a very low number (e.g.
> 10k, or even 500k) and use the "exact" option, the actual transmitted
> rate (on a 1Gbps port) is not limited to the configured value. This
> is probably understandable, due to factors such as sampling intervals etc.
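>
> One way to get a cleaner reading at such low rates (just a suggestion; the interface matches the lab below) is to clear the interface statistics, let the ping run for a few minutes, and work out the average rate from the byte totals rather than reading the instantaneous bps column:
>
> lab@MX5> clear interfaces statistics ge-1/1/0
> (let traffic run for a while, then)
> lab@MX5> show interfaces queue ge-1/1/0 | find admin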
>
> Here's my lab config and results:
>
>
> ge-1/0/0 ge-1/1/0
> (MX5) R1 -------------------------------- R2 (Virtual)
> 10.1.1.1 10.1.1.2
>
>
> I generated traffic by pinging from R2 to R1:
>
> lab@MX5> ping 10.1.1.1 routing-instance R2 rapid count 100000 size 1000
>
> (Ping from R2 to R1, and monitor traffic on outbound of R1)
>
>
>
>
> lab@MX5> show configuration class-of-service schedulers admin-sched
> transmit-rate {
>     1m;
>     rate-limit;
> }
> buffer-size percent 10;
> priority medium-low;
>
>
>
> lab@MX5> show interfaces queue ge-1/1/0 | find admin
> Queue: 1, Forwarding classes: admin
>   Queued:
>     Packets              :                 75751                   931 pps
>     Bytes                :              80750566               7942744 bps
>   Transmitted:
>     Packets              :                 75751                   931 pps
>     Bytes                :              80750566               7942744 bps
>     Tail-dropped packets :                     0                     0 pps
>     RED-dropped packets  :                     0                     0 pps
>      Low                 :                     0                     0 pps
>      Medium-low          :                     0                     0 pps
>      Medium-high         :                     0                     0 pps
>      High                :                     0                     0 pps
>     RED-dropped bytes    :                     0                     0 bps
>      Low                 :                     0                     0 bps
>      Medium-low          :                     0                     0 bps
>      Medium-high         :                     0                     0 bps
>      High                :                     0                     0 bps
>
>
>
> Changing to "exact", I cannot send more than 1M, as expected.
>
> lab@MX5> show configuration class-of-service schedulers admin-sched
> transmit-rate {
>     1m;
>     exact;
> }
> buffer-size percent 10;
> priority medium-low;
>
>
>
> lab@MX5> show interfaces queue ge-1/1/0 | find admin
> Queue: 1, Forwarding classes: admin
>   Queued:
>     Packets              :                 14288                   116 pps
>     Bytes                :              15231008                994928 bps
>   Transmitted:
>     Packets              :                 14288                   116 pps
>     Bytes                :              15231008                994928 bps
>     Tail-dropped packets :                     0                     0 pps
>     RED-dropped packets  :                     0                     0 pps
>      Low                 :                     0                     0 pps
>      Medium-low          :                     0                     0 pps
>      Medium-high         :                     0                     0 pps
>      High                :                     0                     0 pps
>     RED-dropped bytes    :                     0                     0 bps
>      Low                 :                     0                     0 bps
>      Medium-low          :                     0                     0 bps
>      Medium-high         :                     0                     0 bps
>      High                :                     0                     0 bps
>
>
>
>
>
> FULL CONFIG
> -----------
>
>
>
> lab@MX5> show configuration class-of-service
> forwarding-classes {
> queue 0 best-effort;
> queue 1 admin;
> queue 2 voip;
> queue 3 network-control;
> }
> interfaces {
> ge-1/0/0 {
> scheduler-map my-sched-map;
> }
> ge-1/1/0 {
> scheduler-map my-sched-map;
> }
> }
> scheduler-maps {
> my-sched-map {
> forwarding-class best-effort scheduler best-effort-sched;
> forwarding-class admin scheduler admin-sched;
> forwarding-class voip scheduler voip-sched;
> forwarding-class network-control scheduler network-control-sched;
> }
> }
> schedulers {
> best-effort-sched {
> transmit-rate 40k;
> buffer-size percent 40;
> priority low;
> }
> /* PING packets are subject of this scheduler */
> admin-sched {
> transmit-rate {
> 1m;
> rate-limit;
> }
> buffer-size percent 10;
> priority medium-low;
> }
> voip-sched {
> transmit-rate percent 10;
> buffer-size percent 10;
> priority high;
> }
> network-control-sched {
> transmit-rate percent 5;
> buffer-size percent 5;
> priority medium-high;
> }
> }
>
>
>
>
>
>
> lab@MX5# show firewall
> family inet {
> filter RE-Generated-Classifier {
> term OSPF {
> from {
> protocol ospf;
> }
> then {
> forwarding-class network-control;
> dscp cs7;
> }
> }
> term ICMP {
> from {
> protocol icmp;
> }
> then {
> forwarding-class admin;
> dscp cs1;
> }
> }
> /* Keep default behaviour */
> term OTHERS {
> then accept;
> }
> }
> }
>
>
>
> lab@MX5# show
> /* This loopback belongs to the Master Router */
> unit 0 {
> family inet {
> filter {
> output RE-Generated-Classifier;
> }
> }
> }
> /* This loopback belongs to Virtual Router R2 */
> unit 2 {
> family inet {
> filter {
> output RE-Generated-Classifier;
> }
> }
> }
>
>
> lab@MX5# show routing-instances
> R2 {
> instance-type virtual-router;
> interface ge-1/1/0.0;
> interface lo0.2;
> routing-options {
> router-id 10.1.1.2;
> }
> protocols {
> ospf {
> area 0.0.0.0 {
> interface ge-1/1/0.0;
> }
> }
> }
> }
>
>
> lab@MX5# show protocols ospf
> area 0.0.0.0 {
> interface ge-1/0/0.0;
> }
_______________________________________________
juniper-nsp mailing list juniper-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp