[c-nsp] Hierarchical QoS Policies

Andre Beck cisco-nsp at ibh.net
Fri Apr 8 12:08:16 EDT 2005


Hi Rodney,

On Fri, Apr 08, 2005 at 08:57:12AM -0400, Rodney Dunn wrote:
> > it that does LLQ. I would at least expect a difference.
> >  
> > > > I have not found detailed documentation about the shaper and how it
> > > > interacts with LLQ so far, any pointers are welcome.
> > > 
> > > There was a similar thread last December on this mailing list. Rodney Dunn
> > > posted some explanations of how shaping works.
> > 
> > Yeah, thanks, found that back in my mailbox (I had even read it at the time)
> > and reread it. Rodney essentially explains that LLQ under a shaper should
> > give better latency than just the shaper, or than a shaper with only
> > "bandwidth" classes in the child policy. I'd just like to be able to prove
> > that in my lab...
> 
> I have been behind on email lately and too lazy to go back and read
> the entire thread.
> 
> What are you testing?

Details are in the first mail which opened this thread ;)

In short: I took an 831, configured it for completely trivial IP routing
between its two Ethernet interfaces and put a host on each of them.
Then I crafted a hierarchical policy-map whose parent shapes to an
average of 100kbps and whose child prioritizes ICMP at 10% of that
bandwidth (i.e. 10kbps of priority). The policy is applied outbound on
one of the Ethernet interfaces of the 831 (the one behind which my
client test host lives), and I run these tests (a sketch of the
configuration follows the list):

- Ping (with varying packet sizes, though minimally sized 64-byte pings
  show the problem just as well) from the client host to the server host
- Start a large TCP download towards the client host
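
The policy looks roughly like this (class, policy and interface names
are just stand-ins for the example, and I match ICMP via an ACL here
rather than NBAR):

  access-list 101 permit icmp any any
  !
  class-map match-all ICMP
   match access-group 101
  !
  ! child policy: LLQ for ICMP at 10% of the parent's shaped rate
  policy-map LLQ-CHILD
   class ICMP
    priority percent 10
  !
  ! parent policy: shape everything to 100kbps, then hand off to the child
  policy-map SHAPE-100K
   class class-default
    shape average 100000
    service-policy LLQ-CHILD
  !
  interface Ethernet0
   service-policy output SHAPE-100K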

The download gets shaped nicely to approx. 12kbytes/s, but at the same
time the ping RTTs go up significantly and become jittery (from 1.6ms to
values bouncing between 16ms and 160ms, apparently averaging around 60ms).
The bandwidth seems to be guaranteed, but I expected better latency
behavior for the LLQed ICMP traffic. I'm still not sure whether this is
a problem or just the effect to be expected when shaping to 100kbps, but
I can remove the entire prioritizing child policy from the shaper parent
without any significant change in RTT average or jitter. That makes me
inclined to call it a problem, since I'd expect at least some change
from activating LLQ. We discussed serialization delay earlier in the
thread, and I'm aware that at 100kbps a 1500 byte data packet amounts to
120ms of head-of-line blocking - but does that really apply to a shaper
as well?
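
What I plan to check next (just an idea of mine, not something from the
thread) is the per-class counters while the download and the ping run
in parallel, to see whether the priority class really matches the ICMP
packets and whether it reports drops or queue build-up:

  831# show policy-map interface Ethernet0

(Ethernet0 standing in for whichever interface carries the policy.)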
 
> A co-worker much more knowledgeable in broadband than I am had sent
> this in an email:
>  
> -=-
> 1) QoS is not supported on VA (virtual-access) for L2TP tunnels

Good to know, though not relevant here: I'm seeing the effect with the
simplest Ethernet setup I could build, and I'm seeing it as well on the
destination system, where the shaping parent policy will be applied to
a tunnel interface.

> 2) with PPPoX (maybe others), hierarchical MQC does not work (shaping not
> supported)
> 3) with PPPoX, one can only have (non-hierarchical) QoS working on the
> dialer or virtual-template AND only with MLP.

Interesting limitations. Are they here to stay? I had plans to move our
direct PPPoE customers from access-rate over to shaping, since it behaves
much more nicely, but these restrictions would mean there's no point in
even starting.
 
> QoS over dsl:
> =============
> 
> So, when a customer requests to enable QoS on his CPE running PPPoX, he
> has 2 options:
> 
> A) The easy one:
> enable vbr (rt or nrt) or cbr ATM shaping on the ATM PVC and apply the
> queueing service-policy directly on the PVC.  Note, without vbr
> (rt or nrt) or cbr, QoS is not supported on ATM.  See also:
> http://www.cisco.com/warp/customer/121/7200_per-vc-CBWFQ.html

Yep, I've already found that out. In my target project we have a bunch
of 3745s with E3 ATM NMs, and I had to set up QoS there as well. Before
discovering per-VC CBWFQ I was close to despair, because in subinterface
context the routers refuse to accept an outbound QoS policy at all.
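
What I ended up with there is roughly this (VPI/VCI, rates and the
policy name are placeholders, not the real values):

  interface ATM1/0.100 point-to-point
   pvc 1/100
    ! vbr-nrt (or cbr) shaping is required before a queueing
    ! service-policy can be attached to the PVC
    vbr-nrt 2000 2000
    service-policy output QOS-POLICY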

> One note here which may save you some headaches: don't forget to tune
> the tx-ring-limit for the ATM PVC (it is not done automagically as it
> is for normal interfaces):
> http://www.cisco.com/warp/public/121/txringlimit_6142.html

Thanks, I'll look into this for that other part of the infrastructure.
It doesn't apply to my tunnel interfaces, though, if I read the doc
correctly.
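
If I read it right, the knob simply goes under the PVC next to the
shaping and the service-policy (the value 3 is only an illustration; a
sensible ring size and its unit depend on the ATM hardware):

   pvc 1/100
    vbr-nrt 2000 2000
    ! keep the hardware transmit ring short so packets wait in the
    ! CBWFQ/LLQ queues instead of in the ring
    tx-ring-limit 3
    service-policy output QOS-POLICY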
 
> B) The complicated way:
> enable MLP on the dialer on the CPE, and apply the QoS there.  But then
> MLP needs to be negotiated with whoever terminates the PPP.  This
> needs involvement of the DSL provider, as his UAC will need to have MLP
> enabled under its virtual template as well.

In the long term, MLP on DSL will become fairly typical here in Germany,
as it allows working around the 24h forced disconnect imposed by our
carrier monopolist...
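
For reference, what I understand option B to mean on the CPE side is
roughly this (interface number and policy name are placeholders):

  interface Dialer1
   encapsulation ppp
   ppp multilink
   service-policy output QOS-POLICY

with the terminating side carrying a matching "ppp multilink" under its
virtual-template.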
 
> And if this is VPDN, he will need to enable MLP on both LAC and LNS,
> unless LCP renegotiation is configured.  In the latter case, MLP only
> needs to be enabled on the LNS.
> 
> There is the following doc on CCO, but it is pretty incomplete:
> http://www.cisco.com/warp/customer/105/pppoe_qos_dsl.html

Highly interesting pointers, though for other projects of ours (they
will come in handy there).

> There are some feature requests in to make per-user shaping and
> queueing work. I don't know the timeline.

Great. So it's recognized as a problem to be solved.

What I'm currently looking for is much simpler, though: a minimal setup
in which LLQ under a shaper can be clearly demonstrated to work and to
change things. My lab experiment actually stems from a workshop where I
wanted to present hierarchical QoS with shaping and LLQ (as I had
already deployed it in a project) and was unpleasantly surprised that it
didn't work as expected. Now I'm trying to find out whether this also
applies to the deployed system, and apparently it does, although it has
less impact there since I'm shaping to higher bandwidths.
Then again, I've just computed the overhead of G.711 in RTP in UDP in
IP in GRE in IP in ESP in IP in AAL5-SNAP in ATM cells, and it works
out to an overhead factor of roughly 230%, or about 1000kbps of usable
bandwidth on a 2320kbps SDSL line. The overhead factor for larger
payloads is far smaller. I'm pondering what bitrate I should actually
shape to on tunnel ingress, since the resulting line rate varies widely
with the input packet size...
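
For the record, the per-packet arithmetic behind that estimate, assuming
20ms packetization (160 bytes of G.711 per packet, 50 pps) and ESP with
an 8 byte IV and a 12 byte ICV - the exact figures shift a bit with
cipher, padding and cell fill:

  160 G.711 + 12 RTP + 8 UDP + 20 IP                        = 200 bytes
  + 4 GRE + 20 delivery IP                                  = 224 bytes
  + 20 outer IP + 8 ESP hdr + 8 IV + 8 pad/trailer + 12 ICV = 280 bytes
  + 8 LLC/SNAP + 8 AAL5 trailer, padded to 7 cells          = 371 bytes on the wire

  371 bytes * 8 bits * 50 pps = ~148kbps per call vs. 64kbps of G.711,
  i.e. a factor of ~2.3 - which is where the ~1000kbps out of 2320kbps
  come from.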

Thanks,
Andre.
-- 
                  The _S_anta _C_laus _O_peration
  or "how to turn a complete illusion into a neverending money source"

-> Andre Beck    +++ ABP-RIPE +++    IBH Prof. Dr. Horn GmbH, Dresden <-

