At 12:21 +0000 2/23/98, Mike Norris wrote:
>Hello
> we're currently testing Cisco's custom queuing (CQ) on
>a 7206. There are two serial connections to the router:
>serial 1/1 is framed E1 with bandwidth of 1984 Kbps, serial
>1/3 is clocked at 2000 Kbps and is connected to a 7000 with
>connections to several downstream networks.
>
>The queues are based on access lists which distinguish
>by IP address of the downstream networks. The byte counts
>in the queues range from about 6000 to 60000.
>
>We're trying to establish whether CQ works as we expect
>and to adjust the configuration as appropriate. To do
>this, we're measuring output on serial 1/3 to the various
>downstream networks when the link is congested and there
>is packet loss. We've not been able to show conclusively
>that there is a link between a queue's byte count and
>output to the corresponding network; indeed, we moved
>one network to a queue with half the byte count and
>found that output to it **went up**.
>
>I'd be grateful for any advice on how to demonstrate
>the effects of CQ and, by extension, how to refine the
>technique.
>
Several things come to mind.
First, for any interpretation of CQ to work well, you need to have a good
idea of the length distribution of packets, and set the queue byte count to
be a multiple of a "common" length. I use "common" here rather than a more
usual statistical measure such as mean or mode, for reasons that will
become clearer as I go along.
Second, remember that packet count as well as byte count can determine when a
queue's service interval ends. Lots of short packets can override your
byte count.
Now, there is confusion over how CQ actually behaves in the following
situation:
Q1 has a 2000 byte count
Q2 has a 2000 byte count
A packet arrives in Q1 that is 4000 bytes long. Clearly, it has to be sent
in one unit. Q1 is now in a "deficit" of -2000 bytes. CQ moves on to
service Q2.
What is the behavior of Q1 on the next CQ cycle? I've heard different
explanations from Cisco folk at different times, and the behavior may be
release-dependent.
The first explanation I received was that CQ resets the credit at the
start of each cycle, so that a steady stream of 4000 byte packets on Q1
would result in Q1 consistently getting unfair service. It would send
4000 bytes each time.
When the Cisco Internetwork Design instructors, as a group, raised this as
a question, we were given the response that CQ does have memory. In this
example, on the second CQ cycle, it would remember that Q1 is in deficit,
add 2000 bytes to Q1, find the new value was zero, and skip Q1 in that
cycle.
In the next cycle, 2000 bytes credit would then be given to Q1, and the
first packet arriving transmitted. If that packet were 4000 long, the
whole credit-deficit cycle would start again.
So, if we use explanation #1, and assume Q2 has 2000 byte packets, the
behavior seen is:
4000(Q1)-2000(Q2)-4000(Q1)-2000(Q2)....
This is not bursty, but it is unfair: Q1 gets 2/3 of the bandwidth rather
than 1/2. If we use explanation #2, the behavior seen is:
4000(Q1)-2000(Q2)-0(Q1)-2000(Q2)-4000(Q1)-2000(Q2)-0(Q1)....
This gives more fairness to the throughput in each class (i.e., queue), but
makes things bursty.
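The two service patterns above can be reproduced with a minimal simulation. This is a sketch of my reading of the two candidate models, not of any actual IOS code; the queue contents are the hypothetical ones from the example (Q1 sends a steady stream of 4000-byte packets, Q2 a stream of 2000-byte packets, both with a 2000-byte count).

```python
def simulate(cycles, byte_count=2000, with_memory=False):
    """Return per-cycle bytes sent as a list of (q1_bytes, q2_bytes) tuples.

    with_memory=False models explanation #1 (credit reset each cycle);
    with_memory=True models explanation #2 (deficit carried forward).
    """
    sent = []
    credit = 0  # Q1's carried-over credit (used by model #2 only)
    for _ in range(cycles):
        # Service Q1: grant byte_count of credit, adding to any deficit
        # under model #2, or starting fresh under model #1.
        credit = credit + byte_count if with_memory else byte_count
        q1 = 0
        while credit > 0:      # a packet begun must be sent whole
            q1 += 4000         # Q1's next packet is always 4000 bytes
            credit -= 4000
        if not with_memory:
            credit = 0         # model #1: forget the overrun
        # Service Q2: its 2000-byte packets exactly fit the byte count.
        q2 = 2000
        sent.append((q1, q2))
    return sent

# Model #1: Q1 sends 4000 every cycle -- unfair (2/3 of bandwidth).
print(simulate(3, with_memory=False))
# -> [(4000, 2000), (4000, 2000), (4000, 2000)]
# Model #2: Q1 alternates 4000 and 0 -- fair on average (1/2), but bursty.
print(simulate(3, with_memory=True))
# -> [(4000, 2000), (0, 2000), (4000, 2000)]
```

The two outputs correspond directly to the 4000-2000-4000-2000... and 4000-2000-0-2000-4000... sequences above.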
What we've been teaching is that when you see a situation like this, the
goal is to change the byte counts to reflect the proper packet lengths.
Before:
Q1 has 2000 byte count, to give it 50% of BW
Q2 has 2000 byte count, to give it 50% of BW
After:
Q1 has 4000 byte count, to give it 50% of BW with real traffic
Q2 has 4000 byte count, to maintain bandwidth ratios.
It's an imperfect world. Increasing the byte counts in each class here
will increase the latency required to service each class, but will smooth
the flow.
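A quick arithmetic check of the "after" configuration, using the same hypothetical packet sizes as above (4000-byte packets on Q1, 2000-byte on Q2): with 4000-byte counts, each cycle sends one Q1 packet and two Q2 packets, restoring the 50/50 split at the cost of a longer 8000-byte cycle.

```python
q1_pkt, q2_pkt = 4000, 2000   # hypothetical packet lengths per queue
byte_count = 4000             # the retuned ("after") byte count

# Bytes sent per service interval: whole packets until the count is met.
q1_per_cycle = -(-byte_count // q1_pkt) * q1_pkt   # ceil division -> 4000
q2_per_cycle = -(-byte_count // q2_pkt) * q2_pkt   # -> 4000
cycle_bytes = q1_per_cycle + q2_per_cycle          # 8000 bytes per cycle

print(q1_per_cycle / cycle_bytes)  # -> 0.5, Q1's share of bandwidth
```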
Can anyone give a definitive explanation of which model is correct for CQ?