Re: A historical aside

From: Ben Black (ben@layer8.net)
Date: Tue Dec 18 2001 - 16:38:34 EST


Just as with any routing between multiple systems, you can control
what you send, but you have little, if any, control over what is
sent to you.

Providers don't care for QoS in the core because their service is
transporting packets from point A to point B: If customer A has
sent a packet, it is because they want it delivered (otherwise, why
are they sending it?). If, within a provider core, you arrive at
a state in which packets are consistently being dropped (and are
not victims of some sort of transient outage or attack), then the
solution _for a provider_ is to increase bandwidth within the core.

The logic that says they must increase bandwidth rather than
resort to QoS could go something like this: a provider offers
two grades of service, gold and silver, with guarantees of 99.9%
and 95% packet delivery, respectively (there could be even lower
grades of service, but they are not relevant or attractive... the
delivery guarantees could also be jitter/delay/etc. guarantees).
Now, if a provider could _really_ provision in such a way that
_those_ guarantees are met, how could the incremental cost of
meeting 99.9% for _every_ customer by just increasing bandwidth
possibly be higher than the cost of full-scale QoS deployment and
maintenance? I don't think it can (and plainly many others think
the same thing).
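
To put rough numbers on that (purely illustrative figures of my
own, not anyone's real costs), a quick back-of-the-envelope in
Python might look like this:

    # Back-of-the-envelope: cost of lifting every customer to the
    # "gold" delivery guarantee by adding bandwidth, vs. the cost of
    # deploying and operating QoS core-wide. Every figure here is a
    # hypothetical assumption, not real provider data.

    core_capacity_gbps = 100.0     # assumed total core capacity
    silver_share = 0.5             # assumed fraction of traffic sold at 95%

    # Lifting a class from 95% to 99.9% delivery needs very roughly
    # 0.999/0.95 - 1 (about 5%) more capacity for that traffic.
    delivery_gap = 0.999 / 0.95 - 1.0
    extra_capacity_gbps = core_capacity_gbps * silver_share * delivery_gap

    cost_per_gbps_year = 10_000.0  # assumed incremental bandwidth cost
    qos_cost_year = 500_000.0      # assumed core-wide QoS deployment + opex

    upgrade_cost_year = extra_capacity_gbps * cost_per_gbps_year

    print(f"extra capacity needed:   {extra_capacity_gbps:.2f} Gbps")
    print(f"bandwidth upgrade, $/yr: {upgrade_cost_year:,.0f}")
    print(f"core QoS, $/yr:          {qos_cost_year:,.0f}")

Whatever numbers you plug in, the shape stays the same: the extra
bandwidth is bounded by the delivery gap, while the QoS cost scales
with every router and every customer you have to configure and bill.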

So, to tie it back to the first statement, a provider must be
effectively transparent, as Sean said, not because QoS is an edge
function, but because it is a _customer_ function. Clearly such
a perfect, QoS-free provider world runs into snags when egress
bandwidth to a customer is insufficient for the offered load, but
I think the question of how to handle that situation well is not
easily answered, and not just because of QoS. (*handwave*)

Despite all that, this is a network design issue, not an issue of
Internet architecture. Isn't the question of whose packets to
drop completely orthogonal to figuring out where to send a packet
you aren't dropping?
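
As for Sean's point below about line utilization vs queueing delay,
a crude M/M/1 back-of-the-envelope (mine, not Peter's slides, and
real traffic is not Poisson) shows why queueing stops mattering at
core speeds:

    # Mean M/M/1 queueing delay (time waiting in queue, excluding
    # service) for full-size packets at different line speeds.
    # Purely illustrative assumptions throughout.

    PACKET_BITS = 1500 * 8  # assumed full-size packet

    def mean_wait_us(line_bps, utilization):
        service_time = PACKET_BITS / line_bps  # seconds per packet
        return (utilization / (1.0 - utilization)) * service_time * 1e6

    for name, bps in [("T1 (1.5 Mbps)", 1.5e6),
                      ("OC-48 (2.5 Gbps)", 2.5e9)]:
        for rho in (0.5, 0.9):
            print("%s at %d%% load: %10.1f us mean queueing delay"
                  % (name, rho * 100, mean_wait_us(bps, rho)))

At 90% load the OC-48 queues for tens of microseconds, which is
noise next to propagation delay; a T1 at the same load queues for
tens of milliseconds, which a user can feel. That asymmetry is what
makes fancy queueing an edge concern rather than a core one.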

Ben

On Tue, Dec 18, 2001 at 07:30:01PM +0100, Sean Doran wrote:
> Fred -
>
> To summarize the Peter-Sean religion on QoS: queue-reordering
> is an EDGE function, not a CORE function. The queues in the core
> are never long enough to make fancy-queueing worthwhile, and
> should not even be part of the architecture once you hit
> core-router-to-core-router speeds of 2.5Gbps. Peter Lothberg
> has excellent slides comparing the line utilization vs queueing
> delay for various line-speeds, incidentally.
>
> However, there are lots of edges to the network, and
> indeed there are entirely separate networks which are constructed
> out of only low-bandwidth bottlenecks in front of which one frequently
> observes packet queues. QoS is just fine there, architecturally.
>
> However, my feeling on pricing out edge QoS is that it should
> cost you money to have a provider do fancy-queueing for
> you, and that the price ought to be closely related to the price of
> upgrading the bottleneck to the point that there is on average no
> queueing in the provider->customer direction.
>
> Likewise, my feeling on {diff,int}-serv is that core
> providers who have zero average length queues (i.e., ones
> with ample bandwidth) should simply be "QoS transparent",
> and simply never interpret the packet markings or participate
> in QoS negotiations: those mechanisms are not worth it unless
> it lets one extract more cash from a customer who is unable
> to upgrade his or her bottleneck (e.g., they are stuck with
> only a monopoly local loop provider who won't deliver an
> upgrade). So far there is no evidence that the revenue
> uplift would ever come close to the maintenance costs...
>
> This should be taken into account in proposals in which
> QoS negotiations are subsumed by the routing system.
>
> Sean.


