Re: 3d graph theory text/reference

From: Sean Doran (smd@ebone.net)
Date: Tue Feb 13 2001 - 09:24:12 EST


A few people asked me what I was on (or on about) with my previous
note and basically it occurred to me that in a standard 2d network
graph you can play with the graph in the usual neato ways, like
Dijkstra, Kruskal, Prim, Floyd-Warshall, Bellman-Ford, OMP, and so on.
However, all of these get ugly for graphs G(V, E) dense enough that
|E| approaches |V|², such as the one Vijay likes to talk about, or
where between v and v' there are multiple metrics per edge and/or
multiple parallel edges.
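To make the multi-edge case concrete, here's a toy sketch in Python
(vertices, costs and topology all invented): plain Dijkstra run over a
multigraph, where several parallel edges between the same pair of
vertices are allowed and the cheapest one effectively wins.

```python
import heapq
from collections import defaultdict

def dijkstra_multigraph(edges, source):
    # edges: list of (u, v, cost) triples; parallel (u, v) entries allowed
    adj = defaultdict(list)
    for u, v, cost in edges:
        adj[u].append((v, cost))
        adj[v].append((u, cost))
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in adj[u]:
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

links = [("A", "B", 1), ("A", "B", 3),  # two parallel A-B links
         ("B", "C", 2), ("A", "C", 5)]
print(dijkstra_multigraph(links, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

Note that the parallel A-B edges just bloat the adjacency lists; the
algorithm is fine with them, but the graph representation grows with
every extra member link.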

This last case is the one that worries me: lots of relatively
low-speed (~2.5-40 Gbps) parallel links between routers, such
that the aggregate bandwidth is in the Tbps range, where each
link can fail independently.

On a network graph we have two obvious metrics. The first is line
length, which is what we use for SPF. Easy stuff, we know how this
works. There has also been some experience with using line breadth
("bandwidth"), which extends the line into another dimension
by adjusting its "thickness" on paper. This really introduces the
concept of a plane between vertices. It's not very new; bellheads
have been dealing in DS0 vertical-horizontal miles for decades,
and abstract away the breadth dimension of the plane into a higher-
order container. In IP-land we don't abstract away the breadth
dimension, we just ignore it, except in corner cases where we are
doing TE-style activities.

Consider an inter-router link as a collection of wavelengths running
between a pair of routers, with two failure properties: the wavelengths
can all fail at once (cable cut) or they can fail in pieces (optical
component failure, non-linear attenuation, e.g. by lamination). This
can be "forced" into a two-dimensional graph with lines of various
widths by varying the width as lambdas come and go. So far so good;
there are ways of dealing with this sort of thing.
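The "forcing" can be sketched like so (purely illustrative Python; the
rates and lambda counts are made up): the link's 2d width is just the
sum of whatever lambdas happen to be up at the moment.

```python
class LambdaBundle:
    """A link modelled as a bundle of independently failing lambdas."""

    def __init__(self, rates_gbps):
        self.rates = list(rates_gbps)       # per-lambda rate
        self.up = [True] * len(self.rates)  # each lambda fails on its own

    def fail(self, i):
        self.up[i] = False

    def restore(self, i):
        self.up[i] = True

    def width_gbps(self):
        # the "line width" a 2d graph would draw; zero means cable cut
        return sum(r for r, ok in zip(self.rates, self.up) if ok)

link = LambdaBundle([10, 10, 40, 40])
link.fail(2)                   # one optical component dies
print(link.width_gbps())       # 60
```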

However, this "forcing" is really forceful: you don't get a serial
stream just by sending bits in parallel, because of quantum uncertainty:
chromatic dispersion causes bits transmitted simultaneously on two
different colours to be detected at different times by a receiver.
This worsens as speed and distance increase, and has to do with the
fundamental behaviour of photons as both waves and particles.

Worse, nonlinearities are also introduced by molecular differences
in different fibres, so a pulse on one colour sent in two equal-length
fibres will spread out over time in different ways in the two fibres.
This worsens as the pulses get shorter (i.e., as pulses/second
increase) and as the fibre has more molecules (i.e., as it gets
longer). So even fibres laid down in parallel in the same ducts will
behave differently.

So, if you have a serialization mechanism between sender and receiver,
something like "flow SAR", so that you can preserve the ordering of any
given flow -- like a TCP connection -- then you can treat all this as a
line of varying width; however, this has some bandwidth*delay + "quantum
jitter" implications on the receiver side. But then we're back to
simple 2d graphs.
Again, this is something telco people have been doing for decades,
although the pointer management in the faster higher-order containers
to keep bits synchronous to one another is frankly nightmarish.
I don't really like to consider the much-higher-speed equivalent
of 48 T3s instead of an OC48c, even if the T3s are reliably
synchronous with respect to one another, so that you can send
bits in parallel across all of them.
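One way to get per-flow ordering without full SAR or synchronous
pointer management is simply to pin each flow onto a single lambda by
hashing its identifying fields, so one TCP connection never straddles
lambdas with differing dispersion. A sketch (the field names and hash
choice here are mine, not any real box's):

```python
import hashlib

def pick_lambda(src, dst, sport, dport, proto, n_lambdas):
    # Deterministic hash of the flow's 5-tuple onto a lambda index.
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_lambdas

a = pick_lambda("10.0.0.1", "10.0.0.2", 1234, 80, 6, 8)
b = pick_lambda("10.0.0.1", "10.0.0.2", 1234, 80, 6, 8)
assert a == b  # same flow always lands on the same lambda
```

The cost, of course, is that no single flow can ever exceed one
lambda's rate, and the load balance is only as good as the hash and
the flow mix.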

An alternative is to expose each individual line as an edge in the
2d graph, where it's either up or down (and is of bw 2.5,5,10,40...),
but this ends up with lots of edges between any pair of vertices.
Likewise, one can make the graph really dense by passing wavelengths
"through" so that one fibre emanating in a direction out of v will
lead to a few colours landing at v' and a few colours at v'' and so forth,
which constrains the number of lines between any pair of vertices while
increasing the overall number of adjacencies. One can go all the
way with this by introducing lots and lots of vertices and building
a full mesh. None of these seem attractive.
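A back-of-envelope on how fast the exposed-edge version blows up
(router and lambda counts invented):

```python
def edges_collapsed(n):
    # one "wide" edge per router pair
    return n * (n - 1) // 2

def edges_exposed(n, k):
    # every one of k parallel lambdas between each pair is its own edge
    return k * n * (n - 1) // 2

n, k = 20, 32
print(edges_collapsed(n), edges_exposed(n, k))  # 190 6080
```

And that's before the link-state flooding and SPF run times that go
with every one of those edges flapping independently.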

Also, the Antonov-Lothberg "layering" approach can be used,
where one throws away viable paths by constraining traffic
to travel within one plane, perhaps until the final stack of
routers ("vertex") is reached. This is hard to troubleshoot.
Peter Lothberg has done a *great* slide showing what a single POP
can look like. Hopefully this is something I'll be permitted to
scan & put online.

The complexity in each of these approaches is what's at the bottom
of my original question, which might be phrased as: are there
sensible analogues to the elementary 2d graph algorithms that can
be applied to graphs of higher dimensions? That is, rather than
processing a very dense graph, or worrying about forcing network
components into a 2d graph, can we consider a bunch of parallel
lambdas to be a proper plane, so that we consider not the up/down
nature of a line, but the breadth of the plane, when making
forwarding decisions at a distance? That is, breadth may be zero
(total fibre cut) or it may have one of many non-zero values,
depending on which colours are working at any given time.
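A sketch of what "forwarding on breadth" might look like (topology
invented; this is just the classic widest-path, max-min variant of
Dijkstra, with each edge's metric being the aggregate width of its
surviving lambdas):

```python
import heapq
from collections import defaultdict

def widest_paths(edges, source):
    # edges: (u, v, width) triples; width 0 means the plane is cut
    adj = defaultdict(list)
    for u, v, width in edges:
        if width > 0:
            adj[u].append((v, width))
            adj[v].append((u, width))
    best = {source: float("inf")}
    heap = [(-float("inf"), source)]   # max-heap via negated widths
    while heap:
        neg_w, u = heapq.heappop(heap)
        w = -neg_w
        if w < best.get(u, 0):
            continue  # stale entry
        for v, width in adj[u]:
            bottleneck = min(w, width)  # path is as wide as its narrowest plane
            if bottleneck > best.get(v, 0):
                best[v] = bottleneck
                heapq.heappush(heap, (-bottleneck, v))
    return best

planes = [("A", "B", 60), ("B", "C", 100), ("A", "C", 40)]
print(widest_paths(planes, "A"))  # {'A': inf, 'B': 60, 'C': 60}
```

Note that the two-hop path through B beats the direct A-C plane,
because breadth, not hop count, is what's being maximised.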

Likewise, can we treat a collection of collections of lambdas
(e.g. a conduit full of fibres or the like) as a 3d polyhedron,
where the overall volume is used for metric purposes by distant
routers?

I suppose one way to look at this is that we keep things neatly
monometric (length, length*breadth, length*(breadth*depth)) but
change the metrics frequently as components fail. Fine at
a distance, but up close between v and v', it's not so simple
if a goal is to maintain packet ordering within any given flow -- that
is, this is just a new set of metrics on a fundamentally 2d graph.
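One arbitrary way of collapsing the monometric view into a single
SPF-style scalar, recomputed as components fail (the normalisation
here is entirely made up, it's just to show the shape of the thing):

```python
def link_cost(length_km, lambda_rates_up_gbps, depth=1):
    # breadth = surviving aggregate bandwidth of the plane
    breadth = sum(lambda_rates_up_gbps)
    if breadth == 0:
        return float("inf")   # total cut: link unusable
    # more breadth (or depth, for a conduit of fibres) => lower cost;
    # length dominates, scaling is arbitrary
    return length_km / (breadth * depth)

print(link_cost(100, [10, 10, 40]))  # 100/60
print(link_cost(100, []))            # inf
```

Distant routers only ever see the scalar; the churn as colours come
and go is exactly the frequent-metric-change problem described above.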

So, what I'm really after is something that explores the tradeoffs
of various ways of dealing with this kind of network graph.

        Sean.



This archive was generated by hypermail 2b29 : Mon Aug 04 2003 - 04:10:04 EDT