In message <NCBBIKACLKNMKDHKKKNFCEBBFEAA.dima@krioukov.net>, "Dmitri Krioukov"
writes:
>
> Yes, these operational realities can be considered as another, deeper
> reason for the problem-patching IETF philosophy since vendors, after all,
> are not the last link in the provider-customer relationship chain, which
> at its *abstract* level looks like research/standards <- vendors <-
> operators <- I :)
You described the following layer 8 relationship:
research/standards <- vendors <- operators <- I
The IRTF postulates that there are still some rational minds within at
least a subset of the Internet provider community; therefore, the
following also exists:
                         .---------.--- research
                         V         V
research/standards <- vendors <- operators
The research community has influence but must convince the operators
that their proposals are deployable, and must convince operators and
vendors that they are implementable (i.e., will scale).
> > I'll add my list of short term priorities:
> >
> > 1. traffic engineering
> > ...
> > 2. quality of traffic layout after restoration
> > ...
> > 3. restoration time
>
> I do not really see p.2 and p.3 competing in all models.
> If your model precomputes some "optimal" traffic layout
> for a case of a potential failure and if it also provides
> for a quick failover if this failure occurs, then it
> satisfies both requirements simultaneously.
When doing research, there is a temptation to dismiss any problem that
is known to be solvable as not worthy of research.
With current implementations, there remains a tradeoff for the
operator between how efficiently bandwidth (reservations) is utilized
and the speed of restoration. There is also a tradeoff between the
very high quality of layout possible with offline computation and
global knowledge, and the harsh reality of imperfect shared risk link
group (SRLG) data and the very slow restoration that occurs when these
unforeseen SRLGs make their presence known. Examples:
- fibers that cross the San Andreas fault within the same 100 mile
  shift in the fault (San Diego, 97 or 98 I think)
- POPs within the same flood plain (major fiber outages during the 93
  floods in the US midwest)
- fiber along the same railway right of way on opposite sides of the
  track (NJ Amtrak derailment almost isolates NY in 93)
- carrier and RBOC in the same building accidentally wire one's
  equipment to the other's 48 volt power and suffer the same power
  outage during generator maintenance (FAA 5 hour outage, late 1980s)
- fibers on the same gas right of way affected by gas company
  maintenance ripping up the wrong conduit along the right of way
  (98 or 99), etc.
For many providers, this, plus databases that are just plain wrong,
rules out offline computation. For others offline is fine (and TDM's
inability to share bandwidth, plus the state of the art in this
miraculous offline provisioning, is why your DS1, DS3, or OCx can take
2-10 hours to come back, while IP service is at worst back within 5
minutes when BGP is having a bad day with a reasonably good provider
that doesn't flap when a fault occurs, or within 20-30 seconds or less
if the provider can hide the restoration internally and not impact
their EBGP announcements).
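
To make the SRLG-database point concrete, here is a minimal Python
sketch; the path names, SRLG names, and the "railway_row" risk group
are invented for illustration and are not drawn from any real provider
data. The point it shows is only that an offline computation can
guarantee disjointness solely against the risk groups it has been told
about.

# Toy illustration (invented data): offline backup selection is only
# as good as the SRLG database it runs against.

# SRLG membership as the provider's database records it.
recorded_srlgs = {
    "primary":  {"conduit_A", "pop_newark"},
    "backup_1": {"conduit_B", "pop_newark"},
    "backup_2": {"conduit_C", "pop_philly"},
}

def srlg_disjoint(a, b, srlg_db):
    """True if paths a and b share no SRLG -- according to the database."""
    return not (srlg_db[a] & srlg_db[b])

# The offline computation happily picks backup_2: it looks fully disjoint.
candidates = [p for p in ("backup_1", "backup_2")
              if srlg_disjoint("primary", p, recorded_srlgs)]
print("disjoint backups per database:", candidates)      # ['backup_2']

# Reality: conduit_A and conduit_C sit on opposite sides of the same
# railway right of way, a shared risk nobody recorded, so a single
# derailment takes out the primary and its "disjoint" backup together.
actual_srlgs = {
    "primary":  recorded_srlgs["primary"]  | {"railway_row"},
    "backup_1": recorded_srlgs["backup_1"],
    "backup_2": recorded_srlgs["backup_2"] | {"railway_row"},
}
print("actually disjoint:", srlg_disjoint("primary", "backup_2", actual_srlgs))  # False

Run it and the database answers backup_2 while the real world answers
False, which is exactly the gap between offline optimality and what
actually stays up.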
So 2 and 3 above are solvable, just not yet adequately solved. That
makes them interesting to me, but maybe not interesting from a
research purist standpoint.
> dima.
Regards,
Curtis