> -----Original Message-----
> From: Curtis Villamizar [mailto:curtis@workhorse.fictitious.org]
> Sent: Thursday, January 04, 2001 9:54 AM
> To: Dmitri Krioukov
> Cc: Fred Baker; irtf-rr@nether.net
> Subject: Re: Wither irtf-rr
>
> Router vendors build what their customers need or at least what they
> perceive they need and will pay money for.
> ...
> It takes a very long time to go from theory to practice in the IP
> world despite the hype about moving in Internet time. A good example
> is CIDR deployment which was a small incremental change in theory
> (nothing very major about variable length subnet masks in theory), but
> took a few years to go from RFC to deployment and an imminent crisis
> to get it there.
> ...
> My point is that we need to be sure that the list of problems is real
> and important enough. What we strive to change to must actually work
> on a large scale. The IETF has a mixed track record on this. SDRP,
> ERP, microflow RSVP (on a large scale), IPLDN and ROLC work, NHRP, are
> all complete failures in that regard. What we strive for must be
> simple enough that it gets implemented and must represent a modest
> operational change.
Yes, these operational realities can be seen as yet another, deeper
reason for the problem-patching IETF philosophy: vendors, after all,
are not the last link in the provider-customer relationship chain, which
at its *abstract* level looks like research/standards <- vendors <-
operators <- me :)
One thing to note is that if we try to analyze the even deeper reasons
why numerous operators are not already banging on my door offering the
on-demand DVD-quality video I want so much, we end up with the same
"more money with less effort" logic I mentioned before. This logic
is very strict and absolutely unbeatable from the business perspective.
Minds that cannot accept this logic are in academia.
The other question is whether this situation should stigmatize those
"radical" academic research efforts as unrelated to business realities.
The answer is clearly "no", since, for example, our major business reality,
the Internet itself, was born out of academic research and hence
builds on a lot of results obtained in academia.
Can we start thinking about building a fundamentally new (initially
research) network based on fundamentally new principles and "routing
protocols"? I don't think so, and not because there is no business need
for it (was there any business need for the ARPANET?), but because those
fundamentally new principles have not been found yet.
> The only way you'll get a major change in the Internet is if the death
> of the Internet is imminent and you can convince the ISPs of that
> before it collapses. I don't see that to be the case though
> suboptimal more than describes it.
> ...
> So why is there so little activity on this list? Maybe we don't
> really have a crisis on our hands. That doesn't mean we shouldn't try
> to improve things, but it might mean that exploring too radical a
> change is a waste of our time.
We definitely do not have any fundamental crisis, since we are still
pretty far from hitting the fundamental limitations of the currently
existing protocols and infrastructure. The problem-patching approach
still has a very long way to go. But should we wait until a fundamental
crisis occurs before starting fundamental research?
> ...determining some means to
> more automatically aggregate over multiple administrative domains might
> be a good topic to pursue...
> ...
> 6. Advise your customers of the consequences in terms of route flap
> if they peer with multiple providers and announce a hole in the
> aggregate or a more specific route. Discount a backup
> connection to another router (or another POP) of your own to
> keep the aggregation. Don't give them a reason to dual home.
Multihoming has been an endless discussion, especially on the operations
lists. And here our very strong what-the-customer-wants logic seems to
break again, simply because there is no solution. Deep-pocket customers
do want to be multihomed, to Tier 1 providers if possible, since that is
when they feel safest. No aggregation technique really works in this case.
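To make this concrete, here is a toy sketch (hypothetical prefixes and
providers, nothing operational) of why a multihomed customer defeats
aggregation: its prefix has to appear in the global table as a
more-specific no matter how well each provider aggregates its own space.

  import ipaddress

  # Hypothetical address blocks, for illustration only.
  provider_a = ipaddress.ip_network("10.0.0.0/16")    # A's aggregate
  provider_b = ipaddress.ip_network("172.16.0.0/16")  # B's aggregate
  customer   = ipaddress.ip_network("10.0.42.0/24")   # numbered out of A's block

  # Single-homed to A: the customer is covered by A's aggregate,
  # so the global table needs only the two /16s.
  single_homed_table = {provider_a, provider_b}

  # Multihomed to A and B: B cannot originate A's aggregate, so the
  # customer /24 must be announced as a more-specific (a hole in A's
  # aggregate) that every AS has to carry and that flaps on its own.
  multi_homed_table = {provider_a, provider_b, customer}

  assert customer.subnet_of(provider_a)   # the hole is inside A's block
  print(len(single_homed_table), "routes vs", len(multi_homed_table), "routes")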
> >> number of routes
>
> Some ISPs have requirements that push the need for route processor
> RAM over the 1/2 GB mark. Putting more than 4GB of RAM on a card is
> difficult for most processors (instructions with 32 address bits).
>
> There are ways to solve this using multiple servers and splitting
> the load. Before joining Avici, I spoke to Craig Labovitz about
> delegating subtrees of the patricia trie to different processors as
> a scaling tactic. I've also spoken to Sean and others about this.
> It eases both CPU load and memory limits of a single server.
>
> As long as the operational practices are what they are, the big
> router guys need to get this right. I see this as an implementation
> detail that you can't get wrong.
I think we need to address this problem (RT explosion) by slicing it
into the following pieces:
1. Internet Architecture level:
   a) Controlling RT growth within the currently existing Internet
      architecture --
      Have we really explored all possible ways to aggregate?
   b) Modifications to the currently existing architecture --
      MPLS+Nimrod?
   c) Search for a scalable and non-hierarchical architecture --
      Is it really complete nonsense?
2. Hardware level:
   a) Is heavy use of parallelism the only possible solution?
      (A toy sketch of the subtree-delegation idea follows this list.)
   b) Can we rethink, in this context, the idea of separating the RIB
      and FIB entities more thoroughly?
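On 2a, a back-of-the-envelope Python sketch of the subtree-delegation
idea Curtis mentions above (hypothetical prefixes and processor count,
not anyone's implementation): shard prefixes across route processors by
their leading bits, so each processor owns a disjoint subtree of the
trie and the per-processor RIB size and CPU load shrink accordingly.

  import ipaddress
  from collections import defaultdict

  SHARD_BITS = 2   # 2**2 = 4 route processors, one per top-level subtree

  def owner(prefix: ipaddress.IPv4Network) -> int:
      """Route processor that owns the trie subtree containing this prefix."""
      # Prefixes shorter than SHARD_BITS (e.g. a default route) span several
      # subtrees and would have to be replicated to every shard.
      return int(prefix.network_address) >> (32 - SHARD_BITS)

  shards = defaultdict(list)
  for p in ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "198.51.100.0/24"]:
      net = ipaddress.ip_network(p)
      shards[owner(net)].append(net)

  for proc in sorted(shards):
      print(f"processor {proc}: {shards[proc]}")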
> I'll add my list of short term priorities:
>
> 1. traffic engineering
> ...
> 2. quality of traffic layout after restoration
> ...
> 3. restoration time
I do not really see points 2 and 3 competing in all models.
If your model precomputes some "optimal" traffic layout
for the case of a potential failure, and if it also provides
for a quick failover when that failure occurs, then it
satisfies both requirements simultaneously.
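A toy sketch of what I mean (hypothetical topology and link costs,
Python): precompute the backup path for every possible single-link
failure ahead of time, so restoration becomes a table lookup rather than
a recomputation, and the quality of the post-failure layout is whatever
the offline computation achieved.

  import heapq

  # Hypothetical topology: {node: {neighbor: link_cost}}
  TOPOLOGY = {
      "A": {"B": 1, "C": 4},
      "B": {"A": 1, "C": 1, "D": 3},
      "C": {"A": 4, "B": 1, "D": 1},
      "D": {"B": 3, "C": 1},
  }

  def shortest_path(graph, src, dst, failed_link=None):
      """Plain Dijkstra that skips the failed link (an unordered node pair)."""
      dist, prev, heap = {src: 0}, {}, [(0, src)]
      while heap:
          d, u = heapq.heappop(heap)
          if u == dst:
              break
          if d > dist.get(u, float("inf")):
              continue
          for v, w in graph[u].items():
              if failed_link and {u, v} == set(failed_link):
                  continue
              if d + w < dist.get(v, float("inf")):
                  dist[v], prev[v] = d + w, u
                  heapq.heappush(heap, (d + w, v))
      if dst not in dist:
          return None          # the failure disconnected src from dst
      path, node = [dst], dst
      while node != src:
          node = prev[node]
          path.append(node)
      return list(reversed(path))

  # Precompute, for every link, the path traffic would take if it failed.
  links = {tuple(sorted((u, v))) for u in TOPOLOGY for v in TOPOLOGY[u]}
  backup = {link: shortest_path(TOPOLOGY, "A", "D", failed_link=link)
            for link in links}

  print(shortest_path(TOPOLOGY, "A", "D"))   # primary: ['A', 'B', 'C', 'D']
  print(backup[("B", "C")])                  # if B-C fails: ['A', 'B', 'D']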
-- dima.