Re: requirements sub-group documents

From: Russ White (ruwhite@cisco.com)
Date: Thu Mar 07 2002 - 08:47:01 EST


> The requirements sub-group have finally produced a document
> and we'd like your review and commentary on it.

Comments below....

Russ

_____________________________
riw@cisco.com <>< Grace Alone

-----

     3.2 Separable Components

        The architecture MUST place different functions into separate
        components.

        Separating functions, capabilities, and so forth, into
        individual components, and making each component "stand alone"
        is generally considered by system architects to be "A Good
        Thing". It allows individual elements of the system to be
        designed and tuned to do their jobs "very well". It also
        allows for piecemeal replacement and upgrading of elements as
        new technologies and algorithms become available.

        The architecture MUST have the ability to replace or upgrade
        existing components, and to add new ones, without disrupting
        the remaining parts of the system. Operators must be able to
        roll out these changes and additions incrementally (i.e. no
        "flag days"). These abilities are needed to allow the
        architecture to evolve as the Internet changes.

I assume you mean here that layering is a good thing, and layer
violations are a bad thing? It doesn't say so explicitly,
but.... I'm actually kind of trying to figure out what it does say
explicitly. :-)

-----

          o Making topology and addressing separate subsystems. This
             may allow highly optimized topology management and
             discovery without constraining the addressing structure or
             physical topology in unacceptable ways.

Do you mean here to have one address which indicates the topology
point of attachment for a device, and another address which
identifies the device? How do you propagate topology information
without assigning some identifier to topology elements and
propagating those identifiers?

-----

          o Separate "fault detection and healing" from basic
             topology. From Mike O'Dell:
               "Historically the same machinery is used for both.
               While attractive for many reasons, the availability of
               exogenous topology information (i.e., the intended
               topology) should, it seems, make some tasks easier than
               the general case of starting with zero knowledge. It
               certainly helps with recovery in the case of constraint
               satisfaction. In fact, the intended topology is a
               powerful way to state certain kinds of policy.

Since the way we generally heal after we've detected a fault is
to change the path through the topology, I'm not certain I
understand how you can separate the two? In other words, a fault
is actually more readily seen as a change in topology, rather
than as a failure. Or should failures be treated differently than
intentional changes in topology?

-----

        The architecture should also separate topology, routing, and
        addressing from the applications that use those components.
        This implies that applications such as policy definition,
        forwarding, and circuit and tunnel management are separate
        subsystems layered on top of the basic topology, routing, and
        addressing systems.

I think you are saying here that the application should not use
or rely on topology information to do its work (?). I'm not
certain how applications will be able to separate themselves from
addressing, since they must have an address to know what to
connect to.

In general, there seem to be two ways to use addresses:

-- To indicate a topological location. In the real world,
physical location and topological location just happen to overlap,
but that isn't true in networks, which is part of what makes life
difficult.

-- To indicate a given system or host.

The Internet was originally designed to separate these two
things, but the topological information (the IP address) became
the primary way of identifying hosts, and we now have a mess in
terms of being able to aggregate. That mess is only going to get
bigger if we continue to see the topological address as a way to
identify a host.

Splitting these two concepts would be a good thing, probably.
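
Just to make that split concrete, here is a rough sketch (the names and
shape are mine, not anything from the document) of keeping "who a host
is" separate from "where it is attached"; applications bind to the
identifier, and only the mapping and routing layers deal in locators:

    # Hypothetical sketch: a stable host identifier, one or more
    # topological locators, and a mapping service between them.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class HostId:
        value: str              # stable identity, e.g. a key hash

    @dataclass(frozen=True)
    class Locator:
        prefix: str             # topological point of attachment, aggregatable

    @dataclass
    class MappingService:
        table: dict = field(default_factory=dict)

        def register(self, hid: HostId, loc: Locator) -> None:
            self.table.setdefault(hid, set()).add(loc)

        def resolve(self, hid: HostId) -> set:
            # Renumbering or re-homing changes Locators only;
            # applications keep using the same HostId.
            return self.table.get(hid, set())

Where the mapping lives (DNS, the routing system, something new) is
exactly the kind of question the architecture would have to answer.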

-----

        Another facet of this requirement is that there may be multiple
        valid paths available to a destination. When there are
        multiple valid paths available, all valid paths MUST be
        available for forwarding traffic.

I would say 'loop-free' in here some place, since it seems to be
a clearer statement than a 'valid path.'
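
If 'valid' were pinned down as 'loop-free', the check is at least easy
to state. Purely as an illustration (the condition and the numbers are
mine, not the document's): a neighbor can be used as an alternate next
hop only if its own best path to the destination does not come back
through us:

    # Loop-free alternate check, illustrative only.
    def is_loop_free(dist_n_d: int, dist_n_s: int, dist_s_d: int) -> bool:
        # Neighbor N toward destination D is safe for source S if
        # N's distance to D is less than going back through S.
        return dist_n_d < dist_n_s + dist_s_d

    # Example: neighbor is 2 from D, 1 from us, we are 3 from D.
    assert is_loop_free(2, 1, 3)      # usable without looping
    assert not is_loop_free(5, 1, 3)  # would loop back through us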

-----

        The routing and addressing architecture MUST NOT make any
        constraints on or assumptions about the topology or
        connectedness of the elements comprising the Internet. The
        routing and addressing architecture MUST NOT presume any
        particular network structure. The network does not have a
        "nice" structure. In the past we used to believe that there
        was this nice "backbone/tier-1/tier-2/end-site" sort of
        hierarchy. This is not so. Therefore, any new Architecture
        must not presume any such structure.

Hmmm.... This seems a little harsh. You must assume a
hierarchical structure which provides aggregation points, or
otherwise the routing system must support 2^128 addresses as
individual, routable addresses. In fact, the paper earlier
states that aggregation is one method to reduce perceived
network complexity, among others--each of these methods brings an
implied network structure of some type.
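
A quick worked example of the aggregation I mean (my own numbers, using
IPv4 for brevity): 256 contiguous /24s behind one attachment point
collapse into a single advertisement, which is exactly the kind of
structure the text says we must not presume:

    # 256 more-specific routes become one aggregate.
    import ipaddress

    more_specifics = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(256)]
    aggregate = list(ipaddress.collapse_addresses(more_specifics))
    print(aggregate)    # [IPv4Network('10.1.0.0/16')] -- 256 entries down to 1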

-----

        The speed of convergence is generally considered to be a
        function of the number of subnetworks in the network and the
        amount of connections between those networks. As either number
        grows, the time it takes to converge increases.

'prefixes,' or 'reachable destinations,' rather than subnets. The
term 'subnets' is rather slippery, while the other two are more
concrete.
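
For intuition only (this is my crude model, not anything from the
document): if every reachable destination has to be re-announced over
every link when something changes, the work grows with both of the
counts this paragraph mentions, which is why the terminology matters:

    # Very rough flooding-cost model, illustrative only.
    def flood_messages(prefixes: int, links: int) -> int:
        return prefixes * links

    print(flood_messages(1_000, 50))         # 50,000 updates to settle
    print(flood_messages(100_000, 5_000))    # 500,000,000 -- convergence slows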

-----

        In addition, a change can "ripple" back and forth through the
        system. One change can go through the system, causing some
        other router to change its advertised connectivity, causing a
        new change to ripple through. These oscillations can take a
        while to work their way out of the network. It is also
        possible that these ripples never die out. In this situation
        the routing and addressing system is unstable; it never
        converges.

I think this would be better termed a continuous feedback
loop, perhaps.

-----

     3.8 End Host Security

        The Architecture MUST NOT prevent individual host-to-host
        communications sessions from being secured (i.e. it cannot
        interfere with things like IPSEC).

The counter to this is that mechanisms which secure
communications between the attached hosts must not rely on
topology point of attachment information to provide security in
any way. This is a layering violation, and as such, is a bad
thing.

-----

     3.9.1 Routing Information Policies

One other issue to be considered here is the complexity of the
decision algorithm. Any decision algorithm used must be something
which can be computed easily by hand, in your head, or in a
processor. We tend towards very complex decision algorithms in
the EGP space, which seems like a bad thing in general.
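
As a strawman of what 'computable in your head' might mean (my example,
not BGP's actual rules): one scalar preference per path, highest wins,
with a single stable tie-breaker, instead of a long ordered list of
comparison steps:

    # Deliberately simple path selection, illustrative only.
    def best_path(paths):
        # each path is (preference, tie_breaker, next_hop)
        return max(paths, key=lambda p: (p[0], -p[1]))

    paths = [(100, 2, "A"), (200, 5, "B"), (200, 1, "C")]
    print(best_path(paths))   # (200, 1, 'C'): highest preference, lowest tie-breaker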

-----

        The Architecture MUST provide multi-homing for all elements of
        the Internet. That is, multihoming of end-sites, "low-level"
        ISPs, and backbones (i.e. lots of redundant interconnections)

I would include hosts here, as people start wanting reliable
communications to their cell phones. It's just a matter of time,
you know.... :-)

-----

        The requirement is that the routing architecture be kept as
        simple as possible. This requires careful evaluation of
        possible features and functions with a merciless weeding out of
        those that "might be nice".

But any protocols also need to be extensible in a simple,
lightweight way, to account for things not thought of here....
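
One common lightweight way to get that, sketched here purely as an
illustration (not something the document calls for): type-length-value
encoding, where implementations that don't understand a type can skip
it and keep going:

    # Minimal TLV encode/decode sketch.
    import struct

    def encode_tlv(t: int, value: bytes) -> bytes:
        return struct.pack("!BB", t, len(value)) + value

    def decode_tlvs(data: bytes):
        out, i = [], 0
        while i < len(data):
            t, length = struct.unpack_from("!BB", data, i)
            out.append((t, data[i + 2:i + 2 + length]))
            i += 2 + length
        return out

    msg = encode_tlv(1, b"known") + encode_tlv(99, b"future extension")
    print(decode_tlvs(msg))    # old code can simply ignore type 99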

-----

     4.5 IP Prefix Aggregation

This section seems to contradict the earlier section which states
that the system must be scalable. :-)

-----

          5. They call for "support for NATs and other mid-boxes". If
             the Routing and Addressing architecture is "right" then
             there is no need for them, at least as far as Routing and
             Addressing are concerned.

             Also, we are confused as to what "support for NATs..."
             actually means.

It is rather ironic that if topology information is actually
separated from host identification information in order to make
NATs unnecessary, they will also become harmless. :-)

-----


