[c-nsp] Conditional advertise-map

Shane Amante shane at castlepoint.net
Wed Sep 15 22:21:00 EDT 2010


Heath, All,

On Sep 15, 2010, at 12:13 MDT, Heath Jones wrote:
> I completely agree with the problem of TCAM overflow if an aggregated prefix
> disappears! I did overlook that, though - thanks for pointing it out!
> 
> I think it is still certainly an advantage to do aggregation pre-TCAM. We
> would only be better off than where we are now. Even if the FIB gets hammered
> because of some change on the wider internet (i.e. if the TCAM overflows), it
> could revert to software forwarding for some prefixes / a catch-all type
> arrangement. Obviously the selection of prefixes to catch in software is
> important...
> 
> I wonder if someone has modelled this - see just how much aggregation could
> realistically be done at each AS. I'd imagine it's similar to the info you
> got about roughly half of the prefixes out there being deaggregates?

In the past couple of years, there have been a couple of IETF drafts and discussions, mostly in GROW, regarding FIB aggregation methods and associated modeling by research folks.  At the last IETF, the following was presented at GROW:
http://www.ietf.org/proceedings/78/slides/grow-2/grow-2.htm
http://tools.ietf.org/html/draft-uzmi-smalta-00

As the slides above note, this is building upon an earlier draft presented at IETF 76:
https://www.ietf.org/proceedings/76/slides/grow-2.pdf  (I'm not sure why this PDF is missing the content in most slides.  It's likely a "bug" when the slides were auto-converted from PPT to PDF.  Unfortunately, I don't know where to get a copy of the slides without the missing content).
http://tools.ietf.org/html/draft-zhang-fibaggregation-02

The nice thing is that the research folks have put thought into the various "levels" of FIB compression that could potentially be achieved.  (I, for one, am very grateful for their efforts).  The higher levels buy more optimal compression at the expense of additional CPU consumption and, more concerning at the highest levels (levels 3 and 4 in draft-zhang), the introduction of artificial aggregates (a.k.a. "whiteholing") that could, under certain conditions, introduce routing/forwarding loops, attract traffic that would otherwise get dropped, etc.

Most importantly, the research folks have spent some time doing theoretical modeling to characterize the amount of compression that could be achieved at each level (see the drafts/slides for details).  In addition, particularly with the SMALTA work, they've looked at how to optimize their compression algorithms to efficiently maintain a fully optimized, compressed FIB while dealing with incoming routing updates (prefixes, aggregates, etc.) appearing and disappearing.  Better still, the SMALTA folks do not introduce additional aggregates and, according to their model, they were able to stay within 1% to 6% of a "one-shot", fully optimized compressed FIB.  IMHO, the SMALTA draft appears to be one of the more promising avenues.

The one challenge is, of course, that these are just theoretical models (likely good ones), but theoretical models with associated assumptions nonetheless -- IOW, it would be *very* interesting to take this work a step further, actually hook it up to a live, production network and document those results, but I'm unaware of any efforts to do that.

In summary, don't give up hope quite yet that this is a "completely intractable problem".  And, if anything, press your vendors to read those drafts, understand the simulation models & results and, if possible, explain why this doesn't work or, failing that, why they haven't implemented it yet.  :-)

-shane

