[c-nsp] EIGRP route knob tuning
Matthew Huff
mhuff at ox.com
Fri Dec 11 13:21:17 EST 2009
It makes perfect sense, but it was quite a shock when it dawned on me what was happening. I made roughly the same changes you described and everything works fine now. However, it won't work at all once 40G/100G interfaces begin shipping, or even if you wanted to set the bandwidth correctly on aggregated 10G trunks. I assume Cisco will have to come up with a new, backward-compatible EIGRP version that encapsulates the old metrics within a new, larger field. Has anyone heard anything about this from Cisco yet?
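The arithmetic behind the concern can be sketched in a few lines of Python. This is a simplified model of the classic EIGRP composite metric with the default K values (K1=K3=1, the rest 0); the 10,000,000 kbps cap below is an assumption modeling the maximum value the IOS interface "bandwidth" command accepts, which is why everything at or above 10 Gbps collapses to the same metric:

```python
# Classic EIGRP composite metric with default K values (K1 = K3 = 1):
#   metric = 256 * (10^7 / min-bandwidth-kbps + cumulative-delay)
# where delay is summed in tens of microseconds.
BW_CAP_KBPS = 10_000_000  # assumed ceiling of the IOS "bandwidth" command (10 Gbps)

def classic_metric(min_bw_kbps: int, total_delay_tens_usec: int) -> int:
    """Composite metric along a path; bandwidth term uses the path minimum."""
    scaled_bw = 10**7 // min(min_bw_kbps, BW_CAP_KBPS)
    return 256 * (scaled_bw + total_delay_tens_usec)

# With equal delay, 10G, 40G, and 100G links all produce the same metric,
# because the scaled bandwidth term bottoms out at 1:
for bw in (10_000_000, 40_000_000, 100_000_000):
    print(bw, classic_metric(bw, 1))  # each prints ... 512
```

The point of the sketch: once the bandwidth term hits its floor, only delay can differentiate faster links, which is exactly the problem discussed in this thread.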
----
Matthew Huff | One Manhattanville Rd
OTA Management LLC | Purchase, NY 10577
http://www.ox.com | Phone: 914-460-4039
aim: matthewbhuff | Fax: 914-460-4139
-----Original Message-----
From: Murphy, William [mailto:William.Murphy at uth.tmc.edu]
Sent: Friday, December 11, 2009 12:42 PM
To: Matthew Huff; cisco-nsp at puck.nether.net
Subject: RE: EIGRP route knob tuning
We encountered the same thing as we deployed 10G links. It was definitely an
EIGRP learning experience. We found docs out there that describe changing the K
values to ignore bandwidth and then manipulating delay in order to achieve
optimal routing. When you do this, the protocol is supposed to behave more like
OSPF, in the sense that the only value factoring into the equation is a
cumulative cost of sorts. That sounded risky to me, so we opted for your
solution: we set the edge SVIs to the maximum bandwidth so they would never be
considered in the minimum-bandwidth calculation, and then we make sure the
SVIs on our L2 trunks are set to the same bandwidth as the underlying link, 1G or
10G...
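For illustration, here is a minimal IOS-style sketch of the two approaches described above. The VLAN interface names and the EIGRP AS number are hypothetical, not taken from this thread:

```
! Option A (the approach chosen above): set SVI bandwidth to the real link speed
interface Vlan100
 description edge SVI - set to max so it never wins the minimum-bandwidth test
 bandwidth 10000000
!
interface Vlan200
 description SVI over a 1G L2 trunk - match the underlying link
 bandwidth 1000000
!
! Option B (the "more OSPF-like" approach rejected above): delay-only metric.
! Syntax is "metric weights tos k1 k2 k3 k4 k5"; K1=0 drops bandwidth from
! the metric, K3=1 keeps cumulative delay. Must match on all EIGRP neighbors.
router eigrp 100
 metric weights 0 0 0 1 0 0
```

Note that mismatched K values break EIGRP adjacencies, which is part of why changing them network-wide is the scarier of the two options.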
-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net
[mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Matthew Huff
Sent: Friday, December 11, 2009 10:36 AM
To: cisco-nsp at puck.nether.net
Subject: [c-nsp] EIGRP route knob tuning
Does anyone know Cisco's plans for the metrics in EIGRP? 10GE has the
bandwidth set at the maximum and the delay set to the minimum, so how are they
going to handle 40G and 100G? Are there any whitepapers posted?
I ran into this a while ago while looking at our core routing. An SVI on a 6500
defaults to a bandwidth equal to a gig-e interface, so we had some inefficient
routing given that we had 10GE Layer 3 connections to our distribution. Some
routes were heading to the distribution and back rather than across the
Layer 2 trunk, because the Layer 2 trunk's SVI had a lower bandwidth. Adjusting
the SVI bandwidth to the maximum (the same as a 10GE interface) fixed the problem.
What happens when 100G uplinks appear?
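The suboptimal path choice falls straight out of the metric arithmetic. A small sketch of the scenario above, using the classic EIGRP composite metric with default K values; the delay values are illustrative assumptions, not figures from this thread:

```python
# Classic EIGRP metric, default K values: 256 * (10^7 / min_bw_kbps + delay),
# with delay summed in tens of microseconds along the path.
def classic_metric(min_bw_kbps: int, total_delay_tens_usec: int) -> int:
    return 256 * (10**7 // min_bw_kbps + total_delay_tens_usec)

# Direct path over the L2 trunk: SVI left at its 1 Gbps default, one hop.
direct = classic_metric(1_000_000, 1)             # 256 * (10 + 1) = 2816

# Detour via the distribution layer: two 10GE routed hops, twice the delay.
via_distribution = classic_metric(10_000_000, 2)  # 256 * (1 + 2) = 768

# The detour wins despite the extra hop, matching the behavior seen above:
assert via_distribution < direct

# After raising the trunk SVI bandwidth to 10 Gbps, the direct path wins:
fixed_direct = classic_metric(10_000_000, 1)      # 256 * (1 + 1) = 512
assert fixed_direct < via_distribution
```

Because the bandwidth term uses the path *minimum*, a single SVI left at its 1 Gbps default dominates the whole path's metric no matter how fast the physical links are.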
_______________________________________________
cisco-nsp mailing list cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/