[c-nsp] MPLS TE auto-tunnel and ISIS metric

Adam Vitkovsky adam.vitkovsky at swan.sk
Mon Oct 14 09:49:10 EDT 2013

Hi Peter,

I think that with "onehop" the TE tunnels are actually established to the
next-hop IP address on every directly connected interface enabled with cmd:
"mpls ip", creating a uniform overlay on the physical links.
Can you please check this assumption with cmd: "sh int tunnel <autotunnel#
on H<->E link >" and look at the destination IP?

Also I don't know how you route your traffic into the tunnels. 

I believe what is happening is that the path towards the PE2 loopback is
seen via the "auto-routed" TE tunnels, which all have the same metric, so
SPF would choose the direct link/TE tunnel between dist-1 and core-2.
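
A quick way to test this theory (my sketch, not verified on your setup; the
tunnel number and the PE2 loopback are placeholders):

! On dist-1 -- where does the one-hop auto-tunnel terminate?
show interface Tunnel65336 | include destination
! List all auto-tunnels with their destinations and state
show mpls traffic-eng tunnels brief
! Confirm which tunnel the PE2 loopback is actually routed over
show ip route <PE2-loopback>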


-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of
Peter Rathlev
Sent: Monday, October 14, 2013 2:07 PM
To: cisco-nsp
Subject: [c-nsp] MPLS TE auto-tunnel and ISIS metric

Why do our MPLS TE auto-tunnels not follow the configured ISIS metric?

We have a lab setup like this:

 +--------+ A     D +--------+
 | core-1 |---------| core-2 |-----> upstream
 +--------\___   ___/--------+
   C |    B   \ /  E     | F
     |         X         |
   G |    H___/ \__I     | J
 +--------/         \--------+
 | dist-1 |         | dist-2 |
 +--------+         +--------+
        \             /        <--- HSRP here
         \           /
          | client |

We're testing failover times using MPLS TE auto-tunnels. All core and
distribution switches are Sup2T. Without any TE everything works as
expected and we see okay failover times, something like 75-100 ms on
link-down (H<->E) and a little more than this when reloading the direct
uplink device ("core-1").

Now we've tried configuring auto-tunnel backup like this:

 mpls traffic-eng tunnels
 mpls traffic-eng auto-tunnel backup
 mpls traffic-eng auto-tunnel primary onehop
 mpls traffic-eng auto-tunnel primary config mpls ip
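
(A speculative aside, not from the original mail: if the goal were for
dynamically computed TE paths to follow the IGP metric instead of the TE
metric, IOS has a path-selection knob. It would not change where a one-hop
tunnel terminates, since its path is the single direct link by definition:)

! Global: let CSPF use the IGP metric instead of the TE metric
mpls traffic-eng path-selection metric igp
!
! Or per tunnel:
interface Tunnel65336
 tunnel mpls traffic-eng path-selection metric igp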

Everything seems to work as expected except for one thing: the actual
forwarding doesn't follow the configured ISIS metric. Every link uses a
metric of 20000 except H<->E ("dist-1"<->"core-2"), which uses 60000.
That way we can force traffic to flow via "core-1" even though the
destination is ultimately through "core-2". (Right now it's to test failover
times when reloading "core-1", but H<->E could be a bad link for some
reason.)
The MPLS TE tunnels seem to be built correctly. The relevant destination PE
is somewhere upstream of "core-2". Routing shows:

core-1#show ip route
Routing entry for
  Known via "isis", distance 115, metric 61000, type level-1
  Redistributing via isis
  Last update from on Tunnel65336, 00:37:55 ago
  Routing Descriptor Blocks:
  *, from, 00:37:55 ago, via Tunnel65336
      Route metric is 61000, traffic share count is 1
core-1#

So next-hop is ("core-2"). I would expect it to be "core-1"
somehow; even though traffic eventually has to cross "core-2" the first
next-hop should be "core-1". The tunnel itself:

core-1#show mpls traffic-eng tunnels Tunnel65336

Name: core-1_t65336                 (Tunnel65336) Destination:
    Admin: up         Oper: up     Path: valid       Signalling: connected
    path option 1, type explicit __dynamic_tunnel65336 (Basis for Setup, path weight 60000)

  Config Parameters:
    Bandwidth: 0        kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
    Metric Type: TE (default)
    AutoRoute announce: enabled  LockDown: disabled Loadshare: 0 [0]
    auto-bw: disabled
  Active Path Option Parameters:
    State: explicit path option 1 is active
    BandwidthOverride: disabled  LockDown: disabled  Verbatim: disabled

  InLabel  :  -
  OutLabel : TenGigabitEthernet5/4, implicit-null
  Next Hop :
  FRR OutLabel : Tunnel65437, explicit-null
  RSVP Signalling Info:
       Src, Dst, Tun_Id 65336, Tun_Instance 812
    RSVP Path Info:
      My Address:   
      Explicit Route: 
      Record   Route:   NONE
      Tspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
    RSVP Resv Info:
      Record   Route:
      Fspec: ave rate=0 kbits, burst=1000 bytes, peak rate=0 kbits
  Shortest Unconstrained Path Info:
    Path Weight: 40000 (TE)
    Explicit Route:
      Time since created: 52 minutes, 42 seconds
      Time since path change: 52 minutes, 42 seconds
      Number of LSP IDs (Tun_Instances) used: 15
    Current LSP: [ID: 812]
      Uptime: 38 minutes, 41 seconds
      Selection: reoptimization
    Prior LSP: [ID: 805]
      ID: path option 1 [805]
      Removal Trigger: re-route path error
      Last Error: RSVP:: Path Error from Notify (code 25, value 3, flags 0)
core-1#

The path mentioned in "Shortest Unconstrained Path Info" is the correct path
according to ISIS metric, i.e. "dist-1" -> "core-1" -> "core-2".
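
(My reading of the weights above: the unconstrained weight matches the
two-hop IGP path, while the signalled one-hop LSP is pinned to the 60000
link:)

dist-1 -> core-1 (C<->G, 20000) -> core-2 (A<->D, 20000)  = 40000  (unconstrained)
dist-1 -> core-2 (H<->E, 60000)                           = 60000  (onehop LSP)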

But MPLS forwarding says something different:

dist-1#show mpls forwarding-table 32 detail 
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop    
Label      Label      or Tunnel Id     Switched      interface              
33         18     0             Tu65336    point2point 
	MAC/Encaps=14/18, MRU=9216, Label Stack{18}, via Te5/4
	D48CB5CBF34000190775DCC08847 00012000
	No output feature configured

The "Te5/4" here is H<->E, the link we do not want to use. CEF says the same:

dist-1#show ip cef internal, epoch 3,
RIB[I], refcount 7, per-destination sharing
  sources: RIB, RR, LTE
  feature space:
   IPRM: 0x00028000
   NetFlow: Origin AS 0, Peer AS 0, Mask Bits 0
   Broker: linked, distributed at 1st priority
   LFD: 1 local label
   local label info: global/33
        contains path extension list
        disposition chain 0x17186848
        label switch chain 0x17186848
   1 RR source [heavily shared]
    non-eos chain 18
  path 1335498C, path list 53F7CBD8, share 1/1, type attached nexthop, for
    MPLS short path extensions: MOI flags = 0x0 label 18
  nexthop Tunnel65336 label 18, adjacency IP midchain out of
Tunnel65336 133ECF20
  output chain: label 18 TAG midchain out of Tunnel65336 133EEAC0 label
  FRR Primary (0x13321620)
  <primary:  TAG adj out of TenGigabitEthernet5/4, addr
  <repair:  TAG midchain out of Tunnel65437 133EF140 label 203 TAG adj out of TenGigabitEthernet5/5, addr 133EC8A0>
dist-1#

It has H<->E as primary path and C<->G as repair.

Typical interface configuration:

interface TenGigabitEthernet5/4
 description core-2 Te5/4 [CDP]
 mtu 9216
 bandwidth 10000000
 ip address
 ip pim sparse-mode
 ip router isis
 mpls traffic-eng tunnels
 mpls ip
 storm-control broadcast level 0.20
 bfd interval 200 min_rx 100 multiplier 4
 isis circuit-type level-1
 isis network point-to-point
 isis metric 60000
 isis hello-multiplier 5
 isis hello-interval minimal
 isis csnp-interval 10
 isis bfd
 hold-queue 256 in
 ip rsvp bandwidth 5000000
end
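
(A verification sketch, not from the original mail: with ISIS the TE link
metric defaults to the IGP metric, which the 60000 path weight seems to
confirm, but it can be checked directly:)

! Inspect the TE topology database for the metric each link advertises
show mpls traffic-eng topology
! Or just the locally advertised links
show mpls traffic-eng link-management advertisements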

The only differences among the interfaces are the IP addresses (of course)
and the metric: 20000 generally, and 60000 in both directions for the link I
want to not use.

So why does MPLS (TE) not select the lowest ISIS cost for this? Am I doing
something completely wrong?


cisco-nsp mailing list  cisco-nsp at puck.nether.net
archive at http://puck.nether.net/pipermail/cisco-nsp/
