[c-nsp] Issues with MTI on multicast VPN (ME3600) Waris help ;)

daniel.dib at reaper.nu
Fri Dec 21 04:26:52 EST 2012


 Hi,

 I'm trying to set up Multicast VPN (MVPN) on a Cisco ME3600. It's an
 ME-3600X-24FS-M and the software is me360x-universalk9-mz.151-2.EY1a.bin.
 There seems to be an issue with the MTI: I only see outbound packets on
 the tunnel, nothing coming in. This is the configuration used; I had to
 remove some information that can't be displayed publicly.

 ip vrf xyz
  rd 102:1
  mdt default 232.2.2.2
  route-target export 102:1
  route-target import 102:1
 router bgp 65000
 address-family vpnv4
   neighbor 172.24.0.251 activate
   neighbor 172.24.0.251 send-community extended
 address-family ipv4 mdt
   neighbor 172.24.0.251 activate
   neighbor 172.24.0.251 send-community extended
 address-family ipv4 vrf xyz
   redistribute connected
 interface Loopback0
  ip address 172.31.254.15 255.255.255.255
  ip router isis
  ip pim sparse-mode
 ip multicast-routing
 ip multicast-routing vrf xyz
 ip pim ssm default
 ip pim vrf xyz rp-address x.x.x.x override
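
 Since 232.2.2.2 falls inside the default SSM range, my understanding is
 that the default MDT comes up as (S,G) trees rooted at each PE loopback,
 with the remote PE sources learned through the IPv4 MDT address-family.
 To confirm that both PEs advertise and learn each other's MDT source/group
 pairs I was going to check this (standard IOS MVPN command, assuming it
 behaves the same on the ME3600):

 sh ip pim mdt bgp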

 When enabling the MDT I get these messages:

 (config-vrf)#mdt default 232.2.2.2
 IP MTU not supported on interface Tunnel1
 (config-vrf)#
 %PLATFORM_NCEF-3-NULL_HANDLE:  Null child flow data handle for mid chain
 -Traceback= 626558z 2B0EA90z 2B102B8z 2B06FF0z 33067Cz 3677ECz 3692D4z
   3C59E0z 3C6D4Cz 3C6DD8z 3A6AC4z 3A6B64z 165A594z 165A75Cz AABFF4z C8CD78z
 %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up
 %PIM-5-DRCHG: VRF xyz: DR change from neighbor 0.0.0.0 to 172.31.254.15 on interface Tunnel1
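
 The "IP MTU not supported on interface Tunnel1" warning together with the
 NULL_HANDLE traceback makes me wonder whether the platform ever programs
 the tunnel midchain adjacency correctly, which could explain why nothing
 gets decapsulated. To see what actually got programmed for the MTI I was
 going to look at the adjacency and the tunnel IP MTU (generic IOS CLI, I'm
 not sure how much of this the ME3600 exposes):

 sh adjacency Tunnel1 detail
 sh ip interface Tunnel1 | include MTU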

 PIM is enabled on the interfaces:

 sh ip pim vrf xyz int

 Address          Interface                Ver/   Nbr    Query  DR     DR
                                           Mode   Count  Intvl  Prior
 10.209.104.1     Vlan3001                 v2/S   0      30     1      10.209.104.1
 172.31.254.15    Tunnel1                  v2/S   0      30     1      172.31.254.15

 I see the tunnel interface but there are no PIM adjacencies over the 
 tunnel:

 sh ip pim vrf xyz nei
 PIM Neighbor Table
 Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
       P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
 Neighbor          Interface                Uptime/Expires    Ver   DR
 Address                                                            Prio/Mode

 The tunnel only has outbound packets:

 sh int tun1
 Tunnel1 is up, line protocol is up
   Hardware is Tunnel
   Interface is unnumbered. Using address of Loopback0 (172.31.254.15)
   MTU 17916 bytes, BW 100 Kbit/sec, DLY 50000 usec,
      reliability 255/255, txload 1/255, rxload 1/255
   Encapsulation TUNNEL, loopback not set
   Keepalive not set
   Tunnel source 172.31.254.15 (Loopback0)
    Tunnel Subblocks:
       src-track:
          Tunnel1 source tracking subblock associated with Loopback0
           Set of tunnels with source Loopback0, 2 members (includes iterators), on interface <OK>
   Tunnel protocol/transport multi-GRE/IP
     Key disabled, sequencing disabled
     Checksumming of packets disabled
   Tunnel TTL 255, Fast tunneling enabled
   Tunnel transport MTU 1476 bytes
   Tunnel transmit bandwidth 8000 (kbps)
   Tunnel receive bandwidth 8000 (kbps)
   Last input never, output 00:00:06, output hang never
   Last clearing of "show interface" counters never
   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
   Queueing strategy: fifo
   Output queue: 0/0 (size/max)
   5 minute input rate 0 bits/sec, 0 packets/sec
   5 minute output rate 0 bits/sec, 0 packets/sec
      0 packets input, 0 bytes, 0 no buffer
      Received 0 broadcasts (0 IP multicasts)
      0 runts, 0 giants, 0 throttles
      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
      224 packets output, 16444 bytes, 0 underruns
      0 output errors, 0 collisions, 0 interface resets
      0 unknown protocol drops
      0 output buffer failures, 0 output buffers swapped out
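
 Since the MTI shows zero input, I also want to confirm whether the
 encapsulated traffic for the default MDT group from the remote PE reaches
 this box at all in the global table, or whether it arrives but never gets
 decapsulated into the MVRF. These should show the per-(S,G) packet
 counters (the counters may well be incomplete for hardware-switched
 traffic on this platform):

 sh ip mroute 232.2.2.2 count
 sh ip mfib 232.2.2.2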

 If I debug PIM hellos I see nothing coming in, only outbound packets.

 BGP is up:

 For address family: IPv4 MDT
 BGP router identifier 172.31.254.15, local AS number 65000
 BGP table version is 9, main routing table version 9
 3 network entries using 468 bytes of memory
 3 path entries using 180 bytes of memory
 2/2 BGP path/bestpath attribute entries using 256 bytes of memory
 4 BGP rrinfo entries using 96 bytes of memory
 1 BGP extended community entries using 24 bytes of memory
 0 BGP route-map cache entries using 0 bytes of memory
 0 BGP filter-list cache entries using 0 bytes of memory
 BGP using 1024 total bytes of memory
 BGP activity 21/5 prefixes, 21/5 paths, scan interval 60 secs

 Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
 172.24.0.251    4        65000    3269    2833        9    0    0 1d18h           2

 For address family: VPNv4 Unicast
 BGP router identifier 172.31.254.15, local AS number 65000
 BGP table version is 62, main routing table version 62
 13 network entries using 2080 bytes of memory
 13 path entries using 728 bytes of memory
 6/6 BGP path/bestpath attribute entries using 816 bytes of memory
 4 BGP rrinfo entries using 96 bytes of memory
 1 BGP extended community entries using 24 bytes of memory
 0 BGP route-map cache entries using 0 bytes of memory
 0 BGP filter-list cache entries using 0 bytes of memory
 BGP using 3744 total bytes of memory
 BGP activity 21/5 prefixes, 21/5 paths, scan interval 60 secs

 Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
 172.24.0.251    4        65000    3269    2833       62    0    0 1d18h          12
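
 To double-check the contents of the MDT SAFI itself, i.e. that the remote
 PE loopback with the 232.2.2.2 default group is actually being received
 and not just counted as a prefix, I was going to look at:

 sh ip bgp ipv4 mdt all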

 I can see that a group has been joined:

 sh ip igmp vrf xyz groups
 IGMP Connected Group Membership
 Group Address    Interface                Uptime    Expires   Last Reporter   Group Accounted
 z.z.z.z  Vlan3001                 16:58:43  00:02:54  10.209.106.254
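
 For completeness I also intend to verify the customer-facing state inside
 the VRF, i.e. that the RP mapping (the x.x.x.x I had to mask out above)
 and the (*,G)/(S,G) entries look sane there:

 sh ip pim vrf xyz rp mapping
 sh ip mroute vrf xyz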

 The MDT has been built:

 sh ip mroute 232.2.2.2
 IP Multicast Routing Table
 Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
        L - Local, P - Pruned, R - RP-bit set, F - Register flag,
        T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
        X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
        U - URD, I - Received Source Specific Host Report,
        Z - Multicast Tunnel, z - MDT-data group sender,
        Y - Joined MDT-data group, y - Sending to MDT-data group,
        V - RD & Vector, v - Vector
 Outgoing interface flags: H - Hardware switched, A - Assert winner
  Timers: Uptime/Expires
  Interface state: Interface, Next-Hop or VCD, State/Mode

 (172.31.254.15, 232.2.2.2), 1d18h/00:03:24, flags: sT
   Incoming interface: Loopback0, RPF nbr 0.0.0.0
   Outgoing interface list:
     Vlan1164, Forward/Sparse, 01:15:24/00:03:24

 (172.24.0.251, 232.2.2.2), 1d19h/00:03:27, flags: sTIZ
   Incoming interface: Vlan1164, RPF nbr 10.31.254.49
   Outgoing interface list:
     Vlan1168, Forward/Sparse, 1d13h/00:03:27
     MVRF xyz, Forward/Sparse, 01:16:14/00:01:45
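
 To rule out an RPF problem toward the remote PE (172.24.0.251) in the
 global table I also plan to check:

 sh ip rpf 172.24.0.251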

 If I check the hops along the path I see the mroute as well. This is from
 the core box:

 JKTcore01#sh ip mroute 232.2.2.2
 IP Multicast Routing Table
 Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
        L - Local, P - Pruned, R - RP-bit set, F - Register flag,
        T - SPT-bit set, J - Join SPT, M - MSDP created entry,
        X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
        U - URD, I - Received Source Specific Host Report,
        Z - Multicast Tunnel, z - MDT-data group sender,
        Y - Joined MDT-data group, y - Sending to MDT-data group
        V - RD & Vector, v - Vector
 Outgoing interface flags: H - Hardware switched, A - Assert winner
  Timers: Uptime/Expires
  Interface state: Interface, Next-Hop or VCD, State/Mode

 (172.31.254.15, 232.2.2.2), 1d18h/stopped, flags: sTIZ
   Incoming interface: Vlan1162, RPF nbr 10.31.254.42, RPF-MFD
   Outgoing interface list:
     MVRF xyz, Forward/Sparse, 01:16:32/00:01:27, H
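
 I also plan to compare the other end: on JKTcore01 (which, judging by the
 MVRF xyz entry above, is a PE for this VPN as well) I want to see whether
 its MTI sends PIM hellos and whether it sees me as a neighbor:

 JKTcore01#sh ip pim vrf xyz interface
 JKTcore01#sh ip pim vrf xyz neighbor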

 So something seems broken with the MTI. Cisco Feature Navigator says that
 my release supports MVPN. I found some bugs but nothing that seems related
 to this. Do you see any error in the configuration, or is MVPN a bit flaky
 on the ME3600?

 Thanks.

 Daniel Dib
 CCIE #37149

