[c-nsp] Multicasting with MDT between ASR9K and ME3600

Leigh Harrison lharrison at convergencegroup.co.uk
Mon May 6 02:53:03 EDT 2013


Hello all,

To keep you updated, here is the latest with our multicasting setup:-

After some good help from Cisco, other members on the mailing list and some long nights, I realised there were some configuration problems on my kit.  Firstly, the ASR9K's need to have an MDT source interface specified.  I set this to the loopback that I use as the bgp router-id and it made the 9K's much happier.
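For anyone following along, the relevant IOS-XR knob is "mdt source" under multicast-routing.  A minimal sketch (the loopback number is illustrative - use whichever loopback carries your bgp router-id):

```
multicast-routing
 address-family ipv4
  mdt source Loopback0
 !
 vrf multicast_test
  address-family ipv4
   mdt source Loopback0
  !
 !
!
```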

This can be checked with "show bgp ipv4 mdt summary".  If you're not getting any routes, check the mdt source interface.  I could also see the issue manifesting itself in "show pim vrf xyz neigh", where a neighbour entry was showing as "0.0.0.0" rather than the address it should have been.
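Putting those checks in one place (the vrf name is the test vrf from this thread; the hostname is illustrative, and this is a sketch rather than full output):

```
RP/0/RSP0/CPU0:ASR9K#show bgp ipv4 mdt summary
! no MDT routes from a peer -> check the mdt source interface
RP/0/RSP0/CPU0:ASR9K#show pim vrf multicast_test neighbor
! a neighbour showing as 0.0.0.0 here also points at a missing mdt source
```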

I upgraded the software on my 3600's from 15.1(2)EY1a to 15.3(2)S.  I configured MDT and got lots of errors on screen about memory allocation.  One of the chaps on the mailing list had sent me a "gotcha" about an SDM template that is required for MDT on the new 3600 software, called "application".  There was a big issue here for me.  We run the 3600's on the "ip" SDM template so that we can have up to 24,000 routes in them; we currently have around 8,500 at peak on various 3600's.  Altering the SDM template from "ip" to "application" would give 250 MDT routes, but drop our available IP routes down to 12,000.  That would have taken me from a third of capacity to two thirds of capacity, so as you can imagine, I opted not to do this.
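For reference, the template change I decided against would look roughly like this on the ME3600X (hostname is ours; the new template only takes effect after a reload):

```
CG-Peer1-3600ME-1(config)#sdm prefer application
CG-Peer1-3600ME-1(config)#end
CG-Peer1-3600ME-1#reload
! after the reload, check the active template with "show sdm prefer"
```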

I then tried a few different methods to get the multicast working down to the customer sites.  First up, I tried to run a GRE tunnel from the CPE on site up through the 3600 to the ASR9K and run static mroutes over it.  This didn't work as the ASR won't terminate a GRE tunnel in a vrf.  If someone else has any insight into this, then I'd appreciate the knowledge share, but I couldn't get it working.
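Roughly what I tried on the CPE side is sketched below (addresses and interface names are illustrative, not our production values); the ASR9K end is where it fell over, as it wouldn't terminate the tunnel inside the vrf:

```
! CPE side of the GRE attempt (IOS)
interface Tunnel100
 ip address 172.16.0.2 255.255.255.252
 ip pim sparse-mode
 tunnel source GigabitEthernet0/0
 tunnel destination 10.1.1.1
!
! force RPF for multicast over the tunnel
ip mroute 0.0.0.0 0.0.0.0 Tunnel100
```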

Finally, I removed the layer 3 connectivity from the 3600 and ran a pseudowire up to the ASR, terminating the link in an l2vpn onto a BVI.  Bypassing the 3600's meant that the multicasting worked with no problems and all testing proved positive.  The customer will be testing this week, but with all CPE's joined to 239.0.1.239, I could ping all of them in one go.  It's also worth noting that I had to use static RP's for the devices in the customer vrf, rather than anything dynamic, and I'm not yet sure how the BVI will affect my QoS setup - that's this week's job.
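In rough terms the working design looks like this.  Interface names, pw-ids and addresses below are illustrative rather than our production values - an EoMPLS pseudowire from the 3600, terminated on the 9K into a bridge-domain with a routed BVI sitting in the customer vrf:

```
! ME3600X side - L2 only, cross-connected up to the ASR9K (IOS)
interface GigabitEthernet0/1
 service instance 100 ethernet
  encapsulation dot1q 100
  xconnect 10.1.1.2 100 encapsulation mpls

! ASR9K side - pseudowire terminated into a bridge-domain (IOS-XR)
interface BVI239
 vrf multicast_test
 ipv4 address 200.200.200.1 255.255.255.0
!
l2vpn
 bridge group customers
  bridge-domain mcast_test
   neighbor 10.1.1.1 pw-id 100
   !
   routed interface BVI239
  !
 !
!
! static RP for the vrf, plus PIM on the BVI
router pim
 vrf multicast_test
  address-family ipv4
   rp-address 200.200.200.11
   interface BVI239
    enable
```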

Our setup is much simpler than most: we have a core of ASR9K's connected to each other as a backbone and rings of 3600's for customer access.  The 3600's we run in rings of three with 10Gb between them and 10Gb uplinks to the 9K's.  No legacy protocols in the network - all Ethernet and IP based, running MPLS.

Thanks to all who helped out, and I hope this post helps others out.

Leigh

-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Leigh Harrison
Sent: 29 April 2013 12:26
To: Adam Vitkovsky; 'Waris Sagheer (waris)'
Cc: cisco-nsp at puck.nether.net; 'Vinod Kumar Balasubramanyam (vinbalas)'
Subject: Re: [c-nsp] Multicasting with MDT between ASR9K and ME3600

Thanks for the reply Adam.

Multicast-routing is on and up for that vrf (I missed the snippet out):-

RP/0/RSP0/CPU0:CG-LHC-ASR9010-1#sh run multicast-routing vrf multicast_test
Mon Apr 29 12:24:52.613 BST
multicast-routing
 vrf multicast_test
  address-family ipv4
   interface Loopback239
    enable
   !
   mdt data 239.0.1.0/24 threshold 10
   mdt default ipv4 239.0.0.1
   interface all enable


Leigh

-----Original Message-----
From: Adam Vitkovsky [mailto:adam.vitkovsky at swan.sk]
Sent: 29 April 2013 11:26
To: Leigh Harrison; 'Waris Sagheer (waris)'
Cc: 'Vinod Kumar Balasubramanyam (vinbalas)'; cisco-nsp at puck.nether.net
Subject: RE: [c-nsp] Multicasting with MDT between ASR9K and ME3600

Looks like multicast is not enabled. 
Don't see the following in your config:

multicast-routing
 vrf multicast_test
  address-family ipv4
   interface all enable
  !

adam
-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Leigh Harrison
Sent: Monday, April 29, 2013 10:28 AM
To: Waris Sagheer (waris)
Cc: Vinod Kumar Balasubramanyam (vinbalas); cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] Multicasting with MDT between ASR9K and ME3600

Vinod,

This is the error I get when trying to ping the multicast address under the
vrf:-

RP/0/RSP0/CPU0:CH-LHC-ASR9010-1#ping vrf multicast_test 200.200.200.11
Mon Apr 29 09:26:17.700 BST
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 200.200.200.11, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

RP/0/RSP0/CPU0:CH-LHC-ASR9010-1#ping vrf multicast_test 239.0.1.239
Mon Apr 29 09:26:25.759 BST
Mdef cons get failed for VRF 0x60000003 - No such process
RP/0/RSP0/CPU0:CH-LHC-ASR9010-1#

Leigh

From: Waris Sagheer (waris) [mailto:waris at cisco.com]
Sent: 29 April 2013 07:59
To: Leigh Harrison
Cc: cisco-nsp at puck.nether.net; Vinod Kumar Balasubramanyam (vinbalas)
Subject: Re: [c-nsp] Multicasting with MDT between ASR9K and ME3600

Leigh,
Vinod will review the configuration and will get back to you.

Best Regards,



Waris Sagheer
Technical Marketing Manager
Service Provider Access Group
waris at cisco.com<mailto:waris at cisco.com>
Phone: +1 408 853 6682
Mobile: +1 408 835 1389

CCIE - 19901



Think before you print.

This email may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply email and delete all copies of this message.

For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/index.html



From: Leigh Harrison
<lharrison at convergencegroup.co.uk<mailto:lharrison at convergencegroup.co.uk>>
Date: Sunday, April 28, 2013 11:33 AM
To: Waris Sagheer <waris at cisco.com<mailto:waris at cisco.com>>
Cc: "cisco-nsp at puck.nether.net<mailto:cisco-nsp at puck.nether.net>"
<cisco-nsp at puck.nether.net<mailto:cisco-nsp at puck.nether.net>>, Vinod Kumar Balasubramanyam <vinbalas at cisco.com<mailto:vinbalas at cisco.com>>
Subject: Re: [c-nsp] Multicasting with MDT between ASR9K and ME3600

Hello all,

Thanks for the responses. I didn't put the mdt config snippet in there, but it has been configured and is peered up just right.

From the 3600, I have three tunnels dynamically created:
Tunnel0 is created with reference to the default/global pim peer.
Tunnel1 is created with reference to the mdt when configured under the vrf.
Tunnel2 is created with reference to the rp under the vrf; I have to statically assign this.

The 9K and the 3600 can see each other in the vrf, but I can't ping a test group of 239.0.1.239 that I put onto the loopbacks of both the 9K and the 3600. Oddly, the 9K complains about a process not being available when I try to ping the multicast address.

The default pim is created and all devices respond no problem, all bgp is built and the mdt address family is active all over.

Am I missing something obvious or is there something more sinister afoot?

Leigh

Sent from my iPhone - apologies for any spelling or grammar mistakes

On 26 Apr 2013, at 19:46, "Waris Sagheer (waris)"
<waris at cisco.com<mailto:waris at cisco.com>> wrote:
Hi Leigh,
Can you elaborate the issues?
I am copying Vinod who will be able to help you.

Best Regards,




From: Leigh Harrison
<lharrison at convergencegroup.co.uk<mailto:lharrison at convergencegroup.co.uk>>
Date: Friday, April 26, 2013 9:25 AM
To: "cisco-nsp at puck.nether.net<mailto:cisco-nsp at puck.nether.net>"
<cisco-nsp at puck.nether.net<mailto:cisco-nsp at puck.nether.net>>
Subject: [c-nsp] Multicasting with MDT between ASR9K and ME3600

Hello folks,

We're going through setting up multicasting in our network between some core ASR9K's and some edge ME3600's.  The underlying multicasting is implemented and working well, but we're having some trouble getting the vrf's working correctly for multicast.  Would someone be able to offer some sage advice and some pointers as to what we're doing wrong or not doing?

Leigh


Config for one of the 9K's:-

vrf multicast_test
address-family ipv4 unicast
  import route-target
   64900:123456
  !
  export route-target
   64900:123456
  !
!
router pim
address-family ipv4
  auto-rp mapping-agent Loopback0 scope 20 interval 60
  auto-rp candidate-rp Loopback0 scope 20 group-list 224-4 interval 60
  interface Loopback0
   enable
  !
  interface TenGigE0/3/1/0
   enable
  !
  interface TenGigE0/3/1/3
   enable
  !
  interface TenGigE0/4/1/3
   enable
  !
  interface GigabitEthernet0/3/0/0
   enable
  !
!
vrf multicast_test
  address-family ipv4
   rp-address 200.200.200.11
   interface Loopback239
    enable
   !
  !
!
!
router igmp
interface Loopback0
  join-group 239.0.0.239
!
vrf multicast_test
  interface Loopback239
   join-group 239.0.1.239
  !
!
!
interface Loopback239
 vrf multicast_test
 ipv4 address 200.200.200.11 255.255.255.255
!
router bgp 64900
vrf multicast_test
  rd 64900:12345611
  address-family ipv4 unicast
   redistribute connected
  !
!
!
RP/0/RSP0/CPU0:CH-LHC-ASR9010-1#sh pim vrf multicast_test interface
Fri Apr 26 17:19:30.029 BST

PIM interfaces in VRF multicast_test
Address               Interface                     PIM  Nbr   Hello  DR     DR
                                                         Count Intvl  Prior

200.200.200.11        Loopback239                   on   1     30     1      this system
10.200.2.9            mdtmulticast/test             on   2     30     1      10.200.5.1

RP/0/RSP0/CPU0:CH-LHC-ASR9010-1#sh pim vrf multicast_test neigh
Fri Apr 26 17:19:34.212 BST

PIM neighbors in VRF multicast_test

Neighbor Address             Interface              Uptime    Expires  DR pri   Flags

200.200.200.11*              Loopback239            03:03:26  00:01:37 1       (DR) B P
10.200.2.9*                  mdtmulticast/test      01:42:23  00:01:22 1
10.200.5.1                   mdtmulticast/test      01:42:16  00:01:19 1       (DR) P

RP/0/RSP0/CPU0:CH-LHC-ASR9010-1#ping vrf multicast_test 239.0.1.239
Fri Apr 26 17:22:45.480 BST
Mdef cons get failed for VRF 0x60000003 - No such process


Config for the ME3600 (directly connected):-

CG-Peer1-3600ME-1#sh run vrf multicast_test
Building configuration...

Current configuration : 444 bytes
ip vrf multicast_test
 rd 64900:12345615
 mdt default 239.0.0.1
 mdt data 239.0.1.0 0.0.0.255 threshold 1
 route-target export 64900:123456
 route-target import 64900:123456
!
interface Loopback239
 ip vrf forwarding multicast_test
 ip address 200.200.200.15 255.255.255.255
 ip pim sparse-mode
 ip igmp join-group 239.0.1.239
!
router bgp 64900
!
address-family ipv4 vrf multicast_test
  redistribute connected
exit-address-family
!
end

CG-Peer1-3600ME-1#
CG-Peer1-3600ME-1#sh ip pim vrf multicast_test interface

Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
10.200.5.1       Tunnel1                  v2/S   1      30     1      10.200.5.1
200.200.200.15   Loopback239              v2/S   0      30     1      200.200.200.15
CG-Peer1-3600ME-1#sh ip pim vrf multicast_test neigh
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.200.2.9        Tunnel1                  01:43:11/00:01:28 v2    1 / G

______________________________________________________________________
This email has been scanned by the Symantec Email Security Cloud System, Managed and Supported by TekNet Solutions (http://www.teknet.co.uk) ______________________________________________________________________
_______________________________________________
cisco-nsp mailing list
cisco-nsp at puck.nether.net<mailto:cisco-nsp at puck.nether.net>
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/







