multicast peering config for a Juniper M10

From: Antonio Querubin (tony@lava.net)
Date: Fri Jun 15 2001 - 06:40:13 EDT


Greetings,

I was hoping someone more experienced with Juniper routers could assist me
in configuring an M10, with which I'm quite the novice. I'm trying to
move 2 multicast tunnels and 1 direct peering connection from a cisco to
a Juniper M10 router, but for some reason I can't get it to work.

The interface portion of the config has:

fe-0/0/0 {
    description "Fast Ethernet on eth-srv VLAN";
    unit 0 {
        family inet {
            no-redirects;
            primary;
            address 64.65.64.30/25;
        }
    }
}
t3-0/1/0 {
    description "Sprint, 82.HFGS.710001.GTEW";
    encapsulation cisco-hdlc;
    t3-options {
        compatibility-mode larscom subrate 2;
        no-payload-scrambler;
        cbit-parity;
    }
    unit 0 {
        family inet {
            address 160.81.200.30/30;
        }
    }
}
gr-0/3/0 {
    unit 0 {
        description "MBone tunnel to WorldCom/UUNET";
        tunnel {
            source 64.65.64.149;
            destination 208.205.11.91;
        }
        family inet {
            address 157.130.204.122/30;
        }
    }
    unit 1 {
        description "Multicast tunnel to Yahoo Broadcast (broadcast.com)";
        tunnel {
            source 64.65.64.149;
            destination 206.190.40.61;
        }
        family inet {
            address 206.190.40.186/30;
        }
    }
}
lo0 {
    unit 0 {
        family inet {
            address 64.65.64.148/32;
            address 127.0.0.1/32;
            address 64.65.64.149/32;
        }
    }
}
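
(For what it's worth, both tunnels pass unicast fine: the MBGP session to
206.190.40.185 and the MSDP session to 157.130.204.121 shown below both
run over them. The gr- device itself comes from the Tunnel Services PIC
in slot 3.) The basic PIM-level sanity checks on the Juniper side would
be along these lines:

show interfaces gr-0/3/0 terse
show pim interfaces
show pim neighbors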

The pertinent part of the protocol config looks like:

bgp {
    group Sprint {
        family inet {
            any;
        }
        peer-as 1239;
        neighbor 160.81.200.29;
    }
    group tunnels {
        family inet {
            multicast;
        }
        neighbor 157.130.204.121 {
            peer-as 704;
        }
        neighbor 206.190.40.185 {
            peer-as 5779;
        }
    }
}
msdp {
    peer 144.228.240.253 {
        local-address 64.65.64.149;
    }
    peer 206.190.40.61 {
        local-address 64.65.64.149;
    }
    peer 157.130.204.121 {
        local-address 157.130.204.122;
    }
}
pim {
    dense-groups {
        224.0.1.39/32;
        224.0.1.40/32;
    }
    rp {
        local {
            address 64.65.64.148;
            priority 250;
        }
        auto-rp mapping;
    }
    interface all {
        mode sparse-dense;
    }
}
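
One thing I'm not sure about: I have no rib-groups configured, so as far
as I can tell PIM is doing its RPF lookups against inet.0 rather than
against the MBGP routes in inet.2. If that's the problem, my reading of
the Juniper documentation is that the fix would look roughly like the
sketch below (untested, and the group names are just ones I made up):

routing-options {
    rib-groups {
        mcast-rpf {                 /* import-only group for PIM RPF */
            import-rib inet.2;
        }
        connected-to-both {         /* put direct routes in both tables */
            import-rib [ inet.0 inet.2 ];
        }
    }
    interface-routes {
        rib-group inet connected-to-both;
    }
}
protocols {
    pim {
        rib-group inet mcast-rpf;
    }
}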

The MBGP peering comes up:

Groups: 4 Peers: 5 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
inet.0            278399     102804         50        150        778          0
inet.2              6373       3751       1083         10       1774          0
Peer               AS      InPkt    OutPkt   OutQ  Flaps Last Up/Dwn State|#Active/Received/Damped...
64.65.64.2       6435      30416      6010      0      0      22:28 83542/83565/0 21/21/0
64.65.64.66      6435      36106      3454      0      0      22:23 18340/94331/0 0/0/0
157.130.204.121   704       1014        48      0      3      22:08 0/0/0 179/2788/1083
160.81.200.29    1239      38438        56      0      0      18:07 922/100503/50 3551/3561/0
206.190.40.185   5779         85        84      0      0      40:25 0/0/0 0/3/0
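
So the multicast NLRI is clearly landing in inet.2 (6373 paths, 3751
active above). The individual RPF routes can be inspected with, e.g.:

show route table inet.2 terse
show route table inet.2 144.228.240.253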

The MSDP sessions come up:

Peer address     Local address    State        Last up/down  Peer-Group
144.228.240.253  64.65.64.149     Established  00:19:47
157.130.204.121  157.130.204.122  Established  00:09:10
206.190.40.61    64.65.64.149     Established  00:40:48
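
With all three sessions Established, SA messages should be arriving; the
SA cache can be checked with:

show msdp source-active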

The ciscos on the same ethernet see the Juniper as an RP:

PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 64.65.64.148 (iiwi.lava.net), v2
    Info source: 64.65.64.148 (iiwi.lava.net), via Auto-RP
         Uptime: 00:09:40, expires: 00:02:14
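
The matching check on the Juniper itself would be:

show pim rps

which ought to list 64.65.64.148 as the local RP.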

From a cisco I can run an mtrace between the Juniper and one of our
upstreams:

mamo>mtrace iiwi 144.228.240.253
Type escape sequence to abort.
Mtrace from 64.65.64.148 to 144.228.240.253 via RPF
From source (iiwi.lava.net) to destination (rp-stk.sprintlink.net)
Querying full reverse path...
 0 rp-stk.sprintlink.net (144.228.240.253)
-1 sl-bb20-stk-14-0.sprintlink.net (144.232.4.234) PIM/MBGP [64.65.64.0/22]
-2 sl-bb21-stk-14-0.sprintlink.net (144.232.4.233) PIM/MBGP [64.65.64.0/22]
-3 sl-bb22-stk-15-0.sprintlink.net (144.232.4.242) PIM/MBGP [64.65.64.0/22]
-4 sl-bb21-prl-11-3.sprintlink.net (144.232.8.218) PIM/MBGP [64.65.64.0/22]
-5 sl-gw1-prl-12-0-0.sprintlink.net (144.232.30.3) PIM/MBGP [64.65.64.0/22]
-6 sl-lavanet-1-0.sprintlink.net (160.81.200.30) PIM [64.65.64.148/32]
-7 iiwi.lava.net (64.65.64.148)
mamo>mtrace 144.228.240.253 iiwi
Type escape sequence to abort.
Mtrace from 144.228.240.253 to 64.65.64.148 via RPF
From source (rp-stk.sprintlink.net) to destination (iiwi.lava.net)
Querying full reverse path...
 0 iiwi.lava.net (64.65.64.148)
-1 iiwi.lava.net (64.65.64.148) PIM [144.228.0.0/16]
-2 sl-gw1-prl-1-1-1.sprintlink.net (160.81.200.29) PIM [144.228.240.253/32]
-3 sl-bb21-prl-2-0.sprintlink.net (144.232.30.2) PIM/MBGP [144.228.240.253/32]
-4 sl-bb22-stk-2-1.sprintlink.net (144.232.8.217) PIM/MBGP [144.228.240.253/32]
-5 sl-bb21-stk-15-0.sprintlink.net (144.232.4.241) PIM/MBGP [144.228.240.253/32]
-6 sl-bb20-stk-14-0.sprintlink.net (144.232.4.234) PIM [144.228.240.253/32]
-7 rp-stk.sprintlink.net (144.228.240.253)

But no multicast traffic actually flows into our network (a sketch of
what I mean by that is below). If anyone has any ideas I'd appreciate
hearing from you. BTW, the tunnels and the Sprint connection work when
they're terminated on a cisco, so the upstreams are configured correctly.
I suspect I'm overlooking something very obvious...
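
To be concrete about "no multicast traffic": the usual test is to join a
busy well-known group from one of the ciscos on the ethernet (the
interface name below is just a placeholder; 224.2.127.254 is the SAP/sdr
announcement group):

interface Ethernet0
 ip igmp join-group 224.2.127.254

and then watch for (S,G) state on the Juniper with:

show pim join
show multicast route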


