[c-nsp] redistribute routes leaked from another VRF?

Phil Mayers p.mayers at imperial.ac.uk
Tue Jan 4 10:12:05 EST 2011


On 04/01/11 14:56, Jeff Bacon wrote:

>> How do you handle situations where your customers (the
>> Multicast source) are using the same Multicast group
>> address(es)?
>
> Well, first off, I don't have customers. So I can't answer that part. :)
>
> It has more to do with how multicast MPLS is implemented - to
> oversimplify (and someone correct me if I'm missing something), the
> model is:
> - you define a single mcast addr in global which carries all the mcast
> for the VRF (it might be possible to map to multiple addrs to break it
> out some based on what your subscrip pattern might be)

More or less. You can also define an "mdt data" group range. When 
traffic for a particular (S,G) exceeds a configurable threshold, a 
group is picked from this range, that (S,G)'s traffic moves onto the 
new group in the global table, and only PEs with interested receivers 
join it. This means you don't flood high-bandwidth groups to every PE, 
just to the PEs which are interested.
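
To make that concrete, here's a minimal IOS-style sketch of the VRF 
side (the VRF name, RD, group addresses and threshold are made-up 
examples, and the exact syntax varies a bit between IOS versions and 
platforms):

  ip multicast-routing
  ip multicast-routing vrf CUST-A
  !
  ip vrf CUST-A
   rd 65000:100
   ! default MDT: every PE in this VPN joins this group in the global table
   mdt default 239.1.1.1
   ! data MDTs: (S,G)s exceeding 1000 kbit/s move onto a group from this pool
   mdt data 239.1.2.0 0.0.0.15 threshold 1000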

> - various hooks are shoved in to transport and handle PIM and mroutes on
> a per-VRF basis OOB
> - mcast within the VRF is GRE-encap'ed then transported over the single
> mcast addr to the dest

As above, not necessarily to a single address.

It's also worth knowing that there's an evolved version of this called 
mvpn-ng (not on Cisco IOS yet unfortunately) which dispenses with much 
of the PIM-in-PIM layering. Since 6500/SXI doesn't support it, I've 
never bothered to do more than read the RFC, but the terminology is 
much better defined and it addresses some of the concerns like 
inter-AS operation.

>
> I'm sure it works, though I haven't tried it. But it means you have
> mcast (in global) carrying GRE-encap mcast for the VRFs floating around
> your net. I think you have to set up PIM in global to carry the
> transport groups. Merely thinking about trying to manage/debug that
> makes my head hurt.

You do have to set up PIM in the global space. However, you can use 
PIM-SSM, which means you don't need an RP and never have any (*,G) 
joins, because the source IPs of the other PEs are known.
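
A rough sketch of the global-table side on IOS (the SSM range, ACL 
name and interfaces are just examples; the point is that the MDT 
groups fall inside an SSM range so no RP or (*,G) state is needed):

  ! treat the MDT default/data groups as SSM
  ip pim ssm range MDT-GROUPS
  ip access-list standard MDT-GROUPS
   permit 239.1.0.0 0.0.255.255
  !
  ! PIM on the MDT source loopback and the core-facing links
  interface Loopback0
   ip pim sparse-mode
  interface TenGigabitEthernet1/1
   ip pim sparse-mode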

(There have been interoperability problems with JunOS when using MVPN 
with SSM in the past, as JunOS didn't support the "mdt" BGP address 
family that newer IOS versions use to distribute the "mdt data" 
information. My understanding is that this is now supported, but I 
haven't looked into it; we solved the problem by using ASM rather than 
SSM in the global space.)
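
For reference, on IOS the address family in question is enabled 
roughly like this (the AS number and neighbor address are 
placeholders):

  router bgp 65000
   address-family ipv4 mdt
    neighbor 192.0.2.1 activate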

>
> Plus, GRE encap with MPLS means a recirc through the EARL so you pay the
> latency penalty twice plus the extra load. And you need jumbo frames to
> encap at full standard 1500 MTU.

I was under the impression that there's no recirc in this case, but I 
could be wrong and can't find a reference.

As for MTU - if you're running MPLS you presumably have jumbos enabled 
anyway?
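
e.g. something along these lines on the core-facing links (9216 is 
just a common figure on the 6500; use whatever your hardware 
supports):

  interface TenGigabitEthernet1/1
   mtu 9216
   mpls ip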

>
> Hence, don't do it if you don't have a really good reason to. I don't,
> so I don't.

Well, the "good reason" for doing this is if you want multicast in MPLS 
L3VPNs, surely ;o)

FWIW we run MVPN. It works fine, and after a bit of playing around to 
understand it, I didn't find it massively confusing.

