[f-nsp] Fwd: Multicasting config example?

Andreas Larsen andreas at larsen.pl
Tue Feb 24 13:34:55 EST 2009


For multicast to work 100% you do need your unicast routing protocol to
work first. So make sure all subnets etc. are pingable after you have
enabled them for pim.
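
A quick sanity check would be something like this (addresses here are
just examples, not from anyone's real config):

ping 172.20.20.6
show ip route
show ip pim neighbor

If the unicast routes or the pim neighbors aren't there, multicast has
no chance.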

But yes, Foundry code is buggy to some extent with multicast.  I would
make sure you run at least 3.8 or newer.

Regards Andreas

On Tue, Feb 24, 2009 at 6:45 PM, debbie fligor <fligor at illinois.edu> wrote:

>
>
> On Feb 24, 2009, at 11:10, Kenneth Hellmann wrote:
>
>  Good feedback Debbie, but I have to remind everyone that this isn't magic.
>> Everything happens for a reason, and throwing out every config line
>> in the book and hoping that something sticks isn't the correct way to
>> proceed.
>>
>
> Absolutely right Ken.  I wrote that longer example before the config had
> been posted, so I had no idea what he'd set or not set yet.
>
>
>
>>
>> Bjorn,
>>
>> The reason your original config worked is that both ports were on the
>> same vlan. Even though they had separate ip addresses, they were both
>> on vlan 1. Because they were on the same vlan and you had ip multicast
>> active, igmp would allow the join.
>>
>> When you changed the vlan to 13, they were no longer on the same vlan,
>> and since pim wasn't configured on the ve, there was no multicast
>> routing protocol to route from one subnet to the other.
>>
>
> Sorry about that Bjorn, we usually use "route-only" in devices where we put
> IP addresses on physical interfaces, and so that was how I was thinking
> about it.  We very seldom put IP addresses directly on ports; we do almost
> everything with VEs.  I think Ken has pegged your problem.
>
>
>>
>> If you want to send a multicast from one subnet to another, simply enable
>> pim on both subnets.
>>
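>> Something like this would do it (ve numbers are just an example based
>> on the vlans mentioned, adjust to your setup):
>>
>> router pim
>> !
>> interface ve 1
>>  ip pim-sparse
>> !
>> interface ve 13
>>  ip pim-sparse
>>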
>
> That will work while everything is in box A, but if he wants it to work
> between interfaces in box A and box B as he asked originally, I think he'll
> still need to get OSPF on some more interfaces or things won't go.
>
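> For the routing links between the boxes, I'd expect something like this
> (numbers made up, same pattern as the ve 3502 config further down):
>
> interface ve 3500
>  ip ospf area 0
>  ip address 172.20.20.9/30
>  ip pim-sparse
>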
>
>
>>
>> It is not magic. And Foundry doesn't do it any differently from anyone
>> else.
>>
>
> We're constantly being told by Foundry that we can do certain things only
> on routed ports, not on VEs, so there are some things that can be different.
>  I think you're right though, this isn't one of them.
>
>
>
>>
>> Ken
>>
>> -----Original Message-----
>> From: foundry-nsp-bounces at puck.nether.net
>> [mailto:foundry-nsp-bounces at puck.nether.net] On Behalf Of debbie fligor
>> Sent: Wednesday, February 25, 2009 1:03 AM
>> To: foundry-nsp at puck.nether.net
>> Subject: [f-nsp] Fwd: Multicasting config example?
>>
>> I hadn't meant to take this off list. In case someone else is
>> following along and having fun with multicast on the MLXs, here's what
>> I sent yesterday.
>>
>> Begin forwarded message:
>>
>>  From: debbie fligor <fligor at illinois.edu>
>>> Date: February 23, 2009 15:09:06 CST
>>> To: Bjørn Skovlund Rydén <BSR at fullrate.dk>
>>> Cc: debbie fligor <fligor at illinois.edu>
>>> Subject: Re: [f-nsp] Multicasting config example?
>>>
>>> For multicast between subnets to work you need more info than you'll
>>> find in the MLX config guides (IMO).
>>>
>>> Some of this you might have done, but here's the steps:
>>>
>>> Globally enable ip multicast-routing.  Reboot even if it doesn't
>>> tell you to.  Then turn on pim routing.
>>>
>>> Tell pim routing to use your favorite mix of routes in what order.
>>> We do multicast specific, unicast specific, multicast default, and
>>> unicast default, in that order.  Our default leaves campus though,
>>> and multicast leaves on a different path than unicast does.  If your
>>> multicast and unicast networks are identical, it's not too important
>>> which order is set up.
>>>
>>> Pick an IP address to be your RP, preferably one on a loopback so
>>> it's not tied to a specific subnet or device.  Set that as the RP
>>> candidate on the MLX it's configured on, and tell all the MLXs that
>>> that is the RP address.  You can use BSR, but current best practice
>>> is to hard-code the RP and not use BSR.
>>>
>>> Then make sure you've got ip pim-sparse set on the ve for the vlan,
>>> as well as on the routed interface, the loopback that is the RP, and
>>> all the routing links between the two boxes.
>>>
>>>
>>> So for the device that's the RP, here's an example bit of config:
>>>
>>>
>>> ip multicast-routing
>>>
>>> router pim
>>> route-precedence mc-non-default uc-non-default mc-default uc-default
>>>
>>> interface loopback 2
>>> port-name RP for on-campus
>>> ip ospf area 0
>>> ip address 130.126.0.145/32
>>> ip pim-sparse
>>>
>>> router pim
>>> rp-address 130.126.0.145
>>> rp-candidate loopback 2
>>> !
>>> interface ve 3502
>>> port-name uiuc-core1-dist11-lnk
>>> ip ospf area 0
>>> ip address 172.20.20.5/30
>>> ip pim-sparse
>>> ip mtu 9000
>>> !
>>>
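>>> If you want to double-check that the RP took, something along these
>>> lines should do it (exact output varies by release):
>>>
>>> show ip pim rp-set
>>> show ip pim neighbor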
>>>
>>> And then here are the config bits on the distribution devices that
>>> matter for getting things between the MLXs and to the user networks.
>>>
>>>
>>> router pim
>>> route-precedence mc-non-default uc-non-default mc-default uc-default
>>>
>>> router pim
>>> rp-address 130.126.0.145
>>>
>>>
>>> interface ve 3502
>>> port-name uiuc-core1-dist11-lnk
>>> ip ospf area 0
>>> ip address 172.20.20.6/30
>>> no ip redirect
>>> ip pim-sparse
>>>
>>> interface ve 499
>>> port-name uiuc-test2wireless-net
>>> ip ospf area 1
>>> ip ospf passive
>>> ip address 192.17.201.1/24
>>> no ip redirect
>>> ip helper-address 128.174.45.8
>>> ip pim-sparse
>>>
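>>> Once a receiver on the user vlan has joined, these should show the
>>> group and the forwarding state (command names from memory, check
>>> your release notes):
>>>
>>> show ip igmp group
>>> show ip pim mcache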
>>>
>>>
>>> If this wasn't enough detail, let me know what questions you still
>>> have.  Some weeks we feel like we are Foundry's only multicast QA
>>> department.
>>>
>>> I almost forgot: you want to be running something really current,
>>> like 3.9.00a or later, for them to have fixed most of the multicast
>>> bugs.
>>>
>>> -debbie
>>>
>>>
>>> On Feb 23, 2009, at 14:45, Bjørn Skovlund Rydén wrote:
>>>
>>>  Hi everyone,
>>>>
>>>> Sorry to bother you with this, but my technical friend at Foundry
>>>> seems to have gone on vacation for a few weeks, and I'd like to get
>>>> on with this.
>>>>
>>>> We're running a mesh of 6 MLXs with distribution rings based on
>>>> FES/FESX. I'm now starting to look at multicasting and, having read
>>>> back and forth in the config guide, I'm still a bit clueless as to
>>>> how to get the simplest thing to work.
>>>>
>>>> I would like to receive multicast traffic on a VLAN on MLX A and
>>>> have a recipient on MLX B on a routed interface. Very basic stuff,
>>>> I'd say, but after a day in the test-lab, I'm still not successful :(
>>>> So can someone give me the most basic configuration example on how
>>>> to do this?
>>>>
>>>> Kind regards,
>>>> Bjørn
>>>
>
> -----
> -debbie
> Debbie Fligor, n9dn       Network Engineer, CITES, Univ. of Il
> email: fligor at illinois.edu          <http://www.uiuc.edu/ph/www/fligor>
>                   "My turn."  -River Tam
>
> _______________________________________________
> foundry-nsp mailing list
> foundry-nsp at puck.nether.net
> http://puck.nether.net/mailman/listinfo/foundry-nsp
>