[c-nsp] Filtering Layer 2 Multicasts on 6509
Klementina Miloslava
kmiloslava at robotvirgin.com
Thu Jan 20 08:39:33 EST 2011
But since it's L2 multicast, Devin would have to have multiple hosts with TCP
stacks configured to respond to the same MAC address - the multicast address
he specified.
It's my understanding that when you do this, the host typically has a real
IP/MAC for management and normal operations. Then, on top of that, a
multicast-aware application handles the multicast MAC address.
In this situation, the switch will never learn the multicast MAC address in
the CAM table, for two reasons: 1) the host never originates traffic from
that MAC address, and 2) a switch will only learn a MAC address on a single
interface.
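
This can be checked directly on the switch. A quick sketch (the address
016b.6800.0001 is just an illustrative destination MAC in the range Devin
mentioned - substitute a real one from a capture; on older IOS the command
is spelled "show mac-address-table address"):

Switch#show mac address-table address 016b.6800.0001

If no entry comes back, the switch never learned the address and is
flooding those frames.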
So, as Andras pointed out, the switch won't know where to forward the frames,
so it will flood them to all interfaces in the VLAN. Since the traffic is
flooded, the hosts with TCP stacks configured to use that MAC address will
gladly respond, but the switch will still never learn it in the CAM table.
So in addition to the CPU problem, you are likely seeing that traffic on
all ports. A simple tcpdump will verify this. In the end you have a pruning
problem that needs to be solved. But since this is L2, there is no RP
involved to handle the pruning. However, it's simply done as Andras pointed
out:
Switch#conf t
Switch(config)#mac-address-table static <dest_mcast_address> vlan <vlan_id> interface GigabitEthernet x/y GigabitEthernet x/z
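
Once the static entry is in place, it should be verifiable with something
like this (command sketch; on older IOS it may be spelled
"show mac-address-table static"):

Switch#show mac address-table static vlan <vlan_id>

The multicast address should show as a static entry pointing at every
interface you listed, and the flooding to the remaining ports in the VLAN
should stop.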
Is this analysis correct? Someone correct me if necessary.
Klementina
>
>
> On Thu, 20 Jan 2011, Tóth András wrote:
>
>> Hi,
>>
>> Most multicast traffic is forwarded in hardware on the 6500 platform.
>> However, in this case, the layer-2 mcast address is not a standard
>> IEEE mcast address, which means the packets will be treated like
>> broadcast.
>>
>> Standard layer-2 mcast starts like this:
>> 01:00:5e:XX:XX:XX
>>
>> All broadcast packets are punted to the CPU for processing, since the CPU
>> is a member of that VLAN domain and thus must review all ARP packets
>> and other kinds of broadcast packets in that VLAN domain.
>>
>> Resolution options:
>> There are a couple options here to prevent those packets from being
>> punted to cpu.
>>
>> 1) I believe you can match those particular mcast packets with a MAC
>> access-list and then drop them before they hit the CPU, using control
>> plane policing. This will require you to enable QoS globally, so if
>> you have not enabled QoS globally already, this is a bad option. Also,
>> I'm not 100% sure off the top of my head whether you can use a MAC ACL
>> for control plane policing.
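>>
>> Roughly, what I have in mind for option 1 is something like this
>> (a sketch only -- the ACL/class/policy names are illustrative, and as
>> noted above I'm not certain a MAC ACL is accepted under CoPP at all):
>>
>> Switch(config)# mac access-list extended COBRANET-MAC
>> Switch(config-ext-macl)# permit any any 0x8819 0x0
>> Switch(config)# class-map match-any COBRANET-CLASS
>> Switch(config-cmap)# match access-group name COBRANET-MAC
>> Switch(config)# policy-map COPP-POLICY
>> Switch(config-pmap)# class COBRANET-CLASS
>> Switch(config-pmap-c)# police 32000 conform-action drop exceed-action drop
>> Switch(config)# control-plane
>> Switch(config-cp)# service-policy input COPP-POLICY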
>>
>> 2) Identify which interfaces require this mcast flow on the 6500. Then
>> create a static mac entry in your cam table such that the 6500 will
>> forward this mcast stream out all of those ports which require this
>> stream:
>>
>> Switch#conf t
>> Switch(config)#mac-address-table static <dest_mcast_address> vlan
>> <vlan_id> interface GigabitEthernet x/y GigabitEthernet x/z
>>
>> You can assign this mac address to multiple interfaces and even in
>> multiple vlans.
>>
>> Best regards,
>> Andras
>>
>>
>> On Wed, Jan 19, 2011 at 11:08 PM, Devin Kinch <devinkinch at gmail.com> wrote:
>>>> If it's true L2 multicast, it shouldn't be hitting your RP from my
>>>> understanding of L2 Multicast.
>>>
>>> I didn't think so either. But when I turn down the SVI interfaces in
>>> those VLANs, the CPU usage goes down to around 8-9%. Here's the output
>>> showing the CPU usage:
>>>
>>> 6686588558665866686568555866585558955866685568667865986558656855686568
>>> 8371990985129424261948999640989895598405059945041409672886290689060905
>>> 100 * *
>>> 90 * * * * * * * ** * * ** * * * *
>>> 80 * ** * * * * * * ** * * * * ** * * * *
>>> 70 * * ** * * * * * * ** * * * * ** ** * * * *
>>> 60 **********************************************************************
>>> 50 ######################################################################
>>> 40 ######################################################################
>>> 30 ######################################################################
>>> 20 ######################################################################
>>> 10 ######################################################################
>>> 0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
>>> 0 5 0 5 0 5 0 5 0 5 0 5 0
>>> CPU% per hour (last 72 hours)
>>> * = maximum CPU% # = average CPU%
>>>
>>> s01#sh proc cpu sort
>>> CPU utilization for five seconds: 59%/48%; one minute: 61%; five minutes: 61%
>>> PID TID 5secUtil 1minUtil 5minUtil
>>> 1 1 41.0% 39.3% 39.3%
>>> 16406 6 28.8% 28.4% 28.4%
>>> 16429 4 7.5% 4.8% 2.9%
>>> 16406 22 7.0% 3.4% 3.3%
>>> 16406 21 6.5% 3.7% 1.9%
>>> 16406 17 4.5% 5.9% 3.9%
>>>
>>> s01#show proc cpu detail 16406
>>> CPU utilization for five seconds: 61%/46%; one minute: 61%; five minutes: 61%
>>> PID/TID    5Sec   1Min   5Min   Process    Prio  STATE       CPU
>>> 16406     51.9%  51.6%  51.4%   ios-base                     28d18h
>>>     1      0.0%   0.0%   0.0%              20    Condvar     0.000
>>>     2      1.7%   1.7%   1.6%               5    Ready       1d01h
>>>     3      9.2%   3.8%   4.1%              10    Reply       27m38s
>>>     4      0.0%   0.0%   0.9%              10    Receive     36m35s
>>>     5      0.0%   0.0%   0.0%              11    Nanosleep   2m21s
>>>     6     28.3%  28.5%  28.4%              21    Intr        17d05h
>>>     7      0.9%   1.8%   2.0%              22    Intr        1d05h
>>>     8      0.9%   0.9%   0.9%              23    Intr        12h23m
>>>     9      0.0%   0.0%   0.0%              25    Intr        0.000
>>>    10      0.0%   0.0%   0.1%              10    Receive     24m13s
>>>    11      0.0%   0.0%   0.0%              10    Receive     58m52s
>>>    12      0.0%   0.0%   0.0%              10    Condvar     0.001
>>>    13      0.0%   0.0%   0.0%              10    Receive     53m01s
>>>
>>> According to the caps, it's all L2 mcast with an Ethertype of 8819, which
>>> is CobraNet. If you want to know more about it, knock yourself out:
>>> http://www.peaveyoxford.com/kc/Working%20with%20CobraNet.pdf
>>>
>>> I tried to filter using VLAN ACL, but it stopped all communication in the
>>> VLAN for Ethertype 8819.
>>>
>>> On 2011-01-19, at 1:46 PM, Max Pierson wrote:
>>>
>>>> Here's more info on VACL's ...
>>>>
>>>> http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SXF/native/configuration/guide/vacl.html
>>>>
>>>> Apply the VACL to drop the traffic only if it's destined to your RP
>>>> MAC. If it's true L2 multicast, it shouldn't be hitting your RP, from my
>>>> understanding of L2 multicast. (Someone please correct me if I'm wrong.)
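>>>>
>>>> A rough sketch of what I mean (names, the RP MAC aaaa.bbbb.cccc, and
>>>> the Ethertype match are placeholders -- use the MAC of your SVI in
>>>> that VLAN and the Ethertype you see in the caps):
>>>>
>>>> Switch(config)# mac access-list extended TO-RP-MAC
>>>> Switch(config-ext-macl)# permit any host aaaa.bbbb.cccc 0x8819 0x0
>>>> Switch(config)# vlan access-map DROP-TO-RP 10
>>>> Switch(config-access-map)# match mac address TO-RP-MAC
>>>> Switch(config-access-map)# action drop
>>>> Switch(config)# vlan access-map DROP-TO-RP 20
>>>> Switch(config-access-map)# action forward
>>>> Switch(config)# vlan filter DROP-TO-RP vlan-list <vlan_id>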
>>>>
>>>> On Wed, Jan 19, 2011 at 2:40 PM, Devin Kinch <devinkinch at gmail.com>
>>>> wrote:
>>>>
>>>> But then the traffic would also be filtered out as it enters the VLAN...
>>>> I still need the traffic to pass through the trunk links to the edge
>>>> aggregation switches at each site. I just need to keep it from hitting
>>>> the RP.
>>>>
>>>> Unless I'm confused about where VACLs apply filtering...
>>>>
>>>> On 2011-01-19, at 12:23 PM, Max Pierson wrote:
>>>>
>>>>> VACL's should do the trick for you here. If you know the value of the
>>>>> Ethertype frame, you should still be able to filter on it even though
>>>>> it's proprietary. If you don't, time to break out the shark! I would
>>>>> start off by trying to filter based on ethertype since that won't be
>>>>> common to any other traffic on the wire.
>>>>>
>>>>> Max
>>>>>
>>>>> On Wed, Jan 19, 2011 at 1:54 PM, Devin Kinch <devinkinch at gmail.com>
>>>>> wrote:
>>>>> I have a network running two 6509s with VSS in the core of the network.
>>>>> Several of the attached VLANs are used by an application that transmits
>>>>> audio in a non-IP, layer-2 multicast stream (016b.68xx.xxxx or something
>>>>> like that). It uses many different destination MAC addresses and the
>>>>> frames have their own vendor-proprietary Ethertype. We also need
>>>>> layer 3 routing in the same VLANs to manage these devices.
>>>>>
>>>>> The issue I have is that all these layer 2 multicasts are causing the
>>>>> CPU usage on the Sup to hover at around 70-80%. It isn't causing any
>>>>> noticeable impact to the network today, but it may impact future
>>>>> scalability. I've tried using CoPP (which doesn't support L2
>>>>> filtering)... is there an obvious, elegant way of keeping this traffic
>>>>> from hitting the RP, while still forwarding at layer 2? Perhaps static
>>>>> MAC entries?
>>>>>
>>>>>
>>>>> Devin Kinch
>>>>> _______________________________________________
>>>>> cisco-nsp mailing list cisco-nsp at puck.nether.net
>>>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>>>>
>>>>
>>>>
>>>
>