[c-nsp] Filtering Layer 2 Multicasts on 6509
Devin Kinch
devinkinch at gmail.com
Wed Jan 19 18:01:56 EST 2011
Thanks Pete, that's awesome. Still didn't tell me anything I didn't know, but I will definitely keep that one in the toolbelt :)
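For the record, here's roughly what I ran from the netdr write-up
(typed from memory, so the exact syntax may vary by release):

debug netdr capture rx
show netdr captured-packets
debug netdr clear-capture

The rx capture grabs what's being punted to the RP inband interface,
and the show command decodes it into the dumps below.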
The results:
------- dump of incoming inband packet -------
interface Vl500, routine mistral_process_rx_packet_inlin, timestamp 15:54:34.867
dbus info: src_vlan 0x1F4(500), src_indx 0x1000(4096), len 0xC0(192)
bpdu 0, index_dir 0, flood 1, dont_lrn 0, dest_indx 0x41F4(16884)
40820000 01F40400 10000000 C0080000 1E000510 01000008 00000008 41F4136A
mistral hdr: req_token 0x0(0), src_index 0x1000(4096), rx_offset 0x76(118)
requeue 0, obl_pkt 0, vlan 0x1F4(500)
destmac 01.60.2B.FE.01.00, srcmac 00.60.2B.02.B9.6F, protocol 8819
layer 3 data: 10020404 B7E80100 00004240 4004FFFF FFFF0000 FFFFFFFF
00000000 00000000 00000000 FFFFFFFF FFFFFFFF FFFFFFFF
FFFF0000 00000000 0000FFFF 0000100A 00004212 1800
------- dump of incoming inband packet -------
interface Vl510, routine mistral_process_rx_packet_inlin, timestamp 15:54:34.871
dbus info: src_vlan 0x1FE(510), src_indx 0x1010(4112), len 0x2C0(704)
bpdu 0, index_dir 0, flood 1, dont_lrn 0, dest_indx 0x41FE(16894)
48820000 01FE0400 10100002 C0080000 1E000540 01000008 00000008 41FED6AE
mistral hdr: req_token 0x0(0), src_index 0x1010(4112), rx_offset 0x76(118)
requeue 0, obl_pkt 0, vlan 0x1FE(510)
destmac 01.60.2B.FE.01.00, srcmac 00.60.2B.02.82.83, protocol 8819
layer 3 data: 10020404 010D0100 00004240 40040000 00000000 0000FFFF
FFFFFFFF FFFFFFFF FFFF0000 0000FFFF FFFF0000 00000000
FFFFFFFF FFFFFFFF 0000FFFF 00001000 000041F4 1800
------- dump of incoming inband packet -------
interface Vl510, routine mistral_process_rx_packet_inlin, timestamp 15:54:34.871
dbus info: src_vlan 0x1FE(510), src_indx 0x1010(4112), len 0x40(64)
bpdu 0, index_dir 0, flood 1, dont_lrn 0, dest_indx 0x41FE(16894)
50820000 01FE0400 10100300 40080000 1E000000 0E000008 00000018 41FEA4F4
mistral hdr: req_token 0x0(0), src_index 0x1010(4112), rx_offset 0x76(118)
requeue 0, obl_pkt 0, vlan 0x1FE(510)
destmac 01.60.2B.FF.FF.00, srcmac 00.60.2B.02.82.83, protocol 8819
layer 3 data: 00020B04 010D0000 EE02FF00 06030802 B0040000 00000000
04400060 2B028283 02600000 00005555 55555555 55550000
00000000 00000000 0000FFFF
------- dump of incoming inband packet -------
interface Vl530, routine mistral_process_rx_packet_inlin, timestamp 15:54:34.871
dbus info: src_vlan 0x212(530), src_indx 0x100A(4106), len 0x40(64)
bpdu 0, index_dir 0, flood 1, dont_lrn 0, dest_indx 0x4212(16914)
58820000 02120400 100A0000 40080000 1E000070 0E000008 00000018 42124E73
mistral hdr: req_token 0x0(0), src_index 0x100A(4106), rx_offset 0x76(118)
requeue 0, obl_pkt 0, vlan 0x212(530)
destmac 01.60.2B.FF.FF.00, srcmac 00.60.2B.02.BF.64, protocol 8819
layer 3 data: 00020B04 0EBD0000 EE02FF00 06030802 B0040000 00000000
04400060 2B02BF64 02600000 00005555 55555555 55550000
00000000 00000000 0000FFFF
------- dump of incoming inband packet -------
interface Vl500, routine mistral_process_rx_packet_inlin, timestamp 15:54:34.871
dbus info: src_vlan 0x1F4(500), src_indx 0x1001(4097), len 0x40(64)
bpdu 0, index_dir 0, flood 1, dont_lrn 0, dest_indx 0x41F4(16884)
60820000 01F40400 10010000 40080000 1E000550 01000008 00000008 41F4505B
mistral hdr: req_token 0x0(0), src_index 0x1001(4097), rx_offset 0x76(118)
requeue 0, obl_pkt 0, vlan 0x1F4(500)
destmac 01.60.2B.FF.FF.00, srcmac 00.60.2B.02.B8.C2, protocol 8819
layer 3 data: 00020B04 B8E80000 EE022000 06030802 B0040000 00000000
04400060 2B02B96F 02600000 00005555 55555555 55550000
00000000 00000000 0000FFFF
------- dump of incoming inband packet -------
interface Vl530, routine mistral_process_rx_packet_inlin, timestamp 15:54:34.871
dbus info: src_vlan 0x212(530), src_indx 0x100A(4106), len 0x2C0(704)
bpdu 0, index_dir 0, flood 1, dont_lrn 0, dest_indx 0x4212(16914)
68820000 02120400 100A0002 C0080000 1E000030 0E000008 00000018 42122CA2
mistral hdr: req_token 0x0(0), src_index 0x100A(4106), rx_offset 0x76(118)
requeue 0, obl_pkt 0, vlan 0x212(530)
destmac 01.60.2B.FE.03.00, srcmac 00.60.2B.02.BF.64, protocol 8819
layer 3 data: 10020404 0EBD0300 00000240 00000240 00000240 00004240
40040000 FFFFFFFF 0000FFFF 0000FFFF FFFFFFFF FFFFFFFF
FFFFFFFF 0000FFFF 0000FFFF 00001001 000041F4 1800
------- dump of incoming inband packet -------
interface Vl500, routine mistral_process_rx_packet_inlin, timestamp 15:54:34.871
dbus info: src_vlan 0x1F4(500), src_indx 0x1000(4096), len 0xC0(192)
bpdu 0, index_dir 0, flood 1, dont_lrn 0, dest_indx 0x41F4(16884)
70820000 01F40400 10000000 C0080000 1E000510 01000008 00000008 41F4836A
mistral hdr: req_token 0x0(0), src_index 0x1000(4096), rx_offset 0x76(118)
requeue 0, obl_pkt 0, vlan 0x1F4(500)
destmac 01.60.2B.FE.01.00, srcmac 00.60.2B.02.B9.6F, protocol 8819
layer 3 data: 10020404 B8E80100 00004240 4004FFFF FFFFFFFF FFFFFFFF
FFFF0000 00000000 00000000 00000000 00000000 0000FFFF
FFFFFFFF FFFFFFFF 0000FFFF 0000100A 00004212 1800
Notice the timestamps... yep, a whole lot of CobraNet. So the RP does look at these packets. Any ideas for filtering?
On 2011-01-19, at 2:41 PM, Pete Lumbis wrote:
> I think the key is to determine why we are punting in order to apply
> the correct solution.
>
> Try and grab a netdr capture of the traffic
> http://www.pingjeffgreene.com/networkers-corner-2/troubleshooting-tools/netdr/
>
> Alternatively you can do an RP inband span, but this is a bit more
> involved than the netdr capture
> http://www.pingjeffgreene.com/networkers-corner-2/troubleshooting-tools/6500-rp-inband-span/
>
> The most common problem I see for high CPU due to multicast is that
> the TTL is 1, double check that as well.
>
> -Pete
>
> On Wed, Jan 19, 2011 at 5:08 PM, Devin Kinch <devinkinch at gmail.com> wrote:
>>> If it's true L2 multicast, it shouldn't be hitting your RP, from my understanding of L2 multicast.
>>
>> I didn't think so either. But when I turn down the SVI interfaces in those VLANs, the CPU usage goes down to around 8-9%. Here's the output showing the CPU usage:
>>
>> 6686588558665866686568555866585558955866685568667865986558656855686568
>> 8371990985129424261948999640989895598405059945041409672886290689060905
>> 100 * *
>> 90 * * * * * * * ** * * ** * * * *
>> 80 * ** * * * * * * ** * * * * ** * * * *
>> 70 * * ** * * * * * * ** * * * * ** ** * * * *
>> 60 **********************************************************************
>> 50 ######################################################################
>> 40 ######################################################################
>> 30 ######################################################################
>> 20 ######################################################################
>> 10 ######################################################################
>> 0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
>> 0 5 0 5 0 5 0 5 0 5 0 5 0
>> CPU% per hour (last 72 hours)
>> * = maximum CPU% # = average CPU%
>>
>> s01#sh proc cpu sort
>> CPU utilization for five seconds: 59%/48%; one minute: 61%; five minutes: 61%
>> PID TID 5secUtil 1minUtil 5minUtil
>> 1 1 41.0% 39.3% 39.3%
>> 16406 6 28.8% 28.4% 28.4%
>> 16429 4 7.5% 4.8% 2.9%
>> 16406 22 7.0% 3.4% 3.3%
>> 16406 21 6.5% 3.7% 1.9%
>> 16406 17 4.5% 5.9% 3.9%
>>
>> s01#show proc cpu detail 16406
>> CPU utilization for five seconds: 61%/46%; one minute: 61%; five minutes: 61%
>> PID/TID 5Sec 1Min 5Min Process Prio STATE CPU
>> 16406 51.9% 51.6% 51.4% ios-base 28d18h
>> 1 0.0% 0.0% 0.0% 20 Condvar 0.000
>> 2 1.7% 1.7% 1.6% 5 Ready 1d01h
>> 3 9.2% 3.8% 4.1% 10 Reply 27m38s
>> 4 0.0% 0.0% 0.9% 10 Receive 36m35s
>> 5 0.0% 0.0% 0.0% 11 Nanosleep 2m21s
>> 6 28.3% 28.5% 28.4% 21 Intr 17d05h
>> 7 0.9% 1.8% 2.0% 22 Intr 1d05h
>> 8 0.9% 0.9% 0.9% 23 Intr 12h23m
>> 9 0.0% 0.0% 0.0% 25 Intr 0.000
>> 10 0.0% 0.0% 0.1% 10 Receive 24m13s
>> 11 0.0% 0.0% 0.0% 10 Receive 58m52s
>> 12 0.0% 0.0% 0.0% 10 Condvar 0.001
>> 13 0.0% 0.0% 0.0% 10 Receive 53m01s
>>
>> According to the caps, it's all L2 mcast with an Ethertype of 0x8819, which is CobraNet. If you want to know more about it, knock yourself out:
>> http://www.peaveyoxford.com/kc/Working%20with%20CobraNet.pdf
>>
>> I tried to filter using a VLAN ACL, but it stopped all Ethertype 0x8819 communication in the VLAN.
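>>
>> For reference, this is roughly what I had (typed from memory, so don't
>> trust the exact syntax; it also assumes the MAC ACL on this code takes
>> a raw ethertype/mask rather than only protocol-family keywords):
>>
>> mac access-list extended COBRANET
>>  permit any any 0x8819 0x0
>> ! drop anything matching the CobraNet ethertype, forward the rest
>> vlan access-map BLOCK-COBRANET 10
>>  match mac address COBRANET
>>  action drop
>> vlan access-map BLOCK-COBRANET 20
>>  action forward
>> vlan filter BLOCK-COBRANET vlan-list 500,510,530
>>
>> Since the VACL applies to everything switched within the VLAN, it
>> dropped the audio along with whatever was being punted.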
>>
>> On 2011-01-19, at 1:46 PM, Max Pierson wrote:
>>
>>> Here's more info on VACLs ...
>>>
>>> http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SXF/native/configuration/guide/vacl.html
>>>
>>> Apply the VACL to drop the traffic only if it's destined to your RP MAC. If it's true L2 multicast, it shouldn't be hitting your RP, from my understanding of L2 multicast. (Someone please correct me if I'm wrong.)
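>>>
>>> Something along these lines, maybe (untested sketch; 0000.0c12.3456
>>> is just a placeholder for your RP/SVI MAC, and again I'm assuming a
>>> raw ethertype/mask is accepted in the MAC ACL):
>>>
>>> mac access-list extended TO-RP-8819
>>>  permit any host 0000.0c12.3456 0x8819 0x0
>>> ! drop only CobraNet frames addressed to the RP itself
>>> vlan access-map DROP-TO-RP 10
>>>  match mac address TO-RP-8819
>>>  action drop
>>> vlan access-map DROP-TO-RP 20
>>>  action forward
>>> vlan filter DROP-TO-RP vlan-list 500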
>>>
>>> On Wed, Jan 19, 2011 at 2:40 PM, Devin Kinch <devinkinch at gmail.com> wrote:
>>>
>>> But then the traffic would also be filtered out as it enters the VLAN... I still need the traffic to pass through the trunk links to the edge aggregation switches at each site. I just need to keep it from hitting the RP.
>>>
>>> Unless I'm confused about where VACLs apply filtering...
>>>
>>> On 2011-01-19, at 12:23 PM, Max Pierson wrote:
>>>
>>>> VACLs should do the trick for you here. If you know the frame's Ethertype value, you should still be able to filter on it even though it's proprietary. If you don't, time to break out the shark! I would start by trying to filter on the Ethertype, since it won't be common to any other traffic on the wire.
>>>>
>>>> Max
>>>>
>>>> On Wed, Jan 19, 2011 at 1:54 PM, Devin Kinch <devinkinch at gmail.com> wrote:
>>>> I have a network running two 6509s with VSS in the core of the network. Several of the attached VLANs are used by an application that transmits audio as a non-IP, layer 2 multicast stream (destination MACs like 016b.68xx.xxxx or something like that). It uses many different destination MAC addresses, and the frames carry their own vendor-proprietary Ethertype. We also need layer 3 routing in the same VLANs to manage these devices.
>>>>
>>>> The issue is that all these layer 2 multicasts are causing CPU usage on the Sup to hover around 70-80%. It isn't causing any noticeable impact to the network today, but it may limit future scalability. I've tried CoPP (which doesn't support L2 filtering)... is there an obvious, elegant way to keep this traffic off the RP while still forwarding it at layer 2? Perhaps static MAC entries?
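>>>>
>>>> For instance, something like this per destination MAC (untested; the
>>>> MAC, VLAN, and interface below are made up, and with the number of
>>>> destination MACs in play it may not scale):
>>>>
>>>> ! pin the multicast MAC to only the ports that need the stream
>>>> mac-address-table static 016b.6800.0001 vlan 500 interface GigabitEthernet1/1
>>>>
>>>> The thought being that a static CAM entry constrains the flooding
>>>> that otherwise sends a copy to the inband port.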
>>>>
>>>>
>>>> Devin Kinch
>>>>
>>>
>>>
>>