[c-nsp] Filtering Layer 2 Multicasts on 6509

Pete Lumbis alumbis at gmail.com
Wed Jan 19 17:41:01 EST 2011


I think the key is to determine why we are punting in order to apply
the correct solution.

Try to grab a netdr capture of the traffic:
http://www.pingjeffgreene.com/networkers-corner-2/troubleshooting-tools/netdr/
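For reference, a minimal netdr capture on a Sup720 looks roughly like this
(commands from memory; exact syntax varies by IOS release, so check the
link above):

```
! start capturing traffic punted to the RP
debug netdr capture rx
! inspect what was captured
show netdr captured-packets
! clear the buffer when finished
debug netdr clear-capture
```

Despite the "debug" keyword, netdr only samples traffic that is already
being punted to the RP, so it is generally considered safe to run on a
busy box.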

Alternatively, you can do an RP inband SPAN, but it's a bit more involved
than the netdr capture:
http://www.pingjeffgreene.com/networkers-corner-2/troubleshooting-tools/6500-rp-inband-span/

The most common cause I see for high CPU due to multicast is a TTL of 1;
double-check that as well.

-Pete

On Wed, Jan 19, 2011 at 5:08 PM, Devin Kinch <devinkinch at gmail.com> wrote:
>> If it's true L2 multicast, it shouldn't be hitting your RP from my understanding of L2 Multicast.
>
> I didn't think so either.  But when I turn down the SVI interfaces in those VLANs, the CPU usage goes down to around 8-9%.  Here's the output showing the CPU usage:
>
>    6686588558665866686568555866585558955866685568667865986558656855686568
>    8371990985129424261948999640989895598405059945041409672886290689060905
> 100                                   *                 *
>  90   *  *   *       *   *   *   *   **      *   *      **   *   *   *   *
>  80   *  **  *   *   *   *   *   *   **  *   *   *   *  **   *   *   *   *
>  70 * *  **  *   *   *   *   *   *   **  * * *   *  **  **   *   *   *   *
>  60 **********************************************************************
>  50 ######################################################################
>  40 ######################################################################
>  30 ######################################################################
>  20 ######################################################################
>  10 ######################################################################
>   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
>             0    5    0    5    0    5    0    5    0    5    0    5    0
>                   CPU% per hour (last 72 hours)
>                  * = maximum CPU%   # = average CPU%
>
> s01#sh proc cpu sort
> CPU utilization for five seconds: 59%/48%; one minute: 61%; five minutes: 61%
> PID     TID      5secUtil 1minUtil 5minUtil
> 1       1         41.0%   39.3%    39.3%
> 16406   6         28.8%   28.4%    28.4%
> 16429   4          7.5%    4.8%     2.9%
> 16406   22         7.0%    3.4%     3.3%
> 16406   21         6.5%    3.7%     1.9%
> 16406   17         4.5%    5.9%     3.9%
>
> s01#show proc cpu detail 16406
> CPU utilization for five seconds: 61%/46%; one minute: 61%; five minutes: 61%
> PID/TID   5Sec    1Min     5Min Process             Prio  STATE        CPU
> 16406    51.9%   51.6%    51.4% ios-base                              28d18h
>      1   0.0%    0.0%     0.0%                       20  Condvar      0.000
>      2   1.7%    1.7%     1.6%                        5  Ready        1d01h
>      3   9.2%    3.8%     4.1%                       10  Reply       27m38s
>      4   0.0%    0.0%     0.9%                       10  Receive     36m35s
>      5   0.0%    0.0%     0.0%                       11  Nanosleep    2m21s
>      6  28.3%   28.5%    28.4%                       21  Intr        17d05h
>      7   0.9%    1.8%     2.0%                       22  Intr         1d05h
>      8   0.9%    0.9%     0.9%                       23  Intr        12h23m
>      9   0.0%    0.0%     0.0%                       25  Intr         0.000
>     10   0.0%    0.0%     0.1%                       10  Receive     24m13s
>     11   0.0%    0.0%     0.0%                       10  Receive     58m52s
>     12   0.0%    0.0%     0.0%                       10  Condvar      0.001
>     13   0.0%    0.0%     0.0%                       10  Receive     53m01s
>
> According to the captures, it's all L2 mcast with an Ethertype of 8819, which is CobraNet.  If you want to know more about it, knock yourself out:
> http://www.peaveyoxford.com/kc/Working%20with%20CobraNet.pdf
>
> I tried filtering with a VLAN ACL, but it stopped all Ethertype 8819 communication in the VLAN.
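That behavior is expected: a VACL applies to all traffic bridged within
the VLAN, not just the frames punted to the RP. A filter of this shape
(the ACL and access-map names and the VLAN number are hypothetical) drops
every Ethertype 0x8819 frame in the VLAN:

```
mac access-list extended COBRANET
 permit any any 0x8819 0x0
!
vlan access-map DROP-COBRANET 10
 match mac address COBRANET
 action drop
! forward everything else (unmatched traffic hits an implicit drop)
vlan access-map DROP-COBRANET 20
 action forward
!
vlan filter DROP-COBRANET vlan-list 100
```

Because the VACL is evaluated for bridged traffic, it cannot be used to
drop only the copies headed for the RP while still flooding the frames
in the VLAN.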
>
> On 2011-01-19, at 1:46 PM, Max Pierson wrote:
>
>> Here's more info on VACL's ...
>>
>> http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SXF/native/configuration/guide/vacl.html
>>
>> Apply the VACL to drop the traffic only if it's destined to your RP MAC. If it's true L2 multicast, it shouldn't be hitting your RP, from my understanding of L2 multicast. (Someone please correct me if I'm wrong.)
>>
>> On Wed, Jan 19, 2011 at 2:40 PM, Devin Kinch <devinkinch at gmail.com> wrote:
>>
>> But then the traffic would also be filtered out as it enters the VLAN... I still need the traffic to pass through the trunk links to the edge aggregation switches at each site.  I just need to keep it from hitting the RP.
>>
>> Unless I'm confused about where VACLs apply filtering...
>>
>> On 2011-01-19, at 12:23 PM, Max Pierson wrote:
>>
>>> VACLs should do the trick for you here. If you know the value of the Ethertype in the frame, you should still be able to filter on it even though it's proprietary. If you don't, time to break out the shark! I would start by trying to filter based on the Ethertype, since that won't be common to any other traffic on the wire.
>>>
>>> Max
>>>
>>> On Wed, Jan 19, 2011 at 1:54 PM, Devin Kinch <devinkinch at gmail.com> wrote:
>>> I have a network running two 6509s with VSS in the core of the network.  Several of the attached VLANs are used by an application that transmits audio in a non-IP, layer 2 multicast stream (destination MACs of 016b.68xx.xxxx or something like that).  It uses many different destination MAC addresses, and the frames have their own vendor-proprietary Ethertype.  We also need layer 3 routing in the same VLANs to manage these devices.
>>>
>>> The issue I have is that all these layer 2 multicasts are causing the CPU usage on the Sup to hover around 70-80%.  It isn't causing any noticeable impact to the network today, but it may affect future scalability.  I've tried using CoPP (which doesn't support L2 filtering)... Is there an obvious, elegant way of keeping this traffic from hitting the RP while still forwarding it at layer 2?  Perhaps static MAC entries?
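If the punted frames used a small, stable set of destination MACs, one
possible knob would be a static multicast MAC entry, which constrains the
frame to a list of ports in hardware. A sketch (the MAC address, VLAN,
and interfaces are placeholders; the command is spelled
mac-address-table static or mac address-table static depending on
release):

```
mac-address-table static 016b.6800.0001 vlan 100 interface GigabitEthernet1/1 GigabitEthernet1/2
```

With many varying destination MACs, as described above, this likely does
not scale, and whether it stops the punt at all depends on why the frames
are being punted in the first place, which is what a capture should show.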
>>>
>>>
>>> Devin Kinch
>>> _______________________________________________
>>> cisco-nsp mailing list  cisco-nsp at puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>>
>>
>>
>
>


