[f-nsp] Multicast is being switched by LP CPU on MLXe?
Daniel Schmidt
daniel.schmidt at wyo.gov
Mon Dec 19 13:20:36 EST 2016
It's been a while, but I would think that PIM snooping wouldn't do a
darn thing on L2; you would want IGMP snooping. That said, I don't think
that would work without at least a local L3 interface to respond to the
queries. Otherwise, you might as well just use broadcast. (Not that I
recommend that.)
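With no local L3 interface to act as querier, one option is to let the snooping switch generate the queries itself. A minimal sketch, assuming NetIron-style per-VLAN snooping config where `multicast active` (as opposed to the `multicast passive` shown below) makes the switch the IGMP querier for the VLAN; verify the exact keywords against your release's Multicast Guide:

```
vlan 450 name ITCons2DS_test
 tagged ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
 multicast active
!
```

In active mode the switch sources the periodic general queries on the VLAN, so receivers keep reporting membership even when no router or L3 interface is present.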
On Mon, Dec 19, 2016 at 4:57 AM, Alexander Shikoff <minotaur at crete.org.ua>
wrote:
> Hello!
>
> On Sun, Dec 18, 2016 at 10:03:59AM -0700, Eldon Koyle wrote:
> > What does your pim configuration look like? Especially your rp
> config.
> > Making sure there is no rp-candidate for traffic you want to keep on
> a single l2 domain can
> > help a lot (in fact, we only add rp entries for specific apps). This
> is especially true
> > for groups used by SSDP or mDNS. It's been a while, but I remember
> having similar issues.
> > I'll have to go dig through my configs and see if it reminds me of
> anything else.
>
> There is no L3 multicast routing at all.
> The configuration is pure L2.
>
> > --
> > Eldon Koyle
> >
> > On Dec 13, 2016 08:29, "Alexander Shikoff" <minotaur at crete.org.ua>
> wrote:
> >
> > Hi!
> > Well, I'd like to bring this thread up again hoping to catch
> > someone who has also hit this issue.
> > Today I upgraded the software to 05.9.00be, and the situation is still
> > the same: with Multicast Traffic Reduction enabled,
> > multicast traffic is being switched by the LP CPU.
> > Current test VLAN configuration is:
> > !
> > vlan 450 name ITCons2DS_test
> > tagged ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
> > multicast passive
> > multicast pimsm-snooping
> > !
> > telnet@lsr1-gdr.ki#show vlan 450
> > PORT-VLAN 450, Name ITCons2DS_test, Priority Level 0, Priority Force 0, Creation Type STATIC
> > Topo HW idx : 65535 Topo SW idx: 257 Topo next vlan: 0
> > L2 protocols : NONE
> > Statically tagged Ports : ethe 7/1 to 7/2 ethe 9/5 ethe 11/2 ethe 12/8 ethe 13/8
> > Associated Virtual Interface Id: NONE
> > ----------------------------------------------------------
> > Port Type Tag-Mode Protocol State
> > 7/1 TRUNK TAGGED NONE FORWARDING
> > 7/2 TRUNK TAGGED NONE FORWARDING
> > 9/5 TRUNK TAGGED NONE FORWARDING
> > 11/2 TRUNK TAGGED NONE FORWARDING
> > 12/8 TRUNK TAGGED NONE FORWARDING
> > 13/8 TRUNK TAGGED NONE FORWARDING
> > Arp Inspection: 0
> > DHCP Snooping: 0
> > IPv4 Multicast Snooping: Enabled - Passive
> > IPv6 Multicast Snooping: Disabled
> > No Virtual Interfaces configured for this vlan
> > IGMP snooping works; I'm able to see In/Out interfaces and the current
> > active querier:
> > telnet@lsr1-gdr.ki#show ip multicast vlan 450
> > ----------+-----+---------+---------------+-----+-----+------
> > VLAN      State  Mode      Active          Time  (*,G) (S,G)
> >                            Querier         Query Count Count
> > ----------+-----+---------+---------------+-----+-----+------
> > 450       Ena    Passive   192.168.210.1   119   1     1
> > ----------+-----+---------+---------------+-----+-----+------
> > Router ports: 12/8 (11s)
> > Flags- R: Router Port, V2|V3: IGMP Receiver, P_G|P_SG: PIM Join
> > 1 (*, 239.32.4.130) 00:34:48 NumOIF: 1 profile: none
> > Outgoing Interfaces:
> > e9/5 vlan 450 ( V2) 00:34:48/40s
> > 1 (91.238.195.1, 239.32.4.130) in e11/2 vlan 450 00:34:48 NumOIF: 1 profile: none
> > Outgoing Interfaces:
> > TR(e9/5,e7/1) vlan 450 ( V2) 00:34:48/0s
> > FID: 0xa0a9 MVID: None
> > Right after the multicast stream starts flooding from Eth11/2 out of
> > TR(e9/5,e7/1), the CPU load on LP 11 increases:
> > telnet@lsr1-gdr.ki#show cpu-utilization lp 11
> > 17:25:10 GMT+02 Tue Dec 13 2016
> > SLOT #: LP CPU UTILIZATION in %:
> >      in 1 second:  in 5 seconds:  in 60 seconds:  in 300 seconds:
> > 11:       6             6              6               6
> > And I see these packets being processed by the LP CPU:
> > LP-11#debug packet capture include vlan-id 450
> > [...]
> > 91.238.195.1 -> 239.32.4.130 UDP [2000 -> 2000]
> > **********************************************************************
> > [ppcr_tx_packet] ACTION: Forward packet using fid 0xa0a9
> > [xpp10ge_cpu_forward_debug]: Forward LP packet
> > Time stamp : 00 day(s) 11h 32m 49s:,
> > TM Header: [ 1022 00a9 a0a9 ]
> > Type: Multicast(0x00000000) Size: 34 Mcast ID: 0x9a0 Src Port: 2
> > Drp Pri: 2 Snp: 2 Exclude Src: 0 Cls: 0x00000001
> > **********************************************************************
> > 00: a0a9 0403 5e50 41c2-7840 0abc 4400 0000 FID = 0xa0a9
> > 10: 0100 5e20 0482 f4cc-55e5 4600 0800 4588 Offset = 0x10
> > 20: 0540 08df 0000 3d11-5cb4 5bee c301 ef20 VLAN = 450(0x01c2)
> > 30: 0482 07d0 07d0 052c-0000 4701 e11a 8534 CAM = 0x00055e
> > 40: 5a95 fb85 94ee 0b69-9938 967a c827 f571 SFLOW = 0
> > 50: 73cc 8e72 98cc 82e0-436e 30f1 4414 f400 DBL TAG = 0
> > 60: 11fd 7b2b c8be d9ca-d0fa 44d0 45b5 53e5
> > 70: a386 ac24 cc0b 9698-c0a2 ff65 9f32 6b14
> > Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
> > 4 0 0 11/2 3 0 1 0 1 1 1 1 0 0 1 0
> > 91.238.195.1 -> 239.32.4.130 UDP [2000 -> 2000]
> > **********************************************************************
> > [ppcr_rx_packet]: Packet received
> > Time stamp : 00 day(s) 11h 32m 49s:,
> > TM Header: [ 0564 8a23 0040 ]
> > Type: Fabric Unicast(0x00000000) Size: 1380 Class: 4 Src sys port: 2595
> > Dest Port: 0 Drop Prec: 1 Ing Q Sig: 0 Out mirr dis: 0x0 Excl src: 0 Sys mc: 0
> > **********************************************************************
> > Packet size: 1374, XPP reason code: 0x00045286
> > 00: 05f0 0403 5c50 41c2-7841 fffe 4400 0000 FID = 0x05f0
> > 10: 0100 5e20 0482 f4cc-55e5 4600 0800 4588 Offset = 0x10
> > 20: 0540 08e0 0000 3d11-5cb3 5bee c301 ef20 VLAN = 450(0x01c2)
> > 30: 0482 07d0 07d0 052c-0000 4701 e11f c052 CAM = 0x00ffff(R)
> > 40: e9df e2fb 1f9d 0c1d-354a 7df5 f0df edab SFLOW = 0
> > 50: 1145 566c 4c59 2557-f7cf c708 a75e 5a29 DBL TAG = 0
> > 60: 1704 9f8b 151c b66b-957a 51eb ac99 772d
> > 70: 07e7 23d7 f84a 50ac-5864 452d 7f70 0495
> > Pri CPU MON SRC PType US BRD DAV SAV DPV SV ER TXA SAS Tag MVID
> > 4 0 0 11/2 3 0 1 0 1 1 1 0 0 0 1 0
> > I have no idea why this happens. The "Multicast Guide" clearly
> > states that these packets should be processed in hardware.
> > Please advise!
> > Thanks!
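[Editor's note: to reproduce a test flow like the one captured above, a minimal Python sketch (a hypothetical helper, not from the original post) that joins the group 239.32.4.130 and sends a numbered UDP stream to port 2000, matching the source/group/port seen in the capture. The payload size and TTL are arbitrary assumptions.]

```python
import socket
import struct

GROUP = "239.32.4.130"  # multicast group seen in the capture
PORT = 2000             # UDP port seen in the capture

def make_payload(seq: int, size: int = 1316) -> bytes:
    """Build a dummy payload carrying a 32-bit sequence number."""
    header = struct.pack("!I", seq)
    return header + b"\x00" * (size - len(header))

def join_group() -> socket.socket:
    """Join GROUP so the switch sees an IGMP membership report on this port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # ip_mreq: group address + INADDR_ANY for the local interface
    mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def send_stream(count: int, ttl: int = 16) -> None:
    """Send `count` numbered datagrams to GROUP:PORT."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    for seq in range(count):
        sock.sendto(make_payload(seq), (GROUP, PORT))
    sock.close()
```

Running `join_group()` on a host behind e9/5 and `send_stream()` on a host behind e11/2 should recreate the (*,G)/(S,G) entries shown above, while `show cpu-utilization lp` reveals whether the stream is hitting the LP CPU.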
> > --
> > MINO-RIPE
> > _______________________________________________
> > foundry-nsp mailing list
> > foundry-nsp at puck.nether.net
> > http://puck.nether.net/mailman/listinfo/foundry-nsp
> >
>
> --
> MINO-RIPE
> _______________________________________________
> foundry-nsp mailing list
> foundry-nsp at puck.nether.net
> http://puck.nether.net/mailman/listinfo/foundry-nsp
>
--
E-Mail to and from me, in connection with the transaction
of public business, is subject to the Wyoming Public Records
Act and may be disclosed to third parties.