[f-nsp] Multicast causing high CPU

Peter Olsen Peter.Olsen at GlobalConnect.dk
Sat Mar 7 08:02:42 EST 2009


The JetCore FPGA can handle around 200,000 pps of broadcast/multicast before you hit ~80-100% CPU load, as far as L2 is concerned.
All such packets are handled by the CPU by default.
You can play around with the CPU protection features, but in my experience they have limited impact (and limited success as far as CPU load is concerned).
I guess this is an architectural limitation in JetCore.
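
To put that number in perspective, here is a quick back-of-the-envelope (plain Python, purely illustrative arithmetic, nothing Foundry-specific) showing how little bandwidth 200,000 pps corresponds to at small frame sizes; a storm of minimum-size frames can max out the CPU long before any gigabit link is full:

# Rough sketch: bandwidth consumed by a given packet rate at common frame sizes.
# 200,000 pps is the JetCore figure quoted above; the frame sizes are just examples.
PPS_LIMIT = 200_000

for frame_bytes in (64, 512, 1500):
    mbps = PPS_LIMIT * frame_bytes * 8 / 1_000_000
    print(f"{frame_bytes:4d}-byte frames: {mbps:7.1f} Mbit/s at {PPS_LIMIT} pps")

# 64-byte frames: ~102 Mbit/s is already enough to saturate the CPU.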
 
It is even worse if you have the same MAC address arriving from two different directions, e.g. hitting two ports at the same time (which will happen if you have an L2 loop); then 99% CPU load occurs at a very low pps rate. In this case it is caused by the MAC learning process eating all CPU resources. This behaviour makes JetCore L2 very sensitive to layer 2 loops.
 
For us it was only really fixed by moving to MLX/XMR, where the CPU-protection feature is very efficient and you can force all broadcast/multicast packets to stay in hardware for forwarding. XMR/MLX can also handle much more than 200,000 pps in the CPU.
 
br,
Peter
 

________________________________

From: foundry-nsp-bounces at puck.nether.net [mailto:foundry-nsp-bounces at puck.nether.net] On Behalf Of Alexey Kouznetsov
Sent: 7 March 2009 13:36
To: foundry-nsp at puck.nether.net
Subject: Re: [f-nsp] Multicast causing high CPU


Hello!
 
We have a problem with high CPU usage on our BigIron 8000 router, managed by:
 
SL 3: BxGMR4 M4 Management Module, SYSIF 2 (Mini GBIC), M4, ACTIVE
      Serial #:  
8192 KB BRAM, SMC version 1, BM version 21
  512 KB PRAM(512K+0K) and 2048*8 CAM entries for DMA  8, version 0209
  512 KB PRAM(512K+0K) and shared CAM entries for DMA  9, version 0209
  512 KB PRAM(512K+0K) and 2048*8 CAM entries for DMA 10, version 0209
  512 KB PRAM(512K+0K) and shared CAM entries for DMA 11, version 0209
 
When MC traffic gets close to 80 Mbit/s (with a packet size of about 1300 bytes), we see near 100% CPU usage and MC packet loss.
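
For reference, a quick sketch (plain Python, only illustrative arithmetic using the numbers above) of the packet rate this corresponds to; it is only a few thousand pps, so the load seems to come from the software forwarding path rather than from sheer packet volume:

# Rough sketch: packet rate implied by ~80 Mbit/s of ~1300-byte multicast traffic.
# Both numbers are taken from the description above.
rate_mbps = 80
packet_bytes = 1300

pps = rate_mbps * 1_000_000 / (packet_bytes * 8)
print(f"~{pps:.0f} pps")   # roughly 7,700 pps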
 
telnet at BigIron Router#show cp
74 percent busy, from 141995 sec ago
1 sec avg: 96 percent busy
5 sec avg: 97 percent busy
60 sec avg: 96 percent busy
300 sec avg: 96 percent busy

telnet at BigIron Router#show processes cpu
Process Name    5Sec(%)   1Min(%)   5Min(%)   15Min(%)   Runtime(ms)
IP_M            69.22     83.14     80.04     71.83      95219021
The rest of the counters are near 0.
 
Currently this switch is doing the MC routing and L2 switching for 7*8 gigabit ports. Switching itself works without problems. MC routing works perfectly as long as the CPU load stays below 70-80%; above that, packets are lost, and since this is TV traffic the picture becomes unusable.
As far as I can see from sh ip pim mcache:
 
1    (x.x.y.194 224.0.x.16) in v11 (tag e6/8), cnt=28266
     upstream nbr is x.x.x.109 on v11 using ip route
     Sparse Mode, RPT=0 SPT=1 REG=0 MSDP Adv=0 MSDP Create=1
     L3 (SW) 1: tag e8/8(VL1018)
     fast=0 slow=1 pru=0 swL2=0 hwL2=0 tag graft 0L2C
     age=0s up-time=2m no fid, no flow,
2    (* 224.0.x.16) RP x.x.x.109, in NIL (NIL), cnt=2028
     RP is directly connected
     Sparse Mode, RPT=1 SPT=0 REG=0 MSDP Adv=0 MSDP Create=0
     L3 (SW) 1: tag e8/8(VL1018)
     fast=0 slow=1 pru=0 swL2=0 hwL2=0 tag graft 0L2C
     age=0s up-time=2m no fid, no flow,

Switching works in software, not hardware mode (the entries above show fast=0 slow=1 and "no fid, no flow"). The path to 224.0.x.16 is received via the MSDP connection. The mcast route to x.x.y.194 is received via MBGP (but I also tried a static route, both a route and an mroute, to that address).
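
Just as a reading aid (a hypothetical helper I sketched, not a Foundry tool), a small Python snippet that scans the sh ip pim mcache output above for the fast=/slow= counters and the "no fid" marker, to list the entries that are being forwarded in software:

import re

# Hypothetical reading aid: flag mcache entries that are in the software path,
# based on the fast=/slow= counters and the "no fid" marker shown above.
def software_forwarded_entries(mcache_text):
    entries = []
    for line in mcache_text.splitlines():
        m = re.search(r"fast=(\d+)\s+slow=(\d+)", line)
        if m:
            entries.append({"fast": int(m.group(1)),
                            "slow": int(m.group(2)),
                            "no_fid": False})
        elif "no fid" in line and entries:
            entries[-1]["no_fid"] = True
    # Count an entry as software-forwarded if the slow counter is non-zero
    # while fast stays at zero, or if the output reports "no fid".
    return [e for e in entries
            if (e["slow"] and not e["fast"]) or e["no_fid"]]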
 
All mcast-related config is below.
  
ip multicast-routing

ip multicast-perf
ip show-subnet-length
ip igmp query-interval 120
ip igmp group-membership-time 240

no ip source-route
ip multicast passive
ip multicast filter
ip multicast hardware-drop
    
mcast-hw-replic-oar

router pim
rp-address x.x.x.1 0
hardware-drop

router msdp
msdp-peer x.x.x.2
ttl-threshold x.x.x.2 10

interface ve 1
ip address y.y.31.2/24
ip pim-sparse
!
interface ve 3
ip address y.y.1.2/24
ip pim-sparse

..
! This is the interface where the MC comes from
interface ve 11
ip address x.x.x.1/30
ip pim-sparse
ip pim border
  
I saw a mail here from someone about creating the ve interface without an IP address... but in that case the VLAN does not receive MC at all. I also tried to disable L2 mcast with
no ip multicast
but saw no change.
 
Any ideas on how to force HW MC routing?
 
With best regards
/Alexey

