[f-nsp] High CPU MLX-4
Eldon Koyle
ekoyle+puck.nether.net at gmail.com
Tue Apr 18 21:57:41 EDT 2017
Have you disabled ICMP redirects? That is a common cause of unexplained
high CPU utilization. I think the command is "no ip redirect" (applied either
per interface or globally).
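For example, assuming NetIron-style syntax (the exact form varies between
releases, so treat this as a sketch to check against your code's config
guide):

  interface ethernet 1/1
   no ip redirect

The interface (ethernet 1/1) is only a placeholder; it would go on the ports
facing the affected traffic, or in the global config if your release supports
a global form of the command.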
Also, which code version are you running?
--
Eldon
On Apr 18, 2017 7:14 PM, "Joe Lao" <Joelao8392 at mail.com> wrote:
> Hello List
>
> My colleague posted on this list last month about an LP CPU issue
> experienced on MLX routers with GRE tunnels.
>
> The issue did not resolve itself; instead, we asked our customers not to
> send outbound traffic through us.
>
> However, a new issue has arisen.
>
>
> Our topology is as follows:
>
> CARRIER A -----> MLX-4-1 ---- MLX-4-2 ----> CARRIER B. The Carrier B
> connection is specifically designed to absorb attacks; Carrier A backhauls
> clean/protected traffic.
>
> MLX-4-2 terminates our GRE tunnels.
>
>
> Now we are seeing 95% LP CPU on MLX-4-1, and a packet capture shows only
> GRE packets from MLX-4-2 destined for the customer's GRE endpoint.
>
>
> SLOT #: LP CPU UTILIZATION in %:
> in 1 second: in 5 seconds: in 60 seconds: in 300 seconds:
> 1: 94 94 94 94
>
>
> LP-1#show tasks
> Task Name      Pri State PC       Stack    Size   CPU Usage(%) task vid
> -------------- --- ----- -------- -------- ------ ------------ --------
> con 27 wait 0005c710 040c5dc8 32768 0 0
> mon 31 wait 0005c710 041b7f10 8192 0 0
> flash 20 wait 0005c710 041c6f40 8192 0 0
> dbg 30 wait 0005c710 041beec0 16384 0 0
> main 3 wait 0005c710 23cc6f40 262144 1 101
> LP-I2C 3 wait 0005c710 27d70ee0 4096 0 101
> LP-Assist 3 wait 0005c710 29bbef00 32768 0 101
> LP-FCopy 3 wait 0005c710 29bc3f00 16384 0 101
> LP-VPLS-Offld 3 wait 0005c710 29bc8f00 16384 0 101
> LP-OF-Offld 3 wait 0005c710 29bcdf00 16384 0 101
> LP-TM-Offld 3 wait 0005c710 29bd2f00 16384 0 101
> LP-Stats 3 wait 0005c710 29bd7f60 16384 0 101
> LP-IPC 3 wait 0005c710 29c18f00 262144 0 101
> LP-TX-Pak 3 wait 0005c710 29c21f00 32768 0 101
> LP-RX-Pak 3 wait 0005c710 29c42f38 131072 97 101
> LP-SYS-Mon 3 wait 0005c710 29c47f28 16384 0 101
> LP-RTD-Mon 3 wait 0005c710 29c4cf08 16384 0 101
> LP-Console 3 ready 20b636c0 29c6df78 131072 0 101
> LP-CPU-Mon 3 wait 0005c710 29c96f40 163840 0 101
>
>
> MLX-4-2 -> Client GRE endpoint
> xxxxxxxx -> xxxxx [Protocol:47]
> **********************************************************************
> [ppcr_rx_packet]: Packet received
> Time stamp : 00 day(s) 00h 14m 33s:,
> TM Header: [ 8026 2000 0000 ]
> Type: Fabric Unicast(0x00000008) Size: 152 Parity: 2 Src IF: 0
> Src Fap: 0 Dest Port: 0 Src Type: 0 Class: 0x00000000
> **********************************************************************
> Packet size: 146, XPP reason code: 0x00004747
>
> Traffic levels are very low; the connection to Carrier A shows
> approximately 40 Mbps.
>
> LP CPU on MLX-4-2 is:
>
> SLOT #: LP CPU UTILIZATION in %:
> in 1 second: in 5 seconds: in 60 seconds: in 300 seconds:
> 1: 1 1 1 1
>
>
> As a test, I shut the port between MLX-4-1 and MLX-4-2; CPU usage on
> MLX-4-1 immediately dropped to 1%.
>
>
> No routing protocols run over the GRE tunnels; we announce /24s and the
> like and route them through the tunnels using static routes.
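> For reference, each tunnel is a plain GRE interface with a static route for
> the customer prefix pointed through it, roughly like the sketch below
> (addresses are placeholders; NetIron-style syntax assumed):
>
>   interface tunnel 1
>    tunnel mode gre ip
>    tunnel source 192.0.2.1
>    tunnel destination 198.51.100.1
>    ip address 10.255.0.1/30
>   ip route 203.0.113.0/24 10.255.0.2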
>
>
>
> Show port output on MLX-4-1 for the port facing MLX-4-2:
>
> Port is not enabled to receive all vlan packets for pbr
> MTU 1548 bytes, encapsulation ethernet
> Openflow: Disabled, Openflow Index 1
> Cluster L2 protocol forwarding enabled
> 300 second input rate: 64599535 bits/sec, 61494 packets/sec, 0.74% utilization
> 300 second output rate: 2468 bits/sec, 4 packets/sec, 0.00% utilization
> 82862765 packets input, 10844340289 bytes, 0 no buffer
> Received 25656 broadcasts, 27667 multicasts, 82809442 unicasts
> 0 input errors, 0 CRC, 0 frame, 0 ignored
> 0 runts, 0 giants
> NP received 82871502 packets, Sent to TM 82860777 packets
> NP Ingress dropped 10729 packets
> 9484 packets output, 726421 bytes, 0 underruns
> Transmitted 127 broadcasts, 553 multicasts, 8804 unicasts
> 0 output errors, 0 collisions
> NP transmitted 9485 packets, Received from TM 48717 packets
>
> Show port output on MLX-4-2 for the port facing MLX-4-1:
>
> Port is not enabled to receive all vlan packets for pbr
> MTU 1548 bytes, encapsulation ethernet
> Openflow: Disabled, Openflow Index 1
> Cluster L2 protocol forwarding enabled
> 300 second input rate: 2416 bits/sec, 3 packets/sec, 0.00% utilization
> 300 second output rate: 64189791 bits/sec, 61109 packets/sec, 0.74% utilization
> 5105571056 packets input, 760042160157 bytes, 0 no buffer
> Received 1874232 broadcasts, 5287030 multicasts, 5098409794 unicasts
> 0 input errors, 0 CRC, 0 frame, 0 ignored
> 0 runts, 0 giants
> NP received 5105571056 packets, Sent to TM 5105113719 packets
> NP Ingress dropped 457337 packets
> 590086066756 packets output, 81697023432476 bytes, 0 underruns
> Transmitted 129784095 broadcasts, 208762136 multicasts, 589747520525 unicasts
> 0 output errors, 0 collisions
> NP transmitted 590086072891 packets, Received from TM 590091974310 packets
>
>
> Cheers
>
>