<div dir="auto">Have you disabled icmp redirects? That is a common cause of unexplained high cpu utilization. I think the command is: no ip redirect (either interface or global).<div dir="auto"><br></div><div dir="auto">Also, which code version are you running?</div><div dir="auto"><div dir="auto"><br></div><div dir="auto">-- </div><div dir="auto">Eldon</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Apr 18, 2017 7:14 PM, "Joe Lao" <<a href="mailto:Joelao8392@mail.com">Joelao8392@mail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:Verdana;font-size:12.0px"><div>Hello List</div>

My colleague posted to this list last month about an LP CPU issue we experienced on MLX routers with GRE tunnels.

The issue did not resolve itself; instead, we asked our customers not to send outbound traffic through us.

However, a new issue has arisen.

Our topology is as follows:

CARRIER A -----> MLX-4-1 ---- MLX-4-2 ----> CARRIER B

The Carrier B connection is specifically designed to absorb attacks; Carrier A backhauls clean/protected traffic.

MLX-4-2 holds our GRE tunnels.

We are now seeing 95% LP CPU on MLX-4-1, and a packet capture shows only GRE packets from MLX-4-2 destined for the customer's GRE endpoint:

SLOT #: LP CPU UTILIZATION in %:
        in 1 second: in 5 seconds: in 60 seconds: in 300 seconds:
1:      94           94            94             94

LP-1#show tasks
Task Name      Pri State PC       Stack    Size   CPU Usage(%) task vid
-------------- --- ----- -------- -------- ------ ------------ --------
con            27  wait  0005c710 040c5dc8 32768  0            0
mon            31  wait  0005c710 041b7f10 8192   0            0
flash          20  wait  0005c710 041c6f40 8192   0            0
dbg            30  wait  0005c710 041beec0 16384  0            0
main           3   wait  0005c710 23cc6f40 262144 1            101
LP-I2C         3   wait  0005c710 27d70ee0 4096   0            101
LP-Assist      3   wait  0005c710 29bbef00 32768  0            101
LP-FCopy       3   wait  0005c710 29bc3f00 16384  0            101
LP-VPLS-Offld  3   wait  0005c710 29bc8f00 16384  0            101
LP-OF-Offld    3   wait  0005c710 29bcdf00 16384  0            101
LP-TM-Offld    3   wait  0005c710 29bd2f00 16384  0            101
LP-Stats       3   wait  0005c710 29bd7f60 16384  0            101
LP-IPC         3   wait  0005c710 29c18f00 262144 0            101
LP-TX-Pak      3   wait  0005c710 29c21f00 32768  0            101
LP-RX-Pak      3   wait  0005c710 29c42f38 131072 97           101
LP-SYS-Mon     3   wait  0005c710 29c47f28 16384  0            101
LP-RTD-Mon     3   wait  0005c710 29c4cf08 16384  0            101
LP-Console     3   ready 20b636c0 29c6df78 131072 0            101
LP-CPU-Mon     3   wait  0005c710 29c96f40 163840 0            101

Sample captured packet (MLX-4-2 -> client GRE endpoint):

xxxxxxxx -> xxxxx [Protocol:47]
**********************************************************************
[ppcr_rx_packet]: Packet received
Time stamp : 00 day(s) 00h 14m 33s
TM Header: [ 8026 2000 0000 ]
Type: Fabric Unicast(0x00000008) Size: 152 Parity: 2 Src IF: 0
Src Fap: 0 Dest Port: 0 Src Type: 0 Class: 0x00000000
**********************************************************************
Packet size: 146, XPP reason code: 0x00004747

Traffic levels are very low; the connection to Carrier A shows approximately 40 Mbps.

LP CPU on MLX-4-2 is:

SLOT #: LP CPU UTILIZATION in %:
        in 1 second: in 5 seconds: in 60 seconds: in 300 seconds:
1:      1            1             1              1

As a test I shut the port between MLX-4-1 and MLX-4-2; CPU usage on MLX-4-1 immediately dropped to 1%.
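(For reference, the test was just a plain interface disable along these lines; "ethernet 2/1" below is a placeholder, not our real inter-router port, and the port was re-enabled afterwards with "enable".)

configure terminal
 interface ethernet 2/1
  disable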

We run no routing protocols over the GRE tunnel; we announce /24s and similar prefixes and route them through the tunnel using static routes, roughly as sketched below.
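The tunnel and routing on MLX-4-2 look roughly like this; the addresses, tunnel number, and prefix are placeholders rather than our real configuration, and the exact syntax may differ slightly by NetIron code version:

interface tunnel 1
 tunnel mode gre ip
 tunnel source 192.0.2.1
 tunnel destination 198.51.100.1
 ip address 10.255.0.1/30
!
ip route 203.0.113.0/24 10.255.0.2

Here 198.51.100.1 stands in for the customer's GRE endpoint, and the static route simply points the announced /24 at the customer's side of the tunnel.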

Show port on MLX-4-1 towards MLX-4-2:

Port is not enabled to receive all vlan packets for pbr
MTU 1548 bytes, encapsulation ethernet
Openflow: Disabled, Openflow Index 1
Cluster L2 protocol forwarding enabled
300 second input rate: 64599535 bits/sec, 61494 packets/sec, 0.74% utilization
300 second output rate: 2468 bits/sec, 4 packets/sec, 0.00% utilization
82862765 packets input, 10844340289 bytes, 0 no buffer
Received 25656 broadcasts, 27667 multicasts, 82809442 unicasts
0 input errors, 0 CRC, 0 frame, 0 ignored
0 runts, 0 giants
NP received 82871502 packets, Sent to TM 82860777 packets
NP Ingress dropped 10729 packets
9484 packets output, 726421 bytes, 0 underruns
Transmitted 127 broadcasts, 553 multicasts, 8804 unicasts
0 output errors, 0 collisions
NP transmitted 9485 packets, Received from TM 48717 packets

Show port on MLX-4-2 towards MLX-4-1:

Port is not enabled to receive all vlan packets for pbr
MTU 1548 bytes, encapsulation ethernet
Openflow: Disabled, Openflow Index 1
Cluster L2 protocol forwarding enabled
300 second input rate: 2416 bits/sec, 3 packets/sec, 0.00% utilization
300 second output rate: 64189791 bits/sec, 61109 packets/sec, 0.74% utilization
5105571056 packets input, 760042160157 bytes, 0 no buffer
Received 1874232 broadcasts, 5287030 multicasts, 5098409794 unicasts
0 input errors, 0 CRC, 0 frame, 0 ignored
0 runts, 0 giants
NP received 5105571056 packets, Sent to TM 5105113719 packets
NP Ingress dropped 457337 packets
590086066756 packets output, 81697023432476 bytes, 0 underruns
Transmitted 129784095 broadcasts, 208762136 multicasts, 589747520525 unicasts
0 output errors, 0 collisions
NP transmitted 590086072891 packets, Received from TM 590091974310 packets

Cheers