[f-nsp] packetloss B2P622 POS blades

Ronald Esveld ronald.esveld at qi.nl
Tue Feb 10 09:42:48 EST 2009


Jeroen,

Try 'hw bc' to see if anything is wrong with the hardware buffers.

Ronald


With kind regards, 

Ronald Esveld
network engineer

Qi ict
Delftechpark 35-37
Postbus 402, 2600 AK Delft

T : +31 15 888 0 444
F : +31 15 888 0 445
E : mailto:ronald.esveld at qi.nl
I : http://www.qi.nl/

Qi ict events:
http://www.qi.nl/cms/publish/content/showpage.asp?pageid=449

-----Original message-----
From: foundry-nsp-bounces at puck.nether.net
[mailto:foundry-nsp-bounces at puck.nether.net] On behalf of Jeroen Oldenhof
Sent: Tuesday, February 10, 2009 14:29
To: foundry-nsp at puck.nether.net
Subject: [f-nsp] packetloss B2P622 POS blades

Hi!

Does anyone have experience with the B2P622 Foundry POS blades?

We deployed them around nine months ago when we took our STM-1 into
production, but since then we have been facing constant packet loss of
about 1%.

Our setup (hope it remains readable):
                  +-------+    STM-1    +-------+
Transits / <---+--+ RTR-A +=============+ RTR-B +--- <internal>
IX's           |  +-------+             +-------+
         +-----+-+
         |monitor|
         +-------+

RTR-A & RTR-B: Foundry BI4000 w/MGMT4 IronCore running 07.8.04 (B2R07804).
RTR-A is an edge router facing several transits and exchanges. Through
the STM it is connected to RTR-B using iBGP.
The monitor box is used for smokeping and other management tasks.

At first we performed eBGP routing to the IXs and transits on RTR-A.
This caused heavy packet loss, even when pinging from the monitor box to
RTR-A directly. With no traffic the packet loss is zero, but it gets
much worse as the amount of traffic grows: around 4% at 60 Mbit/s, with
RTR-A also pulling around 20% CPU. I assume the POS interface has little
or no CAM for layer 3, so forwarding is punted to the CPU heavily; since
that is resource-consuming, I guess some packets get dropped there.

We then moved all routing and BGP functionality to RTR-B, making RTR-A
simply a breakout box. The packet loss is reduced, but still around 1%,
and the smokeping graphs still look awful.
We also swapped POS/FE/MGMT blades and ports and tried different
firmware versions on both ends. There are no port errors on the Ethernet
or POS ports on either side, and the STM provider (TATA) reports no
errors.

On all paths outside the STM there is ZERO loss: from the monitor box to
several Internet destinations, and from and to several internal hosts
beyond the routers. So the STM and its interfaces are definitely the
problem.
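
One way to pin the drops on the POS hop itself is to compare the IF-MIB
unicast packet counters on both ends of the link over the same window; a
minimal sketch, assuming net-snmp's snmpget on the monitor box, SNMP
read access on both routers, and placeholder community/ifIndex values:

#!/usr/bin/env python
# Sketch: packets sent into the STM-1 by RTR-A minus packets received
# out of it by RTR-B over the same interval = loss on the POS hop.
# Community, addresses and ifIndex values are placeholders; 32-bit
# counter wrap is ignored (safe for a 300-second window at these rates).
import subprocess
import time

COMMUNITY = "public"                      # placeholder
RTR_A, RTR_B = "192.0.2.1", "192.0.2.2"   # placeholders
IFINDEX_A = IFINDEX_B = "66"              # ifIndex of POS 4/2 on each box

def ucast(host, direction, ifindex):
    """Fetch IF-MIB ifInUcastPkts / ifOutUcastPkts for one interface."""
    oid = "IF-MIB::if%sUcastPkts.%s" % (direction, ifindex)
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", host, oid], text=True)
    return int(out.strip())

tx0, rx0 = ucast(RTR_A, "Out", IFINDEX_A), ucast(RTR_B, "In", IFINDEX_B)
time.sleep(300)
tx = ucast(RTR_A, "Out", IFINDEX_A) - tx0
rx = ucast(RTR_B, "In", IFINDEX_B) - rx0
print("sent %d, received %d, loss %.2f%%" % (tx, rx, 100.0 * (tx - rx) / tx))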

I figured out that some POS debugging can be done using 'dm console-on
4 2', and 'dm cli 4 2 Q' returns some interesting commands, but I can't
find a way to use them properly.

So, does anyone on this list have experience with these blades, or has
anyone encountered similar issues?

Thanks!

Jeroen Oldenhof

telnet at RTR-B# show pos 4/2

POS4/2 is up, line protocol is up
  No port name
  Hardware is Packet over Sonet
  Peer Internet address is 0.0.0.0
  MTU 4470 bytes, encapsulation PPP, clock is line
  Framing is SDH, BW 155000Kbit, CRC 32
  Loopback not set, keepalive is set (10 sec), scramble disabled
  LCP state is opened, IPCP state is init
  300 second input rate: 50423416 bits/sec, 8213 packets/sec
  300 second output rate: 12068536 bits/sec, 5802 packets/sec
  2135136909 packets input, 11504550338524 bytes, 0 no buffer
  Received 0 CRCs, 0 shorts, 0 giants, 0 alignments
  1940413782 packets output, 2732631789028 bytes, 0 underruns
  Line protocol is UP
  Member of 5 L2 VLANs, port is tagged, port state is FORWARDING
  STP configured to ON
  Configured Path Trace String :
  Received Path Trace String : RTR-A 4/2
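
As a sanity check on those rates, the averages work out to ordinary
packet sizes, and every error counter above is zero, so the ~1% is being
dropped silently rather than showing up as CRCs, shorts, or giants:

# numbers taken from the 'show pos 4/2' output above
in_bps, in_pps = 50423416, 8213
out_bps, out_pps = 12068536, 5802
print("avg in  %.0f bytes" % (in_bps / 8.0 / in_pps))    # ~767 bytes
print("avg out %.0f bytes" % (out_bps / 8.0 / out_pps))  # ~260 bytes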

_______________________________________________
foundry-nsp mailing list
foundry-nsp at puck.nether.net
http://puck.nether.net/mailman/listinfo/foundry-nsp


