[j-nsp] Netscreen 5400 per-UDP-port bandwidth cap?

Alex alex.arseniev at gmail.com
Fri Mar 5 05:10:13 EST 2010


Phil,
Do you have the UDP flood screen enabled? If so, what threshold and UDP 
packet size are you using?
What you describe below is very similar to how the UDP (and ICMP) flood 
screens operate.
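
From memory, the commands to check look roughly like this (the zone name is 
just an example, and exact syntax can differ between ScreenOS releases):

get zone untrust screen
get counter screen zone untrust
set zone untrust screen udp-flood threshold 1000

The first shows which screens are enabled on the zone, the second shows the 
per-screen drop counters (a steadily climbing udp-flood counter would be 
telling), and the third sets the threshold in packets per second.
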
Rgds
Alex

----- Original Message ----- 
From: "Phil Mayers" <p.mayers at imperial.ac.uk>
To: <juniper-nsp at puck.nether.net>
Sent: Thursday, March 04, 2010 5:40 PM
Subject: [j-nsp] Netscreen 5400 per-UDP-port bandwidth cap?


> All,
>
> I'm working with JTAC on this, but they're stumped so far and I thought 
> I'd throw it out here.
>
> We have an NSRP pair of netscreen 5400s in a slightly complex 
> configuration.
>
> Each firewall has a single 10gig port with multiple sub-ints. Each sub-int 
> is bound to a zone and the netscreens route traffic between them (and 
> apply policy). Rather than using VSIs we configure downstream BGP routing 
> policy to split traffic between the firewalls very successfully. 
> Effectively, the NSRP serves only to sync up the address & policy configs.
>
> We have recently discovered that UDP flows from any (src, port) to a 
> specific (dst, port) pair seem to be capped at the weirdly precise value of 
> 5.8Mbit/sec. See below for more info on the exact traffic pattern.
>
> That is, if we have:
>
> source1, sport 1234, dport 5000 - offers 10mbit/sec of UDP
> source2, sport 5678, dport 5000 - offers 10mbit/sec of UDP
>
> At the receiver, destination port 5000 receives 5.8mbit/sec total, split 
> approx 50/50 between the two senders, i.e. ~3mbit/sec received per sender 
> and ~70% loss per flow. Stopping one sender means the other gets the full 
> ~5.8Mbit/sec.
>
> There are apparently no traffic limits using TCP or, weirdly, GRE (using 
> GRE p2p interfaces from source to dest). I'm using two Linux boxes and 
> iperf to generate the test traffic:
>
> iperf -i 1 -u -c dest -b 10M
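>
> ...on each sender, with the receiver end running the matching UDP server, 
> which from memory is just:
>
> iperf -s -u -i 1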
>
> ...but the original report was for real UDP traffic.
>
> Now, as far as I'm aware there are no rate-limiting, bandwidth, QoS or 
> other policies configured on the firewall, and definitely none on the 
> router it is attached to. JTAC have not spotted anything.
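>
> For anyone wondering about the Cisco side, the obvious check there is 
> something along the lines of:
>
> show policy-map interface TenGigabitEthernet1/1
>
> (interface name is just an example) and there is no service-policy applied 
> to the port facing the firewall.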
>
> If I take a SPAN (port mirror) of the 10gig port on the router facing the 
> firewall, I see the following traffic pattern, correlating in/out packets 
> by IP ID# and source/dest MAC address:
>
> 0.00 - 0.59 seconds - packets flow normally (~725kbyte of data)
> 0.60 - 0.99 seconds - packets go into the netscreen but do not come back 
> out
> 1.00 - 1.59 seconds - packets flow normally
>
> ...and so on.
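>
> Doing the arithmetic on that pattern: ~725kbyte forwarded per 1-second 
> cycle works out as
>
> 725 kbyte * 8 = ~5800 kbit = ~5.8 Mbit per second
>
> ...which matches the cap exactly, so it looks as though the box passes a 
> fixed amount of data and then drops the flow for the remainder of each 
> second.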
>
> It's frankly bizarre.
>
> I have verified that this still occurs with "no-hw-sess" on the policy 
> after being advised to try this by JTAC.
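>
> (For reference, that is set per-policy, something along the lines of 
> "set policy id <id> no-hw-sess" - exact syntax may differ - which should 
> force sessions for that policy to be handled in software rather than in 
> the ASICs.)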
>
> ScreenOS version is 6.2.0r4.0, hardware is M2 management blades and 2XGE 
> 10gig linecards.
>
> The router attached to the firewall is a 6509/sup720 and I am confident it 
> is not implicated in the loss.
>
> Any suggestions?
>
> Cheers,
> Phil
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> 


