[j-nsp] QFX vs EX4550 as collapsed core
Amos Rosenboim
amos at oasis-tech.net
Fri Apr 26 11:09:40 EDT 2013
The EX4550's packet buffers are not that big.
We are seeing tail drops on ports that show only 5-6 Gbps of utilization (per the output of the 'monitor interface' command).
It's caused by (micro)bursts, and there is not much to do about it; deeper buffers would certainly help.
If I remember correctly, the QFX uses a cut-through design, so these problems are less relevant there.
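For what it's worth, per-queue drop counters can be inspected with 'show interfaces queue <interface>'. If one queue is taking all the hits, a partial mitigation (a sketch only; the scheduler and map names below are made up, and how much headroom you can actually reclaim depends on the shared-buffer architecture) is to hand that queue a larger slice of the buffer via CoS:

    # Hypothetical names (BULK, DC-CORE); adjust percentages to taste.
    set class-of-service schedulers BULK transmit-rate percent 80
    set class-of-service schedulers BULK buffer-size percent 80
    set class-of-service scheduler-maps DC-CORE forwarding-class best-effort scheduler BULK
    set class-of-service interfaces xe-0/0/0 scheduler-map DC-CORE

It won't make microbursts go away, but it can move the drop point a little.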
Amos
Sent from my iPhone
On 26 Apr 2013, at 10:42, "Tore Anderson" <tore at fud.no> wrote:
* Andy Litzinger
Hi, we're deploying to a new environment where there will be about
500 virtual servers hosted entirely on Cisco UCS. The core would
mostly be hosting uplinks to the UCS Fabric Interconnects (End Host
Mode), inter-VLAN routing, and links to service appliances (FW/LB) and
the Internet edge routers. Nearly all of our traffic is north/south,
from server to LB to the Internet, or from server to LB to another
server. The core would mostly be routing a few (dozens at most) routes,
so RIB/FIB size shouldn't be a great concern. Most links will be 10G,
but there are a handful of 1G management links.
We're considering either the QFX3500 (x2) or the EX4550 (x2 as a VC)
to fill this role (or potentially the Cisco Nexus 6001).
* are there any L3 benefits of one over the other? I haven't found
clear numbers in the datasheets
We're using 2-node EX4500 VCs in much the same way as you describe, as
core switches in a couple of our data centres. We're quite happy with
them; they've been trouble-free so far (knock on wood).
We briefly considered using the QFX3500s instead for a recent deployment
but quickly disqualified them when we realised they do not support IPv6
at all.
The EXes support IPv6 fairly well, although according to the specs they
have an upper limit of 1000 IPv6 neighbour entries, which is
disconcertingly small, at least if they're handling server LANs and not
just router-to-router links. Because each host has a link-local address
in addition to its global one, 1000 entries will only cover about 500
hosts.
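If anyone wants to keep an eye on this in production, the current table occupancy is easy to check from the CLI (a sketch; the hostname prompt is made up):

    tore@ex4500> show ipv6 neighbors | count

Comparing that count against the 1000-entry ceiling, and remembering that each host burns two entries (global plus link-local), gives a rough idea of how much headroom is left.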
Tore