[j-nsp] Big flows up to 320 Gbs

adamv0025 at netconsultings.com
Mon Sep 28 03:42:31 EDT 2020


> james list
> Sent: Saturday, September 26, 2020 11:57 AM
> 
> Dear experts
> I have a project to connect 16 servers (two interfaces each) at layer 2,
> for a total of 32 x 10Gbs server interfaces, in order to set up a big
> data solution.
> 
> Under normal conditions these interfaces must have full L2 bandwidth
> available to transmit among themselves, with switch redundancy (in a
> fault condition 160 Gbs is enough).
> 
> Since this kind of huge bandwidth requirement could cause bottlenecks in
> a datacenter LAN environment, I was thinking of setting up a separate LAN
> architecture.
> 
> I was thinking of setting up a virtual chassis environment with 2 x
> qfx5100, using multiple 40 Gbs interfaces as VC ports.
> 
A couple of thoughts:

Routing vs switching:
First of all, try to push for an L3 solution; if the app itself doesn't
support it, look into routing on the host (cRPD, etc.) so that all you need
to worry about outside of the hosts is routing, not switching.
With L3 you also don't need to worry about split-brain scenarios with
virtual chassis or multi-chassis LAG.
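
As a rough illustration of what routing on host can look like (a sketch
only: the AS numbers, addresses and policy name below are made up, not a
tested design; cRPD is configured with ordinary Junos syntax), each server
runs one eBGP session per uplink and advertises a /32 loopback:

  # hypothetical cRPD config on one server
  set routing-options router-id 10.0.0.11
  set routing-options autonomous-system 65011
  set policy-options policy-statement EXPORT-LO0 term 1 from protocol direct
  set policy-options policy-statement EXPORT-LO0 term 1 from route-filter 10.0.0.11/32 exact
  set policy-options policy-statement EXPORT-LO0 term 1 then accept
  set policy-options policy-statement EXPORT-LO0 term 2 then reject
  set protocols bgp group TOR type external
  set protocols bgp group TOR peer-as 65000
  set protocols bgp group TOR export EXPORT-LO0
  set protocols bgp group TOR multipath
  set protocols bgp group TOR neighbor 192.0.2.0
  set protocols bgp group TOR neighbor 192.0.2.2

With multipath, traffic ECMPs across both ToRs, and a switch failure simply
withdraws one path instead of relying on any VC/MC-LAG failover logic.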

Scalability and throughput:
Note that creating a dedicated 2 x qfx5100 pod will limit the solution to
the bandwidth of a single qfx5100, which may or may not be sufficient to
grow the solution in the future, and a later migration to a standard folded
Clos fabric will then be complicated.
Also note that the qfx5100 might have throughput limitations at packet
sizes smaller than 1500B, so check with the app team on the expected packet
size and compare that against the switch specs, or better yet, do your own
performance testing.
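
As a back-of-the-envelope check (generic Ethernet arithmetic, not
qfx5100-specific figures): a 64B frame occupies 84B on the wire once
preamble and inter-frame gap are added, so 320 Gbs of 64B frames is about
320e9 / (84 x 8) ~= 476 Mpps, whereas the same 320 Gbs at 1500B frames
(1520B on the wire) is only about 26 Mpps. It's the first figure you need
to hold up against the switch's rated Mpps.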

Buffers:
If you start to mix and match 40G and 10G links (e.g. in a failure
condition), you need to worry about buffering at the step-down from 40G to
10G, since a 40G sender can transiently overrun a 10G egress port.
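
At minimum you can watch for this on the box, and if it bites, the shared
buffer pool can be re-carved (a sketch only: the interface name and
percentages below are illustrative, the partition percentages have to add
up to 100, and you should check the CoS docs for your release):

  # watch for tail drops on a congested 10G egress port
  show interfaces queue xe-0/0/0

  # shift more of the shared buffer towards lossy/best-effort traffic
  set class-of-service shared-buffer ingress buffer-partition lossy percent 75
  set class-of-service shared-buffer ingress buffer-partition lossless percent 15
  set class-of-service shared-buffer ingress buffer-partition lossless-headroom percent 10
  set class-of-service shared-buffer egress buffer-partition lossy percent 75
  set class-of-service shared-buffer egress buffer-partition lossless percent 15
  set class-of-service shared-buffer egress buffer-partition multicast percent 10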
 

adam



