[c-nsp] Large networks

Shaun R. mailinglists at unix-scripts.com
Tue Aug 25 19:40:39 EDT 2009


I worked for a company in the past that had a very large flat network.  The 
network consisted of two /20s (255.255.240.0) configured on a 7206 NPE-300 
router that connected to a bunch of Catalyst 2924 switches (the old-school 
ones).  Everything was on VLAN 1.  The company was a small hosting company 
that provided mainly dedicated servers, and it was constantly having 
problems with what I called broadcast attacks: the network graphs would show 
traffic spike on all interfaces, the 100 Mbit uplink between the switches 
would normally saturate, and the network would die.  From that experience I 
took my time to design and deploy my own network as correctly as possible.  
I put each customer on their own VLAN with their own subnet carved out.  My 
3750 stack is my access/core and I have 7206 VXR NPE-G2s for borders 
(BGP/OSPF).  Every edge switch is uplinked twice with GigE (2 Gbit of 
bandwidth) and customers are normally uplinked at 100 Mbit.  For years this 
was fine and worked great, but when deploying our own servers I always found 
myself kicking out a new VLAN and subnet.  I wasn't sure it was needed given 
that they were our own servers (our own servers meaning that we manage them; 
customers do not have admin/root access).
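For reference, the per-customer layout on the access stack looks roughly like the sketch below.  The VLAN number, customer name, prefix, and port are made up purely for illustration:

```
! Hypothetical example: one customer, one VLAN, one subnet (placeholders)
vlan 110
 name customer-acme
!
interface Vlan110
 description customer-acme gateway
 ip address 192.0.2.1 255.255.255.248
!
interface FastEthernet1/0/10
 description customer-acme server
 switchport mode access
 switchport access vlan 110
```

The point is just that each customer's broadcast domain ends at their own SVI, so a broadcast storm on one port can't take out anyone else.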

Then came virtual server hosting.  With VPS hosting we have one physical 
server (a host) that we carve out a /26 for and assign to its own VLAN.  
We've done this for a few years now and it's worked fine, but it's also 
caused some problems.  One problem is that some hosts need more IPs than 
others: we end up with some hosts having 20 IPs free in their subnet while 
other hosts have none and need another allocation assigned to them.  Also, 
we cannot move a customer from one host to another without making the 
customer change IP addresses.  For a while now I've been wanting to just 
combine all the VPS hosts into one VLAN and carve out /24s as needed.  Then 
each host could just get an IP from that pool, and when the pool started to 
become depleted I could assign another /24.  Right now, when totalling all 
the IPs assigned to hosts but sitting free and unused, we have thousands, 
and that's wasted space.  Each host can have up to 40 virtual servers on it, 
so let's say I combine 40 hosts with a total of 1000 virtual servers: that's 
now 1000 servers in one VLAN.  These virtual servers run on the Xen platform 
connected to the host's bridge interface, which uses ebtables to filter 
traffic at layer 2 by MAC and source address.
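For what it's worth, the per-guest ebtables filtering on each host looks roughly like this.  The vif name, MAC, and IP are placeholders for a single guest; the real rules are generated per guest:

```
# Hypothetical anti-spoofing rules for one Xen guest on the host bridge.
# vif1.0, 00:16:3e:aa:bb:cc, and 192.0.2.10 are made-up placeholders.

# Drop frames from the guest's vif that don't carry its assigned MAC.
ebtables -A FORWARD -i vif1.0 -s ! 00:16:3e:aa:bb:cc -j DROP

# Drop IPv4 packets that don't carry its assigned source IP.
ebtables -A FORWARD -i vif1.0 -p IPv4 --ip-source ! 192.0.2.10 -j DROP

# Drop ARP replies/requests claiming any other IP (ARP spoofing).
ebtables -A FORWARD -i vif1.0 -p ARP --arp-ip-src ! 192.0.2.10 -j DROP
```

That last rule matters for the combined-VLAN idea: it's what would keep one guest from poisoning ARP for 1000 neighbours.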

Another problem that company I worked for had was that they were calculating 
bandwidth usage off the 2924 network interfaces.  The problem, we later 
found, was that ARP/broadcast traffic ended up adding a huge amount to each 
customer's bill at the end of the month.  I want to say each customer had 
around 4-6 GB of transfer tacked onto their bandwidth usage.
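As a sanity check on that number, here's a back-of-envelope calculation.  The packet rate and frame size are pure assumptions, just to show the order of magnitude is plausible for a flat network that size:

```shell
#!/bin/sh
# Rough estimate of broadcast traffic billed to every port on a flat network.
# PPS and FRAME are assumed values, not measurements from the actual network.
PPS=30                    # assumed broadcast/ARP packets per second hitting every port
FRAME=64                  # minimum-size Ethernet frame, in bytes
SECS=$((30 * 24 * 3600))  # seconds in a 30-day month
BYTES=$((PPS * FRAME * SECS))
echo "~$((BYTES / 1000000000)) GB/month of broadcast counted against each port"
```

At an assumed 30 broadcast pps that works out to roughly 4 GB/month per port, which is right in the range they were seeing.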

So what I'm really asking is...
    1. When should I really cut out a new VLAN for a server or group of 
servers for my own use (meaning the customer doesn't have admin privileges 
on the machine)?
    2. Was the problem with the large network that they didn't cut the /20s 
into smaller subnets, or that they didn't cut them into smaller subnets AND 
put them into their own VLANs?
    3. Say I combine all the VPS hosts, 1000 virtual servers in 1 VLAN, with 
say 15 /24's... Is this OK?  How does this compare to having, say, 25 
VLANs/subnets with each physical host in one of them?

Anything else i should be worried about here?

~Shaun 



