[c-nsp] Equipment for a large-ish LAN event

Chuck Church chuckchurch at gmail.com
Wed Dec 9 08:13:09 EST 2015


Isn't game traffic fairly small in bandwidth terms, but very latency
sensitive?  QoS seems like a good fit here.  Priority-queue the game traffic
based on a matching ACL, and best-effort everything else, re-marking it as
necessary.  Based on previous years, what are the true bandwidth needs?  
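
Something like this, roughly (a hypothetical IOS sketch only - the UDP port
range, bandwidth percentage, and interface name are placeholders; match the
ACL to whatever ports your games actually use):

```
! Hypothetical example: classify game traffic via ACL, priority-queue it,
! re-mark everything else to best effort. Port range is a placeholder.
ip access-list extended GAME-TRAFFIC
 permit udp any any range 27000 27050
!
class-map match-all GAME
 match access-group name GAME-TRAFFIC
!
policy-map LAN-EDGE
 class GAME
  priority percent 10
  set dscp ef
 class class-default
  set dscp default
  fair-queue
!
interface GigabitEthernet1/0/1
 service-policy output LAN-EDGE
```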

Chuck

-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of
Laurent Dumont
Sent: Tuesday, December 08, 2015 4:23 PM
To: cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] Equipment for a large-ish LAN event

Here is a rough draft of our usual topology. Imagine a few more "Players"
sections scattered around the 2x10G rings.

We are already planning for IPv6, but that really depends on our upstream's
ability to actually provide the feature. Very good point about BCP38; that
is not something we had considered. We usually segment the network per row
of tables, which ends up being 48 players in the same VLAN.
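
For the per-row VLANs plus BCP38, something along these lines could work on
the access layer (a hypothetical sketch - VLAN IDs and addressing are
placeholders, and uRPF support varies by platform; where uRPF is
unavailable, a per-VLAN ingress ACL permitting only the row's subnet does
the same job):

```
! Hypothetical sketch: one dual-stack SVI per row of tables, with strict
! uRPF for BCP38-style antispoofing. Addresses/VLAN IDs are placeholders.
vlan 110
 name ROW-01
!
interface Vlan110
 ip address 10.10.110.1 255.255.255.0
 ipv6 address 2001:db8:110::1/64
 ip verify unicast source reachable-via rx
!
interface range GigabitEthernet1/0/1 - 48
 switchport mode access
 switchport access vlan 110
```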

Thanks!

      +-----------2x10G-----------+------ CORE - Routing to external
      |                           |
      |                           |
2x10G |                           | 2x10G
      |                           |
      |                           |
      +-----------2x10G-----------+       3650 - Distribution switches
                                  |       for the 2x10G ring
                                  |
                                  |  2960X as Access with 2x1G uplinks
                                  |
              +--------+----------+
              |                   |
              |                   |  Players
              |                   |
              +-------------------+



On 12/8/2015 3:34 PM, Mikael Abrahamsson wrote:
> On Tue, 8 Dec 2015, Laurent Dumont wrote:
>
>> We were looking at either the Nexus 7004 chassis or the ASR 9004/9006 
>> chassis as the core "switch". We would then use 48xGigE and 1x24 SFP+ 
>> line cards. Our actual port requirements are somewhat flexible, but we 
>> do need at least 4x10G fiber ports, and at least 48 GigE ports for 
>> players or access switches.
>
> I don't really understand your topology. 2001-2004 I was involved in 
> providing network connectivity to around 2500-4500 users at Dreamhack, 
> back then the largest LAN in the world as far as we knew. Back then we 
> made do with 2x100FE for 20 computers, and the core connectivity was 
> 2xGE. I'd say your design seems fairly similar, but with 2x10GE 
> instead; I'm just guessing from what you wrote.
>
> ASR9k has been used before and will do just fine. Dreamhack has grown 
> a bit since I was involved:
>
>
> http://www.cisco.com/c/dam/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/dreamhack_v4acs_final.pdf
>
> http://www.extremetech.com/extreme/107245-inside-the-worlds-largest-lan-party
>
> http://www.pack4dreamhack.nl/interviews/dreamhack-behind-the-scenes-network/
>
>
>> I'm also open to any suggestion within the Cisco portfolio. Our needs 
>> are pretty standard and nothing extraordinary, but we would like to use 
>> this opportunity to try new equipment and technologies that are 
>> usually only seen within ISPs and large networks.
>
> Don't forget to provide dual stack (IPv4 and IPv6) connectivity. Limit 
> your broadcast domains (I'd say 20-50 users per broadcast domain), and 
> make sure you do antispoofing (BCP38) for everybody.
>

_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
