[j-nsp] Advice on a 100Gbps+ environment
Phil Bedard
philxor at gmail.com
Tue Jul 2 07:31:31 EDT 2013
Do you have to use an L2 switch at the end of the row? We have largely gotten rid of L2 switches at peering facilities since 10GE density is high enough, there is less stat-mux gain these days, and ports are cheap enough on the MX, especially if the customer is pushing that much traffic via Nx10GE.
We use a combination of LAG and ECMP since we hit the old 16-member limit, and it seems to work well, at least on the MX. We have 200+ Gbps in numerous locations with this setup.
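Roughly what that looks like in Junos, as a minimal sketch (the interface names and addressing below are placeholders, not actual config): a LACP bundle for the LAG part, plus a per-flow load-balancing policy exported to the forwarding table so equal-cost next hops actually get installed and used:

    set chassis aggregated-devices ethernet device-count 4
    set interfaces xe-0/0/0 gigether-options 802.3ad ae0
    set interfaces xe-0/0/1 gigether-options 802.3ad ae0
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 unit 0 family inet address 192.0.2.1/31
    set policy-options policy-statement PFE-LB then load-balance per-packet
    set routing-options forwarding-table export PFE-LB

Despite the name, load-balance per-packet on the MX hashes per flow, so it doesn't reorder packets within a flow.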
The RIB capacity is large enough that I don't think that many sessions/routes is an issue, even with overlapping prefixes, but there are alternatives. As someone else mentioned, do loopback peering or LAG; most providers are willing to do one or both.
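For the loopback-peering option, a rough sketch (addresses and AS number are placeholders): static routes to the peer's loopback over each parallel link, eBGP multihop between loopbacks, and the same forwarding-table load-balancing export as above so traffic spreads across the links:

    set interfaces lo0 unit 0 family inet address 198.51.100.1/32
    set routing-options static route 198.51.100.2/32 next-hop 192.0.2.0
    set routing-options static route 198.51.100.2/32 next-hop 192.0.2.2
    set protocols bgp group PEER-X type external
    set protocols bgp group PEER-X multihop ttl 2
    set protocols bgp group PEER-X local-address 198.51.100.1
    set protocols bgp group PEER-X neighbor 198.51.100.2 peer-as 65001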
The MX works fine as a peering/transit router.
Phil
On Jul 2, 2013, at 4:58 AM, Morgan McLean <wrx230 at gmail.com> wrote:
> Hi,
>
> I've only really dealt with traffic levels under 20Gbps. I have a client
> that will be pushing over 100Gbps, and close to 200 within the next six
> months, at least that's the goal. Judging by the type of traffic it is... I
> could see it happening. I'm probably in over my head, but that's another
> topic.
>
> The plan is to run OSPF from a couple of existing MX480s I set up to a new
> switching core which is running VRRP and extending L2 out to existing end
> of row switches. This leaves me hoping that OSPF ECMP works well
> enough to push these kinds of traffic levels over a bunch of 10GE links,
> load balancing between the upstream MX routers. Can I rely on ECMP for this
> type of setup? Each MX will have about 50Gbps of provider connectivity to
> start, and will have ~150Gbps by the time the contracts ramp up. The MXs
> are not at the same site, so I'm limited to using 10G links site to site
> over their CWDM.
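On the OSPF ECMP question: it generally works well on the MX as long as you export a per-flow load-balancing policy to the forwarding table, and on newer Junos you can raise the ECMP limit above the old 16. A minimal sketch, with placeholder interface names and assuming the maximum-ecmp knob is available on your release/hardware:

    set chassis maximum-ecmp 32
    set protocols ospf area 0.0.0.0 interface ae0.0
    set protocols ospf area 0.0.0.0 interface ae1.0
    set policy-options policy-statement PFE-LB then load-balance per-packet
    set routing-options forwarding-table export PFE-LB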
>
> This leaves me then with a problem: getting a bunch of capacity in L2 form
> to the EOR switches. At least on the EX4500 (we will move to bigger), the
> max number of links in a LAG is 8, so 80Gbps, and I can't expect to really
> make complete use of that. Do people use MSTP for this purpose at all?
> Spreading environments over a couple of 80Gbps LAGs, or a few 40Gbps LAGs? I
> could also run 40GE ports in a LAG... but will Juniper devices allow LAGs
> with 40GE ports?
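For reference, an LACP trunk on the EX4500 looks roughly like this (interface names are placeholders and only the first few members are shown; repeat the member line for up to 8 links). 40GE members are also fine in a bundle on platforms that have the ports, but typically all members of one LAG need to be the same speed:

    set chassis aggregated-devices ethernet device-count 2
    set interfaces xe-0/0/0 ether-options 802.3ad ae0
    set interfaces xe-0/0/1 ether-options 802.3ad ae0
    set interfaces xe-0/0/2 ether-options 802.3ad ae0
    set interfaces xe-0/0/3 ether-options 802.3ad ae0
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
    set interfaces ae0 unit 0 family ethernet-switching vlan members all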
>
> Also... any tips on handling all of the provider connections? I don't know
> if they will be giving us LAGs, which means I'll potentially be running
> 10-15 BGP sessions per MX, which is a lot of routes. They're running the
> RE-S-1800 quad-core with 16GB, but should I opt for partial routes or just a
> default at that point? It's possible all of the links will be from the same
> provider.
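On that last point: the RE-S-1800 with 16GB should handle multiple full tables, but if the sessions are all to the same provider you can also take default-only (or default plus their customer routes) and enable multipath so parallel sessions to the same AS share load. A rough sketch with placeholder neighbors, AS number, and policy names; use multipath multiple-as instead if the sessions land on different ASes:

    set protocols bgp group TRANSIT type external
    set protocols bgp group TRANSIT multipath
    set protocols bgp group TRANSIT import DEFAULT-ONLY
    set protocols bgp group TRANSIT neighbor 203.0.113.1 peer-as 65010
    set protocols bgp group TRANSIT neighbor 203.0.113.5 peer-as 65010
    set policy-options policy-statement DEFAULT-ONLY term ACCEPT-DEFAULT from route-filter 0.0.0.0/0 exact
    set policy-options policy-statement DEFAULT-ONLY term ACCEPT-DEFAULT then accept
    set policy-options policy-statement DEFAULT-ONLY then reject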
>
> --
> Thanks,
> Morgan
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp