[j-nsp] Use cases for IntServ in MPLS backbones

adamv0025 at netconsultings.com
Wed Oct 3 10:17:19 EDT 2018


> From: James Bensley [mailto:jwbensley at gmail.com]
> Sent: Wednesday, October 03, 2018 9:19 AM
> 
> On Tue, 2 Oct 2018 at 15:11, Mark Tinka <mark.tinka at seacom.mu> wrote:
> > Of course, in the real world,
> > it was soon obvious that your Windows laptop or your iPhone XS sending
> > RSVP messages to the network will not scale well.
> 
> A point I was trying to make way back in this thread, was that IntServ doesn't
> scale well for multi-stakeholder networks, which has been my background,
> ISP and managed WAN operations, so I've never deployed it.
> If you have a single tenant WAN with control over the WAN *and* all end
> devices you can manage the scale.
> 
If you push the QoS boundary out to the PEs, then from the core's perspective you effectively have a single-stakeholder network.
So my original question was aimed at an MPLS core where you're in control of all the PEs - and I'm trying to gather the group's views on the pros and cons of IntServ vs. DiffServ in this "closed" environment of an MPLS backbone.
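To make the comparison concrete, here's a minimal Junos-style sketch of the two models in that closed core (interface names, LSP names, and addresses are invented for illustration; a real deployment needs scheduler-maps, classifiers, and CSPF/IGP-TE configuration on top of this):

    # IntServ-style: per-LSP admission control, RSVP-TE reserves
    # bandwidth hop-by-hop along the path (state per LSP in the core)
    protocols {
        rsvp { interface ge-0/0/0.0; }
        mpls {
            label-switched-path pe1-to-pe2 {
                to 192.0.2.2;          # hypothetical egress PE loopback
                bandwidth 500m;        # reservation admitted or rejected per hop
            }
        }
    }

    # DiffServ-style: aggregate per-class queuing only,
    # no per-flow/per-LSP state in the P routers
    class-of-service {
        schedulers {
            ef-sched {
                transmit-rate percent 30;
                priority strict-high;  # EF gets priority, everything else shares
            }
        }
    }

The scaling trade-off follows directly from the sketch: the RSVP-TE model carries per-LSP soft state and refresh messaging on every transit router, while the DiffServ model carries only a fixed, small number of queues regardless of how many flows transit the box.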
 

> On Tue, 2 Oct 2018 at 14:38, <adamv0025 at netconsultings.com> wrote:
> > And besides, I'm not sure I'd ever want to be in a position where I
> > allow my core links to max out and have TE to try and shuffle flows
> > around so that I can squeeze all traffic in.
> > - sure this would probably not be the case of day to day operation but
> > most likely only employed during link failures,
> 
> So tying this to my point above about a single-tenant WAN, this is something
> that Google does (any Googler's on-list please correct where I am wrong).
> They have two WANs, B2 and B4. One is public facing for peering and transit
> (B2?) and the other is internal, e.g. DC to DC (B4?). The DC to DC WAN tries
> to sweat its own assets as much as possible and runs some links in the high
> 90s percent utilisation.
> Nx100G LAGs between DCs aren't cheap, even for Google. With a single-tenant
> WAN you can run your links much hotter (higher average throughput) with the
> aim of reducing the time spent transmitting (lower average utilisation).
> 
You brought up a good point.
I hear these arguments all the time: "if the hyperscalers are doing it in their networks, we should be doing the same in ours". But people fail to realize how different the customers these backbones serve are. No standard managed-services or Internet service provider has the luxury of tailoring traffic flows before they hit its backbone. To give one example: we can't send our customers schedules dictating when each one is allowed to run backups between DCs so that there's no contention in our backbone.
What hyperscalers have are highly tailored, domain-specific environments (closed systems) that have very little in common with tier-1 to tier-3 carrier backbones.
I imagine those networks will be the first to migrate onto new routing paradigms like intent-based routing, etc.

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


