[c-nsp] qos plan - advice please

Aaron aaron1 at gvtc.com
Fri Aug 30 11:41:38 EDT 2013


Thanks Robert, 

- (15) asr9k's in core
- (40 or 50) asr901's and me3600's

That pretty much covers my MPLS cloud. I'm running single-area OSPF and
MPLS on all of them, so all of those boxes (9k's, 901's and ME's) act as a
mix of P's and PE's.
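
In case it helps picture it, the guts of each node are basically just this
(interface, addressing and process number made up here for illustration;
the 9k's are IOS-XR, so the syntax on those is a little different):

  interface GigabitEthernet0/0/1
   description core facing link
   ip address 10.0.0.1 255.255.255.252
   mpls ip
  !
  router ospf 1
   network 10.0.0.0 0.0.255.255 area 0
  !
  mpls ldp router-id Loopback0 force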

Aaron

-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of
Robert Blayzor
Sent: Friday, August 30, 2013 10:27 AM
To: cisco-nsp at puck.nether.net list
Subject: Re: [c-nsp] qos plan - advice please

On Aug 30, 2013, at 11:00 AM, Aaron <aaron1 at gvtc.com> wrote:
> I mean, do I really need to go to each and every interface and apply a
> policy-map (service-policy) on all the interfaces of all my MPLS LSRs?!
> 
> Is it possible to enable something like RSVP and MPLS TE to allow for
> end-to-end QoS?
> 
> What is the nicest way to do this?


You're either implementing IntServ (hard, per-flow guarantees) or DiffServ
(soft, per-class aggregates) QoS.

The problem is that IntServ isn't really practical or scalable unless you
have a very small network; it also isn't very conservative with router
resources, since it keeps per-flow reservation state.

If you're using DiffServ-aware TE, then you're going to have to set up
queueing policies on all of your interfaces in the network.  RSVP will do
the admission control, but you'll still need your service policies on all
of the interfaces the LSPs traverse.
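
Something roughly like this on each core-facing interface, the queueing
half plus the RSVP bandwidth pool (class names, percentages and the
interface are purely illustrative, and this is IOS/IOS-XE style MQC; the
ASR9k XR syntax and the 901/ME3600 feature sets differ a bit):

  class-map match-any REALTIME
   match mpls experimental topmost 5
  !
  policy-map CORE-EGRESS
   class REALTIME
    priority percent 20
   class class-default
    fair-queue
  !
  interface TenGigabitEthernet0/1/0
   service-policy output CORE-EGRESS
   ip rsvp bandwidth 5000000
   mpls traffic-eng tunnels

The RSVP/TE side (mpls traffic-eng under the IGP, tunnel interfaces, etc.)
only buys you admission control on top of that; it doesn't replace the
per-hop queueing.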

--
Robert Blayzor
INOC, LLC
rblayzor at inoc.net
http://www.inoc.net/~rblayzor/




_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


