[c-nsp] per-user MQC on vaccess interfaces?
Nick Shah
Nick.Shah at aapt.com.au
Tue Nov 15 20:21:20 EST 2005
Gert
Both MQC and CAR are supported. We use Radiator (other RADIUS servers'
syntax will differ); here's how we do it.
CAR:

rtrce@blah    Password = "secret"
    cisco-avpair = "lcp:interface-config#1=ip vrf forwarding TestVRF",
    cisco-avpair = "lcp:interface-config#2=ip address 10.255.64.125 255.255.255.252",
    cisco-avpair = "lcp:interface-config#3=ip access-group DSL10 in",
    cisco-avpair = "lcp:interface-config#4=ip rip authentication mode md5",
    cisco-avpair = "lcp:interface-config#5=ip rip authentication key-chain RIPKEY",
    cisco-avpair = "lcp:interface-config#6=rate-limit output access-group rate-limit 1 232000 5200 5200 conform-action transmit exceed-action drop",
    Framed-IP-Address = 10.255.64.126
MQC:

rtrce@blah    Password = "secret"
    cisco-avpair = "lcp:interface-config#1=ip vrf forwarding CENCOR24189275-0001",
    cisco-avpair = "lcp:interface-config#2=ip address 10.255.64.237 255.255.255.252",
    cisco-avpair = "lcp:interface-config#3=ip access-group DSL10 in",
    cisco-avpair = "lcp:interface-config#4=ip rip authentication mode md5",
    cisco-avpair = "lcp:interface-config#5=ip rip authentication key-chain RIPKEY",
    cisco-avpair = "lcp:interface-config#6=service-policy out CE-Pol-16kI-240kBD",
    Framed-IP-Address = 10.255.64.238
PS: The actual policy map (and its class maps) must be preconfigured on
the router; RADIUS only supplies the service-policy reference.
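For reference, here's a rough sketch of what such a preconfigured policy
could look like on the router. Only the parent policy name comes from the
example above; the class map, its match criterion, the child policy name
and the 16k/240k reading of that name are assumptions for illustration:

! illustration only - CE-Voice, the dscp match and CE-Child-16k are assumed
class-map match-any CE-Voice
 match ip dscp ef
!
! child policy: 16 kbit/s low-latency class
policy-map CE-Child-16k
 class CE-Voice
  priority 16
!
! parent policy, referenced by the service-policy avpair above
policy-map CE-Pol-16kI-240kBD
 class class-default
  shape average 240000
  service-policy CE-Child-16k

The service-policy avpair then only needs to name the parent policy;
everything else stays in the router config.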
Rgds
Nick
-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net
[mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Gert Doering
Sent: Wednesday, 16 November 2005 7:05 a.m.
To: cisco-nas at puck.nether.net; cisco-nsp at puck.nether.net
Subject: [c-nsp] per-user MQC on vaccess interfaces?
Hi,
is there a way to set per-user CAR/MQC rules from RADIUS for PPPoE
dial-in customers?
What we need is something like this:
- user connection comes in via PPPoE
- user has purchased a maximum total bandwidth
  (but the access link is faster, due to media constraints)
  -> we need to apply outgoing traffic shaping and incoming policing,
     in case he modifies the shaping configuration on the CPE
- user can potentially purchase different QoS classes, like this:
  - up to 512 Kbit/s of traffic to 10.10.10.0/24 gets TOS bits set to
    "prio high" (for an on-net VPN link with guaranteed bandwidth)
  - up to 2 Mbit/s of aggregate traffic gets TOS bits set to "best-effort"
  - everything above 2 Mbit/s is dropped
I know that Cisco's hierarchical QoS stuff can do all this, but I'm not
sure whether I can apply it completely from RADIUS.
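Roughly, what I have in mind would be a hierarchical MQC setup along these
lines (all names, the ACL number and the marking values are only
illustrative, and whether a policed parent class can take a child
service-policy depends on platform and IOS version):

! illustration only - names, ACL 150 and precedence values are assumptions
access-list 150 permit ip any 10.10.10.0 0.0.0.255
!
class-map match-all VPN-PRIO
 match access-group 150
!
! child policy: up to 512 kbit/s towards the VPN gets marked "prio high",
! everything else stays best-effort
policy-map USER-CHILD
 class VPN-PRIO
  police 512000 conform-action set-prec-transmit 5 exceed-action set-prec-transmit 0
 class class-default
  set ip precedence 0
!
! parent policy: drop everything above 2 Mbit/s aggregate
policy-map USER-PARENT
 class class-default
  police 2000000 conform-action transmit exceed-action drop
  service-policy USER-CHILD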
(The underlying issue is: the provisioning is done by different teams
than the actual router configuration and maintenance, so it would be
greatly preferred to have *all* per-user config in RADIUS. There are
some FreeBSD-based PPPoE solutions - mpd - that can do it, but we also
want to consider a Cisco-based solution.)
Pre-configuring different classes for the "access bandwidth" would be
possible, but due to the demand for "VPN QoS classes", we cannot
pre-configure all possible per-user configurations.
Now, CAR and GTS could be applied on a per-interface basis from RADIUS
just fine (as all the config is done inside the interface), but for
the hierarchical stuff, you need to configure the policy-map globally...
Any ideas?
gert
--
USENET is *not* the non-clickable part of WWW!
//www.muc.de/~gert/
Gert Doering - Munich, Germany
gert at greenie.muc.de
fax: +49-89-35655025
gert at net.informatik.tu-muenchen.de