[rbak-nsp] SE800 QoS architecture
Jim Tyrrell
jim at scusting.com
Fri Sep 11 13:09:31 EDT 2009
> We also use intercontext routing, but run an IGP (routing protocol),
> such as OSPF or IS-IS, to create a loop-free topology. We do not
> exchange default routes across this.
Ah, I read it as only allowing intercontext static routes and not
routing protocols. I would use static routes between contexts just to
route the loopbacks between them, and then run BGP/OSPF etc. between
the loopbacks - makes sense. :)
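
Something like the below is roughly what I had in mind - a sketch only,
with context names and addresses invented and the syntax quoted from
memory, so it may not be exact:

redback(config)#context CORE
redback(config-ctx)#ip route 10.10.10.1/32 context TERM-ISP1
redback(config-ctx)#exit
redback(config)#context TERM-ISP1
redback(config-ctx)#ip route 10.20.20.1/32 context CORE

i.e. just enough static glue to reach the far-side loopbacks, with
BGP/OSPF between the loopbacks carrying everything else.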
> and admin shutting the l2tp peer
How do you admin shut an l2tp peer? I must be blind, as I can't see an
option for that. I have just set a max-sessions before to restrict new
subs.
We do a similar thing with rebalancing out of hours at present, but
with the new contexts we will have RADIUS automagically directing subs
to less loaded tunnels and away from any overloaded ones (roughly as
sketched below), so hopefully we won't need to force a rebalance
unless it's really bad.
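
For reference, the per-sub steering is just the standard RFC 2868
tunnel attributes returned from RADIUS; very roughly, in
FreeRADIUS-style users-file notation (names, endpoints and preference
values invented - our tooling is what would adjust the preferences
based on tunnel load):

someuser@realm  Cleartext-Password := "secret"
        Tunnel-Type:1 = L2TP,
        Tunnel-Medium-Type:1 = IP,
        Tunnel-Server-Endpoint:1 = "192.0.2.10",
        Tunnel-Preference:1 = 10,
        Tunnel-Type:2 = L2TP,
        Tunnel-Medium-Type:2 = IP,
        Tunnel-Server-Endpoint:2 = "192.0.2.20",
        Tunnel-Preference:2 = 20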
Thanks.
Jim.
>>
>> >I'll be grateful for any feedback as to the best way of terminating
>> >and routing L2TP subs on particular line cards, but having routing fall
>> >back to others in the event of failure.
>>
>> Well, as I'm sure you know, L2TP subs must be associated with a
>> linecard in order for the redback to tie them to a "circuit". This is
>> done by default like this:
>>
>> redback(config-l2tp)#lns card select ?
>> priority Use card priority to select card for session
>> route Use route to peer to choose card for session (default)
>>
>> The circuit is managed by the card that has the best route to the l2tp
>> peer at the time; this is the "owner" card. For instance, the GE-3
>> cards can terminate 16K of these (possibly now 32K; I have not heard
>> whether this limit has been increased or not) due to RAM constraints.
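>>
>> Spelled out, that default amounts to roughly:
>>
>> redback(config-l2tp)#lns card select route
>>
>> although, being the default, you would not normally need to type it.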
>>
>> You can also select a specific card for users to use:
>>
>> redback(config-l2tp)#lns card ?
>> 1..14 Card number to locate sessions on
>> selection Specify card selection algorithm for LNS session circuits
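>>
>> So, for example, gluing all LNS sessions to card 3 would be roughly:
>>
>> redback(config-l2tp)#lns card 3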
>>
>> It used to be the case (not sure about now) that you could create a
>> "group" of cards to share the sub load, as you know, in order to
>> avoid the 16K limit. The problem with this was that if your sub is
>> "owned" by card 1 but the route to the peer (where the EPPA is
>> sending L2TP to the peer) is on card 2, then QoS is not possible,
>> because a virtual circuit had to exist between the PPA on card 1 and
>> the EPPA on card 2 which could not have prioritisation or congestion
>> management applied to it (again, somebody correct me if this is now
>> possible; this information could be out of date). So you had to
>> ensure that subs stayed glued to the card where the active L2TP peer
>> route was present if you wanted any QoS (which is, I suppose, why the
>> card groups are no longer the default and route-to-peer is).
>>
>> For information, we use the default (route to peer) method, meaning
>> that the card sub limits are important to us.
>>
>> We have multiple boxes, each with a separate "termination" context
>> for each provider / DSLAM network. At least two boxes have copies of
>> this termination context, each context having a /32 loopback
>> announced via a routing protocol, with the choice of l2tp peer to
>> tunnel to (context, chassis) made by the DSL network.
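>>
>> As a rough sketch only (names and addresses invented, syntax from
>> memory so it may not be exact), each termination context looks
>> something like:
>>
>> redback(config)#context TERM-ISP1
>> redback(config-ctx)#interface lns-loop loopback
>> redback(config-if)#ip address 10.10.10.1/32
>>
>> with that /32 then announced into the routing protocol so the DSL
>> network can reach whichever chassis is holding the context.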
>>
>> We also use intercontext routing, but run an IGP (routing protocol),
>> such as OSPF or IS-IS, to create a loop-free topology. We do not
>> exchange default routes across this.
>>
>> Should there be a network event which causes an uneven number of
>> subscribers to end up on a single linecard, we have a system which
>> "rebalances" the cards by moving the subscribers off (and admin
>> shutting the l2tp peer to prevent new sessions joining). This is done
>> automatically, usually outside of business hours.
>>
>> Hope you find this information useful; I am happy to go into more
>> detail offlist if you so wish.
>>
>> David.
>>
>>