[rbak-nsp] sh l2tp global ipc - ppp sessions/card
Richard Clayton
sledge121 at gmail.com
Sun Jan 9 08:16:48 EST 2011
David
A couple of my friends work at two large UK ISPs and they both load balance
their host link resilient pairs without any problems (they don't shape
towards the LAC, though), and the BT SIN 472 v1p9 PDF details how to achieve
load balancing over the pair. I think the main reason they load balance is
less user disruption when one of the fibres fails. I do agree with you,
though, as the same SIN doc does say that you should actually use them as
active/standby.
Am I correct in saying that your advice for shaping/queuing is as follows:

1. Set the pair as active/standby
2. Configure subscribers to connect to the card that has the route back to
the LAC, using the 'lns card selection route' command (sketched below)
3. Configure subscriber shaping as normal

And in the event of the primary fibre failing:

1. Configure subscribers to connect to the card that now has the route back
to the LAC
2. Boot all users so they reconnect to the correct card
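
To make sure I've understood, here is a minimal sketch of what I'd expect to
put under the l2tp-peer (the context name, peer name and the rest of the
peer configuration are placeholders, and the exact syntax may differ by
release):

  context local
    l2tp-peer name BT-LAC-1
      ! terminate subscribers on the card that holds the route back to the LAC
      lns card selection route
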
Thanks
Rick
On 8 January 2011 15:09, David Freedman <david.freedman at uk.clara.net> wrote:
> The point of traffic egress toward the user is the ePPA (chip) on the
> linecard facing the physical interface back towards the LAC IP from which
> the user has been terminated.
> If you have two circuits on your hostlink and are using them in an
> "active/active" scenario, I'm assuming you receive the LAC /32 across both
> linecards and can reach it through both with an equal metric (thus causing
> the load sharing to occur).
>
> In this case, I don't believe "lns card selection route" would work for you
> if you wanted egress QoS back to the LAC, as the card selected to own the
> subscriber would only receive the traffic a certain percentage of the time.
>
> From what I have seen, it is not common to have such "active/active" setups
> across a single hostlink's constituent circuits, as this was not their
> original design (i.e. they were originally designed to be operated in an
> "active/passive" scenario), and you may find a number of other features
> (such as BT aggregate policing) do not behave as expected.
>
> Dave.
>
>
> On 08/01/2011 14:48, "Richard Clayton" <sledge121 at gmail.com> wrote:
>
> > David
> >
> > Thanks for the response. My environment is dual homed to the LAC
> > network, which is load balancing over the two links so the LACs can
> > connect over either circuit. Basically it's a BT 21CN host link which we
> > are load balancing; would QoS towards the LAC still work with this setup?
> >
> > Thanks
> > Rick
> >
> > On 8 January 2011 13:52, David Freedman <david.freedman at uk.clara.net>
> wrote:
> >> The "load sharing" of L2TP subscribers across multiple cards, when acting
> >> as an LNS, is the default configuration.
> >>
> >> In order to force subscribers to terminate on a particular card, use the
> >> "lns card selection" configuration directive under your l2tp-peer to the
> >> LAC/LTS.
> >>
> >> "lns card selection route" will terminate the subscribers on the card on
> >> which the route to the LAC is present, as the preferred option.
> >>
> >> "lns card X preference Y" will set a preference of Y for card X, either in
> >> addition to or instead of the route selection algorithm above.
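> >>
> >> As a rough illustration (the peer name and card numbers are made up, and
> >> the rest of the peer configuration is omitted), the two directives would
> >> sit under the l2tp-peer roughly like so:
> >>
> >>   l2tp-peer name EXAMPLE-LAC
> >>     ! prefer the card that holds the route back to the LAC
> >>     lns card selection route
> >>     ! per-card preferences, used alongside or instead of route selection
> >>     ! (check your release's documentation for which value wins)
> >>     lns card 1 preference 100
> >>     lns card 2 preference 50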
> >>
> >> The goal, if you are doing any kind of QoS back to the LAC, is to have the
> >> subscriber terminating on the same card on which the route back to the LAC
> >> exists, since the box does not do any QoS across linecards internally
> >> (between the PPAs). This means ideally configuring "lns card selection
> >> route" on all LAC peers, but this puts you in the unfortunate situation of
> >> having to change cards automatically (and dump all subs on the card) should
> >> you be multihomed to the LAC and lose the link on this card (i.e. the route
> >> changes).
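> >>
> >> A quick sanity check (the LAC address below is just an example) is to
> >> compare which card the route back to the LAC egresses through against
> >> where the sessions actually landed:
> >>
> >>   ! route back to the LAC (note which port/linecard it points at)
> >>   show ip route 192.0.2.1
> >>   ! per-card L2TP session counts, as in your output below
> >>   show l2tp global ipc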
> >>
> >> Dave.
> >>
> >>
> >> On 08/01/2011 13:13, "Richard Clayton" <sledge121 at gmail.com> wrote:
> >>
> >>> Looking at the output of the command 'show l2tp global ipc', we have 5
> >>> subscribers on card 1 and 17 on card 2. What part of the configuration
> >>> steers subscribers to cards 1 and 2, and what is best practice for
> >>> subscriber/card termination?
> >>>
> >>> I have read that if subscriber shaping is configured, the subscribers have
> >>> to terminate on the same card the L2TP tunnel terminates on. How do I
> >>> ensure that L2TP and subs all connect to the same card (or is this not
> >>> best practice)?
> >>>
> >>> Num Circuits (L2TP):
> >>>  1: 5,   2: 17,  3: 0,   4: 0,   5: 0,
> >>>  6: 0,   7: 0,   8: 0,   9: 0,  10: 0,
> >>> 11: 0,  12: 0,  13: 0,  14: 0,
> >>> Thanks
> >>> Rick
> >>>
> >>>
> >>
> >
> >
>
> --
>
> David Freedman
> Group Network Engineering
>
> david.freedman at uk.clara.net
> Tel +44 (0) 20 7685 8000
>
> Claranet Group
> 21 Southampton Row
> London - WC1B 5HA - UK
> http://www.claranet.com
>
> Company Registration: 3152737 - Place of registration: England
>
> All the information contained within this electronic message from Claranet
> Ltd is covered by the disclaimer at http://www.claranet.co.uk/disclaimer
>
>
>