[c-nsp] output queues for MLPPP T1 links

Rodney Dunn rodunn at cisco.com
Wed Nov 24 12:26:52 EST 2004


On Wed, Nov 24, 2004 at 09:13:57AM -0800, Mark Kent wrote:
> Hello,
> 
> I'm looking for guidance on changing the output queues for
> MLPPP using two to four T1 links (PA-MC-T3 on 7206vxr/npe300).   
> 
> I note that the default is "Output queue: 0/40 (size/max)" and recent
> ddos attacks here fill these up pretty quickly.

Don't do it.  When you configure a T1 into a bundle, the
ring depths of the member links are choked down so that the
bundle knows which link is congested and when, and therefore
which link to send the next packet down.

> 
> Some questions:
> 
> a) should I care about full output queues during ddos attacks?
>    Will having bigger queues help ride out the storms?

If your problem is on output and the traffic is overrunning the
link, tweaking the output queue really isn't any help.
You should be dropping traffic in a more intelligent manner.
You can attach a service policy with MQC to the bundle
interface and decide what traffic to drop.
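
A minimal sketch of what that might look like (the class name,
policy name, access list, and police rate below are hypothetical
placeholders, not a recommendation; match on whatever actually
characterizes the attack traffic):

  ! Hypothetical example: classify the unwanted traffic, police/drop
  ! it, and attach the policy to the bundle interface only.
  access-list 150 permit udp any any eq 1434
  !
  class-map match-any DDOS-TRAFFIC
   match access-group 150
  !
  policy-map BUNDLE-OUT
   class DDOS-TRAFFIC
    police 64000 conform-action transmit exceed-action drop
   class class-default
    fair-queue
  !
  interface Multilink1
   service-policy output BUNDLE-OUT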

Leave the internals of the MLPPP/member link workings as
they are.

> 
> b) if I change the queue for the multilink interface
>    (hold-queue 1024 out) then do I need to do it on the
>    individual T1 interfaces?

See above.

> 
> c) The range presented by IOS is <0-4096>.
>    Given that wide range, it seems strange to me that 40 is the default.
>    Nevertheless, I'd appreciate any advice on appropriate settings.

It's a balance between queue depth to prevent drops and
forwarding latency.

If you are having drops, then you can either buffer the packets
longer to try to absorb the traffic, or you can do QoS and drop
intelligently to manage the congestion.  At the end of the
day you still have just that... congestion.

I never like tweaking output hold queues unless it's just to
help with bursty conditions that are very short-lived.
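
If you do go that route for short bursts, the knob is the output
hold queue on the bundle interface itself (1024 is just the value
from the question, and Multilink1 is a placeholder interface name,
not a recommendation):

  interface Multilink1
   hold-queue 1024 out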

If it's drops due to sustained traffic, you need a different
tool for the job.

Rodney


> 
> I know that it's probably the case that the best answer
> to this is "it depends" but I'm looking for more :-)
> 
> Thanks,
> -mark
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
