[j-nsp] Class of Service implementation over MLPPP link
Josef Buchsteiner
josefb at juniper.net
Fri Apr 20 03:08:52 EDT 2007
Friday, April 20, 2007, 8:48:17 AM, you wrote:
FAK> One more question related to Multiclass MLPPP. Suppose my scenario is
FAK> something like the following:
FAK> PE1 ========= PE2 ======== PE3
FAK>                ||
FAK>                ||
FAK>               PE4
FAK> In this case, PE2 has three MLPPP bundles in total, one to each of PE1,
FAK> PE3 and PE4. Does my previous configuration work for all of them, or do
FAK> I have to configure Multiclass MLPPP on PE2 to support multiple class
FAK> flows on the different bundles?
I'm not sure I understand why you question this topology. Your
current configuration will work whether you have one, two or
100 bundles on PE2, and it makes no difference whether you run
regular ML or multiclass ML, since everything is bundle-specific.
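[Editor's note: "bundle-specific" here means each bundle's logical unit on the LSQ carries its own CoS configuration. A minimal sketch of what that looks like; the interface, unit numbers and map name are hypothetical placeholders, not taken from the thread:]

```
# hypothetical: the same scheduler-map applied independently per bundle unit
set class-of-service interfaces lsq-1/2/0 unit 0 scheduler-map WAN-MAP
set class-of-service interfaces lsq-1/2/0 unit 1 scheduler-map WAN-MAP
set class-of-service interfaces lsq-1/2/0 unit 2 scheduler-map WAN-MAP
```

Adding a fourth bundle would simply mean another unit with its own map; no bundle's queuing depends on any other.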
Josef
FAK> I think multiclass will not be required; my current configuration will
FAK> work for the other two. I just need your comments.
FAK> Regards
FAK> Fahad
FAK> On 4/18/07, Josef Buchsteiner <josefb at juniper.net> wrote:
>>
>>
>>
>> Wednesday, April 18, 2007, 7:47:11 AM, you wrote:
>> >>
>> >> Dear Josef
>> >>
>> >> Thanks for your valuable information, and yes, you got it right: I was
>> >> checking "show interfaces extensive", which does not show any queue
>> >> stats, while in "show interfaces queue" the packets are actually going
>> >> to those specific queues.
>> >>
>> >> Could you explain the following in a little more detail, as I don't
>> >> quite get it:
>> >> " On the egress interface we have to put all into Q0 since you
>> >> are not using multiclass mlppp and we have only one SEQ pool
>> >> so we will end up all in one queue to prevent re-order. The queuing
>> >> is done in LSQ prior to putting on the seq stamps."
>> >>
>> >> What is the significance of Multiclass MLPPP?
>>
>>
>> One of the main drivers for multiclass is that you can load-share
>> different classes of MLPPP traffic across the bundle members. Without
>> this you can only load-share *one* MLPPP class, and LFI traffic needs
>> to be hashed onto *one* single member link to avoid re-ordering.
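[Editor's note: if multiclass were wanted, it is enabled per bundle unit. A minimal sketch, assuming an MLPPP bundle on unit 0 of an LSQ PIC; verify the knob name and supported class count on your JUNOS release:]

```
# hypothetical bundle: negotiate up to 4 multiclass MLPPP classes,
# giving each class its own sequence-number space
set interfaces lsq-1/2/0 unit 0 encapsulation multilink-ppp
set interfaces lsq-1/2/0 unit 0 multilink-max-classes 4
```

With separate sequence pools per class, the egress PIC no longer has to force everything into one queue to preserve ordering.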
>>
>>
>>
>> >> Can't I get the
>> >> Gold/Silver/BE/NC traffic without configuring this parameter?
>>
>>
>> You have that already at the LSQ level. Don't think about the
>> queues on the PIC. Just see the egress interface as one FIFO;
>> traffic arrives there already shaped by the scheduler you have defined.
>>
>> We should not see queuing on the egress PIC, and if we do because
>> the line has errors, then you will drop, but only from queue 0. If you
>> sent the ML traffic with one seq# pool into different egress
>> queues and started dropping packets according to the scheduler you
>> have applied to the LSQ interface, we would get massive re-ordering
>> and huge jitter, since the remote side waits for the missing frames
>> for a certain period of time.
>>
>> According to your configuration, the scheduler is applied
>> *before* the ML sequence stamps are added, which is the right thing
>> to do. Never put ML traffic that has one seq# pool into different
>> queues.
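[Editor's note: in other words, the Gold/Silver/BE classes are realised by the scheduler-map applied to the LSQ logical unit, before sequencing. A minimal sketch with hypothetical scheduler and forwarding-class names, assuming those forwarding classes are already defined:]

```
# hypothetical schedulers: priority service for gold, remainder to best-effort
set class-of-service schedulers GOLD-SCHED transmit-rate percent 40
set class-of-service schedulers GOLD-SCHED priority high
set class-of-service schedulers BE-SCHED transmit-rate remainder
set class-of-service scheduler-maps LSQ-MAP forwarding-class gold scheduler GOLD-SCHED
set class-of-service scheduler-maps LSQ-MAP forwarding-class best-effort scheduler BE-SCHED
# applied at the LSQ unit, i.e. before ML sequence numbers are stamped
set class-of-service interfaces lsq-1/2/0 unit 0 scheduler-map LSQ-MAP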
>>
>>
>> >>
>> >> Also, while checking the constituent link stats ("show interfaces
>> >> extensive" or "show interfaces queue"), both show the packets going
>> >> through the BE queue, whereas at the LSQ level they are flowing
>> >> through Gold or Silver.
>>
>> Which is correct: you have done the queuing/shaping/scheduler actions
>> already at the LSQ level.
>>
>>
>> Josef
>>
>>
>>
>>
>> >>
>> >> Can you provide this information?
>> >>
>> >> Regards
>> >>
>> >> Fahad
>> >>
>> >>
>> >> On 4/18/07, Josef Buchsteiner <josefb at juniper.net> wrote:
>> >> >
>> >> > Fahad,
>> >> >
>> >> > the behavior you see is normal and expected.
>> >> >
>> >> >
>> >> > First, to see the queue statistics on the LSQ interface: you most
>> >> > likely forgot to add the subunit number. The queue numbers on the
>> >> > LSQ interface itself will be zero all the time, since that covers
>> >> > the entire LSQ interface. That's the reason why you configure
>> >> > per-unit-scheduler on the LSQ interface.
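[Editor's note: the two pieces above go together, per-unit queuing on the LSQ device and querying the stats with the subunit. A sketch with a placeholder interface name:]

```
# enable per-logical-unit queuing on the LSQ device (hypothetical interface)
set interfaces lsq-1/2/0 per-unit-scheduler
```

Then check the per-bundle counters with "show interfaces queue lsq-1/2/0.0" (note the ".0" subunit), not "show interfaces queue lsq-1/2/0".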
>> >> >
>> >> > On the egress interface we have to put everything into Q0, since
>> >> > you are not using multiclass MLPPP and we have only one SEQ pool,
>> >> > so everything ends up in one queue to prevent re-ordering. The
>> >> > queuing is done in LSQ prior to putting on the seq stamps.
>> >> >
>> >> > Once there is LFI traffic, we do recommend configuring a
>> >> > scheduler on the egress PIC to make sure it gets the right
>> >> > priority and is served before the ML packets; the interleaving
>> >> > is done there. So with LFI traffic and a fragmentation-map it
>> >> > would then go into a different egress PIC queue. If you use
>> >> > MC-MLPPP you will then see everything going into different
>> >> > egress queues.
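[Editor's note: the LFI part can be sketched as a fragmentation-map that leaves voice unfragmented (and therefore interleaved) while fragmenting bulk traffic; the map name, classes and threshold below are hypothetical:]

```
# hypothetical: voice is interleaved via LFI, best-effort is fragmented
set class-of-service fragmentation-maps LFI-MAP forwarding-class voice no-fragmentation
set class-of-service fragmentation-maps LFI-MAP forwarding-class best-effort fragment-threshold 160
set class-of-service interfaces lsq-1/2/0 unit 0 fragmentation-map LFI-MAP
```

Traffic in a no-fragmentation class bypasses ML sequencing, which is why it can safely use a separate egress PIC queue.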
>> >> >
>> >> >
>> >> > However, the point is that the queuing is done on the LSQ. So your
>> >> > configuration is fine and most likely everything is working
>> >> > correctly. Just check that you query the LSQ queues with the
>> >> > subunit number.
>> >> >
>> >> >
>> >> >
>> >> > <-- example like this, please check on your side
>> >> >
>> >> > josefb at minsk# run show interfaces queue lsq-1/2/0.0
>> >> > Logical interface lsq-1/2/0.0 (Index 76) (SNMP ifIndex 65)
>> >> > Forwarding classes: 4 supported, 4 in use
>> >> > Egress queues: 4 supported, 4 in use
>> >> > Burst size: 0
>> >> > Queue: 0, Forwarding classes: best-effort
>> >> >   Queued:
>> >> >     Packets : 113479  166 pps
>> >> >
>> >>
>>
>>
>>