[j-nsp] MX104 limitation
Scott Granados
scott at granados-llc.net
Thu Mar 23 13:50:15 EDT 2017
Hi, with hardware flow sampling like inline jflow, I believe all sampling is done at 1:1 and the configured sample rate is only a scaling factor; it doesn't affect the physical sampling rate on the card. It's been a while, so I may be wrong, but this was the case in code up through 13.2 or so.
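
For reference, the configured rate lives under the sampling instance; here is
a minimal inline-jflow sketch (the instance/template names, addresses, FPC and
interface are placeholders, and exact syntax may vary by release):

  set services flow-monitoring version-ipfix template IPV4-TPL ipv4-template
  set forwarding-options sampling instance SAMPLE-1 input rate 1000
  set forwarding-options sampling instance SAMPLE-1 family inet output flow-server 192.0.2.10 port 2055
  set forwarding-options sampling instance SAMPLE-1 family inet output flow-server 192.0.2.10 version-ipfix template IPV4-TPL
  set forwarding-options sampling instance SAMPLE-1 family inet output inline-jflow source-address 192.0.2.1
  set chassis fpc 2 sampling-instance SAMPLE-1
  set interfaces xe-2/0/0 unit 0 family inet sampling input

If the PFE really samples 1:1 internally, the "rate 1000" above only scales
what gets exported to the collector rather than reducing the load on the
hardware.
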
> On Mar 23, 2017, at 1:37 PM, Nitzan Tzelniker <nitzan.tzelniker at gmail.com> wrote:
>
> Hi,
>
> When you ran with inline jflow, what was your sampling rate?
> IIRC there is some bandwidth limitation for inline sampling, but I don't
> know whether it includes the sampling calculation or not.
>
> Nitzan
>
> On Thu, Mar 23, 2017 at 11:12 AM, Saku Ytti <saku at ytti.fi> wrote:
>
>> It's still about 75Gbps in+out (for example 35Gbps + 40Gbps) and 55Mpps.
>>
>> But memory bandwidth is dependent on how well packets align into cells;
>> in a manufactured example you could have a packet that causes a single
>> byte to be carried in a second cell, essentially doubling the internal
>> memory bandwidth requirement.
>> Traffic hitting the QX will also see significantly lower memory
>> bandwidth.
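>> (Illustration only, with a made-up 64-byte cell size: a 65-byte packet
>> would occupy two cells, so roughly 128 bytes of buffer bandwidth for 65
>> bytes of traffic, nearly twice the requirement of a packet that fits
>> exactly in one cell.)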
>>
>> This is not MX104 specific; the same applies to MX80, and to MPC1, MPC2
>> and MPC3 on a per-Trio basis.
>>
>> On 23 March 2017 at 03:31, Javier Rodriguez <rodriguezsotelo at gmail.com> wrote:
>>> Hi,
>>>
>>> As Nitzan suggested, I deactivated the inline jflow and the traffic
>>> increased.
>>> Now I ask: what is the real forwarding capacity of this box? 40G in +
>>> 40G out? (So far it hasn't reached 40G in total.)
>>>
>>> Javier.
>>>
>>> 2017-03-20 12:15 GMT-03:00 Javier Rodriguez <rodriguezsotelo at gmail.com>:
>>>
>>>> Nitzan, thank you very much, I'll keep that in mind.
>>>> Anyway, I cannot understand how the router "eats" packets without them
>>>> being counted... that makes me panic!
>>>> I can't find discarded packets anywhere!
>>>>
>>>> JR.
>>>>
>>>> 2017-03-20 2:31 GMT-03:00 Nitzan Tzelniker <nitzan.tzelniker at gmail.com>:
>>>>
>>>>> We saw a limitation around 40Gbps when running MX80 with RE-based jflow
>>>>> (inline works fine). We didn't get a good explanation of why it limits
>>>>> the traffic, so try disabling some features and see if it helps.
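>>>>> As a quick test, something along these lines (the interface is only a
>>>>> placeholder for whatever you have sampling applied to):
>>>>>
>>>>>   deactivate forwarding-options sampling
>>>>>   deactivate interfaces xe-2/0/0 unit 0 family inet sampling
>>>>>   commit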
>>>>>
>>>>> Nitzan
>>>>>
>>>>> On Mon, Mar 20, 2017 at 6:14 AM, Javier Rodriguez <rodriguezsotelo at gmail.com> wrote:
>>>>>
>>>>>> Mmm no, I think it doesn't work on MX80 / MX104.
>>>>>>
>>>>>> JR.
>>>>>>
>>>>>>
>>>>>> 2017-03-19 23:14 GMT-03:00 Olivier Benghozi <olivier.benghozi at wifirst.fr>:
>>>>>>
>>>>>>> What about bypass-queuing-chip on MIC interfaces? Would it work on
>>>>>>> MX80/104?
>>>>>>>
>>>>>>>> On 20 March 2017 at 01:32, Saku Ytti <saku at ytti.fi> wrote:
>>>>>>>>
>>>>>>>> OK, that's only 31Gbps total. Without having any actual data, my best
>>>>>>>> guess is that you're running through the QX; that's the only quick
>>>>>>>> reason I can come up with for the HW limiting at such modest traffic
>>>>>>>> levels.
>>>>>>>>
>>>>>>>> On 20 March 2017 at 02:25, Javier Rodriguez <rodriguezsotelo at gmail.com> wrote:
>>>>>>>>> Saku,
>>>>>>>>>
>>>>>>>>> Maybe there was a misunderstanding: the inbound traffic on fpc2's
>>>>>>>>> LAG was 4Gbps, and the outbound traffic was approx. 27Gbps. That
>>>>>>>>> outbound traffic enters via fpc1 and fpc0.
>>>>>>>>> It's IMIX traffic; the average packet size is 1250 bytes (out) and
>>>>>>>>> 200 bytes (in).
>>>>>>>>> I tried to see dropped packets with "show precl-eng 5 statistics"
>>>>>>>>> and "show mqchip 0 drop stats" at the PFE shell, but they show 0.
>>>>>>>>> Do they keep historical data?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> <--27G-- | | <--27G--
>>>>>>>>> |FPC2 FPC 0/1 |
>>>>>>>>> --4G--> | | --4G-->
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Javier.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2017-03-19 20:43 GMT-03:00 Saku Ytti <saku at ytti.fi>:
>>>>>>>>>>
>>>>>>>>>> Hey,
>>>>>>>>>>
>>>>>>>>>> There aren't really multiple FPCs on the box; there is only a
>>>>>>>>>> single MQ chip that all ports sit behind, the MIC ports usually
>>>>>>>>>> behind an additional IX chip, which is not congested.
>>>>>>>>>> Architecturally it's a single-linecard, fabricless box.
>>>>>>>>>> You're saying you're pushing 31+31Gbps, i.e. 62Gbps, on the 4x10GE
>>>>>>>>>> fixed ports? With (perhaps artificially) unfortunate cell alignment
>>>>>>>>>> it might be possible for it to be congested at such low values. Are
>>>>>>>>>> all the packets the same size, i.e. is this a lab scenario or just
>>>>>>>>>> IMIX traffic?
>>>>>>>>>> The MQ PFE exceptions and MQ=>LU counters might be interesting to
>>>>>>>>>> see.
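>>>>>>>>>> From memory, something like this at the PFE shell (the shell target
>>>>>>>>>> and exact syntax vary by platform and release, so treat it as a
>>>>>>>>>> rough sketch rather than gospel):
>>>>>>>>>>
>>>>>>>>>>   start shell pfe network tfeb0
>>>>>>>>>>   show jnh 0 exceptions terse
>>>>>>>>>>   show mqchip 0 drop stats
>>>>>>>>>>   show precl-eng 5 statistics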
>>>>>>>>>>
>>>>>>>>>> If you use the QX chip, 62Gbps would be really good; the QX chip is
>>>>>>>>>> not dimensioned for line rate even _unidir_ (i.e. it can't do even
>>>>>>>>>> 40Gbps). If you don't know whether you're using QX or not, just
>>>>>>>>>> deactivate the whole class-of-service and scheduler config on the
>>>>>>>>>> interfaces.
>>>>>>>>>>
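>>>>>>>>>> I.e. roughly this, as a quick test (remember to roll back
>>>>>>>>>> afterwards, and also remove any per-unit-scheduler or shaping
>>>>>>>>>> statements under [edit interfaces] if present):
>>>>>>>>>>
>>>>>>>>>>   deactivate class-of-service
>>>>>>>>>>   commit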
>>>>>>>>>> On 20 March 2017 at 01:26, Javier Rodriguez <rodriguezsotelo at gmail.com> wrote:
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> Thanks for your reply Saku.
>>>>>>>>>>> The problem is that fpc2 (fixed ports) can't exceed 31Gbps (in +
>>>>>>>>>>> out) at 6Mpps. The graph shows a flat line, as if it were being
>>>>>>>>>>> limited.
>>>>>>>>>>> I have moved some interfaces from the LAG to fpc1 and fpc0 and the
>>>>>>>>>>> traffic has increased. (There is only a 1G tunnel-service on fpc0.)
>>>>>>>>>>> It's as if it were being limited by the MQ, but I do not see
>>>>>>>>>>> discarded packets, or I do not know where to look for them.
>>>>>>>>>>>
>>>>>>>>>>> JR.
>>>>>>>>>>>
>>>>>>>>>>> 2017-03-19 6:53 GMT-03:00 Saku Ytti <ytti at ntt.net>:
>>>>>>>>>>>
>>>>>>>>>>>> Hey Javier,
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> MX104 and MX80 (1st gen Trio MQ/LU) should do about 55Mpps and
>>>>>>>>>>>> 75Gbps (in+out).
>>>>>>>>>>>>
>>>>>>>>>>>> On 19 March 2017 at 09:12, Javier Rodriguez <rodriguezsotelo at gmail.com> wrote:
>>>>>>>>>>>>> Hi everyone,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I need a bit of your knowledge.
>>>>>>>>>>>>> I have an MX104 as a PE router with 4 LAGs.
>>>>>>>>>>>>> One LAG faces the P router on FPC2 (fixed ports). The other LAGs
>>>>>>>>>>>>> are distributed across FPC0 and FPC1.
>>>>>>>>>>>>> The problem is that traffic is being limited when it reaches 28G
>>>>>>>>>>>>> out / 4G in (31Gbps total).
>>>>>>>>>>>>> I changed one interface (10G) of the LAG (towards the P router)
>>>>>>>>>>>>> to FPC1 and the traffic has grown a little more.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Where is the limitation? In the MQ chip?
>>>>>>>>>>>>> Where can I see those discarded packets?
>>>>>>>>>>>>> How much traffic will the router support on FPC2?
>>>>>>>>>>>>> Where could I get a diagram of its internal architecture?
>>>>>>>>>>>>> Does an MX80 have the same behavior?
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Atte.
>>>>>>
>>>>>> Javier I. Rodríguez Sotelo
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Atte.
>>>>
>>>> Javier I. Rodríguez Sotelo
>>>>
>>>>
>>>
>>>
>>> --
>>> Atte.
>>>
>>> Javier I. Rodríguez Sotelo
>>
>>
>>
>> --
>> ++ytti
>>