[j-nsp] MX104 and NetFlow - Any horror story to share?

Michael Hare michael.hare at wisc.edu
Tue May 1 10:30:14 EDT 2018


Alain,

Do you want to collect IPv6?  You are probably past 14.X code on the MX104, but I observed that I was unable to change ipv6-flow-table-size at all (including after a reboot).  I was able to set flow-table-size in 16.X, but my load average on 16.X on the MX104 is pretty terrible; it seems like I got all of the performance penalty of threading in 16.X without an additional core unlocked on the MX104 RE.  Since 14.X is near EOL I didn't harass JTAC.

Thanks, and a nod to Olivier: I hadn't seen "flex-flow-sizing" before; it seems like that's what I really wanted, not the explicit flow-table-size commands.

Abbreviated config example below:

        chassis {
            afeb {
                slot 0 {
                    inline-services {
                        flow-table-size {
                            ipv4-flow-table-size 7;
                            ipv6-flow-table-size 7;
                        }
                    }
                }
            }
        }
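
For reference, those sizes are in units of 256K flow entries, so "7" reserves roughly 1.8M flows per family.  To sanity-check what the PFE actually allocated after a change (a rough sketch from memory; slot 0 as above):

        show services accounting status inline-jflow fpc-slot 0
        show services accounting flow inline-jflow fpc-slot 0

With flex-flow-sizing you'd drop the flow-table-size stanza entirely and let the PFE size the tables on demand instead.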

-Michael

>>-----Original Message-----
>>From: juniper-nsp [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf
>>Of Alain Hebert
>>Sent: Tuesday, May 01, 2018 8:23 AM
>>To: juniper-nsp at puck.nether.net
>>Subject: Re: [j-nsp] MX104 and NetFlow - Any horror story to share?
>>
>>     Yeah, I had the feeling I would break those MXs.
>>
>>     At this point it is worth it to rebuild our vMX lab to test the IPFIX variant...
>>
>>     Thanks for the input.
>>
>>
>>     As for routing, we have a pretty good mix of T1/T2 providers and we rarely drop sessions, so uptime is pretty good...  And that's why we've got a pair of MX960s coming sometime this year.
>>
>>
>>     PS: Unrelated quote below - yeah, fat fingers; sorry, list.
>>
>>-----
>>Alain Hebert                                ahebert at pubnix.net
>>PubNIX Inc.
>>50 boul. St-Charles
>>P.O. Box 26770     Beaconsfield, Quebec     H9W 6G7
>>Tel: 514-990-5911  http://www.pubnix.net    Fax: 514-990-9443
>>
>>On 04/30/18 19:41, Olivier Benghozi wrote:
>>> Hi Alain,
>>>
>>> While you seem to already be kind of suicidal (5 full-table peers on an MX104), on an MX you must not use NetFlow v9 (CPU based) but inline IPFIX (Trio / PFE based).
>>> I suppose that NetFlow v9 on an MX104 could quickly become an interesting horror story with real traffic due to its ridiculously slow CPU, by the way.
>>> With inline IPFIX it should just take some more RAM, and FIB updates could be a bit slower.
>>>
>>> By the way, on the MX104 you don't configure «fpc» (bigger MXs) or «tfeb» (MX80) in the chassis hierarchy, but «afeb», so you can remove your fpc line and fix your tfeb line.
>>>
>>> So you'll need something like this in services, instead of version9:
>>> set services flow-monitoring version-ipfix template ipv4 template-refresh-rate
>>> set services flow-monitoring version-ipfix template ipv4 option-refresh-rate
>>> set services flow-monitoring version-ipfix template ipv4 ipv4-template
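>>>
>>> For instance, reusing the refresh rates from your v9 template (illustrative values; tune them for your collector):
>>> set services flow-monitoring version-ipfix template ipv4 template-refresh-rate seconds 15
>>> set services flow-monitoring version-ipfix template ipv4 option-refresh-rate seconds 25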
>>>
>>> And these too, to allocate some memory for the flows in the Trio and to define how it will speak with the collector:
>>> set chassis afeb slot 0 inline-services flex-flow-sizing
>>> set forwarding-options sampling instance NETFLOW-SI family inet output inline-jflow source-address a.b.c.d
>>>
>>> Of course you'll remove the line with «output flow-server <snip> source <Mgmt>».
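>>>
>>> Putting it together, the flow-server lines then point at your collector with the IPFIX template instead of FM-V9, and the sampling instance is bound to the afeb instead of tfeb0/fpc (a sketch, collector address still elided as in your config):
>>> set chassis afeb slot 0 sampling-instance NETFLOW-SI
>>> set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> port 2055
>>> set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> version-ipfix template ipv4
>>>
>>> The inline-jflow source-address has to be reachable via the PFE (the export packets are generated there), which is why sourcing from <Mgmt> won't work.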
>>>
>>>
>>>
>>> I don't see why you quoted the mail from Brijesh Patel about the Routing licences, by the way :P
>>>
>>>
>>> Olivier
>>>
>>>> On 30 Apr 2018 at 21:34, Alain Hebert <ahebert at pubnix.net> wrote:
>>>>
>>>>
>>>> Does anyone have any horror stories with something similar to what we're about to do?
>>>>      We're planning to turn up the following NetFlow config (see below) on our MX104s (while we wait for our new MX960 =D).  It worked well with everything else (SRX mostly), but the "set chassis" lines make us wonder how likely they are to render those systems unstable, in the short and long term.
>>>>
>>>>      Thanks again for your time.
>>>>
>>>>      PS: We're using Elastiflow, and it's working great for our needs atm.
>>>>
>>>>
>>>> ------ A bit of context
>>>>
>>>>          Model: mx104
>>>>          Junos: 16.1R4-S1.3
>>>>
>>>>      They're routing about 20Gbps atm, with 5 full-table peers, ~0.20 load average, and 700MB mem free.
>>>>
>>>>
>>>> ------ The Netflow config
>>>>
>>>> set chassis tfeb0 slot 0 sampling-instance NETFLOW-SI
>>>>
>>>> set chassis fpc 1 sampling-instance NETFLOW-SI
>>>>
>>>> set services flow-monitoring version9 template FM-V9 option-refresh-rate seconds 25
>>>> set services flow-monitoring version9 template FM-V9 template-refresh-rate seconds 15
>>>> set services flow-monitoring version9 template FM-V9 ipv4-template
>>>>
>>>> set forwarding-options sampling instance NETFLOW-SI input rate 1 run-length 0
>>>> set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> port 2055
>>>> set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> source <Mgmt>
>>>> set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> version9 template FM-V9
>>>> set forwarding-options sampling instance NETFLOW-SI family inet output inline-jflow source-address <Mgmt>
>>>>
>>>> set interfaces <X> unit <Y> family inet sampling input
>>>> set interfaces <X> unit <Y> family inet sampling output

