[j-nsp] inline-jflow monitoring

Aaron Gould aaron1 at gvtc.com
Wed Jan 2 11:06:15 EST 2019


I recently did this on operational/live MX960s on my 100 gig MPLS ring with
no problem... no service impact, no card reboots.

set chassis fpc 0 inline-services flow-table-size ipv4-flow-table-size 4
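
For reference, the IPv6 knob lives under the same flow-table-size stanza; as I
mention below, I set v4 to 4 and v6 to 1, so the pair looked roughly like this
(going from memory on the exact v6 statement name):

set chassis fpc 0 inline-services flow-table-size ipv4-flow-table-size 4
set chassis fpc 0 inline-services flow-table-size ipv6-flow-table-size 1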

I run...

agould@960> show system information
Model: mx960
Family: junos
Junos: 17.4R1-S2.2
Hostname: 960

{master}
agould@960> show chassis hardware models | grep "fpc|engine"
Routing Engine 0 REV 15   750-054758   (removed)          RE-S-X6-64G-S
Routing Engine 1 REV 15   750-054758   (removed)          RE-S-X6-64G-S
FPC 0            REV 43   750-056519   (removed)          MPC7E-MRATE
FPC 11           REV 43   750-056519   (removed)          MPC7E-MRATE

Yeah, prior to this change, you'd see lots of flow creation failures...

{master}[edit]
agould@960# run show services accounting errors inline-jflow fpc-slot 0 | grep creation
    Flow Creation Failures: 1589981308
    IPv4 Flow Creation Failures: 1582829194
    IPv6 Flow Creation Failures: 7152114

During the change, if you look closely, you will see PFE-0 and PFE-1 go to
"reconfiguring"... then back to "steady".

And the flow count will change from 1024 to whatever you set it to, which you
can verify with:

show services accounting status inline-jflow fpc-slot 0

These are my notes from when I did this a few months ago...

...these numbers didn't look right at first, considering they say that the
configured unit is a multiplier on a 256K base number. I set v4 to 4 and v6
to 1, so I thought the number would simply be:

256K * 4 ... (but "K" = 1024), so 256 * 1024 = 262,144, and 262,144 * 4 =
1,048,576

But the new IPv4 flow limit is 1,466,368, so 1,466,368 - 1,048,576 =
417,792

...what is this strange extra 417,792? Interestingly, if you divide it by
1024 you get 408:

417,792 / 1024 = 408

And I know I used a 4 for the IPv4 multiplier... so I assume 408 / 4 = 102 per unit.

So let's check IPv6...

256 * 1024 = 262,144

The IPv6 flow limit is now 366,592

366,592 - 262,144 = 104,448

104,448 / 1024 = 102

there's our nice little 102 again :)
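
So as best I can tell from these two data points (this is just my
back-of-the-napkin read, not anything from the Juniper docs): each configured
unit seems to buy 256K + 102K = 358K entries, and the flow limit scales
linearly with the multiplier:

per-unit limit:      (256 + 102) * 1024 = 366,592
ipv4 (multiplier 4): 4 * 366,592        = 1,466,368
ipv6 (multiplier 1): 1 * 366,592        = 366,592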



- Aaron



