[j-nsp] vMX and SR-IOV, VFP xml dump

Simon Dixon dicko at highway1.com.au
Thu Aug 22 17:45:14 EDT 2019


https://github.com/pinggit/vmx-tutorial

does a pretty good job of explaining what is happening in the background

Simon.


On Thu, 22 Aug 2019 at 18:34, <adamv0025 at netconsultings.com> wrote:

> > From: Chris <lists+j-nsp at gbe0.com>
> > Sent: Thursday, August 22, 2019 6:44 AM
> >
> > Hi
> >
> > On 21/08/2019 3:32 pm, adamv0025 at netconsultings.com wrote:
> > > Thank you, much appreciated.
> > > Out of curiosity what latency you get when pinging through the vMX
> > please?
> >
> > It's less than 1/10th of a millisecond (while routing roughly 3gbit of
> traffic and
> > this via a GRE tunnel running over IPSEC terminated on the vMX), I
> haven't
> > done more testing to get exact figures though as this is good enough for
> my
> > needs.
> >
> For some reason mine is acting as if there's some kind of throttling or
> pps rate performance issue.
> This is pinging not to the vMX but rather through the vMX so only VFP is
> at play.
>
> ping 192.0.2.6 source 192.0.2.2 interval 1
> PING 192.0.2.6 (192.0.2.6) from 192.0.2.2: 56 data bytes
> 64 bytes from 192.0.2.6: icmp_seq=0 ttl=253 time=1.021 ms
> 64 bytes from 192.0.2.6: icmp_seq=1 ttl=253 time=0.861 ms
> 64 bytes from 192.0.2.6: icmp_seq=2 ttl=253 time=0.83 ms
> 64 bytes from 192.0.2.6: icmp_seq=3 ttl=253 time=0.85 ms
> 64 bytes from 192.0.2.6: icmp_seq=4 ttl=253 time=1.115 ms
>
> ping 192.0.2.6 source 192.0.2.2
> PING 192.0.2.6 (192.0.2.6) from 192.0.2.2: 56 data bytes
> 64 bytes from 192.0.2.6: icmp_seq=0 ttl=253 time=1.202 ms
> 64 bytes from 192.0.2.6: icmp_seq=1 ttl=253 time=7.988 ms
> 64 bytes from 192.0.2.6: icmp_seq=2 ttl=253 time=7.968 ms
> 64 bytes from 192.0.2.6: icmp_seq=3 ttl=253 time=8.047 ms
> 64 bytes from 192.0.2.6: icmp_seq=4 ttl=253 time=7.918 ms
>
>
> > I am actually curious though, why not use the vmx.sh script to
> start/stop it? I
> > don't think JTAC will support more than basic troubleshooting with that
> > configuration but I could be wrong.
> >
> Unfortunately I have to say that so far the JTAC support has been
> useless.
> My biggest problem with the vmx.sh is that it does a lot of stuff behind
> the scenes which are not documented anywhere.
> It would be much better if the documentation explained how the information
> in the .conf file translates into actions or settings.
>
>
> >
> > If you are doing a new deployment I strongly recommend you jump to
> > 19.1R1 or higher. The reason for this is the Juniper supplied drivers
> for i40e
> > (and ixgbe) are no longer required (actually they are deprecated). All
> > releases before 19.1R1 I have had constant issues with the vFP crashing
> and
> > the closest to a fix I got was a software package that would restart the
> vFPC
> > automatically. When the crash occured it would show in the hosts kernel
> log
> > file that a PF reset has occured.
> > This happened across multiple Ubuntu and CentOS releases. After deploying
> > 19.1R1 with the latest Intel supplied i40e and iavf (replacement for
> i40evf)
> > drivers it has been stable for me.
> >
> Hmm good to know but yes using 19.2R1 currently for testing (has support
> for 40G interfaces)
>
> > Since deploying 19.1R1, on startup I create the VF's and mark them as
> trusted
> > instead of letting the vmx.sh script handle it. Happy to supply the
> startup
> > script I made if its helpful.
> >
> Yes please if you could share the .conf file that would be great.
>
> adam
>
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>


-- 

Dicko.


More information about the juniper-nsp mailing list