[j-nsp] vMX questions - vCPU math
Aaron1
aaron1 at gvtc.com
Sun Dec 30 12:12:43 EST 2018
With vMX, my understanding is that as more performance is needed, more vCPUs, network card(s) and memory are needed. As you scale up, a single vCPU is still used for the control plane; any additional vCPUs are used for the forwarding plane. The assignment of resources is automatic and not configurable.
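For what it's worth, here's a rough back-of-the-envelope sketch of that sizing math (the helper name is mine, and the one-vCPU-for-the-vCP assumption is just how I understand it; the (4 * ports) + 3 figure is the performance-mode formula you quote below):

    # Rough vMX vCPU sizing sketch -- an illustration, not an official Juniper tool.
    def vmx_vcpu_estimate(ports: int) -> int:
        vfp = 4 * ports + 3   # performance-mode vFP (packet forwarding) vCPUs
        vcp = 1               # control-plane (vCP) vCPU
        return vfp + vcp

    # Your case: 2 x 10GE SR-IOV ports -> 11 vFP + 1 vCP = 12 vCPUs total,
    # which is why the physical-core vs. HT-sibling question matters on a 2 x 8-core host.
    print(vmx_vcpu_estimate(2))  # prints 12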
Aaron
> On Dec 30, 2018, at 2:53 AM, Robert Hass <robhass at gmail.com> wrote:
>
> Hi
> I have a few questions regarding vMX deployed on the following platform:
> - KVM+Ubuntu as Host/Hypervisor
> - server with 2 CPUs, 8 cores each, HT enabled
> - DualPort (2x10G) Intel X520 NIC (SR-IOV mode)
> - DualPort Intel i350 NIC
> - vMX performance-mode (SR-IOV only)
> - 64GB RAM (4GB Ubuntu, 8GB vCP, 52GB vFPC)
> - Junos 18.2R1-S1.5 (but I can upgrade to 18.3 or even 18.4)
>
> 1) vMX uses the CPU-pinning technique. Can vMX use two CPUs (sockets) for the vFPC?
> E.g. a machine with two CPUs, 6 cores each, 12 cores total. Will vMX
> use the second CPU for packet processing?
>
> 2) Performance mode for the VFP requires cores = (4 * number-of-ports) + 3.
> So in my case (2x10GE SR-IOV) it's (4 * 2) + 3 = 11. Will vMX count the
> logical cores created by HT (rather than physical cores) in that case?
>
> 3) What does the Junos upgrade process look like on vMX? Is it the regular
> request system software add ...?
>
> Rob