[j-nsp] QFX CRB

Laurent Dumont laurentfdumont at gmail.com
Wed Nov 11 21:36:15 EST 2020


How are you measuring IRQs on the servers? If they are network-related
IRQs, the offending traffic should show up in a packet capture.
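
For reference, a minimal way to see which IRQ counters are actually moving on
a Linux server (a sketch; the interface name `eth0` and a typical Linux
`/proc` layout are assumptions):

```shell
#!/bin/sh
# Take two snapshots of /proc/interrupts one second apart and diff them;
# the lines that change are the IRQs that fired in between. Network IRQs
# usually carry the NIC driver/queue name (e.g. eth0-TxRx-0, mlx5_comp0).
cat /proc/interrupts > /tmp/irq.before
sleep 1
cat /proc/interrupts > /tmp/irq.after
diff /tmp/irq.before /tmp/irq.after | grep '^>' || true
# To correlate the bursts with actual packets, capture at the same time:
#   tcpdump -ni eth0 -w /tmp/irq-burst.pcap
```

Running the capture while the counters increment makes it easy to tie an IRQ
spike to a specific traffic pattern (e.g. flooded BUM traffic).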

On Tue, Nov 10, 2020 at 4:40 PM Cristian Cardoso <
cristian.cardoso11 at gmail.com> wrote:

> I'm running Junos 19.1R2.8.
> Today I was in contact with Juniper support about a route-depletion
> problem, and it seems to be related to the IRQ issue.
> When the IPv4/IPv6 routes exhaust the LTM table, the IRQ counters
> start incrementing.
> I analyzed the packet captures from the servers, but I found nothing
> out of the ordinary.
>
> On Tue, Nov 10, 2020 at 17:47, Nitzan Tzelniker
> <nitzan.tzelniker at gmail.com> wrote:
> >
> > Looks OK to me.
> > Which Junos version are you running, and on which devices?
> > Did you capture traffic on the servers to see what is causing the
> > high CPU utilization?
> >
> >
> > On Tue, Nov 10, 2020 at 9:07 PM Cristian Cardoso <
> cristian.cardoso11 at gmail.com> wrote:
> >>
> >> > show configuration protocols evpn
> >> vni-options {
> >>     vni 810 {
> >>         vrf-target target:888:888;
> >>     }
> >>     vni 815 {
> >>         vrf-target target:888:888;
> >>     }
> >>     vni 821 {
> >>         vrf-target target:888:888;
> >>     }
> >>     vni 822 {
> >>         vrf-target target:888:888;
> >>     }
> >>     vni 827 {
> >>         vrf-target target:888:888;
> >>     }
> >>     vni 830 {
> >>         vrf-target target:888:888;
> >>     }
> >>     vni 832 {
> >>         vrf-target target:888:888;
> >>     }
> >>     vni 910 {
> >>         vrf-target target:666:666;
> >>     }
> >>     vni 915 {
> >>         vrf-target target:666:666;
> >>     }
> >>     vni 921 {
> >>         vrf-target target:666:666;
> >>     }
> >>     vni 922 {
> >>         vrf-target target:666:666;
> >>     }
> >>     vni 927 {
> >>         vrf-target target:666:666;
> >>     }
> >>     vni 930 {
> >>         vrf-target target:666:666;
> >>     }
> >>     vni 932 {
> >>         vrf-target target:666:666;
> >>     }
> >>     vni 4018 {
> >>         vrf-target target:4018:4018;
> >>     }
> >> }
> >> encapsulation vxlan;
> >> default-gateway no-gateway-community;
> >> extended-vni-list all;
> >>
> >>
> >> An example interface configuration follows; all the IRBs follow
> >> this pattern with more or fewer IPs.
> >> > show configuration interfaces irb.810
> >> proxy-macip-advertisement;
> >> virtual-gateway-accept-data;
> >> family inet {
> >>     mtu 9000;
> >>     address 10.19.11.253/22 {
> >>         preferred;
> >>         virtual-gateway-address 10.19.8.1;
> >>     }
> >> }
> >>
> >> On Tue, Nov 10, 2020 at 15:16, Nitzan Tzelniker
> >> <nitzan.tzelniker at gmail.com> wrote:
> >> >
> >> > Can you share your irb and protocols evpn configuration, please?
> >> >
> >> > Nitzan
> >> >
> >> > On Tue, Nov 10, 2020 at 3:26 PM Cristian Cardoso <
> cristian.cardoso11 at gmail.com> wrote:
> >> >>
> >> >> Does anyone use EVPN-VXLAN in a Centrally-Routed Bridging (CRB)
> >> >> topology?
> >> >> I have two spine switches and two leaf switches. When I use the
> >> >> virtual-gateway in active/active mode on the spines, the servers
> >> >> connected to leaf1 show a large increase in IRQs, causing higher
> >> >> CPU consumption on the servers.
> >> >> As a test I deactivated spine2, leaving only spine1 as the
> >> >> gateway, and the IRQ count dropped to zero.
> >> >> Has anyone run into this?
> >> >> _______________________________________________
> >> >> juniper-nsp mailing list juniper-nsp at puck.nether.net
> >> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
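
On the LTM-exhaustion angle: a few Junos show commands can help confirm
whether the host-route table is actually filling up (a sketch; the exact
table names and hardware limits vary by QFX model and release, and the
annotations after # are mine):

```
show route summary               # overall RIB route counts per table
show evpn database | count       # MAC / MAC-IP entries learned via EVPN
show arp no-resolve | count      # IPv4 host entries behind the IRBs
```

Comparing these counts against the platform's documented host-route scale
should show whether the IRQ bursts line up with the table overflowing.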

