[j-nsp] QFX CRB

Cristian Cardoso cristian.cardoso11 at gmail.com
Tue Nov 10 14:07:05 EST 2020


> show configuration protocols evpn
vni-options {
    vni 810 {
        vrf-target target:888:888;
    }
    vni 815 {
        vrf-target target:888:888;
    }
    vni 821 {
        vrf-target target:888:888;
    }
    vni 822 {
        vrf-target target:888:888;
    }
    vni 827 {
        vrf-target target:888:888;
    }
    vni 830 {
        vrf-target target:888:888;
    }
    vni 832 {
        vrf-target target:888:888;
    }
    vni 910 {
        vrf-target target:666:666;
    }
    vni 915 {
        vrf-target target:666:666;
    }
    vni 921 {
        vrf-target target:666:666;
    }
    vni 922 {
        vrf-target target:666:666;
    }
    vni 927 {
        vrf-target target:666:666;
    }
    vni 930 {
        vrf-target target:666:666;
    }
    vni 932 {
        vrf-target target:666:666;
    }
    vni 4018 {
        vrf-target target:4018:4018;
    }
}
encapsulation vxlan;
default-gateway no-gateway-community;
extended-vni-list all;
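
For completeness: a vni-options stanza like this normally works together
with switch-options, which set the VTEP source interface, the route
distinguisher, and the default vrf-target. That part isn't shown above;
the following is only a sketch with assumed values:

> show configuration switch-options
vtep-source-interface lo0.0;
/* RD and targets below are assumed for illustration */
route-distinguisher 10.0.0.1:1;
vrf-target {
    target:65000:1;
    auto;
}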


An example of the interface configuration follows; all of the IRB
interfaces follow this pattern, with more or fewer IP addresses.
> show configuration interfaces irb.810
proxy-macip-advertisement;
virtual-gateway-accept-data;
family inet {
    mtu 9000;
    address 10.19.11.253/22 {
        preferred;
        virtual-gateway-address 10.19.8.1;
    }
}
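
In a CRB active/active setup, both spines carry the same
virtual-gateway-address on the IRB while each keeps its own preferred
address. The spine2 side might look like the sketch below; the .252
address is an assumption, only the virtual-gateway-address must match
spine1:

> show configuration interfaces irb.810
/* spine2 counterpart; the .252 address is assumed */
proxy-macip-advertisement;
virtual-gateway-accept-data;
family inet {
    mtu 9000;
    address 10.19.11.252/22 {
        preferred;
        virtual-gateway-address 10.19.8.1;
    }
}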

On Tue, Nov 10, 2020 at 3:16 PM Nitzan Tzelniker
<nitzan.tzelniker at gmail.com> wrote:
>
> Can you show your irb and protocols evpn configuration, please?
>
> Nitzan
>
> On Tue, Nov 10, 2020 at 3:26 PM Cristian Cardoso <cristian.cardoso11 at gmail.com> wrote:
>>
>> Does anyone use EVPN-VXLAN in a Centrally-Routed Bridging (CRB) topology?
>> I have two spine switches and two leaf switches. When I use the
>> virtual-gateway in active/active mode on the spines, the servers
>> connected only to leaf1 see a large increase in IRQs, causing
>> higher CPU consumption on the servers.
>> As a test, I deactivated spine2, leaving spine1 as the only
>> gateway, and the IRQ rate dropped to zero.
>> Has anyone run into this?

