[j-nsp] info VC QFX
Paris Arau
parau at juniper.net
Wed Mar 25 05:51:28 EDT 2015
Hi,
A reboot solved the problem.
Thank you for following up on this.
Thanks,
Paris
From: Plamen Stoev <plamen.stoev at profitbricks.com>
Reply-To: "plamen.stoev at profitbricks.com" <plamen.stoev at profitbricks.com>
Date: Wednesday 25 March 2015 11:27
To: Microsoft Office User <parau at juniper.net>
Cc: Tore Anderson <tore at fud.no>, james list <jameslist72 at gmail.com>, "juniper-nsp at puck.nether.net" <juniper-nsp at puck.nether.net>
Subject: Re: [j-nsp] info VC QFX
Hi Paris,
The VC is a two-member QFX5100 VC:
> show version | grep "fpc|model|os software"
fpc0:
--------------------------------------------------------------------------
Model: qfx5100-48s-6q
JUNOS Base OS Software Suite [13.2X51-D25.2]
fpc1:
--------------------------------------------------------------------------
Model: qfx5100-48s-6q
JUNOS Base OS Software Suite [13.2X51-D25.2]
The following configuration is applied:
> show configuration groups | display set
set groups node0 when member member0
set groups node0 system host-name member0
set groups node0 interfaces em0 unit 0 family inet address 10.10.10.1/24
set groups node1 when member member1
set groups node1 system host-name member1
set groups node1 interfaces em0 unit 0 family inet address 10.10.10.2/24
> show configuration apply-groups | display set
set apply-groups node0
set apply-groups node1
So I can ping and access both routing engines remotely:
> ssh 10.10.10.1
Password:
--- JUNOS 13.2X51-D25.2 built 2014-07-26 05:43:18 UTC
{master:0}
>
> ssh 10.10.10.2
Password:
--- JUNOS 13.2X51-D25.2 built 2014-07-26 05:43:18 UTC
warning: This chassis is operating in a non-master role as part of a virtual-chassis (VC) system.
warning: Use of interactive commands should be limited to debugging and VC Port operations.
warning: Full CLI access is provided by the Virtual Chassis Master (VC-M) chassis.
warning: The VC-M can be identified through the show virtual-chassis status command executed at this console.
warning: Please logout and log into the VC-M to use CLI.
{backup:1}
>
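If you want to double-check which group a member actually inherited, display inheritance shows it, roughly like this on the master (illustrative output, deeper annotations trimmed):

> show configuration interfaces | display inheritance
##
## 'em0' was inherited from group 'node0'
##
em0 {
    unit 0 {
        family inet {
            address 10.10.10.1/24;
        }
    }
}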
Thanks,
Plamen
On Wed, Mar 25, 2015 at 10:06 AM, Paris Arau <parau at juniper.net> wrote:
Hi Plamen,
I tried a similar configuration (both with and without the condition) and I'm not able to get this working:
set groups node0 when member member0
set groups node0 system host-name member0
set groups node0 interfaces em0 unit 0 family inet address 172.30.159.250/23
set groups node1 when member member1
set groups node1 system host-name member1
set groups node1 interfaces em0 unit 0 family inet address 172.30.159.251/23
If FPC1 is the master, it's working just fine, but when I'm pinging FPC0, I get this:
lab at UBUNTU:~$ ping 172.30.159.251
PING 172.30.159.251 (172.30.159.251) 56(84) bytes of data.
64 bytes from 172.30.159.251: icmp_seq=1 ttl=64 time=0.319 ms
^C
--- 172.30.159.251 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
lab at UBUNTU:~$ ping 172.30.159.250
PING 172.30.159.250 (172.30.159.250) 56(84) bytes of data.
From 172.30.159.251: icmp_seq=1 Redirect Host(New nexthop: 172.30.159.250)
From 172.30.159.251: icmp_seq=2 Redirect Host(New nexthop: 172.30.159.250)
^C
--- 172.30.159.250 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1008ms
lab at UBUNTU:~$
If I do a master switchover, then it is the other way around.
Is there anything else that needs to be configured? What release are you
using?
Thanks,
Paris
On 25/03/15 10:40, "Plamen Stoev" <plamen.stoev at profitbricks.com> wrote:
>Hi All,
>
>It is possible to configure each VC member to have its own em0
>configuration, and it works just fine.
>
>You need to apply something like the following:
>
>node0 {
>    when {
>        member member0;
>    }
>    system {
>        host-name member0;
>    }
>    interfaces {
>        em0 {
>            description "member0 em0 interface";
>            unit 0 {
>                family inet {
>                    address 10.10.10.1/24;
>                }
>            }
>        }
>    }
>}
>node1 {
>    when {
>        member member1;
>    }
>    system {
>        host-name member1;
>    }
>    interfaces {
>        em0 {
>            description "member1 em0 interface";
>            unit 0 {
>                family inet {
>                    address 10.10.10.2/24;
>                }
>            }
>        }
>    }
>}
>apply-groups [ node0 node1 ];
>
>Thanks,
>Plamen
>
>
>On Wed, Mar 25, 2015 at 9:20 AM, Tore Anderson <tore at fud.no> wrote:
>
>> * james list <jameslist72 at gmail.com>
>>
>> > On a QFX VC, is there a way to configure the VME interface to respond on
>> > each member of the VC instead of being redirected to the master RE?
>> >
>> > If yes, a little configuration example is appreciated.
>>
>> I haven't tried QFX, but on EX you can use apply-groups to match
>> individual members in the VC, and set up different addressing on each
>> member's "me0" interface (note: you will *not* be using "vme").
>>
>> See http://kb.juniper.net/InfoCenter/index?page=content&id=KB15556
>>
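>> A minimal sketch of the idea (untested on my side; group names and
>> addresses are just placeholders):
>>
>> set groups member0-mgmt when member member0
>> set groups member0-mgmt interfaces me0 unit 0 family inet address 192.0.2.1/24
>> set groups member1-mgmt when member member1
>> set groups member1-mgmt interfaces me0 unit 0 family inet address 192.0.2.2/24
>> set apply-groups [ member0-mgmt member1-mgmt ]
>>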
>> Try it and let the list know if it works on QFX too?
>>
>> Tore