[j-nsp] SXR 650 Redundancy Group Problem

Humair Ali humair at premier.com.pk
Mon Mar 21 00:46:41 EDT 2011


Dear Walaa,

Try using redundancy-group 0 for the same purpose.

Thanks
Humair Ali
________________________________________
From: juniper-nsp-bounces at puck.nether.net [juniper-nsp-bounces at puck.nether.net] On Behalf Of juniper-nsp-request at puck.nether.net [juniper-nsp-request at puck.nether.net]
Sent: Sunday, March 20, 2011 3:50 PM
To: juniper-nsp at puck.nether.net
Subject: juniper-nsp Digest, Vol 100, Issue 57

Send juniper-nsp mailing list submissions to
        juniper-nsp at puck.nether.net

To subscribe or unsubscribe via the World Wide Web, visit
        https://puck.nether.net/mailman/listinfo/juniper-nsp
or, via email, send a message with subject or body 'help' to
        juniper-nsp-request at puck.nether.net

You can reach the person managing the list at
        juniper-nsp-owner at puck.nether.net

When replying, please edit your Subject line so it is more specific
than "Re: Contents of juniper-nsp digest..."


Today's Topics:

   1. SXR 650 Redundancy Group Problem (Walaa Abdel razzak)
   2. Re: disable status vector on juniper router (meryem Z)
   3. Re: 10.0 or 10.4? (Paul Zugnoni)
   4. Re: 10.0 or 10.4? (Doug Hanks)
   5. Re: 10.0 or 10.4? (Paul Zugnoni)
   6. snmp fan bug? and are environmental thresholds configurable? (bas)


----------------------------------------------------------------------

Message: 1
Date: Sat, 19 Mar 2011 19:34:47 +0300
From: "Walaa Abdel razzak" <walaaez at bmc.com.sa>
To: <juniper-nsp at puck.nether.net>
Subject: [j-nsp] SXR 650 Redundancy Group Problem
Message-ID:
        <E2C120A806ED3349A9F9E9913E0C8C1FA346DD at bmcserver.bmc.com.sa>
Content-Type: text/plain;       charset="windows-1256"

Hi Experts



I am configuring a redundancy group to trigger failover in case of interface failure. I have a reth interface for the trust zone with two physical member interfaces, one on the active node and one on the passive node, and the same arrangement for reth1 on the untrust zone. The target is to make traffic go through the passive node if any physical interface on the active node fails. The problem I am facing is that failover happens normally when an interface goes down, but afterwards no traffic passes from trust to untrust or vice versa; when the failed interface comes back up, traffic flows without problems.



The RG configuration is as follows:



test@FW1# show chassis

cluster {

    reth-count 2;

    redundancy-group 0 {

        node 0 priority 100;

        node 1 priority 1;

    }

    redundancy-group 1 {

        node 0 priority 100;

        node 1 priority 1;

        preempt;

        gratuitous-arp-count 4;

        interface-monitor {

            ge-2/0/0 weight 255;   <-- interface on the active node

            ge-2/0/1 weight 255;   <-- interface on the active node

        }

    }

}
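
For comparison, interface monitoring is often configured on the child links of both nodes, so that a link fault on the secondary is also detected. A sketch of that variant, assuming ge-11/0/0 and ge-11/0/1 are the node 1 children shown further down, would be:

    redundancy-group 1 {
        node 0 priority 100;
        node 1 priority 1;
        preempt;
        gratuitous-arp-count 4;
        interface-monitor {
            ge-2/0/0 weight 255;     # node 0 child of reth1
            ge-2/0/1 weight 255;     # node 0 child of reth0
            ge-11/0/0 weight 255;    # node 1 child of reth1
            ge-11/0/1 weight 255;    # node 1 child of reth0
        }
    }

Whether one-sided monitoring alone explains the post-failover traffic loss is not certain; this is only the commonly recommended shape of the stanza.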



When the active interface goes down:



Mar 20 03:43:51  FW1 jsrpd[1085]: JSRPD_RG_STATE_CHANGE: Redundancy-group 1 transitioned from 'primary' to 'secondary-hold' state due to Monitor failed: IF

Mar 20 03:43:52  FW1 jsrpd[1085]: JSRPD_RG_STATE_CHANGE: Redundancy-group 1 transitioned from 'secondary-hold' to 'secondary' state due to Back to back failover interval expired





The interfaces belonging to the reths:



test@FW1# show interfaces ge-2/0/0   <-- active node

gigether-options {

    redundant-parent reth1;

}



{primary:node0}[edit]

test@FW1# show interfaces ge-2/0/1   <-- active node

gigether-options {

    redundant-parent reth0;

}



{primary:node0}[edit]

test@FW1# show interfaces ge-11/0/0   <-- passive node

gigether-options {

    redundant-parent reth1;

}



{primary:node0}[edit]

test@FW1# show interfaces ge-11/0/1   <-- passive node

gigether-options {

    redundant-parent reth0;

}
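
One stanza not shown above is the reth side of the binding; each reth is normally tied to its redundancy group under redundant-ether-options. A sketch (not the poster's verified config) for reth1 would be:

    test@FW1# show interfaces reth1
    redundant-ether-options {
        redundancy-group 1;
    }

with the equivalent under reth0. If that binding is absent or points at the wrong group, the reths may not follow the failover.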



test@FW1# run show interfaces terse | match reth

ge-2/0/0.15             up    down aenet    --> reth1.15   <-- active interface down

ge-2/0/0.20             up    down aenet    --> reth1.20

ge-2/0/0.32767          up    down aenet    --> reth1.32767

ge-2/0/1.5              up    up   aenet    --> reth0.5

ge-2/0/1.32767          up    up   aenet    --> reth0.32767

ge-11/0/0.15            up    up   aenet    --> reth1.15

ge-11/0/0.20            up    up   aenet    --> reth1.20

ge-11/0/0.32767         up    up   aenet    --> reth1.32767

ge-11/0/1.5             up    up   aenet    --> reth0.5

ge-11/0/1.32767         up    up   aenet    --> reth0.32767

reth0                   up    down

reth0.5                 up    down inet     172.16.0.2/30

reth0.32767             up    down

reth1                   up    down

reth1.15                up    down inet     192.168.0.2/30

reth1.20                up    down inet     192.168.1.2/30

reth1.32767             up    down
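
For reference, the redundancy-group state and the monitored-link status can be cross-checked with the standard cluster commands (output omitted here):

    test@FW1> show chassis cluster status redundancy-group 1
    test@FW1> show chassis cluster interfaces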



Any suggestions?



BR,





------------------------------

Message: 2
Date: Sat, 19 Mar 2011 16:52:41 +0000
From: meryem Z <meryem_z at hotmail.com>
To: <stacy at netfigure.com>
Cc: juniper-nsp at puck.nether.net
Subject: Re: [j-nsp] disable status vector on juniper router
Message-ID: <BLU156-w39D865BF2A23190B7C4876E1B30 at phx.gbl>
Content-Type: text/plain; charset="Windows-1252"



hello,

control-word and status-vector are not the same. When doing "show vpls connections" you get both:


control flags - Indicates whether the control word and sequenced delivery
of packets are required. Control flags have 8-bit fields, with the last
two bits being C (control word flag) and S (sequenced frames flag). For
example, the value 2 (binary 10) means C=1 and S=0, indicating that the
control word is required and sequenced delivery of frames is not required
in the BGP updates.
   C=0 - Control word is not required for encapsulating Layer 2 frames.
   C=1 - Control word is required for encapsulating Layer 2 frames.
   S=0 - Sequence number is not used for sequenced delivery of packets.
   S=1 - Sequence number must be used for sequenced delivery of packets.

status-vector - Bit vector advertising the state of local PE-CE circuits
to remote PE routers. A bit value of 0 indicates that the local circuit
and the LSP tunnel to the remote PE router are both up, while a value of
1 indicates that one or both are down.

Thanks.



> Subject: Re: [j-nsp] disable status vector on juniper router
> From: stacy at netfigure.com
> Date: Fri, 18 Mar 2011 11:18:07 -0600
> CC: sfouant at shortestpathfirst.net; juniper-nsp at puck.nether.net
> To: meryem_z at hotmail.com
>
> I'm not certain, but I think this will fix your issue...
>
> [edit routing-instances foo protocols vpls]
> user at R1# show
> no-control-word;
>
> --Stacy
>
> On Mar 18, 2011, at 10:47 AM, meryem Z wrote:
>
> >
> > This problem happened when trying to implement VPLS between Juniper and Huawei routers.
> > It seems that there is an extra byte in the BGP packets sent by Juniper. On Huawei routers it is possible to disable it under the VPLS session.
> >
> >
> > Thank you.
> >
> >
> >> From: sfouant at shortestpathfirst.net
> >> To: meryem_z at hotmail.com
> >> CC: juniper-nsp at puck.nether.net
> >> Subject: RE: [j-nsp] disable status vector on juniper router
> >> Date: Fri, 18 Mar 2011 12:37:19 -0400
> >>
> >>> -----Original Message-----
> >>> From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-
> >>> bounces at puck.nether.net] On Behalf Of meryem Z
> >>> Sent: Friday, March 18, 2011 12:26 PM
> >>> Cc: juniper-nsp at puck.nether.net
> >>> Subject: [j-nsp] disable status vector on juniper router
> >>>
> >>> Hello Community,
> >>>
> >>> For compatibility reasons with Huawei routers we need to disable the
> >>> BGP status vector (or bit vector) on a Juniper router. Is it possible,
> >>> and how?
> >>
> >> I've only heard this when referring to authentication w/ key-chain based
> >> signatures.  Are you referring to this or something else.  Please be more
> >> specific what exactly you are trying to disable from being
> >> advertised/negotiated/etc.
> >>
> >> Stefan Fouant, CISSP, JNCIEx2
> >> www.shortestpathfirst.net
> >> GPG Key ID: 0xB4C956EC
> >>
> >>
> >
> > _______________________________________________
> > juniper-nsp mailing list juniper-nsp at puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>


------------------------------

Message: 3
Date: Sat, 19 Mar 2011 12:20:33 -0700
From: Paul Zugnoni <paul.zugnoni at onlive.com>
To: juniper-nsp <juniper-nsp at puck.nether.net>
Cc: Richard A Steenbergen <ras at e-gerbil.net>
Subject: Re: [j-nsp] 10.0 or 10.4?
Message-ID: <A6CF8063-BD41-435A-AA59-761DA80B17DC at onlive.com>
Content-Type: text/plain; charset="us-ascii"

After upgrading from 10.1 to 10.4R1.9 on a set of our dual-RE2000 MX960s, we observed that the re1 fxp0 interfaces were no longer IP-reachable. Console access and sessions to the "other-routing-engine" both work fine, as does GRES. We see the same behavior on multiple dual-RE MXs. JTAC has confirmed the group config as OK, but hasn't been able to recreate the problem. I'd love to hear from anyone that has seen similar.

Paul Z

On Mar 17, 2011, at 16:52 , Keegan Holley wrote:

> Are these all 10.4R2 bugs or 10.2?
>
>>
>> PR588115 - Changing the forwarding-table export policy twice in a row
>> quickly (while the previous change is still being evaluated) will cause
>> rpd to coredump.
>>
>> PR581139 - Similar to above, but causes the FPC to crash too. Give it
>> several minutes before you commit again following a forwarding-table
>> export policy change.
>>
>> PR523493 - Mysterious FPC crashes
>>
>> PR509303 - Massive SNMP slowness and stalls, severely impacting polling
>> of 10.2R3 boxes with a decent number of interfaces (the more interfaces
>> the worse the situation).
>>
>> PR566782 PR566717 PR540577 - Some more mysterious rpd and pfem crashes,
>> with extra checks added to prevent it in the future.
>>
>> PR559679 - Commit script transient change issue, which sometimes causes
>> changes to not be picked up correctly unless you do a "commit full".
>>
>> PR548166 - Sometimes most or all BGP sessions on a CPU loaded box will
>> drop to Idle following a commit and take 30+ minutes to come back up.
>>
>> PR554456 - Sometimes netconf connections to EX8200's will result in junk
>> error messages being logged to the XML stream, corrupting the netconf
>> session.
>>
>> PR550902 - On a CPU loaded box sometimes BGP policy-statement evaluation
>> will simply stop working, requiring a hard clear of the neighbor (or
>> ironically enough, sometimes just renaming the term in the policy will
>> fix it :P) to restore normal evaluation.
>>
>> PR521993 - Ports on EX8200 FPCs will sometimes not initialize correctly,
>> resulting in situations where for example ports 4 and 5 on every FPC
>> will be able to receive packets but never transmit them. If you continue
>> to try and transmit packets down a wedged port (such as would happen if
>> the port is configured for L2), it will cause the FPC to crash.
>>
>>
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp




------------------------------

Message: 4
Date: Sat, 19 Mar 2011 13:56:01 -0700
From: Doug Hanks <dhanks at juniper.net>
To: Paul Zugnoni <paul.zugnoni at onlive.com>, juniper-nsp
        <juniper-nsp at puck.nether.net>
Cc: Richard A Steenbergen <ras at e-gerbil.net>
Subject: Re: [j-nsp] 10.0 or 10.4?
Message-ID:
        <01E990EBBCA96443B7181E382C024A06598A235F61 at EMBX02-HQ.jnpr.net>
Content-Type: text/plain; charset="us-ascii"

Do you have a backup-router configured in the re1 group?
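
For reference, a backup-router under the re1 group typically looks something like the following sketch; the addresses are placeholders, not taken from this thread:

    groups {
        re1 {
            system {
                backup-router 192.0.2.1 destination 198.51.100.0/24;
            }
            interfaces {
                fxp0 {
                    unit 0 {
                        family inet {
                            address 192.0.2.10/24;
                        }
                    }
                }
            }
        }
    }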

-----Original Message-----
From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Paul Zugnoni
Sent: Saturday, March 19, 2011 12:21 PM
To: juniper-nsp
Cc: Richard A Steenbergen
Subject: Re: [j-nsp] 10.0 or 10.4?

After upgrading from 10.1 to 10.4R1.9 on a set of our dual-RE2000 MX960s, we observed that the re1 fxp0 interfaces were no longer IP-reachable. Console access and sessions to the "other-routing-engine" both work fine, as does GRES. We see the same behavior on multiple dual-RE MXs. JTAC has confirmed the group config as OK, but hasn't been able to recreate the problem. I'd love to hear from anyone that has seen similar.

Paul Z

On Mar 17, 2011, at 16:52 , Keegan Holley wrote:

> Are these all 10.4R2 bugs or 10.2?
>
>>
>> PR588115 - Changing the forwarding-table export policy twice in a row
>> quickly (while the previous change is still being evaluated) will cause
>> rpd to coredump.
>>
>> PR581139 - Similar to above, but causes the FPC to crash too. Give it
>> several minutes before you commit again following a forwarding-table
>> export policy change.
>>
>> PR523493 - Mysterious FPC crashes
>>
>> PR509303 - Massive SNMP slowness and stalls, severely impacting polling
>> of 10.2R3 boxes with a decent number of interfaces (the more interfaces
>> the worse the situation).
>>
>> PR566782 PR566717 PR540577 - Some more mysterious rpd and pfem crashes,
>> with extra checks added to prevent it in the future.
>>
>> PR559679 - Commit script transient change issue, which sometimes causes
>> changes to not be picked up correctly unless you do a "commit full".
>>
>> PR548166 - Sometimes most or all BGP sessions on a CPU loaded box will
>> drop to Idle following a commit and take 30+ minutes to come back up.
>>
>> PR554456 - Sometimes netconf connections to EX8200's will result in junk
>> error messages being logged to the XML stream, corrupting the netconf
>> session.
>>
>> PR550902 - On a CPU loaded box sometimes BGP policy-statement evaluation
>> will simply stop working, requiring a hard clear of the neighbor (or
>> ironically enough, sometimes just renaming the term in the policy will
>> fix it :P) to restore normal evaluation.
>>
>> PR521993 - Ports on EX8200 FPCs will sometimes not initialize correctly,
>> resulting in situations where for example ports 4 and 5 on every FPC
>> will be able to receive packets but never transmit them. If you continue
>> to try and transmit packets down a wedged port (such as would happen if
>> the port is configured for L2), it will cause the FPC to crash.
>>
>>
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp


_______________________________________________
juniper-nsp mailing list juniper-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



------------------------------

Message: 5
Date: Sat, 19 Mar 2011 15:17:09 -0700
From: Paul Zugnoni <paul.zugnoni at onlive.com>
To: Doug Hanks <dhanks at juniper.net>
Cc: Richard A Steenbergen <ras at e-gerbil.net>, juniper-nsp
        <juniper-nsp at puck.nether.net>
Subject: Re: [j-nsp] 10.0 or 10.4?
Message-ID: <854187C7-2F30-4E3E-98AD-B714E68F62DE at onlive.com>
Content-Type: text/plain; charset="us-ascii"

backup-router - Originally we did not have this. Our SE and JTAC suggested it, so we added it. Adding it did not change the situation (we didn't originally need a backup router because fxp0 was used only for inbound emergency access from a local subnet IP). I forgot to mention in this thread that fxp0 is placed in a logical system to keep the mgmt subnet out of inet.0 as a direct route. Removing the logical system in our lab made re1's fxp0 reachable again, though its reachability didn't survive a reboot.

Paul Z

On Mar 19, 2011, at 13:56 , Doug Hanks wrote:

> Do you have a backup-router configured in the re1 group?
>
> -----Original Message-----
> From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Paul Zugnoni
> Sent: Saturday, March 19, 2011 12:21 PM
> To: juniper-nsp
> Cc: Richard A Steenbergen
> Subject: Re: [j-nsp] 10.0 or 10.4?
>
> After upgrading from 10.1 to 10.4R1.9 on a set of our dual-RE2000 MX960s, we observed that the re1 fxp0 interfaces were no longer IP-reachable. Console access and sessions to the "other-routing-engine" both work fine, as does GRES. We see the same behavior on multiple dual-RE MXs. JTAC has confirmed the group config as OK, but hasn't been able to recreate the problem. I'd love to hear from anyone that has seen similar.
>
> Paul Z
>
> On Mar 17, 2011, at 16:52 , Keegan Holley wrote:
>
>> Are these all 10.4R2 bugs or 10.2?
>>
>>>
>>> PR588115 - Changing the forwarding-table export policy twice in a row
>>> quickly (while the previous change is still being evaluated) will cause
>>> rpd to coredump.
>>>
>>> PR581139 - Similar to above, but causes the FPC to crash too. Give it
>>> several minutes before you commit again following a forwarding-table
>>> export policy change.
>>>
>>> PR523493 - Mysterious FPC crashes
>>>
>>> PR509303 - Massive SNMP slowness and stalls, severely impacting polling
>>> of 10.2R3 boxes with a decent number of interfaces (the more interfaces
>>> the worse the situation).
>>>
>>> PR566782 PR566717 PR540577 - Some more mysterious rpd and pfem crashes,
>>> with extra checks added to prevent it in the future.
>>>
>>> PR559679 - Commit script transient change issue, which sometimes causes
>>> changes to not be picked up correctly unless you do a "commit full".
>>>
>>> PR548166 - Sometimes most or all BGP sessions on a CPU loaded box will
>>> drop to Idle following a commit and take 30+ minutes to come back up.
>>>
>>> PR554456 - Sometimes netconf connections to EX8200's will result in junk
>>> error messages being logged to the XML stream, corrupting the netconf
>>> session.
>>>
>>> PR550902 - On a CPU loaded box sometimes BGP policy-statement evaluation
>>> will simply stop working, requiring a hard clear of the neighbor (or
>>> ironically enough, sometimes just renaming the term in the policy will
>>> fix it :P) to restore normal evaluation.
>>>
>>> PR521993 - Ports on EX8200 FPCs will sometimes not initialize correctly,
>>> resulting in situations where for example ports 4 and 5 on every FPC
>>> will be able to receive packets but never transmit them. If you continue
>>> to try and transmit packets down a wedged port (such as would happen if
>>> the port is configured for L2), it will cause the FPC to crash.
>>>
>>>
>> _______________________________________________
>> juniper-nsp mailing list juniper-nsp at puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp




------------------------------

Message: 6
Date: Sun, 20 Mar 2011 11:50:13 +0100
From: bas <kilobit at gmail.com>
To: juniper-nsp at puck.nether.net
Subject: [j-nsp] snmp fan bug? and are environmental thresholds
        configurable?
Message-ID:
        <AANLkTikcCAuh+b70wXWDYjD_Q1qG48=5fkx-q_WW9L+O at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi,

We have a bunch of MX480s running either 10.3R3 or 10.4R2.
About 20 times a day the fans in these boxes switch from normal speed
to intermediate speed.

When this happens our Nagios check reports an unknown status on the fans.
When I do an snmpwalk of OIDs .1.3.6.1.4.1.2636.3.1.13.1.6.4 and
.1.3.6.1.4.1.2636.3.1.13.1.6, I indeed see integer 1 (unknown).
This would suggest a bug, right?
Or is it an old MIB? In jnxOperatingState I see only running and
runningAtFullSpeed; should intermediate be reported as "running"?
I cannot find any mention of this issue in the release notes of 10.3 or 10.4.
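
For anyone wanting to reproduce this, the walk is along the lines of the following sketch; the community string and hostname are placeholders, and the value list is my reading of jnxOperatingState in JUNIPER-MIB:

    $ snmpwalk -v2c -c public mx480.example.net .1.3.6.1.4.1.2636.3.1.13.1.6
    # jnxOperatingState: unknown(1), running(2), ready(3), reset(4),
    # runningAtFullSpeed(5), down(6), standby(7)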

Either way, do any of you have a PR about the issue? It would save a
lot of time and effort if I open a case and mention a PR number.

My other question is related to the environmental thresholds.
Apparently the temperature in the facility where these boxes are
located is just around the threshold where they switch from
normal to intermediate.
Does anyone know if these thresholds are configurable? I'd like to
lower the threshold by 2 degrees C.
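
Whether or not they can be changed, on releases that support it the current values can at least be inspected with the following (a sketch; the prompt is a placeholder):

    user@mx480> show chassis temperature-thresholds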

Thanks,

Bas


------------------------------

_______________________________________________
juniper-nsp mailing list
juniper-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

End of juniper-nsp Digest, Vol 100, Issue 57
********************************************


