[j-nsp] SRX650 Clustering Issue

Walaa Abdel razzak walaaez at bmc.com.sa
Wed Mar 9 11:29:35 EST 2011


Thanks All

Now I need to configure the reth interface on top of an ae interface
instead of a physical interface, because I need more than one gig of
bandwidth on each node. The problem is that there is no option to join
ae0 to reth0, as the completions show:

admin@FW1# set interfaces ae0 aggregated-ether-options ?
Possible completions:
+ apply-groups         Groups from which to inherit configuration data
+ apply-groups-except  Don't inherit configuration data from these groups
> ethernet-switch-profile  Ethernet virtual LAN/media access control-level options
  flow-control         Enable flow control
> lacp                 Link Aggregation Control Protocol configuration
  link-protection      Enable link protection mode
  link-speed           Link speed of individual interface that joins the AE
  loopback             Enable loopback
  minimum-links        Minimum number of aggregated links (1..8)
  no-flow-control      Don't enable flow control
  no-link-protection   Don't enable link protection mode
  no-loopback          Don't enable loopback



Note: the target is to have more than one gig of link capacity on each
node and to load-balance across the links. I tried using the ae0
interface as described above, but it didn't work. Any other ideas are
welcome; a sketch of what I was hoping for is below.
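
Reading the docs, it seems a reth can take multiple child links per node
directly, with LACP enabled on the reth itself rather than nesting an ae
interface underneath it. Something like this, perhaps (the interface
names are just examples; I have not verified this yet):

set interfaces ge-0/0/4 gigether-options redundant-parent reth0
set interfaces ge-0/0/5 gigether-options redundant-parent reth0
set interfaces ge-5/0/4 gigether-options redundant-parent reth0
set interfaces ge-5/0/5 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options lacp active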

BR,

-----Original Message-----
From: Stefan Fouant [mailto:sfouant at shortestpathfirst.net] 
Sent: Wednesday, March 09, 2011 3:26 PM
To: Walaa Abdel razzak
Cc: Ben Dale; juniper-nsp at puck.nether.net
Subject: Re: [j-nsp] SRX650 Clustering Issue

You do not need to configure an IP address on the fab link for proper
operation.
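
As I understand it, defining the member interfaces is all that's needed;
Junos handles the fabric link's internal addressing itself. Echoing Ben's
earlier example:

set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-5/0/2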

Stefan Fouant

Sent from my iPad

On Mar 9, 2011, at 2:16 AM, "Walaa Abdel razzak" <walaaez at bmc.com.sa>
wrote:

> Thanks. HA is now configured, but regarding the fab link, is it
> necessary to have an L3 address or not?
> 
> BR,
> -----Original Message-----
> From: Ben Dale [mailto:bdale at comlinx.com.au]
> Sent: Sunday, March 06, 2011 12:12 PM
> To: Walaa Abdel razzak
> Cc: Scott T. Cameron; juniper-nsp at puck.nether.net
> Subject: Re: [j-nsp] SRX650 Clustering Issue
> 
> This is a pretty common error when you are bringing pre-configured 
> devices together in a chassis cluster.
> 
> My advice would be to run the following from edit mode on each box:
> 
> delete interfaces
> delete vlans
> delete security
> delete protocols
> 
> Then commit and cable them together (control AND fabric).  If you've 
> run the set chassis cluster command correctly, the boxes should now 
> come together.
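> 
> (For reference, the enable command is run from operational mode on each
> node, along these lines, with cluster-id 1 just an example value:
> 
> set chassis cluster cluster-id 1 node 0 reboot
> set chassis cluster cluster-id 1 node 1 reboot
> 
> run on node 0 and node 1 respectively.)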
> 
> After that you should be able to make all configuration changes from
> the primary box, so next assign the fabric interfaces:
> 
> set interfaces fab0 fabric-options member-interfaces ge-0/0/2
> set interfaces fab1 fabric-options member-interfaces ge-5/0/2
> 
> And then build some redundancy groups:
> 
> set chassis cluster control-link-recovery
> set chassis cluster reth-count 15
> set chassis cluster redundancy-group 0 node 0 priority 100
> set chassis cluster redundancy-group 0 node 1 priority 1
> set chassis cluster redundancy-group 1 node 0 priority 100
> set chassis cluster redundancy-group 1 node 1 priority 1
> set chassis cluster redundancy-group 1 preempt
> 
> Then build reth interfaces and assign them to redundancy groups, for
> example:
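> 
> set interfaces ge-0/0/3 gigether-options redundant-parent reth0
> set interfaces ge-5/0/3 gigether-options redundant-parent reth0
> set interfaces reth0 redundant-ether-options redundancy-group 1
> set interfaces reth0 unit 0 family inet address 192.0.2.1/24
> 
> (the interface names and address here are placeholders; adjust to your
> topology)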
> 
> On 06/03/2011, at 12:17 AM, Walaa Abdel razzak wrote:
> 
>> Hi Scott
>> 
>> 
>> 
>> The old configuration was a simple test config (hostname, aggregated
>> ethernet, .....) as it's a fresh FW. After enabling clustering with the
>> standard set chassis cluster ...... command and rebooting, we got the
>> following:
>> 
>> 
>> 
>> {hold:node0}
>> 
>> root@-FW1> edit
>> 
>> warning: Clustering enabled; using private edit
>> 
>> error: shared configuration database modified
>> 
>> Please temporarily use 'configure shared' to commit
>> 
>> outstanding changes in the shared database, exit,
>> 
>> and return to configuration mode using 'configure'
>> 
>> 
>> 
>> When I issue most commands, I get the following:
>> 
>> 
>> 
>> {hold:node0}
>> 
>> root@-FW1> show version
>> 
>> error: Could not connect to node0 : No route to host
>> 
>> 
>> 
>> The JUNOS version is 10.3.
>> 
>> 
>> 
>> Also, here is a sample of the chassisd log:
>> 
>> 
>> 
>> Mar  5 19:32:49 completed chassis state from ddl
>> 
>> Mar  5 19:32:49 ch_set_non_stop_forwarding_cfg:Setting
>> non-stop-forwarding to Disabled, source DDL
>> 
>> Mar  5 19:32:49 ch_do_multichassis_overrides:Setting multichassis 
>> replication to Disabled
>> 
>> Mar  5 19:32:49 config_do_overrides: Keepalives not set. Setting it 
>> to
>> 300 secs
>> 
>> Mar  5 19:32:49 if_init
>> 
>> Mar  5 19:32:49 Skip cleaning pic state on LCC
>> 
>> Mar  5 19:32:49 chassis_alarm_module_init
>> 
>> Mar  5 19:32:49 timer_init
>> 
>> Mar  5 19:32:49 main_snmp_init
>> 
>> Mar  5 19:32:49 snmp_init: snmp_chassis_id = 0, chas_type = 1
>> 
>> Mar  5 19:32:49 chas_do_registration: or_obj = 0xdfe400, or_rows = 23
>> 
>> Mar  5 19:32:49 chas_do_registration: or_obj = 0xdfe800, or_rows = 23
>> 
>> Mar  5 19:32:49 chas_do_registration: or_obj = 0xe04000, or_rows = 23
>> 
>> Mar  5 19:32:49 chas_do_registration: or_obj = 0xd58940, or_rows = 2
>> 
>> Mar  5 19:32:49 chas_do_registration: or_obj = 0xdfec00, or_rows = 23
>> 
>> Mar  5 19:32:49 CHASSISD_SYSCTL_ERROR: ch_srxsme_mgmt_port_mac_init:
>> hw.re.jseries_fxp_macaddr error from sysctlbyname: File exists (errno
>> 17)
>> 
>> Mar  5 19:32:49 CHASSISD_SYSCTL_ERROR: ch_srxsme_mgmt_port_mac_init:
>> hw.re.jseries_fxp_macaddr error from sysctlbyname: File exists (errno
>> 17)
>> 
>> Mar  5 19:33:08
>> 
>> Mar  5 19:33:08 trace flags 7f00 trace file /var/log/chassisd size
>> 3000000 cnt 5 no-remote-trace 0
>> 
>> Mar  5 19:33:08 rtsock_init synchronous socket
>> 
>> Mar  5 19:33:08 disabling rtsock public state on sync socket (LCC)
>> 
>> Mar  5 19:33:08 rtsock_init asynchronous socket
>> 
>> Mar  5 19:33:08 disabling rtsock public state on async socket (LCC)
>> 
>> Mar  5 19:33:08 rtsock_init non ifstate async socket
>> 
>> Mar  5 19:33:08 disabling rtsock public state on non ifstate async 
>> socket (LCC)
>> 
>> Mar  5 19:33:08 BCM5910X (bcm5910x_driver_init): Driver initialization
>> succeeded
>> 
>> Mar  5 19:33:08 POE (ch_poe_srxsme_check_pem_status): POE power good 
>> signal for power supply 1 not asserted
>> 
>> Mar  5 19:33:08 ch_srxsme_poe_blob_delete: fpc 2
>> 
>> Mar  5 19:33:08 ch_srxsme_poe_blob_delete: fpc 4
>> 
>> Mar  5 19:33:08 ch_srxsme_poe_blob_delete: fpc 6
>> 
>> Mar  5 19:33:08 ch_srxsme_poe_blob_delete: fpc 8
>> 
>> Mar  5 19:33:08 POE (ch_srxsme_poe_init): poe init done
>> 
>> Mar  5 19:33:08 parse_configuration ddl
>> 
>> Mar  5 19:33:08 cfg_ddl_chasd_handle_config_option: Found {chassis,
>> aggregated-devices}: Object Config action: DAX_ITEM_CHANGED
>> 
>> Mar  5 19:33:08 Walking Object {aggregated-devices,  }
>> 
>> Mar  5 19:33:08 cfg_ddl_chasd_handle_config_option: Found 
>> {aggregated-devices, ethernet}: Object Config action: 
>> DAX_ITEM_CHANGED
>> 
>> Mar  5 19:33:08 Walking Object {ethernet, device-count}
>> 
>> Mar  5 19:33:08 configured aggregated ethernet device count 3
>> 
>> Mar  5 19:33:08 aggregated-device ethernet
>> 
>> Mar  5 19:33:08 configured aggregated ethernet state
>> 
>> Mar  5 19:33:08 cfg_ddl_chasd_handle_config_option: Did not find 
>> {chassis, cluster}: Object Config action: DAX_ITEM_CHANGED
>> 
>> Mar  5 19:33:08 No routing-options source_routing configuration 
>> options set
>> 
>> Mar  5 19:33:08 protocol-id queue-depth delete-flag
>> 
>> Mar  5 19:33:08 Total Queue Allocation: 0/1024
>> 
>> Mar  5 19:33:08 POE (poe_handle_maxpower_on_fpc):  FPC 2, max-power 0
>> 
>> Mar  5 19:33:08 POE (poe_handle_maxpower_on_fpc):  FPC 4, max-power 0
>> 
>> Mar  5 19:33:08 POE (poe_handle_maxpower_on_fpc):  FPC 6, max-power 0
>> 
>> Mar  5 19:33:08 POE (poe_handle_maxpower_on_fpc):  FPC 8, max-power 0
>> 
>> Mar  5 19:33:08 POE (poe_handle_maxpower_on_fpc):  FPC 2, max-power 0
>> 
>> Mar  5 19:33:08 POE (poe_handle_maxpower_on_fpc):  FPC 4, max-power 0
>> 
>> Mar  5 19:33:08 POE (poe_handle_maxpower_on_fpc):  FPC 6, max-power 0
>> 
>> Mar  5 19:33:08 POE (poe_handle_maxpower_on_fpc):  FPC 8, max-power 0
>> 
>> Mar  5 19:33:08 completed chassis state from ddl
>> 
>> Mar  5 19:33:08 ch_set_non_stop_forwarding_cfg:Setting
>> non-stop-forwarding to Disabled, source DDL
>> 
>> Mar  5 19:33:08 ch_do_multichassis_overrides:Setting multichassis 
>> replication to Disabled
>> 
>> Mar  5 19:33:08 config_do_overrides: Keepalives not set. Setting it 
>> to
>> 300 secs
>> 
>> Mar  5 19:33:08 if_init
>> 
>> Mar  5 19:33:08 Skip cleaning pic state on LCC
>> 
>> Mar  5 19:33:08 chassis_alarm_module_init
>> 
>> Mar  5 19:33:08 timer_init
>> 
>> Mar  5 19:33:08 main_snmp_init
>> 
>> Mar  5 19:33:08 snmp_init: snmp_chassis_id = 0, chas_type = 1
>> 
>> Mar  5 19:33:08 chas_do_registration: or_obj = 0xdfe400, or_rows = 23
>> 
>> Mar  5 19:33:08 chas_do_registration: or_obj = 0xdfe800, or_rows = 23
>> 
>> Mar  5 19:33:08 chas_do_registration: or_obj = 0xe04000, or_rows = 23
>> 
>> Mar  5 19:33:08 chas_do_registration: or_obj = 0xd58940, or_rows = 2
>> 
>> Mar  5 19:33:08 chas_do_registration: or_obj = 0xdfec00, or_rows = 23
>> 
>> Mar  5 19:33:09 hup_init:Hupping init!
>> 
>> Mar  5 19:33:09 JACT_INFO: Created re (h=9) Anti-Counterfeit FSM 
>> object
>> 
>> Mar  5 19:33:09  ---cb_reset----re (h=9): reason=SUCCESS (0)
>> 
>> Mar  5 19:33:09 mbus_srxmr_reset_sre_dev: Resetting SRE DEV 5
>> 
>> Mar  5 19:33:09 Resetting anti-counterfeit chip
>> 
>> Mar  5 19:33:09 smb_open, gpiofd 29
>> 
>> Mar  5 19:33:09 initial startup complete
>> 
>> Mar  5 19:33:09 main initialization done....
>> 
>> Mar  5 19:33:09 alarmd connection completed
>> 
>> Mar  5 19:33:09 send: clear all chassis class alarms
>> 
>> Mar  5 19:33:09 craftd connection completed
>> 
>> Mar  5 19:33:13 JACT_INFO:  re (h=9): enter state: HOLD
>> 
>> Mar  5 19:34:13 JACT_INFO:  re: Read public key info...
>> 
>> Mar  5 19:34:13 JACT_INFO:  re: Prepare and send encrypted random 
>> messsage
>> 
>> Mar  5 19:34:13 JACT_INFO:  re (h=9): enter state: DOING
>> 
>> Mar  5 19:36:09 Attempting md comp chunkbuf pool shrink
>> 
>> Mar  5 19:37:18 JACT_INFO:  re: Read and check decrypted  messsage
>> 
>> Mar  5 19:37:18  ---cb_done----re (h=9): auth=passed
>> 
>> Mar  5 19:37:18 re (h=9): AC authentication passed
>> 
>> Mar  5 19:37:18 JACT_INFO:  re (h=9): enter state: PASSED
>> 
>> 
>> 
>> -----Original Message-----
>> From: juniper-nsp-bounces at puck.nether.net
>> [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Scott T.
>> Cameron
>> Sent: Saturday, March 05, 2011 4:46 PM
>> To: juniper-nsp at puck.nether.net
>> Subject: Re: [j-nsp] SRX650 Clustering Issue
>> 
>> 
>> 
>> I don't think this is enough information to really help you.
>> 
>> 
>> 
>> What does chassisd log say?
>> 
>> Can you provide a sanitized config?
>> 
>> 
>> 
>> Scott
>> 
>> 
>> 
>> On Sat, Mar 5, 2011 at 8:24 AM, Walaa Abdel razzak
>> <walaaez at bmc.com.sa> wrote:
>> 
>> 
>> 
>>> Hi All
>>> 
>>> We were connecting two SRX650s to work in active/passive mode. They
>>> previously had an old configuration, and once we enabled clustering
>>> and rebooted the boxes, they went into hold mode, and we get a
>>> shared-violation message even after rebooting again with no user
>>> logged in. Any suggestions?
>>> 
>>> BR,
>> 
>>> _______________________________________________
>>> juniper-nsp mailing list
>>> juniper-nsp at puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>> 
>> _______________________________________________
>> juniper-nsp mailing list
>> juniper-nsp at puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>> 
> 
> 
> _______________________________________________
> juniper-nsp mailing list
> juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp


