[f-nsp] FCX and target path 8.0.10m (and an aside)

Youssef Bengelloun-Zahr bengelly at gmail.com
Thu Feb 22 06:20:26 EST 2018


Personally, I never moved away from 7.4.

Best regards.



> Le 22 févr. 2018 à 11:16, Jethro R Binks <jethro.binks at strath.ac.uk> a écrit :
> 
> The silence was deafening!
> 
> So, a bit of a development with this.  We had three stack failure events 
> which required a hard reboot to sort.  We made the decision to upgrade to 
> 8.0.30q (we also replaced the CX4 cable, in case it was degraded in some 
> way).  The upgrade itself went fine.
> 
> Initially after the reboot, we didn't see the ping loss issues.  But over 
> the past few hours it has started to creep in again, much the same as 
> before.  I've not re-done all the tests, like shutting down one OSPF 
> interface and then the other to see if it makes any difference, but my 
> gut feeling is it will be just the same.
> 
> Does anyone have any thoughts?  Could there be some sort of hardware 
> failure in one of the units that might cause these symptoms?  Perhaps 
> there are more diagnostic tools available to me.  It might also be 
> interesting to downgrade to the 7.4 version we were running previously, 
> where we didn't see these issues, but that means more service-affecting 
> downtime.
> 
> Jethro.
> 
> 
> 
>> On Fri, 16 Feb 2018, Jethro R Binks wrote:
>> 
>> I thought I was doing the right thing by upgrading a couple of my slightly 
>> aging FCXs to target path release 8.0.10m, which tested fine on an 
>> unstacked unit with a single OSPF peering.
>> 
>> The ones I am running this on are stacks of two, each with two 10 Gb/s 
>> connections to the core and one OSPF peering on each.
>> 
>> Since the upgrade, both stacks suffer packet loss every 2 minutes (just 
>> about exactly) for about 5-10 seconds, demonstrated by pinging either a 
>> host through the stack, or an interface on the stack.  There are no log 
>> messages or changes in OSPF status or spanning tree activity.  When it 
>> happens, of course a remote session to the box stalls for the same period.
>> 
>> Shutting down either one of the OSPF links doesn't make a difference.  
>> CPU never changes from 1%.  No errors on any interfaces.  I've used dm 
>> commands to catch packets going to the CPU at about the right time, and 
>> I see nothing particularly alarming, and certainly no flooding of 
>> anything.
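
A symptom like this (loss windows of 5-10 seconds recurring on a roughly 2-minute period) can be confirmed from any host with a short monitoring script.  A minimal sketch, where TARGET and the 5-second grouping gap are assumptions, not anything from the thread:

```python
#!/usr/bin/env python3
# Sketch: log ping-loss windows to check for a ~2-minute periodicity.
# TARGET is an assumption -- use a host reachable through the affected stack.
import subprocess
import time

TARGET = "192.0.2.1"

def group_loss_windows(failures, gap=5.0):
    """Group timestamps of failed pings into (start, end) loss windows.

    Consecutive failures within `gap` seconds belong to the same window;
    a larger gap starts a new one.
    """
    windows = []
    for t in sorted(failures):
        if windows and t - windows[-1][1] <= gap:
            windows[-1] = (windows[-1][0], t)
        else:
            windows.append((t, t))
    return windows

def monitor(duration=600):
    """Ping TARGET once a second for `duration` seconds, then print each
    loss window and the interval since the previous one."""
    failures = []
    start = time.time()
    while time.time() - start < duration:
        ok = subprocess.run(["ping", "-c", "1", "-W", "1", TARGET],
                            stdout=subprocess.DEVNULL).returncode == 0
        if not ok:
            failures.append(time.time())
        time.sleep(1)
    windows = group_loss_windows(failures)
    for i, (s, e) in enumerate(windows):
        since = f", {s - windows[i - 1][0]:.0f}s after previous" if i else ""
        print(f"{time.strftime('%H:%M:%S', time.localtime(s))}: "
              f"loss for {e - s:.0f}s{since}")
```

If the period really is about 120 seconds, the printed intervals between window starts should cluster tightly around that value, which points at some periodic process rather than random congestion.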
>> 
>> This only started after the upgrade to 8.0.10m on each of them.  I have 
>> other FCX stacks on other code versions not exhibiting this issue.
>> 
>> Some of the comments in this thread seem to be reflective of my issue:  
>> 
>> https://www.reddit.com/r/networking/comments/4j47uo/brocade_is_ruining_my_week_i_need_help_to/
>> 
>> I'm a little dismayed to hit these problems on a Target Path release, 
>> which I assumed would be pretty sound.  I've been eyeing a potential 
>> upgrade to something in the 8.0.30 train (recommendations?), with the 
>> usual added excitement of a fresh set of bugs.
>> 
>> Before I consider reporting it, I wondered if anyone had any useful 
>> observations or suggestions.
>> 
>> And, as an aside, I wonder how we're all getting along in our new homes 
>> for our dissociated Brocade family now.  Very sad to see the assets of a 
>> once good company scattered to the four winds like this.
>> 
>> Jethro.
>> 
>> .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
>> Jethro R Binks, Network Manager,
>> Information Services Directorate, University Of Strathclyde, Glasgow, UK
>> 
>> The University of Strathclyde is a charitable body, registered in
>> Scotland, number SC015263.
>> _______________________________________________
>> foundry-nsp mailing list
>> foundry-nsp at puck.nether.net
>> http://puck.nether.net/mailman/listinfo/foundry-nsp
>> 
> 

