[j-nsp] What is this ethernet switching trace telling us?
John Neiberger
jneiberger at gmail.com
Mon Jun 10 10:23:28 EDT 2013
I think they do have them configured differently, but they swear they don't,
and I don't have the access or the knowledge on that platform to disagree
with them. The network side is straightforward switching. You are correct
that we have multiple groups managing these devices. This group is the only
one having these problems, but even then it's only with a couple of
locations. The rest of theirs are fine. We've been having major conference
calls to discuss this problem and those have included several people from
Acme Packet. I can only assume that they've verified the high availability
config.
I was hoping that they were running Linux under the hood, since this is
apparently the default behavior of Linux. However, I just heard that
they're running VxWorks. I'll have to do some digging to see if it also
behaves similarly by default.
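[For context, a sketch of the Linux default John is likely referring to: by
default, Linux answers ARP requests for any local IP address on any interface
(arp_ignore=0) and may use any local source address in ARP traffic
(arp_announce=0). On a multi-homed box this can make the same IP appear behind
two MACs/ports from the switch's point of view. This is an assumption about
which "default behavior" is meant; the sysctls themselves are standard.]

```shell
# Inspect the relevant ARP defaults (0/0 is the kernel default):
sysctl net.ipv4.conf.all.arp_ignore    # 0 = reply for any local address, on any interface
sysctl net.ipv4.conf.all.arp_announce  # 0 = use any local source address in ARP requests

# To restrict ARP replies to the interface actually holding the target
# address, and prefer that interface's own address as the ARP source:
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

If the appliance really were Linux underneath, those two sysctls would be the
usual knobs for stopping a standby interface from answering ARP.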
Thanks!
John
On Mon, Jun 10, 2013 at 2:37 AM, Phil Mayers <p.mayers at imperial.ac.uk> wrote:
> On 06/09/2013 04:59 PM, John Neiberger wrote:
>
> We have several of these throughout our network and we're only seeing this
>> problem in a couple of cases. The rest work just fine. Most of the SBCs
>> are
>>
>
> Obvious question: are you sure those two aren't configured differently
> (wrongly) compared to the others, or running different software?
>
> Reading between the lines it sounds like a different group runs these
> devices, and in my experience, kit like this with odd network interface
> capabilities can be misconfigured by staff that don't fully understand
> networking - and it can be exacerbated by vendors using odd terminology
> (e.g. "PHY" used in a non-standard way) and confusing people.
>
> Are you sure they haven't just set up these two devices in active-active
> mode? Or more likely, something that *sounds* like an active-passive mode
> in the UI, to a non-expert, but is really some kind of weird active-active
> spatchcock.
>
> Our storage team did this with a couple of NetApps and the symptoms were
> pretty much identical. It only really brought things crashing down when
> they started sending 1Gbit/sec of performance testing traffic from them...
>
> _______________________________________________
> juniper-nsp mailing list
> juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>