[c-nsp] Issues with NIC teams on Unix and ESX servers
Kevin Blackham
blackham at gmail.com
Tue Sep 4 09:47:02 EDT 2007
You might also look at setting up the pair of NICs in a bridge
(brctl/br0) and running spanning tree on a VLAN trunked between the
two switches, with HSRP/VRRP for SVI gateway failover.
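On the host side that would look roughly like this (a rough sketch
only; interface names and the address are placeholders, adjust to your
environment):

  # create a bridge over both NICs and enable STP on it
  brctl addbr br0
  brctl addif br0 eth0
  brctl addif br0 eth1
  brctl stp br0 on
  # move the host's IP onto the bridge interface
  ifconfig br0 10.1.100.10 netmask 255.255.255.0 up

The SVIs on the two 6513s then carry the HSRP/VRRP virtual address as
usual, and the host points its default gateway at that.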
On 9/4/07, Kamal Dissanayaka <kamalasiri at gmail.com> wrote:
> Thanks Phil,
>
> The Unix team informed me that the server is configured for round-robin
> and that both NICs present the same MAC address. That should be the
> reason the switch learned the same MAC address from the directly
> connected interfaces and through the inter-switch link.
> Still, the Unix team argues that this config worked fine on CatOS
> without any issues! I will request that they move to active/standby.
> Regarding the VMware host, I haven't got an outage window to play
> around in, but I will check whether it can have the same issue as the
> Unix server.
>
> Best Regards
>
> Kamal
>
>
> On 9/4/07, Phil Mayers <p.mayers at imperial.ac.uk> wrote:
> >
> > On Mon, 2007-09-03 at 23:12 +1000, Kamal Dissanayaka wrote:
> > > Hi,
> > >
> > > We have recently migrated two of the 6513s in our datacenter from
> > > CatOS to native IOS. They were running CatOS 7 on Sup2 with the MSFC
> > > on 12.1, and are now on IOS 12.2(18)SXF10.
> > > After the change we got strange issues with a Unix and an ESX server.
> > > Sw1 and sw2 (the 6513s) are connected with an EtherChannel trunk and
> > > HSRP is configured on them.
> > >
> > > 1. The ESX server (see the diagram) hosts 6 VMware guests and has
> > > three NICs configured as a NIC team. Two interfaces connect to sw1
> > > and the other connects to sw2. When all three interfaces are up,
> > > servers 3, 4 & 6 are not reachable. I can see increasing output drops
> > > on the sw1 interfaces, and the MAC addresses of the unreachable
> > > servers are learned through the SW1 interfaces. SW1 has correct ARP
> > > entries as well, but the servers don't have ARP entries for the
> > > default gateway. When I shut down the interfaces connected to sw1,
> > > everything works fine through the interface connected to sw2.
> >
> > What happens if you just connect it to sw1 or sw2?
> >
> > You may be trying to do something similar to item 2 below, which won't
> > work - see my answer below.
> >
> > >
> > > 2. The Unix server, which has two teamed network interfaces, shows
> > > high packet loss when pinged, and users complain about slow
> > > performance. Both server interfaces show the same MAC address when I
> > > check on the switches. On SW1 the Unix server's MAC address is
> > > learned through the directly connected interface. On SW2 the switch
> > > flips the Unix server's MAC between the EtherChannel and the directly
> > > connected interface a few times a minute. No spanning tree blocking
> > > or interface flaps are occurring.
> >
> > You can't just plug a "NIC team" (dumb phrase) into two separate
> > switches like that when both NICs emit packets. It won't work - it
> > can't work.
> >
> > You need one of:
> >
> > 1. active/standby NIC teaming; standby NIC emits no traffic
> > 2. active/active with separate MAC addresses
> > 3. some kind of IP multipathing (not recommended)
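> >
> > Option 1 on a Linux box would look roughly like this (a rough sketch
> > only, assuming the stock bonding driver; interface names and the
> > address are placeholders):
> >
> >   # load the bonding driver in active-backup mode with link monitoring
> >   modprobe bonding mode=active-backup miimon=100
> >   ifconfig bond0 10.1.100.20 netmask 255.255.255.0 up
> >   # enslave both NICs; only the active slave transmits
> >   ifenslave bond0 eth0 eth1
> >
> > Since only the active slave sends traffic, each switch only ever
> > learns the server's MAC on one port at a time.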
> >
> > >
> > > Has anybody on the list faced similar issues? Any help is highly
> > > appreciated.
> >
> > It sounds like you're causing FDB entries to flip-flop. As I say, this
> > won't work, you can't do it, and you'll need to find different solutions
> > to your problem. I would recommend active/standby NIC teaming.
> >
> >
> >
> _______________________________________________
> cisco-nsp mailing list cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>