FW: [c-nsp] Cisco Gigabit Ethernet Switch Module (CGESM) for the HP BladeSystem

Lincoln Dale ltd at cisco.com
Mon Oct 10 07:59:13 EDT 2005


hi Christian,

i'd suggest escalation through your support channels.  i guess this 
means HP initially - but i would also presume that they can escalate to 
Cisco.

my view of an ethernet switch is that it SHOULD be capable of sustaining 
<10 Mbps of multicast traffic without problems.
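
(fwiw, a quick way to confirm whether those drops really are buffer
overruns -- the interface name below is only an example:

    switch# show interfaces GigabitEthernet0/1 counters errors
    switch# show interfaces GigabitEthernet0/1 | include drops

a climbing 'total output drops' counter at ~1 Mbps of multicast would back
up the buffer theory.)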


cheers,

lincoln.

christian.macnevin at uk.bnpparibas.com wrote:
>
> These are potential multicast sources. If we have sources multicasting 
> above 100 Mbps, then we need to have all the sinks running gig as well. 
> And we're not ready for that (it would just give the devs an excuse to 
> design their software even less efficiently).
>
> The buffer overflow is occurring on both switches in the enclosure, 
> and it's happening with only 1.15 Mbps of mcast traffic.
> BADNESS.
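>
> (side note: if the CGESM image supports it, per-port storm-control could
> at least cap the multicast while this gets debugged -- the interface and
> level below are made-up examples, not our config:
>
>     interface GigabitEthernet0/1
>      storm-control multicast level 10.00
>
> that should throttle the port once multicast exceeds 10% of its
> bandwidth, instead of letting the buffers overflow.)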
>
> Did anyone who's deployed these test them very much? I'm really not 
> one for this 'it just works' stuff...
>
>
>
>
> From: ltd at cisco.com
> Date: 10/10/2005 11:26
> To: Christian MACNEVIN
> Cc: cisco-nsp
> Subject: Re: FW: [c-nsp] Cisco Gigabit Ethernet Switch Module (CGESM)
>  for the HP BladeSystem
>
> can't help you on support but ...
>
> regarding your first question, the internal ports are 10/100/1000.  why
> would the 'internal' ports be negotiating 100 Mbps and not GbE?
> if the individual blades are only capable of 100 Mbps (not sure if that
> is the case or not..), then are you saying that 'auto-negotiate' on both
> sides (blade & switch) isn't doing the right thing?
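>
> (for reference, the knobs on the switch side are the standard ones -- the
> port number below is only an example:
>
>     interface GigabitEthernet0/1
>      speed auto
>      duplex auto
>     !
>     ! or hard-set, in which case the blade side must match exactly:
>     interface GigabitEthernet0/1
>      speed 100
>      duplex full
>
> hard-setting one side while the other stays at auto is the classic way to
> end up at half duplex.)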
>
> regarding your second question, in many 'blade centre' chassis there are
> significant heat dissipation & power restrictions on what can be used.
> i believe this is the most significant factor in the choice of what
> model switch was used.
>
>
> cheers,
>
> lincoln.
>
> christian.macnevin at uk.bnpparibas.com wrote:
> > It now seems that our guys testing the CGESMs with the most current
> > feature set are finding that they don't permit manual config of
> > 100/full (and subsequently negotiate down to half in many cases), and
> > they're seeing nasty buffer overruns.
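> >
> > (For the record, 'show interfaces status' is the quickest way to see
> > what each port actually negotiated:
> >
> >     switch# show interfaces status
> >
> > auto-negotiated values carry an 'a-' prefix, so a downshifted port
> > reads a-half / a-100.)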
> >
> > Support doesn't seem to be picking us up here, either.
> >
> > I assume there are no HP people on this list, but why on earth did they
> > choose a 2900 to repackage? These blade enclosures may be used in less
> > than critical arenas by some ISPs, but if I told you the projected
> > number we're looking at over the next year, and the seriousness of the
> > calculations going on, you'd think it a joke that they hadn't taken
> > things a bit more seriously.
> >
> >
> >
> >
> >
> >
> > From: nick.nauwelaerts at thomson.com
> > Sent by: cisco-nsp-bounces at puck.nether.net
> > Date: 05/10/2005 08:26
> > To: cisco-nsp
> > Subject: RE: FW: [c-nsp] Cisco Gigabit Ethernet Switch Module (CGESM)
> >  for the HP BladeSystem
> >
> >> -----Original Message-----
> >> From: cisco-nsp-bounces at puck.nether.net
> >> [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Dave Temkin
> >> Sent: Tuesday, October 04, 2005 08:17 PM
> >> To: Kevin Graham
> >> Cc: Olav.Langeland at active24.com; cisco-nsp at puck.nether.net
> >> Subject: Re: FW: [c-nsp] Cisco Gigabit Ethernet Switch Module
> >> (CGESM) for the HP BladeSystem
> >>
> >> I found it's easier to skip spanning tree.  Use etherchannels from
> >> each switch back to the core (or wherever) for redundancy, and to get
> >> cross-switch redundancy have the servers use fail-on-fault teaming to
> >> fail over to the other switch (which would then be connected to your
> >> alternate core switch) in the event of a failure.
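> >>
> >> (A minimal sketch of that uplink layout -- the channel number, ports,
> >> and mode below are examples, not a tested config:
> >>
> >>     interface range GigabitEthernet0/17 - 18
> >>      switchport mode trunk
> >>      channel-group 1 mode on
> >>     !
> >>     interface Port-channel1
> >>      switchport mode trunk
> >>
> >> 'mode on' bundles statically; 'mode active' would run LACP instead,
> >> which is safer against a one-sided misconfig.)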
> >>
> >
> > Do you run "spanning-tree etherchannel guard misconfig" on your core
> > when you use etherchannel links? Does it work as advertised?
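> >
> > (For anyone else following along, it's a single global command:
> >
> >     spanning-tree etherchannel guard misconfig
> >
> > When it detects a channel misconfiguration it err-disables the
> > offending ports rather than letting them loop; recovery is via
> > 'errdisable recovery cause channel-misconfig' or a shut/no shut.)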
> >
> > // nick
> >
> > _______________________________________________
> > cisco-nsp mailing list  cisco-nsp at puck.nether.net
> > https://puck.nether.net/mailman/listinfo/cisco-nsp
> > archive at http://puck.nether.net/pipermail/cisco-nsp/
> >
> >  
>

