[f-nsp] BR-MLX-10Gx24-DM 24-port 10GbE Module

Niels Bakker niels=foundry-nsp at bakker.net
Sat Feb 1 12:25:41 EST 2014


* tias at netnod.se (Mathias Wolkert) [Thu 30 Jan 2014, 11:39 CET]:
>I just came across BR-MLX-10Gx24-DM and got a bit disappointed.
>
>"BR-MLX-10Gx24-DM interface modules require the "snmp-server
>max-ifindex-per-module 40|64" configured. Otherwise, the cards will not
>come up."
>
>This causes snmp index renumbering of all ports, except the ones in slot 1.

Yes.  I really like how Brocade derives ifIndex from slot/port 
position.  I was very disappointed when they broke that scheme on the 
MLX by changing the base for no technical reason at all, and it's sad 
to see that gratuitous change come back multiple times now to bite 
them in the ass too.
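
For the curious, my understanding (unverified) is that it's roughly

    ifIndex = (slot - 1) * max-ifindex-per-module + port

so bumping the per-module stride from 40 to 64 shifts every index in 
slot 2 and up, while slot 1 stays put.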

If you don't have it configured, the module stays down with a 
convoluted error message.  I bet they've had a lot of support calls 
over this as well.  At least those are easy to resolve...
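
For anyone hitting this, the fix is the one-liner from the release 
notes (hostname and prompt here are made up, and whether the card 
then needs a reload afterwards I don't remember offhand):

    MLX# configure terminal
    MLX(config)# snmp-server max-ifindex-per-module 64
    MLX(config)# write memory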


>"BR-MLX-10Gx24-DM module is an oversubscribed module. The module can
>support up to 200Gbps when the system fabric mode is in Turbo mode
>(i.e. system has only Gen 2 and Gen 3 modules such as 8x10G, 100G or
>24x10G modules). The module can support up to 12 10G wire-speed
>ports when the system fabric mode is in Normal mode (i.e. system also
>has any Gen 1 modules such as 1G or 4x10G modules)."

Actually I thought Turbo mode was 18 ports at line rate, not 20.  
Also, the ports are grouped, so you can't run ports 1-12 at line rate 
in Normal mode and leave the rest empty; you must use (e.g.) 1-4, 
9-12, 17-20.  Not optimal if you pre-cable as much as possible, as 
you probably do.
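
To put numbers on it (per-mode figures are from the release notes 
plus my own notes; corrections welcome):

    24 x 10G = 240G of front-panel capacity
    Turbo mode:   200G of fabric -> 20 ports on paper, 18 by my notes
    Normal mode:  12 x 10G = 120G wire speed, in four-port groups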

[snip]
The other limitations are mostly caused by this being (presumably) a 
merchant silicon ASIC instead of their own FPGAs.  Not that anybody 
should miss Dynamic CAM mode, of course...!


	-- Niels.
