[f-nsp] Foundry evaluation
Vladimir Litovka
doka.ua at gmail.com
Tue Apr 27 09:02:13 EDT 2010
Hi colleagues,
I'm currently evaluating equipment for a Metro Ethernet project, which briefly
looks like this scheme:
[image: http://vugluskr.mml.org.ua/~doka/.1/netmod.jpg]
i.e. there are 8 core nodes (colored blue, A-Sn) and 8 aggregation nodes
(each consisting of 8 stacked switches with 48xGE/2xTGE). There is also
one gateway node (BGP-0) which provides connectivity to other ISPs. There will
be MPLS between the core nodes, which will be used to provide L3 VPNs.
Each aggregation node serves up to 400 access switches, connected in a star
(i.e. no rings in the access layer) over GE dark fiber (i.e. GE FX must be on the
aggregation side). It *MUST NOT* do local switching between access nodes; it
must just pass traffic between the access nodes and the core, where the
corresponding VLANs will be connected to the corresponding VRFs. Since I don't
want to extend the MPLS cloud beyond the core (from both a price and a complexity
perspective), I see two ways to provide such isolation:
1) using QinQ between aggregation and core (thus tunneling the access links to the
core, where they will all be terminated). In this case the switch must also
support:
1.1) *CoS mutation* (copying CoS from C-VLAN to S-VLAN and vice versa)
1.2) *selective tunneling* (i.e. some VLANs will be tunneled with a second
tag, while other VLANs are switched locally) - see the first sketch after this list
2) using some kind of port protection (PVLAN or something similar) between all
8x48 ports in the aggregation stack (i.e. no traffic between these ports, only
to/from the core-facing interfaces) and using Proxy ARP on the core nodes - see
the second sketch after this list.
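To illustrate what I mean by selective tunneling, here is a rough sketch in
Cisco-style EVC notation (I don't know whether Brocade has an equivalent - that's
part of the question; port and VLAN numbers are arbitrary):

! aggregation port facing one access switch
interface GigabitEthernet1/1
 ! C-VLANs 100-199 get S-VLAN 5 pushed on top and are carried to the core;
 ! the CoS mutation part (1.1) is that the pushed S-tag should inherit the C-tag CoS
 service instance 100 ethernet
  encapsulation dot1q 100-199
  rewrite ingress tag push dot1q 5 symmetric
  bridge-domain 5
 ! C-VLAN 900 stays single-tagged and is switched locally
 service instance 900 ethernet
  encapsulation dot1q 900
  bridge-domain 900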
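And a rough sketch of the second way, again Cisco-style just for illustration
(VRF name, VLAN numbers and addresses are made up):

! aggregation stack: access ports go into an isolated secondary VLAN,
! only the core-facing uplinks are promiscuous
vlan 101
 private-vlan isolated
vlan 100
 private-vlan primary
 private-vlan association 101
interface GigabitEthernet1/1
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
interface TenGigabitEthernet1/49
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101
!
! core node: routed interface for the primary VLAN inside the customer VRF,
! proxy ARP answers on behalf of hosts behind the isolated access ports
interface Vlan100
 ip vrf forwarding CUST-A
 ip address 10.0.0.1 255.255.255.0
 ip local-proxy-arp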
While the second way is supported on more switches, it doesn't look as flexible
and scalable as selective QinQ. Maybe someone can comment on these two approaches?
Or suggest other ones? Personally, I'd prefer selective QinQ.
The connection between aggregation and core will be done with 10:1
oversubscription, and redundancy will be handled in the following way:
[image: http://vugluskr.mml.org.ua/~doka/.1/coreagg.jpg]
i.e. 400 GE links to access switches will be served by 4 TenGig links to the
core, from different switches in the stack, to different (as far as possible)
linecards in the core node.
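Just to spell out the oversubscription math (assuming all 400 access links are
1GE and all four uplinks carry traffic in parallel):

  400 x 1G access / (4 x 10G uplink) = 400G / 40G = 10:1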
So, I'm evaluating the MLX series for the core and the XMR for the Internet
gateway. My questions regarding these devices:
1) how much bandwidth per slot do they support? The largest linecards available
are 4xTGE - is that a chassis limit?
2) if the chassis is 100G-per-slot capable - are 8xTGE and 1x100G linecards on
the roadmap?
3) does the MLX support termination of QinQ, in a way similar to:
int TenGig 0/1.5
encapsulation dot1q 5 second-dot1q 10
ip vrf forwarding <VRF name>
ip address x.x.x.x/N
4) are there ways to control the CoS in the C-VLAN/S-VLAN on egress (depending on
MPLS EXP or DSCP) in the case of double tagging? For instance, something like the
sketch below.
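Here is the kind of control I'm after, in Cisco MQC-style notation just for
illustration (class and policy names are made up, and whether "set cos" would
land in the C-tag or the S-tag is exactly my question):

class-map match-any REALTIME
 match mpls experimental topmost 5
 match dscp ef
!
policy-map EGRESS-TO-AGG
 class REALTIME
  set cos 5
!
interface TenGig 0/1
 service-policy output EGRESS-TO-AGG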
Regarding aggregation: it looks to me like using CES or CER for aggregation
is overkill in my case. I don't need MPLS, while these devices are built with
MPLS in mind. The FastIrons aren't GE/TGE capable. Does the BigIron RX series
support the selective QinQ / CoS mutation mechanisms?
Thanks so much! :-)
--
/doka
/*-- http://doka-ua.blogspot.com/ --*/