[c-nsp] VSS1440 to ASR1002 - MEC issues

Kevin Graham kgraham at industrial-marshmallow.com
Sat May 2 18:20:45 EDT 2009


Your original concern was redundancy, so I'd personally go with two L3 interfaces per ASR over a static GEC. You may end up with more traffic over the VSL (as I don't believe there's an ECMP enhancement to prefer same-chassis ports the way there is for MEC), but you'll avoid having to depend on UDLD, etc., to protect against this type of failure mode.
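
Something along these lines, sketched with placeholder /30s (reuse the EIGRP auth and timers from your existing config), one routed interface per chassis and two EIGRP adjacencies per ASR instead of one:

ASR:

interface Gi0/0/0
ip address x.x.x.5 255.255.255.252
no shut
!
interface Gi0/1/0
ip address x.x.x.9 255.255.255.252
no shut

VSS:

int Gi1/1/1
no switchport
ip vrf forwarding edge-vrf
ip address x.x.x.6 255.255.255.252
no shut
!
int Gi2/1/1
no switchport
ip vrf forwarding edge-vrf
ip address x.x.x.10 255.255.255.252
no shut

ECMP then handles the load sharing, and losing a link or a chassis just drops one adjacency.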

[sent from my mobile]

On May 2, 2009, at 12:01 AM, Alasdair McWilliam <alasdairm at gmail.com> wrote:

Even if the ASR only supports GEC, surely my apparent 'one-way' traffic symptoms aren't right? I only have one Gigabit Ethernet link in the port-channel, between the ASR and the active chassis within the VSS. When the channel-group command is removed from the ASR's GE interface and the port-channel config is moved onto the GE interface, it starts to work a treat, despite the VSS still thinking it's an EtherChannel!

Also, the 'switch accept mode virtual' command was run on the active node when the switches were first converted to VSS and rebooted.

Many thanks
Alasdair



On 2 May 2009, at 01:43, Daniel de la Rosa (ddelaros) wrote:

That's correct, ASR1000 GEC only supports static VLAN load balancing at the moment, not LACP. So this can only work if you are OK with just using GEC with VLANs on both sides, as Tassos mentioned. Since you are deploying GEC for redundancy, this static VLAN load balancing should be able to give you what you need. Also, you need the EtherChannel on the VSS side to be in 'on' mode as well.
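
A rough sketch of what that would look like, with VLAN 100 and the addressing as placeholders: a dot1q subinterface on the ASR port-channel, and a trunk MEC with an SVI (rather than an L3 port-channel) on the VSS side:

ASR:

interface Port-channel1
no ip address
!
interface Port-channel1.100
encapsulation dot1Q 100
ip address x.x.x.5 255.255.255.252
!
interface Gi0/0/0
channel-group 1
!
interface Gi0/1/0
channel-group 1

VSS:

interface Port-channel3
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 100
!
interface Vlan100
ip vrf forwarding edge-vrf
ip address x.x.x.6 255.255.255.252
!
int Gi1/1/1
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 100
channel-group 3 mode on
!
int Gi2/1/1
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 100
channel-group 3 mode on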

HTH


-------------
Daniel de la Rosa
CCIE # 4622
Technical Marketing Engineer
ERBU, Cisco Systems





The ASR1000 doesn't -yet- support the well-known EtherChannel/LACP. If I remember right, RLS5 will have it.

There is a feature called VLAN Mapping to Gigabit EtherChannel (GEC) Member Links, but I don't think it would help you much, since you have L3 port-channels on both sides.

http://www.cisco.com/en/US/docs/ios/lanswitch/configuration/guide/lsw_cfg_gecvlan.html
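
For reference, that feature maps each VLAN subinterface to a primary and a secondary member link. The primary/secondary form below is from memory, so double-check it against the guide above; the VLAN, addressing and interface names are just placeholders:

interface Port-channel1
no ip address
!
interface Port-channel1.100
encapsulation dot1Q 100 primary GigabitEthernet0/0/0 secondary GigabitEthernet0/1/0
ip address x.x.x.5 255.255.255.252
!
interface Gi0/0/0
channel-group 1
!
interface Gi0/1/0
channel-group 1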

--
Tassos

Alasdair McWilliam wrote on 01/05/2009 18:29:
Hello,

I'm currently deploying two Cisco 6509-E chassis with VS-Sup720-10GE (in a VSS 1440 cluster/configuration) with dual ASR 1002 routers to provide aggregation of multiple upstream links (running multiple BGP and EIGRP sessions).

I wanted to utilize MEC between each ASR and each 6509 chassis to build in as much resilience as possible. However, this configuration seems to be playing up and so I thought I'd ask the experts!

Physical Topology:

ASR Gi0/0/0 into 6509 Chassis 1 Module 1 Port 1
ASR Gi0/1/0 into 6509 Chassis 2 Module 1 Port 1

The ASR is running IOS-XE 2.3.0 (IOS 12.2(33)XNC) AISK9 with dual IOS processes.
The VSS chassis are running IOS 12.2(33)SXI1 ISK9 with a 4x 10GE VSL (2 supervisor 10GE interfaces, 2 10GE interfaces on a 6708-10GE line card).
I'm just using CAT6 between the ASR and the 6748-GE-TX line cards in the VSS boxes.

ASR configuration:

interface Port-Channel1
ip address x.x.x.5 255.255.255.252
ip hello-interval eigrp 100 2
ip hold-time eigrp 100 6
ip authentication mode eigrp 100 md5
ip authentication key-chain eigrp 100 vcoresw1-chain
ip summary-address eigrp 100 0.0.0.0 0.0.0.0 255
no ip redirects
no ip unreachables
no ip proxy-arp
no shut
!

interface Gi0/0/0
channel-group 1
no shut

interface Gi0/1/0
channel-group 1
no shut

Cisco VSS configuration:

int Gi1/1/1
no switchport
channel-group 3 mode on

int Gi2/1/1
no switchport
channel-group 3 mode on

int Po3
desc *** MEC to br1-po1 ***
no ip redirects
no ip unreachables
no ip proxy-arp
ip vrf forwarding edge-vrf
ip address x.x.x.6 255.255.255.252
ip hello-interval eigrp 100 2
ip hold-time eigrp 100 6
ip authentication mode eigrp 100 md5
ip authentication key-chain eigrp 100 br1-chain
no shut
!



The problem I am experiencing seems to be one-way traffic between the VSS cluster and the border router. Pinging across this /30 subnet does not work in either direction. EIGRP relationships build when the Po interfaces first come online and then immediately time out moments later. The VSS cluster then does not see any further EIGRP traffic from the ASR. The ASR, however, seems to think it's successfully building an adjacency to the VSS, which times out due to 'retry limit exceeded' every minute or so and then appears to re-establish again.

This problem persists if we drop the port-channel to just one Gigabit Ethernet interface. The second interface can be shut down or actually removed from the Po config (e.g. no channel-group 1).

The really interesting thing is that, with one link, if we remove the channel-group command from the one remaining ASR interface, all of a sudden the link springs to life. Pings between the ASR Gi0/0/0 interface and the Po3 VSS interface are successful. The EIGRP relationship comes up immediately and is stable, and routes are exchanged as you'd expect.

How does this work? With the ASR thinking it's a non-EtherChannel interface, but the VSS thinking it IS an EtherChannel (with 1 member), surely it should just fail?
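
For anyone wanting to compare the two ends, the usual checks would be something along these lines (interface and VRF names taken from the configs above):

On the VSS:

show etherchannel summary
show interfaces port-channel 3
show ip eigrp vrf edge-vrf neighbors

On the ASR:

show interfaces port-channel 1
show ip eigrp neighbors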

Am I doing something wrong, or could this be a bug in either the VSS or the ASR?

It's not earth shattering; we could just configure 2 EIGRP sessions between the VSS and the ASR (4 in total with 2 ASRs), but I don't think this is as clean an implementation as MEC across fully redundant chassis and line cards (one of the big selling points of the VSS!)

Any help would be much appreciated!

Thanks
Alasdair


_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/



