[c-nsp] 3560 leaking broadcasts

Ian Henderson ianh at ianh.net.au
Wed Mar 10 02:48:01 EST 2010


Hi folks,

Has anyone ever seen broadcasts leaking from an SVI into a layer 3 
interface on a 3560?

We've got a managed Ethernet link between a 3560G-48TS (Auckland, 
12.2(50)SE1 IP Services) and a 3750G-24TS (Sydney, 12.2(53)SE IP Services), 
configured as a /31 layer 3 interface on both sides. The link runs OSPF in 
area 64 and PIM sparse mode. Both Sydney and Auckland have a number of 
SVIs.

[Hosts] -- VLAN 11 -- SVI11[Sydney]L3 -- /31 link -- L3[Auckland]

Sydney config:
interface GigabitEthernet1/0/25
  description Auckland:Gi0/47
  no switchport
  ip address x.x.x.193 255.255.255.254
  no ip redirects
  no ip proxy-arp
  ip pim sparse-mode
  ip ospf cost 50
  speed nonegotiate
  priority-queue out
  service-policy input SET-DSCP-TRUST

Auckland config:
interface GigabitEthernet0/47
  description Sydney:Gi1/0/25
  no switchport
  ip address x.x.x.192 255.255.255.254
  no ip redirects
  no ip proxy-arp
  ip pim sparse-mode
  ip ospf cost 200
  speed 100
  duplex full
  priority-queue out
  service-policy input SET-DSCP-TRUST

On the Auckland 3560, OSPF constantly reports a mismatched area ID, even 
though the area 64 adjacency is up. PIM shows two neighbors, even though 
it's a point-to-point link. The IP address listed in both messages is the 
Sydney 3750's Vlan11 address.

Mar 10 19:53:14.662 NZDT: %OSPF-4-ERRRCV: Received invalid packet:
mismatch area ID, from backbone area must be virtual-link but not found
from x.x.x.138, GigabitEthernet0/47

Auckland#show ip pim nei
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
       P - Proxy Capable, S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
x.x.x.138     GigabitEthernet0/47      02:25:20/00:01:20 v2    1 / S P
x.x.x.193     GigabitEthernet0/47      02:25:21/00:01:37 v2    1 / DR S P
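
In case it's useful, what I have in mind to confirm on the wire that these 
stray hellos really are arriving inbound on Gi0/47 from the Sydney Vlan11 
address is roughly the following - written from memory, so treat it as a 
rough sketch rather than a tested config (the ACL number is arbitrary):

access-list 150 permit ospf host x.x.x.138 any log
access-list 150 permit pim host x.x.x.138 any log
access-list 150 permit ip any any
!
interface GigabitEthernet0/47
  ip access-group 150 in

The 'show access-lists 150' counters plus 'debug ip ospf hello' should then 
show whether packets sourced from x.x.x.138 are genuinely hitting the 
routed port.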

Some debugging revealed something odd - when performing 'show 
mac-address-table address xxxx' on the internally assigned VLAN for 
Gi1/0/25 on Sydney, I see MAC addresses listed against VLAN 11.

Sydney#show vlan int usage

VLAN Usage
---- --------------------
1006 GigabitEthernet1/0/3
1007 GigabitEthernet1/0/25

Sydney#show mac-address-table vlan 1007
           Mac Address Table
-------------------------------------------

Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
  All    0100.0ccc.cccc    STATIC      CPU
  All    0100.0ccc.cccd    STATIC      CPU
  All    0180.c200.0000    STATIC      CPU
  All    0180.c200.0001    STATIC      CPU
  All    0180.c200.0002    STATIC      CPU
  All    0180.c200.0003    STATIC      CPU
  All    0180.c200.0004    STATIC      CPU
  All    0180.c200.0005    STATIC      CPU
  All    0180.c200.0006    STATIC      CPU
  All    0180.c200.0007    STATIC      CPU
  All    0180.c200.0008    STATIC      CPU
  All    0180.c200.0009    STATIC      CPU
  All    0180.c200.000a    STATIC      CPU
  All    0180.c200.000b    STATIC      CPU
  All    0180.c200.000c    STATIC      CPU
  All    0180.c200.000d    STATIC      CPU
  All    0180.c200.000e    STATIC      CPU
  All    0180.c200.000f    STATIC      CPU
  All    0180.c200.0010    STATIC      CPU
  All    ffff.ffff.ffff    STATIC      CPU
   11    0012.80bf.1718    DYNAMIC     Gi1/0/24
   11    0012.80bf.1743    DYNAMIC     Gi1/0/24
   11    0015.c695.b495    DYNAMIC     Gi1/0/1
   11    0015.c6fa.1e35    DYNAMIC     Gi1/0/24
Total Mac Addresses for this criterion: 24

Sydney#show run int vlan11
Building configuration...

Current configuration : 185 bytes
!
interface Vlan11
  description ASA Network
  ip address x.x.x.138 255.255.255.248
  no ip redirects
  no ip unreachables
  no ip proxy-arp
  ip pim sparse-mode
  ip ospf cost 5
end
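
To see exactly what is being flooded out the routed port towards Auckland, 
I'm also planning to SPAN Gi1/0/25 to a spare port with a sniffer on it, 
roughly like this (Gi1/0/2 is just a placeholder for wherever the capture 
box ends up):

monitor session 1 source interface Gi1/0/25 tx
monitor session 1 destination interface Gi1/0/2

If the theory is right, the capture should show VLAN 11 broadcasts and 
multicasts (OSPF and PIM hellos, ARP and so on) going out the routed port 
alongside the legitimate /31 traffic.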

I quickly reproduced the setup in the lab and couldn't ping between a host 
on the VLAN and Auckland, so I suspect only broadcast/multicast traffic is 
leaking.

Hunting around the network, I found this happening on every 3560, 3560E, 
and 3750 I could check; a 6500 with a Sup720 doesn't seem to be affected. 
Other than the error message (which is uncommon, as most links are in the 
same OSPF area) and the extra PIM neighbors (a new rollout), I can't see 
anything that's actually causing a problem. I am concerned, though, that a 
broadcast storm could exhaust bandwidth on the routed links.
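
As a stopgap I'm wondering whether broadcast storm-control on the physical 
ports would at least contain a storm - I haven't checked whether the 
3560/3750 even accepts storm-control on a 'no switchport' interface, so 
this is only a sketch, with an illustrative level and action:

interface GigabitEthernet1/0/25
  storm-control broadcast level 1.00
  storm-control action trap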

So, has anyone seen this before? Is it a bug or a design limitation of the 
3560/3750 platform? Is there any way to make layer 3 interfaces behave 
properly, short of a hardware upgrade?
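
For example, would moving the /31 onto an SVI, with the physical port as 
an access port in a dedicated point-to-point VLAN, sidestep the internal 
VLAN behaviour? On the Sydney side I imagine something like the following 
(VLAN 900 is an arbitrary example, and I'm not sure whether a /31 is even 
accepted on an SVI, so it may have to become a /30):

vlan 900
  name AKL-P2P
!
interface GigabitEthernet1/0/25
  description Auckland:Gi0/47
  switchport
  switchport mode access
  switchport access vlan 900
  speed nonegotiate
!
interface Vlan900
  ip address x.x.x.193 255.255.255.254
  no ip redirects
  no ip proxy-arp
  ip pim sparse-mode
  ip ospf cost 50

Auckland would obviously need the mirror-image change, and the existing 
QoS settings on the physical port would need to be checked after the 
conversion.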

Thanks,



- I.

