[f-nsp] Odd MRP problem

harbor235 harbor235 at gmail.com
Sun Sep 12 08:39:30 EDT 2010


Is spanning tree disabled on vlan 2?
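
If it isn't, it can be disabled per vlan; a minimal IronWare sketch
(assuming vlan 2; exact syntax can vary by release):

  show span
  configure terminal
  vlan 2
   no spanning-tree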

harbor235

On Sun, Sep 12, 2010 at 4:10 AM, George B. <georgeb at gmail.com> wrote:

> All of those units have been in service for over a year passing traffic
> without problems.  We recently added the second metroE and I wanted to use
> MRP as I have had good results with it elsewhere.  MRP works fine as long as
> I have at least one unit that is NOT configured for MRP.  One thing I
> noticed today is that two of the units (the bottom two in the diagram) are
> running 4.0.0 and the top two are running 5.0.0, so my next step is to get
> them all up to the current release (though a new release for MLX/XMR is due
> on or about Wednesday, Sept 15, according to my little birdies, so I might
> delay for a couple of days).
>
> The configuration is as simple as I can get it ... one single vlan running
> MRP, no topology group, and the only ports in the vlan are the ports
> running MRP (two ports per unit).
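>
> As a sketch, a minimal member-side config for that (IronWare syntax from
> memory; vlan ID and port numbers are hypothetical) looks roughly like:
>
>   vlan 2
>    tagged ethernet 1/1 ethernet 1/2
>    metro-ring 1
>     ring-interfaces ethernet 1/1 ethernet 1/2
>     enable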
>
> Thanks for your response, Jan.
>
> George
>
>
>
> On Sun, Sep 12, 2010 at 12:46 AM, Jan Pedersen <
> Jan.Pedersen at globalconnect.dk> wrote:
>
>>  Hi George,
>>
>>
>>
>> We once had a similar issue, and that was caused by a faulty 4x10G XMR
>> module.
>>
>>
>>
>> Have you checked that you have valid PBIF, XPP and XGMAC versions on all
>> 10GE modules in the ring?
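>>
>> (On NetIron, "show version" lists the FPGA revisions per module, so a
>> quick check, assuming the usual IronWare output filter is available, is
>> something like:
>>
>>   show version | include PBIF
>>   show version | include XPP
>>   show version | include XGMAC
>>
>> compared against the versions the release notes call for.)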
>>
>>
>>
>> Can you pass normal (non-MRP) traffic across that ring without problems?
>> Do you have a topology group and member-vlans attached to that metro ring?
>> If yes, double-check that the topology group is configured identically on
>> all nodes.
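>>
>> If one is in use, a per-node sketch for comparison (vlan IDs hypothetical):
>>
>>   topology-group 1
>>    master-vlan 2
>>    member-vlan 3
>>    member-vlan 4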
>>
>>
>>
>> You might want to enable byte accounting on the MRP master vlan on all
>> nodes, or try the "dm metro-rhp" debug command, to get more information
>> from the nodes.
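>>
>> The per-ring counters (RHPs sent/rcvd, TC RBPDUs rcvd) can also be
>> checked on each node with something like the following, ring ID assumed
>> to be 1:
>>
>>   show metro 1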
>>
>>
>>
>>
>>
>> Best regards
>>
>> Jan Pedersen
>> Senior Network Specialist
>> D: +45 7730 2932
>> M: +45 2550 7321
>>
>> From: foundry-nsp-bounces at puck.nether.net
>> [mailto:foundry-nsp-bounces at puck.nether.net] On Behalf Of Heath Jones
>> Sent: 11 September 2010 21:39
>> To: George B.
>> Cc: foundry-nsp
>> Subject: Re: [f-nsp] Odd MRP problem
>>
>>
>>
>> Hi George
>>
>>
>>
>> I'm really quite a newbie when it comes to MRP, but RHPs rcvd / RHPs sent
>> = 4,193,162 / 509,883 ≈ 8.22, which is close to 8. Is that worth noting?
>>
>> If the ring ID were different on all 4 devices, so the ring never
>> converged and each device sent RHPs out both interfaces, would that mean
>> each device should receive roughly 8 times what it sends?
>>
>>
>>
>> Packet captures might be the way to go, if we can find the protocol spec
>> from Foundry...
>>
>>
>>
>> Heath
>>
>> On 11 September 2010 20:02, George B. <georgeb at gmail.com> wrote:
>>
>> See this diagram for reference:
>>
>> http://tinypic.com/r/kb93lj/7
>>
>> This is pretty simple.  I have one vlan in an MRP ring through 4 MLX
>> units.  I configure the master, and it works as expected.  I then
>> configure the members.  The problem is that when the last "member"
>> (non-master) is configured in the ring, the master begins to receive
>> thousands of RHPs and TC RBPDUs per second.  It doesn't matter which one
>> is the last member configured, but as soon as I enable MRP on that last
>> member, the count of RHPs and TC RBPDUs goes haywire.  Here is what my
>> master currently shows:
>>
>> RHPs sent            RHPs rcvd            TC RBPDUs rcvd
>> 509883               4193162              3684318
>>
>> As you can see, it has sent about half a million RHPs but received over
>> 4 million of them!
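>>
>> For reference, master and member configs differ only by the "master"
>> keyword under the metro-ring; a minimal master-side sketch (IronWare
>> syntax from memory; vlan ID and ports hypothetical):
>>
>>   vlan 2
>>    tagged ethernet 1/1 ethernet 1/2
>>    metro-ring 1
>>     master
>>     ring-interfaces ethernet 1/1 ethernet 1/2
>>     enable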
>>
>> Only one unit is configured as "master".  As long as I have MRP
>> unconfigured on one of the members, the ring works as expected. There is no
>> spanning tree of any sort running on that vlan.  I am just in awe of how RHP
>> packets can seemingly be created in the network somewhere at such an amazing
>> rate!
>>
>> Anyone else seen anything like this?  It is just plain wacky!
>>
>> George
>>
>>
>
>