[j-nsp] thoughts on MVRP?

Luca Salvatore Luca at ninefold.com
Sun Mar 3 21:02:44 EST 2013


My issue with Q-in-Q is that the ports connecting to the physical servers will be the ‘customer ports’, which means each will be an access port in a single S-VLAN.
This creates a management problem for the servers, as we normally manage (SSH) the servers over a native (untagged) VLAN.

So if I could get around that issue, I think Q-in-Q would be suitable. Anyone know if that’s possible?
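For concreteness, a minimal sketch of the customer-port arrangement in question, with hypothetical names and IDs: on a non-ELS EX, any frame arriving on this access port, untagged SSH traffic included, is classified into the S-VLAN, which is exactly the management problem described above:

set vlans sv-cust vlan-id 500
set vlans sv-cust dot1q-tunneling
set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode access
set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members sv-cust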

Luca

From: Mark Tees [mailto:marktees at gmail.com]
Sent: Monday, 4 March 2013 8:08 AM
To: Luca Salvatore
Subject: Re: [j-nsp] thoughts on MVRP?

Possibly you could use q-in-q to cross the VC cluster. Then the VC cluster only needs to know the outer tags.
http://www.juniper.net/techpubs/en_US/junos9.3/topics/concept/qinq-tunneling-ex-series.html
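A minimal sketch of that setup on a non-ELS EX4200 VC, with illustrative names and IDs (see the doc above for the authoritative version):

# enable Q-in-Q globally by setting the outer-tag ether-type
set ethernet-switching-options dot1q-tunneling ether-type 0x8100
# S-VLAN: customer tags ride inside this single outer tag
set vlans sv-500 vlan-id 500
set vlans sv-500 dot1q-tunneling
# inter-member/uplink trunks then only need the S-VLANs, not all 3500 C-VLANs
set interfaces xe-0/1/0 unit 0 family ethernet-switching port-mode trunk
set interfaces xe-0/1/0 unit 0 family ethernet-switching vlan members sv-500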


But... that log message looks like the box is running out of resources somewhere. Given the number of VLANs you are talking about, are you maybe hitting MAC learning limits?


Check with JTAC about that message if you are unsure.
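A few read-only commands that may help size the problem before opening the case (a sketch; exact output varies by release):

show ethernet-switching table        # how many MAC entries are actually learned
show vlans                           # VLAN count and port memberships
show log messages | match RT-HAL     # how often the allocation failures recur
show chassis routing-engine          # memory utilisation on the REs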

Sent from some sort of iDevice.

On 03/03/2013, at 9:49 PM, Luca Salvatore <Luca at ninefold.com> wrote:
I don't really need to run STP on them. These are switch ports connecting to physical servers which host hundreds of VMs, so I need to trunk all my VLANs into about 20 ports per switch.

Not quite sure how Q-in-Q would help... How do I configure the ports facing the servers, and who does the tagging?

MVRP looks more like Cisco's VTP, so probably not what I'm after, right?
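For reference, the brute-force setup being described presumably looks something like this per server-facing port (interface, VLAN range, and management VLAN ID all hypothetical); on a plain trunk it is the native-vlan-id knob that carries the untagged management traffic:

set interfaces ge-0/0/1 unit 0 family ethernet-switching port-mode trunk
# ~3500 tagged VLANs on each of ~20 ports is what blows the vmember budget
set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members 100-3599
# untagged (management) frames go to the native VLAN
set interfaces ge-0/0/1 unit 0 family ethernet-switching native-vlan-id 42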

________________________________________
From: Alex Arseniev [alex.arseniev at gmail.com]
Sent: Sunday, 3 March 2013 7:41 PM
To: Luca Salvatore; juniper-nsp at puck.nether.net
Subject: Re: [j-nsp] thoughts on MVRP?

If you don't need to run STP on these VLANs, why not use
QinQ/dot1q-tunneling?
http://kb.juniper.net/InfoCenter/index?page=content&id=KB21686&actp=RSS
Saves you the vmember overhead.
Thanks
Alex
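One related knob from the same feature set, sketched with illustrative names (verify support on your release): an S-VLAN can be limited to tunneling only specific customer VLAN ranges.

set vlans sv-500 dot1q-tunneling customer-vlans 100-1099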

----- Original Message -----
From: "Luca Salvatore" <Luca at ninefold.com<mailto:Luca at ninefold.com>>
To: <juniper-nsp at puck.nether.net<mailto:juniper-nsp at puck.nether.net>>
Sent: Sunday, March 03, 2013 12:13 AM
Subject: [j-nsp] thoughts on MVRP?



Hi,
We have a requirement to trunk about 3500 VLANs into multiple ports on some
EX4200 switches in VC mode.

This breaches the vmember limit by a huge amount, and once we did this I saw
lots of errors in the logs, such as:

fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
entry
/kernel: RT_PFE: RT msg op 3 (PREFIX CHANGE) failed, err 5 (Invalid)
fpc0 RT-HAL,rt_entry_add_msg_proc,2702: route entry create failed
fpc0 RT-HAL,rt_entry_add_msg_proc,2886: proto L2 bridge,len 48 prefix
06:d4:f2:00:00:cb/48 nh 2850
fpc0 RT-HAL,rt_entry_create,2414: failed to allocate memory for route
entry

These messages worry me.  I have been looking into MVRP, which seems like
it would let us avoid trunking all 3500 VLANs into the switches all the
time, and instead dynamically register VLANs as needed.
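The vmember arithmetic is worth spelling out: with the roughly 20 server-facing trunk ports per switch mentioned above, 3500 VLANs x 20 ports = 70,000 vmembers per switch before counting uplinks, well past the EX4200's documented ceiling, so allocation failures like the RT-HAL messages are unsurprising.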

Wondering about people's thoughts on MVRP: is this a good use case? Is it
stable and reliable?
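For what it's worth, the MVRP side is small on EX; a minimal sketch, assuming trunk-mode ports and a hypothetical interface name, so that a VLAN only consumes resources on ports where it is actively declared:

# enable MVRP per trunk interface (or: set protocols mvrp interface all)
set protocols mvrp interface ge-0/0/1.0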

thanks,

_______________________________________________
juniper-nsp mailing list juniper-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


