[j-nsp] Making juniper handle native vlans
Alexander Arsenyev (GU/ETL)
alexander.arsenyev at ericsson.com
Mon Feb 21 14:06:09 EST 2005
Hello John,
As I said, I don't use RSVP with MTU signaling myself so cannot comment on its
usefulness/suitability/robustness, etc.
My answers:
A1. Simply clearing the DF bit could have undesired effects (applications stop working, etc.), so
GRE encapsulation and fragmenting the GRE packet is a better solution. An even better solution could be
tweaking the application or host MTU/TCP MSS to send smaller packets :-)
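For the GRE option, a minimal Junos sketch might look like the following (interface name and tunnel endpoints are placeholders, not from this thread):

```
interfaces {
    gr-0/0/0 {
        unit 0 {
            tunnel {
                source 192.0.2.1;        /* placeholder endpoints */
                destination 192.0.2.2;
                allow-fragmentation;     /* fragment the outer GRE packet */
            }
            family inet;
        }
    }
}
```

As I understand it, with allow-fragmentation the router fragments the outer GRE packet even when the inner payload has DF set, and the remote tunnel endpoint reassembles it, so the end hosts never see the fragmentation.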
A2. Since the ingress LSR is usually a PE, just 1) get together with the customer and 2) agree on a
lower MTU on the PE-CE link :-)
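For completeness, my understanding from the doc link in my earlier mail is that the MTU signalling and fragmentation John quotes are enabled together under [protocols mpls path-mtu] - a sketch only, since as noted I haven't run this myself:

```
protocols {
    mpls {
        path-mtu {
            allow-fragmentation;    /* fragment oversized packets at the ingress LSR */
            rsvp mtu-signaling;     /* carry the path MTU in RSVP signalling */
        }
    }
}
```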
HTH,
Cheers
Alex
-----Original Message-----
From: John Senior [mailto:js at irishbroadband.ie]
Sent: 21 February 2005 17:08
To: Alexander Arsenyev (GU/ETL); juniper-nsp at puck.nether.net
Subject: RE: [j-nsp] Making juniper handle native vlans
Hi,
In the docs from the link below, it mentions the following:
"Fragment packets-Using the assigned MTU value, packets that exceed the
size of the MTU can be fragmented into smaller packets on the ingress
router before they are sent over the RSVP LSP.
Once both MTU signaling and packet fragmentation have been enabled on an
ingress router, any route resolving to an RSVP LSP on this router uses
the signaled MTU value."
Is there a way to get the ingress router to fragment packets with the DF
bit set before sending packets over an LSP? How do you specifically
enable packet fragmentation?
All the best,
John Senior.
-----Original Message-----
From: Alexander Arsenyev (GU/ETL)
[mailto:alexander.arsenyev at ericsson.com]
Sent: 19 February 2005 16:15
To: juniper-nsp at puck.nether.net
Subject: RE: [j-nsp] Making juniper handle native vlans
hello,
The word "MTU" comes to mind... Try to pre-fragment packets BEFORE they hit
the switch, or use RSVP with MTU signalling:
http://www.juniper.net/techpubs/software/junos/junos63/swconfig63-mpls-apps/html/rsvp-overview11.html
Disclaimer - I don't use it myself so it might just work!
If the majority of your traffic across this switch is TCP then another
not-so-obvious option is to lower the TCP MSS value - on Solaris 5.8 it
can be done with two settings:
ip_path_mtu_discovery=0
tcp_mss_def_ipv4=1448 or lower (1460-3x4=1448, to accommodate two MPLS
labels and one .1q tag)
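The arithmetic behind that 1448 can be sketched in a few lines (the values are the standard Ethernet/IPv4/TCP header sizes, not anything measured on this network):

```python
# Headroom calculation for TCP MSS when MPLS labels and a .1q tag
# are added along the path (standard header sizes assumed).
ETH_PAYLOAD = 1500          # standard Ethernet payload (bytes)
IP_HEADER = 20              # IPv4 header without options
TCP_HEADER = 20             # TCP header without options
MPLS_LABEL = 4              # one MPLS label stack entry
DOT1Q_TAG = 4               # one 802.1Q VLAN tag

default_mss = ETH_PAYLOAD - IP_HEADER - TCP_HEADER   # 1460
# Reserve headroom for two MPLS labels plus one .1q tag:
overhead = 2 * MPLS_LABEL + DOT1Q_TAG                # 12
safe_mss = default_mss - overhead

print(safe_mss)   # 1448
```

On Solaris 8 (SunOS 5.8), these values would typically be applied with ndd, e.g. "ndd -set /dev/ip ip_path_mtu_discovery 0" and "ndd -set /dev/tcp tcp_mss_def_ipv4 1448".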
HTH,
Cheers
Alex
-----Original Message-----
From: juniper-nsp-bounces at puck.nether.net
[mailto:juniper-nsp-bounces at puck.nether.net]
Sent: 18 February 2005 22:46
To: juniper-nsp at puck.nether.net
Subject: [j-nsp] Making juniper handle native vlans
I think this has been asked here before, but is there ANY way to trick a
jnpr into handling mixed .1q tagged and untagged frames on the same
interface a la native vlans?
I ask not because I think that native vlans are in any way the "right
thing", but because I need to work around a certain ASIC limitation on
some certain Crisco switches which can only handle an ethernet frame + 4
extra bytes. This is enough room for a .1q tag, or for a single MPLS
tag,
but not both. Thus the only way to pass MPLS tags through said switch
while still doing .1q vlans on the interfaces is to run the MPLS
speaking
portion via a native vlan. This would work fine, except that the other
side is Juniper, which doesn't get along with this.
Someone want to hack up an fpga image that I can slap onto a GE PIC? :)
--
Richard A Steenbergen <ras at e-gerbil.net>
http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1
2CBC)
_______________________________________________
juniper-nsp mailing list juniper-nsp at puck.nether.net
http://puck.nether.net/mailman/listinfo/juniper-nsp