[j-nsp] Juniper MTU Math / eBGP multihop problem

Harry Reynolds harry at juniper.net
Wed May 4 15:44:51 EDT 2011


Correction: 1518 on Junos should be 1514. Thanks to Payam for pointing this out off-list, btw. The 12 bytes of SA/DA MAC plus the 2-byte type field is 14.
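
Spelled out, the byte accounting (my own tally, consistent with the correction above):

    1500  IP MTU
  +   12  source + destination MAC
  +    2  EtherType
    ----
    1514  untagged (the default Junos media MTU)
  +    4  802.1Q tag, when present
    ----
    1518  tagged
  +    4  FCS (not counted in the Junos MTU)
    ----
    1522  full frame on the wire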

Sorry, I have no JunosE-fu either. I see now that the ERX was mentioned, which I had read as EX.

Regards

 

-----Original Message-----
From: Paul Stewart [mailto:paul at paulstewart.org] 
Sent: Wednesday, May 04, 2011 12:16 PM
To: Harry Reynolds; 'juniper-nsp List'
Subject: RE: [j-nsp] Juniper MTU Math / eBGP multihop problem

Thanks very much...

Do you have the JunOSe equivalent commands by chance?  I understand what you are saying but my JunOSe kung-fu isn't great yet...;)

Paul



-----Original Message-----
From: Harry Reynolds [mailto:harry at juniper.net]
Sent: Wednesday, May 04, 2011 2:02 PM
To: Paul Stewart; 'juniper-nsp List'
Subject: RE: [j-nsp] Juniper MTU Math / eBGP multihop problem

IIRC, 1500 on IOS == 1518 on Junos, as the latter includes the link overhead, excepting the FCS. Using a VLAN tag should increase it by 4 bytes.

The direct eBGP session works because it's using the direct/native MTU. The multihop session is apparently hitting an intermediate link with a smaller MTU, leading to lost update messages when exchanging a large table (as BGP is wont to do), which in turn leads to loss of keepalives and a session flap.

Suggestions:

1. Ping with the do-not-fragment bit set to discover the lowest MTU along the path, then clamp the session's TCP MSS to fit:

{master}[edit]
regress@mse-a# set protocols bgp group internal tcp-mss 
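
For example (addresses and sizes here are hypothetical; adjust to your path): a do-not-fragment ping of size 1472 proves a 1500-byte path (1472 bytes payload + 8 ICMP + 20 IP = 1500). Step the size down until the ping succeeds, then set the MSS to the discovered MTU minus 40 bytes of IP/TCP headers, e.g. 1360 for a 1400-byte path:

{master}[edit]
regress@mse-a# run ping 192.168.1.1 size 1472 do-not-fragment count 5

{master}[edit]
regress@mse-a# set protocols bgp group internal tcp-mss 1360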

2. Enable BGP path MTU discovery and hope the requisite ICMP error messages are correctly generated and not filtered, so as to allow the PMTU to be discovered.

{master}[edit]
regress@mse-a# set protocols bgp group internal mtu-discover?
Possible completions:
  mtu-discovery        Enable TCP path MTU discovery
{master}[edit]
regress@mse-a# set protocols bgp group internal mtu-discovery 
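
If an RE-protection filter sits on lo0, those ICMP errors must be permitted or PMTUD silently fails. A sketch of such a term (filter and term names are hypothetical):

{master}[edit]
regress@mse-a# set firewall family inet filter protect-re term allow-pmtud from protocol icmp
regress@mse-a# set firewall family inet filter protect-re term allow-pmtud from icmp-type unreachable
regress@mse-a# set firewall family inet filter protect-re term allow-pmtud from icmp-code fragmentation-needed
regress@mse-a# set firewall family inet filter protect-re term allow-pmtud then accept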

You can check the resulting PMTU/current MSS with show system connections.
Note that we can only discover and lower the PMTU; we never increase it except after a reboot or after a long idle period on the related connection.

{master}[edit]
regress@mse-a# run show system connections extensive | find 192.168.1.1
tcp4       0      0  192.168.1.10.58699     192.168.1.1.646        ESTABLISHED
   sndsbcc:          0 sndsbmbcnt:          0  sndsbmbmax:     131072
. . .

tcp4       0      0  31.31.16.153.52325     31.31.16.154.179       ESTABLISHED
   sndsbcc:          0 sndsbmbcnt:          0  sndsbmbmax:     131072 sndsblowat:       2048 sndsbhiwat:      16384
   rcvsbcc:          0 rcvsbmbcnt:          0  rcvsbmbmax:     131072 rcvsblowat:          1 rcvsbhiwat:      16384
   proc id:       1466  proc name:        rpd
       iss: 2411398891      sndup: 2411404086
    snduna: 2411404105     sndnxt: 2411404105      sndwnd:      20272
    sndmax: 2411404105    sndcwnd:       7240 sndssthresh: 1073725440
       irs: 2743225428      rcvup: 2743226463
    rcvnxt: 2743226463     rcvadv: 2743242847      rcvwnd:      16384
       rtt:          0       srtt:       1529        rttv:        733
    rxtcur:       1200   rxtshift:          0       rtseq: 2411404086
    rttmin:       1000        mss:       1448   <<<<<<<<
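
As a cross-check (my arithmetic): 1500 - 20 (IP) - 20 (TCP) = 1460, less 12 bytes of TCP timestamp options = 1448, so this session appears to be running at the full 1500-byte path MTU.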

HTHs


-----Original Message-----
From: juniper-nsp-bounces at puck.nether.net
[mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Paul Stewart
Sent: Wednesday, May 04, 2011 10:29 AM
To: 'juniper-nsp List'
Subject: [j-nsp] Juniper MTU Math / eBGP multihop problem

Hi there..

 

Trying to understand some Juniper MTU-related issues we're having.

 

ERX-310 router with dot1q AE connection to a pair of EX4200 switches.

 

The AE interface on the ERX-310 is showing an MTU of 1522, which using my "bad MTU math" would be correct (1500 + 22 bytes of overhead).

The AE interface on the EX4200 switch is showing an MTU of 1514, which, again using my bad math, looks too low once dot1q overhead is included.
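
(If the ERX counts both the 4-byte 802.1Q tag and the 4-byte FCS, 1500 + 14 + 4 + 4 = 1522; if the EX counts neither, 1500 + 14 = 1514. That would make both displays self-consistent, just with different accounting conventions.)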

 

On one of the EX4200 switches is a copper GigE connection going out to an ISP.  MTU on this port is 1514.

 

The problem is with BGP.  There are two sessions towards the ISP.  The first is plain eBGP and works fine: the session comes up and stays stable.  The second session, though, is eBGP multihop (about 5-6 hops away) and tears down after 90 seconds, which matches the default BGP hold timer expiring.

 

This is related to another message I posted recently to the list. We thought we had this figured out, but nope, we're still having an issue. All we know at the moment is that the ISP is using Cisco equipment with an MTU of 1500, which is pretty vague.

 

So, given the default MTU differences above, should anything really need to be adjusted, considering the first session works fine?  The ISP is more than happy to investigate, but I wanted to understand the MTU differences on the Juniper equipment first.

 

Thanks for any input. I hate MTU issues ;)

 

Cheers,

 

Paul

 

 

_______________________________________________
juniper-nsp mailing list
juniper-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



