[c-nsp] MPLS multilink MTU

Rodney Dunn rodunn at cisco.com
Wed Jul 30 09:47:45 EDT 2008


Ah ha... so with the physical MTU (which, please, start using instead of
"mpls mtu") we picked up on that and adjusted the negotiated MRRU
value, it appears.
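
A rough back-of-the-envelope on the old 1496 limit, assuming a single
4-byte MPLS label on the PE-PE hop (e.g. the VPN label after PHP):

  before: MRRU 1500 - 4-byte label = 1496 bytes of IP with DF set
  after:  MRRU 1520 - 4-byte label = 1516 bytes, so 1500-byte pings fit

So it looks like the bundle's reassembly limit (the MRRU), not the link
MTU, was the ceiling that "mpls mtu 1600" couldn't lift.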


Rodney

On Wed, Jul 30, 2008 at 02:34:41PM +0800, Soon Kian wrote:
> Hi Rodney
> 
> It works!  After changing the physical interface MTU instead of using "mpls mtu
> xxx".  I have attached the debug output from before and after the change.
> 
> Before:
> Jul 30 06:07:32.398: Se3/5:0 LCP: O CONFREQ [Listen] id 254 len 23
> Jul 30 06:07:32.398: Se3/5:0 LCP:    MagicNumber 0x3C11DFE2 (0x05063C11DFE2)
> Jul 30 06:07:32.398: Se3/5:0 LCP:    MRRU 1500 (0x110405DC)
> Jul 30 06:07:32.398: Se3/5:0 LCP:    EndpointDisc 1 klp002
> (0x1309016B6C70303032)
> 
> 
> After:
> Jul 30 06:21:53.827: Se4/7:0 PPP: Phase is ESTABLISHING, renegotiate LCP
> Jul 30 06:21:53.827: Se4/7:0 LCP: O CONFREQ [Closed] id 28 len 23
> Jul 30 06:21:53.827: Se4/7:0 LCP:    MagicNumber 0x3C1F0A9B (0x05063C1F0A9B)
> Jul 30 06:21:53.827: Se4/7:0 LCP:    MRRU 1520 (0x110405F0)
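> 
> (If I'm reading the LCP option bytes right: 0x11 is the MRRU option
> type, 0x04 its length, and the last two bytes are the value -- 0x05DC =
> 1500 before vs 0x05F0 = 1520 after.)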
> 
> Cheers
> Soon Kian
> 
> On Wed, Jul 30, 2008 at 11:04 AM, Rodney Dunn <rodunn at cisco.com> wrote:
> 
>     Soon,
>    
>     I haven't done this myself but I've seen discussions around
>     it before. From what I remember it has to do with the
>     MRRU negotiated values.
>    
>     Check 'debug ppp negotiation' and let's see what
>     we negotiated for MRU.
>    
>     Also, it's best not to use the "mpls mtu" command anymore
>     and always set the physical MTU on the interface to
>     account for the MPLS and/or tunnel overhead.
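>    
>     A minimal sketch of what that looks like here (only the relevant
>     lines shown; the 1520 value is just illustrative, sized to leave
>     room for a 4-byte label):
>    
>     interface Multilink11
>      mtu 1520
>      no mpls mtu 1600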
>    
>     CSCdj40945
>     PPP multilink MRRU value is not configurable
>    
>     added support for the:
>    
>     [no] ppp multilink mrru remote [num]
>     and
>     [no] ppp multilink mrru local [num]
>    
>     commands.
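>    
>     e.g., a sketch under the bundle interface (the value is purely
>     illustrative):
>    
>     interface Multilink11
>      ppp multilink mrru local 1520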
>    
>     See if that helps and let me know.
>    
>     Rodney
>    
>    
>    
>    
>     On Wed, Jul 30, 2008 at 10:51:30AM +0800, Soon Kian wrote:
>     > Hi Guys,
>     >
>     > Wondering if anyone has run into this problem before?
>     >
>     > a. Setup: MPLS PE - MPLS PE (2 x E1 in an MLPPP bundle)
>     > b. With only 1 x E1 in the bundle, I can ping in the VRF up to 1500 bytes
>     > with DF set. However, when both E1s are in the multilink, I can only ping
>     > up to 1496
>     > c. IOS: c7200-jk9s-mz.124-18.bin
>     > d. E1 Controller: PA-MC-8E1/120
>     >
>     > Configuration:
>     >
>     > interface Multilink11
>     >  ip address x.x.x.x
>     >  no ip redirects
>     >  no ip proxy-arp
>     >  carrier-delay 10
>     >  mpls label protocol ldp
>     >  mpls ip
>     >  mpls mtu 1600
>     >  no cdp enable
>     >  ppp multilink
>     >  ppp multilink group 11
>     >  no clns route-cache
>     >
>     > interface Serial1/7:0
>     >  bandwidth 2048
>     >  ip address x.x.x.x 255.255.255.252
>     >  encapsulation ppp
>     >  ppp multilink
>     >  ppp multilink group 11
>     >  no clns route-cache
>     >
>     > interface Serial2/5:0
>     >  bandwidth 2048
>     >  ip address x.x.x.x 255.255.255.252
>     >  encapsulation ppp
>     >  no fair-queue
>     >  ppp multilink
>     >  ppp multilink group 11
>     >  no clns route-cache
>     >
>     > router>sh ppp multilink
>     > Bundle up for 19:47:41, total bandwidth 4096, load 42/255
>     >   Receive buffer limit 24000 bytes, frag timeout 1000 ms
>     >     0/0 fragments/bytes in reassembly list
>     >     37 lost fragments, 122838 reordered
>     >     632/475896 discarded fragments/bytes, 0 lost received
>     >     0x203DB4 received sequence, 0x16EF7F sent sequence
>     >   Member links: 2 active, 0 inactive (max not set, min not set)
>     >     Se2/5:0, since 19:47:52
>     >     Se1/7:0, since 19:47:49
> 
> 

