[j-nsp] New_title: vpls over gre config sample

Harry Reynolds harry at juniper.net
Fri Apr 20 16:47:12 EDT 2007


Pasting a working config for VPLS over GRE between PEs. CE-CE pings and
OSPF are working. The MTU on the T1 link is adjusted to accommodate
1500-byte pings between the CEs. The topo is pasted inline and will
wrap; I cannot attach files to this list. This is part of a larger test
script, so much of the topo is not being used. In the GRE tunnel test
the traffic flows directly between r1 and r4:

HTHs
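
A quick note on why the t1 MTU is bumped (rough arithmetic on my part,
not something measured on the wire): the 1472-byte ping payload plus 8
bytes of ICMP and 20 bytes of IP gives a 1500-byte CE packet, which the
PE carries as a full 1514-byte Ethernet frame over the VPLS. Across the
GRE path that frame picks up roughly a 4-byte VPLS label (no control
word is negotiated), a 4-byte GRE header and a 20-byte outer IP header:

    1514 + 4 + 4 + 20 = 1542 bytes riding the t1 link

which is more than the default T1 MTU allows (1504 bytes media MTU by
default, if I recall correctly), hence the "mtu 2000" on t1-0/2/0.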

{master}[edit]
regress@vpn03-1# run show version 
Hostname: vpn03-1
Model: m20
JUNOS Base OS boot [8.4-20070420.0]


{master}[edit]
regress@vpn03-1# show interfaces | no-more 
traceoptions {
    file interface_trace;
    flag all;
}
so-0/0/0 {
    unit 0 {
        family inet {
            address 10.1.3.1/30;
        }
        family iso;
        family mpls {
            filter {
                output r1_ce1_out;
            }
        }
    }
}
t1-0/2/0 {
    mtu 2000;
    unit 0 {
        family inet {
            address 10.1.4.1/30;
        }
        family iso;
        family mpls {
            filter {
                input r1_ce2_in_ldp_native;
                output r1_ce1_out_ldp_native;
            }
        }
    }
}
so-1/0/0 {
    unit 0 {
        family inet {
            address 10.1.2.1/30;
        }
        family iso;
        family mpls {
            filter {
                input r1_ce2_in;
                output r1_ce1_out_ldp;
            }
        }
    }
}
fe-1/1/1 {
    encapsulation ethernet-vpls;
    unit 0 {
        family vpls {
            filter {
                input r1_ce1_in;
                output r1_ce2_out;
            }
        }
    }
}
gr-3/0/0 {
    unit 0 {
        tunnel {
            source 10.255.71.24;
            destination 10.255.14.182;
        }
        family inet;
        family mpls;
    }
}
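
The r4 (vpn12) end of the tunnel is not pasted here, but it would
presumably be the mirror image with source and destination swapped. A
sketch only, pieced together from the loopbacks and the gr-0/3/0 entry
in the inventory at the bottom, not copied from the actual router:

/* assumed r4 side, not from the real config */
gr-0/3/0 {
    unit 0 {
        tunnel {
            source 10.255.14.182;
            destination 10.255.71.24;
        }
        family inet;
        family mpls;
    }
}

Family mpls on the gr unit is what allows labeled VPLS traffic to be
sent into the tunnel, so it is needed on both ends.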
{master}[edit]
regress@vpn03-1# show protocols | no-more 
rsvp {
    traceoptions {
        file rsvp;
        flag error detail;
        flag event detail;
    }
    interface so-1/0/0.0;
    interface so-0/0/0.0;
    interface t1-0/2/0.0;
}
mpls {
    traceoptions {
        file mpls;
        flag error;
        flag graceful-restart;
    }
    inactive: label-switched-path r1_vpn03-to-r4_vpn12 {
        to 10.255.14.182;
    }
    interface all;
}
bgp {
    traceoptions {
        file bgp_trace;
        flag all detail;
    }
    group vpls-pe {
        type internal;
        local-address 10.255.71.24;
        family l2vpn {
            signaling;
        }
        neighbor 10.255.14.182;
    }
}
ospf {
    traffic-engineering;
    area 0.0.0.0 {
        interface lo0.0 {
            passive;
        }
        interface so-1/0/0.0 {
            metric 10;
        }
        interface so-0/0/0.0 {
            metric 10;
        }
        interface t1-0/2/0.0 {
            metric 10;
        }
    }
}
ldp {
    traceoptions {
        file ldp;
        flag route detail;
        flag error detail;
        flag event;
    }
}
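
With the RSVP LSP deactivated, the signaling that actually matters for
the VPLS is the IBGP session with family l2vpn signaling; "interface
all" under mpls also picks up gr-3/0/0.0. The r4 side presumably runs
the mirror-image group. A sketch inferred from the local-address and
neighbor above, not taken from r4:

/* assumed r4 (vpn12) side */
bgp {
    group vpls-pe {
        type internal;
        local-address 10.255.14.182;
        family l2vpn {
            signaling;
        }
        neighbor 10.255.71.24;
    }
}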

{master}[edit]
regress@vpn03-1# show routing-instances | no-more 
vpls1 {
    instance-type vpls;
    interface fe-1/1/1.0;
    route-distinguisher 10.255.71.24:1;
    vrf-target target:100:1;
    protocols {
        vpls {
            traceoptions {
                file vpls;
                flag error detail;
                flag state detail;
                flag topology detail;
                flag route detail;
                flag connections detail;
            }
            site-range 10;
            site 1 {
                site-identifier 1;
            }
        }
    }
}
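
The instance on r4 should look much the same apart from the site
numbering. A guess at the remote side (site 2 matches the remote site
shown in the vpls connections output below; the RD value is my
assumption and only has to be unique per PE, while the vrf-target must
match):

/* assumed r4 side */
vpls1 {
    instance-type vpls;
    interface fe-1/0/1.0;
    route-distinguisher 10.255.14.182:1;
    vrf-target target:100:1;
    protocols {
        vpls {
            site-range 10;
            site 2 {
                site-identifier 2;
            }
        }
    }
}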

{master}[edit]
regress@vpn03-1# show routing-options 
traceoptions {
    file routing_options;
    flag nsr-synchronization;
}
graceful-restart {
    restart-duration 260;
}
interface-routes {
    rib-group inet ifrg;
}
rib inet.3 {
    static {
        route 10.255.14.182/32 next-hop gr-3/0/0.0;
    }
}
rib-groups {
    ifrg {
        import-rib [ inet.0 inet.3 ];
    }
}
autonomous-system 100;
forwarding-table {
    traceoptions {
        file forwarding_table;
        flag route detail;
    }
}
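
This is the piece that lets the VPLS come up with no LSP at all: the
static /32 for the remote PE loopback sits in inet.3 and points at the
gr- interface, so the BGP-signaled VPLS routes resolve over the GRE
tunnel rather than over an RSVP or LDP path (the LSP above is
deactivated and "show rsvp session" below is empty, yet the pseudowire
is Up). The interface-routes rib-group is what copies the direct/local
routes into inet.3 as well, which is why they show up in the inet.3
dump further down. r4 presumably carries the mirror-image static route;
a sketch, assuming its gr-0/3/0 tunnel:

rib inet.3 {
    static {
        /* assumed r4 side: r1 loopback resolved via the GRE tunnel */
        route 10.255.71.24/32 next-hop gr-0/3/0.0;
    }
}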

{master}[edit]
regress@vpn03-1# run show vpls connections 
Layer-2 VPN connections:

Legend for connection status (St)   
EI -- encapsulation invalid      NC -- interface encapsulation not CCC/TCC/VPLS
EM -- encapsulation mismatch     WE -- interface and instance encaps not same
VC-Dn -- Virtual circuit down    NP -- interface hardware not present 
CM -- control-word mismatch      -> -- only outbound connection is up
CN -- circuit not provisioned    <- -- only inbound connection is up
OR -- out of range               Up -- operational
OL -- no outgoing label          Dn -- down                      
LD -- local site signaled down   CF -- call admission control failure
RD -- remote site signaled down  SC -- local and remote site ID collision
LN -- local site not designated  LM -- local site ID not minimum designated
RN -- remote site not designated RM -- remote site ID not minimum designated
XX -- unknown connection status  IL -- no incoming label

Legend for interface status 
Up -- operational           
Dn -- down

Instance: vpls1
Local site: 1 (1)
    connection-site           Type  St     Time last up          # Up trans
    2                         rmt   Up     Apr 20 13:32:43 2007           1
      Local interface: vt-3/0/0.1048576, Status: Up, Encapsulation: VPLS
        Description: Intf - vpls vpls1 local site 1 remote site 2
      Remote PE: 10.255.14.182, Negotiated control-word: No
      Incoming label: 800001, Outgoing label: 800000

{master}[edit]
regress@vpn03-1# run show rsvp session 
Ingress RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

Egress RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

Transit RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

{master}[edit]
regress@vpn03-1# run show route table inet.3 

inet.3: 11 destinations, 11 routes (10 active, 0 holddown, 1 hidden)
Restart Complete
+ = Active Route, - = Last Active, * = Both

10.1.2.0/30        *[Direct/0] 00:02:46
                    > via so-1/0/0.0
10.1.2.1/32        *[Local/0] 00:02:46
                      Local via so-1/0/0.0
10.1.3.0/30        *[Direct/0] 00:02:46
                    > via so-0/0/0.0
10.1.3.1/32        *[Local/0] 00:02:46
                      Local via so-0/0/0.0
10.1.4.0/30        *[Direct/0] 00:02:46
                    > via t1-0/2/0.0
10.1.4.1/32        *[Local/0] 00:02:46
                      Local via t1-0/2/0.0
10.255.14.182/32   *[Static/5] 00:03:29
                    > via gr-3/0/0.0
10.255.71.24/32    *[Direct/0] 00:02:46
                    > via lo0.0
192.168.64.0/21    *[Direct/0] 00:02:46
                    > via fxp0.0
192.168.71.23/32   *[Local/0] 00:02:46
                      Local via fxp0.0

<<<< CE-CE working:

regress@vpn07> ping 1.1.1.2 size 1472    
PING 1.1.1.2 (1.1.1.2): 1472 data bytes
1480 bytes from 1.1.1.2: icmp_seq=0 ttl=64 time=20.406 ms
1480 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=20.350 ms
1480 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=20.367 ms
^C
--- 1.1.1.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 20.350/20.374/20.406/0.023 ms

regress@vpn07> show ospf neighbor 
Address          Interface              State     ID               Pri  Dead
1.1.1.2          fe-1/1/3.0             Full      10.255.14.185    128    36

<<< Topo:


                       +--------+               +--------+
                       |        |               |        |
                       | vpn04  |  so-0/0/1     | vpn02  |
                       |  R2    |---------------|  R3    |
                       |  P1    |     so-0/2/1  |  P2    |
                       |        |               |        |
                       +--------+               +--------+
               so-1/0/0  |   |  so-6/1/2  _______|     |  t1-3/0/1  
                         |   |           /  so-0/2/2   |
                         |   |          /              |
                         |   |         /               |
                         |   |________/_____________   |
                         |           /             |   |
               so-1/0/0  |          /    so-1/2/0  |   |  t1-0/1/2  
+--------+             +--------+  /            +--------+             +--------+
| vpn07  |             | vpn03  |_/  so-0/0/0   | vpn12  |             | vpn14  |
|  R0    |  fe-1/1/3   |  R1    |               |  R4    |  fe-1/0/1   |  R5    |
|  CE1   |-------------|  PE1   |  t1-0/2/0     |  PE2   |-------------|  CE2   |
|        |   fe-1/1/1  |        |---------------|        |   fe-0/3/1  |        |
|        |             |        |     t1-0/1/1  |        |             |        |
+--------+             +--------+               +--------+             +--------+

 
   R0 (vpn07)        m10            8.4-20070415.0
                     lo0.0          10.255.14.177   abcd::10:255:14:177
     R0-R1:          fe-1/1/3       1.1.1.1/30

   R1 (vpn03)        m20            8.4-20070420.0
                     lo0.0          10.255.71.24    abcd::10:255:71:24
     R0-R1:          fe-1/1/1
     R1-GR:          gr-2/2/0
     R1-R2:          so-1/0/0       10.1.2.1/30
     R1-R3:          so-0/0/0       10.1.3.1/30
     R1-R4:          t1-0/2/0       10.1.4.1/30

   R2 (vpn04)        m40            8.4-20070420.0
                     lo0.0          10.255.14.174   abcd::10:255:14:174
     R1-R2:          so-1/0/0       10.1.2.2/30
     R2-R3:          so-0/0/1       10.2.3.1/30
     R2-R4:          so-6/1/2       10.2.4.1/30

   R3 (vpn02)        m40            8.4-20070420.0
                     lo0.0          10.255.14.172   abcd::10:255:14:172
     R1-R3:          so-0/2/2       10.1.3.2/30
     R2-R3:          so-0/2/1       10.2.3.2/30
     R3-R4:          t1-3/0/1       10.3.4.1/30

   R4 (vpn12)        m10            8.4-20070420.0
                     lo0.0          10.255.14.182   abcd::10:255:14:182
     R1-R4:          t1-0/1/1       10.1.4.2/30
     R2-R4:          so-1/2/0       10.2.4.2/30
     R3-R4:          t1-0/1/2       10.3.4.2/30
     R4-GR:          gr-0/3/0
     R4-R5:          fe-1/0/1

   R5 (vpn14)        m320           8.4-20070420.0
                     lo0.0          10.255.14.185   abcd::10:255:14:185
     R4-R5:          fe-0/3/1       1.1.1.2/30


> -----Original Message-----
> From: juniper-nsp-bounces at puck.nether.net 
> [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of FAHAD ALI KHAN
> Sent: Friday, April 20, 2007 6:16 AM
> To: Josef Buchsteiner
> Cc: juniper-nsp
> Subject: Re: [j-nsp] Class of Service implementation over MLPPP link
> 
> Dear All
> 
> Thanks for your support. I also want to start another thread on a
> question that has been asked many times but never answered:
> 
> carrying MPLS VPN [L2VPN (Kompella) and l2circuit (Martini)] traffic
> over a GRE tunnel.
> 
> The Juniper documentation states that MPLS over GRE is supported, and
> in practice it is, but only for L3VPN. When pushing l2circuit/L2VPN
> traffic over GRE it causes problems: the VPN status shows up and
> running, but the traffic does not flow; even a normal ping from CE to
> CE fails.
> 
> Has anybody in this group implemented this scenario? If so, please
> share your sample configuration and comments.
> 
> Regards
> 
> Fahad
> 
> 
> On 4/20/07, Josef Buchsteiner <josefb at juniper.net> wrote:
> >
> >
> >
> > Friday, April 20, 2007, 8:48:17 AM, you wrote:
> > FAK> One more question related to Multiclass MLPPP. Suppose my
> > FAK> scenario is something like the following:
> >
> > FAK> PE1 ========= PE2 ========PE3
> > FAK>                             ||
> > FAK>                             ||
> > FAK>                            PE4
> >
> > FAK> In this case, PE2 has a total of three MLPPP bundles, one each
> > FAK> with PE1, PE3 and PE4 respectively. Now in this case does my
> > FAK> previous configuration work for all, or do I have to configure
> > FAK> multiclass MLPPP on PE2 to support multiple class flows on the
> > FAK> different bundles?
> >
> >     I'm not sure I understand why you question this topo. Your
> >     current configuration will work no matter whether you have one,
> >     two or 100 bundles on PE2, and there is no dependency on whether
> >     you have regular ML or multiclass ML, since all of it is bundle
> >     specific.
> >
> >
> >           Josef
> >
> >
> >
> >
> > FAK> I think multiclass will not be required; my current
> > FAK> configuration will work for the other two. Just need to know
> > FAK> your comments.
> >
> > FAK> Regards
> >
> > FAK> Fahad
> >
> >
> > FAK> On 4/18/07, Josef Buchsteiner <josefb at juniper.net> wrote:
> > >>
> > >>
> > >>
> > >> Wednesday, April 18, 2007, 7:47:11 AM, you wrote:
> > >> >>
> > >> >> Dear Josef
> > >> >>
> > >> >> Thanks for your valuable information, and yes, you got it
> > >> >> right: I was checking "show interface extensive", which does
> > >> >> not show any queue stats, while in "show interface queue" the
> > >> >> packets are actually going to those specific queues.
> > >> >>
> > >> >> Can you kindly explain this in a little more detail, as I
> > >> >> can't quite follow it: "On the egress interface we have to put
> > >> >> all into Q0 since you are not using multiclass MLPPP and we
> > >> >> have only one SEQ pool, so we will end up with all in one queue
> > >> >> to prevent re-ordering. The queuing is done in LSQ prior to
> > >> >> putting on the seq stamps."
> > >> >>
> > >> >> What is the significance of multiclass MLPPP,
> > >>
> > >>
> > >>   One of the main drivers for multiclass is that you can
> > >>   load-share different classes of MLPPP traffic across the
> > >>   bundles. Without it you can only load-share *one* MLPPP class,
> > >>   and LFI traffic needs to be hashed onto *one* single member
> > >>   link to avoid re-ordering.
> > >>
> > >>
> > >>
> > >> >> can't I get the Gold/Silver/BE/NC traffic without configuring
> > >> >> this parameter?
> > >>
> > >>
> > >>   Which you already have at the LSQ level. Don't think about the
> > >>   queues on the PIC. Just see the egress interface as one FIFO;
> > >>   traffic is already arriving via the scheduler you have defined.
> > >>
> > >>   We should not see queuing on the egress PIC, and if it happens
> > >>   because the line has errors, then you will drop, but only for
> > >>   queue 0. If you were to send the ML traffic with one seq# pool
> > >>   into different egress queues and start dropping it according to
> > >>   the scheduler you have applied to the LSQ interface, we would
> > >>   get massive re-ordering and huge jitter, since the remote side
> > >>   is waiting for the frames for a certain period of time.
> > >>
> > >>   The scheduler, according to your configuration, is applied
> > >>   *before* the ML sequence stamp is built, which is the right
> > >>   thing to do. Never put ML traffic which has one seq# pool into
> > >>   different queues.
> > >>
> > >>
> > >> >>
> > >> >> Also, while checking the constituent link stats (sh interface
> > >> >> extensive or sh interface queue), both show the packets going
> > >> >> through the BE queue, whereas at the LSQ level they are
> > >> >> flowing through Gold or Silver.
> > >>
> > >>   Which is correct. You have done the queuing/shaping/scheduler
> > >>   actions already at the LSQ level.
> > >>
> > >>
> > >>           Josef
> > >>
> > >>
> > >>
> > >>
> > >> >>
> > >> >> Can you provide this information?
> > >> >>
> > >> >> Regards
> > >> >>
> > >> >> Fahad
> > >> >>
> > >> >>
> > >> >>  On 4/18/07, Josef Buchsteiner <josefb at juniper.net> wrote:
> > >> >> >
> > >> >> > Fahad,
> > >> >> >
> > >> >> >        the behavior you see is normal and expected.
> > >> >> >
> > >> >> >
> > >> >> >        First, to see the queue statistics on the LSQ
> > >> >> >        interface, you most likely forgot to add the subunit
> > >> >> >        number, as the interface queue numbers will be zero
> > >> >> >        all the time since that is the entire LSQ interface.
> > >> >> >        That's the reason why you configure
> > >> >> >        per-unit-scheduler on the LSQ interface.
> > >> >> >
> > >> >> >        On the egress interface we have to put all into Q0,
> > >> >> >        since you are not using multiclass MLPPP and we have
> > >> >> >        only one SEQ pool, so we will end up with all in one
> > >> >> >        queue to prevent re-ordering. The queuing is done in
> > >> >> >        LSQ prior to putting on the seq stamps.
> > >> >> >
> > >> >> >        We do recommend, once there is LFI traffic, that you
> > >> >> >        configure a scheduler on the egress PIC to make sure
> > >> >> >        it gets the right priority and is served before the
> > >> >> >        ML packets, as the interleaving is done there. So
> > >> >> >        with LFI traffic and the fragmentation-map it would
> > >> >> >        then go into a different egress PIC queue. If you use
> > >> >> >        ML-MLPPP you will then see everything going into
> > >> >> >        different egress queues.
> > >> >> >
> > >> >> >
> > >> >> >        However  the  point  is  that  queuing is 
> done on LSQ. 
> > >> >> > So
> > your
> > >> >> >        configuration  is ok and most likely all is working
> > correctly.
> > >> >> >        Just check if you get the LSQ queue number
> > >> >> >
> > >> >> >
> > >> >> >
> > >> >> > <-- example like this, please check on your side
> > >> >> >
> > >> >> > josefb@minsk# run show interfaces queue lsq-1/2/0.0
> > >> >> > Logical interface lsq-1/2/0.0 (Index 76) (SNMP ifIndex 65)
> > >> >> > Forwarding classes: 4 supported, 4 in use
> > >> >> > Egress queues: 4 supported, 4 in use
> > >> >> > Burst size: 0
> > >> >> > Queue: 0, Forwarding classes: best-effort
> > >> >> >   Queued:
> > >> >> >     Packets              :                113479                   166 pps
> > >> >>
> > >>
> > >>
> > >>
> >
> >
> >


