[j-nsp] 6PE without family inet6 labeled-unicast

Andrey Kostin ankost at podolsk.ru
Sun Jul 22 15:45:34 EDT 2018


  

Hi Pavel, 

Thanks for replying. I understand how it works as long as a proper next hop is present in the route. What caught my attention is the implicit next-hop conversion from a plain IPv4 address to an IPv4-mapped IPv6 next hop: "Nexthop: YYY.YYY.155.141" in the advertised route becomes "Protocol next hop: ::ffff:YYY.YYY.155.141" in the received route.


Otherwise it all works as expected, considering that family inet6 is enabled in the core.

I'm also wondering what would happen if there were no LSP available, which is a rather unrealistic situation because everything would be broken anyway in that case.

Kind regards, 

Andrey 

Pavel Lunin wrote on 21.07.2018 06:44:

> In this setup it's not 6PE but just classic IP over MPLS, where vanilla inet/inet6 iBGP resolves its protocol next hop with a labeled LDP/RSVP forwarding next hop.
> It works much the same way for v6 as for v4, except that the v6 header is exposed to the last P router when it performs PHP. It still relies on MPLS to make the forwarding decision (if we don't take into account the hashing story), however it "sees" the v6 header when it puts it onto the wire, and needs to treat it accordingly. E.g. it must set the v6 ethertype or decide what to do if the egress interface MTU can't accommodate the packet.
> So you need family inet6 enabled on the egress interface of the penultimate LSR to make IPv6 over MPLS work.
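> In Junos terms that just means the P router's core-facing interface carries family inet6 alongside family mpls, e.g. (the interface name here is only an example):
> 
> interfaces {
>     ae0 {
>         unit 0 {
>             family inet6;
>             family mpls;
>         }
>     }
> }
>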
> 6PE was invented to work around this. Technically it's the same IPv6 over MPLS but with an explicit (as opposed to implicit) null label at the tail end, which hides the v6 header from the penultimate LSR. Or you can just disable PHP in the core.
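> On a Juniper box that's the usual 6PE knob pair, roughly (the group name is just a placeholder):
> 
> protocols {
>     mpls {
>         ipv6-tunneling;
>     }
>     bgp {
>         group internal-v4 {
>             family inet6 {
>                 labeled-unicast {
>                     explicit-null;
>                 }
>             }
>         }
>     }
> }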
> 
> Cheers, 
> Pavel 
> 
> Fri, 20 Jul 2018, 21:59 Andrey Kostin:
> 
>> Hello juniper-nsp,
>> 
>> I've accidentally encountered an interesting behavior and I'm wondering if anyone has already seen it before, or maybe it's documented somewhere; pointers to the docs would be appreciated.
>> 
>> The story:
>> We began to activate IPv6 for customers connected via a cable network after the cable provider eventually added IPv6 support. We receive prefixes from the cable network via eBGP and then redistribute them inside our AS with iBGP.
>> There are two PEs connected to the cable network receiving the same prefixes, so for traffic load-balancing we change the next hop to an anycast loopback address shared by those two PEs and use dedicated LSPs to that IP, with "no-install" for the real PE loopback addresses (see the sketch below).
>> IPv6 wasn't intended to use MPLS, and the existing plain iBGP sessions between IPv6 addresses with family inet6 unicast were supposed to be reused. However, the same export policy, with a term that changes the next hop for a specific community, is used for both family inet and inet6, so it started to assign an IPv4 next hop to IPv6 prefixes implicitly.
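>> 
>> ## For illustration, per LSP that is roughly the following under "protocols mpls"
>> ## (addresses here are placeholders):
>> label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-1 {
>>     to YYY.YYY.155.140;             ## real PE loopback
>>     install YYY.YYY.155.141/32;     ## shared anycast loopback, installed in inet.3
>>     no-install-to-address;          ## don't install the real loopback from this LSP
>> }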
>> 
>> Here is an example with one prefix.
>> 
>> ## here PE receives prefix from eBGP neighbor:
>> 
>> uuuu at re1.agg01.LLL2> show route XXXX:XXXX:e1bc::/46
>> 
>> inet6.0: 52939 destinations, 105912 routes (52920 active, 1 holddown, 24 hidden)
>> + = Active Route, - = Last Active, * = Both
>> 
>> XXXX:XXXX:e1bc::/46*[BGP/170] 5d 13:16:26, MED 100, localpref 100
>>                       AS path: EEEE I, validation-state: unverified
>>                     > to XXXX:XXXX:ffff:f200:0:2:2:2 via ae2.202
>> 
>> ## Now PE advertises it to iBGP neighbor with next-hop changed to plain IP:
>> uuuu at re1.agg01.LLL2> show route XXXX:XXXX:e1bc::/46 advertising-protocol bgp XXXX:XXXX:1::1:140
>> 
>> inet6.0: 52907 destinations, 105843 routes (52883 active, 6 holddown, 24 hidden)
>>   Prefix                   Nexthop              MED     Lclpref    AS path
>> * XXXX:XXXX:e1bc::/46      YYY.YYY.155.141      100     100        EEEE I
>> 
>> ## Same output as above with details
>> {master}
>> uuuu at re1.agg01.LLL2> show route XXXX:XXXX:e1bc::/46 advertising-protocol bgp XXXX:XXXX:1::1:140 detail   ## Session is between v6 addresses
>> 
>> inet6.0: 52902 destinations, 105836 routes (52881 active, 3 holddown, 24 hidden)
>> * XXXX:XXXX:e1bc::/46 (3 entries, 1 announced)
>>  BGP group internal-v6 type Internal
>>      Nexthop: YYY.YYY.155.141   ## v6 prefix advertised with plain v4 next-hop
>>      Flags: Nexthop Change
>>      MED: 100
>>      Localpref: 100
>>      AS path: [IIII] EEEE I
>>      Communities: IIII:10102 no-export
>> 
>> ## iBGP neighbor receives the prefix with the rewritten next hop and uses established LSPs to forward traffic:
>> uuuu at re0.bdr01.LLL> show route XXXX:XXXX:e1bc::/46
>> 
>> inet6.0: 52955 destinations, 323835 routes (52877 active, 10 holddown, 79 hidden)
>> + = Active Route, - = Last Active, * = Both
>> 
>> XXXX:XXXX:e1bc::/46*[BGP/170] 5d 13:01:12, MED 100, localpref 100, from XXXX:XXXX:1::1:240
>>                       AS path: EEEE I, validation-state: unverified
>>                       to YYY.YYY.155.14 via ae1.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-1
>>                       to YYY.YYY.155.9 via ae12.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-2
>>                       to YYY.YYY.155.95 via ae4.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-1
>>                       to YYY.YYY.155.9 via ae12.0, label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-2
>> 
>> uuuu at re0.bdr01.LLL> show route XXXX:XXXX:e1bc::/46 detail | match "Protocol|XXXX:XXXX|BE-"
>> XXXX:XXXX:e1bc::/46 (3 entries, 1 announced)
>>         Source: XXXX:XXXX:1::1:240
>>         Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-1
>>         Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL2-2
>>         Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-1
>>         Label-switched-path BE-bdr01.LLL-vvvv-agg01.LLL-2
>>         Protocol next hop: ::ffff:YYY.YYY.155.141   ### Seems that the IPv4 next hop has been converted to the IPv4-mapped form
>>         Task: BGP_IIII.XXXX:XXXX:1::1:240
>>         Source: XXXX:XXXX:1::7
>> 
>> ## The policy assigning next-hop is the same for v4 and v6 sessions, only one term is shown:
>> uuuu at re1.agg01.LLL2> show configuration protocols bgp group internal-v4 export
>> export [ deny-rfc3330 to-bgp ];
>> 
>> {master}
>> uuuu at re1.agg01.LLL2> show configuration protocols bgp group internal-v6 export
>> export [ deny-rfc3330 to-bgp ];
>> 
>> uuuu at re1.agg01.LLL2> show configuration policy-options policy-statement to-bgp | display inheritance no-comments
>> term vvvv-vvvv {
>>     from {
>>         community vvvv-vvvv;
>>         tag 33;
>>     }
>>     then {
>>         next-hop YYY.YYY.155.141;
>>         accept;
>>     }
>> }
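>> 
>> ## For comparison, a family-specific v6 term would have looked roughly like this
>> ## (the v6 next-hop address is just a placeholder):
>> term vvvv-vvvv-v6 {
>>     from {
>>         family inet6;
>>         community vvvv-vvvv;
>>     }
>>     then {
>>         next-hop XXXX:XXXX::155:141;
>>         accept;
>>     }
>> }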
>> 
>> uuuu at re0.bdr01.LLL> show route forwarding-table destination XXXX:XXXX:e1bc::/46
>> Routing table: default.inet6
>> Internet6:
>> Destination          Type RtRef Next hop          Type         Index NhRef Netif
>> XXXX:XXXX:e1bc::/46  user     0                   indr       1049181    37
>>                                                   ulst       1050092     4
>>                                YYY.YYY.155.14     ucst          1775     1 ae1.0
>>                                YYY.YYY.155.9      Push 486887   1859     1 ae12.0
>>                                YYY.YYY.155.95     ucst          2380     1 ae4.0
>>                                YYY.YYY.155.9      Push 486892   2555     1 ae12.0
>> 
>> The result is that we have IPv6 traffic forwarded via MPLS without 6PE configured properly: ipv6-tunneling is configured under "protocols mpls", but there is no "family inet6 labeled-unicast explicit-null" under the v4 iBGP session (see the summary below).
>> It works as long as we have v6 enabled on all MPLS links, so packets are not dropped because of the implicit-null label.
>> Looks sketchy, but it works. Has anybody seen/used it before?
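>> 
>> ## In other words, what we have is only:
>> ##   set protocols mpls ipv6-tunneling
>> ## and not the usual 6PE companion knob:
>> ##   set protocols bgp group internal-v4 family inet6 labeled-unicast explicit-null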
>> 
>> -- 
>> Kind regards,
>> 
>> Andrey Kostin
>> _______________________________________________
>> juniper-nsp mailing list
>> juniper-nsp at puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp

 

