[j-nsp] BGP import policy not refreshing properly
Yevgeniy Voloshin
yevgeniy.voloshin at gmail.com
Thu Jul 16 00:35:55 EDT 2009
Hi Truman,

One thing I noticed in the policy: term 1 sets the metric but has no terminating action, so matching routes fall through to the next term. Shouldn't there be an accept there (marked below)?
tboyes at manhattan> show configuration policy-options policy-statement set-med
term 1 {
    from metric 0;
    then {
        metric 30000;
        ++++++ ACCEPT? ++++++
    }
}
term local_pref {
    then {
        local-preference 110;
        accept;
    }
}
term default {
    then reject;
}
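For comparison, a minimal sketch of term 1 with an explicit accept would be the following. Note that accept is a terminating action in Junos policy, so routes matching term 1 would then no longer pick up the local-preference 110 set in term local_pref:

term 1 {
    from metric 0;
    then {
        metric 30000;
        accept;
    }
}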
---
Yev.
2009/7/15 Truman Boyes <truman at suspicious.org>
> Hi,
>
> I ran a quick test with 9.2R2.15 between two BGP peers and I see BGP metric
> (MED) changes take effect immediately.
>
> tboyes at brooklyn> show configuration protocols bgp
> group test {
>     type internal;
>     local-address 50.50.50.1;
>     family inet {
>         unicast;
>     }
>     family inet-vpn {
>         unicast;
>     }
>     export static-export;
>     ipsec-sa bgp-secure;
>     multipath;
>     neighbor 50.50.50.254;
> }
>
> tboyes at manhattan> show configuration protocols bgp
> group test {
>     type internal;
>     local-address 50.50.50.254;
>     import set-med;
>     family inet {
>         unicast;
>     }
>     family inet-vpn {
>         unicast;
>     }
>     ipsec-sa bgp-secure;
>     neighbor 50.50.50.1;
> }
>
> tboyes at manhattan> show configuration policy-options policy-statement set-med
> term 1 {
>     from metric 0;
>     then {
>         metric 30000;
>     }
> }
> term local_pref {
>     then {
>         local-preference 110;
>         accept;
>     }
> }
> term default {
>     then reject;
> }
>
>
> Now I will start with no import policy on manhattan.
>
> Sending 3 routes, I see this:
>
> tboyes at manhattan# run show route protocol bgp
>
> inet.0: 10 destinations, 13 routes (10 active, 0 holddown, 0 hidden)
> + = Active Route, - = Last Active, * = Both
>
> 60.60.60.1/32      [BGP/170] 00:17:08, MED 100, localpref 100
>                       AS path: I
>                     > to 50.50.50.1 via em0.0
> 60.60.60.2/32      [BGP/170] 00:17:07, MED 0, localpref 100
>                       AS path: I
>                     > to 50.50.50.1 via em0.0
> 60.60.60.3/32      [BGP/170] 00:00:06, MED 300, localpref 100
>                       AS path: I
>                     > to 50.50.50.1 via em0.0
>
> So now we want to turn on the import policy on manhattan, commit, and see
> what happens.
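> (For reference, the change here is just the single import statement already
> shown in the manhattan configuration above, followed by a commit:)
>
> [edit]
> tboyes at manhattan# set protocols bgp group test import set-med
> tboyes at manhattan# commit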
>
> tboyes at manhattan# run show route protocol bgp
>
> inet.0: 10 destinations, 13 routes (10 active, 0 holddown, 0 hidden)
> + = Active Route, - = Last Active, * = Both
>
> 60.60.60.1/32      [BGP/170] 00:18:58, MED 100, localpref 110
>                       AS path: I
>                     > to 50.50.50.1 via em0.0
> 60.60.60.2/32      [BGP/170] 00:18:57, MED 30000, localpref 110
>                       AS path: I
>                     > to 50.50.50.1 via em0.0
> 60.60.60.3/32      [BGP/170] 00:01:56, MED 300, localpref 110
>                       AS path: I
>                     > to 50.50.50.1 via em0.0
>
>
> This worked instantly without needing to clear the BGP session.
>
> If you turn on BGP traceoptions you should see something like the following,
> which shows the new policy being re-evaluated and the route attributes being
> changed:
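> (If you don't already have traceoptions configured, a minimal stanza along
> these lines will do; the file name and sizes here are arbitrary, and
> 'flag all' is verbose, so use a narrower flag on a busy router:)
>
> [edit protocols bgp]
> traceoptions {
>     file bgp-trace size 5m files 3;
>     flag all;
> }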
>
> Jul 14 17:23:55.050048 peer 50.50.50.1 (test): Need to reevaluate import policy
> Jul 14 17:23:55.052131 task_timer_uset: timer BGP RT Background_BGP Route statistics timer <Touched> set to interval 30 at 17:24:25
> Jul 14 17:23:55.052141 bgp_rt_update_med_igp_init: Deleting MED IGP update timer
> Jul 14 17:23:55.052147 group group test type Internal: export eval flag set (vpn nlri)
> Jul 14 17:23:55.052151 50.50.50.1 (Internal AS 1): import eval flag set (config change)
> Jul 14 17:23:55.052472 init bgp commit sync
>
> Jul 14 17:23:55.052495 bgp_rib_notify: freddy.inet.0 Add - exists
> Jul 14 17:23:55.052531 task_job_create_background: create prio 5 job BGP reconfig for task BGP.0.0.0.0+179
> Jul 14 17:23:55.061915 background dispatch running job BGP reconfig for task BGP.0.0.0.0+179
> Jul 14 17:23:55.062080 CHANGE 60.60.60.1/32 gw 50.50.50.1 BGP pref 170/-111 metric 100/0 <Int Ext> as 1
> Jul 14 17:23:55.062106 CHANGE 60.60.60.2/32 gw 50.50.50.1 BGP pref 170/-111 metric 30000/0 <Int Ext> as 1
> Jul 14 17:23:55.062177 CHANGE 60.60.60.3/32 gw 50.50.50.1 BGP pref 170/-111 metric 300/0 <Int Ext> as 1
>
> Not sure why it was necessary to hard-clear the BGP session; does the
> upstream peer support BGP route refresh?
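> (One way to check: 'show bgp neighbor' lists the capabilities the peer
> advertised. The exact wording varies by release, but it is along these
> lines:)
>
> tboyes at manhattan> show bgp neighbor 50.50.50.1 | match Refresh
>   Peer supports Refresh capability (2)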
>
>
> Kind regards,
> Truman Boyes
>
> On 13/07/2009, at 6:35 PM, Will Orton wrote:
>
>> I have 2 POPs each with a connection to a common upstream. The upstream
>> is sending me MEDs, but lots of routes have (missing or 0) MEDs and I
>> want to reset those to a fixed value so I can tweak them later.
>>
>> So I have an import policy on each BGP session like so:
>>
>> term setall-meds {
>>     from metric 0;
>>     then {
>>         metric 30000;
>>     }
>> }
>> term def {
>>     then {
>>         local-preference 110;
>>         accept;
>>     }
>> }
>> term rej {
>>     then reject;
>> }
>>
>>
>> I apply this on both routers and get, for example:
>>
>> At POP A (M10i 9.3R1.7):
>> A Destination        P Prf  Metric 1  Metric 2  Next hop          AS path
>> * 64.152.0.0/13      B 170       110         0  >(TO POP B)       3356 I
>>                      B 170       110     30000  >(UPSTREAM AT A)  3356 I
>>
>> At POP B (M10 9.3R3.8):
>> A Destination        P Prf  Metric 1  Metric 2  Next hop          AS path
>> * 64.152.0.0/13      B 170       110         0  >(UPSTREAM AT B)  3356 I
>>
>>
>> So the M10 at POP B doesn't appear to be applying the import policy and
>> setting the MED to 30000, and as a result POP A picks the route through B.
>> (Yes, I waited more than the 15 minutes it takes for POP B's CPU to go
>> back to idle, so the RE-333-768 had churned through the whole table.)
>>
>> This resolved itself with a hard clear of the BGP session to the upstream
>> at POP B. A 'soft-inbound' clear at B didn't do it (other than pegging the
>> RE CPU for another 15 minutes).
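>> (For reference, with a placeholder upstream address of 192.0.2.1, the two
>> clears were:
>>
>> clear bgp neighbor 192.0.2.1 soft-inbound
>> clear bgp neighbor 192.0.2.1
>>
>> where soft-inbound asks the peer to re-send its routes so the import
>> policy can be re-applied, and the bare form tears the session down.)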
>>
>> Any ideas? JUNOS bug? Old/decrepit RE getting non-deterministic
>> with age? Do I really have to hard-clear the BGP session on the 'B'
>> router any time I change the import policy now? :/
>>
>>
>> -Will Orton