[j-nsp] Too much packet loss during switchover on MPLS network

Keegan Holley keegan.holley at sungard.com
Mon Mar 14 20:05:15 EDT 2011


I think this is moot.  The OP already says he sees his LSP switch to the standby well before the 40-second mark.  The only things left to verify are the customer test and the forwarding tables.

Sent from my iPhone

On Mar 14, 2011, at 7:54 PM, David Ball <davidtball at gmail.com> wrote:

>  Disabling an interface or yanking fibers is certainly quicker.  You
> can speed up convergence following a deactivate if you add BFD and
> maybe some LFA, but the loss of light on an interface has been faster
> than a deactivate in every case I've ever tested.
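> 
> For what it's worth, a minimal sketch of adding BFD to an OSPF
> adjacency (interface name and timer value are placeholders, not from
> this thread):
> 
> set protocols ospf area 0.0.0.0 interface ge-0/0/0.0 bfd-liveness-detection minimum-interval 300
> commit
> 
> With a 300 ms minimum interval and the default multiplier of 3, a dead
> neighbor is declared in about a second instead of waiting out the IGP
> hold timer.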
> 
> David
> 
> 
> 
> 2011/3/14 Keegan Holley <keegan.holley at sungard.com>:
>> Deactivating the interface should remove the IP address, which should cause
>> the IGP to converge immediately.
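>> 
>> For reference, the test in question is just (interface name is a
>> placeholder):
>> 
>> deactivate interfaces fe-x/y/z
>> commit
>> 
>> which removes the interface stanza from the active configuration.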
>> 
>> On Mon, Mar 14, 2011 at 5:59 PM, Matthew Tighe <matthew.e.tighe at gmail.com>wrote:
>> 
>>> You can *disable* the interface rather than *deactivate* it. That should
>>> show it as down immediately.
>>> 
>>> set interfaces fe-x/y/z disable
>>> commit
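>>> 
>>> You can confirm it took effect with:
>>> 
>>> show interfaces fe-x/y/z terse
>>> 
>>> which should show the interface administratively down right after the
>>> commit.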
>>> 
>>> 
>>> 
>>> On Mon, Mar 14, 2011 at 2:21 PM, Gökhan Gümüş <ggumus at gmail.com> wrote:
>>> 
>>>> It might make sense... I have always been wondering about this.
>>>> What would be a good way to test such behaviour?
>>>> Disabling the circuit, or something else?
>>>> 
>>>> Thanks,
>>>> Gokhan
>>>> 
>>>> On Mon, Mar 14, 2011 at 10:15 PM, Amos Rosenboim <amos at oasis-tech.net> wrote:
>>>> 
>>>>> As far as I remember, deactivating the interface will not take the link
>>>>> down, so we are relying on IGP hold times to detect the failure.
>>>>> If so, does the 45 seconds make any sense?
>>>>> Can you correlate IGP adjacency loss to LSP switchover to the customer
>>>>> pings?
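>>>>> 
>>>>> As a starting point for correlating those (log tags are from memory
>>>>> and may vary by Junos version and IGP):
>>>>> 
>>>>> show log messages | match "RPD_OSPF_NBRDOWN|RPD_ISIS_ADJDOWN"
>>>>> show mpls lsp extensive
>>>>> 
>>>>> The first gives a timestamp for the adjacency loss, and the LSP
>>>>> history in the second shows when the secondary took over; line both
>>>>> up against the customer's ping gaps.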
>>>>> 
>>>>> Amos
>>>>> 
>>>>> Sent from my iPhone
>>>>> 
>>>>> On 14 Mar 2011, at 21:55, "Doug Hanks" <dhanks at juniper.net> wrote:
>>>>> 
>>>>>> If it’s VPLS, the customer wouldn’t be using BGP though.  That’s why I
>>>>>> mentioned STP.
>>>>>> 
>>>>>> From: Keegan Holley [mailto:keegan.holley at sungard.com]
>>>>>> Sent: Monday, March 14, 2011 12:47 PM
>>>>>> To: Gökhan Gümüş
>>>>>> Cc: Doug Hanks; Diogo Montagner; juniper-nsp at puck.nether.net
>>>>>> Subject: Re: [j-nsp] Too much packet loss during switchover on MPLS network
>>>>>> 
>>>>>> Another way to check would be to figure out when you start seeing
>>>>>> MAC addresses from the customer in the VPLS tables.  That will mean the
>>>>>> network has failed over properly.  Do you know what the customer topology
>>>>>> looks like?  They could be waiting for BGP to fail over or something else
>>>>>> that is inherently slow.  I doubt this is a problem with your MPLS config,
>>>>>> especially if you see your LSP switch.  It's hard to guess without knowing
>>>>>> your topology or the customer's, though.
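>>>>>> 
>>>>>> Something along these lines would show when the customer MACs come
>>>>>> back (instance name is a placeholder):
>>>>>> 
>>>>>> show vpls connections
>>>>>> show vpls mac-table instance CUSTOMER-VPLS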
>>>>>> On Mon, Mar 14, 2011 at 3:42 PM, Gökhan Gümüş <ggumus at gmail.com> wrote:
>>>>>> No, they are not using rapid ping; I can confirm it.
>>>>>> 
>>>>>> I do not agree that this is a spanning-tree issue.
>>>>>> Just to note, I am simply de-activating one circuit via the CLI to trigger
>>>>>> the transition from primary to secondary.
>>>>>> 
>>>>>> Gokhan
>>>>>> 
>>>>>> 
>>>>>> 2011/3/14 Doug Hanks <dhanks at juniper.net>:
>>>>>> I'm sure they were using a rapid ping, so it didn't take anywhere near 45
>>>>>> seconds.  If they were using a regular ping, however, it may be an STP
>>>>>> issue.
>>>>>> 
>>>>>> Also, are you using pre-signaled LSPs?
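>>>>>> 
>>>>>> By pre-signaled I mean a secondary with the standby keyword, e.g.
>>>>>> (LSP and path names are placeholders):
>>>>>> 
>>>>>> set protocols mpls label-switched-path LSP-TO-PE2 secondary VIA-BACKUP standby
>>>>>> 
>>>>>> so the backup path is signaled before the primary ever fails.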
>>>>>> 
>>>>>> -----Original Message-----
>>>>>> From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Keegan Holley
>>>>>> Sent: Monday, March 14, 2011 11:15 AM
>>>>>> To: Diogo Montagner
>>>>>> Cc: juniper-nsp at puck.nether.net; Gökhan Gümüş
>>>>>> Subject: Re: [j-nsp] Too much packet loss during switchover on MPLS network
>>>>>> 
>>>>>> On Mon, Mar 14, 2011 at 1:25 PM, Diogo Montagner
>>>>>> <diogo.montagner at gmail.com> wrote:
>>>>>> 
>>>>>>> Do you have FRR enabled on the LSPs?
>>>>>>> 
>>>>>> 
>>>>>> Node protection and link protection are the same thing as fast
>>>>>> re-route.
>>>>>> 
>>>>>> Is it configured correctly though?  You have to configure a secondary
>>>>>> path under protocols mpls and then enable it for FRR/node protection.
>>>>>> You can't just enable it and have it work.
>>>>>> Also, what does the topology look like?  Could you just be waiting for
>>>>>> customer routing/spanning tree?  Even without FRR, your LSPs fail over
>>>>>> at the speed of your IGP when a link is shut down.  None of them take
>>>>>> 41 seconds.
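>>>>>> 
>>>>>> A rough sketch of what I mean (all names and addresses are
>>>>>> placeholders):
>>>>>> 
>>>>>> set protocols mpls path PRI-PATH 10.0.0.1 strict
>>>>>> set protocols mpls path SEC-PATH 10.0.1.1 strict
>>>>>> set protocols mpls label-switched-path LSP-TO-PE2 to 192.168.0.2
>>>>>> set protocols mpls label-switched-path LSP-TO-PE2 node-link-protection
>>>>>> set protocols mpls label-switched-path LSP-TO-PE2 primary PRI-PATH
>>>>>> set protocols mpls label-switched-path LSP-TO-PE2 secondary SEC-PATH standby
>>>>>> set protocols rsvp interface all link-protection
>>>>>> 
>>>>>> Without standby on the secondary, the backup still has to be signaled
>>>>>> after the failure is detected, and without link-protection on the RSVP
>>>>>> interfaces the node-link-protection knob has nothing to use.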
>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On Tue, Mar 15, 2011 at 12:46 AM, Gökhan Gümüş <ggumus at gmail.com> wrote:
>>>>>>>> Dear all,
>>>>>>>> 
>>>>>>>> I have a problem with one of our customers.
>>>>>>>> 
>>>>>>>> The customer has been deployed with VPLS. We are using a primary path and
>>>>>>>> a secondary path (standby) to handle VPLS traffic between sites.
>>>>>>>> 
>>>>>>>> During a maintenance window, we performed a failover test. The customer
>>>>>>>> was pinging the remote site continuously, and we wanted to see how many
>>>>>>>> packets would be lost during the switchover. When I triggered the
>>>>>>>> transition from primary to secondary, the customer lost 41 packets
>>>>>>>> during the ping test. Then I implemented node-link-protection and
>>>>>>>> link-protection in case they would help, but the customer experienced
>>>>>>>> the same amount of packet loss during the transition.
>>>>>>>> 
>>>>>>>> My question: is this normal behaviour? From my perspective it is not.
>>>>>>>> 
>>>>>>>> Has anybody had such an experience?
>>>>>>>> 
>>>>>>>> Thanks and regards,
>>>>>>>> 
>>>>>>>> Gokhan
>>> 
>>> 
>>> 
>>> --
>>> Matthew Tighe
>>> matthew.e.tighe at gmail.com
> 


