LSP load balancing (was Re: [c-nsp] Re: [j-nsp] load balancing between multiple BGP links)

William Phang phangjk at hotpop.com
Fri Sep 16 01:51:04 EDT 2005


Hi Rendo,

In this case, you may need filter-based forwarding. Suppose that traffic from
server A to the client uses LSP A (i.e. link e1-1), traffic from server B uses
LSP B (link e1-2), and so on. In the other direction, traffic from the client
back to server A uses LSP A, and so on.
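A rough sketch of that filter-based-forwarding approach on Junos (all names,
addresses and LSP bindings below are hypothetical, not taken from a tested
config):

```
firewall {
    family inet {
        filter fbf-by-server {
            term server-a {
                from {
                    source-address {
                        10.1.1.1/32;        /* server A, hypothetical */
                    }
                }
                then routing-instance via-lsp-a;
            }
            term server-b {
                from {
                    source-address {
                        10.1.1.2/32;        /* server B, hypothetical */
                    }
                }
                then routing-instance via-lsp-b;
            }
            term default {
                then accept;                /* everything else: normal lookup */
            }
        }
    }
}
routing-instances {
    via-lsp-a {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 lsp-next-hop lsp-a;
            }
        }
    }
    /* via-lsp-b is defined the same way, pointing at lsp-b */
}
```

The filter would be applied as an input filter on the server-facing interface,
and a rib-group is typically needed so the forwarding instances can resolve
their next hops. The same construction, keyed on client addresses on the
far-end router, handles the return direction.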

I have tested this approach and it works.

Regards,

William




-----Original Message-----
From: juniper-nsp-bounces at puck.nether.net
[mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Rendo
Sent: Thursday, September 15, 2005 6:03 AM
To: juniper-nsp at puck.nether.net
Subject: LSP load balancing (was Re: [c-nsp] Re: [j-nsp] load
balancing between multiple BGP links)

Hi all,

May I change the topic a little bit? I hope so :)
Basically, my problem is still very much related to BGP load balancing, but I
need to load-balance data traffic over MPLS across multiple E1 links. I use
RSVP for signalling.

Router A --------4E1---------------Router B

The best result I can get is traffic load-balanced per destination prefix in
the routing table, as seen below.
This is part of the "show route table VRF-A" output:
10.2.178.68/30     *[BGP/170] 13:54:28, localpref 100, from 10.2.178.253
                      AS path: I
                      via e1-0/0/2.0, label-switched-path wpi_to_tbs_3
                      via e1-0/0/3.0, label-switched-path wpi_to_tbs_4
                      via e1-0/0/0.0, label-switched-path wpi_to_tbs
                    > via e1-0/0/1.0, label-switched-path wpi_to_tbs_2
10.2.178.247/32    *[BGP/170] 13:54:28, localpref 100, from 10.2.178.253
                      AS path: I
                    > via e1-0/0/2.0, label-switched-path wpi_to_tbs_3
                      via e1-0/0/3.0, label-switched-path wpi_to_tbs_4
                      via e1-0/0/0.0, label-switched-path wpi_to_tbs
                      via e1-0/0/1.0, label-switched-path wpi_to_tbs_2
172.17.128.0/19    *[BGP/170] 13:54:28, MED 0, localpref 100, from 10.2.178.253
                      AS path: I
                      via e1-0/0/2.0, label-switched-path wpi_to_tbs_3
                      via e1-0/0/3.0, label-switched-path wpi_to_tbs_4
                      via e1-0/0/0.0, label-switched-path wpi_to_tbs
                    > via e1-0/0/1.0, label-switched-path wpi_to_tbs_2

and this is part of the "show route forwarding-table table VRF-A" output:
10.2.178.68/30     user     0                    indr   788     8
                                                 ulst   937     1
                                                Push 210576       e1-0/0/2.0
                                                Push 210576       e1-0/0/3.0
                                                Push 210576       e1-0/0/0.0
                                                Push 210576       e1-0/0/1.0
10.2.178.247/32    user     0                    indr   788     8
                                                 ulst   937     1
                                                Push 210576       e1-0/0/2.0
                                                Push 210576       e1-0/0/3.0
                                                Push 210576       e1-0/0/0.0
                                                Push 210576       e1-0/0/1.0
172.17.128.0/19    user     0                    indr   788     8
                                                 ulst   937     1
                                                Push 210576       e1-0/0/2.0
                                                Push 210576       e1-0/0/3.0
                                                Push 210576       e1-0/0/0.0
                                                Push 210576       e1-0/0/1.0


My current configuration is:
- I created 4 LSPs, each with a strict path over one of the E1 links
- I have already applied a per-packet load-balancing policy under forwarding-options
- I run OSPF as the IGP between the two routers
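
For reference, that setup would look roughly like this (LSP names are taken
from the route output above; addresses of the strict hops are hypothetical):

```
protocols {
    mpls {
        label-switched-path wpi_to_tbs {
            to 10.2.178.253;
            primary via-e1-0;
        }
        path via-e1-0 {
            10.0.0.2 strict;    /* far end of e1-0/0/0, hypothetical address */
        }
        /* wpi_to_tbs_2 .. wpi_to_tbs_4 are built the same way, each with a
           strict path over one of the other three E1 links */
    }
}
policy-options {
    policy-statement pplb {
        then {
            load-balance per-packet;
        }
    }
}
routing-options {
    forwarding-table {
        export pplb;
    }
}
```

Note that on most Juniper platforms "load-balance per-packet" actually hashes
per flow on IP header fields, which is exactly why traffic between one server
subnet and one client subnet can keep landing on the same LSP. Where the
hardware supports it, including layer-4 ports in the hash key (under
forwarding-options hash-key) may spread such flows better.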

I'm not happy with this, because all the servers are located in the same
subnet and the clients are aggregated in another subnet, so most traffic
still ends up on the same link.

Any idea how I can load-balance this?

thanks.

-rendo-

----- Original Message ----- 
From: "Chris Morrow" <morrowc at ops-netman.net>
To: "sin" <sin at pvs.ro>
Cc: <juniper-nsp at puck.nether.net>; <cisco-nsp at puck.nether.net>
Sent: Thursday, September 15, 2005 3:13 AM
Subject: Re: [c-nsp] Re: [j-nsp] load balancing between multiple BGP links


>
> On Wed, 14 Sep 2005, sin wrote:
>
> > Alok wrote:
> >> It isn't Juniper in my case, but I just remembered the per-packet
> >> thing when I ran into this with another vendor.
> >>
> >> However, even link bandwidth via BGP, which I believe has been there
> >> since 5.6, doesn't consider the "immediate util" at the time of
> >> transmitting the packet....
> >>
> >> Perhaps "per packet" with BGP link bandwidth would be good, in fact, I
> >> think... or perhaps, for flow-based, flow setup based on "actual util
> >> at the time of setup".... though nothing still seems to beat per
> >> packet :-) with link bandwidth, I guess...
> >>
> >
> > I know that with Cisco you can set up an ACL that matches even/odd IP
> > addresses and then use that ACL in a route map to distribute the traffic
> > across two links to another BGP speaker (which in turn can do the
> > same with the link back to you). This might be an option for you.
>
> Uhm, I'm positive another person mentioned this before, but your IGP does
> the load balancing, so get some form of IGP to tell you that the 2 links
> are available for the same destination next-hop and be done with it.
> Policy routing is just so not a solution...
>
> Example: your BGP neighbor is over 2 links. Static-route the /32 across
> both links and reset the next-hop to the neighbor IP for all inbound routes
> (this should be the default, but with multihop eBGP the neighbor can send
> you an alternate next-hop, so you should reset it for your own protection).
>
> When traffic is sent to the next-hop it'll get automagically load-shared
> across both links... in a 'per flow' way, so it's not 50/50, more like
> 45/55 to 70/30.
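
On the Juniper side, the static-route approach quoted above might look
something like this (addresses and AS numbers are made up for illustration):

```
routing-options {
    static {
        /* the eBGP neighbor's loopback, reachable over both link addresses */
        route 192.0.2.1/32 next-hop [ 10.0.0.2 10.0.1.2 ];
    }
    forwarding-table {
        export pplb;            /* a load-balance per-packet export policy,
                                   defined under policy-options */
    }
}
protocols {
    bgp {
        group ebgp-multihop {
            type external;
            multihop;
            local-address 192.0.2.2;    /* our loopback */
            peer-as 65001;
            neighbor 192.0.2.1;
        }
    }
}
```

With both static next hops installed, traffic towards the neighbor's loopback,
and thus towards everything it announces, is shared across the two links per
flow, hence the uneven 45/55 to 70/30 split mentioned above.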
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> http://puck.nether.net/mailman/listinfo/juniper-nsp
