[f-nsp] BigIron 15000 load balancing
FAHAD ALI KHAN
fahad.alikhan at gmail.com
Thu Oct 12 00:09:02 EDT 2006
Dear Niels
Sorry for the late reply,
Actually I have a BigIron 15000 with a JetCore Gig Copper module and a
JetCore Copper E module (48-port FastEthernet).
Initially I went for aggregated (trunk) interfaces, but it didn't work for
me; it's possible I'm missing something. This is my scenario:
Upstream --- Juniper M5 === BigIron 15000 ------ connected to other
PoPs/clients/servers on fiber and Ethernet
The downstream traffic from the upstream to my PoPs and clients is around
90 Mbps and will keep increasing. I want to terminate two BigIron
FastEthernet ports on the M5 (the M5 has a 4-port FE PIC) and run an
EtherChannel/trunk between the M5 and the BigIron to get proper load
balancing.
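For reference, the Juniper side of my aggregate looks roughly like this
(interface names and the address are placeholders, not my exact config;
no LACP, since the Foundry trunk is static):

    chassis {
        aggregated-devices {
            ethernet {
                device-count 1;
            }
        }
    }
    interfaces {
        fe-0/3/0 {
            fastether-options {
                802.3ad ae0;    /* first FE member of the bundle */
            }
        }
        fe-0/3/1 {
            fastether-options {
                802.3ad ae0;    /* second FE member */
            }
        }
        ae0 {
            unit 0 {
                family inet {
                    address 192.0.2.1/30;    /* placeholder address */
                }
            }
        }
    }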
Now what happens: the aggregate link comes up successfully, but once
traffic flows over it, it goes like this:
Juniper-M5-FE1-input = 2Mbps , Juniper-M5-FE1-output = 0
Juniper-M5-FE2-input = 0 , Juniper-M5-FE2-output = 2Mbps
The same shows on the Foundry Ethernet interfaces. It may be due to the
hashing algorithm in use, which is presumably based on destination/source
IP addresses or destination/source MAC addresses.
But this is not load balancing!
If you have ever tried this, kindly send me your sample config so I can
verify it against mine.
Thanks
Fahad
On 10/6/06, Niels Bakker <niels=foundry-nsp at bakker.net> wrote:
>
> * fahad.alikhan at gmail.com (FAHAD ALI KHAN) [Fri 06 Oct 2006, 05:50 CEST]:
> >We have installed a BigIron 15000 in our network. My question is: does
> >it support per-packet load balancing the way Cisco does, or per-packet
> >load balancing the way Juniper has it?
> >
> >As I'm seeing, it is doing network-based load balancing by default,
> >which is actually per-flow. Can anyone help me in this regard?
>
> You fail to mention some useful information, such as whether you have
> IronCore or JetCore, whether you're using 10GE or not, and whether you're
> trying to build aggregated links or do equal-cost multipathing.
>
> Per-packet load-balancing can lead to packet reordering, which is
> generally considered a bad thing.
>
> Juniper doesn't do per-packet; what is called per-packet is actually
> per-flow on the Internet Processor II and up (and you are in trouble if
> you're still trying to do something useful with an IP I).
>
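> On the Juniper side the knob in question is the forwarding-table export
> policy; a minimal sketch from memory (the policy name is arbitrary, and
> the hash-key stanza depends on your JUNOS release):
>
>     policy-options {
>         policy-statement per-flow-lb {
>             then {
>                 load-balance per-packet;  /* despite the name: per-flow on IP2 */
>             }
>         }
>     }
>     routing-options {
>         forwarding-table {
>             export per-flow-lb;
>         }
>     }
>     forwarding-options {
>         hash-key {
>             family inet {
>                 layer-3;    /* hash on src/dst IP */
>                 layer-4;    /* ...and on TCP/UDP ports */
>             }
>         }
>     }
>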
> JetCore supports switch and server trunks (aggregated links,
> Port-channel in Cisco talk). The algorithm for the former looks at the
> destination MAC address of each frame and puts it on a member port in
> the trunk based on a hash of it; a server trunk hashes source and
> destination MAC addresses, and depending on software release also on
> source and destination IP address and TCP/UDP port numbers. (Multicast
> is always sent down the primary port.)
>
> I can't think of a case where a switch trunk is preferable over a server
> trunk.
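>
> From memory, deploying a server trunk on JetCore goes roughly like this
> (slot/port numbers are only an example; the exact syntax and the port
> grouping rules vary by release, so check yours):
>
>     trunk server ethe 1/1 to 1/2
>     trunk deploy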
>
> I understand that on the RX platform the header fields of the frame are
> XORed together and the packet is sent down the member port given by the
> hash modulo the port count; thus you can get a good distribution over
> aggregated links with a number of member ports that is not a power of
> two.
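>
> (If I read that right: with three member ports, a hash of, say, 0x2D = 45
> selects member port 45 mod 3 = 0, and successive hash values spread
> evenly across ports 0, 1 and 2, so the member count doesn't have to be a
> power of two.)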
>
>
> -- Niels.