[f-nsp] BigIron 15000 load balancing

FAHAD ALI KHAN fahad.alikhan at gmail.com
Thu Oct 12 23:13:56 EDT 2006


Dear Niels

The BigIron 15000 and the M5 are connected back to back as an L3 link, and the
BigIron is running router flash code.

LACP is used to dynamically create the aggregate link between the M5 and the
BigIron.

Regarding the 2 Mbps and 90 Mbps bandwidth figures: the 90 Mbps is the real
traffic on my operational network. I have replicated the same scenario in my
test lab, and the traffic flowing through it at that time was around 2 Mbps.

The configuration on my end is as follows:

BigIron 15000

BigIron (config)# interface ethernet 4/1
BigIron (config-if-e100-4/1)# link-aggregate configure key 10000
BigIron (config-if-e100-4/1)# link-aggregate active
BigIron (config)# interface ethernet 4/2
BigIron (config-if-e100-4/2)# link-aggregate configure key 10000
BigIron (config-if-e100-4/2)# link-aggregate active
BigIron (config)# vlan 10 name aggregate-link
BigIron (config-vlan-10)# untag eth 4/1 to eth 4/2
BigIron (config-vlan-10)# router-interface ve 10
BigIron (config)# interface ve 10
BigIron (config-vif-10)# ip address 192.168.0.1 255.255.255.252

M5

chassis {
    aggregated-devices {
        ethernet {
            device-count 2;
        }
    }
}


[edit interfaces]
fe-0/0/0 {
    fastether-options {
        802.3ad ae0;
    }
}

fe-0/0/1 {
    fastether-options {
        802.3ad ae0;
    }
}

ae0 {
    aggregated-ether-options {
        lacp {
            active;   # also tested with passive at the Juniper end; results are the same
        }
    }
    unit 0 {
        family inet {
            address 192.168.0.2/30;
        }
    }
}

routing-options {
    autonomous-system *abcde*;
    forwarding-table {
        export [ load-balance ];
    }
}
policy-options {
    policy-statement load-balance {
        then {
            load-balance per-packet;
        }
    }
}
forwarding-options {
    hash-key {
        family inet {
            layer-3;
            layer-4;
        }
    }
}
hash-key {
    family inet {
        layer-3 {
            destination-address;
            protocol;
            source-address;
        }
        layer-4 {
            destination-port;
            source-port;
            type-of-service;
        }
    }
}

The aggregated link comes up and traffic flows through it, but I face the
issue mentioned earlier: there is no load balancing. If I am missing
something, kindly let me know.
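
As an aside on why the counters can look this lopsided: both the JetCore trunk
and the JunOS ae0 bundle (even with "load-balance per-packet", which on the
M5's Internet Processor II effectively balances per flow) pick one member port
per flow by hashing a few header fields, so a hash fed by only one
source/destination pair (or one MAC pair, if the trunk balances on MAC
addresses between two routers) lands everything on a single FE link. The
Python sketch below only illustrates that behaviour; the field choice, the
hash function, and the names are illustrative, not either vendor's actual
algorithm.

import hashlib

# Minimal sketch (not IronWare or JunOS code) of per-flow hashing on an
# aggregate link; the fields and the hash itself are illustrative only.
MEMBER_LINKS = ["fe-0/0/0", "fe-0/0/1"]        # the two FE members of ae0

def pick_member(src_ip, dst_ip, src_port=0, dst_port=0, proto=6):
    """Deterministically map one flow to one member link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return MEMBER_LINKS[digest % len(MEMBER_LINKS)]

# One source/destination pair (e.g. the /30 between the two routers) always
# hashes to the same member, however much traffic it carries:
print(pick_member("192.168.0.1", "192.168.0.2"))
print(pick_member("192.168.0.1", "192.168.0.2"))   # identical result

# Many distinct customer flows, by contrast, spread across both members:
for i in range(6):
    flow = (f"10.0.0.{i}", "172.16.0.1", 1024 + i, 80)
    print(flow, "->", pick_member(*flow))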

Regards

Fahad





On 10/12/06, Niels Bakker <niels=foundry-nsp at bakker.net> wrote:
>
> * fahad.alikhan at gmail.com (FAHAD ALI KHAN) [Thu 12 Oct 2006, 06:09 CEST]:
> >Actually I have a BigIron 15000 with a JetCore Gig copper module and a
> >JetCore Copper E module (48-port FastEthernet).
> >
> >Initially I went for aggregated (trunk) interfaces, but it did not work
> >for me. It is possible that I am missing something. This is my scenario:
> >
> >UpStream --- Juniper M5 === BigIron 15000 ------ Connected to Other
> >PoPs/Clients/Servers on Fiber and Ethernet
> >
> >Now the downstream traffic from the upstream to my PoPs and clients is
> >around 90 Mbps and will increase. I want to terminate two BigIron
> >FastEthernet ports on the M5 (the M5 has a 4-port FE PIC) and run an
> >EtherChannel or trunk between the M5 and the BigIron for proper load
> >balancing.
> >
> >Now what happens: the aggregate link is established successfully, but
> >when traffic flows over it, it goes like this:
> >
> >Juniper-M5-FE1-input = 2 Mbps, Juniper-M5-FE1-output = 0
> >Juniper-M5-FE2-input = 0,      Juniper-M5-FE2-output = 2 Mbps
> >
> >The same shows on the Foundry Ethernet interfaces. It is possibly due to
> >the hashing algorithm used, but that should surely be based on dest/src IP
> >address or dest/src MAC address.
> >
> >But this is not load balancing!
> >
> >If you have ever tried this, kindly send me your sample config so I
> >can verify it against mine.
>
> You've still not explained whether you are routing or switching on that
> BigIron.  And you state 2 Mbps in your drawing but claim 90 Mbps in your
> text.
>
> It may well be that you misconfigured the aggregated link to
> load-balance only on destination MAC address.  What did you configure?
> I explained the difference in my earlier mail (switch vs server trunk).
>
> "trunk server e 4/1 to 4/2" would do it on the BigIron JetCore side
> (assuming those ports connect to the M5, and assuming you're switching
> not routing).
>
> The 48-port 10/100 blade places no limits on port placement within a
> trunk and it should load-balance fine for server trunks.
>
>
>        -- Niels.
>
> --
> _______________________________________________
> foundry-nsp mailing list
> foundry-nsp at puck.nether.net
> http://puck.nether.net/mailman/listinfo/foundry-nsp
>

