[c-nsp] traffic distribution on 6748-GE-TX

Arie Vayner ariev at vayner.net
Sat Feb 24 14:40:37 EST 2007


Vince,

The 6748 module has a dual 20 Gbps connection to the switch fabric, so it is
only slightly oversubscribed: 48 Gbps of front-panel ports into 40 Gbps of
fabric, about 1.2:1. There is no oversubscription on the port ASICs
themselves, so you should not worry about that too much.

You should take into account, though, that unless you have the DFC daughter
card installed on the modules, all switching is done centrally by the PFC on
the supervisor. This means that even if traffic has to be switched between
two ports on the same ASIC/module/etc., the packet (well, its header...)
goes all the way to the PFC, where the forwarding decision is made. As a
result, it makes little real difference how you split the ports on this
card (note: this is not true for other card models, such as the 6548-GE-TX).
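As a quick sanity check (native IOS commands on the 6500; exact output
varies by release), you can confirm whether a DFC is fitted and watch the
per-channel fabric load:

```
Router# show module
! the Sub-Module section lists any DFC3 daughter cards installed
Router# show fabric utilization all
! per-slot, per-channel ingress and egress fabric utilization
```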

It is definitely a best practice to spread the risk by connecting redundant
links/services to different modules. If you have, for example, two uplinks,
connect each one to a different module.
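For example, a minimal config sketch (slot and port numbers are hypothetical;
adjust to your chassis layout) with the two core uplinks bundled into one
EtherChannel, one member per 6748:

```
interface GigabitEthernet1/1
 description Core uplink A (first 6748)
 channel-group 10 mode active
!
interface GigabitEthernet2/1
 description Core uplink B (second 6748)
 channel-group 10 mode active
!
interface Port-channel10
 description Core uplink bundle split across modules
 switchport
 switchport mode trunk
```

With the members on different modules, a single linecard failure costs you
bandwidth but not connectivity.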

Hope this helps.

Arie


On 2/24/07, vince anton <mvanton at gmail.com> wrote:
>
> Hi,
>
> I have a quick and potentially basic question about a system with
> SUP720-3BXL and 2 x 6748-GE-TX.
>
> for a number of devices (15-20) which transfer considerable (500Mbps and
> growing) traffic between themselves, does it make more sense to have these
> devices in the same 24port set (1-24 or 25-48) on a single 6748, or to
> have
> some on the first 24 port set (read 20G channel 0), and some on the 2nd 24
> port set (read 20G channel1), or is there any point distributing across
> both
> cards (read all 4 20G channels) ?
>
> similar question in a situation where I've got two links coming in from the
> core of the network used for delivery of internet bound traffic to the
> network edge (bunch of aggregation/access routers and hosted servers,
> aggregated L2-wise on the 6748 with an SVI) - does it make sense to have
> each of these upstream links on a separate card to protect somewhat against
> card failure without hurting performance as we grow ?
>
> if anyone can point to a best practice document or share what works for
> their network, that will be great.
>
>
> Thanks,
>
> anton
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>

