[c-nsp] Nexus 5596 architecture
Jiri Prochazka
jiri.prochazka at superhosting.cz
Thu Feb 9 04:52:39 EST 2012
John,
we are considering these Nexus switches as the core of a small (for now)
exchange point, so there will definitely be multiple ports talking to
one port, and vice versa. Let's say the switch would be utilized up to
90% (45 ports in the case of the 5548, 90 in the case of the 5596); half
of the active ports would handle around 8 Gbps, and the rest would be
utilized up to 3 Gbps. All of the traffic would be standard Internet
flows.
This gives a real utilization of around 240 Gbps for the 5548, or 500
Gbps for the 5596. These are of course extreme values which will not be
reached, but I need to know the limits of this platform for this use
case.
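For reference, a quick sketch of the arithmetic behind those estimates (the 45/90 active-port counts and the 8/3 Gbps split are the assumptions stated above; the exact totals come out close to the rounded 240 and 500 Gbps figures):

```python
# Aggregate-load estimate under the assumptions above: half of the
# active ports run "hot" at hi Gbps, the other half at lo Gbps.

def aggregate_gbps(active_ports, hi=8, lo=3):
    # half the active ports at hi Gbps, half at lo Gbps
    return active_ports / 2 * (hi + lo)

print(aggregate_gbps(45))  # 5548: 247.5 Gbps (~the 240 figure above)
print(aggregate_gbps(90))  # 5596: 495.0 Gbps (~the 500 figure above)
```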
Thanks,
Jiri Prochazka
Dne 31.1.2012 0:52, John Gill napsal(a):
> Hi Jiri,
> This sounds pretty straightforward; the thing you need to look at most
> closely now is the traffic flows. Being all 10G is good because you
> will be cut-through switching unless there is congestion, which causes
> you to queue (store and forward).
>
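To put rough numbers on the latency point above: a store-and-forward hop must receive the entire frame before forwarding it, so each hop adds one full serialization delay, while cut-through forwards as soon as the header is parsed. A back-of-the-envelope sketch (the frame sizes and the 10 Gbps line rate are illustrative assumptions):

```python
# Per-hop extra latency of store-and-forward vs cut-through:
# store-and-forward buffers the whole frame before sending it on,
# so each hop adds frame_bits / line_rate of serialization delay.

def store_and_forward_penalty_us(frame_bytes, line_rate_gbps=10):
    # bits divided by (bits per microsecond) -> microseconds
    return frame_bytes * 8 / (line_rate_gbps * 1000)

print(store_and_forward_penalty_us(1500))  # 1.2 us per hop for a 1500B frame
print(store_and_forward_penalty_us(64))    # 0.0512 us per hop for a 64B frame
```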
> Do you expect multiple ports to be talking to one port at the same
> time? If this is the case, your decision will need to include how
> much buffering you require to be able to handle this congestion.
>
> The common misunderstandings I see on this platform vs some other
> switches are the concept of cut-through switching and ingress queuing.
> Cut-through switching results in errors not being detected until after
> a frame is transmitted, so a single corrupted frame may increment
> error counters across multiple switch hops - the advantage is low
> latency and low jitter. With ingress queuing, you have to look for queuing
> discards on the input port so knowing your traffic patterns is very
> helpful for understanding where congestion could occur. The switch
> spreads congestion out amongst all input ports, so every port
> contributing to the congestion is responsible for handling its own
> queue. To be clear, the fabric is non-blocking, but you can always
> have scenarios where you have 2+ ports sending to 1 port. This can
> happen for a short time, but not indefinitely without queue drops.
>
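The many-to-one overload described above can be illustrated with a toy model (this is not the 5500's actual queuing implementation; the 640 KB per-port buffer and the traffic rates are made-up numbers): two ingress ports each offer 8 Gbps toward a single 10 Gbps egress, each ingress absorbs its proportional share of the excess in its own buffer, and drops begin once that buffer fills.

```python
# Toy model of ingress-buffered congestion: N senders to one egress.
# Excess traffic queues at each ingress port; once the per-port
# buffer fills, further excess is dropped. All sizes are illustrative.

def simulate(senders_gbps, egress_gbps=10.0, buffer_kb=640.0, ms=10):
    total_in = sum(senders_gbps)
    # each ingress port absorbs its proportional share of the excess
    share = [(s / total_in) * max(total_in - egress_gbps, 0.0)
             for s in senders_gbps]
    queues = [0.0] * len(senders_gbps)   # KB queued per ingress port
    drops = [0.0] * len(senders_gbps)    # KB dropped per ingress port
    for _ in range(ms):                  # step in 1 ms intervals
        for i, excess_gbps in enumerate(share):
            kb = excess_gbps * 1e6 / 8 / 1000  # Gbps -> KB per ms
            queues[i] += kb
            if queues[i] > buffer_kb:
                drops[i] += queues[i] - buffer_kb
                queues[i] = buffer_kb
    return queues, drops

queues, drops = simulate([8.0, 8.0])
print(queues)  # both ingress buffers end up pinned at 640 KB
print(drops)   # drops accumulate once each buffer has filled
```

With 16 Gbps offered to a 10 Gbps egress, each sender's 3 Gbps of excess fills its 640 KB buffer within about 2 ms, after which every additional millisecond of overload is pure loss - which is why John says 2-to-1 can happen for a short time, but not indefinitely without queue drops.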
> This 5548P architecture document certainly applies for the 5596UP,
> except we have more ports and a scaled up fabric. The "U" designates
> the ports are Universal, meaning they can be configured for Ethernet
> or Fibre Channel.
>
> http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/ps11215/white_paper_c11-622479.html
>
>
> Obviously I am telling you what cisco has already available, but I am
> interested in you getting what you think you're getting. I am not in
> sales, I am involved in support for this platform. I will hear about
> it if something is not up to expectations or goes badly.
>
> Regards,
> John Gill
> cisco
>
>
> On 1/27/12 5:13 PM, Jiri Prochazka wrote:
>> John,
>>
>> thank you for the reply. I am interested in unicast traffic only: no L3,
>> no QoS requirements, low latency is not needed, 10G ports only. The
>> switches would be used for standard Internet traffic flows.
>>
>> I am really interested in these switches, but I don't want to buy a pig
>> in a poke.
>>
>>
>>
>> Regards,
>>
>> Jiri Prochazka
>>
>> Dne 27.1.2012 22:19, John Gill napsal(a):
>>> Hi Jiri,
>>> The bandwidth to the fabric is dedicated and the expansion modules
>>> have their own forwarding engines on them, so they are no different
>>> than the base ports except that they can be swapped out.
>>>
>>> What kind of traffic are you interested in running? Unicast,
>>> multicast, QoS requirements? Do you have low-latency requirements? 10G
>>> or 1G? Do you know how much buffering you would need with your traffic
>>> flows? Or is this mostly fact finding for now? Let me know if you have
>>> any specific rumors as well.
>>>
>>> Regards,
>>> John Gill
>>> cisco
>>>
>>> On 1/26/12 8:30 PM, Jiri Prochazka wrote:
>>>> Hi,
>>>>
>>>> we are considering investment in a few Nexus 5596 switches. All Cisco
>>>> documents say it has 96 non-blocking 10G ports (for L2). Is it _really_
>>>> true? Can the switch reach a throughput of 960 Gbps regardless of the
>>>> traffic distribution? Isn't there some hidden limitation which is not
>>>> presented by Cisco? :-) I've heard some rumors about this, but nothing
>>>> particular.
>>>>
>>>> The first thing that comes to my mind is a doubt whether all three
>>>> expansion modules really do have a 160 Gbps connection to the fabric.
>>>>
>>>>
>>>>
>>>>
>>>> Thank you for comments,
>>>>
>>>>
>>>>
>>>> Jiri
>>>>
>>>>
>>>> _______________________________________________
>>>> cisco-nsp mailing list cisco-nsp at puck.nether.net
>>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>>>
>>