[j-nsp] Qfabric
Ben Dale
bdale at comlinx.com.au
Wed Feb 23 21:41:04 EST 2011
My understanding of the Brocade VDX is that it uses its own proprietary flavour of TRILL to handle the management of the switches - happy for someone to correct me on this though.
As Stefan pointed out - where the TRILL-based solutions fall down is in controlling oversubscription - for every customer-facing revenue port, you need uplink(s) of equal capacity on *every* switch between point A and point B, which gets a bit hairy when your customer wants 10Gb.
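To put some rough numbers on that (purely a back-of-the-envelope sketch - the 48-port 10GbE access switch and the four 40G uplinks below are illustrative assumptions, not figures from any data sheet):

# Rough oversubscription maths for a TRILL-style fabric, assuming a
# hypothetical 48 x 10GbE access switch with four 40G uplinks.
revenue_ports = 48
port_speed_gbps = 10
uplink_capacity_gbps = 4 * 40
downstream_gbps = revenue_ports * port_speed_gbps
ratio = downstream_gbps / float(uplink_capacity_gbps)
print("Downstream: %d Gbps, uplinks: %d Gbps, oversubscription %.1f:1"
      % (downstream_gbps, uplink_capacity_gbps, ratio))
# -> Downstream: 480 Gbps, uplinks: 160 Gbps, oversubscription 3.0:1
# For true 1:1 you'd need 480G of uplink capacity on *every* switch
# in the path, not just the one the customer plugs into.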
Even on its own though, the QFX looks like a pretty sweet box, but I don't think I've ever seen a Juniper data sheet with as many roadmap asterisks ; )
It'll be interesting to see if Juniper offer a half-sized QFabric down the road once they realise that not everyone wants / needs 128 x 40Gb-attached switches.
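On Stefan's single-lookup point below, a quick back-of-the-envelope sketch of why it matters - the per-hop lookup cost here is an assumed, illustrative figure, and only the ~5 microsecond end-to-end number comes from his post:

# Per-hop forwarding lookups vs a single lookup at the ingress node.
# lookup_us is an assumed cost per forwarding-table lookup, purely
# for illustration; it is not a measured or published number.
hops_3_tier = 5          # access -> agg -> core -> agg -> access
lookup_us = 1.5
per_hop_total = hops_3_tier * lookup_us
single_lookup_total = 1 * lookup_us
print("3-tier, lookup at every hop : ~%.1f us spent on lookups" % per_hop_total)
print("QFabric-style single lookup : ~%.1f us spent on lookups" % single_lookup_total)
# The fabric still has to move the bits, but only the ingress node does a
# full forwarding-table lookup, which is how an end-to-end figure on the
# order of 5 us becomes plausible.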
Interesting times!
On 24/02/2011, at 12:11 PM, Keegan Holley wrote:
> I think Brocade released nearly the same technology a couple of months ago
> in their VDX product. Cisco can't be far behind, although their solution
> will most likely be proprietary. As far as the technology goes, I think
> spanning-tree and the current way of doing Ethernet have not been ideal
> for some time.
>
>
> On Wed, Feb 23, 2011 at 9:04 PM, Stefan Fouant <
> sfouant at shortestpathfirst.net> wrote:
>
>> It's more than just a competitive offering to compete with the likes of the
>> Nexus switches from Cisco, and it's also quite a bit different from Cisco's
>> FabricPath or other similar TRILL offerings. With FabricPath and TRILL we
>> solve the problem of wasted revenue ports associated with complex 3-Tier
>> architectures and blocked Spanning Tree ports, but you still have a
>> forwarding table lookup taking place on each node along the path. With
>> QFabric we have a set of devices which combine to form a singular unified
>> fabric, all sharing a single control plane and managed via a single pane of
>> glass, but more importantly achieving reduced latency as a result of a
>> single forwarding table lookup taking place on the ingress node. With such a
>> configuration we can achieve end-to-end Data Center latency on the order of
>> 5 microseconds.
>>
>> There is a lot more to it which is obviously covered in the whitepapers,
>> but this is truly something which is going to revolutionize data centers as
>> we know them for some time to come.
>>
>> Stefan Fouant, CISSP, JNCIEx2
>> GPG Key ID: 0xB4C956EC
>>
>> Sent from my HTC EVO.
>>
>> ----- Reply message -----
>> From: "Chris Evans" <chrisccnpspam2 at gmail.com>
>> Date: Wed, Feb 23, 2011 7:28 pm
>> Subject: [j-nsp] Qfabric
>> To: "Keegan Holley" <keegan.holley at sungard.com>
>> Cc: "juniper-nsp" <juniper-nsp at puck.nether.net>
>>
>>
>> It's Juniper's answer to the Nexus 5K/2K solution, essentially, but with
>> larger scalability.
>> It has a big fabric interconnect at the core and some routing engines that
>> control edge switches acting like remote line cards.
>>
>> On Feb 23, 2011 7:23 PM, "Keegan Holley" <keegan.holley at sungard.com>
>> wrote:
>>> Does anyone know what Qfabric is yet? After the video where Pradeep Sindhu
>>> spends 1:45 talking about how they are going to change the world and 0:45
>>> talking about the technology, I gave up trying to cut through the marketing
>>> buffer. It sounds like their implementation of, or answer to, TRILL with
>>> some of the virtual chassis stuff you see from the Nexus thrown in. Anyone
>>> else get more than that?