[j-nsp] Qfabric
Keegan Holley
keegan.holley at sungard.com
Sun Feb 27 14:21:18 EST 2011
Sent from my iPhone
On Feb 27, 2011, at 1:06 AM, Joel Jaeggli <joelja at bogus.com> wrote:
> On 2/24/11 7:37 AM, Keegan Holley wrote:
>> I agree, forwarding table lookups have been done in CAM/TCAM for years now.
>
> You'll find no TCAM in the high-end MX-style platforms, and ultimately
> the power requirements for large amounts of CAM will doom the current
> iterations of TCAM technology. 800 W-per-slot devices are bad enough;
> 2 kW-per-slot devices would be a serious pain.
My mistake. The point, though, was that forwarding-table lookups are no big deal in most environments.
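
For what it's worth, here's a rough software model of why big TCAMs burn
so much power (a sketch of the matching semantics only, not of the
actual silicon): a TCAM compares the key against every value/mask entry
at once, so power grows with table size, while an ordinary trie lookup
in SRAM/DRAM only touches a handful of entries per packet.

    # Toy model of a ternary CAM (TCAM) lookup, in Python. Hardware
    # evaluates every entry in parallel on each lookup - that's the
    # speed, and also the power bill; this loop just shows the semantics.
    from typing import List, Optional, Tuple

    def tcam_lookup(key: int,
                    entries: List[Tuple[int, int, str]]) -> Optional[str]:
        """entries are (value, mask, action); first match wins."""
        for value, mask, action in entries:  # parallel in real hardware
            if key & mask == value & mask:
                return action
        return None

    # 8-bit example: longer masks placed first give longest-prefix match.
    table = [
        (0b10110000, 0b11110000, "port-1"),  # matches 1011xxxx
        (0b10000000, 0b11000000, "port-2"),  # matches 10xxxxxx
    ]
    print(tcam_lookup(0b10110101, table))  # -> port-1
    print(tcam_lookup(0b10001111, table))  # -> port-2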
>
>> No one is really complaining about the speed of the current technology.
>
> There's a reason obsolete technology gets switched out: what was
> considered decent layer-3 forwarding performance 10-12 years ago from
> an architecture like the cat-6k gets walked all over today by cheap
> merchant silicon doing L3 cut-through forwarding.
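>
> The cut-through point is easy to quantify. A store-and-forward box has
> to buffer the whole frame before it can transmit, so it pays the full
> serialization delay at every hop; a cut-through box starts forwarding
> once it has read the header. A back-of-the-envelope comparison
> (illustrative numbers, not vendor specs):
>
>     # Per-hop serialization delay, store-and-forward vs cut-through.
>     def serialization_us(bytes_on_wire: int, gbps: float) -> float:
>         return bytes_on_wire * 8 / (gbps * 1e3)  # microseconds
>
>     FRAME = 1500   # bytes buffered by store-and-forward
>     HEADER = 64    # bytes read before a cut-through decision
>
>     for gbps in (1.0, 10.0):
>         sf = serialization_us(FRAME, gbps)
>         ct = serialization_us(HEADER, gbps)
>         print(f"{gbps:>4} GbE: store-and-forward ~{sf:.2f} us/hop, "
>               f"cut-through ~{ct:.3f} us/hop")
>     # 10 GbE: ~1.20 us vs ~0.051 us per hop, before queueing delay.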
>
I agree. I was just saying that the current mousetrap works well enough for most.
>> Also, InfiniBand, with its lower latency, would be more useful than Ethernet.
>
> but it's not subject to the same commoditization curve.
>
>>
>> On Thu, Feb 24, 2011 at 10:03 AM, Stefan Fouant <sfouant at shortestpathfirst.net> wrote:
>>
>>> Chris,
>>>
>>>
>>>
>>> No offense, but you are dead wrong on this issue. I come in contact
>>> with organizations every single day that have mission-critical data
>>> requirements, and latency is a VERY big requirement for many of them.
>>> And while this might not be your experience given the financial
>>> services organization you work with, reduced latency was a key
>>> enabler/differentiator for the NYSE and one of the main reasons they
>>> chose Juniper for their next-generation data centers.
>>>
>>>
>>>
>>> I am pretty sure there are a lot of others who would agree with me
>>> that latency is more than just a buzzword.
>>>
>>>
>>>
>>> Stefan Fouant, CISSP, JNCIEx2
>>> www.shortestpathfirst.net
>>> GPG Key ID: 0xB4C956EC
>>>
>>>
>>>
>>> From: Chris Evans [mailto:chrisccnpspam2 at gmail.com]
>>> Sent: Wednesday, February 23, 2011 11:55 PM
>>> To: Stefan Fouant
>>> Cc: Juniper-Nsp List; Ben Dale
>>> Subject: Re: [j-nsp] Qfabric
>>>
>>>
>>>
>>> Low latency is a buzzword. Who really needs it? Very few applications
>>> do. I work in the financial industry, and the only place we have a
>>> use case for low latency is in the investment-banking context - maybe
>>> 20 switches out of the thousands we have. Retail, treasury, card,
>>> etc. couldn't care less.
>>>
>>> Also keep in mind that Juniper is one of the last to enter the
>>> low-latency game. They are finally talking the talk, and people are
>>> buying into it. Everyone else has already built lower-latency
>>> switches than even these boxes, using the same merchant silicon.
>>>
>>> All in all, I sure hope Juniper gets this one right. The EX platforms
>>> still have a lot of catching up to do just to match Cisco and Brocade
>>> features. I don't care about latency; I care about the features I
>>> need to run my business.
>>>
>>> On Feb 23, 2011 10:11 PM, "Stefan Fouant" <sfouant at shortestpathfirst.net>
>>> wrote:
>>>> Remember, a key differentiator is that TRILL solutions still require
>>>> forwarding-table lookups on each node; as such, end-to-end latencies
>>>> are much higher.
>>>>
>>>> Another thing to point out is that QFabric allows quadratic scaling:
>>>> each device added to the fabric contributes additional switching
>>>> capacity, whereby we can achieve n^2 scaling benefits. It is
>>>> interesting to see the n-squared problem turned on its head -
>>>> usually meshes are complex and cumbersome; here, it only makes
>>>> things better :)
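>>>>
>>>> The n-squared arithmetic is easy to sketch (illustrative only): in a
>>>> full mesh of n edge devices, the number of distinct node pairs - the
>>>> potential any-to-any conversations - grows quadratically, while each
>>>> added device also brings its own ports and switching capacity.
>>>>
>>>>     # Node pairs in a full mesh of n edge devices: n * (n - 1) / 2.
>>>>     def node_pairs(n: int) -> int:
>>>>         return n * (n - 1) // 2
>>>>
>>>>     for n in (4, 16, 64, 128):
>>>>         print(f"{n:>3} nodes -> {node_pairs(n):>4} node pairs")
>>>>     # Adding a node grows the fabric's capacity rather than just
>>>>     # consuming it.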
>>>>
>>>> Stefan Fouant, CISSP, JNCIEx2
>>>> www.shortestpathfirst.net
>>>> GPG Key ID: 0xB4C956EC
>>>>
>>>>> -----Original Message-----
>>>>> From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-
>>>>> bounces at puck.nether.net] On Behalf Of Ben Dale
>>>>> Sent: Wednesday, February 23, 2011 9:41 PM
>>>>> To: Juniper-Nsp List
>>>>> Subject: Re: [j-nsp] Qfabric
>>>>>
>>>>> My understanding of the Brocade VDX is that it uses Brocade's own
>>>>> proprietary flavour of TRILL to handle the management of the
>>>>> switches - happy for someone to correct me on this, though.
>>>>>
>>>>> As Stefan pointed out, where the TRILL-based solutions fall down is
>>>>> in controlling oversubscription: for every customer-facing revenue
>>>>> port, you need uplink(s) of equal capacity on *every* switch between
>>>>> point A and point B, which gets a bit hairy when your customer wants
>>>>> 10GbE.
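>>>>>
>>>>> The arithmetic there is worth spelling out (port counts below are
>>>>> hypothetical): the oversubscription ratio of a tier is its total
>>>>> edge-facing capacity divided by its total uplink capacity.
>>>>>
>>>>>     # Oversubscription ratio for one switch tier.
>>>>>     def oversub(edge_ports: int, edge_gbps: float,
>>>>>                 uplinks: int, uplink_gbps: float) -> float:
>>>>>         return (edge_ports * edge_gbps) / (uplinks * uplink_gbps)
>>>>>
>>>>>     # 48 x 10GbE edge ports behind 4 x 40GbE uplinks:
>>>>>     print(f"{oversub(48, 10, 4, 40):.1f}:1")  # 3.0:1
>>>>>     # A non-blocking (1:1) design needs 480G of uplink per switch,
>>>>>     # on every switch between point A and point B.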
>>>>>
>>>>> Even on its own, though, the QFX looks like a pretty sweet box, but
>>>>> I don't think I've ever seen a Juniper data sheet with as many
>>>>> roadmap asterisks ; )
>>>>>
>>>>> It'll be interesting to see if Juniper offers a half-sized QFabric
>>>>> down the road once they realise that not everyone wants or needs
>>>>> 128 40Gb-attached switches.
>>>>>
>>>>> Interesting times!
>>>>>
>>>>> On 24/02/2011, at 12:11 PM, Keegan Holley wrote:
>>>>>
>>>>>> I think Brocade released nearly the same technology a couple of
>>>>>> months ago in their VDX product. Cisco can't be far behind,
>>>>>> although their solution will most likely be proprietary. As far as
>>>>>> the technology goes, I think spanning tree and the current way of
>>>>>> doing Ethernet have not been ideal for some time.
>>>>>>
>>>>>>
>>>>>> On Wed, Feb 23, 2011 at 9:04 PM, Stefan Fouant <sfouant at shortestpathfirst.net> wrote:
>>>>>>
>>>>>>> It's more than just a competitive offering to compete with the
>>>>>>> likes of the Nexus switches from Cisco, and it's also quite a bit
>>>>>>> different from Cisco's FabricPath or other similar TRILL
>>>>>>> offerings. With FabricPath and TRILL we solve the problem of
>>>>>>> wasted revenue ports associated with complex 3-tier architectures
>>>>>>> and blocked Spanning Tree ports, but you still have a forwarding
>>>>>>> table lookup taking place on each node along the path. With
>>>>>>> QFabric we have a set of devices which combine to form a single
>>>>>>> unified fabric, all sharing a single control plane and managed via
>>>>>>> a single pane of glass, but more importantly achieving reduced
>>>>>>> latency as a result of a single forwarding-table lookup taking
>>>>>>> place on the ingress node. With such a configuration we can
>>>>>>> achieve end-to-end data center latency on the order of 5
>>>>>>> microseconds.
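>>>>>>>
>>>>>>> A crude model shows why the lookup count matters (all figures are
>>>>>>> hypothetical placeholders, not measurements):
>>>>>>>
>>>>>>>     # End-to-end latency: per-hop transit plus however many full
>>>>>>>     # forwarding-table lookups occur along the path.
>>>>>>>     def e2e_latency_us(hops: int, lookups: int,
>>>>>>>                        lookup_us: float = 1.0,
>>>>>>>                        transit_us: float = 0.5) -> float:
>>>>>>>         return hops * transit_us + lookups * lookup_us
>>>>>>>
>>>>>>>     HOPS = 5  # e.g. access -> agg -> core -> agg -> access
>>>>>>>     print(e2e_latency_us(HOPS, lookups=HOPS))  # 7.5 us, per hop
>>>>>>>     print(e2e_latency_us(HOPS, lookups=1))     # 3.5 us, ingress only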
>>>>>>>
>>>>>>> There is a lot more to it, which is obviously covered in the
>>>>>>> whitepapers, but this is truly something that is going to
>>>>>>> revolutionize data centers as we know them for some time to come.
>>>>>>>
>>>>>>> Stefan Fouant, CISSP, JNCIEx2
>>>>>>> GPG Key ID: 0xB4C956EC
>>>>>>>
>>>>>>> Sent from my HTC EVO.
>>>>>>>
>>>>>>> ----- Reply message -----
>>>>>>> From: "Chris Evans" <chrisccnpspam2 at gmail.com>
>>>>>>> Date: Wed, Feb 23, 2011 7:28 pm
>>>>>>> Subject: [j-nsp] Qfabric
>>>>>>> To: "Keegan Holley" <keegan.holley at sungard.com>
>>>>>>> Cc: "juniper-nsp" <juniper-nsp at puck.nether.net>
>>>>>>>
>>>>>>>
>>>>>>> Essentially, it's Juniper's answer to the Nexus 5K/2K solution,
>>>>>>> with larger scalability. It has a big fabric interconnect at the
>>>>>>> core and some routing engines that control edge switches acting
>>>>>>> like remote line cards.
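>>>>>>>
>>>>>>> A minimal sketch of the "remote line card" idea (class and port
>>>>>>> names below are made up for illustration): one control plane
>>>>>>> computes routes once and pushes the result to every edge device,
>>>>>>> so each edge forwards locally but is managed as one box.
>>>>>>>
>>>>>>>     # One control plane, many edges behaving like line cards.
>>>>>>>     class ControlPlane:
>>>>>>>         def __init__(self):
>>>>>>>             self.fib, self.edges = {}, []
>>>>>>>
>>>>>>>         def register(self, edge):
>>>>>>>             self.edges.append(edge)
>>>>>>>             edge.fib = dict(self.fib)  # edge gets the full table
>>>>>>>
>>>>>>>         def install_route(self, prefix, port):
>>>>>>>             self.fib[prefix] = port
>>>>>>>             for edge in self.edges:    # one change fans out to all
>>>>>>>                 edge.fib[prefix] = port
>>>>>>>
>>>>>>>     class EdgeSwitch:
>>>>>>>         def __init__(self, name):
>>>>>>>             self.name, self.fib = name, {}
>>>>>>>
>>>>>>>         def forward(self, prefix):
>>>>>>>             return self.fib.get(prefix)  # single ingress lookup
>>>>>>>
>>>>>>>     cp = ControlPlane()
>>>>>>>     edge = EdgeSwitch("edge-1")
>>>>>>>     cp.register(edge)
>>>>>>>     cp.install_route("10.0.0.0/8", "fabric-port-7")
>>>>>>>     print(edge.forward("10.0.0.0/8"))  # -> fabric-port-7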
>>>>>>>
>>>>>>> On Feb 23, 2011 7:23 PM, "Keegan Holley" <keegan.holley at sungard.com>
>>>>>>> wrote:
>>>>>>>> Does anyone know what QFabric is yet? After the video where
>>>>>>>> Pradeep Sindhu spends 1:45 talking about how they are going to
>>>>>>>> change the world and 0:45 talking about the technology, I gave up
>>>>>>>> trying to cut through the marketing. It sounds like their
>>>>>>>> implementation of, or answer to, TRILL with some of the virtual
>>>>>>>> chassis stuff you see from the Nexus thrown in. Anyone else get
>>>>>>>> more than that?