[j-nsp] Qfabric

Doug Hanks dhanks at juniper.net
Thu Feb 24 12:59:11 EST 2011


This isn't designed to be placed as an aggregation PE device.  I would definitely say use an MX in this situation ;)

From: Keegan Holley [mailto:keegan.holley at sungard.com]
Sent: Thursday, February 24, 2011 9:56 AM
To: Doug Hanks
Cc: Chris Evans; Juniper-Nsp List
Subject: Re: [j-nsp] Qfabric

The problem with that is that there are only 25 of them.  I've got thousands of customers that just want 1G with sub-500M Internet and private line service, with little or no concern about latency as long as it isn't excessive.  I'm not saying there's no money to be made here, but the majority of the consumers of network equipment don't have a need or a budget for something this advanced.  Just my 2 cents.

On Thu, Feb 24, 2011 at 12:34 PM, Doug Hanks <dhanks at juniper.net> wrote:
All of my large Fortune 25 customers have a large investment in 10Gb not only in the core, but access as well.  This is because virtualization is driving the need for higher speed.  I generally see 4x10Gb connections per server chassis if not more in large virtualized environments.

If you need a 100% copper solution, we have the EX today.

From what I see on the QFX data sheet, it supports 18 ports of copper and 36 ports of 1Gb fiber today.

Doug

From: Chris Evans [mailto:chrisccnpspam2 at gmail.com]
Sent: Thursday, February 24, 2011 9:24 AM
To: Doug Hanks
Cc: Juniper-Nsp List; Stefan Fouant
Subject: Re: RE: [j-nsp] Qfabric


Yeah and that's great.  As 90% of the installs are still GigE copper, where is that offering? :)
On Feb 24, 2011 12:17 PM, "Doug Hanks" <dhanks at juniper.net> wrote:
> A lot of our customers require low latency: financial, higher education, HPC environments and utility.
>
> Juniper has taken the time to solve more than just the low latency problem. We're trying to solve larger problems, such as how you manage an entire campus or data center as one logical device that is able to scale and deliver performance and low latency.
>
> Doug
>
> -----Original Message-----
> From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Chris Evans
> Sent: Wednesday, February 23, 2011 8:55 PM
> To: Stefan Fouant
> Cc: Juniper-Nsp List
> Subject: Re: [j-nsp] Qfabric
>
> Low latency is a buzz word. Who really needs it? Very few applications
> really need it. I work in the financial industry and the only place we have
> a use case for low latency is in the investment bank context - it's like 20
> switches out of the thousands we have. Retail, treasury, card, etc. couldn't
> care less.
>
> Also keep in mind that Juniper is one of the last to join the low latency
> game. They are finally talking the game and people are buying into it.
> Everyone else has already built lower latency switches than even
> these boxes, using the same merchant silicon.
>
> All in all I sure hope Juniper gets this one right. The EX platforms still
> have a lot of catching up to do just to match Cisco and Brocade features.
> I don't care about latency; I care about the features that I need to run my
> business.
> On Feb 23, 2011 10:11 PM, "Stefan Fouant" <sfouant at shortestpathfirst.net>
> wrote:
>> Remember, a key differentiator is that TRILL solutions still require
>> forwarding table lookups on each node; as such, end-to-end latencies are
>> much higher.
>>
>> Another thing to point out is that QFabric allows exponential scaling in
>> that each device added to the fabric contributes additional switching
>> capacity, whereby we can achieve n^2 scaling benefits. It is interesting to
>> see the n-squared problem turned on its head - usually meshes are complex
>> and cumbersome - here, it only makes things better :)
>>
>> Stefan Fouant, CISSP, JNCIEx2
>> www.shortestpathfirst.net
>> GPG Key ID: 0xB4C956EC
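Stefan's two claims above (per-hop lookups accumulate latency, while a single ingress lookup does not) can be sketched with a back-of-the-envelope model. All figures below are hypothetical assumptions for illustration, not vendor-published numbers:

```python
# Illustrative latency model: per-hop forwarding lookups (TRILL-style)
# versus a single lookup at the ingress node (QFabric-style claim).
# Both constants are made-up assumptions, not measured values.

PER_HOP_LOOKUP_US = 1.5   # assumed forwarding-table lookup cost per switch (us)
WIRE_US_PER_HOP = 0.5     # assumed serialization/propagation cost per link (us)

def per_hop_lookup_latency_us(hops: int) -> float:
    """Every node in the path performs its own forwarding-table lookup."""
    return hops * (PER_HOP_LOOKUP_US + WIRE_US_PER_HOP)

def single_lookup_latency_us(hops: int) -> float:
    """One lookup at ingress; transit stages only switch, no further lookups."""
    return PER_HOP_LOOKUP_US + hops * WIRE_US_PER_HOP

for hops in (2, 3, 5):
    print(hops, per_hop_lookup_latency_us(hops), single_lookup_latency_us(hops))
```

Under these assumptions the gap between the two models grows linearly with path length, which is the intuition behind the end-to-end latency claim.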
>>
>>> -----Original Message-----
>>> From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-
>>> bounces at puck.nether.net] On Behalf Of Ben Dale
>>> Sent: Wednesday, February 23, 2011 9:41 PM
>>> To: Juniper-Nsp List
>>> Subject: Re: [j-nsp] Qfabric
>>>
>>> My understanding of the Brocade VDX is that they use their own
>>> proprietary flavour of TRILL in order to handle the management of the
>>> switches? Happy for someone to correct me on this though.
>>>
>>> As Stefan pointed out - where the TRILL-based solutions fall down is
>>> controlling oversubscription - for every customer-facing revenue port,
>>> you need uplink(s) of equal capacity on *every* switch between point A
>>> and point B, which gets a bit hairy when your customer wants 10Gb.
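Ben's oversubscription point can be put into numbers. The port counts and speeds below are made-up examples, not taken from any data sheet:

```python
# Rough oversubscription math for one leaf switch in a multi-tier fabric.
# All port counts and speeds are hypothetical examples.

ACCESS_PORTS = 48          # customer-facing (revenue) ports
ACCESS_SPEED_GBPS = 10     # each customer wants 10Gb
UPLINK_PORTS = 4
UPLINK_SPEED_GBPS = 40

downstream = ACCESS_PORTS * ACCESS_SPEED_GBPS   # 480 Gb/s of revenue capacity
upstream = UPLINK_PORTS * UPLINK_SPEED_GBPS     # 160 Gb/s of uplink capacity

oversubscription = downstream / upstream        # ratio of demand to uplink
print(f"{oversubscription:.0f}:1 oversubscribed")

# For a non-blocking path, every tier between A and B needs uplink
# capacity matching the downstream demand (ceiling division):
uplinks_needed = -(-downstream // UPLINK_SPEED_GBPS)
print(f"{uplinks_needed} x {UPLINK_SPEED_GBPS}Gb uplinks for 1:1")
```

With these example numbers the leaf is 3:1 oversubscribed, and closing that gap means tripling the uplink count at every tier, which is the cost that mounts quickly as access speeds climb.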
>>>
>>> Even on its own though, the QFX looks like a pretty sweet box, but I
>>> don't think I've ever seen a Juniper data sheet with as many roadmap
>>> asterisks ; )
>>>
>>> It'll be interesting to see if Juniper offers a half-sized QFabric down
>>> the road once they realise that not everyone wants / needs 128x 40Gb
>>> attached switches.
>>>
>>> Interesting times!
>>>
>>> On 24/02/2011, at 12:11 PM, Keegan Holley wrote:
>>>
>>> > I think Brocade released nearly the same technology a couple of months
>>> > ago in their VDX product. Cisco can't be far behind. Although, their
>>> > solution will most likely be proprietary. As far as the technology, I
>>> > think spanning-tree and the current way of doing Ethernet has not been
>>> > ideal for some time.
>>> >
>>> >
>>> > On Wed, Feb 23, 2011 at 9:04 PM, Stefan Fouant <
>>> > sfouant at shortestpathfirst.net> wrote:
>>> >
>>> >> It's more than just a competitive offering to compete with the likes
>>> >> of the Nexus switches from Cisco, and it's also quite a bit different
>>> >> from Cisco's FabricPath or other similar TRILL offerings. With
>>> >> FabricPath and TRILL we solve the problem of wasted revenue ports
>>> >> associated with complex 3-Tier architectures and blocked Spanning Tree
>>> >> ports, but you still have a forwarding table lookup taking place on
>>> >> each node along the path. With QFabric we have a set of devices which
>>> >> combine to form a singular unified fabric, all sharing a single control
>>> >> plane and managed via a single pane of glass, but more importantly
>>> >> achieving reduced latency as a result of a single forwarding table
>>> >> lookup taking place on the ingress node. With such a configuration we
>>> >> can achieve end-to-end Data Center latency on the order of 5
>>> >> microseconds.
>>> >>
>>> >> There is a lot more to it which is obviously covered in the
>>> >> whitepapers, but this is truly something which is going to revolutionize
>>> >> data centers as we know them for some time to come.
>>> >>
>>> >> Stefan Fouant, CISSP, JNCIEx2
>>> >> GPG Key ID: 0xB4C956EC
>>> >>
>>> >> Sent from my HTC EVO.
>>> >>
>>> >> ----- Reply message -----
>>> >> From: "Chris Evans" <chrisccnpspam2 at gmail.com<mailto:chrisccnpspam2 at gmail.com><mailto:chrisccnpspam2 at gmail.com<mailto:chrisccnpspam2 at gmail.com>>>
>>> >> Date: Wed, Feb 23, 2011 7:28 pm
>>> >> Subject: [j-nsp] Qfabric
>>> >> To: "Keegan Holley" <keegan.holley at sungard.com<mailto:keegan.holley at sungard.com><mailto:keegan.holley at sungard.com<mailto:keegan.holley at sungard.com>>>
>>> >> Cc: "juniper-nsp" <juniper-nsp at puck.nether.net<mailto:juniper-nsp at puck.nether.net><mailto:juniper-nsp at puck.nether.net<mailto:juniper-nsp at puck.nether.net>>>
>>> >>
>>> >>
>>> >> It's Juniper's answer to the Nexus 5k/2k solution, essentially with
>>> >> larger scalability. It has a big fabric interconnect at the core and
>>> >> some routing engines that control edge switches acting like remote
>>> >> line cards.
>>> >>
>>> >> On Feb 23, 2011 7:23 PM, "Keegan Holley" <keegan.holley at sungard.com>
>>> >> wrote:
>>> >>> Does anyone know what Qfabric is yet? After the video where Pradeep
>>> >>> Sindhu spends 1:45 talking about how they are going to change the
>>> >>> world and 0:45 talking about the technology, I gave up trying to cut
>>> >>> through the marketing buffer. It sounds like their implementation of,
>>> >>> or answer to, TRILL with some of the virtual chassis stuff you see
>>> >>> from the Nexus thrown in. Anyone else get more than that?
>>> >>> _______________________________________________
>>> >>> juniper-nsp mailing list juniper-nsp at puck.nether.net
>>> >>> https://puck.nether.net/mailman/listinfo/juniper-nsp


