[Outages-discussion] [EXTERNAL] Re: [outages] Dreamhost MySQL outage?

Chapman, Brad (NBCUniversal) Brad.Chapman at nbcuni.com
Sun Nov 5 11:44:44 EST 2023


What do you make of the DSG power sharing program, with the generators being used to help the utilities?  I had literally never heard of this before, and it seems like a risky idea on paper.  Given PG&E's, uh, shall we say, less than stellar track record with safety and reliability...

Has anyone else working at a DC participated in a program like this with their utility?

—Sent from my iPhone

On Nov 5, 2023, at 8:03 AM, Ross Tajvar <ross at tajvar.io> wrote:


I also read this blog post and had a very different reaction to it, which I think can be summed up as "it's weird to make your post-mortem so much about the things that another company did wrong." Yeah Flexential undoubtedly made some mistakes/poor decisions, but ultimately those details have no bearing on Cloudflare's issues. They could have said "our data center had a power outage" and that would have been enough information to provide context for the Cloudflare parts of the story.

I suspect (and I'm just guessing here) that part of Cloudflare's thinking was to draw attention away from their own highly visible failure and toward someone else's. But regardless of the reasoning, it just seems unwise to publicly throw your vendor under the bus like that... unless you are actively trying not to do business with them anymore.

On Sun, Nov 5, 2023, 3:24 AM Chapman, Brad (NBCUniversal) via Outages-discussion <outages-discussion at outages.org> wrote:
Counter to best practices, Flexential did not inform Cloudflare that they had failed over to generator power.

Off to a good start, then...

It is also unusual that Flexential ran both the one remaining utility feed and the generators at the same time... we haven't gotten a clear answer why they ran utility power and generator power.

Yeah, there's a reason the power company tells homeowners to not improvise by backfeeding their house from a generator using a "suicide cord" when the linemen are working outside.  You're supposed to install a cutover switch, or at least turn off your house main circuit breaker.

Some of what follows is informed speculation based on the most likely series of events as well as what individual Flexential employees have shared with us unofficially.

Oh boy, this is about to get spicy...

One possible reason they may have left the utility line running is because Flexential was part of a program with PGE called DSG ... [which] allows the local utility to run a data center's generators to help supply additional power to the grid.  In exchange, the power company helps maintain the generators and supplies fuel. We have been unable to locate any record of Flexential informing us about the DSG program. We've asked if DSG was active at the time and have not received an answer.

You can't ask about something you don't know exists, but the power generation arrangement seems like one of those important things that should be disclosed to your single largest customer, who is leasing 10% of your entire facility.

At approximately 11:40 UTC, there was a ground fault on a PGE transformer at PDX-04... [and] ground faults with high voltage (12,470 volt) power lines are very bad.

That's underselling it a bit.

Fortunately ... PDX-04 also contains a bank of UPS batteries... [that] are supposedly sufficient to power the facility for approximately 10 minutes... In reality, the batteries started to fail after only 4 minutes ... and it took Flexential far longer than 10 minutes to get the generators restored.

Correct me if I'm wrong, but aren't UPS battery strings supposed to be load-tested and exercised on a regular basis?  It sounds like they were extremely worn out when they were needed most.

While we haven't gotten official confirmation, we have been told by employees that [the generators] needed to be physically accessed and manually restarted because of the way the ground fault had tripped circuits.  Second, Flexential's access control system was not powered by the battery backups, so it was offline.

That sounds objectively dumber than what happened at the Meta/Facebook datacenter outage a while ago, where the doors and badge readers were still online, but the badge credentials couldn't be validated over the network because of the BGP outage, and they weren't cached locally either.
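To make that failure mode concrete, here's a rough sketch of the kind of local-cache fallback a badge reader would need in order to ride out a network outage. This is purely illustrative (Python, with made-up names), not anything from either company's post-mortem:

    # Illustrative only: a badge check that falls back to a locally cached
    # answer when the central access-control service is unreachable.
    from typing import Optional

    local_cache: dict[str, bool] = {}  # badge_id -> last known authorization decision

    def central_auth(badge_id: str) -> Optional[bool]:
        """Ask the access-control backend over the network.
        Returns None when the backend can't be reached (the Meta scenario)."""
        return None  # stand-in for "network/control plane is down"

    def badge_allowed(badge_id: str) -> bool:
        answer = central_auth(badge_id)
        if answer is not None:
            local_cache[badge_id] = answer  # keep the cache warm while the network is up
            return answer
        # Central auth unavailable: fall back to the last known-good local answer.
        # Without this cache, the reader fails closed and nobody gets through the door.
        return local_cache.get(badge_id, False)

Same idea applies to Flexential: doors that depend on a live backend (or, in their case, utility power) for every single decision turn into locked boxes the moment that dependency goes away.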

And third, the overnight staffing at the site did not include an experienced operations or electrical expert — the overnight shift consisted of security and an unaccompanied technician who had only been on the job for a week.

:picard-facepalm:

Throughout this, Flexential never informed Cloudflare that there was any issue at the facility.  [We] attempted to contact Flexential and dispatched our local team to physically travel to the facility.

Adele: "Hello from the outsiiiiide..."

"We have a number of questions that we need answered from Flexential."

Understatement of the year.  They must be seething.

Cloudflare's report here is fairly even-handed and appears to have been fact-checked as well as possible under the circumstances, with corroborated statements from anonymous employees.

Having read about the technical stack in the document and their plans to beef up disaster recovery, I have the utmost respect for Cloudflare for quickly acknowledging and apologizing for the fact that they didn't require new services to be fully capable of active, redundant operation in the event of a catastrophic service loss at their primary datacenter, a site they had believed to be reliable and essentially infallible.

They had planned plenty of disaster exercises around losing PDX-04, but none covering a complete loss of power lasting beyond the batteries' 10-minute rating, let alone the 4 minutes the worn-out batteries actually delivered.

To quote Ricky Ricardo, Flexential has some 'splainin' to do.

-Brad



—Sent from my iPhone

On Nov 4, 2023, at 10:30 PM, Bryan Fields via Outages <outages at outages.org> wrote:

On 11/3/23 5:27 PM, Martin Hannigan via Outages wrote:
Maybe there are questions?

https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/
That has some info.
--
Bryan Fields

727-409-1194 - Voice
http://bryanfields.net
_______________________________________________
Outages mailing list
Outages at outages.org
https://puck.nether.net/mailman/listinfo/outages
_______________________________________________
Outages-discussion mailing list
Outages-discussion at outages.org
https://puck.nether.net/mailman/listinfo/outages-discussion