[Outages-discussion] [EXTERNAL] Re: [outages] Dreamhost MySQL outage?
Chapman, Brad (NBCUniversal)
Brad.Chapman at nbcuni.com
Sun Nov 5 03:24:15 EST 2023
> Counter to best practices, Flexential did not inform Cloudflare that they had failed over to generator power.
Off to a good start, then...
> It is also unusual that Flexential ran both the one remaining utility feed and the generators at the same time... we haven't gotten a clear answer why they ran utility power and generator power.
Yeah, there's a reason the power company tells homeowners not to improvise by backfeeding their house from a generator with a "suicide cord" while the linemen are working outside. You're supposed to install a transfer switch, or at the very least open your main breaker first.
> Some of what follows is informed speculation based on the most likely series of events as well as what individual Flexential employees have shared with us unofficially.
Oh boy, this is about to get spicy...
> One possible reason they may have left the utility line running is because Flexential was part of a program with PGE called DSG ... [which] allows the local utility to run a data center's generators to help supply additional power to the grid. In exchange, the power company helps maintain the generators and supplies fuel. We have been unable to locate any record of Flexential informing us about the DSG program. We've asked if DSG was active at the time and have not received an answer.
You can't ask about what you don't know exists, but it seems like a grid-support generation arrangement is exactly the kind of thing you disclose to your single largest customer, the one leasing 10% of your entire facility.
> At approximately 11:40 UTC, there was a ground fault on a PGE transformer at PDX-04... [and] ground faults with high voltage (12,470 volt) power lines are very bad.
That's underselling it a bit.
> Fortunately ... PDX-04 also contains a bank of UPS batteries... [that] are supposedly sufficient to power the facility for approximately 10 minutes... In reality, the batteries started to fail after only 4 minutes ... and it took Flexential far longer than 10 minutes to get the generators restored.
Correct me if I'm wrong, but aren't UPS battery strings supposed to be exercised and load-tested on a regular schedule, so you know they'll actually deliver their rated runtime? It sounds like these were badly degraded exactly when they were needed most.
> While we haven't gotten official confirmation, we have been told by employees that [the generators] needed to be physically accessed and manually restarted because of the way the ground fault had tripped circuits. Second, Flexential's access control system was not powered by the battery backups, so it was offline.
That sounds objectively dumber than the Meta/Facebook datacenter outage a while back, where the doors and badge readers stayed powered, but the badges couldn't be validated over the network because of the BGP withdrawal, and the credentials weren't cached locally either.
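For what it's worth, the textbook mitigation for that failure mode is to have each door controller keep a local cache of recent authorization decisions, so it can still answer "is this badge on the list" when the head end or the network is gone. A rough sketch of the idea in Python, purely hypothetical and not how either facility's system actually works:

    import time

    CACHE_TTL = 72 * 3600  # assumption: trust cached grants for up to 72 hours offline

    class BadgeReader:
        """Toy door controller that caches central authorization decisions locally."""

        def __init__(self, check_central):
            # check_central(badge_id) -> bool; may raise if the head end is unreachable
            self.check_central = check_central
            self.cache = {}  # badge_id -> (granted, timestamp)

        def allow_entry(self, badge_id):
            try:
                granted = self.check_central(badge_id)
            except ConnectionError:
                # Head end unreachable (power loss, BGP meltdown, take your pick):
                # fall back to the last known-good decision if it's fresh enough.
                cached = self.cache.get(badge_id)
                if cached and time.time() - cached[1] < CACHE_TTL:
                    return cached[0]
                return False  # fail closed for badges we've never seen before
            self.cache[badge_id] = (granted, time.time())
            return granted

Whether you fail open or fail closed on a cache miss is a policy call, but either way the reader shouldn't need a live network path, or grid power, just to decide whether to unlock a door.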
> And third, the overnight staffing at the site did not include an experienced operations or electrical expert — the overnight shift consisted of security and an unaccompanied technician who had only been on the job for a week.
:picard-facepalm:
> Throughout this, Flexential never informed Cloudflare that there was any issue at the facility. [We] attempted to contact Flexential and dispatched our local team to physically travel to the facility.
Adele: "Hello from the outsiiiiide..."
"We have a number of questions that we need answered from Flexential."
Understatement of the year. They must be seething.
Cloudflare's report here is fairly even-handed and appears to have been fact-checked as well as possible under the circumstances, with corroborated statements from anonymous employees.
Having read the technical stack described in the document and their plans to beef up disaster recovery, I have the utmost respect for Cloudflare for quickly acknowledging and apologizing that they never required new services to be capable of fully active, redundant operation in the event of a catastrophic loss of their primary datacenter, a site they had believed to be all but bulletproof.
They had planned plenty of disaster exercises around losing PDX-04, but none that assumed a complete loss of power lasting more than 10 minutes, let alone only 4 on worn-out batteries.
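If they follow through on the beefed-up disaster recovery, I'd expect part of the fix to look less like more generators and more like a hard gate in the deployment tooling: refuse to ship a control-plane service unless it is declared active in at least two independent facilities. A hypothetical sketch of that kind of check in Python; the spec format and the require_active_redundancy() helper are my invention, not anything Cloudflare has published:

    # Hypothetical pre-deploy gate: reject any service that can't survive the
    # total loss of one facility. The spec layout below is made up for illustration.
    MIN_INDEPENDENT_SITES = 2

    def require_active_redundancy(spec):
        active_sites = {d["facility"] for d in spec.get("deployments", []) if d.get("active")}
        if len(active_sites) < MIN_INDEPENDENT_SITES:
            raise ValueError(
                f"{spec['name']}: active in {sorted(active_sites) or ['no facility']}, "
                f"needs at least {MIN_INDEPENDENT_SITES} independent sites"
            )

    # This spec would be rejected: the second site is cold standby, not active.
    service = {
        "name": "analytics-control-plane",
        "deployments": [
            {"facility": "PDX-04", "active": True},
            {"facility": "PDX-02", "active": False},
        ],
    }
    require_active_redundancy(service)  # raises ValueError

Nothing fancy, but it turns "we assumed the new stuff was redundant" into something the pipeline actually checks.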
To quote Ricky Ricardo, Flexential has some 'splainin' to do.
-Brad
—Sent from my iPhone
On Nov 4, 2023, at 10:30 PM, Bryan Fields via Outages <outages at outages.org> wrote:
On 11/3/23 5:27 PM, Martin Hannigan via Outages wrote:
Maybe there are questions?
https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/
That has some info.
--
Bryan Fields
727-409-1194 - Voice
http://bryanfields.net