[Outages-discussion] [EXTERNAL] Re: [outages] Dreamhost MySQL outage?
Charles Sprickman
spork at bway.net
Sun Nov 5 15:31:17 EST 2023
> On Nov 5, 2023, at 11:03 AM, Ross Tajvar via Outages-discussion <outages-discussion at outages.org> wrote:
>
> I also read this blog post and had a very different reaction to it, which I think can be summed up as "it's weird to make your post-mortem so much about the things that another company did wrong." Yeah Flexential undoubtedly made some mistakes/poor decisions, but ultimately those details have no bearing on Cloudflare's issues. They could have said "our data center had a power outage" and that would have been enough information to provide context for the Cloudflare parts of the story.
>
> I suspect (and I'm just guessing here) that part of the thought process on Cloudflare's part was to draw attention away from their highly visible failure and toward someone else's failure. But regardless of the reasoning, it just seems unwise to publicly throw your vendor under the bus like that... unless you are actively trying not to do business with them anymore.
Yeah, it was kind of shocking to see such a major service have a single point of failure, either due to an error on their part (the RFO includes the line "some critical systems had non-obvious dependencies that made them unavailable") or due to a design that wasn't fully redundant.
I'm on your side here - I've been dealing with smaller clients in various datacenters since the late '90s and I've seen a real decline in reliability, planning, procedures and overall seriousness about how a 24/7/365 business is run. One datacenter/ISP outfit in particular, with 40 datacenters, bought a local company out some time ago and it's just gone to hell. At night I've shown up and not been able to get in because the sole person on site, a security guard, was MIA. I've gotten stuck in a mantrap because their fingerprint readers don't like my fingers for some reason. We've had multiple faults on both our transit and inter-DC connections where a) we had to alert them to the problem (monitoring, what's that?), b) the gear in use is so old that they have to scavenge parts from other devices to fix line card issues in a 6500-series chassis, and c) we had to call them every hour for updates and basically beg to have someone more senior look at an obvious issue (no, "sending an MTR" for a layer 2 circuit is dumb and we won't do that, but thanks for showing us frontline support doesn't know anything about networking)... It goes on and on, but the bottom line is there just don't seem to be more than a handful of people in the entire organization who can fix problems, and even those feel more like enterprise-y IT people than old school ISP NOC types. As best I can tell, this is a trend, especially with "the cloud" being the default option for most companies these days.
My point, I guess, is that any good planning should take into account the decline in reliability at many of these datacenters and just assume that at least once a year you're going to see some kind of unexpected, potentially service-impacting issue at one of your locations...
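To put a rough number on that (purely illustrative, the per-site failure rate below is my assumption, not anyone's SLA), here's a quick Python sketch of why "at least one incident a year, somewhere" is the safe planning assumption once you run more than a couple of sites:

def p_any_incident(p_site, sites):
    # Chance that at least one of `sites` locations has a service-impacting
    # incident in a given year, assuming each site fails independently with
    # annual probability p_site (and independence is a generous assumption).
    return 1.0 - (1.0 - p_site) ** sites

for n in (1, 2, 3, 5, 10):
    # 0.3 = "roughly one bad event every three years per site" (my guess)
    print(f"{n:2d} sites -> {p_any_incident(0.3, n):.0%} chance of at least one incident this year")

Even with a fairly forgiving per-site rate, the numbers argue for designing as if losing a site is routine rather than exceptional.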
Charles
>
> On Sun, Nov 5, 2023, 3:24 AM Chapman, Brad (NBCUniversal) via Outages-discussion <outages-discussion at outages.org> wrote:
>> Counter to best practices, Flexential did not inform Cloudflare that they had failed over to generator power.
>
> Off to a good start, then...
>
>> It is also unusual that Flexential ran both the one remaining utility feed and the generators at the same time... we haven't gotten a clear answer why they ran utility power and generator power.
>
> Yeah, there's a reason the power company tells homeowners not to improvise by backfeeding their house from a generator with a "suicide cord" while the linemen are working outside. You're supposed to install a transfer switch, or at least turn off your house's main circuit breaker.
>
>> Some of what follows is informed speculation based on the most likely series of events as well as what individual Flexential employees have shared with us unofficially.
>
> Oh boy, this is about to get spicy...
>
>> One possible reason they may have left the utility line running is because Flexential was part of a program with PGE called DSG ... [which] allows the local utility to run a data center's generators to help supply additional power to the grid. In exchange, the power company helps maintain the generators and supplies fuel. We have been unable to locate any record of Flexential informing us about the DSG program. We've asked if DSG was active at the time and have not received an answer.
>
> You can't ask what you don't know, but it seems like power generation is one of those important things that should be told to your single largest customer who is leasing 10% of your entire facility.
>
>> At approximately 11:40 UTC, there was a ground fault on a PGE transformer at PDX-04... [and] ground faults with high voltage (12,470 volt) power lines are very bad.
>
> That's underselling it a bit.
>
>> Fortunately ... PDX-04 also contains a bank of UPS batteries... [that] are supposedly sufficient to power the facility for approximately 10 minutes... In reality, the batteries started to fail after only 4 minutes ... and it took Flexential far longer than 10 minutes to get the generators restored.
>
> Correct me if I'm wrong, but aren't UPS batteries supposed to be exercised with deep-cycling on a regular basis? It sounds like they were extremely worn out when they were needed most.
>
>> While we haven't gotten official confirmation, we have been told by employees that [the generators] needed to be physically accessed and manually restarted because of the way the ground fault had tripped circuits. Second, Flexential's access control system was not powered by the battery backups, so it was offline.
>
> That sounds objectively dumber than what happened during the Meta/Facebook datacenter outage a while back, where the doors and badge readers were still online, but the badges couldn't be validated over the network because of the BGP crash, and the credentials weren't cached locally either.
>
>> And third, the overnight staffing at the site did not include an experienced operations or electrical expert — the overnight shift consisted of security and an unaccompanied technician who had only been on the job for a week.
>
> :picard-facepalm:
>
>> Throughout this, Flexential never informed Cloudflare that there was any issue at the facility. [We] attempted to contact Flexential and dispatched our local team to physically travel to the facility.
>
> Adele: "Hello from the outsiiiiide..."
>
>> "We have a number of questions that we need answered from Flexential."
>
> Understatement of the year. They must be seething.
>
> Cloudflare's report here is fairly even-handed and appears to have been fact-checked as well as possible under the circumstances, with corroborated statements from anonymous employees.
>
> Having read through the technical stack in the document and their plans to beef up disaster recovery, I have the utmost respect for Cloudflare for quickly acknowledging and apologizing for the fact that they didn't require new services to be fully capable of active, redundant operation in the event of catastrophic service loss at their primary datacenter, a site they believed to be reliable and indefatigable.
>
> They had planned many disaster exercises around the loss of PDX-04, but none covered a complete loss of power lasting more than 10 minutes (or even 4, with shoddy batteries).
>
> To quote Ricky Ricardo, Flexential has some 'splainin' to do.
>
> -Brad
>
>
> —Sent from my iPhone
>
>> On Nov 4, 2023, at 10:30 PM, Bryan Fields via Outages <outages at outages.org> wrote:
>>
>> On 11/3/23 5:27 PM, Martin Hannigan via Outages wrote:
>>> Maybe there are questions?
>>
>> https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/
>> That has some info.
>> --
>> Bryan Fields
>>
>> 727-409-1194 - Voice
>> http://bryanfields.net
>> _______________________________________________
>> Outages mailing list
>> Outages at outages.org
>> https://puck.nether.net/mailman/listinfo/outages
> _______________________________________________
> Outages-discussion mailing list
> Outages-discussion at outages.org
> https://puck.nether.net/mailman/listinfo/outages-discussion