[Outages-discussion] Azure Postmortem

Russell Zen russellzen at outlook.com
Wed Sep 12 18:18:40 EDT 2018


Agreed. I've heard stories from DCTs of flying fuel into a data center during bad weather (such as the storms North Carolina is experiencing right now) just because the standard roads are inaccessible. That alone is still a safety risk. Better to be proactive than reactive with your failover strategy.

________________________________
From: Outages-discussion <outages-discussion-bounces at outages.org> on behalf of Mike Christian <michael.c.christian at oracle.com>
Sent: Wednesday, September 12, 2018 3:07 PM
To: surfer at mauigateway.com
Cc: outages-discussion at outages.org
Subject: Re: [Outages-discussion] Azure Postmortem

If I had a datacenter in the Carolinas (which I would have built with at least one fully equivalent site elsewhere), I'd have already switched production traffic out by now.  Unplanned failovers don't always work well, and it's mighty hard to truck in fresh fuel for the gennies through floods.

-MikeC

Scott Weeks wrote on 9/12/18 1:48 PM:




"would any cloud even have the capacity to handle
a full DC shutdown and failover? My bet is if one
of these cloud datacenters fails, you will have a
hard time getting a VM on any other cloud provider
as everyone starts DRing at the same time."


I tried to get a DR conversation started on NANOG
a while back to see how other companies were
treating their DR plans.  Got crickets.  Now, with
the hurricane hitting the US mainland, maybe it's
time?  Or continue it here.  It's interesting.

scott




--- Steve.Mikulasik at civeo.com wrote:

From: Steve Mikulasik <Steve.Mikulasik at civeo.com>
To: Mike Christian <michael.c.christian at oracle.com>
Cc: "outages-discussion at outages.org" <outages-discussion at outages.org>
Subject: Re: [Outages-discussion] Azure Postmortem
Date: Wed, 12 Sep 2018 19:37:13 +0000

I have wondered for some time if one of these mega cloud datacenters goes down hard and they actually have to fail over, would any cloud even have the capacity to handle a full DC shutdown and failover? My bet is if one of these cloud datacenters fails, you will have a hard time getting a VM on any other cloud provider as everyone starts DRing at the same time.




From: Outages-discussion <outages-discussion-bounces at outages.org> On Behalf Of Mike Christian
Sent: Wednesday, September 12, 2018 1:33 PM
To: aosgood at Streamline-Solutions.net
Cc: outages-discussion at outages.org
Subject: Re: [Outages-discussion] Azure Postmortem

This is actually an interesting description.  Not knowing anything about their internals, I can hypothesize a scenario:

Way back in the long long ago, an emergency shutdown involved a flush to disk, with just enough battery to accomplish that, then a clean power down.  This process wouldn’t need to consider the state of cooling infrastructure or whatever buffer was in place.

Now make that a more sophisticated process that pauses writes, flushes the async replication queues, and initiates an automatic switchover to an unaffected site.  Great stuff.  But how long does that take, and what are the implications around temperature management?
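A rough way to picture that race, with made-up timings and hypothetical names (this is just a sketch, not anyone's actual shutdown controller): the orderly sequence only helps if it finishes before the cooling system's thermal buffer runs out.

# Hypothetical sketch of the graceful-shutdown race described above: the
# orderly sequence (pause writes, drain replication queues, switch over)
# only wins if it completes before the thermal buffer is exhausted.
# All numbers and names here are made up for illustration.

from dataclasses import dataclass


@dataclass
class ShutdownPlan:
    pause_writes_s: float       # time to quiesce incoming writes
    drain_repl_queue_s: float   # time to flush the async replication backlog
    switchover_s: float         # time to promote the unaffected site
    power_down_s: float         # time for a clean power-down

    def total_seconds(self) -> float:
        return (self.pause_writes_s + self.drain_repl_queue_s
                + self.switchover_s + self.power_down_s)


def survives_thermal_buffer(plan: ShutdownPlan, buffer_s: float) -> bool:
    """True if the orderly shutdown completes before temperatures exceed
    safe thresholds (i.e. before the thermal buffer is depleted)."""
    return plan.total_seconds() <= buffer_s


if __name__ == "__main__":
    # Old-style emergency shutdown: flush to disk, then power off.
    legacy = ShutdownPlan(pause_writes_s=30, drain_repl_queue_s=0,
                          switchover_s=0, power_down_s=120)
    # Newer sequence that also drains replication and fails over.
    modern = ShutdownPlan(pause_writes_s=30, drain_repl_queue_s=600,
                          switchover_s=300, power_down_s=120)

    thermal_buffer_s = 900  # hypothetical: 15 minutes of cooling headroom
    for name, plan in (("legacy", legacy), ("modern", modern)):
        ok = survives_thermal_buffer(plan, thermal_buffer_s)
        print(f"{name}: needs {plan.total_seconds():.0f}s, "
              f"{'fits' if ok else 'exceeds'} a {thermal_buffer_s}s buffer")

Swap in real numbers and it becomes a quick sanity check on whether a graceful switchover even fits inside the thermal buffer.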

I’ve been through similar scenarios, but have never seen actual equipment damage.  Something is certainly new here.

MikeC
Sent from my iPhone

On Sep 12, 2018, at 11:41 AM, Aaron D. Osgood <AOsgood at Streamline-Solutions.net> wrote:
Perhaps that is “Lawyer-Speak” for “The damned place caught fire”


Aaron D. Osgood

Streamline Communications L.L.C

274 E. Eau Gallie Blvd. #332
Indian Harbour Beach, FL 32937

TEL: 207-518-8455
MOBILE: 207-831-5829
GTalk: aaron.osgood
AOsgood at Streamline-Solutions.net
http://www.Streamline-Solutions.net



Introducing Efficiency to Business since 1986

From: Outages-discussion [mailto:outages-discussion-bounces at outages.org] On Behalf Of Steve Mikulasik
Sent: September 12, 2018 13:22
To: outages-discussion at outages.org
Subject: [Outages-discussion] Azure Postmortem

MS made a statement about what took them down; sounds like they have some facility upgrades to do: https://azure.microsoft.com/en-us/status/history/

Summary of impact: In the early morning of September 4, 2018, high energy storms hit southern Texas in the vicinity of Microsoft Azure’s South Central US region. Multiple Azure datacenters in the region saw voltage sags and swells across the utility feeds. At 08:42 UTC, lightning caused electrical activity on the utility supply, which caused significant voltage swells.  These swells triggered a portion of one Azure datacenter to transfer from utility power to generator power. Additionally, these power swells shut down the datacenter’s mechanical cooling systems despite having surge suppressors in place. Initially, the datacenter was able to maintain its operational temperatures through a load-dependent thermal buffer that was designed within the cooling system. However, once this thermal buffer was depleted the datacenter temperature exceeded safe operational thresholds, and an automated shutdown of devices was initiated. This shutdown mechanism is intended to preserve infrastructure and data integrity, but in this instance, temperatures increased so quickly in parts of the datacenter that some hardware was damaged before it could shut down. A significant number of storage servers were damaged, as well as a small number of network devices and power units.
While storms were still active in the area, onsite teams took a series of actions to prevent further damage – including transferring the rest of the datacenter to generators thereby stabilizing the power supply. To initiate the recovery of infrastructure, the first step was to recover the Azure Software Load Balancers (SLBs) for storage scale units. SLB services are critical in the Azure networking stack, managing the routing of both customer and platform service traffic. The second step was to recover the storage servers and the data on these servers. This involved replacing failed infrastructure components, migrating customer data from the damaged servers to healthy servers, and validating that none of the recovered data was corrupted. This process took time due to the number of servers damaged, and the need to work carefully to maintain customer data integrity above all else. The decision was made to work towards recovery of data and not fail over to another datacenter, since a fail over would have resulted in limited data loss due to the asynchronous nature of geo replication.
Despite onsite redundancies, there are scenarios in which a datacenter cooling failure can impact customer workloads in the affected datacenter. Unfortunately, this particular set of issues also caused a cascading impact to services outside of the region, as described below.
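To make the geo-replication point concrete, here is a minimal sketch (an illustration with hypothetical names, not Azure's storage code) of why failing over an asynchronously replicated store loses whatever has not yet shipped to the secondary, which is the data-loss tradeoff the postmortem cites for choosing recovery over failover.

# Minimal sketch of asynchronous replication: writes are acknowledged by the
# primary before they reach the secondary, so a failover forfeits anything
# still sitting in the replication queue. Names and numbers are hypothetical.

from collections import deque


class AsyncReplicatedStore:
    def __init__(self):
        self.primary = {}          # committed on the primary site
        self.secondary = {}        # committed on the remote site
        self.repl_queue = deque()  # acknowledged but not yet shipped

    def write(self, key, value):
        # Acknowledged as soon as the primary commits; replication is async.
        self.primary[key] = value
        self.repl_queue.append((key, value))

    def replicate_one(self):
        # Ship one queued write to the secondary (models replication lag).
        if self.repl_queue:
            key, value = self.repl_queue.popleft()
            self.secondary[key] = value

    def fail_over(self):
        # Primary is lost; whatever is still queued never arrives.
        lost = dict(self.repl_queue)
        self.repl_queue.clear()
        return lost


if __name__ == "__main__":
    store = AsyncReplicatedStore()
    for i in range(5):
        store.write("blob-%d" % i, "v%d" % i)
    store.replicate_one()          # only blob-0 crosses before the outage
    lost = store.fail_over()
    print("writes lost on failover:", sorted(lost))  # blob-1 .. blob-4

The replication lag is exactly the recovery point objective: the longer the queue at the moment of failover, the more acknowledged writes are gone, which is why recovering the damaged servers in place can be the safer call.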



_______________________________________________
Outages-discussion mailing list
Outages-discussion at outages.org
https://puck.nether.net/mailman/listinfo/outages-discussion


_______________________________________________
Outages-discussion mailing list
Outages-discussion at outages.org<mailto:Outages-discussion at outages.org>
https://puck.nether.net/mailman/listinfo/outages-discussion


_______________________________________________
Outages-discussion mailing list
Outages-discussion at outages.org<mailto:Outages-discussion at outages.org>
https://puck.nether.net/mailman/listinfo/outages-discussion


--
Sent from Postbox <https://www.postbox-inc.com>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://puck.nether.net/pipermail/outages-discussion/attachments/20180912/9a3c70b9/attachment-0001.html>

