[Outages-discussion] Azure Postmortem

Mike Christian michael.c.christian at oracle.com
Wed Sep 12 15:33:17 EDT 2018


This is actually an interesting description.  Not knowing anything about their internals, I can hypothesize a scenario:

Way back in the long long ago, an emergency shutdown involved a flush to disk, with just enough battery to accomplish that, then a clean power down.  This process wouldn’t need to consider the state of cooling infrastructure or whatever buffer was in place.

Now make that a more sophisticated process that pauses writes, flushes the async replication queues, and initiates an automatic switchover to an unaffected site.  Great stuff.  But how long does that take, and what are the implications around temperature management?
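
To make the timing question concrete, here is a back-of-the-envelope sketch in Python. Everything in it -- the function names, per-page costs, and buffer sizes -- is invented for illustration; I have no idea what Azure's actual shutdown logic looks like.

# Purely hypothetical sketch of the two shutdown models described above.
# All names and numbers are assumptions for illustration, not anything
# taken from Microsoft's writeup.

THERMAL_BUFFER_SECONDS = 600   # assumed: cooling headroom once chillers stop
BATTERY_SECONDS = 300          # assumed: UPS runtime available for shutdown work


def old_style_shutdown(dirty_pages):
    """Old model: flush to disk, then power down. Bounded, battery-sized work."""
    flush_seconds = dirty_pages * 0.001   # assumed flush cost per page
    return flush_seconds <= BATTERY_SECONDS


def new_style_shutdown(dirty_pages, replication_backlog, switchover_seconds):
    """Newer model: pause writes, drain async replication, switch sites.

    Total time now depends on replication lag and remote-site health,
    neither of which the thermal buffer knows anything about.
    """
    flush_seconds = dirty_pages * 0.001
    drain_seconds = replication_backlog * 0.002    # assumed per-entry drain cost
    total = flush_seconds + drain_seconds + switchover_seconds
    # The question raised above: does this finish before the room overheats?
    return total <= min(BATTERY_SECONDS, THERMAL_BUFFER_SECONDS)


if __name__ == "__main__":
    print(old_style_shutdown(dirty_pages=100_000))
    print(new_style_shutdown(dirty_pages=100_000,
                             replication_backlog=500_000,
                             switchover_seconds=120))

The point of the toy numbers is just that the newer sequence has more terms in it, and some of them are not under local control.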

I’ve been through similar scenarios, but have never seen actual equipment damage.  Something is certainly new here.
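
On the choice not to fail over: with asynchronous geo-replication there is always some window of acknowledged-but-unreplicated writes, so a forced failover means eating that window as data loss. A toy example, with entirely made-up numbers:

# Toy illustration (invented numbers) of the async geo-replication tradeoff
# the Microsoft statement alludes to: writes acknowledged at the primary that
# have not yet reached the secondary are lost if you fail over.

acked_at_primary = 1_000_000        # assumed: writes committed locally
replicated_to_secondary = 999_250   # assumed: replication lags by ~750 writes

if replicated_to_secondary < acked_at_primary:
    lost_on_failover = acked_at_primary - replicated_to_secondary
    print(f"Failing over now would drop {lost_on_failover} acknowledged writes")
else:
    print("Secondary is caught up; failover would be lossless")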

MikeC

Sent from my iPhone

> On Sep 12, 2018, at 11:41 AM, Aaron D. Osgood <AOsgood at Streamline-Solutions.net> wrote:
> 
> Perhaps that is “Lawyer-Speak” for “The damned place caught fire”
>  
>  
> Aaron D. Osgood
> 
> Streamline Communications L.L.C
> 
> 274 E. Eau Gallie Blvd. #332
> Indian Harbour Beach, FL 32937
> 
> TEL: 207-518-8455
> MOBILE: 207-831-5829
> GTalk: aaron.osgood
> AOsgood at Streamline-Solutions.net
> www.Streamline-Solutions.net
> 
> 
> 
> Introducing Efficiency to Business since 1986 
>  
> From: Outages-discussion [mailto:outages-discussion-bounces at outages.org] On Behalf Of Steve Mikulasik
> Sent: September 12, 2018 13:22
> To: outages-discussion at outages.org
> Subject: [Outages-discussion] Azure Postmortem
>  
> MS made a statement about what took them down; sounds like they have some facility upgrades to do: https://azure.microsoft.com/en-us/status/history/
>  
> Summary of impact: In the early morning of September 4, 2018, high energy storms hit southern Texas in the vicinity of Microsoft Azure’s South Central US region. Multiple Azure datacenters in the region saw voltage sags and swells across the utility feeds. At 08:42 UTC, lightning caused electrical activity on the utility supply, which caused significant voltage swells.  These swells triggered a portion of one Azure datacenter to transfer from utility power to generator power. Additionally, these power swells shut down the datacenter’s mechanical cooling systems despite having surge suppressors in place. Initially, the datacenter was able to maintain its operational temperatures through a load dependent thermal buffer that was designed within the cooling system. However, once this thermal buffer was depleted the datacenter temperature exceeded safe operational thresholds, and an automated shutdown of devices was initiated. This shutdown mechanism is intended to preserve infrastructure and data integrity, but in this instance, temperatures increased so quickly in parts of the datacenter that some hardware was damaged before it could shut down. A significant number of storage servers were damaged, as well as a small number of network devices and power units.
> While storms were still active in the area, onsite teams took a series of actions to prevent further damage – including transferring the rest of the datacenter to generators, thereby stabilizing the power supply. To initiate the recovery of infrastructure, the first step was to recover the Azure Software Load Balancers (SLBs) for storage scale units. SLB services are critical in the Azure networking stack, managing the routing of both customer and platform service traffic. The second step was to recover the storage servers and the data on these servers. This involved replacing failed infrastructure components, migrating customer data from the damaged servers to healthy servers, and validating that none of the recovered data was corrupted. This process took time due to the number of servers damaged, and the need to work carefully to maintain customer data integrity above all else. The decision was made to work towards recovery of data and not fail over to another datacenter, since a failover would have resulted in limited data loss due to the asynchronous nature of geo-replication.
> Despite onsite redundancies, there are scenarios in which a datacenter cooling failure can impact customer workloads in the affected datacenter. Unfortunately, this particular set of issues also caused a cascading impact to services outside of the region, as described below.
>  
>  
> _______________________________________________
> Outages-discussion mailing list
> Outages-discussion at outages.org
> https://puck.nether.net/mailman/listinfo/outages-discussion

