Service Incident, 2017-Sep-29 - RESOLVED

13:45 UTC | 2017-Sep-29

Some skyWATS accounts cannot load the web application.
We are currently investigating the issue.

14:21 UTC | 2017-Sep-29

Microsoft Azure has reported an issue: "An alert for Virtual Machines in North Europe is being investigated. More information will be provided as it is known."
We will continue to investigate the issue.

15:45 UTC | 2017-Sep-29

Azure status updated to: "Storage Related Incident - North Europe"
We will continue to investigate the issue.

17:15 UTC | 2017-Sep-29

The Azure service has not yet recovered.
We will continue to investigate the issue.

18:40 UTC | 2017-Sep-29

Most accounts are now back online. The service is still unstable, and we expect it to remain so for some time. Queued reports are starting to be processed, but it may take a few hours to clear the backlog for some accounts. Note that the web application may still show clients with pending reports during this period.
We will continue to monitor the service.

09:30 UTC | 2017-Sep-30 - RESOLVED

All accounts are now running as normal, and all queued reports should be processed.
If you experience any abnormalities with your account, please contact support.

Failure summary:

Microsoft Azure's North Europe region reported failure incidents shortly after we lost contact with our service. We began investigating immediately, but could not take corrective action until the Azure platform was back online.

Microsoft status description:

Storage Related Incident - North Europe

Summary of impact: Between 13:27 and 20:15 UTC on 29 Sep 2017, a subset of customers in North Europe may have experienced difficulties connecting to resources hosted in this region due to availability loss of a Storage scale unit. Services that depend on the impacted Storage resources in this region that may have seen impact are Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Azure Functions, Time Series Insights, Stream Analytics, HDInsight, Data Factory and Azure Scheduler.

Preliminary root cause: Engineers have determined that this was the result of a facility issue that resulted in physical node reboots as a precautionary measure. The nodes impacted were primarily from a single storage stamp. Recovery took longer than expected, and the full Public RCA will include details on why these nodes did not recover more quickly.

