Quick Start Guide: SQL Server Clusters on Windows Server 2008 R2 in Azure

Apparently Windows Server 2008 R2 lives on in the cloud, as I still get calls about this sporadically.  Yes, Azure does support Windows Server 2008 R2 and older versions of SQL Server, including 2008 R2 and 2012. Of course, Always On Availability Groups weren’t introduced until SQL Server 2012, and even then you probably want to avoid Availability Groups due to some of the performance issues associated with that version.

If you find yourself needing to support older versions of SQL Server or Windows you will want to build SANless clusters based on SIOS DataKeeper as mentioned in the Azure documentation.

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-high-availability-dr

I have written many Quick Start Guides over the years, but sometimes I just want to give someone the 10,000-foot overview of the steps so they have a general idea before they sit down and roll up their sleeves to do an install. Since it is not every day that I’m dealing with Windows Server 2008 R2 clusters in Azure, I wanted to publish this 10,000-foot overview to share with my customers.

In a nutshell, here are the steps to cluster SQL Server (any version supported on Windows Server 2008 R2) in Azure.

  • Provision two cluster servers and a file share witness in the same Availability Set. This places all three quorum votes in different Fault and Update Domains.
  • There is a hotfix for Windows Server 2008 R2 clusters in Azure that enables the listener used by both AGs and FCIs: https://support.microsoft.com/en-us/help/2854082/update-enables-sql-server-availability-group-listeners-on-windows-serv
  • Install that and all other OS updates.
  • Provision the storage on each server.
  • Format NTFS and give drive letters.
  • Each cluster node needs identical storage.
  • Enable Failover Clustering and the .NET Framework 3.5 feature on each server
  • Add the servers to the domain
  • Create the basic cluster, but USE POWERSHELL and specify the cluster IP address. If you use the GUI to create the cluster, it will get confused and provision a duplicate IP address, and you will only be able to connect to the cluster from one of the nodes. If that happens, you can correct the problem by specifying a static IP address to be used by the cluster IP resource.

    Here is an example of the PowerShell usage to create the cluster:

    New-Cluster -Name cluster1 -Node sql1,sql2 -StaticAddress 10.0.0.101 -NoStorage
  • Add a File Share Witness to the cluster
  • Install DataKeeper on both cluster nodes
  • Create the DataKeeper Volume Resources and make sure they appear as Available Storage in the cluster
  • Install SQL into the cluster as you normally would in a shared storage cluster.
  • Configure the Azure ILB and run the PowerShell script to update the SQL Server cluster IP resource to listen on the ILB probe port (see the sketch just below this list).
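To give you an idea of what that last step looks like, here is a minimal PowerShell sketch. The cluster network name, IP resource name, ILB address and probe port below are illustrative placeholders, not values from any particular deployment:

# Illustrative values only – replace with the names from your own cluster (Get-ClusterNetwork, Get-ClusterResource)
$ClusterNetworkName = "Cluster Network 1"
$IPResourceName     = "SQL IP Address 1 (sqlcluster)"
$ILBIP              = "10.0.0.201"        # the static IP address assigned to the Azure Internal Load Balancer

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    "Address"    = $ILBIP
    "ProbePort"  = 59999                   # must match the health probe port configured on the ILB
    "SubnetMask" = "255.255.255.255"
    "Network"    = $ClusterNetworkName
    "EnableDhcp" = 0
}

# Take the IP resource offline and back online (or fail the role over) for the change to take effect

The probe port you choose here has to match the health probe you configure on the Azure ILB, otherwise the load balancer will never direct traffic to the active node.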

All of this is fully documented on the SIOS documentation page, Deploying DataKeeper Cluster Edition in Azure.

Let me know if this helped you or if you have any questions about high availability for SQL Server or disaster recovery in Azure, AWS or Google Cloud.


Azure Outage Post-Mortem – Part 1

The first official Post-Mortems are starting to come out of Microsoft in regards to the Azure Outage that happened last week. While this first post-mortem addresses the Azure DevOps outage specifically (previously known as Visual Studio Team Services, or VSTS), it gives us some additional insight into the breadth and depth of the outage, confirms the cause of the outage, and gives us some insight into the challenges Microsoft faced in getting things back online quickly. It also hints at some features/functionality Microsoft may consider pursuing to handle this situation better in the future.

As I mentioned in my previous article, features such as the new Availability Zones being rolled out in Azure might have minimized the impact of this outage. In the post-mortem, Microsoft confirms what I previously said.

The primary solution we are pursuing to improve handling datacenter failures is Availability Zones, and we are exploring the feasibility of asynchronous replication.

Until Availability Zones are rolled out across more regions, the only disaster recovery options you have are cross-region, hybrid-cloud or even cross-cloud asynchronous replication. Software-based #SANless clustering solutions available today enable such configurations, providing a very robust RTO and RPO even when replicating over great distances.

When you use SaaS/PaaS solutions you are really depending on the cloud service provider (CSP) to have an ironclad HA/DR solution in place. In this case, it seems as if a pretty significant deficiency was exposed, and we can only hope that it leads all CSPs to take a hard look at their SaaS/PaaS offerings and address any HA/DR gaps that might exist. Until then, it is incumbent upon the consumer to understand the risks and do what they can to mitigate the risks of extended outages, or just choose not to use PaaS/SaaS until those risks are addressed.

The post-mortem really gets to the root of the issue…what do you value more, RTO or RPO?

I fundamentally do not want to decide for customers whether or not to accept data loss. I’ve had customers tell me they would take data loss to get a large team productive again quickly, and other customers have told me they do not want any data loss and would wait on recovery for however long that took.

It will be impossible for a CSP to make that decision for a customer. I can’t see a CSP ever deciding to lose customer data, unless the original data is just completely lost and unrecoverable. In that case, a near real-time async replica is about as good as you are going to get in terms of RPO in an unexpected failure.

However, was this outage really unexpected and without warning? Modern satellite imagery and improvements in weather forecasting probably gave fair warning that there were going to be significant weather-related events in the area.

With Hurricane Florence bearing down on the Southeast US as I write this post, I certainly hope that if your data center is in the path of the hurricane you are taking proactive measures to gracefully move your workloads out of the impacted region. The benefits of a proactive disaster recovery vs. a reactive disaster recovery are numerous, including no data loss, ample time to address unexpected issues, and managing human resources such that employees can worry about taking care of their families, rather than spending the night at a keyboard trying to put the pieces back together again.

Again, enacting a proactive disaster recovery would be a hard decision for a CSP to make on behalf of all their customers, as planned migrations across regions will incur some amount of downtime. This decision will have to be put in the hands of the customer.

Hurricane Florence Satellite Image taken from the new GOES-16 Satellite, courtesy of Tropical Tidbits

So what can you do to protect your business-critical applications and data? As I discussed in my previous article, cross-region, cross-cloud or hybrid-cloud models with software-based #SANless cluster solutions go a long way toward addressing your HA/DR concerns, with an excellent RTO and RPO for cloud-based IaaS deployments. Instead of application-specific solutions, software-based, block-level volume replication solutions such as SIOS DataKeeper and SIOS Protection Suite replicate all data, providing a data protection solution for both Linux and Windows platforms.

My oldest son just started his undergrad degree in Meteorology at Rutgers University. Can you imagine a day when artificial intelligence (AI) and machine learning (ML) will be used to consume weather-related data from NOAA to trigger a planned disaster recovery migration, two days before the storm strikes? I think I just found a perfect topic for his Master’s thesis. Or better yet, have him and his smart friends at the WeatherWatcher LLC get funding for a tech startup that applies AI and ML to weather-related data to control proactive disaster recovery events.

I think we are just on the cusp of IT analytics solutions that apply advanced machine-learning technology to cut the time and effort you need to ensure delivery of your critical application services. SIOS iQ is one of the solutions leading the way in that field.

Batten down the hatches and get ready; hurricane season is just starting and we are already in for a wild ride. If you would like to discuss your HA/DR strategy, reach out to me on Twitter @daveberm.


Lightning Never Strikes Twice: Surviving the #Azure Cloud Outage

Yesterday morning I opened my Twitter feed to find that many people were impacted by an Azure outage. When I tried to access the resource page that described the outage and the resources currently impacted, even that page was unavailable. @AzureSupport was providing updates via Twitter.

The original update from @AzureSupport came in at 7:12 AM EDT


Looking back on the Twitter feed it seems as if the problem initially began an hour or two before that.


It quickly became apparent that the outage had a more widespread impact than just the SOUTH CENTRAL US region as originally reported. It seems as if services that relied on Azure Active Directory could have been impacted as well, and customers trying to provision new subscriptions were having issues.


And 24 hours later, according to the last update this morning, the problem had still not been completely resolved.



So what could you have done to minimize the impact of this outage? No one can blame Microsoft for a natural disaster such as a lightning strike. But at the end of the day if your only disaster recovery plan is to call, tweet and email Microsoft until the issue is resolved, you just received a rude awakening. IT IS UP TO YOU to ensure you have covered all the bases when it comes to your disaster recovery plan.

While the dust is still settling on exactly what was impacted and what customers could have done to minimize the downtime, here are some of my initial thoughts.

Availability Sets (Fault Domains/Update Domains) – In this scenario, even if you had built Failover Clusters or leveraged Azure Load Balancers and Availability Sets, it seems the entire region went offline, so you still would have been out of luck. Leveraging Availability Sets is still recommended, especially for planned downtime, but in this case it would not have kept you online.

Availability Zones – While not available in the SOUTH CENTRAL US region yet, it seems that the concept of Availability Zones being rolled out in Azure could have minimized the impact of the outage. Assuming the lightning strike only impacted one datacenter, the other datacenter in the other Availability Zone should have remained operational. However, the outages of other non-regional services such as Azure Active Directory (AAD) seem to have impacted multiple regions, so I don’t think Availability Zones would have isolated you completely.

Global Load Balancers, Cross-Region Failover Clusters, etc. – Whether you are building SANless clusters that cross regions or using global load balancers to spread the load across multiple regions, you may have minimized the impact of the outage in SOUTH CENTRAL US, but you may still have been susceptible to the AAD outage.

Hybrid-Cloud, Cross Cloud – About the only way you could guarantee resiliency in a cloud-wide failure scenario such as the one Azure just experienced is to have a DR plan that includes real-time replication of data to a target outside of your primary cloud provider and a plan in place to bring applications online quickly in this other location. These two locations should be entirely independent and should not rely on services from your primary location, such as AAD, being available. The DR location could be another cloud provider, in which case AWS or Google Cloud Platform seem like logical alternatives, or it could be your own datacenter, but that kind of defeats the purpose of running in the cloud in the first place.

Software as a Service – While software-as-a-service offerings such as Azure Active Directory (AAD), Azure SQL Database (database-as-a-service) or one of the many SaaS offerings from any of the cloud providers can seem enticing, you really need to plan for the worst-case scenario. Because you are trusting a business-critical application to a single vendor, you may have very little control in terms of DR options that include recovery OUTSIDE of the current cloud service provider. I don’t have any words of wisdom here other than to investigate your DR options before implementing any SaaS service, and if recovery outside of the cloud is not an option, then think long and hard before you sign up for that service. Minimally, make the business stakeholders aware that if the cloud service provider has a really bad day and that service is offline, there may be nothing you can do about it other than call and complain.

I think in the very near future you will start to hear more and more about cross-cloud availability and people leveraging solutions like SIOS DataKeeper to build robust HA and DR strategies that cross cloud providers. Cross-cloud or hybrid-cloud models are the only way to truly insulate yourself from most conceivable cloud outages.

If you were impacted by this latest outage I’d love to hear from you. Tell me what went down, how long you were down, and what you did to recover. What are you planning to do so that in the future your experience is better?


“Incomplete Communication with Cluster” with local Storage Space for SQL Server cluster

When building a SANless SQL Server cluster with SIOS DataKeeper, or when configuring Always On Availability Groups for SQL Server, you may consider striping together multiple disks in a Simple Storage Space (RAID 0) for performance. This is very commonly done in the cloud, where each instance’s storage is typically backed by hardware resiliency, so RAID 0 is not really all that risky.

For instance, I had a recent customer in AWS who wanted to max out his IOPS at 80,000, the maximum IOPS currently available to a single instance. Keep in mind, only the largest EBS-optimized instance sizes support 80,000 IOPS, so you want to make sure you know the maximum IOPS your particular instance size supports.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html

In this case we had a c5.18xlarge instance, which does support 80,000 IOPS. However, any individual EBS Provisioned IOPS volume only supports up to 32,000 IOPS. The only way to achieve 80,000 IOPS when writing to a single volume is to stripe three of these volumes together in a Simple Storage Space.

Herein lies the rub: if you try to do that in an existing cluster, things are going to go haywire pretty fast. Fellow MVP Joey D’Antoni recently blogged about the issue, and it appears to still be a problem in the Windows Server 2019 preview.

Just as Joey suggests, I always advise my customers to build out the nodes and any Storage Spaces BEFORE they start the clustering process. This makes the process go much more smoothly. It also gives the customer some time to benchmark the server’s performance before they add any replication, to ensure everything is working as expected.
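For illustration only, here is a minimal PowerShell sketch of striping three data disks into a Simple Storage Space before the cluster is created. The pool, virtual disk and volume names, along with the drive letter, are placeholders rather than anything prescribed by DataKeeper or AWS:

# Assumes three raw data disks are attached and nothing else on the server is poolable
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SQLPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
# Simple = striped (RAID 0); three columns spreads writes across all three EBS volumes
New-VirtualDisk -StoragePoolFriendlyName "SQLPool" -FriendlyName "SQLData" -ResiliencySettingName Simple -NumberOfColumns 3 -UseMaximumSize
Get-VirtualDisk -FriendlyName "SQLData" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData"

Do this on each node before you run cluster validation or install DataKeeper, and benchmark the striped volume while the server is still standalone.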

 

 


iCal events still appearing in the WordPress Upcoming Events Widget even after being deleted

I sync an iCal calendar feed to a WordPress site using the Upcoming Events Widget. I do this primarily because it is the only calendar widget available with the free version of WordPress this non-profit is using.

The issue I had is that I deleted an event last week, yet it still shows on my WordPress site through this sidebar widget. I would have thought it was a refresh issue, but edits I made to other events at the same time have already synced to the WordPress site.

This event was deleted from my iCal calendar, yet it still appears in my Upcoming Events Widget

I even tried deleting the widget and adding a new one, and the new widget also showed the event that I deleted.

I was beginning to get a bit frustrated, and of course I had band parents breathing down my neck questioning my ability to manage the band parent association website. Okay, not really, but they did point out the discrepancy on the calendar, and I assumed they just thought I was an idiot.

After a bit of Googling turned up nothing, it occurred to me that this was a recurring event. When I deleted the event, I had simply deleted that particular occurrence, not the whole series of events.

Sure enough, after I deleted the whole series of recurring events, the event disappeared from the Upcoming Events sidebar widget about an hour later. Long story short, the Upcoming Events Widget in WordPress does not handle exceptions to recurring events reliably.


Your student could be the next Doogie Howser of Cloud Computing with free training and cloud computing resources

Students with any interest in Information Technology or Computer Science are going to be joining a world dominated by Cloud Computing. And of course the major cloud service providers (CSPs) would all love to see young people embrace their cloud platforms to host the next big thing like Facebook, Instagram or Snapchat. The top three CSPs all have free offerings for students, hoping to win their hearts and minds.

But before jumping right into cloud computing, the novice student might want to start with the basic fundamentals of computer programming at one of the many free online resources, including Khan Academy.


Microsoft is offering free Azure services for students. There are two different offerings. The first is targeted at high school students ages 13+ and the second is geared towards college students 18+.


The Microsoft Azure for Students Starter Offer is for high school students who are interested in building applications in the cloud. While there are not as many free services or credits as are offered at the college level, there is certainly enough available for free for a self-starter to get some hands-on experience with cutting-edge technology. How cool would it be for your high school to start a Cloud Computing Club, or to integrate this offering into some of the IT classes students may already be taking?

Azure for Students is targeted at college-level students and has many more features available for free. Any student in computer science or information technology should definitely get some hands-on experience with these cutting-edge cloud technologies, and this is the perfect way to do it with no additional out-of-pocket expense.

A good way to get introduced to the Azure Cloud is to start with some free online training courses Microsoft delivers in partnership with Pluralsight.


AWS Educate. Not to be outdone, AWS also offers free cloud services to students and educators. These come mostly in the form of free cloud credits, which, if managed properly, can go a long way. AWS also delivers an educational program that can be combined with an AP class in Computer Science if your high school wants to participate.

 


Google Cloud Platform (GCP) also has education grants available, but these seem to be the most restrictive of the three, as they are available only to computer science majors at accredited universities.

GCP also offers training, but from what I can find there are no free training offerings. If you want some hands-on training you will have to register for classes. The plus side is that these classes all seem to be instructor-led, either online or in an actual classroom. The downside is I don’t think a lot of 13-year-olds are going to shell out money to start developing on GCP when there are free training opportunities available on AWS or Azure.

For the ambitious young student, the resources are certainly there for you to be the next Doogie Howser of Cloud Computing.


 


Help! I can’t connect to my SQL Server multi-subnet failover cluster

I get that kind of call or email from customers all the time. I have a generic response as follows…

This has everything you need to know.

They don’t go into great detail about what to do if your connection does not support multisubnetfailover=true. If your connection does NOT support that parameter, then set registerallprovidersip to false and clean up DNS. That procedure is best described here.
I figure I get this question often enough that I should flesh out my response a bit, hence this post.
In general, people just aren’t aware of how multi-subnet failover clusters work. Multi-subnet failover clustering support was added in Windows Server 2008 with the addition of “OR” logic when defining cluster resource dependencies. This allows a cluster Network Name resource to be dependent upon IP Address x.x.x.x OR IP Address y.y.y.y.
x.x.x.x would be a cluster IP resource valid in Subnet A, and y.y.y.y would be a cluster IP address valid in Subnet B. Only one address will be online at any given time: whichever address is valid for the subnet of the node the resource is currently running on.
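As a hedged illustration, that OR dependency can be set with PowerShell along these lines (the resource names and addresses below are made up; substitute the names shown in Failover Cluster Manager):

# Hypothetical resource names – one IP resource per subnet
Set-ClusterResourceDependency -Resource "SQL Network Name (sqlcluster)" `
    -Dependency "[SQL IP Address 10.0.1.10] or [SQL IP Address 10.0.2.10]"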
Microsoft SQL Server began supporting this concept with SQL Server 2012, both for failover cluster instances (FCIs) using 3rd-party SANless clustering solutions like SIOS DataKeeper and for SQL Server Always On Availability Groups.
By default, if you create a SQL Server multi-subnet failover cluster, the cluster should be configured optimally automatically, including setting up the two IP addresses, adding two A records to DNS and setting RegisterAllProvidersIP to true. However, on the client end you need to tell it that you are connecting to a multi-subnet failover cluster, otherwise the connection won’t be made.
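If you want to sanity-check that the defaults are in place, a quick sketch looks like this (the network name resource and DNS name are illustrative):

Get-ClusterResource "SQL Network Name (sqlcluster)" | Get-ClusterParameter RegisterAllProvidersIP
Resolve-DnsName sqlcluster.contoso.local   # should return both A records, one per subnet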

Configuring the client

Configuring the client is done by adding multisubnetfailover=true to the connection string. This Microsoft documentation is a great resource, but if you just search for multisubnetfailover=true you will find a lot of information about that setting.
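For example, here is a minimal connection test from PowerShell using the .NET SqlClient; the server name, port and database are placeholders:

# MultiSubnetFailover=True tells the client to attempt all registered IP addresses, so it connects quickly to whichever is active
$connString = "Server=tcp:sqlcluster,1433;Database=TestDB;Integrated Security=SSPI;MultiSubnetFailover=True"
$conn = New-Object System.Data.SqlClient.SqlConnection $connString
$conn.Open()
$conn.State    # should report 'Open' once connected to the active node
$conn.Close()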
However, not every application will support adding that to the connection string. If you find yourself in that situation you should ask your application vendor to add support for that or show you how to do it.
However, all is not lost if you find yourself in that situation. You will want to change the behavior of the cluster so that upon failover, the single A record associated with the cluster client access point is updated in DNS with the new IP address. This is in lieu of having two A records in DNS, one for each cluster IP address, which is the default behavior in a multi-subnet cluster.
This article references SharePoint; you can ignore that part, as the rest of the article describes the process you should follow quite well.
The highlights of that article are as follows…
Get-ClusterResource “[Network Name]” | Set-ClusterParameter RegisterAllProvidersIP 0
After restarting the cluster name object (basically restarting the role), the old “A” records are still in DNS; clean-up isn’t done automatically, so you will need to delete those manually.
In addition to those steps I’d advise you to reduce the HostRecordTTL as described in this article.
The highlight of that article is as follows.
PS C:\> Get-ClusterResource -Name cluster1FS | Set-ClusterParameter -Name HostRecordTTL -Value 300
With a value of 300, you could potentially be waiting up to 5 minutes for your clients to reconnect after a failover, or even longer if you have a large Active Directory infrastructure and AD replication takes some time to update all the DNS servers across your environment.
You are going to want to figure out what the optimal TTL is to facilitate quick client reconnections without overburdening your DNS servers with a bunch of DNS lookup requests.
This type of configuration is common in disaster recovery configurations where your DR site is in a different subnet. It is also very common in HA deployments in AWS because different Availability Zones are in different subnets.
Let me know if you have any questions. You can always reach me on Twitter @daveberm