TICK TOCK…6 MONTHS UNTIL SQL SERVER 2008/2008 R2 SUPPORT EXPIRES UNLESS YOU TAKE ACTION

If you are still running SQL Server 2008/2008 R2, you have probably heard by now that extended support ends on July 9, 2019. However, recognizing that a significant number of customers on this platform will not be able to upgrade to a newer version of SQL Server before that deadline, Microsoft has offered two options that provide extended security updates for an additional three years.

The first option requires the annual purchase of “Extended Security Updates”. Extended Security Updates cost 75% of the full license cost annually and also require that the customer be on active Software Assurance, which is typically 25% of the license cost annually. So effectively, to receive Extended Security Updates you are paying for new SQL Server licenses every year for three years, or until you migrate off SQL Server 2008/2008 R2.
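
To put rough numbers on it: assuming a hypothetical $100,000 annual full license cost, Extended Security Updates would run about $75,000 per year plus roughly $25,000 per year for Software Assurance, about $100,000 per year in total, effectively full license price for each of the three years.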

However, there is a second option. Microsoft has announced that if you move your SQL Server 2008/2008 R2 instances to Azure, you will receive the Extended Security Updates at no additional charge. There are, of course, the hourly infrastructure charges you will incur in Azure, plus either the cost of pay-as-you-go SQL Server instances or the Software Assurance charges if you want to bring your existing SQL Server licenses to Azure. But that cost includes the added benefit of running in a state-of-the-art cloud environment, which opens up opportunities for enhanced performance and HA/DR scenarios that you may not have had available on premises.

Azure offers many different options in terms of CPU, memory, and storage configurations. If you were already looking at a server or storage upgrade, or your existing on-premises infrastructure is approaching a refresh cycle, now is the perfect time to dip your toes into the Azure cloud and upgrade your performance and availability while extending the life of your SQL Server 2008/2008 R2 deployment.

In terms of high availability and disaster recovery configurations, Azure offers up to a 99.99% SLA. To qualify for the SLA you must leverage the infrastructure appropriately, and even then the SLA only covers “dial tone” to the instance. It is up to you to ensure SQL Server itself is highly available, which is traditionally done by building a SQL Server Failover Cluster Instance (FCI). Azure has the infrastructure in place to support a SQL Server FCI, but due to the lack of cluster-aware shared storage in the cloud, you will need to use SIOS DataKeeper to build the FCI.

SIOS DataKeeper takes the place of the shared storage normally required by a SQL Server FCI, instead allowing you to leverage any NTFS-formatted volumes attached to each instance. SIOS keeps the volumes replicated between the instances and presents the storage to the cluster as a resource called a DataKeeper Volume. As far as the cluster is concerned, the DataKeeper Volume looks like a shared disk, but instead of controlling SCSI reservations (disk locking), it controls the mirror direction, ensuring writes occur on the active server and are synchronously or asynchronously replicated to the other cluster nodes. The end-user experience is exactly the same as a traditional shared storage cluster, but under the covers the cluster is leveraging locally attached storage instead of shared storage.
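
To make that concrete, here is a minimal sketch of how a DataKeeper Volume can be registered with the cluster using the standard FailoverClusters PowerShell module. The resource, group, and volume names are placeholders, the VolumeLetter parameter name is my assumption, and the DataKeeper mirror for the volume is assumed to already exist:

    # Minimal sketch: register an existing DataKeeper mirror as a cluster
    # resource and make SQL Server depend on it. All names are placeholders.
    Import-Module FailoverClusters

    # Create the DataKeeper Volume resource inside the SQL Server role
    Add-ClusterResource -Name "DataKeeper Volume E" `
        -ResourceType "DataKeeper Volume" -Group "SQL Server (MSSQLSERVER)"

    # Point the resource at the replicated volume it manages
    # (VolumeLetter is assumed to be the parameter name; verify on your system)
    Get-ClusterResource "DataKeeper Volume E" |
        Set-ClusterParameter -Name VolumeLetter -Value "E"

    # SQL Server should not come online until the volume is available
    Add-ClusterResourceDependency -Resource "SQL Server" `
        -Provider "DataKeeper Volume E"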

In Azure your cluster nodes can run in different racks (Fault Domains), data centers (Availability Zones), or even different geographic regions. SIOS DataKeeper supports all three options: Fault Domain, Availability Zone, or cross-region replication, covering both HA and DR requirements. Similar configurations are also possible in AWS and Google Cloud.

[Figure: Typical 2-node SQL Server FCI configuration in Azure with SIOS DataKeeper]

With Azure Site Recovery (ASR) you can replicate standalone or clustered instances of SQL Server between region pairs, without the headache and expense of managing your own disaster recovery site. And of course SQL Server seldom lives alone, so when you move your SQL Server instance to Azure you will probably want to move your application servers there as well, to take advantage of the same performance and availability upgrades. Combining SIOS DataKeeper for HA and ASR for DR provides a cost-effective HA and DR strategy that would have been impossible, or extremely expensive, to implement on premises with SAN replication and your own DR site.

[Figure: Common configuration leveraging SIOS DataKeeper for HA and Azure Site Recovery for DR]

While it only takes a few minutes to spin up a SQL Server instance in Azure, I wouldn’t wait until the last minute to do your migration. Take the next few months to become familiar with Azure, start doing some testing, and plan to migrate your workloads well before the July 9, 2019 expiration date. Running SQL Server after that date leaves you susceptible to any new security threats and puts you out of compliance. Your boss, and more importantly your customers, will be glad to know their data is still secure, available, and in compliance once you migrate your workload to Azure.


How to Cluster MaxDB on Windows in the Cloud #Azure #AWS #GCP #SAP

Recently I have had a number of customers looking for a high availability solution for MaxDB on Windows in the cloud. Some customers have been in Azure and some in AWS. But regardless of the cloud platform, they all eventually find the post in the SAP Community WIKI that describes the process.

https://wiki.scn.sap.com/wiki/display/MaxDB/HowTo+-+Embed+SAP+MaxDB+in+MSCS

The challenge with this post in a cloud environment is that there is no shared storage (SAN) available in Azure, AWS, or GCP that allows you to build a traditional shared storage cluster. The beauty of HA in the cloud is that cluster nodes typically reside miles apart from each other, in different data centers, aka availability zones (AZs). So even if shared storage were available, it wouldn’t make a lot of sense, since it would have to reside in a single AZ, defeating the purpose of HA altogether.

However, there is an answer. SIOS DataKeeper, a SANless clustering solution from SIOS Technology, allows locally attached storage to be used in a Windows Server Failover Cluster (WSFC), eliminating the need for a SAN. Instead, SIOS keeps the locally attached disks in sync using synchronous block-level replication and presents this storage to WSFC as a clustered disk resource called a DataKeeper Volume.

[Figure: Typical 2-node WSFC across Availability Zones with a 3rd node in a different Region]

As far as the cluster is concerned, a DataKeeper Volume cluster resource looks like a shared disk, but instead of controlling disk locking (SCSI reservations), it controls the mirror direction. So in every sense of the word it is still a true WSFC, except it uses locally attached storage instead of shared storage. The locally attached storage can be anything from an AWS EBS volume to an Azure premium disk, or even a local Storage Space with multiple disks striped together. As long as Windows sees an NTFS-formatted volume with a drive letter, and the volume size is the same on each instance, it can be used in the cluster.
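
As an aside, if you go the Storage Spaces route, a striped volume can be built from multiple attached data disks with a few standard Storage Spaces cmdlets. A rough sketch, with pool, disk, and label names purely illustrative:

    # Sketch: stripe all poolable data disks into one NTFS volume with
    # Storage Spaces. Pool, disk, and label names are illustrative.
    $disks = Get-PhysicalDisk -CanPool $true

    New-StoragePool -FriendlyName "DataPool" `
        -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
        -PhysicalDisks $disks

    # Simple resiliency = striping; one column per disk
    New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" `
        -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count

    Get-VirtualDisk -FriendlyName "DataDisk" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter E -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"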

This type of cluster is commonly known as a SANless cluster and has been around for many years, enabling geo-clusters and clusters where shared storage was not available. Database admins also love it because it enables them to use local high-speed storage devices such as PCIe flash or SSD drives, yet still use WSFC for high availability.

SIOS also supports asynchronous replication, so if you want to add a node in a different geographic location for disaster recovery, you can build a 3-node cluster with two nodes in the same region but different fault domains and a third node in an entirely different region, or even back on-prem. Or, if you are in Azure, you can leverage Azure Site Recovery (ASR) for disaster recovery, as SIOS DataKeeper is compatible with ASR.

Both WSFC and SIOS DataKeeper are very dependent upon IP addresses staying the same, so for ASR configurations you will want to make sure you retain your IP address upon failover, as described here:

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-retain-ip-azure-vm-failover

SIOS is no stranger to high availability and disaster recovery for SAP. The SIOS Protection Suite for Linux is an SAP-certified HA solution for SAP and SAP HANA, and SIOS DataKeeper is the preferred HA/DR solution for SAP ASCS on Windows in cloud environments. Providing an HA/DR solution for MaxDB on Azure further solidifies SIOS as the SAP high availability experts.

If you have questions about high availability for SAP, HANA, MaxDB, or SQL Server in Azure, AWS, GCP, or any other platform, leave me a comment or reach me on Twitter @daveberm.


Moving a Google Form Between Google Domains

If you are anything like me, you might have a few different Google accounts that you work with on a regular basis. I ran into an issue recently where I spent a fair amount of time creating a Google Form, only to realize I had done this while logged in with my personal account rather than my work account. I didn’t really want to redo the work I had done, but when I searched for how to move a form between accounts I didn’t come up with anything that addressed my situation.

It’s not hard to do, but I figured I’d write it down just in case it happens to you. I stumbled upon the fix just by trying a few things. Assuming this is a new form with no data, all you have to do is the following:

  1. Add your second Google account as a Collaborator on the form
  2. Log in to your second Google account, open the form, and choose “Make a copy”


That’s it, now you have a copy of the form in your second Google account. Of course, if you had already collected some data with the first form, you would want to copy that Sheet into your second Google account as well and attach the copied form to that copy of the data. Be sure to delete the old form so you don’t accidentally use it.


Email Alerts with SIOS DataKeeper

Over the past few weeks I wrote a three-part series on how to configure email alerts based on Perfmon counters, System Event Log entries, and a specific Windows service start or stop event. While these guides are relevant to any environment, all of my examples were geared toward monitoring SIOS DataKeeper and included some specific customer requests, including monitoring the SIOS DataKeeper service and being alerted should the RPO exceed 5 seconds. I also included monitoring of the basic DataKeeper events that you would want to know about.
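
The core of the Perfmon-based alert boils down to a few lines of PowerShell. This is a condensed sketch of the idea; the counter path is a hypothetical stand-in (check the actual DataKeeper counter names on your system) and the SMTP details are placeholders:

    # Sketch: poll a counter and mail an alert if RPO exceeds 5 seconds.
    # The counter path below is a hypothetical stand-in; verify the real
    # DataKeeper counter set name in Perfmon before using it.
    $counter   = '\SIOS Data Replication(*)\RPO'
    $threshold = 5

    $breaches = (Get-Counter -Counter $counter).CounterSamples |
        Where-Object { $_.CookedValue -gt $threshold }

    if ($breaches) {
        Send-MailMessage -From 'alerts@example.com' -To 'dba@example.com' `
            -Subject "DataKeeper RPO exceeded $threshold seconds" `
            -Body ($breaches | Out-String) -SmtpServer 'smtp.example.com'
    }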

This video shows some of this alerting in action.


Moving SQL Server 2008 and 2008 R2 clusters to #Azure for Extended Support

Earlier this year Microsoft announced extended support for SQL Server 2008 and 2008 R2 at no additional cost. The catch, however, is that you must migrate your SQL Server installation to Azure to take advantage of the extended support. For all the details, check out https://www.microsoft.com/en-us/sql-server/sql-server-2008. If you choose not to move, your extended support ends on July 9, 2019, just about nine months from now.


Chances are, if you are still running SQL Server 2008 R2, it’s simply because you never upgraded your application, so newer versions of SQL Server are not supported. Or you simply decided not to fix what isn’t broken. Regardless of the reason, you have just bought yourself another three years of support, if you migrate to Azure.

Now, migrating workloads to Azure using Azure Site Recovery is a pretty well-documented procedure, so that process should be fairly seamless for your standalone instances of SQL Server.

But what about those clustered instances of SQL Server? You certainly don’t want to give up availability when you move to Azure. Part of the beauty of Azure is that it has infrastructure you can only dream of. However, it is incumbent upon the user to configure their applications to take full advantage of that infrastructure to ensure that deployments are highly available.

With SQL Server 2008 and 2008 R2, high availability commonly means SQL Server Failover Clustering on either Windows Server 2008 R2 or Windows Server 2012 R2. If you are new to Azure, you will quickly discover that there is no native option that supports shared storage clusters. Instead, you will need to look at a SANless cluster solution such as SIOS DataKeeper. Microsoft lists SIOS DataKeeper as the HA solution for SQL Server Failover Clustering in their documentation.

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-high-availability-dr


To facilitate a simple migration of your existing SQL Server 2008 or 2008 R2 cluster to Azure, here are the high-level steps you will need to take.

  • Replace the Physical Disk resource in your existing on-premises SQL Server cluster with a DataKeeper Volume resource. Do the same for MSDTC resources if you use MSDTC.
  • Remove your Disk Witness and replace it with a File Share Witness.
  • Use Azure Site Recovery to replicate your cluster nodes into Azure, making sure each replicated node resides in a different Fault Domain or a different Availability Zone.
  • Recover your replicated cluster nodes in Azure.
  • Replace the File Share Witness with a file share hosted in Azure.
  • Configure the Internal Load Balancer (ILB) in Azure for client redirection; this includes running a PowerShell script on the local nodes to update the SQL Server cluster IP resource to listen for the ILB probe (see the sketch after this list).
  • Assuming the IP addresses and subnet of the SQL Server cluster instances changed as part of this migration, you will also need to do some cleanup of the cluster IP address and the DataKeeper job endpoints to reflect the new addresses.
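
For reference, the probe-port update in the second-to-last step follows the pattern in Microsoft’s documentation and looks roughly like this; the network name, resource name, and IP address are placeholders for your own values:

    # Point the SQL Server cluster IP at the ILB front-end IP and have it
    # answer the ILB health probe. All names and addresses are placeholders.
    Import-Module FailoverClusters

    $ClusterNetworkName = "Cluster Network 1"
    $IPResourceName     = "SQL IP Address 1 (sqlcluster)"
    $ILBIP              = "10.0.0.10"    # front-end IP of the ILB

    Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
        Address    = $ILBIP
        ProbePort  = 59999               # must match the ILB health probe port
        SubnetMask = "255.255.255.255"
        Network    = $ClusterNetworkName
        EnableDhcp = 0
    }

After the parameter change, take the cluster IP resource offline and bring it back online for the new probe setting to take effect.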

I know I left out a lot of the details, but if you find yourself in the position of having to do a lift-and-shift of SQL Server to Azure, or any cloud for that matter, I’d be glad to get on the phone with you to answer any questions you may have. Keep in mind, the same steps apply to any version of SQL Server that you plan to migrate to Azure.


#Ignite2018 Session: Ensure application availability with cloud-based disaster recovery, Azure Site Recovery #SAP #BusinessContinuity

I’m a big fan of Azure Site Recovery for disaster recovery and was glad to attend the Ignite session today, presented by Rochak Mittal and Ashish Gangwar:

BRK3304 – Architecting mission-critical, high-performance SAP workloads on Azure

In one of the architecture slides they showed how an entire SAP deployment could be protected by Azure Site Recovery (ASR) and recovered in the event of a disaster in just a few minutes. Using Azure Recovery Plans gives you explicit control over recovery, including creating dependencies on resources as well as invoking scripts within a VM to help facilitate the complete recovery.
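
For example, a test failover of a recovery plan can be driven from PowerShell. A sketch using the Az.RecoveryServices module, with the vault, plan, and network values as placeholders:

    # Sketch: run a test failover of an ASR recovery plan. Vault, plan,
    # and network values are placeholders for your own deployment.
    Connect-AzAccount

    $vault = Get-AzRecoveryServicesVault -Name "MyASRVault"
    Set-AzRecoveryServicesAsrVaultContext -Vault $vault

    $plan = Get-AzRecoveryServicesAsrRecoveryPlan -Name "SAP-RecoveryPlan"

    # A test failover recovers into an isolated network, leaving production untouched
    Start-AzRecoveryServicesAsrTestFailoverJob -RecoveryPlan $plan `
        -Direction PrimaryToRecovery `
        -AzureVMNetworkId "/subscriptions/<sub-id>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet"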

It seems like yesterday, but it was back in May of 2014 when I first started assisting Microsoft with providing an HA solution for SAP ASCS in Azure. That solution involves using DataKeeper to build a SANless cluster for ASCS, and it still stands today as the only HA solution that also works with ASR for disaster recovery configurations such as the one shown in this demo at Ignite.

[Figure: Shared disks in Azure with SIOS DataKeeper]

If you need help planning your highly available SAP deployment in Azure, definitely reach out to me; I’d be glad to assist.


Azure Outage Post-Mortem Part 3

My previous blog posts, Azure Outage Post-Mortem – Part 1 and Azure Outage Post-Mortem Part 2, made some assumptions based upon limited information coming from blog posts and Twitter. I just attended a session at Ignite which gave a little more clarity as to what actually happened. Sometime tomorrow you should be able to view the session for yourself:

BRK3075 – Preparing for the unexpected: Anatomy of an Azure outage

They said the official Root Cause Analysis will be published soon, but in the meantime here are some tidbits of information gleaned from the session.

The outage was NOT caused by a lightning strike as previously reported. Instead, due to the nature of the storm, there were electrical sags and swells, which locked out a chiller plant in the first datacenter. During this first outage they were able to recover the chiller quickly, with no noticeable impact. Shortly thereafter there was a second outage at a second datacenter, which was not recovered properly, and that began an unfortunate series of events.

During this second outage, Microsoft states that “Engineers didn’t triage alerts correctly – chiller plant recovery was not prioritized”. There were numerous alerts being triggered at the time, and unfortunately the chiller being offline did not receive the priority it should have. The RCA as to why that happened is still being investigated.

Microsoft states that redundant chiller systems are, of course, in place. However, the cooling systems were not set to fail over automatically. Recently installed new equipment had not been fully tested, so it was set to manual mode until testing was complete.

After 45 minutes the ambient cooling failed, hardware shut down, air handlers shut down because they detected what they thought was a fire, and staff were evacuated due to the false fire alarm. During this time the temperature in the datacenter was increasing, and some hardware was not shut down properly, causing damage to some storage and networking equipment.

After manually resetting the chillers and opening the air handlers, the temperature began to return to normal. It took about 3 hours and 29 minutes before they had a complete picture of the status of the datacenter.

The biggest issue was the damage to storage. Microsoft’s primary concern is data protection, so short of the entire datacenter sinking into a sinkhole or a meteor strike taking it out, Microsoft will work to recover data to ensure no data loss. This of course took some time, which extended the overall length of the outage. The good news is that no customer data was lost; the bad news is that it seemed to take 24-48 hours for things to return to normal, based upon what I read on Twitter from customers complaining about the prolonged outage.

Everyone expected this outage to impact customers hosted in the South Central region, but what they did not expect was that it would have an impact outside of that region. In the session, Microsoft discussed some of the extended reach of the outage.

Azure Service Manager (ASM) – This controls Azure “Classic” resources, aka pre-ARM resources. Anyone relying on ASM could have been impacted. It wasn’t clear to me why this happened, but it appears that the South Central region hosts some important components of that service, which became unavailable.

Visual Studio Team Services (VSTS) – Again, it appears that many resources supporting this service are hosted in the South Central region. This outage is described in great detail by Buck Hodges (@tfsbuck), Director of Engineering, Azure DevOps, in this blog post:

Postmortem: VSTS 4 September 2018

Azure Active Directory (AAD) – When the South Central region failed, AAD did what it was designed to do and started directing authentication requests to other regions. As the East Coast started to wake up and come online, authentication traffic picked up. Normally AAD would handle this increase in traffic through autoscaling, but autoscaling has a dependency on ASM, which of course was offline. Without the ability to autoscale, AAD was not able to handle the increase in authentication requests. Exacerbating the situation was a bug in Office clients which gave them very aggressive retry logic and no backoff logic. This additional authentication traffic eventually brought AAD to its knees.
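
The fix for that client behavior is a well-known pattern: retry with exponential backoff plus jitter. A generic sketch of the idea (this is not the actual Office client code):

    # Generic retry with exponential backoff and jitter -- the pattern the
    # aggressive clients lacked. Illustrative only, not Office client code.
    function Invoke-WithBackoff {
        param(
            [scriptblock]$Action,
            [int]$MaxAttempts = 5,
            [int]$BaseDelayMs = 200
        )
        for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
            try {
                return & $Action
            }
            catch {
                if ($attempt -eq $MaxAttempts) { throw }
                # Exponential delay plus random jitter to avoid a thundering herd
                $delay = ($BaseDelayMs * [math]::Pow(2, $attempt - 1)) +
                         (Get-Random -Maximum $BaseDelayMs)
                Start-Sleep -Milliseconds $delay
            }
        }
    }

    # Example: retry a flaky authentication call
    Invoke-WithBackoff { Invoke-RestMethod -Uri 'https://login.example.com/token' }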

They ran out of time to discuss this further during the Ignite session, but one feature they will be introducing is the ability for users to fail over Storage Accounts manually. So in cases where the recovery time objective (RTO) is more important than the recovery point objective (RPO), the user will have the ability to recover their asynchronously replicated geo-redundant storage in an alternate datacenter should Microsoft experience another extended outage in the future.

Until that time, you will have to rely on other replication solutions, such as SIOS DataKeeper or Azure Site Recovery, or application-specific replication solutions, which give you the ability to replicate data across regions and put the ability to enact your disaster recovery plan in your control.
