Many people have found themselves settling for SQL Server Standard Edition due to the cost of SQL Server Enterprise Edition. Standard Edition has many of the same features, but it comes with a few limitations: it does not support AlwaysOn Availability Groups, and it only supports two nodes in a cluster. With Database Mirroring deprecated, and limited to synchronous replication in Standard Edition anyway, you really have limited disaster recovery options.
One of those options is SIOS DataKeeper Cluster Edition. DataKeeper works with your existing shared storage cluster and allows you to extend it to a 3rd node using either synchronous or asynchronous replication. If you are using SQL Server Enterprise Edition, you can simply add that 3rd node as another cluster member and you have a true multisite cluster. However, since we are talking about SQL Server Standard Edition, you can't add a 3rd node directly to the cluster. The good news is that DataKeeper will still allow you to replicate data to a 3rd node so your data is protected.
Recovery in the event of a disaster simply means using DataKeeper to bring that 3rd node online as the source of the mirror and then using SQL Server Management Studio to mount the databases that are on the replicated volumes. Your clients will also need to be redirected to this 3rd node, but it is a very cost-effective solution with an excellent RPO and a reasonable RTO.
The SIOS documentation covers how to do this, but I recently summarized the steps for one of my clients.
Configuration
- Stop the SQL Resource
- Remove the Physical Disk Resource From The SQL Cluster Resource
- Remove the Physical Disk from Available Storage
- Online the Physical Disk on the SECONDARY server and add the drive letter (if it is not already there)
- Run emcmd . setconfiguration <drive letter> 256 and reboot the SECONDARY server. This causes the SECONDARY server to block access to the E drive, which is important because you don't want two servers having access to the E drive at the same time if you can avoid it
- Online the disk on the PRIMARY server
- Add the Drive letter if needed
- Create a DataKeeper mirror from PRIMARY to DR
You may have to wait a minute for the E drive to appear as available in the DataKeeper Server Overview Report on all the servers before you can create the mirror. If done properly, you will create a mirror from PRIMARY to DR, and as part of that process DataKeeper will ask you about the SECONDARY server, which shares the volume you are replicating. A PowerShell sketch of the cluster-side portion of these steps is shown below.
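For reference, here is a minimal PowerShell sketch of the cluster-side preparation, run as administrator on a cluster node. The resource names and drive letter are placeholders of mine, not values from the SIOS documentation, so substitute your own.

# Requires the FailoverClusters module (part of the Failover Clustering tools).
Import-Module FailoverClusters

# Placeholder names; substitute your own SQL Server role resource and disk resource.
$sqlResource  = "SQL Server (MSSQLSERVER)"
$diskResource = "Cluster Disk 1"

# Stop the SQL Server resource.
Stop-ClusterResource -Name $sqlResource

# Remove the Physical Disk resource from the SQL role and from Available Storage.
# Deleting the cluster resource does not touch the data on the volume itself.
Remove-ClusterResource -Name $diskResource -Force

# On the SECONDARY server, after onlining the disk and assigning the drive letter,
# block access to the volume and reboot (E is an example drive letter):
#   emcmd . setconfiguration E 256
#   Restart-Computer -Force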
In the event of a disaster….
On DR Node
- Run EMCMD . switchovervolume <drive letter>
- The first time make sure the SQL Service account has read/write access to all data and log files. You WILL have to explicitly grant this access the very first time you try to mount the databases.
- Use SQL Management Studio to mount the databases
- Redirect all clients to the server in the DR site, or better yet, have the applications that reside in the DR site pre-configured to point to the SQL Server instance in the DR site. (A sketch of the DR-side commands follows this list.)
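As a rough sketch of those DR-side steps from an elevated PowerShell prompt on the DR node: the drive letter, database name, and file paths below are hypothetical examples of mine, and the T-SQL attach is simply the scripted equivalent of mounting the databases in Management Studio (it assumes the SqlServer module is installed for Invoke-Sqlcmd).

# Make the DR node the source of the mirror for the replicated volume (E is an example).
# emcmd.exe ships with DataKeeper; adjust the path if it is not already on the PATH.
emcmd . switchovervolume E

# Attach the databases that live on the replicated volume. "SalesDB" and the file
# paths are hypothetical; use your actual data/log locations, and make sure the
# SQL Server service account has read/write access to them first.
$attach = @"
CREATE DATABASE [SalesDB]
ON (FILENAME = N'E:\SQLData\SalesDB.mdf'),
   (FILENAME = N'E:\SQLLogs\SalesDB_log.ldf')
FOR ATTACH;
"@
Invoke-Sqlcmd -ServerInstance "localhost" -Query $attach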
After the disaster is over
- Power the servers (PRIMARY, SECONDARY) in the main site back on
- Wait for the mirror to reach the mirroring state
- Determine which node was the previous source (run PowerShell as an administrator; see the sketch after this list):
get-clusterresource -Name "<DataKeeper Volume Resource name>" | get-clusterparameter
- Make sure no DataKeeper Volume Resources are online in the cluster
- Start the DataKeeper GUI on one cluster node and resolve any split-brain conditions (most likely there are none), making sure the DR node is selected as the source during any split-brain recovery procedure
- On the node that was reported as the previous source run EMCMD . switchovervolume <drive letter>
- Bring SQL Server online in Failover Cluster Manager
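Here is a small PowerShell sketch of those failback checks. It assumes the DataKeeper cluster resources use the "DataKeeper Volume" resource type, and the resource name is a placeholder.

# Run as administrator on a cluster node in the main site.
Import-Module FailoverClusters

# Confirm that no DataKeeper Volume resources are online in the cluster.
Get-ClusterResource |
    Where-Object { $_.ResourceType -like "DataKeeper Volume*" } |
    Format-Table Name, State, OwnerNode

# Inspect the resource parameters to see which node was the previous source.
# "DataKeeper Volume E" is a placeholder resource name.
Get-ClusterResource -Name "DataKeeper Volume E" | Get-ClusterParameter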
The above steps assume you have SIOS DataKeeper Cluster Edition installed on all three servers (PRIMARY, SECONDARY, DR), that PRIMARY and SECONDARY are a two-node shared storage cluster, and that you are replicating data to DR, which is just a standalone SQL Server instance (not part of the cluster) with locally attached storage. The DR server will have volume(s) of the same size and drive letter as the shared cluster volume(s). This works rather well and will even let you replicate to a target in the cloud if you don't have your own DR site configured.
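If it helps, a quick one-liner to confirm the DR volume matches (E is just an example drive letter; run it on the DR server and on the node that currently owns the shared disk):

# Compare the output from both servers; the drive letter and size should match.
Get-Volume -DriveLetter E | Select-Object DriveLetter, FileSystemLabel, Size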
You can also build the same configuration using all replicated storage if you want to eliminate the SAN completely.
Here is a nice short video that illustrates some of the possible configurations: http://videos.us.sios.com/medias/aula05u2fl
Hi, I am in fact trying to achieve the same thing as you described, but I cannot find instructions on how to do it anywhere. I am working with an on-prem physical two-node WSFC with 3 FCIs on it, all on port 1433 and SAN shared storage. In Azure, there is now a one-node WSFC with no disk resources or available disks yet, but all the corresponding volumes with the correct size and drive letter are created on the "receiving" end, the soon-to-be target server. But where do I start? Should I erase all the available disks on the source WSFC first? I truly do not get it… could you point me in a good direction? Thanks.
Who are you working with at SIOS? They should be able to give you some guidance. Essentially, you will add the Azure node as a 3rd node in your cluster and extend all three SQL Server instances to this node. You must be mistaken, though, about all three instances using port 1433. Only one of the instances can use 1433; the other two instances must use unique port numbers that you should lock down.
You will replace the Physical Disk Resources used by your local cluster with DataKeeper Volume Resources and extend them to the 3rd node as described here.
Once you have all the nodes running in Azure, you will need to create separate rules, probes, and endpoints for each instance in the ILB, as described in this article. If you have questions, let me know.
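In case it is useful, here is a rough sketch of what one probe/rule pair might look like with the Az PowerShell module. Every name and port below (resource group, load balancer, frontend, backend pool, probe port 59999) is a hypothetical placeholder of mine, not something from the article; you would repeat the probe and rule for each instance with its own ports.

# Assumes the Az.Network module and an existing internal load balancer.
Import-Module Az.Network

$ilb = Get-AzLoadBalancer -ResourceGroupName "sql-dr-rg" -Name "sql-ilb"

# Health probe for the first FCI (59999 is an example probe port).
$ilb | Add-AzLoadBalancerProbeConfig -Name "fci1-probe" -Protocol Tcp `
    -Port 59999 -IntervalInSeconds 5 -ProbeCount 2

$fe    = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $ilb -Name "fci1-frontend"
$be    = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $ilb -Name "sql-backend-pool"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $ilb -Name "fci1-probe"

# Load-balancing rule for the first FCI (1433 here; the other instances would use their own ports).
$ilb | Add-AzLoadBalancerRuleConfig -Name "fci1-rule" -Protocol Tcp `
    -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe `
    -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP

# Commit the changes to the load balancer.
$ilb | Set-AzLoadBalancer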