Clustering For Mere Mortals

Amazon EC2 Storage and Instance Size Considerations

Posted in Amazon, AWS, Cloud, EC2 by daveberm on December 12, 2013

When you launch a new instance you only have two options for the OS storage: Standard or Provisioned IOPS. Both are EBS volumes that persist across reboots. Many instances come with a bunch of extra ephemeral drives attached, which are NOT persistent. I usually delete these ephemeral drives so I am not tempted to store data on them. If you need more persistent storage, you will have to add additional EBS volumes.
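
If you are scripting the storage setup, the AWS Tools for PowerShell can create and attach the extra EBS volume for you. This is just a minimal sketch; it assumes the AWSPowerShell module is installed and credentials are configured, and the Availability Zone, instance ID and device name are placeholders:

Import-Module AWSPowerShell

# Create a 100 GB standard EBS volume in the same Availability Zone as the instance
$vol = New-EC2Volume -AvailabilityZone "us-east-1a" -VolumeType standard -Size 100

# Attach it to the instance as an additional persistent data disk
Add-EC2Volume -VolumeId $vol.VolumeId -InstanceId "i-0123abcd" -Device "xvdf"

Once it is attached, bring the disk online and format it inside the instance just like any other new disk.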

This article seems to indicate that you can launch AMIs based on the “EC2 Instance Store”, which is NOT persistent, but I’ve never seen that option. Every instance I have launched has had an EBS-based root device. I’m assuming they mean that some of the instances in the Amazon Marketplace may use non-persistent root volumes. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html

You’ll see the root device listed when you launch the instance. As long as EBS is the root device, you are good to go and can be sure your changes will persist across reboots.
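
If you would rather check from a script than from the console, something along these lines (again using the AWS Tools for PowerShell, with a placeholder AMI ID) will tell you whether an AMI’s root device is EBS or instance store:

Import-Module AWSPowerShell

# RootDeviceType comes back as "ebs" (persistent) or "instance-store" (ephemeral)
$image = Get-EC2Image -ImageId "ami-0123abcd"
$image.RootDeviceType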

 

As far as instance size goes, it will depend on the needs of the application. The good thing about EC2 is that if you provision an instance that is underpowered, you can go back and increase the instance size, though doing so requires stopping and restarting the instance. If IOPS are important, you will want to make sure you choose an instance type that is EBS optimized. See this page for the instance details: http://aws.amazon.com/ec2/instance-types/#instance-details . You’ll see that the first instance type that supports EBS optimization is m1.large.
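
The resize itself can also be scripted. Below is a rough sketch with the AWS Tools for PowerShell; the instance ID and target type are placeholders, it assumes an EBS-backed instance, and parameter names may vary slightly between module versions:

Import-Module AWSPowerShell

$instanceId = "i-0123abcd"

# The instance must be stopped before its type can be changed
Stop-EC2Instance -InstanceId $instanceId
# (wait for the instance to reach the stopped state before continuing)

# Move to a larger, EBS-optimized instance type and turn on EBS optimization
Edit-EC2InstanceAttribute -InstanceId $instanceId -InstanceType "m1.large"
Edit-EC2InstanceAttribute -InstanceId $instanceId -EbsOptimized $true

# Start it back up
Start-EC2Instance -InstanceId $instanceId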

Read this guide for additional tips on optimal storage configuration: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html . One of the best tips for increased IOPS is to use multiple smaller EBS volumes and stripe them together in a RAID 0 set on the Windows server. Because the EBS volumes are RAID 1 on the back end, you are essentially deploying RAID 1+0 in your VM for optimal performance and availability.
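
On Windows Server 2012 you can build that stripe with the Storage Spaces cmdlets instead of clicking through Disk Management. This is only a sketch: the pool and disk names are made up, and it assumes your attached EBS data volumes show up as poolable physical disks.

# Gather the uninitialized EBS data disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a pool and a Simple (striped / RAID 0) virtual disk across all of them
New-StoragePool -FriendlyName "EbsPool" -StorageSubSystemFriendlyName "*Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "EbsPool" -FriendlyName "EbsStripe" -ResiliencySettingName Simple -UseMaximumSize

# Initialize, partition and format the striped disk
Get-VirtualDisk -FriendlyName "EbsStripe" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data" -Confirm:$false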

It is now cheaper to get provisioned IOPS on Amazon EC2 EBS

Posted in Amazon, AWS, Cloud, EC2 by daveberm on November 8, 2013

In the old days, if you wanted a guaranteed 4000 IOPS on EBS you had to provision a minimum of a 400 GB volume. Considering you pay per GB, and provisioned IOPS are not cheap, if you only needed 100 GB of fast storage you were stuck paying for 300 GB of unused storage.

With this recent announcement, Amazon has made it easier to get fast storage in smaller increments. Now if you want 4000 IOPS you can get it on EBS volumes as small as roughly 133 GB (thanks to the new 30:1 IOPS-to-GB ratio) and up to 1 TB in size. Read the following press release for more information.

http://aws.amazon.com/about-aws/whats-new/2013/10/09/ebs-provisioned-iops-maximum-iops-gb-ratio-increased-to-30-1/?sc_ichannel=EM&sc_icountry=EN&sc_icampaign=13Oct_Newsletter&Campaign_id=35060660&ref_=pe_467350_35060660_14
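
To put the new math in perspective: at the 30:1 maximum IOPS-to-GB ratio, a 4000 IOPS volume only needs about 134 GB (4000 / 30, rounded up) instead of 400 GB. A quick sketch with the AWS Tools for PowerShell, with a placeholder Availability Zone:

Import-Module AWSPowerShell

# 4000 provisioned IOPS now needs only ~134 GB at the 30:1 ratio, instead of 400 GB
New-EC2Volume -AvailabilityZone "us-east-1a" -VolumeType io1 -Iops 4000 -Size 134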

SQL Server – Massive Speed and High Availability

Posted in DataKeeper, High Availability, SQL by daveberm on August 8, 2013

Check out this great article by SQL Server guru @MrDenny

Title: Massive Speed and HA

Subtitle: Two Things That Usually Don’t Go Together

 

First Paragraph: Question: William asks “I have a SQL Server which needs both high availability and high-speed storage. Even SAN storage with SSD based hard drives isn’t providing the performance levels that we are looking for, while still getting the failover cluster HA solution that we need for our SQL Server 2008 R2 database?”

 

Read More…

Article Link: http://sqlmag.com/blog/massive-speed-and-ha

 

Author: Denny Cherry (@MrDenny)

Source: SQL Server Pro / SQLMag.com

Source Link: http://sqlmag.com/

BIO

Denny Cherry is the owner and principal consultant for Denny Cherry & Associates Consulting and has over a decade of experience working with platforms such as Microsoft SQL Server, Hyper-V, vSphere and enterprise storage solutions. Denny’s areas of technical expertise include system architecture, performance tuning, security, replication and troubleshooting. Denny currently holds several of the Microsoft certifications related to SQL Server for versions 2000 through 2008, including the Microsoft Certified Master, and has been a Microsoft MVP for several years. Denny has written several books and dozens of technical articles on SQL Server management and how SQL Server integrates with various other technologies.

Webinar Invite: How to Deploy SQL Server AlwaysOn Failover Clusters in Amazon EC2 with @awscloud #amazonaws

Posted in Amazon, AWS, Cloud, DataKeeper, EC2, High Availability by daveberm on May 23, 2013

Deploying Your Business Critical SQL Server Apps on Amazon EC2

 

Amazon Web Services (AWS) and SIOS Technology Corp, an AWS Partner Network (APN) Technology Partner, invite you to attend this live webinar to learn how to optimize mission critical SQL Server deployments on Amazon EC2.

Learn how to take advantage of the cost benefits and flexibility of Amazon EC2 while maintaining protection with native Microsoft Windows Server Failover Clustering – all without shared storage.

Who should attend:

Solution Architects, Developers, Development Leads and other SQL Professionals

Presenters:

Miles Ward, Solutions Architect, Amazon Web Services

Tony Tomarchio, Director of Field Engineering, SIOS Technology Corp

Date / Time:

Wednesday, June 5, 2013 – 10AM PT / 1PM ET

Click here to register

http://bit.ly/10VLtDu

Installing SQL Server 2008 R2 in a Windows Server 2012 Cluster

Posted in DataKeeper, High Availability, SQL, WSFC by daveberm on April 12, 2013

If you want to install ANY version of SQL Server in a Windows Server 2012 environment I highly recommend you read the following KB article.

http://support.microsoft.com/kb/2681562

In particular, I ran into the following error (among others) while trying to install SQL Server 2008 R2 on Windows Server 2012.

 



Figure 1 – Rule “Cluster service verification” failed

The fix is simple: as described in the KB article, enable the Failover Cluster Automation Server feature in the Add Roles and Features wizard or via the following PowerShell command:

Add-WindowsFeature RSAT-Clustering-AutomationServer

That one fix also resolves the other Setup Support Rules errors, including the cluster validation error and any errors about cluster storage. Re-run the SQL installation and it should pass all of the Setup Support Rules and allow you to continue with the cluster install.
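
If you want to confirm the feature is actually present on both nodes before you retry setup, a quick check from PowerShell (ServerManager module) should look something like this:

Import-Module ServerManager

# Should report Installed on every cluster node before you re-run the SQL setup
Get-WindowsFeature RSAT-Clustering-AutomationServer | Format-Table Name, InstallState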

Special offer for Microsoft MVPs – free DataKeeper! #MVPbuzz

Posted in DataKeeper by daveberm on March 7, 2013

I just got the word that this is official and we are ready to ship software…

SIOS Technology is pleased to offer the Microsoft MVP community two fully functional NFR copies of SteelEye DataKeeper Cluster Edition (DKCE). DKCE enables SANless failover clustering solutions using any locally attached storage, and it enables high-speed, highly available SMB 3.0 storage solutions for SQL Server and Hyper-V.

Common use cases of DKCE include SAN-less SQL Server Failover Cluster Instances using local high-speed storage solutions like Fusion-io, or even building clusters in the public cloud. Another exciting possibility is highly available file servers on Windows Server 2012, providing robust SMB 3.0 storage for Hyper-V or SQL Server without having to purchase a shared SAS array or SAN. Once again, you can take advantage of the blazing speeds possible with SSD or local flash-based storage without sacrificing any availability, and be the first kid on your block to migrate your Hyper-V and SQL Server to SMB 3.0 and take advantage of faster failovers and easier storage management.

Simply email datakeeper-mvp@us.sios.com to learn how to get started with DataKeeper Cluster Edition!

MVPs grab your copy today and let me know what you think. I will be monitoring the forum you will be invited to join and look forward to your feedback.

 

Why replicated clusters with DataKeeper are better than single copy SAN based clusters

Posted in DataKeeper, High Availability, SQL by daveberm on February 7, 2013

If you have followed the history of clustering as closely as I have for the past 10 years as a Microsoft Cluster MVP, you will notice that Microsoft has been steadily moving away from single copy clusters. It started with Windows Server 2003, with the elimination of the shared disk quorum and the introduction of majority node set quorums and the file share witness. The complaint with clusters based on shared disk quorums was that if the quorum became unavailable or corrupt, the entire cluster would fail. That was a major complaint, and it is primarily what gave clustering a bad name in the early days.

Once the shared disk quorum was eliminated, people were still left with their application data residing on shared storage, which was also a problem: the SAN was still a single point of failure in the cluster, a performance impediment and a management headache. Microsoft has begun to address those concerns with the introduction of Exchange 2007 CCR and Exchange 2010 DAGs, as well as SQL Server Database Mirroring. Microsoft has eliminated Exchange 2010 single copy clusters entirely, and SQL Server single copy clusters are only still around because they haven’t perfected SQL Server replication yet.

Hyper-V, the most recent workload supported by Microsoft failover clustering, does not yet have a native, cluster-integrated replication solution. This is where SIOS DataKeeper fits in. We first demonstrated our DataKeeper Hyper-V replication solution at the Microsoft virtualization launch in September of 2008 and have been providing HA and DR solutions for Hyper-V since Hyper-V was first introduced. Our solution is logo certified for Windows Server 2008 R2 as well as Hyper-V.

DataKeeper fills the gap left by single copy clusters, as shown in the sections below. The following customer story also highlights some of the reasons why people are adopting DataKeeper in lieu of SAN based solutions.

http://www.computerweekly.com/news/2240177361/University-shuns-HP-array-features-for-SIOS-host-based-replication

Eliminates single point of failure

A SAN is a single entity made up of redundant pieces. To have a truly redundant SAN you need redundant controllers, power supplies, CPUs, switches, UPSs and RAM, and the clients connecting to it need redundant NICs or HBAs with multi-path solutions configured. Even once you have eliminated hardware as a single point of failure, the SAN is still controlled by firmware, which itself is a single point of failure. And because the SAN resides in a single location, any physical disaster (think water, fire, etc.) also represents a risk.

I/O Performance

Given the same disk specs, disks installed locally will perform better than disks on a SAN accessed via iSCSI. Also, using local storage opens up the possibility of even higher-speed storage solutions such as flash-based PCIe storage, which outperforms SANs costing hundreds of thousands of dollars at a fraction of the cost.

Cost

Not only do you have to factor in the initial investment, which the DataKeeper solution wins by a significant margin, you also have to factor in the ongoing expense of the maintenance, power and cooling required for any enterprise-class SAN.

Supports future expansion for Disaster Recovery

Should disaster recovery become a requirement in the future, the DataKeeper solution can easily accommodate adding an additional Hyper-V node in a remote location in a multisite cluster configuration, giving you a robust disaster recovery solution with the best RTO and RPO available. The SAN solution would require the purchase of an additional SAN plus replication software, and it might not even include cluster integration, as only a few solutions integrate with failover clustering as well as DataKeeper does.

Eliminates Planned Downtime

With SAN based cluster solutions, any maintenance on the SAN requires planned downtime. The DataKeeper solution allows for rolling upgrades, meaning planned downtime for hardware maintenance is eliminated.

Eases management

SAN administration usually requires a SAN administrator who is familiar with the features and functionality of that particular SAN. The DataKeeper solution, on the other hand, is a simple software solution that is managed by the Windows Server administrator and features complete integration with Windows Server Failover Clustering, meaning management is done through Failover Cluster Manager, a tool that should be familiar to most Windows administrators.

Summary

In summary, DataKeeper is able to provide a much more resilient cluster solution at a fraction of the cost of SAN based solutions.

SQL Server 2012 AlwaysOn Multisite Failover Cluster Instance White Paper

Posted in DataKeeper, High Availability, SQL, WSFC by daveberm on February 5, 2013

Here is an excellent white paper on SQL Server multisite clusters; however, it forgets to mention that you can also do this with host-based replication. Instead, it assumes you have “two EMC Symmetrix VMAX enterprise storage arrays, one at each site. The arrays were both configured with two VMAX storage engines and 240 disk drives”. If you have a million-plus dollars in your budget for storage, go ahead and knock yourself out. If not, you may want to look into some Fusion-io PCIe flash storage and host-based replication with DataKeeper Cluster Edition: faster than a SAN at a fraction of the cost, with all of the availability. Check out how Polaris Industries did just this: http://www.fusionio.com/blog/polaris-sios/

 

 

SQL Server High Availability in AWS #Cloud

Posted in DataKeeper, SQL, WSFC by daveberm on January 11, 2013

Have you been thinking about moving to the cloud? The potential cost savings make it nearly impossible not to consider. The cost justification is usually easy to figure out, and the cloud almost always comes out looking like a good investment. However, after you stop counting the money you are going to save, you start thinking about things like security and availability and wonder whether the cloud is for you.

In a traditional data center you have full control and can deploy whatever security and high availability solution you like. However, once you decide to move your servers to the cloud, your choices can become much more limited. It doesn’t matter whether you’re with Amazon, Google or Microsoft, outages in the cloud can and do occur, and you need to do whatever you can to mitigate those risks.

Let’s take a closer look at Amazon Web Services (AWS) for instance. What are the options you have to ensure that your SQL Server database can survive an unexpected outage? While some applications can be deployed in a load balanced configuration across multiple availability zones, SQL Server is generally not deployed in a load balanced configuration. What this means is that SQL Server itself resides in a single availability zone and if that zone should become unavailable, your whole application stack can come to a grinding halt.

If you read this article by Miles Ward, you will see that with SQL Server 2008 R2 your availability options are pretty limited. On page 11 of that article there is a nice chart that lays out your HA options. As you will see, the options are severely limited and mostly fall outside of what would be described as HA. Log shipping, mirroring and transactional replication are pretty much the only options you have, and they are more data protection options than HA options. If you want Microsoft failover clustering, you will find yourself out of luck due to some network limitations in AWS (clients can’t connect to a clustered IP address) and the lack of the shared disk resource required for traditional SQL clusters.

If you are looking to deploy SQL Server 2012, your options get a little better. As described by Jeremiah Peschka, with a little manual intervention you can deploy AlwaysOn Availability Groups in AWS to do asynchronous replication from your data center to AWS, or even between AWS availability zones. Of course, this assumes you have the SQL Server 2012 Enterprise license required for AlwaysOn Availability Groups. The only “issue” is that AWS really doesn’t support moving a cluster IP address from one server to another, so client redirection has to be done manually after a switchover using the ec2-unassign-private-ip-addresses and ec2-assign-private-ip-addresses commands that Peschka describes in his article. All in all this is a very manual process, which again does not really fit the description of a highly available system.
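
To give you a feel for what that manual redirection involves, here is a rough sketch using the AWS Tools for PowerShell equivalents of those commands (Register-EC2PrivateIpAddress / Unregister-EC2PrivateIpAddress). The network interface IDs and IP address are placeholders, and the exact parameter names may differ slightly by module version:

Import-Module AWSPowerShell

# The secondary private IP address the clients connect to (placeholder value)
$clusterIp = "10.0.1.100"

# After a switchover, remove the IP from the old primary's network interface...
Unregister-EC2PrivateIpAddress -NetworkInterfaceId "eni-aaaa1111" -PrivateIpAddress $clusterIp

# ...and assign it to the new primary's network interface
Register-EC2PrivateIpAddress -NetworkInterfaceId "eni-bbbb2222" -PrivateIpAddress $clusterIp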

If you can live without automated recovery and with the limitations of AlwaysOn Availability Groups that I described in a previous blog post, then you might just want to go ahead and try the AlwaysOn Availability Group deployment in AWS. However, if you are looking for an easier, more affordable, more robust HA solution, I have some really good news. SIOS Technology Corp has been looking at this problem and has developed a solution that overcomes all of the limitations previously described and will be available as an AMI for easy deployment. This solution is currently in private beta, but will be widely available later this year.

The SIOS solution is based on SQL Server in a Microsoft failover cluster using DataKeeper Cluster Edition host-based replication. By using host-based replication they have overcome the first obstacle of clustering in EC2: the lack of shared storage. The second obstacle SIOS had to overcome was the client redirection issue described by Peschka; the client access point needs to be manipulated from within EC2, not by failover clustering. SIOS has built intelligence into their AMI solution such that the reassignment of the IP address is automated as part of the cluster failover process, effectively simulating the behavior you would normally expect from a cluster.

And because all of this is built on top of failover clustering, it can be deployed using SQL Server 2008/2008 R2 or 2012. Even the Standard Edition of SQL Server will support a 2-node cluster, so the cost savings vs. deploying SQL Server 2012 AlwaysOn Availability Groups could be substantial.

Let me know what you think. Does this solution sound interesting? What are you doing today to ensure the availability of your SQL Server EC2 instances?

Clustering SQL Server 2012 on Windows Server 2012 Step-by-Step

Posted in DataKeeper, SQL, WSFC by daveberm on January 5, 2013

In my previous post I walked through the process of building a 2-node cluster up to the point where we are ready to start configuring the cluster resources. If you have completed those steps you are ready to move on and actually create your clustered application. First up, we have SQL Server 2012. SQL Server 2012 cluster installation is pretty much identical to SQL 2008/2008 R2 cluster installations, so most of this will apply even if you are using SQL 2008/2008 R2. The terminology around SQL Server 2012 Clustering gets a little convoluted. You will hear mention of SQL Server AlwaysOn, which essentially could mean one of two different things: AlwaysOn Availability Groups or AlwaysOn Failover Cluster Instance. The confusion arises because both solutions require some level of integration with Windows Server Failover Clustering and it is even further confused by the fact that you can deploy a combination of AlwaysOn Availability Groups and AlwaysOn Failover Clustering, but that is a topic for another day!

I’ll break it down in easy-to-understand terms. Essentially, AlwaysOn Availability Groups is the successor to what was called Database Mirroring in SQL 2008 R2 and earlier. It has some new bells and whistles that overcome some of the limitations of earlier versions of database mirroring, so it is certainly worth checking out. AlwaysOn Failover Cluster Instance is simply what used to be called a SQL Server Failover Cluster; it is the latest edition of the same clustering technology that has been available since the early versions of SQL Server. One of the best new features of a SQL Server 2012 AlwaysOn Failover Cluster Instance is the ability to have nodes in different subnets, which was a major limitation in earlier versions of SQL Server. In a previous blog entry I discussed some of the limitations of AlwaysOn Availability Groups; you should check that out before you make any decisions on which technology to deploy.

With that said, this article is going to focus on the Step-by-Step instructions on deploying a SQL Server 2012 AlwaysOn Failover Cluster Instance.

Step 1 is to make sure your cluster storage is ready. If you followed the instructions in my previous post, you will know that instead of a shared disk resource, we are going to use a replicated disk resource using the 3rd party software DataKeeper Cluster Edition. If you are using shared storage and have already added the storage, then you can skip right to Step 2 where we begin the SQL install. Otherwise, follow the steps below to configure DataKeeper Cluster Edition to replicate the local disks for use in a SQL cluster.

  1. Install and configure DataKeeper Cluster Edition
    1. Run DK Setup
    2. Go through the entire installation process selecting all of the default values.

    3. Restart the computer after the installation completes as prompted and repeat the process on the SECONDARY server
    4. Launch the DataKeeper UI on PRIMARY and click Connect to Server. Connect to PRIMARY and then connect to SECONDARY

    5. Click on Create Job and walk through the Create Job wizard to create a mirror of the E drive


      Choose the source volume of the mirror and the IP address of the NIC that will carry the replication traffic.

      Choose the target of the mirror and click Next

      Here you will choose your mirror options:
      Compression – only enable for replication across a WAN
      Asynchronous – choose this for all WAN replication
      Synchronous – this is ideal for LAN replication
      Maximum bandwidth – used in WAN replication as a way to cap the amount of bandwidth replication is allowed to use. Generally it should be left at 0; however, during initial mirror creation you may want to limit the bandwidth so the initial synchronization does not consume all available bandwidth

      Once you click Done the mirror will be created.

      Once the mirror is created you will be prompted to register the volume in Windows Server Failover Clustering (WSFC). Click Yes and a new DataKeeper Volume Resource will be registered in Available Storage (see picture in Step 2).
  2. In Step 2 we are going to begin the installation of SQL Server 2012 on the first cluster node.
    1. Before we begin, make sure your storage appears in Failover Cluster Manager and is assigned to the Available Storage group as shown below
    2. At this point we are going to launch the SQL Server 2012 setup and go to the Installation Tab and click New SQL Server failover cluster installation
    3. Step through the installation as shown in the following screen shots.



      The following error is expected if your servers are not connected to the internet. If they are connected to the internet, you should go ahead and accept any updates it finds.

      For Service Account best practices read the following: http://msdn.microsoft.com/en-us/library/ms143504.aspx

      For our lab purposes I am just using the Administrator account


      Before you click Next, click on the Data Directories tab and change the location of tempdb. With SQL Server 2012, tempdb no longer has to reside on the cluster storage. In our example we are moving tempdb to the C drive to avoid replicating unnecessary data.

      At this point you will need to make sure to create the same tempdb directory on the SECONDARY server as advised by the warning.




      Congratulations, the 1st cluster node has been installed.

  3. We are now ready to install SQL on the second node of the cluster.
    1. Go to the SECONDARY server and launch the SQL Server 2012 Setup and follow the wizard as shown in the following screen shots, starting with clicking on Add node to a SQL Server failover cluster.




      The following error is expected if your servers are not connected to the internet. If they are connected to the internet, you should go ahead and accept any updates it finds.

  4. Congratulations – you have built a 2-node SQL Server 2012 AlwaysOn Failover Cluster Instance. Open up Failover Cluster Manager and you should see something that looks like this (or run the quick PowerShell check at the end of this post).

    This article was meant to be just a quick run through on how to install SQL 2012 in a Windows Server 2012 cluster. For additional reading start here and let Google be your friend!
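
If you prefer PowerShell to the Failover Cluster Manager GUI for that final check, something along these lines should show the SQL Server role and its DataKeeper volume resource online. It uses the FailoverClusters module, and the role name assumes a default instance, so adjust it to match your own cluster:

Import-Module FailoverClusters

# List the clustered roles along with their current owner nodes and states
Get-ClusterGroup

# Drill into the SQL Server role - the network name, IP address, SQL Server,
# SQL Server Agent and DataKeeper Volume E resources should all be Online
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)" | Get-ClusterResource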
