Creating your cluster and configuring the quorum: Node and File Share Majority
Welcome to Part 1 of my series “Step-by-Step: Configuring a 2-node multi-site cluster on Windows Server 2008 R2”. Before we jump right in to the details, let’s take a moment to discuss what exactly a multi-site cluster is and why you would want to implement one. Microsoft has a great webpage and white paper that you will want to download to get all of the details, so I won’t repeat everything here. But basically, a multi-site cluster is a disaster recovery solution and a high availability solution rolled into one. A multi-site cluster gives you the best recovery point objective (RPO) and recovery time objective (RTO) available for your critical applications. With Windows Server 2008 failover clustering, a multi-site cluster has become much more feasible with the introduction of cross-subnet failover and support for high-latency network communications.
I mentioned “cross-subnet failover” as a great new feature of Windows Server 2008 Failover Clustering, and it is. However, SQL Server has not yet embraced this functionality, which means you will still be required to span your subnet across sites in a SQL Server multi-site cluster. As of Tech-Ed 2009, the SQL Server team reported that they plan on supporting this feature, but they say it will come sometime after SQL Server 2008 R2 is released. So for the foreseeable future, you will be stuck with spanning your subnet across sites in a SQL Server multi-site cluster. There are a few other network-related issues that you need to consider as well, such as redundant communication paths, bandwidth and file share witness placement.
All Microsoft failover clusters must have redundant network communication paths. This ensures that a failure of any one communication path will not result in a false failover and ensures that your cluster remains highly available. A multi-site cluster has this requirement as well, so you will want to plan your network with that in mind. There are generally two things that will have to travel between nodes: replication traffic and cluster heartbeats. In addition to that, you will also need to consider client connectivity and cluster management activity. Whatever networks you have in place, be sure you are not overwhelming them, or you will see unreliable behavior. Your replication traffic will most likely require the greatest amount of bandwidth; you will need to work with your replication vendor to determine how much bandwidth is required.
With your redundant communication paths in place, the last thing you need to consider is your quorum model. For a 2-node multi-site cluster configuration, the Microsoft recommended configuration is a Node and File Share Majority quorum. For a detailed description of the quorum types, have a look at this article.
The most common cause of confusion with the Node and File Share Majority quorum is the placement of the File Share Witness. Where should I put the server that is hosting the file share? Let’s look at the options.
Option 1 – place the file share in the primary site.
This is certainly a valid option for disaster recovery, but not so much for high availability. If the entire site fails (including the Primary node and the file share witness), the Secondary node in the secondary site will not come into service automatically; you will need to force the quorum online manually. This is because it will be the only remaining vote in the cluster. One out of three does not make a majority! Now if you can live with a manual step being involved for recovery in the event of a disaster, then this configuration may be OK for you.
Option 2 – place the file share in the secondary site.
This is not such a good idea. Although it solves the problem of automatic recovery in the event of a complete site loss, it exposes you to the risk of a false failover. Consider this… what happens if your secondary site goes down? In this case, your primary server (Node1) will also go offline, as it is now only a single node in the primary site and will no longer have a node majority. I can see no good reason to implement this configuration as there is too much risk involved.
Option 3 – place the file share witness in a 3rd geographic location.
This is the preferred configuration, as it allows for automatic failover in the event of a complete site loss and eliminates the possibility of a failure of the secondary site causing the primary node to go offline. By having a 3rd site host the file share witness you have eliminated any one site as a single point of failure, so now the cluster will act as you expect and automatic failover in the event of a site loss is possible. Identifying a 3rd geographic location can be challenging for some companies, but with the advent of cloud-based utility computing like Amazon EC2 and GoGrid, it is well within the reach of all companies to put a file share witness in the cloud and have the resiliency required for effective multi-site clusters. In fact, you may consider the cloud itself as your secondary data center and just fail over to the cloud in the event of a disaster. I think the possibilities of cloud-based computing and disaster recovery configurations are extremely enticing, and in fact I plan on doing a whole blog post on just that in the near future.
Configure the Cluster
Now that we have the basics in place, let’s get started with the actual configuration of the cluster. You will want to add the Failover Clustering feature to both nodes of your cluster. For simplicity’s sake, I’ve called my nodes PRIMARY and SECONDARY. This is accomplished very easily through the Add Features Wizard as shown below.
Figure 1 – Add the Failover Clustering feature
Next you will want to have a look at your network connections. It is best if you rename the connections on each of your servers to reflect the network that they represent. This will make things easier to remember later.
Figure 2- Change the names of your network connections
You will also want to go into the Advanced Settings of your Network Connections (press Alt to see the Advanced Settings menu) on each server and make sure the Public network is first in the list.
Figure 3- Make sure your public network is first
Your private network should only contain an IP address and subnet mask. No default gateway or DNS servers should be defined. Your nodes need to be able to communicate across this network, so verify connectivity between the servers and add static routes if necessary.
Figure 4 – Private network settings
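If you do need static routes between the private networks, they can be added from an elevated command prompt on each node. This is only a sketch; the subnets and gateway address below are made-up examples, not values from this article.

```powershell
# Suppose the private network is 10.10.1.0/24 at the primary site and
# 10.10.2.0/24 at the secondary site, reachable via a router at 10.10.1.1.
# The -p switch makes the route persistent across reboots.
route -p add 10.10.2.0 mask 255.255.255.0 10.10.1.1

# Verify that the nodes can actually reach each other across the
# private network (10.10.2.10 is a hypothetical remote node address).
ping 10.10.2.10
```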
Once you have your network configured, you are ready to build your cluster. The first step is to “Validate a Configuration”. Open up the Failover Cluster Manager and click on Validate a Configuration.
Figure 5 – Validate a Configuration
The Validation Wizard launches and presents you the first screen as shown below. Add the two servers in your cluster and click Next to continue.
Figure 6 – Add the cluster nodes
A multi-site cluster does not need to pass the storage validation (see Microsoft article). To skip the storage validation process, click on “Run only the tests I select” and click Continue.
Figure 7 – Select “Run only tests I select”
In the test selection screen, unselect Storage and click Next.
Figure 8 – Unselect the Storage test
You will be presented with the following confirmation screen. Click Next to continue.
Figure 9 – Confirm your selection
If you have done everything right, you should see a summary page that looks like the following. Notice that the yellow exclamation point indicates that not all of the tests were run. This is to be expected in a multi-site cluster because the storage tests are skipped. As long as everything else checks out OK, you can proceed. If the report indicates any other errors, fix the problem, re-run the tests, and continue.
Figure 10 – View the validation report
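If you prefer the command line, the same validation can be run with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2. This is a sketch using the example node names PRIMARY and SECONDARY from this article.

```powershell
# Load the failover clustering cmdlets (required on 2008 R2).
Import-Module FailoverClusters

# -Ignore Storage skips the storage tests, just as unselecting
# Storage in the wizard does; multi-site clusters are expected
# to skip them. An HTML validation report is written to disk.
Test-Cluster -Node PRIMARY,SECONDARY -Ignore Storage
```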
You are now ready to create your cluster. In the Failover Cluster Manager, click on Create a Cluster.
Figure 11 – Create your cluster
The next step asks whether or not you want to validate your cluster. Since you have already done this, you can skip this step. Note that this will pose a bit of a problem later on when installing SQL Server, as setup requires that the cluster has passed validation before proceeding. When we get to that point I will show you how to bypass this check via a command line option in the SQL Server setup. For now, choose No and click Next.
Figure 12 – Skip the validation test
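For reference, the command line option in question is SQL Server setup’s /SkipRules switch. This is only a sketch; the action and the other parameters your installation requires will vary.

```powershell
# Cluster_VerifyForErrors is the setup rule that insists on a clean
# cluster validation report; skipping it lets setup proceed on a
# multi-site cluster that intentionally skipped the storage tests.
.\Setup.exe /SkipRules=Cluster_VerifyForErrors /Action=InstallFailoverCluster
```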
Next, you must choose a name and IP address for administering this cluster. This will be the name that you will use to administer the cluster, not the name of the SQL cluster resource, which you will create later. Enter a unique name and IP address and click Next.
Note: This is also the computer name that will need permission to the File Share Witness as described later in this document.
Figure 13 – Choose a unique name and IP address
Confirm your choices and click Next.
Figure 14 – Confirm your choices
Congratulations! If you have done everything right you will see the following Summary page. Notice the yellow exclamation point; obviously something is not perfect. Click on View Report to find out what the problem may be.
Figure 15 – View the report to find out what the warning is all about
If you view the report, you should see a few lines that look like this.
Figure 16 – Error report
Don’t fret; this is to be expected in a multi-site cluster. Remember we said earlier that we would be implementing a Node and File Share Majority quorum. We will change the quorum type from the current Node Majority (not a good idea in a two-node cluster) to a Node and File Share Majority quorum.
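As an aside, the cluster creation itself can also be scripted rather than run through the wizard. This is a sketch using the example names from this article; the IP address is a made-up value, so substitute your own.

```powershell
# Load the failover clustering cmdlets.
Import-Module FailoverClusters

# Create the cluster with a unique name and administrative IP address,
# equivalent to the Create a Cluster wizard shown above.
New-Cluster -Name MYCLUSTER -Node PRIMARY,SECONDARY -StaticAddress 192.168.1.50
```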
Implementing a Node and File Share Majority quorum
First, we need to identify the server that will hold our File Share Witness. Remember, as we discussed earlier, this File Share Witness should be located in a 3rd location, accessible by both nodes of the cluster. Once you have identified the server, share a folder as you normally would. In my case, I created a share called MYCLUSTER on a server named DEMODC.
The key thing to remember about this share is that you must give the cluster computer account read/write permissions at both the share level and the NTFS level. If you recall from Figure 13, I created my cluster and gave it the name “MYCLUSTER”. You will need to make sure you give the cluster computer account read/write permissions as shown in the following screen shots.
Figure 17 – Make sure you search for Computers
Figure 18 – Give the cluster computer account NTFS permissions
Figure 19 – Give the cluster computer account share level permissions
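The same permissions can be granted from the command line on the server hosting the share (DEMODC in this example). This is a sketch: MYDOMAIN is a placeholder for your domain, and MYCLUSTER$ is the computer account created for the cluster name.

```powershell
# Create the folder and share it, granting the cluster computer
# account change (read/write) access at the share level.
md C:\MYCLUSTER
net share MYCLUSTER=C:\MYCLUSTER /GRANT:"MYDOMAIN\MYCLUSTER$",CHANGE

# NTFS permissions: Modify covers the read/write access the witness
# needs. (OI)(CI) applies the grant to files and subfolders as well.
icacls C:\MYCLUSTER /grant "MYDOMAIN\MYCLUSTER$:(OI)(CI)M"
```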
Now with the shared folder in place and the appropriate permissions assigned, you are ready to change your quorum type. From Failover Cluster Manager, right-click on your cluster, choose More Actions and Configure Cluster Quorum Settings.
Figure 20 – Change your quorum type
On the next screen choose Node and File Share Majority and click Next.
Figure 21 – Choose Node and File Share Majority
In this screen, enter the path to the file share you previously created and click Next.
Figure 22 – Choose your file share witness
Confirm that the information is correct and click Next.
Figure 23 – Click Next to confirm your quorum change to Node and File Share Majority
Assuming you did everything right, you should see the following Summary page.
Figure 24 – A successful quorum change
Now when you view your cluster, the Quorum Configuration should say “Node and File Share Majority” as shown below.
Figure 25 – You now have a Node and File Share Majority quorum
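The quorum change can also be made from PowerShell instead of the wizard. A sketch, using the \\DEMODC\MYCLUSTER share created earlier in this article:

```powershell
# Load the failover clustering cmdlets.
Import-Module FailoverClusters

# Switch the cluster to Node and File Share Majority, pointing at
# the witness share the cluster computer account has access to.
Set-ClusterQuorum -NodeAndFileShareMajority \\DEMODC\MYCLUSTER

# Confirm the change took effect.
Get-ClusterQuorum
```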
The steps I have outlined up until this point apply to any multi-site cluster, whether it is a SQL, Exchange, File Server or other type of failover cluster. The next step in creating a multi-site cluster involves integrating your storage and replication solution into the failover cluster. This step will vary depending upon your replication solution, so you really need to be in close contact with your replication vendor to get it right. In Part 2 of my series, I will illustrate how SteelEye DataKeeper Cluster Edition integrates with Windows Server Failover Clustering to give you an idea of how one replication vendor’s solution works.
Other parts of this series will describe in detail how to install SQL, File Servers and Hyper-V in multi-site clusters. I will also have a post on considerations for multi-node clusters of three or more nodes.
68 thoughts on “Step-by-Step: Configuring a 2-node multi-site cluster on Windows Server 2008 R2 – Part 1”
Very nice document, thank you for sharing. I have a question: can we install a SQL 2008 R2 failover cluster on a Windows domain controller? If not, why can’t we?
Do you want to protect AD or a different service running on your DC?
I’ve done it for 15 years without an issue. The domain, though, is used exclusively for centralized admin accounts, and because clusters need them. If you have a very active domain, you will want to put the domain controller on a different system for scalability. Note that you can no longer do this in Windows 2012 R2. If you promote a cluster node to a DC, the cluster service no longer functions properly. As soon as you remove the DC service, the cluster service functions again as normal. I wish Microsoft had not done this. For my scenario (which is not typical), it makes sense to have the cluster and DC on the same box.
Very informative and helpful…
The blog’s RSS feed does not work in my browser (Google Chrome); how can I fix it?
I use a wordpress.com as my host, I imagine there might be some information on their site? Sorry I couldn’t be more helpful, thanks for reading!
I have a question. What if I have 2 sites: on the primary site I have 2 nodes and on the secondary site I have 2 nodes, with a file share majority. What happens to the VMs if 1 node in the primary site fails? How does SCVMM or the failover clustering service understand that these VMs do not need to start on the secondary site?
You really need to have a look at these articles to understand what node will come into service next. Unfortunately, there is no concept of “site preference”, so it is possible that a remote node will come into service, even if a local node is available. If you need site preference (you do), let Microsoft know!
First – thanks for the excellent articles — they really helped me deal with the fun of installing SQL Server 2008 on a Windows 2008 R2 cluster.
My question is about the binding order of the network adapters in the Windows Cluster install. Why does public have to be first? And if the cluster was built with the private adapter first in the binding order, can the binding order be changed without messing up the cluster configuration?
I know if the public network is not at the top of the binding order, SQL Server 2008 setup will complain. In Windows 2000 and 2003 it definately was a hard requirement in a cluster to have the public network at the top of the binding order, so I think it is ingrained in me to do that by default. I think you will be fine to change the binding order after the fact. I would change it and run Validate again to make sure the cluster is happy with the changes.
Actually, with a Windows 2008 R2 failover cluster, this will all work with just the public network configured. The heartbeat will configure itself within that network; this is automatic within the cluster, so you can now forgo the private network.
That is a really good point. I was about to dispute your claim and say that you needed at least two networks to pass validation but when I looked up the Network requirements in help I discovered the following…
Network adapters and cable (for network communication): The network hardware, like other components in the failover cluster solution, must be marked as “Certified for Windows Server 2008 R2.” If you use iSCSI, your network adapters should be dedicated to either network communication or iSCSI, not both.
In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
So, it certainly can be built with a single public network, but the best practice is to make sure you eliminate all single points of failure on that network as described above.
Thanks for the great comment!
Yes, assuming you have a VPN tunnel or something like that?
Hi, its a very nice article, hats off 🙂
I have a quick question on multi-site clusters. Traditionally we have two networks in a typical cluster: a public network used for client connectivity, and a private network used for heartbeats and the like. Given that we can use different subnets in Windows Server 2008 (R2) failover clustering, how would the private network be configured when we have two datacenters at different geographic locations using different public subnets?
looking forward for the soonest response 🙂 thanks.
Basically with Windows Server 2008 clustering your nodes can now reside in two different subnets. The new functionality is in the IP Resource which allows an “OR” dependency. By defining one IP Resource in subnet “A” and one IP Resource in subnet “B” and creating an “OR” dependency, the IP Resource that is appropriate for that subnet will come into service. This slide presentation describes it nicely starting around slide 22.
This is great for most cluster types but does not work yet for SQL Server resources as it is not supported. And of course a Hyper-V cluster does not have an IP Resource as part of the cluster and instead the IP is part of the VM configuration. In the case of a Hyper-V resource I recommend using DHCP inside the VM and use DHCP reservations instead of static IP addresses.
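To illustrate the mechanism described above, here is a sketch of wiring up an “OR” dependency with the FailoverClusters PowerShell module. The resource names are placeholders; use the actual names shown in Failover Cluster Manager.

```powershell
# Load the failover clustering cmdlets.
Import-Module FailoverClusters

# Make the network name depend on either IP resource, so whichever
# IP is valid for the node's current subnet can bring it online.
Set-ClusterResourceDependency "SQL Network Name" `
    "[IP Address A] or [IP Address B]"
```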
Great info. Looks like someone is trying to pass it off as their own work. Take a look. In Part 2, he even left some of the links to your own site.
I guess it is a lot easier to copy other’s content than to write your own. Thanks for the nice comment and letting me know about this other site. I’m sure most of his material is probably “borrowed” from others and he is just trying to make a buck on advertising or something along those lines.
Quick question: during the creation of the cluster it asks for the Network and Address. My question is, if I have 2 servers I’m adding to the cluster, let’s call them Node1 and Node2, configured as such
When setting up the cluster the Networks are prepopulated with 192.168.1.0/24 and the Address is left for me to determine. So if I set the address as 192.168.1.4/24, am I still going to be able to connect to Node1 and Node 2 on their 192.168.1.2 and 192.168.1.3 addresses?
Also being that this is for Hyper-V what will happen to any existing VM’s that are currently running as I setup the clustering?
Yes, you will still be able to access the cluster nodes via their static IPs. As far as your VMs, simply clustering them does not change any of the IP information within the VM’s.
Regarding the file placement of the file share witness, can it be a DFS share?
It is not supported to put a FSW on a DFS share. Here is a good explanation why…
Thanks for this series of articles on multi-site clustering; very informative.
What are the minimum requirements in terms of network latency and bandwidth for a multi-site cluster? Would appreciate numbers for both async and sync modes.
In a 2-node primary site and 2-node secondary site scenario, can I place the File Share Quorum in the secondary site? How about a 2-node primary site and 1-node secondary site scenario?
Network latency and bandwidth requirements will be determined by your replication vendor. In general, synchronous replication will require LAN-like speed and latency in order to minimize the impact on the write throughput of your application. In terms of DataKeeper replication, latency really has no impact, as it has built-in WAN acceleration. The link speed required will be determined by your rate of change, which can be measured by using Perfmon to collect Disk Write Bytes/sec on the replicated volumes. If you collect that and assume you are going to minimally get a 50% compression ratio, you will have a pretty good idea of your bandwidth requirements.
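The arithmetic above can be sketched like this; the Perfmon figure is a made-up example, and the 50% compression ratio is the assumption stated in the reply.

```powershell
# Take the peak Disk Write Bytes/sec observed in Perfmon on the
# replicated volume, apply the assumed 50% compression ratio, and
# convert bytes/sec to megabits/sec to size the WAN link.
$peakWriteBytesPerSec = 4MB        # example Perfmon measurement
$compressionRatio     = 0.5        # assumption from the text

$requiredMbps = ($peakWriteBytesPerSec * $compressionRatio * 8) / 1MB
"Estimated link requirement: {0} Mbps" -f $requiredMbps
```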
Putting your FSW in the second site is not a good idea, as a failure of your WAN will cause a failover even though your primary site is still online. A 3-node cluster with 2 in SiteA and 1 in SiteB is OK, but if you use a simple node majority quorum, which is customary for a 3-node cluster, you will have to force the quorum online if you lose SiteA. However, there was a patch released recently that will allow you to adjust the node weight so that you can use a FSW in a 3-node cluster. I wrote an article about it here.
Regardless of how you slice it or dice it the FSW must be in a 3rd location in a multisite cluster in order to support automated failover without risking the false failovers associated with having the FSW in SiteB.
Hope that helps!
Thank you so much for your quick reply; truly appreciated. I will let you know if I have any further questions.
The roundtrip communication latency between any pair of nodes cannot exceed 500 milliseconds. If communication latency exceeds this limit, the Cluster service assumes the node has failed and will potentially fail over resources.
An FSW on a server at a third site that has a network path to both sites is a good idea if possible.
While the 500 millisecond rule is true for Windows 2003 clusters, with Windows 2008 they did away with that restriction and added tunable parameters called CrossSubnetDelay and CrossSubnetThreshold, as described in the following articles.
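As a sketch, those tunables can be read and adjusted through PowerShell; the values set below are illustrative only, so tune them to your actual WAN latency.

```powershell
# Load the failover clustering cmdlets.
Import-Module FailoverClusters

# CrossSubnetDelay is the interval (ms) between heartbeats sent to
# nodes in other subnets; CrossSubnetThreshold is how many missed
# heartbeats are tolerated before a node is considered down.
$cluster = Get-Cluster
$cluster.CrossSubnetDelay       # inspect the current value
$cluster.CrossSubnetThreshold

# Example: loosen the settings for a higher-latency WAN link.
$cluster.CrossSubnetDelay = 2000
$cluster.CrossSubnetThreshold = 10
```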
Yes, assuming you are connected via VPN to this site from the other two data centers. I imagine cloud providers that offer infrastructure as a service can help you get this set up.
Hi, does anyone know why one of the nodes fails to start? It has been functioning for a long time and now fails…
You are probably better off posting this question in the Microsoft Clustering Forum with a lot more detail!
First of all thank you for a really good article.
I have followed your instructions to the letter, but when I come to adding in the quorum share I get this error message: Unable to save property changes for “File Share Witness”; the system cannot find the file specified. I made sure that all permissions were set correctly, and I have tried with 2 different machines, but still no joy.
It sounds like a permission problem still. At the share level, give Everyone Full Control. Then at the NTFS security permissions you must add the cluster name (this is the virtual name of the cluster that you picked when you created the cluster) and give it READ/WRITE access.
If you still have problems let me know and I’d be glad to set up a remote session to have a look.
Oh man, you’re the hero of the day 🙂 that was the biscuit 🙂 In my *nix world I gave everything 777 rights and voila, the quorum disk was created.
If I can assist you at all in anyways then please feel free to ping me
We are planning on changing IP addresses for SQL, Enterprise SSO and MSDTC services which are clustered on a 2-node Windows failover cluster. My plan is to update the network names in DNS and then update the same IP addresses for each of the clustered resources in the Failover Cluster Management console. Please advise what would be the best way to achieve this? Thanks.
Just to say thank for a great article. I approached clustering a SQL server with some doubts but your article explained it very well.
Thanks for the note!
Can I follow these instructions and set up a multi-node cluster on Amazon Web Services EC2?
With kind regards,
The last time I looked at EC2 there were two problems that needed to be resolved. The first issue is that building a multi-site cluster requires static IP addresses, which seemed hard if not impossible to implement in EC2. The second is that the servers were not “persistent”, meaning that a reboot of the server essentially reset it to its original state. I think maybe EBS has fixed this, but you need to make sure your whole VM is persistent between reboots.
This whole process is a lot easier in other cloud providers like GoGrid who give you true Infrastructure as a Service with all the features you need. Now that they have multiple data centers in different geographic locations it is also an ideal configuration for geographically dispersed clusters.
Wow, that was quick. Thank you for your reply. I have looked into GoGrid, but they are expensive and do not offer a “Pay As You Go” option. I am a mere engineer who is looking for a way to run a 64-CPU, Windows-based cluster when I need it. With kind regards and Happy Holidays,
I was wondering if you can help me, please. I have just built a site-to-site Windows Server 2008 R2 cluster using EMC VNX with MirrorView, but when I tried to carry out a resource failover I got RHS errors (DLL deadlocks). What do you think the problem could be, and how can it be resolved?
Sounds like it might have to do with the storage resource. If you have support from EMC I would see if they have any idea. John Toner is another cluster MVP and works for EMC. He knows EMC storage and clustering better than anyone. I would check out his blog http://msmvps.com/blogs/jtoner/archive/2007/04/27/emc-srdf-ce-for-mscs.aspx and he also monitors the cluster forum at http://social.technet.microsoft.com/Forums/en-US/winserverClustering/threads
I would probably open a case with Microsoft as DLL Deadlocks can be hard to diagnose. You can also post on the Microsoft forum, but opening a case will get you the quickest answer. Sorry I couldn’t be of more help.
Great and excellent site, cluster folks.
Hi Dave, very nice article. Still trying to read the remaining sections, but I have a quick question: can a DFS share between the sites be used as the File Share Witness for the cluster?
Sorry, an FSW cannot be part of DFS-R; it kind of defeats the purpose of a witness, as if the sites lose connectivity you could have two witnesses reporting two different things.
So if your solution is limited to two sites, then Option 1 will be the viable option to implement, placing it in the primary site, which involves some manual intervention?
Excellent article, many thanks for documenting the entire configuration process. I am planning to set up a 4-node SQL 2008 R2 geo-cluster on a Windows 2008 R2 geo-cluster: 2 nodes on Site1 and 2 more on Site2.
1) I have heard that it is recommended to have a greater number of nodes on a specific site (to configure a node majority); is this true?
2) Also, I have heard that we can use local disks and then have them replicated through the storage layer (no SAN needed); is this true? I will use EMC NetWorker for SAN replication.
I have installed a lot of multi-node failover clusters, but this geo-clustering is new to me. We would use a stretched VLAN so that the nodes in the different sites are in the same subnet. Please advise.
If you want a 2 x 2 cluster, you will need to use a File Share Witness (FSW). In order to have automated failover, the File Share Witness MUST reside in a 3rd location. If you don’t have a 3rd location, then just put the FSW in the primary location, but you will have to force the quorum online should you have a complete site loss.
Thanks a bunch for the reply Dave, appreciate your assistance and guidance!
A few more questions :
1) How easy is it to force a quorum online in case of a site loss?
2) I need more details on the failover process in case of:
(a) Failover within the same site
(b) Failover across sites
AFAIK, the above failovers are not similar to single-site clusters, wherein we have shared disks; here we would have to depend on the storage replication solution for failover. Correct me if I am wrong. If that is the case, then how will the failover happen “automatically” if the FSW is placed in a 3rd location? Won’t we have to manually mount the disks in the 2nd site in case the 1st site is down (and vice versa)? Please advise. My apologies if I have asked too many questions.
Anyone doing this with NetApp SnapMirror? I already have a D drive on both nodes, but in Cluster Manager I am unable to add these in as storage. Any idea?
Not SnapMirror. You can do something similar with MetroCluster, but from what I understand it is synchronous only, which will certainly have an impact when replicating across the WAN. DataKeeper Cluster Edition will work with your NetApp storage and can integrate with failover clustering to provide a multi-site cluster solution with either synchronous or asynchronous replication.
I have an existing 2008R2 failover cluster that is running a SQL 2008 database. I need to add a server at another site that is updated in real-time. Any suggestions on how to do this?
You should probably look into replication solutions like DataKeeper or any of the native SQL Server replication solutions, like Database Mirroring
No, Express does not support clustering
Thanks for the nice document; it helped me very well to set up my environment. In my environment we use EMC RecoverPoint to replicate the data between the sites, and SQL Server 2012 with EMC CE for multi-site clustering. Everything works well; I am able to fail over SQL Server without any issue.
But I just realized that when we ping the SQL resource name from a client PC or application server, it returns either the Site A IP or the Site B IP (even though Site B is offline). I think this is because of round robin at the DNS level. So the problem is that when we have multiple application servers, one application works fine and another application server comes up with a SQL timeout error.
Do you know any solution for this? I read there is a property “MultiSubnetFailover=True” in the application layer to resolve this, but what if the application does not support this feature? Is there any workaround at the DNS level to forward between IP addresses, or any way to suspend the round robin for this cluster resource name?
Please help me out.
You probably want to turn off RegisterAllProvidersIP so only the active node is in DNS. This article should help: http://blogs.msdn.com/b/clustering/archive/2009/07/17/9836756.aspx
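A sketch of that change from PowerShell follows; “SQL Network Name (MYSQLCLUSTER)” is a placeholder for your actual SQL network name resource.

```powershell
# Load the failover clustering cmdlets.
Import-Module FailoverClusters

# Stop registering every provider IP in DNS; only the online IP
# will be published once the resource is cycled.
Get-ClusterResource "SQL Network Name (MYSQLCLUSTER)" |
    Set-ClusterParameter RegisterAllProvidersIP 0

# The network name must be taken offline and back online for the
# parameter change to take effect.
Stop-ClusterResource "SQL Network Name (MYSQLCLUSTER)"
Start-ClusterResource "SQL Network Name (MYSQLCLUSTER)"
```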
Hi, I’ve set up a 3-node cluster (for testing): 2 nodes in one city and 1 node in City2. The node in City2 has a D:\ which will not come online. Basically, I want users in City2 to use the role name \\fileservercluster\share and access the D:\ on the node in their respective location. However, users in the main city will not use this D:\. Is this possible?
Yes, but you will need something like DataKeeper in order to make this possible.
A couple of things: 1) How are you configuring EC2 to host the FSW? VPN? Public file share?
2) What happens when the FSW goes down but the 2 nodes are okay and operational?
The FSW is only one vote out of three, so if it fails you still have two votes and nothing will happen. The FSW needs to be hosted on a server where the cluster computer object has read/write access. Coming in Windows Server 2016, you can have a Cloud Witness, which leverages the Azure File Service over the public network.
In the meantime, how would you connect to an EC2 Instance? Is it Windows? Do you just open the firewall port for file sharing, or do you setup a VPN between EC2 and the 2 sites?