Hyper-V Cluster w/V-SAN for SQL Server

A virtualized playground providing parallel SQL Server 2005, 2008 R2, and 2012 instances side by side, hosted on a three-node failover cluster. The experimentation possible with this lab is extensive – including any variant of AlwaysOn Availability Groups interacting with failover cluster instances (which is a pretty interesting topic).

Architecture

  • Parallel Windows 2008 R2 and Windows 2012 clusters.
  • SQL 2005, 2008 R2, and 2012 instances running side by side.
  • Active/Active/Active cluster load model.
  • Windows 2008 R2 fully virtualized iSCSI SAN leveraging VHD files.
  • Architectural reference for SQL Server consolidation.

The lab creates two three-node virtual clusters plus two AD machines acting as virtual SANs and gateways for the clusters. There are two network paths – a host network and an internal cluster network. The design is outlined below.

Figure1.png

I found I needed this lab to help refine recommendations for implementing SQL Server with clustering. Availability Groups have several idiosyncrasies when used with failover cluster instances, and the lab lets any combination of them be tested.

The platform provided by the lab has matured through several projects and is good practice for fault-tolerant, highly available SQL Server installations.

Containing everything needed to set up and experiment with Windows Failover Clustering Services (WFCS), the lab provides insight you can’t get any other way.

The three cluster nodes in two clusters enable scenarios which are not apparent until you work through the nuances of Availability Groups used with WFCS/Failover Cluster Instances. Three-node clusters let you run instances on different subsets of nodes, which allows the same cluster to act as both the source and destination of an Availability Group. You can also install a stand-alone copy of SQL Server on one of the nodes to experiment with FCI and non-FCI instance interaction. Install two SQL 2012 instances and experiment with the interactions between them, stand-alone instances, and Availability Groups.

Building the Clusters

Building the lab is fairly straightforward; give yourself a day to perform all of the installation. SQL Server installs easily on a cluster when everything is working, but can be difficult to troubleshoot. The install takes a while, and if you follow the recommendations you will be installing a total of 18 copies of SQL Server (three versions across three nodes in two clusters).

The clusters are mirrors of one another, so we’ll set them up together. Windows 2008 R2 and Windows 2012 are similar in implementation; I’ll document where they differ significantly.

You are not required to implement both clusters – feel free to follow the instructions to create either – just ignore the other.

I will write another blog entry on the specifics of SQL Server with clustering; only an overview of installation is provided here.

Download the 180-day trial of Windows Server 2008 R2 64-bit and Windows Server 2012 64-bit, or use your own licensed copies. We will create eight virtual machines – four forming each cluster – plus one sacrificed to make each operating system’s differencing disk (to conserve storage space). All cluster nodes are built on the same underlying differencing disk. If you download the VHD version of the operating system, or already have a VHD, skip the differencing disk steps up to sysprep and use the VHD as the base disk.

Enable Hyper-V

Enable Hyper-V by going to Programs and Features, Turn Windows features on or off. You will likely need to reboot.

Figure4.png
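If your host runs Windows 8 or Windows Server 2012, the same feature can be enabled from an elevated PowerShell prompt – a sketch; the GUI route above works equally well:

```powershell
# Enable Hyper-V and all of its sub-features; a reboot will be required
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```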

Open the Hyper-V Manager.

Figure2.png

Configure Virtual Switch

In Hyper-V Manager, click Virtual Switch Manager to open it. Create two networks for the cluster: a bridged switch which the NAT routers will use to provide internet access to the cluster, and an internal switch for cluster communications.

Figure3.png

Create a bridged network called “Host Network”, and an internal network called “Cluster Network”.
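With the Hyper-V PowerShell module, the two switches can also be created as follows – a sketch, where the adapter name “Ethernet” is a placeholder for your host’s physical NIC:

```powershell
# External (bridged) switch - gives the NAT routers a path to the internet
New-VMSwitch -Name "Host Network" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Internal switch - carries cluster-only traffic between the VMs and the host
New-VMSwitch -Name "Cluster Network" -SwitchType Internal
```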

Configure Internal Host Adapter

Configuring the switch in Hyper-V creates a network adapter for the internal network on the host PC. You’ll likely want your host to connect to your clusters, so configure the adapter with fixed IPs that the cluster services will support.

Figure5.png
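As a sketch, fixing the host-side adapter’s address can be scripted; 10.1.1.100 is an assumed example on this lab’s 10.1.1.0/24 subnet – pick any unused address:

```powershell
# The internal switch surfaces as "vEthernet (Cluster Network)" on the host
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster Network)" `
    -IPAddress 10.1.1.100 -PrefixLength 24
```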

Create Differencing Disks

If you downloaded the VHD version of the operating system, skip to sysprep below. For ISO installs we want to create a sacrificial virtual machine to use as the source of a differencing disk. This allows one install of the operating system to be shared among all the machines in each cluster without duplication.

Create a new virtual machine – note where its hard drive is stored, since we will repurpose it later.

Mount the operating system image you downloaded to the virtual machine’s DVD drive.

Run the install for Windows Server 2008R2 or Windows Server 2012 with defaults.

Download and Install all updates from Microsoft.

Once the VM has all the updates installed we need to seal the image and make it ready to act as the base of a new PC. Run sysprep.

After running sysprep you will not open the virtual machine again – it is the final step in creating the base differencing disk. Our cluster VMs will start from this image.

Once the virtual machine has shut down, copy the virtual hard disk to a new directory and mark it read-only.

You will use it as the base disk for the other machines you create.
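The seal-and-branch sequence looks roughly like this – a sketch with example paths; New-VHD requires the Hyper-V module on a Windows 8 / Server 2012 host, so on earlier hosts use the New Virtual Hard Disk wizard instead:

```powershell
# Inside the sacrificial VM: generalize the image and shut down
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

# On the host, after copying the VHD and marking it read-only:
# branch a differencing disk off the base for each future node
New-VHD -Path "D:\VMs\08-WFCS1.vhd" -ParentPath "D:\Base\2008R2-Base.vhd" -Differencing
```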

Create Virtual Machines

Create eight new VMs, bypassing hard drive creation in the wizard. We will add drives manually so we get the option to use differencing disks.

Once the eight machines are created, manually add a new virtual hard drive to each, creating a new differencing VHD that uses the read-only disk we made as its base.

Figure6.png
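A sketch of the per-node VM creation with the Hyper-V cmdlets; names, memory size, and paths are examples:

```powershell
# Create the VM without a disk, then attach the differencing disk made earlier
New-VM -Name "08-WFCS1" -MemoryStartupBytes 2GB -SwitchName "Cluster Network" -NoVHD
Add-VMHardDiskDrive -VMName "08-WFCS1" -Path "D:\VMs\08-WFCS1.vhd"
```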

On four machines select your Windows 2008 R2 base disk, on four use your Windows Server 2012 base disk.

Create Domain Controllers

On two virtual machines install two network cards – one connected to the Host Network, one connected to the Cluster Network. Name these machines 08-ADSAN and 12-ADSAN. Assign 08-ADSAN a fixed IP of 10.1.1.1 on the Cluster Network, and 12-ADSAN 10.1.1.2. Leave the Host Network adapter on auto-detect. These machines host the Active Directory domain, the V-SANs, and routing for the clusters.

On the ADSANs the SAN VHD files need a second hard drive with enough space, so create one. For testing purposes I generally allocate 40 GB and make four 10 GB VHD files for instances.

On 08-ADSAN, add the Active Directory Domain Services role and run dcpromo, creating a new forest.

Figure7.png

Name the domain vcluster.local. Choose to enable DNS.

Add the Active Directory Domain Services role to 12-ADSAN, run dcpromo, join the existing domain vcluster.local, and enable it to act as a secondary domain controller.

Figure11.png
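On Windows Server 2012, dcpromo is deprecated in favor of PowerShell; a sketch of promoting 12-ADSAN as an additional domain controller for the lab domain:

```powershell
# Install the role, then join the existing vcluster.local forest as a second DC
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName "vcluster.local" -InstallDns `
    -Credential (Get-Credential VCLUSTER\Administrator)
```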

Create Virtual SANs

On Windows 2008 R2, install the iSCSI Software Target, a tool which allows us to mimic an iSCSI SAN virtually for the clustered environment. On Windows Server 2012 the iSCSI target is built in but needs to be enabled under Roles… File and Storage Services.

Install the iSCSI Software target on 08-ADSAN, and navigate to it under administrative tools.

Once open, right-click and create a new iSCSI target called VSAN.

Under devices, right-click and add a new virtual disk file located on the second drive you added to 08-ADSAN for the SAN.

It is recommended you create a minimum of four virtual disks – one for each of the three active nodes/instances of SQL Server (supporting the active/active/active cluster), and a quorum drive if you choose.

Figure9.png

On ADSAN we created a VSAN which each node will connect to.

In the end we will have an endpoint registered here for each node in the cluster. It will look like below.

You can configure the entries manually, or they will auto-populate as each node tries to connect using its iSCSI Initiator.

Figure10.png

Note the iSCSI Initiators list holds references to the client machines which access the endpoint, so there should be three entries on each ADSAN machine – one for each cluster node. Your list should match the one below.

In Windows 2012 you need to enable the iSCSI target under the File and Storage Services role. Once enabled you will have iSCSI listed under File and Storage management.

Figure12.png

Create a target with several stores and authorize access to the IQN entries of the cluster nodes.

Figure13.png
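On Windows Server 2012 the target and its backing disks can be scripted; the IQN and paths below are examples – repeat the disk/mapping pair for each store:

```powershell
# Create the target, authorizing one node's IQN (add the others the same way)
New-IscsiServerTarget -TargetName "VSAN" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:08-wfcs1.vcluster.local"

# Create a backing virtual disk and map it to the target
New-IscsiVirtualDisk -Path "E:\VSAN\Storage1.vhd" -SizeBytes 10GB
Add-IscsiVirtualDiskTargetMapping -TargetName "VSAN" -Path "E:\VSAN\Storage1.vhd"
```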

Provision Cluster Nodes

Name three machines 08-WFCS1 through 08-WFCS3 and three machines 12-WFCS1 through 12-WFCS3. Assign their network cards to the Cluster Network and give them the following IPs.

On the 08-* machines set the gateway to 10.1.1.1, DNS Servers to 10.1.1.1, 10.1.1.2, and subnet mask to 255.255.255.0. Assign each 08-* machine IP as below.

  • 08-WFCS1 10.1.1.20
  • 08-WFCS2 10.1.1.21
  • 08-WFCS3 10.1.1.22

On the 12-* machines set the gateway to 10.1.1.2, DNS Servers to 10.1.1.2, 10.1.1.1, and a subnet mask of 255.255.255.0. Assign each 12-* machine IP as below.

  • 12-WFCS1 10.1.1.30
  • 12-WFCS2 10.1.1.31
  • 12-WFCS3 10.1.1.32

Join the machines to the vcluster.local domain.
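On the Windows 2012 nodes this addressing and domain join can be scripted (the 2008 R2 nodes lack these cmdlets – use netsh or the GUI there). A sketch for 12-WFCS1, using the addresses from the lists above:

```powershell
# Example for 12-WFCS1; repeat with each node's address from the list above
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 10.1.1.30 `
    -PrefixLength 24 -DefaultGateway 10.1.1.2
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.1.1.2, 10.1.1.1

# Join the lab domain and reboot
Add-Computer -DomainName "vcluster.local" -Restart
```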

Configure iSCSI Initiator

Go to Administrative Tools, iSCSI Initiator. On the 08-* machines enter 10.1.1.1, on the 12-* machines 10.1.1.2, and click Quick Connect.

Figure14.png

Click auto-configure to attach the virtual hard drives on VSAN to endpoints locally.

Wait a moment for the servers to communicate drive entries. Once auto-configure lists the drives under the volume list, click OK.

Figure15.png

Close the iSCSI Initiator.
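On the 2012 nodes the initiator steps can also be scripted – a sketch; the 08-* nodes point at 10.1.1.1 instead:

```powershell
# Register the ADSAN portal, then connect every discovered target persistently
New-IscsiTargetPortal -TargetPortalAddress 10.1.1.2
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```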

Setup Failover Clustering

Go to the Server Manager, and install the Failover Clustering Feature.

Figure16.png
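The one-line equivalent on Windows Server 2012 (on 2008 R2, Add-WindowsFeature Failover-Clustering from the ServerManager module does the same):

```powershell
# Install the feature plus the Failover Cluster Manager tools
Install-WindowsFeature Failover-Clustering -IncludeManagementTools
```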

Configure Routing and Remote Access

Configure the ADSAN routing to provide gateway services for the network. These instructions are the same for Windows 2008R2 and Windows 2012.

Administrative Tools -> Routing and Remote Access

Choose NAT, next, and connect the public interface to the Host Network.

Figure17.png

Bind the Cluster Network to the private side of the NAT router.

Figure18.png

Bind the external network to the public side of the router.

Figure19.png

Building the Cluster

There will be two clusters – one for the 2008R2 machines, and one using 2012 machines. Enter the Failover Cluster Manager from Administrative Tools on 08-WFCS1 and 12-WFCS1.

Figure20.png

In each, click Create Cluster and add the three WFCS nodes to the cluster.

Figure21.png

Walk through the wizard, enabling all tests. Once the tests complete successfully you’ll get a passing cluster validation report.

Call your 2008R2 cluster virtualcluster, which will fully qualify to virtualcluster.vcluster.local (10.1.1.10), and your 2012 cluster virtualcluster2012.vcluster.local (10.1.1.11).
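The validation and creation steps can equally be run from PowerShell on the 2012 cluster – a sketch using this lab’s node names and cluster IP:

```powershell
# Run all validation tests, then form the cluster with its static address
Test-Cluster -Node 12-WFCS1, 12-WFCS2, 12-WFCS3
New-Cluster -Name virtualcluster2012 -Node 12-WFCS1, 12-WFCS2, 12-WFCS3 `
    -StaticAddress 10.1.1.11
```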

And complete.

Figure22.png

The storage configuration in Cluster Manager is illustrated below. You will have the four disks you created on the iSCSI target VSAN listed as available storage, since you added 10.1.1.1/10.1.1.2 to your iSCSI Initiator.

Figure23.png

With the drives exposed we can install instances of SQL Server.

Installing SQL Server

I chose to install SQL Server 2012 first, and I install only named instances on clusters. I also install only the latest version of SQL Server Management Studio, and like to have it available first. I assume most readers know how to install SQL Server, so I’ll leave the gritty details beyond the scope of this blog (perhaps another) – however, where the install deviates for clusters I make note. Note that SQL 2008 R2 and SQL 2012 are far easier cluster installs than SQL 2005.

Figure25.png

With 2008 and 2012, instead of installing a stand-alone server, you’re installing the failover cluster installation from the first node (08-WFCS1 and 12-WFCS1).

Subsequent nodes are installed using the “Add node to a SQL Server failover cluster” option.

Choose the storage for the instance you’re installing. Storage 1 for 2012, Storage 2 for 2008R2, Storage 3 for 2005.

The cluster install differs between 2005 and 2008/2012. In SQL 2005 you run the install from the active node, and it deploys to all the nodes. Be sure to apply SQL 2005’s latest service pack before using the cluster.

In 2008 and beyond, for each instance of SQL Server you wish to install, you repeat setup – install the initial node as the first failover cluster node, and additional nodes as additional failover cluster nodes. Use one storage location per instance.
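SQL Server 2008 and later also support this from the command line. A heavily abbreviated sketch – the instance name is an example, and many required parameters (service accounts, cluster network name, IP, storage) are omitted:

```powershell
# First node: integrated failover cluster install
.\setup.exe /ACTION=InstallFailoverCluster /INSTANCENAME=SQL2012 /FEATURES=SQLENGINE

# Each additional node: join the existing failover cluster instance
.\setup.exe /ACTION=AddNode /INSTANCENAME=SQL2012
```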

Configuring the Cluster

Once you have installed all instances on all nodes, open the Cluster Manager. Set preferred owners so each instance favors a different node, distributing the load by server version. Set Allow Failback to immediate or to an interval. This keeps the nodes distributed and equally active when not in failover.

Configure MSDTC if your environment requires it – it is best practice.
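With the FailoverClusters module, preferred owners and failback can be scripted; “SQL Server (SQL2012)” is an assumed role name – substitute whatever your instance’s cluster group is actually called:

```powershell
# Prefer a specific node for this instance's role
Set-ClusterOwnerNode -Group "SQL Server (SQL2012)" -Owners 12-WFCS1

# Allow the role to fail back to its preferred owner
$group = Get-ClusterGroup "SQL Server (SQL2012)"
$group.AutoFailbackType = 1   # 1 = failback enabled
```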

End Product

The end result of the exercise is a very functional virtual lab with multiple instances of SQL Server hosted in an active/active/active configuration. It is ideal for experimenting with the nuances of SQL Server Always-On, and how FCI and Availability Groups interact.

Built out, your cluster manager should look similar to the one below!

I recommend SQL 2005, 2008 R2, and 2012 instances, each homed to node 1, node 2, and node 3 respectively for maximum versatility. Explore replicating between nodes in the cluster with different allowed owners: if you make one instance exclusive to node 2 and another exclusive to node 3, you can use AlwaysOn Availability Groups between two FCI instances. In theory this lets two FCIs also replicate to each other using Availability Groups.

There are interesting permutations available if you change the nodes which are allowed ownership of an instance – for example, Availability Groups do work between instances of the same cluster as long as the two instances cannot be owned by the same node.
