Hyper-V Core Cluster Home Lab Setup

This article will explore the setup and configuration of a Hyper-V Core cluster and the creation of high availability VMs.

Hyper-V Server 2012 R2 (also referred to by some as Hyper-V Core) is a free standalone virtualization platform offered by Microsoft. This free offering sports the same feature set and scalability available in the Hyper-V role of a full Server 2012 R2 OS install. That makes it a viable and attractive solution not only for organizations but also for Home Lab setups, where budget is often an important factor. It should be noted that while Hyper-V Server 2012 R2 itself is free, the infrastructure required for cluster creation is not.

I’ve come across a few articles stating how quickly and easily you can set up and configure a Hyper-V Core cluster. In my experience, the setup and configuration process is both lengthy and somewhat complex. If you’re viewing this write-up in the hopes of finding a few one-liners and a 15-minute solution, you’re going to be disappointed. This guide will hopefully reduce the complexity of the topic and speed you along, but a Hyper-V Core cluster setup is still an involved process with a lot to configure.


Requirements for Hyper-V Cluster:
  • 2 physical computers for Hyper-V hosts
    • matching computers that contain the same or similar components
  • Shared storage solution – this example utilizes a separate physical server for iSCSI
  • Domain/DNS infrastructure – VM or physical
  • Network connectivity – It is recommended to have a minimum of two networks for your failover cluster

For full requirement details you can reference this Technet article: Deploy a Hyper-V Cluster

Aidan Finn also has a really detailed requirements and planning write-up: Rough Guide To Setting Up A Hyper-V Cluster

If you are looking for hardware inspiration check out Gareth’s two posts:

The Home Hyper-V Test Lab – On a Budget (Part 1)
The Home Hyper-V Test Lab – On a Budget (Part 2)


Configuration used in this Hyper-V Core Cluster guide:

  • Cluster nodes
    • 2x matching Dell PowerEdge R710 2×2.26GHz Quad Core – 32GB – used from eBay
  • Shared storage solution – iSCSI
    • Synology DS1815+ – 4xSSD RAID10 VM storage – 4x3TB WD Reds RAID5 for additional guest storage
  • Domain/DNS
    • VMWare hosted Domain Controller with DNS role – running on custom build 3.2GHz quad xeon – 32GB
  • Network connectivity
    • 2x HP 1810-24G v2 Switch – you could get away with using only one; it would just be a single point of failure

Note: This is a guide for a Home Lab configuration. It’ll give you an idea of how a production environment might be set up, but it does not highlight or utilize many of the best practices that you should be implementing for a production deployment.

Step 1: Get the Hyper-V Server 2012 R2 ISO and install onto both cluster nodes

ISO download: Hyper-V Server 2012 R2

Hyper-V Server Configuration
Step 2: Rename both nodes

I named mine: HYPV1 and HYPV2

Hyper-V Server Configuration – Setting Computer Name
Step 3: Enable Remote desktop on both nodes
Hyper-V Server Configuration – Remote Desktop Setting
Step 4: Disable (at least temporarily) the Windows Firewall on both nodes

We are going to be making a ton of configuration and setup changes below. If something isn’t working right during initial setup, having the firewall off will help when troubleshooting issues. It can always be re-enabled later.

netsh advfirewall set allprofiles state off
Step 5: Adjust the page file on both nodes – as needed

Your Hyper-V nodes are (hopefully) loaded up with a good bit of RAM. As a result, the page file default settings may have created a rather huge page file. You can check whether this is the case with the first command, and then adjust as needed using the second.

#Get PageFile Initial and Max sizes
$PageFileInfo = Get-WmiObject Win32_PageFileSetting | Select InitialSize, MaximumSize
Write-Host "Initial: $($PageFileInfo.InitialSize)`nMax: $($PageFileInfo.MaximumSize)"

#Set Page File
$PageFile = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
$PageFile.AutomaticManagedPagefile = $False
$PageFile.Put()
$NewPageFile = gwmi -Query "select * from Win32_PageFileSetting where name='C:\\pagefile.sys'"
$NewPageFile.InitialSize = [int]"6144"
$NewPageFile.MaximumSize = [int]"6144"
$NewPageFile.Put()
Step 6: Create NIC Team on both nodes

You will often see recommendations to create two NIC teams – one being dedicated for iSCSI traffic.  I elected to team all 4 NICs using the minimum bandwidth setting for Management, Cluster, ISCSI, VM, and LiveMigration traffic.  I highly recommend that you take a few moments to watch John Savill’s discussion on this method of teaming: Using NIC Teaming and a virtual switch for Windows Server 2012 host networking

#---------------------HYPV1----------------------------
# Gather the names of all available NICs and add them to the team
$NICname = Get-NetAdapter | ForEach-Object {$_.Name}
New-NetLbfoTeam -Name Hyp1Team -TeamMembers $NICname -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
#-------------------END HYPV1---------------------------
#---------------------HYPV2----------------------------
# Gather the names of all available NICs and add them to the team
$NICname = Get-NetAdapter | ForEach-Object {$_.Name}
New-NetLbfoTeam -Name Hyp2Team -TeamMembers $NICname -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
#-------------------END HYPV2---------------------------
Step 7: Hyper-V switch configuration

The below may seem a little daunting at first glance. While there is a lot going on, note that much of it repeats as we perform the same set of very simple steps for each vNIC.

We are creating a new Hyper-V switch, and then we are creating a separate vNIC for each type of traffic: Management, Cluster, iSCSI, VM, and LiveMigration. Each of these gets its own subnet, VLAN ID, and MinimumBandwidthWeight.

Adjust accordingly to fit the current configuration demands of your own lab.

#---------------------HYPV1----------------------------
# Create new switch
New-VMSwitch -Name HypVSwitch -NetAdapterName Hyp1Team -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create vNICs on VSwitch

# Management1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Management1 -SwitchName HypVSwitch
# Set Management1 to VLAN1
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management1 -Access -VlanId 1
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (Management1)" -NewName Management1
# IP/subnet
New-NetIPAddress -InterfaceAlias Management1 -IPAddress 192.168.1.191 -PrefixLength 24 -DefaultGateway 192.168.1.250 -Confirm:$false
Set-DnsClientServerAddress -InterfaceAlias Management1 -ServerAddresses 192.168.1.189, 192.168.1.188, 192.168.1.250
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name Management1 -MinimumBandwidthWeight 10

# Cluster1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster1 -SwitchName HypVSwitch
# Set Cluster1 to VLAN4
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster1 -Access -VlanId 4
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (Cluster1)" -NewName Cluster1
# IP/subnet
New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.20 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name Cluster1 -MinimumBandwidthWeight 15

# iSCSI1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name iSCSI1 -SwitchName HypVSwitch
# Set iSCSI1 to VLAN3
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName iSCSI1 -Access -VlanId 3
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (iSCSI1)" -NewName iSCSI1
# IP/subnet
New-NetIPAddress -InterfaceAlias iSCSI1 -IPAddress 10.0.1.20 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name iSCSI1 -MinimumBandwidthWeight 30

# VM1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name VM1 -SwitchName HypVSwitch
# Set VM1 to VLAN2
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName VM1 -Access -VlanId 2
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (VM1)" -NewName VM1
# IP/subnet
New-NetIPAddress -InterfaceAlias VM1 -IPAddress 10.0.3.20 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name VM1 -MinimumBandwidthWeight 30

# LM1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name LM1 -SwitchName HypVSwitch
# Set LM1 to VLAN5
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName LM1 -Access -VlanId 5
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (LM1)" -NewName LM1
# IP/subnet
New-NetIPAddress -InterfaceAlias LM1 -IPAddress 10.0.4.20 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name LM1 -MinimumBandwidthWeight 15

#for removing: Set-VMNetworkAdapterVlan -Untagged
#-------------------END HYPV1---------------------------
#---------------------HYPV2----------------------------
# Create new switch
New-VMSwitch -Name HypVSwitch -NetAdapterName Hyp2Team -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create vNICs on VSwitch

# Management2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Management2 -SwitchName HypVSwitch
# Set Management2 to VLAN1
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management2 -Access -VlanId 1
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (Management2)" -NewName Management2
# IP/subnet
New-NetIPAddress -InterfaceAlias Management2 -IPAddress 192.168.1.192 -PrefixLength 24 -DefaultGateway 192.168.1.250 -Confirm:$false
Set-DnsClientServerAddress -InterfaceAlias Management2 -ServerAddresses 192.168.1.189, 192.168.1.188, 192.168.1.250
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name Management2 -MinimumBandwidthWeight 10

# Cluster2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster2 -SwitchName HypVSwitch
# Set Cluster2 to VLAN4
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Cluster2 -Access -VlanId 4
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (Cluster2)" -NewName Cluster2
# IP/subnet
New-NetIPAddress -InterfaceAlias Cluster2 -IPAddress 10.0.2.30 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name Cluster2 -MinimumBandwidthWeight 15

# iSCSI2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name iSCSI2 -SwitchName HypVSwitch
# Set iSCSI2 to VLAN3
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName iSCSI2 -Access -VlanId 3
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (iSCSI2)" -NewName iSCSI2
# IP/subnet
New-NetIPAddress -InterfaceAlias iSCSI2 -IPAddress 10.0.1.30 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name iSCSI2 -MinimumBandwidthWeight 30

# VM2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name VM2 -SwitchName HypVSwitch
# Set VM2 to VLAN2
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName VM2 -Access -VlanId 2
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (VM2)" -NewName VM2
# IP/subnet
New-NetIPAddress -InterfaceAlias VM2 -IPAddress 10.0.3.30 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name VM2 -MinimumBandwidthWeight 30

# LM2 vNIC
Add-VMNetworkAdapter -ManagementOS -Name LM2 -SwitchName HypVSwitch
# Set LM2 to VLAN5
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName LM2 -Access -VlanId 5
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (LM2)" -NewName LM2
# IP/subnet
New-NetIPAddress -InterfaceAlias LM2 -IPAddress 10.0.4.30 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name LM2 -MinimumBandwidthWeight 15

#for removing: Set-VMNetworkAdapterVlan -Untagged
#-------------------END HYPV2---------------------------

I would recommend you now run ipconfig /all on both nodes to ensure all settings took effect. Also verify connectivity between the two nodes on each of the VLAN segments.
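The connectivity check can be scripted as well. A minimal sketch, run from HYPV1, pinging each of HYPV2's vNIC addresses from the example configuration above (adjust the addresses if your lab differs):

```powershell
# Ping HYPV2's vNIC on each VLAN segment from HYPV1
$peers = @{
    Management = '192.168.1.192'
    iSCSI      = '10.0.1.30'
    Cluster    = '10.0.2.30'
    VM         = '10.0.3.30'
    LM         = '10.0.4.30'
}
foreach ($segment in $peers.Keys) {
    $ok = Test-Connection -ComputerName $peers[$segment] -Count 2 -Quiet
    Write-Host ("{0,-12} {1,-14} : {2}" -f $segment, $peers[$segment], $(if ($ok) {'OK'} else {'FAILED'}))
}
```

Any FAILED segment points at a VLAN or IP misconfiguration worth resolving before moving on to clustering.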

Step 8: Join both nodes to the domain
Hyper-V Server Configuration – Join Domain
Hyper-V Server Configuration – Domain/Workgroup settings
Step 9: Enable the iSCSI Initiator service and add/enable MPIO

On both nodes we will be setting the iSCSI Initiator Service to Automatic start and then starting the service.  To optimize connectivity with the iSCSI storage and to ensure high availability we will also be adding and enabling MPIO.

#set iSCSI Initiator Service to Automatic start
Set-Service -Name msiscsi -StartupType Automatic
#start service
Start-Service msiscsi

# to check if enabled: Get-WindowsOptionalFeature -Online -FeatureName MultiPathIO
# enable MPIO
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
# to remove: Disable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# enable and set the policy
# reboot will likely be required at some point with the next two commands
Enable-MSDSMAutomaticClaim -BusType iSCSI
#--------------------------------------------------
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
#--------------------------------------------------
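To confirm the MPIO settings took effect (after any required reboot), you can query them back with the companion Get- cmdlets from the MPIO module:

```powershell
# iSCSI should appear as an automatically claimed bus type
Get-MSDSMAutomaticClaimSettings

# Should return RR (Round Robin)
Get-MSDSMGlobalDefaultLoadBalancePolicy
```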
Step 10: Add shared storage to both cluster nodes

In this step we will be connecting our iSCSI targets to both nodes.  We will then initialize/partition/format the disks from one node – I used HYPV1.

In my lab setup I added three LUNS:

  1. Quorum – for quorum drive
  2. HyperVVMs – this is where the Hyper-V VMs will reside
  3. HyperVStorage – additional storage that can be added to VMs that require additional drives

Note that if you are leveraging a different shared storage solution, your steps will vary considerably here and in step 9.

#---------------------HYPV1----------------------------
# Configure the iSCSI target portal
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.2

# quorum drive
$target = Get-IscsiTarget -NodeAddress iqn.2000-01.com.synology:Nosgoth.Target-12.13016eecb5
$target| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.1.20 -TargetPortalAddress 10.0.1.2

#HyperVStorage
$target2 = Get-IscsiTarget -NodeAddress iqn.2000-01.com.synology:Nosgoth.Target-13.13016eecb5
$target2| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.1.20 -TargetPortalAddress 10.0.1.2

#HyperVVMs
$target3 = Get-IscsiTarget -NodeAddress iqn.2000-01.com.synology:Nosgoth.Target-11.13016eecb5
$target3| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.1.20 -TargetPortalAddress 10.0.1.2

# to remove: Disconnect-IscsiTarget
#-------------------END HYPV1---------------------------
#---------------------HYPV2----------------------------
# Configure the iSCSI target portal
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.2

#quorum
$target = Get-IscsiTarget -NodeAddress iqn.2000-01.com.synology:Nosgoth.Target-12.13016eecb5
$target| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.1.30 -TargetPortalAddress 10.0.1.2

#HyperVStorage
$target2 = Get-IscsiTarget -NodeAddress iqn.2000-01.com.synology:Nosgoth.Target-13.13016eecb5
$target2| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.1.30 -TargetPortalAddress 10.0.1.2
#HyperVVMs
$target3 = Get-IscsiTarget -NodeAddress iqn.2000-01.com.synology:Nosgoth.Target-11.13016eecb5
$target3| Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.1.30 -TargetPortalAddress 10.0.1.2

# to remove: Disconnect-IscsiTarget
#-------------------END HYPV2---------------------------

We will now get a list of the added drives and then initialize/partition/format each. Again, this should only be run from one node – I used HYPV1.

# Get-Disk - to see a list of available disks
Get-Disk

#Prep each drive from only one node - run these commands for each drive that you want to initialize
$Disk = Get-Disk -Number 1 #this number is obtained from the list you just pulled
$Disk | Initialize-Disk -PartitionStyle GPT
$Disk | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -Confirm:$false
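If you would rather not repeat those commands per disk, one alternative (a sketch, assuming the only RAW disks on the node are the newly attached iSCSI LUNs) is to loop over every uninitialized disk in one pass:

```powershell
# Initialize/partition/format every RAW disk in one pass
# CAUTION: this touches every RAW disk on the node - verify with Get-Disk first
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | ForEach-Object {
    $_ | Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -Confirm:$false
}
```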

Once complete you should see something similar to the following on each node:

PS C:\> Get-Disk

Number Friendly Name                      OperationalStatus Total Size Partition Style
------ -------------                      ----------------- ---------- ---------------
0      DELL PERC H700 SCSI Disk Device    Online             272.25 GB MBR
1      Kingston DT Ultimate G3 USB Device Online              29.28 GB MBR
3      SYNOLOGY iSCSI Storage             Online                  1 GB GPT
2      SYNOLOGY iSCSI Storage             Online                500 GB GPT
4      SYNOLOGY iSCSI Storage             Online                400 GB GPT
Step 11: Create the Hyper-V Cluster

With networking configured and shared storage now presented to both nodes we are ready to create the Hyper-V cluster.

Tip: Remember that cluster creation will create additional computer objects in AD. Make sure your two node computer accounts have the necessary permissions to create these objects. (For example, if the nodes are in an OU called Cluster, you can grant them Create Computer Objects permission on that OU.)

On both nodes install the Failover Clustering feature:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

On one node (pick one – I used HYPV1) initiate the cluster validation.

Test-Cluster -node HYPV1,HYPV2

This will take some time. Once completed you can find the results at the following location:
C:\Users\[username]\AppData\Local\Temp\1\Validation_Report[Date/Time].xml.mht
Tip: I navigated to this location from another computer to review the report.
I can’t stress enough that you should thoroughly review this report and address any issues identified before proceeding.

Once the cluster passes validation, it’s time to create the cluster:

# create the cluster and assign it an IP
New-Cluster -Name HyperVCluster -node HYPV1,HYPV2 -staticAddress 192.168.1.190
Step 12: Rename cluster resources

Technically, this is completely optional. The cluster will automatically generate names for cluster resources. However, for troubleshooting and documentation purposes, renaming the resources is something you will thank yourself for later.

Let’s begin by renaming the cluster networks:

#check the cluster network details
Get-ClusterNetwork -Cluster HyperVCluster| Format-Table Name, Address, role, Metric, AutoMetric -AutoSize
 
#Update Cluster Network Names to indicate purpose/function
#(per the Step 7 plan: 10.0.1.0 is the iSCSI subnet, 10.0.2.0 the cluster subnet)
(Get-ClusterNetwork| ?{$_.Address -eq "192.168.1.0"}).Name = "Management"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.1.0"}).Name = "iSCSI"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.2.0"}).Name = "Cluster"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.3.0"}).Name = "VMTraffic"
(Get-ClusterNetwork| ?{$_.Address -eq "10.0.4.0"}).Name = "LM"

Once completed you can re-run the Get-ClusterNetwork command above and you should see something similar to the below:

Name       Address     Role Metric AutoMetric
----       -------     ---- ------ ----------
LM         10.0.4.0       1  39840       True
Cluster    10.0.2.0       1  39841       True
iSCSI      10.0.1.0       0  79843       True
Management 192.168.1.0    3  79841       True
VMTraffic  10.0.3.0       3  79842       True

Now we can rename the drives to reflect their purpose. You can utilize the first command to identify the individual drives within the cluster resource list.  Using the information from the initial results, you can then adjust the second command to rename your drives appropriately.

#-----------------------------------------------------------------------------------------------------
#Get the names of the drives and drive information
$resources = Get-WmiObject -namespace root\MSCluster MSCluster_Resource -filter "Type='Physical Disk'"
$resources | foreach {
    $res = $_
    $disks = $res.GetRelated("MSCluster_Disk")
    $disks | foreach {
        $_.GetRelated("MSCluster_DiskPartition") |
            select @{N="Name"; E={$res.Name}}, @{N="Status"; E={$res.State}}, Path, VolumeLabel, TotalSize, FreeSpace
    }
} | ft -AutoSize
#-----------------------------------------------------------------------------------------------------
 
#Update Cluster Disk Names to Match Function - you will need the info retrieved from the first command to adjust this command as necessary
(Get-ClusterGroup -Name "Cluster group"| Get-ClusterResource |?{$_.ResourceType -eq "Physical Disk"}).name = "Witness"
(Get-ClusterGroup -Name "Available Storage"| Get-ClusterResource |?{$_.Name -eq "Cluster Disk 1"}).name = "CSV-VMs"
(Get-ClusterGroup -Name "Available Storage"| Get-ClusterResource |?{$_.Name -eq "Cluster Disk 3"}).name = "CSV-VMStorage"
#(Get-ClusterGroup "available storage"| Get-ClusterResource).name = "CSV-VMs"
Step 13: Configure Cluster Shared Volumes

By default Failover Clustering utilizes a “shared nothing” architecture. This means that only one node within the cluster owns each disk. This is not conducive to our Hyper-V cluster setup, as it would restrict a VM on our HyperVVMs storage location to only being able to “live” on the cluster node that owns the HyperVVMs disk.

Cluster Shared Volumes breaks this one-node restriction and permits multiple nodes within the cluster concurrent access to a single shared volume. This means that a Hyper-V VM will be able to live on either node and fail over / migrate as needed between the two Hyper-V nodes. This is accomplished by creating a common namespace under %SystemDrive%\ClusterStorage. As such, the OS must be on the same drive letter on every node in the cluster (such as C:\).

Please note that the code examples below take advantage of the renaming scheme performed in Step 12. If you elected not to rename your drives, you will need to adjust the code accordingly.

The following steps will be performed on only one node:

#Configure the CSVs
Get-ClusterResource -Name "CSV-VMs" | Add-ClusterSharedVolume
Rename-Item -Path C:\ClusterStorage\Volume1 -NewName CSV-VMs

Get-ClusterResource -Name "CSV-VMStorage" | Add-ClusterSharedVolume
# check C:\ClusterStorage first - the auto-generated folder name may differ (e.g. Volume2)
Rename-Item -Path C:\ClusterStorage\Volume1 -NewName CSV-VMStorage
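You can verify that both volumes came online and are mounted under the common namespace:

```powershell
# Each CSV should report Online, and each should have a folder under ClusterStorage
Get-ClusterSharedVolume | Select-Object Name, State
Get-ChildItem C:\ClusterStorage
```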

Congratulations, you should now have an operational Hyper-V cluster!
From your management PC you should now be able to establish a connection to your cluster using Failover Cluster Manager.
Tip: You can obtain this by installing the Remote Server Administration Tools.

Failover Cluster Manager – Hyper-V Management
Step 14: Configure Cluster Network Use

As a last step you may need to adjust the cluster use settings of your cluster networks as the cluster creation process doesn’t always get it right.  You can see my configuration below:

Failover Cluster Manager – Hyper-V Cluster Network Management
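If you prefer to make this adjustment from PowerShell rather than Failover Cluster Manager, the Role property of each cluster network can be set directly (0 = not used by the cluster, 1 = cluster traffic only, 3 = cluster and client traffic). The names below must match whatever you renamed your networks to in Step 12:

```powershell
# Keep iSCSI out of cluster use; management also carries client traffic
(Get-ClusterNetwork -Name "iSCSI").Role      = 0
(Get-ClusterNetwork -Name "Cluster").Role    = 1
(Get-ClusterNetwork -Name "LM").Role         = 1
(Get-ClusterNetwork -Name "Management").Role = 3
(Get-ClusterNetwork -Name "VMTraffic").Role  = 3
```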

You are now ready to begin creating Highly Available VMs!

1: Create a New Virtual Machine

Right click Cluster Roles – Virtual Machines – New Virtual Machine…

Create Hyper-V high availability VM
2: Select the target node the VM will be created on initially
Hyper-V high availability VM – Select Node
3: Choose where to store the VM

Ensure that you are storing the VM on the Cluster Shared Volume (CSV) that was set up previously!

Hyper-V high availability VM – Store on CSV
4: Specify Generation type
Hyper-V high availability VM – Choose Generation
5: Assign memory
Hyper-V high availability VM – Assign Memory
6: Configure networking

Assign the switch created in previous steps

Hyper-V high availability VM – Configure Networking
7: Configure Hard Disk

Tip: You probably don’t need the default size, which is quite large.

Hyper-V high availability VM – Virtual Hard Disk
8: Set OS Installation options
Hyper-V high availability VM – OS Installation
9: Complete Wizard
Hyper-V high availability VM – Wizard Completion
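The wizard steps above can also be sketched in PowerShell. A minimal, hedged example (the VM name, memory, disk size, and paths are illustrative; the switch and CSV names assume the earlier steps) creates the VM on the Cluster Shared Volume and then registers it as a clustered role:

```powershell
# Create a new VM on the CSV (run on the node that will initially own it)
New-VM -Name "TestVM01" `
    -MemoryStartupBytes 2GB `
    -Generation 2 `
    -Path "C:\ClusterStorage\CSV-VMs" `
    -NewVHDPath "C:\ClusterStorage\CSV-VMs\TestVM01\TestVM01.vhdx" `
    -NewVHDSizeBytes 40GB `
    -SwitchName "HypVSwitch"

# Optionally tag the VM's traffic onto the VM VLAN from Step 7
Set-VMNetworkAdapterVlan -VMName "TestVM01" -Access -VlanId 2

# Register the VM as a clustered role so it can fail over between nodes
Add-ClusterVirtualMachineRole -VMName "TestVM01"
```

Because the VM's files live on the CSV, either node can own the role, which is what makes the live migration and failover behavior possible.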
Additional reading:

Step-by-Step: Building a FREE Hyper-V Server 2012 Cluster – Part 1 of 2
Step-by-Step: Building a FREE Hyper-V Server 2012 Cluster – Part 2 of 2

The following video series is also an excellent resource with a lot of great Hyper-V cluster information:
Hyper-V Rockstar – Building a Hyper-V Cluster – Part 0/5
