Using PowerShell to perform a Hyper-V resource allocation check
This PowerShell script determines the current resource allocation health of a standalone Hyper-V server or of the nodes in a Hyper-V cluster. It automatically scans the physical resources of each Hyper-V node, compares them to the resources allocated to the virtual machines, and then passes or fails each node based on the following criteria:
- 1:1 memory – anything higher fails (this applies to static memory only; see below for best-practice suggestions regarding dynamic memory)
- 4:1 CPU (vCPU:pCPU) – anything higher fails
- 20% free storage space – anything lower fails
These ratios can be edited in the script to suit your desired cutoff points.
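As a rough sketch of how those three cutoffs boil down to simple ratio comparisons (this is an illustration, not the actual script's code; `Test-AllocationHealth` and its parameters are hypothetical names):

```powershell
# Hypothetical sketch of the three pass/fail checks. The default
# thresholds mirror the script's defaults described above.
function Test-AllocationHealth {
    param(
        [long]$AllocatedMemoryGB, [long]$PhysicalMemoryGB,
        [int]$vCpuCount, [int]$pCoreCount,
        [double]$FreeSpacePercent,
        [double]$MaxMemoryRatio = 1.0,   # 1:1 for static memory
        [double]$MaxCpuRatio    = 4.0,   # 4:1 vCPU:pCPU
        [double]$MinFreePercent = 20.0   # 20% free storage space
    )
    [pscustomobject]@{
        MemoryPass  = ($AllocatedMemoryGB / $PhysicalMemoryGB) -le $MaxMemoryRatio
        CpuPass     = ($vCpuCount / $pCoreCount) -le $MaxCpuRatio
        StoragePass = $FreeSpacePercent -ge $MinFreePercent
    }
}
```

Each threshold is a parameter, so the cutoff points are easy to tune, just as they are in the script itself.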
Here is a brief video that shows how to run and interpret the script:
The following is an example of the script's output from a two-node Hyper-V cluster.
Download / Review Hyper-V resource allocation check
Download the script from the TechNet Script Center here:
Interpreting the results from the Hyper-V resource allocation check:
Is there a best Hyper-V practice ratio of vCPU to pCPU Cores?
Answer: This question has no answer. The only answer that comes close is “it depends”, and that isn’t much of an answer.
For the longest time, 1:1 was the recommendation. You can still do this, but with modern processors and schedulers it's just wasteful. One of the major benefits of virtualization in the first place is that CPUs can be used when needed and shared when not.
What this means is that if you really want to know how many cores you need, you need a solid understanding of what your actual workload is going to be. If unsure, you can go with a very conservative 4:1 (the script's default), but in many cases 6:1 or even 12:1 will operate just fine.
Why? Because in many cases the threads sit idle almost all of the time. As such, there is no hard-and-fast rule to follow for the vCPU:pCPU ratio.
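For back-of-the-envelope planning, the core count required at a chosen ratio is simple arithmetic. This `Get-RequiredCores` helper is a hypothetical illustration, not part of the script:

```powershell
# How many physical cores a given total vCPU count needs at a chosen
# vCPU:pCPU ratio (hypothetical planning helper).
function Get-RequiredCores {
    param([int]$vCpuTotal, [double]$Ratio = 4.0)
    [int][math]::Ceiling($vCpuTotal / $Ratio)
}

# On a real host, the total vCPU count could come from:
#   (Get-VM | Measure-Object -Property ProcessorCount -Sum).Sum
```

For example, 48 vCPUs need 12 physical cores at a conservative 4:1, but only 4 cores at 12:1 – which is why knowing your workload matters so much.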
The following factors also have to be taken into consideration:
- Number of virtual processors
- Virtual machine reserve (percentage), expressed as a percentage of total system resources
- Virtual machine limit (percentage), expressed as a percentage of total system resources
- Relative weight
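These per-VM settings can be viewed with `Get-VMProcessor` and adjusted with `Set-VMProcessor` from the Hyper-V PowerShell module. The VM name and values below are hypothetical, and the VM must be powered off to change its processor count:

```powershell
# Inspect the current per-VM CPU settings ("Web01" is a hypothetical VM name)
Get-VMProcessor -VMName 'Web01' |
    Select-Object Count, Reserve, Maximum, RelativeWeight

# Adjust them via splatting; example values only
$cpuSettings = @{
    VMName         = 'Web01'
    Count          = 2     # number of virtual processors
    Reserve        = 10    # virtual machine reserve (% of total system resources)
    Maximum        = 75    # virtual machine limit (% of total system resources)
    RelativeWeight = 100   # relative weight versus other VMs
}
Set-VMProcessor @cpuSettings
```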
The bottom line: the Hyper-V scheduler is extremely efficient, and unless you are running an abnormally CPU-intensive workload, a higher ratio is often fine. If in doubt, review the CPU usage for the same server over the course of an entire month; in many cases you will find that it's quite low. The script fails anything higher than 4:1, but that's really on the conservative side. Feel free to adjust it.
Additional reading regarding Hyper-V CPU allocation:
Is there a best Hyper-V practice ratio for memory?
Regarding static memory: Yes, 1:1
Regarding dynamic memory: “It depends”
There are three main points to consider regarding Dynamic memory:
- Startup – the RAM required for the VM to boot (e.g. 1024MB)
- Minimum – the lowest amount the VM can shrink down to when idle (e.g. 512MB)
- Maximum – the most the VM can grow to when busy (e.g. 2048MB)
In the example above, a VM would power on with 1GB of RAM, could shrink to 512MB when idle, and could grow to 2GB when busy.
Also factor in:
- Memory buffer
- Memory weight
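All of these values can be set with `Set-VMMemory` from the Hyper-V PowerShell module (the `-Priority` parameter corresponds to memory weight). The VM name and values below are hypothetical, and the VM must be powered off to change its startup memory:

```powershell
# Configure the dynamic memory values from the example above
# ("Web01" is a hypothetical VM name)
$memSettings = @{
    VMName               = 'Web01'
    DynamicMemoryEnabled = $true
    StartupBytes         = 1024MB  # RAM the VM boots with
    MinimumBytes         = 512MB   # floor when idle
    MaximumBytes         = 2048MB  # ceiling when busy
    Buffer               = 20      # memory buffer (%)
    Priority             = 50      # memory weight (0-100)
}
Set-VMMemory @memSettings
```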
Bottom line: you need to take into account the maximum dynamic RAM setting of each of your VMs. If the combined maximums exceed your available physical RAM, contention can occur and memory weight comes into play. Additionally, if RAM is already assigned out to meet those maximums, you may be unable to start additional VMs. In my opinion you should be familiar with your expected workloads, and if you are using dynamic memory you should weight your VMs so that RAM is provisioned where it matters most.
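A rough overcommit check along those lines can be sketched as follows (a hypothetical helper, not the script's own logic):

```powershell
# Does the sum of the VMs' dynamic memory maximums exceed the host's
# physical RAM? All inputs are in bytes.
function Test-DynamicMemoryHeadroom {
    param([long[]]$VmMaximumBytes, [long]$HostPhysicalBytes)
    $totalMax = ($VmMaximumBytes | Measure-Object -Sum).Sum
    [pscustomobject]@{
        TotalMaximumGB = [math]::Round($totalMax / 1GB, 1)
        Overcommitted  = $totalMax -gt $HostPhysicalBytes
    }
}

# On a real host, the inputs could come from:
#   (Get-VM).MemoryMaximum  and
#   (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory
```

An `Overcommitted` result does not mean the VMs will fail, only that the maximums cannot all be satisfied at once, so memory weight will decide who wins.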
Don’t forget about cluster failovers
During your planning, don't forget to account for a node failure. If each node in a two-node cluster has 512GB of RAM, and you have provisioned all 512GB on one node and 128GB on the second, the first node's VMs will be unable to start on the opposing node in the event of a failure. You may be able to run more VMs this way, but they are not truly highly available when configured in this fashion. In a two-node cluster with 512GB per node, you should aim to keep VM memory usage under 256GB on each node to accommodate the loss of a node. The script will simulate a node loss for you and advise whether your VMs are truly HA.
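The N-1 test described above can be sketched as simple arithmetic: for each node, check whether the spare capacity on the remaining nodes can absorb that node's assigned RAM. This is a simplified hypothetical helper; the script's own simulation may differ:

```powershell
# $true only if every node's assigned RAM fits in the spare capacity
# of the other nodes, i.e. the cluster survives any single node loss.
function Test-FailoverHeadroom {
    param(
        [long[]]$NodePhysicalBytes,   # physical RAM per node
        [long[]]$NodeAssignedBytes    # RAM assigned to VMs per node
    )
    $results = for ($i = 0; $i -lt $NodePhysicalBytes.Count; $i++) {
        # Spare capacity on all the OTHER nodes if node $i fails
        $spare = 0L
        for ($j = 0; $j -lt $NodePhysicalBytes.Count; $j++) {
            if ($j -ne $i) {
                $spare += $NodePhysicalBytes[$j] - $NodeAssignedBytes[$j]
            }
        }
        $NodeAssignedBytes[$i] -le $spare
    }
    -not ($results -contains $false)
}
```

Using the two-node example above: 512GB and 128GB assigned on 512GB nodes fails the check, while 256GB on each node passes it.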