
Ever tried adding a new server to your VMware cluster only to find that vMotion won’t work because the processors don’t match? You’re not alone. This common frustration has a powerful solution: Enhanced vMotion Compatibility (EVC) Mode.
Whether you’re managing a small three-node cluster or a sprawling enterprise environment, understanding EVC mode can save you countless hours of troubleshooting and enable seamless hardware upgrades without expensive downtime.
Enhanced vMotion Compatibility (EVC) is a cluster-level feature in VMware vSphere that enables live migration of virtual machines between ESXi hosts with different CPU generations from the same vendor. Think of it as a translator that ensures all processors in your cluster speak the same language to your virtual machines.
When you enable EVC mode on a vSphere cluster, it establishes a baseline CPU feature set that all hosts must present to virtual machines. Even if you have newer processors with advanced instruction sets, EVC masks those extra features so that VMs see a consistent CPU across all hosts.
The reality of IT infrastructure is that hardware refreshes happen gradually. Suppose you purchased five Dell servers with Intel Xeon E5-2600 v3 processors three years ago, and now you need to expand. Those processors are discontinued, so you can only buy newer Xeon Gold or Platinum series. Without EVC, VMs won't live-migrate between your old and new hosts.
EVC mode solves this by creating a uniform CPU environment across mixed-generation hardware, enabling live migration with vMotion, full DRS load balancing, and gradual hardware refreshes without downtime.
At its core, EVC operates through CPU instruction set masking. Here’s what happens behind the scenes:
Every processor exposes CPUID, an identification instruction that acts like an API, telling the system which instructions the CPU can execute. When a virtual machine boots, it queries the host's CPUID to discover available CPU features like SSE4.2, AVX, or AVX-512.
EVC intercepts this communication and enforces a standardized CPUID baseline across the entire cluster. The vSphere hypervisor already abstracts the physical CPU from virtual machines, and EVC leverages this abstraction to control which CPU features are exposed.
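Conceptually, the enforced baseline behaves like a bitwise mask applied to the host's advertised feature flags. The following Python sketch models that idea; the feature names and baseline set are illustrative, not VMware's internal representation:

```python
# Illustrative model of EVC masking: each host advertises a set of CPUID
# feature flags, and the cluster baseline defines which flags VMs may see.
HASWELL_BASELINE = {"sse4.2", "avx", "avx2", "fma", "bmi2"}

def masked_features(host_features: set[str], baseline: set[str]) -> set[str]:
    """Return the CPU features a VM actually sees: only those that are both
    present on the host and allowed by the EVC baseline."""
    return host_features & baseline

# A Skylake-era host also supports AVX-512, but under a Haswell baseline
# the VM never sees those flags.
skylake_host = HASWELL_BASELINE | {"avx512f", "avx512vl"}
print(sorted(masked_features(skylake_host, HASWELL_BASELINE)))
```

The intersection is the key point: a VM can never see a feature the baseline excludes, even when the physical CPU supports it.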
When configuring EVC, you select a baseline corresponding to a specific processor generation, such as Intel Merom, Nehalem, Sandy Bridge, Haswell, or Skylake (with equivalent generational baselines for AMD).
The baseline you choose determines which CPU instructions are available to VMs. If you select “Intel Haswell,” for example, all hosts—whether they have Haswell, Broadwell, Skylake, or Cascade Lake processors—will only expose Haswell-level instructions to virtual machines.
VMware introduced two ways to implement EVC, each with distinct advantages.
Cluster-level EVC applies the baseline to all ESXi hosts in the cluster. Every VM running in that cluster inherits the cluster’s EVC mode. This is the original implementation and remains the most common approach.
Advantages:
- One setting covers every host and VM in the cluster, with no per-VM configuration
- Simple to reason about: all VMs see the same CPU feature set
Limitations:
- Every VM is held to the lowest common baseline, even on hosts with newer CPUs
- Raising the baseline only benefits VMs after a full power cycle
Per-VM EVC makes the EVC mode an attribute of the virtual machine itself rather than the cluster. Available for VM hardware version 14 and higher, this provides granular control.
Advantages:
- The EVC mode travels with the VM across clusters and vCenter Servers, enabling Cross-vCenter vMotion
- Different VMs in the same cluster can run at different baselines
- Hosts can expose their full feature set to VMs that don't need masking
Limitations:
- Requires vSphere 6.7 or later and VM hardware version 14 or higher
- Must be configured per VM, which adds management overhead
- Changing a VM's EVC mode requires the VM to be powered off
The per-VM EVC configuration is stored in the VM’s VMX file with entries like:
featMask.vm.cpuid.Intel = "Val:1"
featMask.vm.cpuid.FAMILY = "Val:6"
featMask.vm.cpuid.MODEL = "Val:0x4f"
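Because those entries are plain key = "value" pairs, they are easy to audit with a few lines of scripting. Here is a hypothetical Python helper (not a VMware tool) that pulls the featMask entries out of VMX file text:

```python
import re

def parse_evc_masks(vmx_text: str) -> dict[str, str]:
    """Extract featMask.* entries from VMX file text into a dict."""
    pattern = re.compile(r'^(featMask\.[\w.]+)\s*=\s*"([^"]*)"', re.MULTILINE)
    return {key: value for key, value in pattern.findall(vmx_text)}

vmx = '''
featMask.vm.cpuid.Intel = "Val:1"
featMask.vm.cpuid.FAMILY = "Val:6"
featMask.vm.cpuid.MODEL = "Val:0x4f"
'''
print(parse_evc_masks(vmx))
```

Running a check like this across exported VMX files is one way to inventory which VMs carry per-VM EVC overrides.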
Before enabling EVC, verify:
- All hosts use CPUs from the same vendor (all Intel or all AMD)
- Hardware virtualization (Intel VT-x or AMD-V) and execute protection (Intel XD or AMD NX) are enabled in each host's BIOS/UEFI
- All hosts are managed by the same vCenter Server and run a supported ESXi version
- Every host's CPU supports the baseline you intend to select
VMware recommends enabling EVC when creating a cluster to avoid complications:
1. Create the new, empty cluster in the vSphere Client
2. Enable EVC and select the baseline matching the oldest CPU generation you plan to use
3. Add hosts to the cluster
4. Power on or migrate VMs into the cluster
Enabling EVC on a cluster with running workloads requires more care:
1. Confirm every host's CPU supports the intended baseline
2. Power off, or migrate away, VMs running on hosts with CPUs newer than the baseline
3. Enable EVC on the cluster and select the baseline
4. Power the VMs back on so they pick up the masked feature set
Pro tip: VMs running on the host with the oldest processor can remain powered on if using vCenter 4.1 or later, as their CPU features already match the baseline.
For virtual machines requiring specific EVC modes:
1. Power off the VM (per-VM EVC can only be changed while the VM is powered off)
2. In the vSphere Client, open the VM's Configure tab and edit the VMware EVC setting
3. Enable EVC, choose the CPU vendor and baseline, and save
4. Power the VM back on
The VMware Compatibility Guide is your definitive resource for EVC planning: it lists, for each supported server and processor, which EVC baselines that CPU can join.
For example, if you have one host with Intel Haswell processors, one with Broadwell, and one with Skylake, your EVC baseline should be set to Intel Haswell, the oldest generation present, so that all three hosts can participate.
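Choosing a baseline is effectively a lowest-common-denominator calculation over your hosts' CPU generations. A small Python sketch makes the rule concrete (the generation list is abbreviated and illustrative):

```python
# Intel EVC baselines ordered oldest to newest (abbreviated, illustrative).
INTEL_BASELINES = ["merom", "penryn", "nehalem", "westmere",
                   "sandy-bridge", "ivy-bridge", "haswell",
                   "broadwell", "skylake", "cascade-lake"]

def highest_common_baseline(host_generations: list[str]) -> str:
    """The cluster baseline can be no newer than the oldest host's generation,
    so pick the generation with the lowest position in the ordered list."""
    return min(host_generations, key=INTEL_BASELINES.index)

hosts = ["skylake", "haswell", "broadwell"]
print(highest_common_baseline(hosts))  # haswell
```

The same logic explains baseline upgrades: remove the oldest hosts from the list and the highest common baseline moves forward.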
Challenge: A financial services company has a 10-node cluster running ESXi hosts with Intel Sandy Bridge processors. They need to add five new hosts with Cascade Lake processors for increased capacity.
Solution: Enable EVC mode at Intel Sandy Bridge baseline. The new Cascade Lake hosts will mask advanced features and present Sandy Bridge-compatible instructions to VMs. This allows seamless vMotion across all 15 hosts while maintaining production uptime.
Challenge: An organization wants to migrate VMs between their primary and DR datacenters, which have processors from different generations.
Solution: Implement per-VM EVC on critical VMs with a baseline compatible with both sites. VMs can now migrate between datacenters using Cross-vCenter vMotion while maintaining CPU compatibility.
Challenge: After adding a new host with Ice Lake processors to an existing cluster, VMs cannot migrate to the new host, causing DRS failures.
Solution: Enable cluster-level EVC at the appropriate baseline. After powering off and restarting affected VMs, DRS can now balance workloads across all hosts, including the new Ice Lake system.
A common concern is whether masking CPU features reduces performance. The reality is nuanced: masking doesn't slow the processor itself, since clock speed, core count, and cache are untouched. However, workloads built to use newer instructions (AVX-512, for example) will fall back to older code paths and may run slower than they would on an unmasked host.
1. Enable EVC from Day One
Configure EVC when creating new clusters, even if all processors currently match. This prevents headaches during future expansions.
2. Choose the Right Baseline
Select the highest baseline that every current (and planned) host supports; setting it lower than necessary masks features you could otherwise use.
3. Document Your Configuration
Maintain clear documentation of:
- The EVC baseline configured on each cluster
- The exact CPU model in each host
- Any VMs with per-VM EVC overrides
4. Test Before Production
Validate vMotion between every host generation pair in a test cluster or maintenance window before depending on it for production workloads.
5. Use Per-VM EVC Strategically
Reserve per-VM EVC for specific use cases:
- VMs that must migrate across clusters or vCenter Servers
- Migrations to sites with different hardware, such as a DR datacenter
- VMs with CPU feature requirements that differ from the rest of the cluster
6. Monitor and Validate
After any EVC change, confirm each host reports the expected EVC mode and test a vMotion in each direction.
7. Plan for Baseline Upgrades
As older hosts are retired, you can raise the EVC baseline. Remember that running VMs keep their old feature set until they go through a full power cycle, so schedule the change within a maintenance window.
Cause: The host’s processor doesn’t support the cluster’s EVC baseline, or VMs are using CPU features beyond the baseline.
Solution:
- Verify in the VMware Compatibility Guide that the host's CPU supports the cluster baseline
- Power-cycle VMs that have detected features beyond the baseline, then retry the migration
- If the host's CPU is older than the baseline, lower the baseline (this requires power-cycling affected VMs)
Cause: Per-VM EVC baseline exceeds the cluster’s EVC mode or host capabilities.
Solution:
- Lower the VM's EVC mode (with the VM powered off) to one the cluster and its hosts support, or
- Raise the cluster's EVC mode, or move the VM to a cluster whose hosts support its baseline
Cause: Hosts revert to their native CPU instruction sets, creating incompatibility.
Solution:
- Re-enable EVC at the previous baseline
- Power-cycle any VMs that picked up native CPU features while EVC was disabled
Cause: Intel disabled TSX in some Haswell processors via microcode, causing them to appear as Ivy Bridge.
Solution:
- Update vCenter and ESXi to versions whose Haswell baseline accounts for the disabled TSX feature, or
- Set the cluster baseline to Ivy Bridge so the affected hosts can join
High Availability (HA) is not impacted by EVC changes. When HA fails over a VM, it power-cycles the VM on the new host, allowing it to detect new CPU features naturally.
Distributed Resource Scheduler (DRS) relies heavily on EVC. Without EVC in mixed-hardware clusters, DRS cannot migrate VMs between incompatible hosts, severely limiting its effectiveness.
Storage vMotion works independently of EVC since it only migrates storage, not the running VM state. VMs can be moved between datastores regardless of EVC configuration.
Fault Tolerance requires compatible CPUs and benefits significantly from EVC. FT maintains a shadow VM, and EVC ensures both primary and secondary VMs see identical CPU features.
As virtualization evolves, EVC remains critically relevant: hardware refresh cycles keep producing mixed-generation clusters, and per-VM EVC extends CPU compatibility to cross-vCenter and cross-datacenter migrations.
Can I mix Intel and AMD processors with EVC?
No. EVC requires all processors to be from the same vendor. EVC cannot translate between Intel and AMD instruction sets.
Will enabling EVC slow down my VMs?
For the vast majority of workloads, no. EVC adds no runtime overhead, because the mask is applied once when the VM powers on, and performance is only affected if your applications specifically rely on masked advanced instructions.
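One way to see the effect from inside a Linux guest is to check which instruction-set flags the (masked) virtual CPU actually advertises before choosing an optimized code path. A sketch of that pattern, parsing /proc/cpuinfo-style text (Linux-specific, with a hypothetical dispatch function):

```python
def guest_cpu_flags(cpuinfo_text: str) -> set[str]:
    """Parse the 'flags' line from /proc/cpuinfo output into a set."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def pick_code_path(flags: set[str]) -> str:
    """Degrade gracefully when EVC masks newer instruction sets."""
    if "avx512f" in flags:
        return "avx512"
    if "avx2" in flags:
        return "avx2"
    return "generic"

# Under a Haswell EVC baseline, a guest on Skylake hardware reports no avx512f,
# so well-behaved software selects the AVX2 path instead.
sample = "flags\t\t: fpu sse4_2 avx avx2 fma"
print(pick_code_path(guest_cpu_flags(sample)))  # avx2
```

Applications that dispatch on detected features like this simply take the older path under EVC; only code hard-wired to a specific instruction set would fail or slow down noticeably.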
Can I enable EVC without powering off VMs?
Yes, when using vCenter 4.1 or later. VMs on the host with the oldest processor can remain running. VMs on newer hosts may need to be powered off and restarted.
What happens if I add a host with a CPU older than my EVC baseline?
The host cannot join the cluster. You must either lower the EVC baseline (requiring VM power cycles) or upgrade the host’s processor.
Is per-VM EVC better than cluster-level EVC?
Neither is inherently better. Cluster-level EVC is simpler and suitable for most environments. Per-VM EVC provides granular control for complex scenarios with varying VM requirements.
How do I check what EVC mode my VM is using?
In vSphere Client, select the VM → Configure tab → Settings → VM Options → Advanced → VMware EVC. The pane displays the current EVC mode and CPUID details.