vSphere

⇚==================================================================================================⇛

- February 06, 2013

VMware vSphere Metro Storage Cluster (vMSC):


VMware vSphere Metro Storage Cluster (vMSC) is not a product but a certified configuration where a vSphere cluster spans geographical locations. This could be spread across a campus, metropolitan area or a larger area up to 200 km apart.

vMSC was introduced with vSphere 5.0.

It relies on a stretched storage solution such as NetApp MetroCluster and a stretched layer 2 VLAN. The storage must be treated as a single storage solution that spans both sites. The storage is synchronously replicated between the sites so that both sites are always in sync and there is zero data loss in the event of a failure. The storage solution must allow the datastores/LUNs to be accessed from either location.




This brings the functionality of a local VMware vSphere cluster to hosts spread across two locations so that VMware HA, DRS and vMotion can be performed across the sites as if all the hosts were local. But is this a better solution than SRM? They are different solutions aimed at resolving different problems. vMSC is targeted at disaster avoidance whereas SRM is targeted at disaster recovery.

vMSC achieves disaster avoidance by allowing you to move workloads off failing components without outages.

SRM achieves disaster recovery by automating recovery plans to bring workloads back online in a controlled manner following a disaster.

SRM can be used in a planned migration to move workloads from one site to another site, for example when maintenance is required at the primary site; however an outage is always required to move the workloads with SRM. vMSC can restart workloads at the secondary site when the other site fails using VMware HA but there is little control over the order the failed workloads restart.

⇚==================================================================================================⇛


- January 07, 2013

vSphere VM Memory Statistics :

There still seems to be some confusion over what the Active Memory statistic of a virtual machine within vSphere actually represents. See the example below.
You can see that this virtual machine is configured with 4096 MB of memory and the Active Guest Memory is being reported as 696 MB. So what is this Active Guest Memory? If I look within the operating system of this virtual machine I can see that at this point in time it is using 2.48 GB of memory, see below.
VMware defines Active Memory as “Amount of memory that is actively used, as estimated by VMkernel based on recently touched memory pages”. The overhead of monitoring every memory page access would be too much for the hypervisor, so random sampling is used to estimate the active memory. So, how often is “recently”? This depends on where you are looking at the statistic; in vCenter it is refreshed every 20 seconds, so it shows the amount of memory that has been touched in the last 20 seconds.
If we look at the memory charts in vCenter for this virtual machine we can see that the active memory is fairly static at between 750 MB and 1,000 MB.
You could assume from this that the virtual machine is only using about 1 GB of memory; however, we saw the operating system reporting that about 2.5 GB was being used. What we don’t know from the information in the graph is whether it was the same memory pages that were touched in each sampling period. It could be that in one 20-second period 750 MB of memory was accessed and in the next 20 seconds a different 750 MB of memory was accessed. For this reason this statistic is not a good value to use when right-sizing the virtual machine.
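To make this concrete, here is an illustrative Python sketch (not VMkernel code; the page size, working set and sample sizes are invented for the example) that simulates a guest touching a different ~750 MB slice of a ~2.5 GB working set in each 20-second interval. The per-interval “active” value stays around 750 MB while the number of distinct pages touched keeps growing toward the real working set:

```python
import random

PAGE_KB = 4
working_set = set(range(2500 * 1024 // PAGE_KB))   # ~2.5 GB working set, in 4 KB pages
touched_per_sample = 750 * 1024 // PAGE_KB         # ~750 MB touched per 20 s interval

seen = set()
for interval in range(5):
    touched = set(random.sample(sorted(working_set), touched_per_sample))
    seen |= touched
    print(f"interval {interval}: active ~{len(touched) * PAGE_KB // 1024} MB, "
          f"distinct pages touched so far ~{len(seen) * PAGE_KB // 1024} MB")
```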
Consumed Memory
What I notice for Windows servers is that the Consumed Host Memory is almost always equivalent to the amount of memory configured on the virtual machine. Taking the example above where the virtual machine is configured with 4096 MB of memory, the consumed memory is being reported as 4144 MB. This is higher than the configured memory because it also includes the memory overhead used by the hypervisor to run the virtual machine. My understanding of why Windows servers appear to consume all of the memory they are allocated, even though the operating system in this case reports only 2.5 GB in use, is that Windows zeroes out every memory page when it boots, so the hypervisor has to present (and back) all of those memory pages to the virtual machine.

⇚==================================================================================================⇛


- January 03, 2013

RDM Path Selection Policy (PSP) with Microsoft Clustering


Prior to vSphere 5.5 the Round Robin Path Selection Policy (VMW_PSP_RR) was not supported for the shared disks of a Microsoft cluster. You may find that the ESXi multipathing claim rules are set so that when the RDMs are discovered the PSP is automatically set to Round Robin, so you will want to change this on those devices. You will probably want to keep the Round Robin PSP for the other, non-shared disks such as LUNs used for VMFS volumes, so you probably do not want to change the default claim rules.
From vSphere 5.5 onwards, the Round Robin PSP is supported for the cluster's shared disks.
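If you would rather script the check than click through every device, a rough pyVmomi sketch along the lines below can report the current PSP per LUN and move a shared RDM off Round Robin. Treat it as a sketch only: the connection details and NAA IDs are placeholders, VMW_PSP_MRU is used purely as an example target policy, and the property/method names should be verified against your vSphere API version before use.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connection details are placeholders for illustration
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Hypothetical canonical names of the MSCS shared RDM LUNs
shared_rdms = {"naa.60a98000aabbccdd"}

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    storage = host.configManager.storageSystem
    # Map ScsiLun keys to their canonical (naa.*) names
    canonical = {l.key: l.canonicalName for l in storage.storageDeviceInfo.scsiLun}
    for lun in storage.storageDeviceInfo.multipathInfo.lun:
        name = canonical.get(lun.lun, lun.lun)
        print(f"{host.name} {name} {lun.policy.policy}")     # current PSP, e.g. VMW_PSP_RR
        if name in shared_rdms and lun.policy.policy == "VMW_PSP_RR":
            # Move the shared cluster RDM off Round Robin (pre-5.5 requirement)
            storage.SetMultipathLunPolicy(
                lunId=lun.id,
                policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_MRU"))

view.Destroy()
Disconnect(si)
```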
There is also a NetApp Knowledge Base article that states that ALUA should not be enabled on the igroup when using Microsoft clustering with shared RDMs prior to vSphere 5.5. See https://kb.netapp.com/support/index?page=content&id=2013316. The article offers three solutions:
  1. Disable ALUA on the igroup for the ESXi hosts with Microsoft Windows Clustered servers.
  2. Use dedicated initiators for the shared clustered RDMs with ALUA disabled and different initiators for the other LUNs such as VMFS volumes and non-shared RDMs.
  3. Use iSCSI within the Windows Servers for the shared disks.

⇚==================================================================================================⇛

HA Deepdive

Introduction to vSphere High Availability

Availability has traditionally been one of the most important aspects when providing services. When providing services on a shared platform like VMware vSphere, the impact of downtime exponentially grows as many services run on a single physical machine. As such VMware engineered a feature called VMware vSphere High Availability. VMware vSphere High Availability, hereafter simply referred to as HA, provides a simple and cost effective solution to increase availability for any application running in a virtual machine regardless of its operating system. It is configured using a couple of simple steps through vCenter Server (vCenter) and as such provides a uniform and simple interface. HA enables you to create a cluster out of multiple ESXi hosts. This will allow you to protect virtual machines and their workloads. In the event of a failure of one of the hosts in the cluster, impacted virtual machines are automatically restarted on other ESXi hosts within that same VMware vSphere Cluster (cluster).

Figure 1 - High Availability in action
On top of that, in the case of a Guest OS level failure, HA can restart the failed Guest OS. This feature is called VM Monitoring, but is sometimes also referred to as VM-HA. This might sound fairly complex but again can be implemented with a single click.
Figure 2 - OS Level HA just a single click away
Unlike many other clustering solutions, HA is a simple solution to implement and literally enabled within 5 clicks. On top of that, HA is widely adopted and used in all situations. However, HA is not a 1:1 replacement for solutions like Microsoft Clustering Services / Windows Server Failover Clustering (WSFC). The main difference between WSFC and HA is that WSFC was designed to protect stateful cluster-aware applications, while HA was designed to protect any virtual machine regardless of the type of workload within, but can also be extended to the application layer through the use of VM and Application Monitoring.
In the case of HA, a fail-over incurs downtime as the virtual machine is literally restarted on one of the remaining hosts in the cluster, whereas WSFC transitions the service to one of the remaining nodes in the cluster when a failure occurs. Contrary to what many believe, WSFC does not guarantee that there is no downtime during a transition. On top of that, your application needs to be cluster-aware and stateful in order to get the most out of this mechanism, which limits the number of workloads that could really benefit from this type of clustering.
One might ask why you would want to use HA when a virtual machine is restarted and service is temporarily lost. The answer is simple: not all virtual machines (or services) need 99.999% uptime. For many services the type of availability HA provides is more than sufficient. On top of that, many applications were never designed to run on top of a WSFC cluster. This means that there is no guarantee of availability or data consistency if an application is clustered with WSFC but is not cluster-aware.
In addition, WSFC clustering can be complex and requires special skills and training. One example is managing patches and updates/upgrades in a WSFC environment; this could even lead to more downtime if not operated correctly and definitely complicates operational procedures. HA, however, reduces complexity, costs (associated with downtime and WSFC), resource overhead and unplanned downtime for minimal additional costs. It is important to note that HA, contrary to WSFC, does not require any changes to the guest as HA is provided at the hypervisor level. Also, VM Monitoring does not require any additional software or OS modifications except for VMware Tools, which should be installed anyway as a best practice. In case even higher availability is required, VMware also provides a level of application awareness through Application Monitoring, which has been leveraged by partners like Symantec to enable application-level resiliency and could be used by in-house development teams to increase resiliency for their applications.

HA has proven itself over and over again and is widely adopted within the industry; if you are not using it today, hopefully you will be convinced after reading this section of the book.

vSphere 6.0

Before we dive into the main constructs of HA and describe all the choices one has to make when configuring HA, we will first briefly touch on what’s new in vSphere 6.0 and describe the basic requirements and steps needed to enable HA. This book covers all the released versions of what is known within VMware as “Fault Domain Manager” (FDM) which was introduced with vSphere 5.0. We will call out the differences in behavior in the different versions where applicable, our baseline however is vSphere 6.0.

What’s New in 6.0?

Compared to vSphere 5.0, the changes introduced with vSphere 6.0 for HA appear to be minor. However, some of the new functionality will make the lives of many of you much easier. Although the list is relatively short, from an engineering point of view many of these items took an enormous effort, as they required changes to the fundamentals of the HA architecture.
  • Support for Virtual Volumes – With Virtual Volumes a new type of storage entity is introduced in vSphere 6.0. This has also resulted in some changes in the HA architecture to accommodate this new way of storing virtual machines
  • Support for Virtual SAN – This was actually introduced with vSphere 5.5, but as it is new to many of you and led to changes in the architecture we decided to include it in this update
  • VM Component Protection – This allows HA to respond to a scenario where the connection to the virtual machine’s datastore is impacted temporarily or permanently
    • HA “Response for Datastore with All Paths Down”
    • HA “Response for Datastore with Permanent Device Loss”
  • Increased host scale – Cluster limit has grown from 32 to 64 hosts
  • Increased VM scale – Cluster limit has grown from 4000 VMs to 8000 VMs per cluster
  • Secure RPC – Secures the VM/App monitoring channel
  • Full IPv6 support
  • Registration of “HA Disabled” VMs on hosts after failure

What is required for HA to Work?

Each feature or product has very specific requirements and HA is no different. Knowing the requirements of HA is part of the basics we have to cover before diving into some of the more complex concepts. For those who are completely new to HA, we will also show you how to configure it.

Prerequisites

Before enabling HA we highly recommend validating that the environment meets all the prerequisites. We have also included recommendations from an infrastructure perspective that will enhance resiliency.
Requirements:
  • Minimum of two ESXi hosts
  • Minimum of 5GB memory per host to install ESXi and enable HA
  • VMware vCenter Server
  • Shared Storage for virtual machines
  • Pingable gateway or other reliable address
Recommendation:
  • Redundant Management Network (not a requirement, but highly recommended)
  • 8GB of memory or more per host
  • Multiple shared datastores

Firewall Requirements

The following table contains the ports that are used by HA for communication. If your environment contains firewalls external to the host, ensure these ports are opened for HA to function correctly. HA will open the required ports on the ESX or ESXi firewall.
Port   Protocol   Direction
8182   UDP        Inbound
8182   TCP        Inbound
8182   UDP        Outbound
8182   TCP        Outbound
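If firewalls sit between your hosts and vCenter, a quick reachability test of the TCP side of these ports can be scripted as below (a minimal sketch; the host names are placeholders, and UDP reachability cannot be verified this way without a listener on the far end):

```python
import socket

hosts = ["esxi01.example.com", "esxi02.example.com"]   # placeholder host names
port = 8182

for host in hosts:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}: TCP {port} reachable")
    except OSError as err:
        print(f"{host}: TCP {port} blocked or closed ({err})")
```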

Configuring vSphere High Availability

HA can be configured with the default settings within a couple of clicks. The following steps will show you how to create a cluster and enable HA, including VM Monitoring, using the vSphere Web Client. Each of the settings and the design decisions associated with these steps will be described in more depth in the following chapters.
  1. Click “Hosts & Clusters” under Inventories on the Home tab.
  2. Right-click the Datacenter in the Inventory tree and click New Cluster.
  3. Give the new cluster an appropriate name. We recommend at a minimum including the location of the cluster and a sequence number, e.g. ams-hadrs-001.
  4. Select Turn On vSphere HA.
  5. Ensure “Enable host monitoring” and “Enable admission control” are selected.
  6. Select “Percentage of cluster resources…” under Policy and specify a percentage.
  7. Enable VM Monitoring Status by selecting “VM and Application Monitoring”.
  8. Click “OK” to complete the creation of the cluster.
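For those who prefer to script these steps, roughly the same configuration can be expressed with pyVmomi as in the sketch below (the connection details, percentages and cluster name are placeholders; verify the property names against your vSphere API version):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# First datacenter in the inventory; adjust as needed
datacenter = [e for e in si.content.rootFolder.childEntity
              if isinstance(e, vim.Datacenter)][0]

das = vim.cluster.DasConfigInfo(
    enabled=True,                         # Turn On vSphere HA
    hostMonitoring="enabled",             # Enable host monitoring
    vmMonitoring="vmAndAppMonitoring",    # VM and Application Monitoring
    admissionControlEnabled=True,
    admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
        cpuFailoverResourcesPercent=25,   # example percentages only
        memoryFailoverResourcesPercent=25))

spec = vim.cluster.ConfigSpecEx(dasConfig=das)
cluster = datacenter.hostFolder.CreateClusterEx(name="ams-hadrs-001", spec=spec)
print("Created cluster:", cluster.name)

Disconnect(si)
```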

Figure 3 - Ready to complete the New Cluster Wizard


When the HA cluster has been created, the ESXi hosts can be added to the cluster simply by right clicking the host and selecting “Move To”, if they were already added to vCenter, or by right clicking the cluster and selecting “Add Host”.

When an ESXi host is added to the newly-created cluster, the HA agent will be loaded and configured. Once this has completed, HA will enable protection of the workloads running on this ESXi host.
As we have clearly demonstrated, HA is a simple clustering solution that will allow you to protect virtual machines against host failure and operating system failure in literally minutes. Understanding the architecture of HA will enable you to reach that extra 9 when it comes to availability. The following chapters will discuss the architecture and fundamental concepts of HA. We will also discuss all decision-making moments to ensure you will configure HA in such a way that it meets the requirements of your or your customer’s environment.

Components of High Availability

Now that we know what the prerequisites are and how to configure HA, the next step is to describe which components form HA. Keep in mind that this is still a “high level” overview. There is more under the covers that we will explain in the following chapters. The following diagram depicts a two-host cluster and shows the key HA components.
Figure 4 - Components of High Availability
As you can clearly see, there are three major components that form the foundation for HA as of vSphere 6.0:
  • FDM
  • HOSTD
  • vCenter
The first and probably the most important component that forms HA is FDM (Fault Domain Manager). This is the HA agent.
The FDM Agent is responsible for many tasks such as communicating host resource information, virtual machine states and HA properties to other hosts in the cluster. FDM also handles heartbeat mechanisms, virtual machine placement, virtual machine restarts, logging and much more. We are not going to discuss all of this in-depth separately as we feel that this will complicate things too much.
FDM, in our opinion, is one of the most important agents on an ESXi host, when HA is enabled, of course, and we are assuming this is the case. The engineers recognized this importance and added an extra level of resiliency to HA. FDM uses a single-process agent. However, FDM spawns a watchdog process. In the unlikely event of an agent failure, the watchdog functionality will pick up on this and restart the agent to ensure HA functionality remains without anyone ever noticing it failed. The agent is also resilient to network interruptions and “all paths down” (APD) conditions. Inter-host communication automatically uses another communication path (if the host is configured with redundant management networks) in the case of a network failure.
HA has no dependency on DNS as it works with IP addresses only. This is one of the major improvements that FDM brought. This does not mean that ESXi hosts need to be registered with their IP addresses in vCenter; it is still a best practice to register ESXi hosts by their fully qualified domain names (FQDN) in vCenter. Although HA does not depend on DNS, remember that other services may depend on it. On top of that, monitoring and troubleshooting will be much easier when hosts are correctly registered within vCenter and have a valid FQDN.
Basic design principle: Although HA is not dependent on DNS, it is still recommended to register the hosts with their FQDN for ease of operations/management.
vSphere HA also has a standardized logging mechanism, where a single log file has been created for all operational log messages; it is called fdm.log. This log file is stored under /var/log/ as depicted in Figure 5.

Figure 5 - HA log file
Basic design principle: Ensure syslog is correctly configured and log files are offloaded to a safe location to offer the possibility of performing a root cause analysis in case disaster strikes.

HOSTD Agent

One of the most crucial agents on a host is HOSTD. This agent is responsible for many of the tasks we take for granted, like powering on virtual machines. FDM talks directly to HOSTD and vCenter, so it is not dependent on VPXA as it was in previous releases. This is, of course, to avoid any unnecessary overhead and dependencies, making HA more reliable than ever before and enabling HA to respond faster to power-on requests. That ultimately results in higher VM uptime.
When, for whatever reason, HOSTD is unavailable or not yet running after a restart, the host will not participate in any FDM-related processes. FDM relies on HOSTD for information about the virtual machines that are registered to the host, and manages the virtual machines using HOSTD APIs. In short, FDM is dependent on HOSTD and if HOSTD is not operational, FDM halts all functions and waits for HOSTD to become operational.

vCenter

That brings us to our final component, the vCenter Server. vCenter is the core of every vSphere Cluster and is responsible for many tasks these days. For our purposes, the following are the most important and the ones we will discuss in more detail:
  • Deploying and configuring HA Agents
  • Communication of cluster configuration changes
  • Protection of virtual machines
vCenter is responsible for pushing out the FDM agent to the ESXi hosts when applicable. The push of these agents is done in parallel to allow for faster deployment and configuration of multiple hosts in a cluster. vCenter is also responsible for communicating configuration changes in the cluster to the host which is elected as the master. We will discuss this concept of master and slaves in the following chapter. Examples of configuration changes are modification or addition of an advanced setting or the introduction of a new host into the cluster.
HA leverages vCenter to retrieve information about the status of virtual machines and, of course, vCenter is used to display the protection status (Figure 6) of virtual machines. (What “virtual machine protection” actually means will be discussed in chapter 3.) On top of that, vCenter is responsible for the protection and unprotection of virtual machines. This not only applies to user initiated power-offs or power-ons of virtual machines, but also in the case where an ESXi host is disconnected from vCenter at which point vCenter will request the master HA agent to unprotect the affected virtual machines.
Figure 6 - Virtual machine protection state
Although HA is configured by vCenter and exchanges virtual machine state information with vCenter, vCenter is not involved when HA responds to failure. It is comforting to know that in the case of a failure of the host containing the virtualized vCenter Server, HA takes care of the failure and restarts the vCenter Server on another host, including all other configured virtual machines from that failed host.
There is a corner case scenario with regards to vCenter failure: if the ESXi hosts are so called “stateless hosts” and Distributed vSwitches are used for the management network, virtual machine restarts will not be attempted until vCenter is restarted. For stateless environments, vCenter and Auto Deploy availability is key as the ESXi hosts literally depend on them.
If vCenter is unavailable, it will not be possible to make changes to the configuration of the cluster. vCenter is the source of truth for the set of virtual machines that are protected, the cluster configuration, the virtual machine-to-host compatibility information, and the host membership. So, while HA, by design, will respond to failures without vCenter, HA relies on vCenter to be available to configure or monitor the cluster.
When a virtual vCenter Server, or the vCenter Server Appliance, has been implemented, we recommend setting the correct HA restart priorities for it. Although vCenter Server is not required to restart virtual machines, there are multiple components that rely on vCenter and, as such, a speedy recovery is desired. When configuring your vCenter virtual machine with a high priority for restarts, remember to include all services on which your vCenter server depends for a successful restart: DNS, MS AD and MS SQL (or any other database server you are using).

Basic design principles:
  1. In stateless environments, ensure vCenter and Auto Deploy are highly available as recovery time of your virtual machines might be dependent on them.
  2. Understand the impact of virtualizing vCenter. Ensure it has high priority for restarts and ensure that services which vCenter Server depends on are available: DNS, AD and database.

Fundamental Concepts

Now that you know about the components of HA, it is time to start talking about some of the fundamental concepts of HA clusters:
  • Master / Slave agents
  • Heartbeating
  • Isolated vs Network partitioned
  • Virtual Machine Protection
  • Component Protection
Everyone who has implemented vSphere knows that multiple hosts can be configured into a cluster. A cluster can best be seen as a collection of resources. These resources can be carved up with the use of vSphere Distributed Resource Scheduler (DRS) into separate pools of resources or used to increase availability by enabling HA.
The HA architecture introduces the concept of master and slave HA agents. Except during network partitions, which are discussed later, there is only one master HA agent in a cluster. Any agent can serve as a master, and all others are considered its slaves. A master agent is in charge of monitoring the health of virtual machines for which it is responsible and restarting any that fail. The slaves are responsible for forwarding information to the master agent and restarting any virtual machines at the direction of the master. The HA agent, regardless of its role as master or slave, also implements the VM/App Monitoring feature, which allows it to restart virtual machines in the case of an operating system failure or restart services in the case of an application failure.

Master Agent

As stated, one of the primary tasks of the master is to keep track of the state of the virtual machines it is responsible for and to take action when appropriate. In a normal situation there is only a single master in a cluster. We will discuss the scenario where multiple masters can exist in a single cluster in one of the following sections, but for now let’s talk about a cluster with a single master. A master will claim responsibility for a virtual machine by taking “ownership” of the datastore on which the virtual machine’s configuration file is stored.
Basic design principle: 
To maximize the chance of restarting virtual machines after a failure we recommend masking datastores on a cluster basis. Although sharing of datastores across clusters will work, it will increase complexity from an administrative perspective.
That is not all, of course. The HA master is also responsible for exchanging state information with vCenter. This means that it will not only receive but also send information to vCenter when required. The HA master is also the host that initiates the restart of virtual machines when a host has failed. You may immediately want to ask what happens when the master is the one that fails, or, more generically, which of the hosts can become the master and when is it elected?

Election

A master is elected by a set of HA agents whenever the agents are not in network contact with a master. A master election thus occurs when HA is first enabled on a cluster and when the host on which the master is running:
  • fails,
  • becomes network partitioned or isolated,
  • is disconnected from vCenter Server,
  • is put into maintenance or standby mode,
  • or when HA is reconfigured on the host.
The HA master election takes approximately 15 seconds and is conducted using UDP. While HA won’t react to failures during the election, once a master is elected, failures detected before and during the election will be handled. The election process is simple but robust. The host that is participating in the election with the greatest number of connected datastores will be elected master. If two or more hosts have the same number of datastores connected, the one with the highest Managed Object Id will be chosen. This however is done lexically; meaning that 99 beats 100 as 9 is larger than 1. For each host, the HA State of the host will be shown on the Summary tab. This includes the role as depicted in screenshot below where the host is a master host.
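As a worked example of the comparison just described (most connected datastores wins; ties are broken by the lexically highest Managed Object ID, so “99” beats “100”), consider this small illustrative sketch:

```python
# Candidate hosts: (name, number of connected datastores, numeric part of the MoID)
candidates = [("esxi01", 6, "99"), ("esxi02", 6, "100"), ("esxi03", 5, "250")]

# Most datastores first; ties broken by lexical (string) comparison of the MoID
winner = max(candidates, key=lambda h: (h[1], h[2]))
print("elected master:", winner[0])   # esxi01 - "99" sorts above "100" lexically
```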
After a master is elected, each slave that has management network connectivity with it will setup a single secure, encrypted, TCP connection to the master. This secure connection is SSL-based. One thing to stress here though is that slaves do not communicate with each other after the master has been elected unless a re-election of the master needs to take place.

Figure 7 - Master Agent

As stated earlier, when a master is elected it will try to acquire ownership of all of the datastores it can directly access or access by proxying requests to one of the slaves connected to it using the management network. For regular storage architectures it does this by locking a file called “protectedlist” that is stored on the datastores in an existing cluster. The master will also attempt to take ownership of any datastores it discovers along the way, and it will periodically retry any it could not take ownership of previously.

The naming format and location of this file is as follows:
/<root of datastore>/.vSphere-HA/<cluster-specific-directory>/protectedlist
For those wondering how “cluster-specific-directory” is constructed:
<uuid of vCenter Server>-<number part of the MoID of the cluster>-<random 8 character string>-<name of the host running vCenter Server>
The master uses this protectedlist file to store the inventory. It keeps track of which virtual machines are protected by HA. Calling it an inventory might be slightly overstating: it is a list of protected virtual machines and it includes information around virtual machine CPU reservation and memory overhead. The master distributes this inventory across all datastores in use by the virtual machines in the cluster. The next screenshot shows an example of this file on one of the datastores.
Figure 8 - Protectedlist file
Now that we know the master locks a file on the datastore and that this file stores inventory details, what happens when the master is isolated or fails? If the master fails, the answer is simple: the lock will expire and the new master will relock the file if the datastore is accessible to it.
In the case of isolation, this scenario is slightly different, although the result is similar. The master will release the lock it has on the file on the datastore to ensure that when a new master is elected it can determine the set of virtual machines that are protected by HA by reading the file. If, by any chance, a master should fail right at the moment that it became isolated, the restart of the virtual machines will be delayed until a new master has been elected. In a scenario like this, accuracy and the fact that virtual machines are restarted is more important than a short delay.
Let’s assume for a second that your master has just failed. What will happen and how do the slaves know that the master has failed? HA uses a point-to-point network heartbeat mechanism. If the slaves have received no network heartbeats from the master, the slaves will try to elect a new master. This new master will read the required information and will initiate the restart of the virtual machines within roughly 10 seconds.
Restarting virtual machines is not the only responsibility of the master. It is also responsible for monitoring the state of the slave hosts and reporting this state to vCenter Server. If a slave fails or becomes isolated from the management network, the master will determine which virtual machines must be restarted. When virtual machines need to be restarted, the master is also responsible for determining the placement of those virtual machines. It uses a placement engine that will try to distribute the virtual machines to be restarted evenly across all available hosts.

All of these responsibilities are really important, but without a mechanism to detect a slave has failed, the master would be useless. Just like the slaves receive heartbeats from the master, the master receives heartbeats from the slaves so it knows they are alive.

Slaves

A slave has substantially fewer responsibilities than a master: a slave monitors the state of the virtual machines it is running and informs the master about any changes to this state.
The slave also monitors the health of the master by monitoring heartbeats. If the master becomes unavailable, the slaves initiate and participate in the election process. Last but not least, the slaves send heartbeats to the master so that the master can detect outages. Like the master to slave communication, all slave to master communication is point to point. HA does not use multicast.
Figure 9 - Slave Agent

Files for both Slave and Master

Before explaining the details it is important to understand that both Virtual SAN and Virtual Volumes have introduced changes to the location and the usage of files. For specifics on these two different storage architectures we refer you to those respective sections in the book.
Both the master and slave use files not only to store state, but also as a communication mechanism. We’ve already seen the protectedlist file (Figure 8) used by the master to store the list of protected virtual machines. We will now discuss the files that are created by both the master and the slaves. Remote files are files stored on a shared datastore and local files are files that are stored in a location only directly accessible to that host.

Remote Files

The set of powered on virtual machines is stored in a per-host “poweron” file. It should be noted that, because a master also hosts virtual machines, it also creates a “poweron” file.
The naming scheme for this file is as follows: host-<number>-poweron
Tracking virtual machine power-on state is not the only thing the “poweron” file is used for. This file is also used by a slave to inform the master that it is isolated from the management network: the top line of the file will contain either a 0 or a 1. A 0 (zero) means not isolated and a 1 (one) means isolated. The master will inform vCenter about the isolation of the host.
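A minimal sketch of reading that isolation flag is shown below. Note that beyond “the top line contains 0 or 1”, the layout assumed here (one entry per subsequent line, and the example file path) is purely illustrative:

```python
def read_poweron(path):
    """Return (isolated, entries) parsed from a host's poweron file."""
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    isolated = lines[0] == "1"      # documented: 0 = not isolated, 1 = isolated
    entries = lines[1:]             # assumed: remaining lines describe powered-on VMs
    return isolated, entries

# Hypothetical location of a per-host poweron file on a shared datastore
isolated, entries = read_poweron("/vmfs/volumes/datastore01/.vSphere-HA/host-123-poweron")
print("isolated:", isolated, "entries:", len(entries))
```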

Local Files

As mentioned before, when HA is configured on a host, the host will store specific information about its cluster locally.
Figure 10 - Locally stored files

Each host, including the master, will store data locally. The data that is locally stored is important state information, namely the VM-to-host compatibility matrix, the cluster configuration, and the host membership list. This information is persisted locally on each host. Updates to this information are sent to the master by vCenter and propagated by the master to the slaves. Although we expect that most of you will never touch these files – and we highly recommend against modifying them – we do want to explain how they are used:

  • clusterconfig This file is not human-readable. It contains the configuration details of the cluster.
  • vmmetadata This file is not human-readable. It contains the actual compatibility info matrix for every HA protected virtual machine and lists all the hosts with which it is compatible plus a vm/host dictionary
  • fdm.cfg This file contains the configuration settings around logging. For instance, the level of logging and syslog details are stored in here.
  • hostlist A list of hosts participating in the cluster, including hostname, IP addresses, MAC addresses and heartbeat datastores.

Heartbeating

We have mentioned it a couple of times already in this chapter, and it is an important mechanism that deserves its own section: heartbeating. Heartbeating is the mechanism used by HA to validate whether a host is alive. HA has two different heartbeat mechanisms. These allow it to determine what has happened to a host when it is no longer responding. Let’s discuss traditional network heartbeating first.

Network Heartbeating

Network heartbeating is used by HA to determine if an ESXi host is alive. Each slave sends a heartbeat to its master and the master sends a heartbeat to each of the slaves; this is point-to-point communication. These heartbeats are sent by default every second.
When a slave isn’t receiving any heartbeats from the master, it will try to determine whether it is isolated – we will discuss “states” in more detail later on in this chapter.
Basic design principle: Network heartbeating is key for determining the state of a host. Ensure the management network is highly resilient to enable proper state determination.

Datastore Heartbeating

Datastore heartbeating adds an extra level of resiliency and prevents unnecessary restart attempts from occurring as it allows vSphere HA to determine whether a host is isolated from the network or is completely unavailable. How does this work?
Datastore heartbeating enables a master to more accurately determine the state of a host that is not reachable via the management network. The datastore heartbeat mechanism is used when the master has lost network connectivity with the slaves. It is then used to validate whether a host has failed or is merely isolated/network partitioned. Isolation will be validated through the “poweron” file which, as mentioned earlier, will be updated by the host when it is isolated. Without the “poweron” file, there is no way for the master to validate isolation. Let that be clear! Based on the results of checks of both files, the master will determine the appropriate action to take. If the master determines that a host has failed (no datastore heartbeats), the master will restart the failed host’s virtual machines. If the master determines that the slave is Isolated or Partitioned, it will only take action when appropriate, meaning that it will only initiate restarts when virtual machines are down or have been powered down / shut down by a triggered isolation response.
By default, HA selects 2 heartbeat datastores – it will select datastores that are available on all hosts, or as many as possible. Although it is possible to configure an advanced setting (das.heartbeatDsPerHost) to allow for more datastores for datastore heartbeating we do not recommend configuring this option as the default should be sufficient for most scenarios, except for stretched cluster environments where it is recommended to have two in each site manually selected.
The selection process gives preference to VMFS over NFS datastores, and seeks to choose datastores that are backed by different LUNs or NFS servers when possible. If desired, you can also select the heartbeat datastores yourself. We, however, recommend letting vCenter deal with this operational “burden” as vCenter uses a selection algorithm to select heartbeat datastores that are presented to all hosts. This, however, is not a guarantee that vCenter can select datastores which are connected to all hosts. It should be noted that vCenter is not site-aware. In scenarios where hosts are geographically dispersed it is recommended to manually select heartbeat datastores to ensure each site has at least one site-local heartbeat datastore.
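The preferences described above can be approximated with a simple heuristic like the sketch below. This is not VMware's actual selection algorithm; the datastore attributes are invented purely to illustrate the ranking (most hosts connected, VMFS before NFS, different backing LUNs/servers):

```python
# Each datastore: (name, type, backing identifier, number of hosts that can reach it)
datastores = [
    ("ds-vmfs-01", "VMFS", "lun-01", 8),
    ("ds-vmfs-02", "VMFS", "lun-02", 8),
    ("ds-vmfs-03", "VMFS", "lun-01", 8),   # same LUN as ds-vmfs-01
    ("ds-nfs-01",  "NFS",  "filer-a", 8),
]

def pick_heartbeat_datastores(candidates, count=2):
    # Prefer datastores reachable by the most hosts, then VMFS over NFS
    ranked = sorted(candidates, key=lambda d: (-d[3], d[1] != "VMFS"))
    chosen, used_backings = [], set()
    for ds in ranked:
        if ds[2] in used_backings:        # avoid two datastores on the same LUN/filer
            continue
        chosen.append(ds[0])
        used_backings.add(ds[2])
        if len(chosen) == count:
            break
    return chosen

print(pick_heartbeat_datastores(datastores))   # ['ds-vmfs-01', 'ds-vmfs-02']
```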

Basic design principle: In a metro-cluster / geographically dispersed cluster we recommend setting the minimum number of heartbeat datastores to four. It is recommended to manually select site local datastores, two for each site.

Figure 11 - Selecting the heartbeat datastores
The question now arises: what, exactly, is this datastore heartbeating and which datastore is used for this heartbeating? Let’s answer which datastore is used for datastore heartbeating first as we can simply show that with a screenshot, see below. vSphere displays extensive details around the “Cluster Status” on the Cluster’s Monitor tab. This for instance shows you which datastores are being used for heartbeating and which hosts are using which specific datastore(s). In addition, it displays how many virtual machines are protected and how many hosts are connected to the master.
Figure 12 - Validating
In block based storage environments HA leverages an existing VMFS file system mechanism. The datastore heartbeat mechanism uses a so-called “heartbeat region” which is updated as long as the file is open. On VMFS datastores, HA will simply check whether the heartbeat region has been updated. In order to update a datastore heartbeat region, a host needs to have at least one open file on the volume. HA ensures there is at least one file open on this volume by creating a file specifically for datastore heartbeating. In other words, a per-host file is created on the designated heartbeating datastores, as shown below. The naming scheme for this file is as follows: host-<number>-hb.

Figure 13 - Datastores used

On NFS datastores, each host will write to its heartbeat file once every 5 seconds, ensuring that the master will be able to check host state. The master will simply validate this by checking that the time-stamp of the file changed.
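Conceptually, that time-stamp check boils down to comparing the heartbeat file's modification time against the expected update cadence, as in this simplified sketch (the path and the liveness threshold are illustrative assumptions):

```python
import os
import time

def host_heartbeating(hb_file, max_age_seconds=15):
    """Treat the host as alive if its heartbeat file changed recently."""
    try:
        age = time.time() - os.path.getmtime(hb_file)
    except FileNotFoundError:
        return False
    return age <= max_age_seconds

# Hypothetical per-host heartbeat file on an NFS heartbeat datastore
print(host_heartbeating("/vmfs/volumes/nfs-ds01/.vSphere-HA/host-42-hb"))
```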
Realize that in the case of a converged network environment, the effectiveness of datastore heartbeating will vary depending on the type of failure. For instance, a NIC failure could impact both network and datastore heartbeating. If, for whatever reason, the datastore or NFS share becomes unavailable or is removed from the cluster, HA will detect this and select a new datastore or NFS share to use for the heartbeating mechanism.
Basic design principle
Datastore heartbeating adds a new level of resiliency but is not the be-all end-all. In converged networking environments, the use of datastore heartbeating adds little value due to the fact that a NIC failure may result in both the network and storage becoming unavailable.

Isolated versus Partitioned

We’ve already briefly touched on it and it is time to have a closer look. When it comes to network failures there are two different states that can exist. What are these exactly and when is a host Partitioned rather than Isolated? Before we explain this, we want to point out that there is a difference between the state as reported by the master and the state as observed by an administrator, and that each has its own characteristics.
First, consider the administrator’s perspective. Two hosts are considered partitioned if they are operational but cannot reach each other over the management network. Further, a host is isolated if it does not observe any HA management traffic on the management network and it can’t ping the configured isolation addresses. It is possible for multiple hosts to be isolated at the same time. We call a set of hosts that are partitioned but can communicate with each other a “management network partition”. Network partitions involving more than two partitions are possible but not likely.

Now, consider the HA perspective. When any HA agent is not in network contact with a master, they will elect a new master. So, when a network partition exists, a master election will occur so that a host failure or network isolation within this partition will result in appropriate action on the impacted virtual machine(s). The screenshot below shows possible ways in which an Isolation or a Partition can occur.
Figure 14 - Isolated versus Partitioned
If a cluster is partitioned in multiple segments, each partition will elect its own master, meaning that if you have 4 partitions your cluster will have 4 masters. When the network partition is corrected, one of the four masters will take over the role and be responsible for the cluster again. This will be done using the election algorithm (most connected datastores, highest lexical number). It should be noted that a master could claim responsibility for a virtual machine that lives in a different partition. If this occurs and the virtual machine happens to fail, the master will be notified through the datastore communication mechanism.
In the HA architecture, whether a host is partitioned is determined by the master reporting the condition. So, in the above example, the master on host ESXi-01 will report ESXi-03 and ESXi-04 partitioned while the master on host ESXi-03 will report ESXi-01 and ESXi-02 partitioned. When a partition occurs, vCenter reports the perspective of one master.
A master reports a host as partitioned or isolated when it can’t communicate with the host over the management network but it can observe the host’s datastore heartbeats via the heartbeat datastores. The master alone cannot differentiate between these two states – a host is reported as isolated only if the host informs the master via the datastores that it is isolated.
This still leaves open the question of how the master differentiates between a Failed, Partitioned, or Isolated host.
When the master stops receiving network heartbeats from a slave, it will check for host “liveness” for the next 15 seconds. Before the host is declared failed, the master will validate whether it has actually failed by doing additional liveness checks. First, the master will validate if the host is still heartbeating to the datastore. Second, the master will ping the management IP address of the host. If both are negative, the host will be declared Failed. This doesn’t necessarily mean the host has PSOD’ed; it could be that the network is unavailable, including the storage network, which would make this host Isolated from an administrator’s perspective but Failed from an HA perspective. As you can imagine, however, there are various combinations possible. The following table depicts these combinations including the “state”.
State            Network Heartbeat   Storage Heartbeat   Host Liveness Ping   Isolation Criteria Met
Running          Yes                 N/A                 N/A                  N/A
Isolated         No                  Yes                 No                   Yes
Partitioned      No                  Yes                 No                   No
Failed           No                  No                  No                   N/A
FDM Agent Down   N/A                 N/A                 Yes                  N/A
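Read as a decision procedure, the table translates into something like the following sketch (a simplification of what the master actually does, for illustration only):

```python
def classify_host(network_hb, datastore_hb, ping_ok, isolation_flag_set):
    """Return the HA state implied by the liveness checks in the table above."""
    if network_hb:
        return "Running"
    if datastore_hb:
        # Host is alive on storage: isolated if it raised the flag in its poweron file
        return "Isolated" if isolation_flag_set else "Partitioned"
    if ping_ok:
        return "FDM Agent Down"    # host answers ping but sends no heartbeats at all
    return "Failed"

print(classify_host(network_hb=False, datastore_hb=True, ping_ok=False, isolation_flag_set=False))
# -> Partitioned
```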
HA will trigger an action based on the state of the host. When the host is marked as Failed, a restart of the virtual machines will be initiated. When the host is marked as Isolated, the master might initiate the restarts.
The one thing to keep in mind when it comes to isolation response is that a virtual machine will only be shut down or powered off when the isolated host knows there is a master out there that has taken ownership for the virtual machine or when the isolated host loses access to the home datastore of the virtual machine.
For example, if a host is isolated and runs two virtual machines, stored on separate datastores, the host will validate whether it can access each of the home datastores of those virtual machines. If it can, the host will validate whether a master owns these datastores. If no master owns the datastores, the isolation response will not be triggered and restarts will not be initiated. If the host does not have access to the datastore, for instance during an “All Paths Down” condition, HA will trigger the isolation response to ensure the “original” virtual machine is powered down and can be safely restarted. This is to avoid so-called “split-brain” scenarios.
To reiterate, as this is a very important aspect of HA and how it handles network isolations, the remaining hosts in the cluster will only be requested to restart virtual machines when the master has detected that either the host has failed or has become isolated and the isolation response was triggered.

Virtual Machine Protection

Virtual machine protection happens on several layers but is ultimately the responsibility of vCenter. We have explained this briefly but want to expand on it a bit more to make sure everyone understands the dependency on vCenter when it comes to protecting virtual machines. We do want to stress that this only applies to protecting virtual machines; virtual machine restarts in no way require vCenter to be available at the time.
When the state of a virtual machine changes, vCenter will direct the master to enable or disable HA protection for that virtual machine. Protection, however, is only guaranteed when the master has committed the change of state to disk. The reason for this, of course, is that a failure of the master would result in the loss of any state changes that exist only in memory. As pointed out earlier, this state is distributed across the datastores and stored in the “protectedlist” file.
When the power state change of a virtual machine has been committed to disk, the master will inform vCenter Server so that the change in status is visible both for the user in vCenter and for other processes like monitoring tools.

To clarify the process, we have created a workflow diagram of the protection of a virtual machine from the point it is powered on through vCenter:
Figure 15 - Virtual Machine protection workflow

But what about “unprotection?” When a virtual machine is powered off, it must be removed from the protectedlist. We have documented this workflow in the following diagram for the situation where the power off is invoked from vCenter.


Figure 16 - Virtual Machine Unprotection workflow

Restarting Virtual Machines

In the previous chapter, we have described most of the lower level fundamental concepts of HA. We have shown you that multiple mechanisms increase resiliency and reliability of HA. Reliability of HA in this case mostly refers to restarting (or resetting) virtual machines, as that remains HA’s primary task.
HA will respond when the state of a host has changed, or, better said, when the state of one or more virtual machines has changed. There are multiple scenarios in which HA will respond to a virtual machine failure, the most common of which are listed below:
  • Failed host
  • Isolated host
  • Failed guest operating system
Depending on the type of failure, but also depending on the role of the host, the process will differ slightly. Changing the process results in slightly different recovery timelines. There are many different scenarios and there is no point in covering all of them, so we will try to describe the most common scenario and include timelines where possible.
Before we dive into the different failure scenarios, we want to explain how restart priority and retries work.

Restart Priority and Order

HA can take the configured priority of the virtual machine into account when restarting VMs. However, it is good to know that Agent VMs take precedence during the restart procedure as the “regular” virtual machines may rely on them. A good example of an agent virtual machine is a virtual storage appliance.
Prioritization is done by each host and not globally. Each host that has been requested to initiate restart attempts will attempt to restart all top priority virtual machines before attempting to start any other virtual machines. If the restart of a top priority virtual machine fails, it will be retried after a delay. In the meantime, however, HA will continue powering on the remaining virtual machines. Keep in mind that some virtual machines might be dependent on the agent virtual machines. You should document which virtual machines are dependent on which agent virtual machines and document the process to start up these services in the right order in the case the automatic restart of an agent virtual machine fails.
Basic design principle: Virtual machines can be dependent on the availability of agent virtual machines or other virtual machines. Although HA will do its best to ensure all virtual machines are started in the correct order, this is not guaranteed. Document the proper recovery process.
Besides agent virtual machines, HA also prioritizes FT secondary machines. We have listed the full order in which virtual machines will be restarted below:
  • Agent virtual machines
  • FT secondary virtual machines
  • Virtual Machines configured with a restart priority of high
  • Virtual Machines configured with a medium restart priority
  • Virtual Machines configured with a low restart priority
It should be noted that HA will not place any virtual machines on a host if the required number of agent virtual machines are not running on the host at the time placement is done.
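The restart order above maps naturally onto a simple sort key, as in the illustrative sketch below (actual placement also takes available resources and the agent-VM check into account):

```python
RESTART_ORDER = {"agent": 0, "ft-secondary": 1, "high": 2, "medium": 3, "low": 4}

vms = [("vm-db01", "high"), ("vsa-01", "agent"), ("vm-web01", "low"), ("vm-app01", "medium")]
for name, prio in sorted(vms, key=lambda v: RESTART_ORDER[v[1]]):
    print(name, prio)
# vsa-01 agent, vm-db01 high, vm-app01 medium, vm-web01 low
```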
Now that we have briefly touched on it, we would also like to address “restart retries” and parallelization of restarts as that more or less dictates how long it could take before all virtual machines of a failed or isolated host are restarted.

Restart Retries

The number of retries is configurable as of vCenter 2.5 U4 with the advanced option “das.maxvmrestartcount”. The default value is 5. Note that the initial restart is included.
HA will try to start the virtual machine on one of your hosts in the affected cluster; if this is unsuccessful on that host, the restart count will be increased by 1. Before we go into the exact timeline, let it be clear that T0 is the point at which the master initiates the first restart attempt. This by itself could be 30 seconds after the virtual machine has failed. The elapsed time between the failure of the virtual machine and the restart, though, will depend on the scenario of the failure, which we will discuss in this chapter.
As said, the default number of restarts is 5. There are specific times associated with each of these attempts. The following bullet list will clarify this concept. The ‘m’ stands for “minutes” in this list.
  • T0 – Initial Restart
  • T2m – Restart retry 1
  • T6m – Restart retry 2
  • T14m – Restart retry 3
  • T30m – Restart retry 4
Figure 17 - High Availability restart timeline
As clearly depicted in the diagram above, a successful power-on attempt could take up to ~30 minutes in the case where multiple power-on attempts are unsuccessful. This is, however, not exact science. For instance, there is a 2-minute waiting period between the initial restart and the first restart retry. HA will start the 2-minute wait as soon as it has detected that the initial attempt has failed. So, in reality, T2 could be T2 plus 8 seconds. Another important fact that we want to emphasize is that there is no coordination between masters, so if multiple masters are involved in trying to restart the virtual machine, each will retain its own sequence. Multiple masters could attempt to restart a virtual machine; although only one will succeed, it might change some of the timelines.
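The listed retry times follow a doubling interval (2, 4, 8, 16 minutes). A small sketch of the cumulative schedule, assuming the interval keeps doubling up to a 16-minute cap and the default das.maxvmrestartcount of 5:

```python
def restart_schedule(max_restart_count=5):
    """Minutes (relative to T0) at which each restart attempt is issued."""
    times, gap, t = [0], 2, 0
    for _ in range(max_restart_count - 1):
        t += gap
        times.append(t)
        gap = min(gap * 2, 16)     # assumed: the gap doubles and caps at 16 minutes
    return times

print(restart_schedule())          # [0, 2, 6, 14, 30]  -> T0, T2m, T6m, T14m, T30m
```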
What about VMs which are “disabled” for HA or VMs that are powered off? What will happen to those VMs? Before vSphere 6.0 those VMs would be left alone; as of vSphere 6.0 these VMs will be registered on another host after a failure. This will allow you to easily power on that VM when needed without needing to manually re-register it yourself. Note that HA will not power on the VM, it will just register it for you! (Note that a bug in vSphere 6.0 U2 prevents this from happening, and you need vSphere 6.0 U3 for this functionality to work.)
Let’s give an example to clarify the scenario in which a master fails during a restart sequence:
Cluster: 4 Host (esxi01, esxi02, esxi03, esxi04)
Master: esxi01
The host “esxi02” is running a single virtual machine called “vm01” and it fails. The master, esxi01, will try to restart it but the attempt fails. It will try restarting “vm01” up to 5 times but, unfortunately, on the 4th try, the master also fails. An election occurs and “esxi03” becomes the new master. It will now initiate the restart of “vm01”, and if that restart fails it will retry up to 4 more times, for a total of 5 including its own initial restart.
Be aware, though, that a successful restart might never occur if the restart count is reached and all five restart attempts (the default value) were unsuccessful.
When it comes to restarts, one thing that is very important to realize is that HA will not issue more than 32 concurrent power-on tasks on a given host. To make that clearer, let’s use the example of a two-host cluster: if a host fails which contained 33 virtual machines and all of these had the same restart priority, 32 power-on attempts would be initiated. The 33rd power-on attempt will only be initiated when one of those 32 attempts has completed, regardless of the success or failure of that attempt.
Now, here comes the gotcha. If there are 32 low-priority virtual machines to be powered on and a single high-priority virtual machine, the power on attempt for the low-priority virtual machines will not be issued until the power on attempt for the high priority virtual machine has completed. Let it be absolutely clear that HA does not wait to restart the low-priority virtual machines until the high-priority virtual machines are started, it waits for the issued power on attempt to be reported as “completed”. In theory, this means that if the power on attempt fails, the low-priority virtual machines could be powered on before the high priority virtual machine.
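To make the 32-task limit concrete, the sketch below simulates the queueing just described (this is not FDM code; it only shows that the 33rd power-on waits for a free slot, not for a successful power-on):

```python
from collections import deque

MAX_CONCURRENT = 32

def issue_power_ons(vms):
    """Simulate issuing power-on tasks with at most 32 in flight per host."""
    pending, in_flight, order = deque(vms), [], []
    while pending or in_flight:
        while pending and len(in_flight) < MAX_CONCURRENT:
            vm = pending.popleft()
            in_flight.append(vm)
            order.append(vm)
        # One task completes; success or failure does not matter for releasing the slot
        in_flight.pop(0)
    return order

vms = [f"vm{i:02d}" for i in range(1, 34)]      # 33 VMs, same restart priority
print(issue_power_ons(vms)[-1])                 # vm33 is only issued once a slot frees up
```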
The restart priority however does guarantee that when a placement is done, the higher priority virtual machines get first right to any available resources.
Basic design principle: Configuring restart priority of a virtual machine is not a guarantee that virtual machines will actually be restarted in this order. Ensure proper operational procedures are in place for restarting services or virtual machines in the appropriate order in the event of a failure.
Now that we know how virtual machine restart priority and restart retries are handled, it is time to look at the different scenarios.
  • Failed host
    • Failure of a master
    • Failure of a slave
  • Isolated host and response

Failed Host

When discussing a failed host scenario it is necessary to make a distinction between the failure of a master and the failure of a slave. We want to emphasize this because the time it takes before a restart attempt is initiated differs between these two scenarios. Although the majority of you probably won’t notice the time difference, it is important to call out. Let’s start with the most common failure, that of a host failing, but note that failures generally occur infrequently. In most environments, hardware failures are very uncommon to begin with. Just in case it happens, it doesn’t hurt to understand the process and its associated timelines.

The Failure of a Slave

The failure of a slave host is a fairly complex scenario. Part of this complexity comes from the introduction of a new heartbeat mechanism. Actually, there are two different scenarios: one where heartbeat datastores are configured and one where heartbeat datastores are not configured. Keeping in mind that this is an actual failure of the host, the timeline is as follows:
  • T0 – Slave failure.
  • T3s – Master begins monitoring datastore heartbeats for 15 seconds.
  • T10s – The host is declared unreachable and the master will ping the management network of the failed host. This is a continuous ping for 5 seconds.
  • T15s – If no heartbeat datastores are configured, the host will be declared dead.
  • T18s – If heartbeat datastores are configured, the host will be declared dead.
The master monitors the network heartbeats of a slave. When the slave fails, these heartbeats will no longer be received by the master. We have defined this as T0. After 3 seconds (T3s), the master will start monitoring for datastore heartbeats and it will do this for 15 seconds. On the 10th second (T10s), when no network or datastore heartbeats have been detected, the host will be declared as “unreachable”. The master will also start pinging the management network of the failed host at the 10th second and it will do so for 5 seconds. If no heartbeat datastores were configured, the host will be declared “dead” at the 15th second (T15s) and virtual machine restarts will be initiated by the master. If heartbeat datastores have been configured, the host will be declared dead at the 18th second (T18s) and restarts will be initiated. We realize that this can be confusing and hope the timeline depicted in the diagram below makes it easier to digest.
Figure 18 - Restart timeline slave failure
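As a minimal sketch of the timings just described: the constants below simply restate the timeline above, and the function is ours, not part of any VMware tooling.

```python
def slave_declared_dead_at(heartbeat_datastores_configured):
    """Seconds after a slave failure (T0) before the master declares it dead
    and initiates virtual machine restarts."""
    if heartbeat_datastores_configured:
        return 18  # network heartbeats lost, datastore heartbeats checked, ping fails
    return 15      # no heartbeat datastores: declared dead after the 5-second ping

print(slave_declared_dead_at(False))  # 15 -> restarts initiated at T15s
print(slave_declared_dead_at(True))   # 18 -> restarts initiated at T18s
```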
Before initiating restarts, the master filters out the virtual machines it believes have failed. The master uses the protectedlist for this; the on-disk state can only be obtained by one master at a time, since it requires opening the protectedlist file in exclusive mode. If there is a network partition, multiple masters could try to restart the same virtual machine, as vCenter Server also provides the necessary details for a restart. As an example, one master may have locked a virtual machine’s home datastore and have access to the protectedlist, while another master is in contact with vCenter Server and as such is aware of the current desired protected state. In this scenario the master which does not own the home datastore of the virtual machine may restart the virtual machine based on the information provided by vCenter Server.
This change in behavior was introduced to avoid the scenario where a restart of a virtual machine would fail due to insufficient resources in the partition which was responsible for the virtual machine. With this change, there is less chance of such a situation occurring as the master in the other partition would be using the information provided by vCenter Server to initiate the restart.
That leaves us with the question of what happens in the case of the failure of a master.

The Failure of a Master

In the case of a master failure, the process and the associated timeline are slightly different. The reason being that there needs to be a master before any restart can be initiated. This means that an election will need to take place amongst the slaves. The timeline is as follows:
  • T0 – Master failure.
  • T10s – Master election process initiated.
  • T25s – New master elected and reads the protectedlist.
  • T35s – New master initiates restarts for all virtual machines on the protectedlist which are not running.
Slaves receive network heartbeats from their master. If the master fails (let’s define this as T0), the slaves detect this when the network heartbeats cease to be received. As every cluster needs a master, the slaves will initiate an election at T10s. The election process takes 15 seconds to complete, which brings us to T25s. At T25s, the new master reads the protectedlist. This list contains all the virtual machines which are protected by HA. At T35s, the master initiates the restart of all virtual machines that are protected but not currently running. The timeline depicted in the diagram below hopefully clarifies the process.
Figure 19 - Restart timeline master failure

Besides the failure of a host, there is another reason for restarting virtual machines: an isolation event.

Isolation Response and Detection

Before we will discuss the timeline and the process around the restart of virtual machines after an isolation event, we will discuss Isolation Response and Isolation Detection. One of the first decisions that will need to be made when configuring HA is the “Isolation Response”.

Isolation Response

The Isolation Response (or Host Isolation as it is called in vSphere 6.0) refers to the action that HA takes for its virtual machines when the host has lost its connection with the network and the remaining nodes in the cluster. This does not necessarily mean that the whole network is down; it could just be the management network ports of this specific host. Today there are two isolation responses: “Power off” and “Shut down”. In previous versions (pre vSphere 6.0) there was also an isolation response called “leave powered on”; this has been renamed to “disabled”, as “leave powered on” means that there is no response to an isolation event.
Figure 20 - vSphere 6.0 Isolation Response settings
The isolation response feature answers the question, “what should a host do with the virtual machines it manages when it detects that it is isolated from the network?” Let’s discuss these three options more in-depth:

  • Disabled (default) – When isolation occurs on the host, the state of the virtual machines remains unchanged.
  • Power off – When isolation occurs, all virtual machines are powered off. It is a hard stop, or to put it bluntly, the “virtual” power cable of the virtual machine will be pulled out!
  • Shut down – When isolation occurs, all virtual machines running on the host will be shut down using a guest-initiated shutdown through VMware Tools. If this is not successful within 5 minutes, a “power off” will be executed. This timeout value can be adjusted by setting the advanced option das.isolationShutdownTimeout. If VMware Tools is not installed, a “power off” will be initiated immediately.
This setting can be changed on the cluster settings under virtual machine options.
Figure 21 - Cluster default settings
The default setting for the isolation response has changed multiple times over the last couple of years and this has caused some confusion.
  • Up to ESXi3.5 U2 / vCenter 2.5 U2 the default isolation response was “Power off”
  • With ESXi3.5 U3 / vCenter 2.5 U3 this was changed to “Leave powered on”
  • With vSphere 4.0 it was changed to “Shut down”.
  • With vSphere 5.0 it has been changed to “Leave powered on”.
  • With vSphere 6.0 the "leave powered on" setting is now renamed to "Disabled".
Keep in mind that these changes are only applicable to newly created clusters. When creating a new cluster, it may be required to change the default isolation response based on the configuration of existing clusters and/or your customer’s requirements, constraints and expectations. When upgrading an existing cluster, it might be wise to apply the latest default values. You might wonder why the default has changed once again. There was a lot of feedback from customers that “Disabled” was the desired default value.
Basic design principle: Before upgrading an environment to later versions, ensure you validate the best practices and default settings. Document them, including justification, to ensure all people involved understand your reasons.
The question remains, which setting should be used? The obvious answer applies here; it depends. We prefer “Disabled” because it eliminates the chance of a false positive and its associated down time. One of the problems that people have experienced in the past is that HA triggered its isolation response when the full management network went down, basically resulting in the power off (or shutdown) of every single virtual machine with none being restarted. This problem has been mitigated. HA will validate whether virtual machine restarts can be attempted – there is no reason to incur any down time unless absolutely necessary. It does this by validating that a master owns the datastore the virtual machine is stored on. Of course, the isolated host can only validate this if it has access to the datastores. In a converged network environment with iSCSI storage, for instance, it would be impossible to validate this during a full isolation, as the validation would fail due to the inaccessible datastore from the perspective of the isolated host.
We feel that changing the isolation response is most useful in environments where a failure of the management network is likely to correlate with a failure of the virtual machine network(s). If a failure of the management network is unlikely to correspond with a failure of the virtual machine networks, the isolation response would cause unnecessary downtime, as the virtual machines can continue to run without management network connectivity to the host.
A second use for power off / shut down is in scenarios where the virtual machine retains access to the virtual machine network but loses access to its storage; leaving the virtual machine powered on could then result in two virtual machines on the network with the same IP address.
It is still difficult to decide which isolation response should be used. The following table was created to provide some more guidelines.
Likelihood that host will retain access to VM datastore | Likelihood VMs will retain access to VM network | Recommended isolation policy | Rationale
Likely | Likely | Disabled | Virtual machine is running fine, no reason to power it off
Likely | Unlikely | Either Disabled or Shut down | Choose Shut down to allow HA to restart virtual machines on hosts that are not isolated and hence are likely to have access to storage
Unlikely | Likely | Power off | Use Power off to avoid having two instances of the same virtual machine on the virtual machine network
Unlikely | Unlikely | Disabled or Power off | Disabled if the virtual machine can recover from the network/datastore outage if it is not restarted because of the isolation, and Power off if it likely can’t
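As a quick illustration, the same guidance can be expressed as a small decision helper. This is just a restatement of the table above in code; the function name and boolean parameters are ours.

```python
def recommended_isolation_response(datastore_access_likely, vm_network_access_likely):
    """Return the recommended isolation response from the table above."""
    if datastore_access_likely and vm_network_access_likely:
        return "Disabled"               # VM is running fine, no reason to power it off
    if datastore_access_likely:
        return "Disabled or Shut down"  # Shut down lets HA restart the VM on a non-isolated host
    if vm_network_access_likely:
        return "Power off"              # avoid two instances of the same VM on the network
    return "Disabled or Power off"      # depends on whether the VM can recover on its own

print(recommended_isolation_response(False, True))  # Power off
```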
The question that we haven’t answered yet is how HA knows which virtual machines have been powered-off due to the triggered isolation response and why the isolation response is more reliable than with previous versions of HA. Previously, HA did not care and would always try to restart the virtual machines according to the last known state of the host. That is no longer the case. Before the isolation response is triggered, the isolated host will verify whether a master is responsible for the virtual machine.
As mentioned earlier, it does this by validating if a master owns the home datastore of the virtual machine. When isolation response is triggered, the isolated host removes the virtual machines which are powered off or shutdown from the “poweron” file. The master will recognize that the virtual machines have disappeared and initiate a restart. On top of that, when the isolation response is triggered, it will create a per-virtual machine file under a “poweredoff” directory which indicates for the master that this virtual machine was powered down as a result of a triggered isolation response. This information will be read by the master node when it initiates the restart attempt in order to guarantee that only virtual machines that were powered off / shut down by HA will be restarted by HA.
This is, however, only one part of the increased reliability of HA. Reliability has also been improved with respect to “isolation detection,” which will be described in the following section.

Isolation Detection

We have explained what the options are to respond to an isolation event and what happens when the selected response is triggered. However, we have not extensively discussed how isolation is detected. The mechanism is fairly straightforward and works with heartbeats, as earlier explained. There are, however, two scenarios again, and the process and associated timelines differ for each of them:
  • Isolation of a slave
  • Isolation of a master
Before we explain the differences in process between the two scenarios, we want to make it clear that a change in state will prevent the isolation response from being triggered in either scenario. In other words, if a single ping is successful or the host observes election traffic and is elected a master or slave, the isolation response will not be triggered, which is exactly what you want, as avoiding downtime is at least as important as recovering from downtime. When a host has declared itself isolated and then observes election traffic, it will declare itself no longer isolated.

Isolation of a Slave

HA triggers a master election process before it will declare a host is isolated. In the below timeline, “s” refers to seconds.
  • T0 – Isolation of the host (slave)
  • T10s – Slave enters “election state”
  • T25s – Slave elects itself as master
  • T25s – Slave pings “isolation addresses”
  • T30s – Slave declares itself isolated
  • T60s – Slave “triggers” isolation response
When the isolation response is triggered, HA creates a “power-off” file for any virtual machine it powers off whose home datastore is accessible. Next it powers off (or shuts down) the virtual machine and updates the host’s poweron file. The power-off file is used to record that HA powered off the virtual machine and that HA should therefore restart it. These power-off files are deleted when a virtual machine is powered back on or HA is disabled; the screenshot below shows such a power-off file, which in this case is stored on a VVol.
Figure 22 - VVol based poweroff file
After the completion of this sequence, the master will learn the slave was isolated through the “poweron” file as mentioned earlier, and will restart virtual machines based on the information provided by the slave.
Figure 23 - Isolation of a slave timeline

Isolation of a Master

In the case of the isolation of a master, this timeline is a bit less complicated because there is no need to go through an election process. In this timeline, “s” refers to seconds.
  • T0 – Isolation of the host (master)
  • T0 – Master pings “isolation addresses”
  • T5s – Master declares itself isolated
  • T35s – Master “triggers” isolation response
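Putting the two isolation timelines next to each other makes the difference explicit. The numbers below simply restate the timelines above; note that in both cases the gap between “declares itself isolated” and “triggers isolation response” appears to correspond to the default 30-second isolation policy delay discussed later in this chapter.

```python
# Seconds from the moment of isolation (T0) until the isolation response triggers,
# assuming the default 30-second isolation policy delay.
ISOLATION_RESPONSE_AT = {
    "slave": 60,   # election at T10-T25, pings isolation addresses, isolated at T30
    "master": 35,  # no election needed: pings at T0, declares itself isolated at T5
}
for role, seconds in ISOLATION_RESPONSE_AT.items():
    print("Isolated %s: isolation response triggered ~%ds after T0" % (role, seconds))
```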

Additional Checks

Before a host declares itself isolated, it will ping the default isolation address, which is the gateway specified for the management network, and it will continue to ping this address until it becomes unisolated. HA gives you the option to define one or more additional isolation addresses using an advanced setting. This advanced setting is called das.isolationaddress and can be used to reduce the chances of a false positive. We recommend setting an additional isolation address; if required, you can configure up to 10 of them. If a secondary management network is configured, it will more than likely be on a different subnet, and it is recommended to specify an additional isolation address that is part of that subnet.
Figure 24 - Isolation Address
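If you prefer to set this through the API rather than the Web Client, the rough pyVmomi sketch below builds a cluster reconfiguration spec that adds das.isolationaddress0 as an HA advanced option. The IP address is a placeholder, and applying the spec (shown only as a comment) assumes you already have a connection to vCenter and a reference to the cluster object.

```python
from pyVmomi import vim

# Placeholder value: use a reliable, pingable address on your (secondary)
# management network.
extra_isolation_address = "192.168.10.1"

das_options = [
    vim.option.OptionValue(key="das.isolationaddress0",
                           value=extra_isolation_address),
]

# HA advanced options live in the cluster's dasConfig.
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(option=das_options))

# With an existing vCenter connection and a reference to the cluster object,
# the spec could then be applied with something like:
#   task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```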

Selecting an Additional Isolation Address

A question asked by many people is which address should be specified for this additional isolation verification. We generally recommend an isolation address close to the hosts to avoid too many network hops and an address that would correlate with the liveness of the virtual machine network. In many cases, the most logical choice is the physical switch to which the host is directly connected. Basically, use the gateway for whatever subnet your management network is on. Another usual suspect would be a router or any other reliable and pingable device on the same subnet. However, when you are using IP-based shared storage like NFS or iSCSI, the IP-address of the storage device can also be a good choice.
Basic design principle: Select a reliable secondary isolation address. Try to minimize the number of “hops” between the host and this address.

Isolation Policy Delay

For those who want to increase the time it takes before HA executes the isolation response, an advanced setting is available. This setting is called “das.config.fdm.isolationPolicyDelaySec” and allows changing the number of seconds to wait before the isolation policy is executed. The minimum value is 30; if set to a value less than 30, the delay will still be 30 seconds. We do not recommend changing this advanced setting unless there is a specific requirement to do so. In almost all scenarios 30 seconds should suffice.

Restarting Virtual Machines

The most important procedure has not yet been explained: restarting virtual machines. We have dedicated a full section to this concept.
We have explained the difference in behavior from a timing perspective for restarting virtual machines in the case of both master node and slave node failures. For now, let’s assume that a slave node has failed. When the master node declares the slave node Partitioned or Isolated, it determines which virtual machines were running on it using the information it previously read from the host’s “poweron” file. These files are read asynchronously, approximately every 30 seconds. If the host was not Partitioned or Isolated before the failure, the master uses cached data to determine which virtual machines were last running on the host before the failure occurred.
Before it will initiate the restart attempts, though, the master will first validate that the virtual machine should be restarted. This validation uses the protection information vCenter Server provides to each master, or if the master is not in contact with vCenter Server, the information saved in the protectedlist files. If the master is not in contact with vCenter Server or has not locked the file, the virtual machine is filtered out. At this point, all virtual machines having a restart priority of “disabled” are also filtered out.
Now that HA knows which virtual machines it should restart, it is time to decide where the virtual machines are placed. HA will take multiple things into account:
  • CPU and memory reservation, including the memory overhead of the virtual machine
  • Unreserved capacity of the hosts in the cluster
  • Restart priority of the virtual machine relative to the other virtual machines that need to be restarted
  • Virtual-machine-to-host compatibility set
  • The number of dvPorts required by a virtual machine and the number available on the candidate hosts
  • The maximum number of vCPUs and virtual machines that can be run on a given host
  • Restart latency
  • Whether the active hosts are running the required number of agent virtual machines.
Restart latency refers to the amount of time it takes to initiate virtual machine restarts. This means that virtual machine restarts will be distributed by the master across multiple hosts to avoid a boot storm, and thus a delay, on a single host.
If a placement is found, the master will send each target host the set of virtual machines it needs to restart. If this list exceeds 32 virtual machines, HA will limit the number of concurrent power on attempts to 32. If a virtual machine successfully powers on, the node on which the virtual machine was powered on will inform the master of the change in power state. The master will then remove the virtual machine from the restart list.
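A very rough way to picture the placement step is as a filter-and-pick over the candidate hosts. The sketch below only models a few of the considerations listed above (reservations, compatibility, the 32 concurrent power-on limit and spreading restarts); the real placement logic is considerably more involved and the field names are made up.

```python
def pick_restart_host(vm, hosts):
    """Pick a restart target for a VM, or None if no placement is currently possible.

    vm:    dict with "cpu_reservation_mhz", "mem_reservation_mb" (incl. overhead)
           and "compatible_hosts" (the virtual-machine-to-host compatibility set).
    hosts: list of dicts with "name", "unreserved_cpu_mhz", "unreserved_mem_mb"
           and "pending_power_ons".
    """
    candidates = [
        h for h in hosts
        if h["name"] in vm["compatible_hosts"]
        and h["unreserved_cpu_mhz"] >= vm["cpu_reservation_mhz"]
        and h["unreserved_mem_mb"] >= vm["mem_reservation_mb"]
        and h["pending_power_ons"] < 32   # concurrent power-on limit per host
    ]
    if not candidates:
        return None  # the VM would end up on the "pending placement list"
    # Spread restarts across hosts to limit restart latency (boot storms).
    return min(candidates, key=lambda h: h["pending_power_ons"])["name"]
```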
If a placement cannot be found, the master will place the virtual machine on a “pending placement list” and will retry placement of the virtual machine when one of the following conditions changes:
  • A new virtual-machine-to-host compatibility list is provided by vCenter.
  • A host reports that its unreserved capacity has increased.
  • A host (re)joins the cluster (For instance, when a host is taken out of maintenance mode, a host is added to a cluster, etc.)
  • A new failure is detected and virtual machines have to be failed over.
  • A failure occurred when failing over a virtual machine.
But what about DRS? Wouldn’t DRS be able to help during the placement of virtual machines when all else fails? It does. The master node will report to vCenter the set of virtual machines that were not placed due to insufficient resources, as is the case today. If DRS is enabled, this information will be used in an attempt to have DRS make capacity available.

Component Protection

vSphere 6.0 introduces a new vSphere HA feature called VM Component Protection (VMCP). VMCP allows you to protect virtual machines against a failure of your storage system. There are two types of failures VMCP will respond to: Permanent Device Loss (PDL) and All Paths Down (APD). Before we look at some of the details, we want to point out that enabling VMCP is extremely easy: it can be enabled with a single tick box, as shown in the screenshot below.
Figure 25 - Virtual Machine Component Protection
As stated, there are two scenarios HA can respond to: PDL and APD. Let’s look at those two scenarios a bit closer. With vSphere 5.0 a feature was introduced, as an advanced option, that allowed vSphere HA to restart VMs impacted by a PDL condition.
A PDL condition is a condition that is communicated by the array controller to ESXi via a SCSI sense code. This condition indicates that a device (LUN) has become unavailable and is likely permanently unavailable. An example scenario in which this condition would be communicated by the array would be when a LUN is set offline. This condition is used during a failure scenario to ensure ESXi takes appropriate action when access to a LUN is revoked. It should be noted that when a full storage failure occurs, it is impossible to generate the PDL condition as there is no communication possible between the array and the ESXi host. This state will be identified by the ESXi host as an APD condition.
Although the functionality itself worked as advertised, enabling and managing it was cumbersome and error prone. It required manually setting the option “disk.terminateVMOnPDLDefault”. With vSphere 6.0, a simple option was introduced in the Web Client which allows you to specify what the response should be to a PDL sense code.
Figure 26 - Enabling Virtual Machine Component Protection
The two options provided are “Issue Events” and “Power off and restart VMs”. Note that “Power off and restart VMs” does exactly that: your VM process is killed and the VM is restarted on a host which still has access to the storage device.
Until now it was not possible for vSphere to respond to an APD scenario. APD is the situation where the storage device is inaccessible for unknown reasons. In most cases where this occurs, it is related to a storage network problem. With vSphere 5.1, changes were introduced to the way APD scenarios are handled by the hypervisor. This mechanism is leveraged by HA to allow for a response.
When an APD occurs, a timer starts. After 140 seconds the APD is declared and the device is marked as “APD timeout”. When the 140 seconds have passed, HA starts counting. The HA timeout is 3 minutes by default, as shown in Figure 24. When the 3 minutes have passed, HA will take the action defined. There are again two options: “Issue Events” and “Power off and restart VMs”.
You can also specify how aggressively HA should try to restart VMs that are impacted by an APD. Note that aggressive/conservative refers to the likelihood of HA being able to restart VMs. When set to “conservative”, HA will only restart the VM that is impacted by the APD if it knows another host can restart it. When set to “aggressive”, HA will try to restart the VM even if it doesn’t know the state of the other hosts, which could lead to a situation where your VM is not restarted as there is no host that has access to the datastore the VM is located on.
It is also good to know that if the APD is lifted and access to the storage is restored within the roughly 5 minutes and 20 seconds it takes before the VM restart is initiated, HA will not do anything unless you explicitly configure it to do so. This is where the “Response for APD recovery after APD timeout” setting comes into play. If there is a desire to do so, you can have the VM restarted even when the host has recovered from the APD scenario during the 3 minute (default value) grace period.
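The roughly 5 minutes and 20 seconds mentioned above is simply the sum of the 140-second APD timeout and the 3-minute delay HA waits after it. A trivial sketch of that arithmetic:

```python
APD_TIMEOUT = 140        # seconds before the device is marked with "APD timeout"
HA_APD_DELAY = 3 * 60    # default delay HA waits after the APD timeout

total = APD_TIMEOUT + HA_APD_DELAY
print("%d seconds (~%d min %d s) before the VM restart is initiated"
      % (total, total // 60, total % 60))  # 320 seconds (~5 min 20 s)
```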
Basic design principle: Without access to shared storage a virtual machine becomes useless. It is highly recommended to configure VMCP to act on both PDL and APD scenarios. We recommend setting both to “Power off and restart VMs”, but leaving the “Response for APD recovery after APD timeout” disabled so that VMs are not rebooted unnecessarily.

vSphere HA nuggets

Prior to vSphere 5.5, HA did nothing with VM to VM affinity or anti-affinity rules. Typically, for people using “affinity” rules this was not an issue, but those using “anti-affinity” rules did see this as a problem. They created these rules to ensure specific virtual machines would never run on the same host, but vSphere HA would simply ignore the rule when a failure had occurred and just place the VMs “randomly”. With vSphere 5.5 this has changed! vSphere HA is now “anti-affinity” aware. In order to ensure anti-affinity rules are respected, you can set an advanced setting, or configure it in the vSphere Web Client as of vSphere 6.0.
das.respectVmVmAntiAffinityRules - Values: "false" (default) and "true"
Do note that this also means that when you have configured anti-affinity rules, have this advanced setting set to “true”, and there somehow aren’t sufficient hosts available to respect these rules, then the rules will still be respected, which could result in HA not restarting a VM. Make sure you understand this potential impact when configuring this setting and these rules.
With vSphere 6.0, support for respecting VM to Host affinity rules has been included. This is enabled through the use of an advanced setting called “das.respectVmHostSoftAffinityRules”. When this advanced setting is configured, vSphere HA will try to respect the rule when it can. If there are any hosts in the cluster which belong to the same VM-Host group, HA will restart the respective VM on one of those hosts. As this is a “should rule”, HA has the ability to ignore the rule when needed. If there is a scenario where none of the hosts in the VM-Host should rule are available, HA will restart the VM on any other host in the cluster.
das.respectVmHostSoftAffinityRules - Values: "false" (default) and "true"
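If you want to set these options programmatically, they follow the same HA advanced-option pattern as the isolation address sketch shown earlier; the snippet below only builds the option list and assumes you apply it with the same cluster reconfiguration spec.

```python
from pyVmomi import vim

affinity_options = [
    vim.option.OptionValue(key="das.respectVmVmAntiAffinityRules", value="true"),
    vim.option.OptionValue(key="das.respectVmHostSoftAffinityRules", value="true"),
]

# Apply via vim.cluster.ConfigSpecEx(dasConfig=vim.cluster.DasConfigInfo(option=affinity_options)),
# exactly as in the isolation address sketch earlier in this chapter.
```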

VMware vSAN and Virtual Volumes specifics

In the last couple of sections we have discussed the ins and outs of HA, all of it based on VMFS or NFS based storage. With the introduction of VMware vSAN and Virtual Volumes come changes to some of the concepts discussed.

HA and vSAN

vSAN is VMware’s approach to Software Defined Storage. We are not going to explain the ins and outs of vSAN, but do want to provide a basic understanding for those who have never done anything with it. vSAN leverages host local storage and creates a shared data store out of it.
Figure 27 - Virtual SAN Cluster
vSAN requires a minimum of 3 hosts, and each of those 3 hosts needs to have 1 SSD for caching and 1 capacity device (which can be SSD or HDD). Only the capacity devices contribute to the available capacity of the datastore. If you have 1 TB worth of capacity devices per host, then with three hosts the total size of your datastore will be 3 TB.
Having said that, with vSAN 6.1 VMware introduced a "2-node" option. This 2-node option is actually 2 regular vSAN nodes with a third "witness" node.
The big differentiator between most storage systems and vSAN is that availability of the virtual machine is defined on a per virtual disk or per virtual machine basis. This is called “Failures To Tolerate” and can be configured to any value between 0 (zero) and 3. When configured to 0, the virtual machine will have only 1 copy of its virtual disks, which means that if the host where the virtual disks are stored fails, the virtual machine is lost. As such, all virtual machines are deployed by default with Failures To Tolerate (FTT) set to 1. A virtual disk is what vSAN refers to as an object. An object, when FTT is configured as 1 or higher, has multiple components. In the diagram below we demonstrate the FTT=1 scenario: the virtual disk in this case has 2 "data components" and a "witness component". The witness is used as a "quorum" mechanism.
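A quick rule-of-thumb model for the default mirroring (RAID-1) policy is shown below: FTT=n results in n+1 data copies plus witnesses to maintain quorum, and usable capacity is roughly the raw capacity divided by the number of copies. The function and numbers are illustrative only and ignore overhead such as metadata and slack space.

```python
def ftt_mirroring(ftt, raw_capacity_tb):
    """Rule-of-thumb numbers for a vSAN mirroring (RAID-1) policy with a given FTT."""
    data_copies = ftt + 1              # full replicas of each object
    min_components = 2 * ftt + 1       # replicas plus witnesses needed for quorum
    min_hosts = 2 * ftt + 1            # minimum number of hosts to place them on
    usable_tb = raw_capacity_tb / data_copies  # ignores metadata and slack space
    return data_copies, min_components, min_hosts, usable_tb

# FTT=1 on 3 TB of raw capacity: 2 data components + 1 witness, 3 hosts, ~1.5 TB usable.
print(ftt_mirroring(1, 3.0))  # (2, 3, 3, 1.5)
```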

Figure 28 - Virtual SAN Object model
As the diagram above depicts, a virtual machine can be running on the first host while its storage components are on the remaining hosts in the cluster. As you can imagine, from an HA point of view this changes things, as access to the network is critical not only for HA to function correctly but also for vSAN. When it comes to networking, note that when vSAN is configured in a cluster, HA will use the same network for its communications (heartbeating etc.). On top of that, it is good to know that VMware highly recommends 10GbE for vSAN.
Basic design principle: 10GbE is highly recommended for vSAN. As vSphere HA also leverages the vSAN network and the availability of VMs depends on network connectivity, ensure that at a minimum two 10GbE ports are used, connected to two physical switches for resiliency.
The reason that HA uses the same network as vSAN is simple: it is to avoid network partition scenarios where HA communication is separated from vSAN and the state of the cluster is unclear. Note that you will need to ensure that there is a pingable isolation address on the vSAN network, and this isolation address will need to be configured as such through the advanced setting “das.isolationAddress0”. We also recommend disabling the use of the default isolation address through the advanced setting “das.useDefaultIsolationAddress” (set to false). If you leave the isolation address set to the default gateway of the management network, HA will use the management network to verify isolation. There could be a scenario where only the vSAN network is isolated; in that particular situation VMs will not be powered off (or shut down) if the isolation address is not part of the vSAN network.
When an isolation does occur, the isolation response is triggered as explained in earlier chapters. For vSAN the recommendation is simple: configure the isolation response to “Power Off, then fail over”. This is the safest option. vSAN can be compared to the “converged network with IP-based storage” example we provided earlier. It is very easy to reach a situation where a host is isolated and all virtual machines remain running, but are restarted on another host because the connection to the vSAN datastore is lost.
Basic design principle: Configure your isolation address and your isolation policy accordingly. We recommend selecting “power off” as the isolation policy and a reliable, pingable device as the isolation address.
What about things like heartbeat datastores and the folder structure that exists on a VMFS datastore; has any of that changed with vSAN? Yes, it has. First of all, in a vSAN-only environment the concept of heartbeat datastores is not used at all. The reason for this is straightforward: as HA and vSAN share the same network, it is safe to assume that when the HA heartbeat is lost because of a network failure, so is access to the vSAN datastore. Only in an environment where there is also traditional storage will heartbeat datastores be configured, leveraging those traditional datastores as heartbeat datastores. Note that we do not feel there is a reason to introduce traditional storage just to provide HA this functionality; HA and vSAN work perfectly fine without heartbeat datastores. If you do have traditional storage, however, we recommend implementing heartbeat datastores, as it can help HA with identifying the type of issue that has occurred.

Normally HA metadata is stored in the root of the datastore; for vSAN this is different, as the metadata is stored in the VM’s namespace object. The protectedlist is held in memory and updated automatically when VMs are powered on or off.
Now you may wonder: what happens when there is an isolation? How does HA know where to restart the VM that is impacted? Let’s take a look at a partition scenario.
Figure 29 - VSAN Partition scenario
In this scenario a network problem has caused a cluster partition. Where a VM is restarted is determined by which partition owns the virtual machine files. Within a vSAN cluster this is fairly straightforward. There are two partitions: one is running the VM with its VMDK, and the other partition has a VMDK replica and a witness. Guess what happens? Right, vSAN uses the witness to see which partition has quorum, and based on that result one of the two partitions wins. In this case, Partition 2 has more than 50% of the components of this object and as such is the winner. This means that the VM will be restarted on either “esxi-03” or “esxi-04” by vSphere HA. Note that the VM in Partition 1 will be powered off only if you have configured the isolation response to do so. We would like to stress that this is highly recommended! (Isolation response –> power off)
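The quorum decision itself boils down to which partition holds more than 50% of the object's components. A minimal sketch of that decision, using the component layout from the figure:

```python
def winning_partition(components_per_partition):
    """Return the partition that owns more than 50% of an object's components, or None."""
    total = sum(components_per_partition.values())
    for partition, count in components_per_partition.items():
        if count * 2 > total:  # strictly more than half
            return partition
    return None

# Partition 1 holds one data component; Partition 2 holds the replica and the witness.
print(winning_partition({"Partition 1": 1, "Partition 2": 2}))  # Partition 2
```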
One final thing which is different for vSAN is how a partition is handled in a stretched cluster configuration. In a regular stretched cluster configuration using VMFS/NFS based storage, VMs impacted by APD or PDL will be killed by HA. With vSAN this is slightly different: HA VMCP in 6.0 is unfortunately not supported with vSAN. vSAN has its own mechanism: it recognizes when a VM running on a group of hosts has no access to any of its components. When this is the case, vSAN will simply kill the impacted VM. You can disable this behavior, although we do not recommend doing so, by setting the advanced host setting VSAN.AutoTerminateGhostVm to 0.

HA and Virtual Volumes

Let us start by describing what Virtual Volumes is and what value it brings to an administrator. Virtual Volumes was developed to make your life (as a vSphere admin) and that of the storage administrator easier. This is done by providing a framework that enables the vSphere administrator to assign policies to virtual machines or virtual disks. In these policies, capabilities of the storage array can be defined. These capabilities can be things like snapshotting, deduplication, RAID level, thin/thick provisioning, etc. What is offered to the vSphere administrator is up to the storage administrator, and of course up to what the storage system can offer to begin with. The screenshot below shows an example of some of the capabilities Nimble exposes through policy.
Figure 30 - Capabilities exposed by Nimble array
When a virtual machine is deployed and a policy is assigned, the storage system will enable certain functionality of the array based on what was specified in the policy. So there is no longer a need to assign capabilities to a LUN which holds many VMs; instead you get per-VM or even per-VMDK level control. So how does this work? Let’s take a look at an architectural diagram first.
Figure 31 - Virtual Volumes Architecture
The diagram shows a couple of components which are important in the VVol architecture. Let’s list them out:
  • Protocol Endpoints aka PE
  • Virtual Datastore and a Storage Container
  • Vendor Provider / VASA
  • Policies
  • Virtual Volumes
Let’s take a look at each of these in the above order. Protocol Endpoints, what are they?
Protocol Endpoints are literally the access points to your storage system. All IO to Virtual Volumes is proxied through a Protocol Endpoint, and you can have 1 or more of these per storage system, if your storage system supports having multiple of course. (Implementations of different vendors will vary.) PEs are compatible with different protocols (FC, FCoE, iSCSI, NFS) and, if you ask me, with Virtual Volumes that whole protocol discussion will come to an end. You can see a Protocol Endpoint as a “mount point” or a device, and yes, they will count towards your maximum number of devices per host (256). (Virtual Volumes themselves won’t count towards that!)
Next up is the Storage Container. This is the place where you store your virtual machines, or better said, where your Virtual Volumes end up. The Storage Container is a storage system logical construct and is represented within vSphere as a “virtual datastore”. You need at least 1 per storage system, but you can have many when desired. To this Storage Container you can apply capabilities. So if you would like your Virtual Volumes to be able to use array-based snapshots, the storage administrator will need to assign that capability to the storage container. Note that a storage administrator can grow a storage container without even informing you. A storage container isn’t formatted with VMFS or anything like that, so you don’t need to increase the volume in order to use the space.
But how does vSphere know which container is capable of doing what? In order to discover a storage container and its capabilities, we need to be able to talk to the storage system first. This is done through the vSphere APIs for Storage Awareness. You simply point vSphere to the Vendor Provider and the Vendor Provider will report to vSphere what’s available; this includes both the storage containers and the capabilities they possess. Note that a single Vendor Provider can manage multiple storage systems, which in turn can have multiple storage containers with many capabilities. These Vendor Providers can also come in different flavours: for some storage systems it is part of their software, but for others it will come as a virtual appliance that sits on top of vSphere.
Now that vSphere knows which systems there are and which containers are available with which capabilities, you can start creating policies. These policies can be a combination of capabilities and will ultimately be assigned to virtual machines or even individual virtual disks. You can imagine that in some cases you would like Quality of Service enabled to ensure performance for a VM, while in other cases it isn’t as relevant but you need to have a snapshot every hour. All of this is enabled through these policies. No longer will you be maintaining that spreadsheet with all your LUNs and which data services were enabled on them; you simply assign a policy. (Yes, a proper naming scheme will be helpful when defining policies.) When requirements change for a VM, you don’t move the VM around; you change the policy and the storage system will do what is required to make the VM (and its disks) compliant again with the policy. Well, not the VM really, but its Virtual Volumes.
Okay, those are the basics. Now what about Virtual Volumes and vSphere HA? What changes when you are running Virtual Volumes, and what do you need to keep in mind when it comes to HA?
First of all, let me mention this, in some cases storage vendors have designed a solution where the "vendor provider" isn't designed in an HA fashion (VMware allows for Active/Active, Active/Standby or just "Active" as in a single instance). Make sure to validate what kind of implementation your storage vendor has, as the Vendor Provider needs to be available when powering on VMs. The following quote explains why:
When a Virtual Volume is created, it is not immediately accessible for IO. To Access Virtual Volumes, vSphere needs to issue a “Bind” operation to a VASA Provider (VP), which creates IO access point for a Virtual Volume on a Protocol Endpoint (PE) chosen by a VP. A single PE can be the IO access point for multiple Virtual Volumes. “Unbind” Operation will remove this IO access point for a given Virtual Volume.
That is the "Virtual Volumes" implementation aspect, but of course things have also changed from a vSphere HA point of view. No longer do we have VMFS or NFS datastores to store files on or use for heartbeating. What changes from that perspective. First of all a VM is carved up in different Virtual Volumes:
  • VM Configuration
  • Virtual Machine Disks
  • Swap File
  • Snapshot (if there are any)
Besides these different types of objects, when vSphere HA is enabled there is also a volume used by vSphere HA, and this volume contains all the metadata which is normally stored under "//.vSphere-HA//" on regular VMFS. For each fault domain a separate folder is created in this VVol, as shown in the screenshot below.
Figure 32 - Single FDM cluster on VVol array
All VM related HA files which would normally be under the VM folder, like for instance the power-on file, heartbeat files and the protectedlist, are now stored in the VM Configuration VVol object. Conceptually this is similar to regular VMFS; implementation wise, however, it is completely different.

Figure 33 - VVol FDM files
The power-off file, however, which is used to indicate that a VM has been powered off due to an isolation event, is no longer stored under the .vSphere-HA folder, but is stored in the VM config VVol (in the UI exposed as the VVol VM folder) as shown in the screenshot below. The same applies for vSAN, where it is now stored in the VM namespace object, and for traditional storage (NFS or VMFS), where it is stored in the VM folder. This change was made when Virtual Volumes was introduced and was done to keep the experience consistent across storage platforms.
Figure 34 - VVol based poweroff file
And that explains the differences between traditional storage systems using VMFS / NFS and new storage systems leveraging Virtual Volumes or even a full vSAN based solution.
---------------------------------------------------------Subtopic-----------------------------------------------------------------

Adding Resiliency to HA (Network Redundancy)

In the previous chapter we extensively covered Isolation Detection, which triggers the selected Isolation Response, and the impact of a false positive. The Isolation Response enables HA to restart virtual machines when “Power off” or “Shut down” has been selected and the host becomes isolated from the network. However, this also means that without proper redundancy, the Isolation Response may be unnecessarily triggered. This leads to downtime and should be prevented.
To increase resiliency for networking, VMware implemented the concept of NIC teaming in the hypervisor for both VMkernel and virtual machine networking. When discussing HA, this is especially important for the Management Network.
NIC teaming is the process of grouping together several physical NICs into one single logical NIC, which can be used for network fault tolerance and load balancing.
Using this mechanism, it is possible to add redundancy to the Management Network to decrease the chances of an isolation event. This is, of course, also possible for other “Portgroups” but that is not the topic of this chapter or book. Another option is configuring an additional Management Network by enabling the “management network” tick box on another VMkernel port. A little understood fact is that if there are multiple VMkernel networks on the same subnet, HA will use all of them for management traffic, even if only one is specified for management traffic!
Although there are many configurations possible and supported, we recommend a simple but highly resilient configuration. We have included the vMotion (VMkernel) network in our example as combining the Management Network and the vMotion network on a single vSwitch is the most commonly used configuration and an industry accepted best practice.
Requirements:
  • 2 physical NICs
  • VLAN trunking
Recommended:
  • 2 physical switches
  • If available, enable “link state tracking” to ensure link failures are reported
The vSwitch should be configured as follows:
  • vSwitch0: 2 Physical NICs (vmnic0 and vmnic1).
  • 2 Portgroups (Management Network and vMotion VMkernel).
  • Management Network active on vmnic0 and standby on vmnic1.
  • vMotion VMkernel active on vmnic1 and standby on vmnic0.
  • Failback set to No.
Each portgroup has a VLAN ID assigned and runs dedicated on its own physical NIC; only in the case of a failure is it switched over to the standby NIC. We highly recommend setting failback to “No” to avoid the chance of an unwanted isolation event, which can occur when a physical switch routes no traffic during boot but the ports are reported as “up”. (NIC Teaming tab)

Pros: Only 2 NICs in total are needed for the Management Network and vMotion VMkernel, especially useful in blade server environments. Easy to configure.
Cons: Just a single active path for heartbeats.
The following diagram depicts this active/standby scenario:

Figure 35 - Active-Standby Management Network design
To increase resiliency, we also recommend implementing the following advanced settings and using NIC ports on different PCI busses – preferably NICs of a different make and model. When using a different make and model, even a driver failure could be mitigated.
Advanced Settings: das.isolationaddressX =
The isolation address setting is discussed in more detail in the section titled "Fundamental Concepts". In short; it is the IP address that the HA agent pings to identify if the host is completely isolated from the network or just not receiving any heartbeats. If multiple VMkernel networks on different subnets are used, it is recommended to set an isolation address per network to ensure that each of these will be able to validate isolation of the host.
Basic design principle: Take advantage of some of the basic features vSphere has to offer like NIC teaming. Combining different physical NICs will increase overall resiliency of your solution.

Corner Case Scenario: Split-Brain

A split-brain scenario is a scenario where a single virtual machine is powered up multiple times, typically on two different hosts. This is possible when the isolation response is set to “Disabled” and network-based storage, like NFS / iSCSI or even Virtual SAN, is used. This situation can occur during a full network isolation, which may result in the lock on the virtual machine’s VMDK being lost, enabling HA to actually power up the virtual machine. As the virtual machine was not powered off on its original host (isolation response set to “Disabled”), it will exist in memory on the isolated host and in memory, with a disk lock, on the host that was requested to restart the virtual machine.
Keep in mind that this truly is a corner case scenario which is very unlikely to occur in most environments. In case it does happen, HA relies on the “lost lock detection” mechanism to mitigate this scenario. In short ESXi detects that the lock on the VMDK has been lost and, when the datastore becomes accessible again and the lock cannot be reacquired, issues a question whether the virtual machine should be powered off; HA automatically answers the question with Yes. However, you will only see this question if you directly connect to the ESXi host during the failure. HA will generate an event for this auto-answered question though.
As stated above the question will be auto-answered and the virtual machine will be powered off to recover from the split brain scenario. The question still remains: in the case of an isolation with iSCSI or NFS, should you power off virtual machines or leave them powered on?
As just explained, HA will automatically power off your original virtual machine when it detects a split-brain scenario. This process, however, is not instantaneous, and as such it is recommended to use the isolation response of “Power off” or “Disabled”. We also recommend increasing heartbeat network resiliency to avoid getting into this situation. We will discuss the options you have for enhancing Management Network resiliency in the next chapter.
Link state tracking was already briefly mentioned in the list of recommendations, but it is a feature we would like to emphasize. We have noticed that people often forget about it, even though many switches offer this capability, especially in blade server environments.

Link state tracking will mirror the state of an upstream link to a downstream link. Let’s clarify that with a diagram.
Figure 36 - Link State tracking mechanism
The diagram above depicts a scenario where an uplink of a “Core Switch” has failed. Without Link State Tracking, the connection from the “Edge Switch” to vmnic0 will be reported as up. With Link State Tracking enabled, the state of the link on the “Edge Switch” will reflect the state of the link on the “Core Switch” and as such be marked as “down”. You might wonder why this is important, but think about it for a second. Many features that vSphere offers rely on networking, and so do your virtual machines. In the case where the state is not reflected, some functionality might simply fail; for instance, network heartbeating could fail if it needs to flow through the core switch. We call this a “black hole” scenario: the host sends traffic down a path that it believes is up, but the traffic never reaches its destination due to the failed upstream link.

Basic design principle: Know your network environment, talk to the network administrators and ensure advanced features like Link State Tracking are used when possible to increase resiliency.

-----------------------------------------------------------------------------Sub-topic---------------------------------------------------------------------

Admission Control

Admission Control is more than likely the most misunderstood concept vSphere holds today and because of this it is often disabled. However, Admission Control is a must when availability needs to be guaranteed and isn’t that the reason for enabling HA in the first place?
What is HA Admission Control about? Why does HA contain this concept called Admission Control? The “Availability Guide” a.k.a HA bible states the following:
vCenter Server uses admission control to ensure that sufficient resources are available in a cluster to provide failover protection and to ensure that virtual machine resource reservations are respected.
Please read that quote again and especially the first two words. Indeed it is vCenter that is responsible for Admission Control, contrary to what many believe. Although this might seem like a trivial fact it is important to understand that this implies that Admission Control will not disallow HA initiated restarts. HA initiated restarts are done on a host level and not through vCenter.
As said, Admission Control guarantees that capacity is available for an HA initiated failover by reserving resources within a cluster. It calculates the capacity required for a failover based on available resources. In other words, if a host is placed into maintenance mode or disconnected, it is taken out of the equation. This also implies that if a host has failed or is not responding but has not been removed from the cluster, it is still included in the equation. “Available Resources” indicates that the virtualization overhead has already been subtracted from the total amount.
To give an example: VMkernel memory is subtracted from the total amount of memory to obtain the memory available for virtual machines. There is one gotcha with Admission Control that we want to bring to your attention before drilling into the different policies. When Admission Control is enabled, HA will in no way violate availability constraints. This means that it will always ensure multiple hosts are up and running, and this applies to manual maintenance mode actions and, for instance, to VMware Distributed Power Management. So, if a host is stuck trying to enter Maintenance Mode, remember that it might be HA which is not allowing Maintenance Mode to proceed, as it would violate the Admission Control Policy. In this situation, users can manually vMotion virtual machines off the host or temporarily disable admission control to allow the operation to proceed.
But what if you use something like Distributed Power Management (DPM), would that place all hosts in standby mode to reduce power consumption? No, DPM is smart enough to take hosts out of standby mode to ensure enough resources are available to provide for HA initiated failovers. If by any chance the resources are not available, HA will wait for these resources to be made available by DPM and then attempt the restart of the virtual machines. In other words, the retry count (5 retries by default) is not wasted in scenarios like these.

Admission Control Policy

The Admission Control Policy dictates the mechanism that HA uses to guarantee enough resources are available for an HA initiated failover. This section gives a general overview of the available Admission Control Policies. The impact of each policy is described in the following section, including our recommendation. HA has three mechanisms to guarantee enough capacity is available to respect virtual machine resource reservations.

Figure 37 - Admission control policy
Below we have listed all three options currently available as the Admission Control Policy. Each option has a different mechanism to ensure resources are available for a failover and each option has its caveats.

Admission Control Mechanisms

Each Admission Control Policy has its own Admission Control mechanism. Understanding each of these Admission Control mechanisms is important to appreciate the impact each one has on your cluster design. For instance, setting a reservation on a specific virtual machine can have an impact on the achieved consolidation ratio. This section will take you on a journey through the trenches of Admission Control Policies and their respective mechanisms and algorithms.

Host Failures Cluster Tolerates

The Admission Control Policy that has been around the longest is the “Host Failures Cluster Tolerates” policy. It is also historically the least understood Admission Control Policy due to its complex admission control mechanism.
This admission control policy can be configured in an N-1 fashion. This means that the number of host failures you can specify in a 32 host cluster is 31.
Within the vSphere Web Client it is possible to manually specify the slot size as can be seen in the below screenshot. The vSphere Web Client also allows you to view which virtual machines span multiple slots. This can be very useful in scenarios where the slot size has been explicitly specified, we will explain why in just a second.
Figure 38 - Host Failures
The so-called “slots” mechanism is used when the “Host failures cluster tolerates” has been selected as the Admission Control Policy. The details of this mechanism have changed several times in the past and it is one of the most restrictive policies; more than likely, it is also the least understood.
Slots dictate how many virtual machines can be powered on before vCenter starts yelling “Out Of Resources!” Normally, a slot represents one virtual machine. Admission Control does not limit HA in restarting virtual machines; it ensures enough unfragmented resources are available to power on all virtual machines in the cluster by preventing “over-commitment”. Technically speaking, “over-commitment” is not the correct terminology, as Admission Control ensures virtual machine reservations can be satisfied and that all virtual machines’ initial memory overhead requirements are met. Although we have already touched on this, it doesn’t hurt to repeat it as it is one of those myths that keeps coming back: HA initiated failovers are not subject to the Admission Control Policy. Admission Control is done by vCenter. HA initiated restarts, in a normal scenario, are executed directly on the ESXi host without the use of vCenter. The corner case is where HA requests DRS (DRS is a vCenter task!) to defragment resources, but that is beside the point. Even if resources are low and vCenter would complain, it couldn’t stop the restart from happening.
Let’s dig in to this concept we have just introduced, slots.
A slot is defined as a logical representation of the memory and CPU resources that satisfy the reservation requirements for any powered-on virtual machine in the cluster.
In other words a slot is the worst case CPU and memory reservation scenario in a cluster. This directly leads to the first “gotcha.”
HA uses the highest CPU reservation of any given powered-on virtual machine and the highest memory reservation of any given powered-on virtual machine in the cluster. If no reservation of higher than 32 MHz is set, HA will use a default of 32 MHz for CPU. If no memory reservation is set, HA will use a default of 0 MB+memory overhead for memory. (See the VMware vSphere Resource Management Guide for more details on memory overhead per virtual machine configuration.) The following example will clarify what “worst-case” actually means.
Example: If virtual machine “VM1” has 2 GHz of CPU reserved and 1024 MB of memory reserved and virtual machine “VM2” has 1 GHz of CPU reserved and 2048 MB of memory reserved the slot size for memory will be 2048 MB (+ its memory overhead) and the slot size for CPU will be 2 GHz. It is a combination of the highest reservation of both virtual machines that leads to the total slot size. Reservations defined at the Resource Pool level however, will not affect HA slot size calculations.
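To make the worst-case logic concrete, here is a minimal sketch that derives the slot size from the example above. The memory overhead values are hypothetical placeholders; in reality they depend on the virtual machine configuration (see the Resource Management Guide).

  # Minimal sketch of the HA slot size calculation (hypothetical overhead values).
  DEFAULT_CPU_MHZ = 32  # used when no CPU reservation higher than 32 MHz is set

  vms = [
      {"name": "VM1", "cpu_res_mhz": 2000, "mem_res_mb": 1024, "mem_overhead_mb": 110},
      {"name": "VM2", "cpu_res_mhz": 1000, "mem_res_mb": 2048, "mem_overhead_mb": 180},
  ]

  # CPU slot: highest CPU reservation of any powered-on VM (32 MHz minimum).
  cpu_slot_mhz = max([vm["cpu_res_mhz"] for vm in vms] + [DEFAULT_CPU_MHZ])

  # Memory slot: highest memory reservation plus that VM's memory overhead.
  largest = max(vms, key=lambda vm: vm["mem_res_mb"])
  mem_slot_mb = largest["mem_res_mb"] + largest["mem_overhead_mb"]

  print(cpu_slot_mhz, mem_slot_mb)  # 2000 MHz and 2228 MB with these values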
Basic design principle: Be really careful with reservations; if there’s no need to have them on a per-virtual-machine basis, don’t configure them, especially when using Host Failures Cluster Tolerates. If reservations are needed, resort to resource pool based reservations.
Now that we know the worst-case scenario is always taken into account when it comes to slot size calculations, we will describe what dictates the amount of available slots per cluster as that ultimately dictates how many virtual machines can be powered on in your cluster.
First, we will need to know the slot size for memory and CPU, next we will divide the total available CPU resources of a host by the CPU slot size and the total available memory resources of a host by the memory slot size. This leaves us with a total number of slots for both memory and CPU for a host. The most restrictive number (worst-case scenario) is the number of slots for this host. In other words, when you have 25 CPU slots but only 5 memory slots, the amount of available slots for this host will be 5 as HA always takes the worst case scenario into account to “guarantee” all virtual machines can be powered on in case of a failure or isolation.
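A rough sketch of that per-host calculation, using made-up host and slot sizes chosen to mirror the “25 CPU slots but only 5 memory slots” example (memory overhead is ignored here for simplicity):

  # Slots per host: divide host capacity by the slot size for CPU and memory,
  # then take the most restrictive of the two (worst case).
  cpu_slot_mhz, mem_slot_mb = 2000, 2048          # hypothetical slot sizes

  host = {"name": "ESXi-01", "cpu_mhz": 50000, "mem_mb": 10240}

  cpu_slots = host["cpu_mhz"] // cpu_slot_mhz     # 25
  mem_slots = host["mem_mb"] // mem_slot_mb       # 5
  host_slots = min(cpu_slots, mem_slots)          # 5 -> memory is the limit

  # The cluster total is simply the sum of host_slots over all hosts.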

The question we receive a lot is: how do I know what my slot size is? The details around slot sizes can be monitored on the HA section of the Cluster’s Monitor tab by checking the “Advanced Runtime Info” section when the “Host Failures” Admission Control Policy is configured.

Figure 39 - High Availability cluster monitor section
Advanced Runtime Info will show the specifics of the slot size and other useful details, such as the number of slots available, as depicted in Figure 40.
Figure 40 - High Availability advanced runtime info
As you can imagine, using reservations on a per-virtual-machine basis can lead to very conservative consolidation ratios. However, this is configurable through the Web Client: if you have just one virtual machine with a really high reservation, you can set an explicit slot size by going to “Edit Cluster Services” and specifying it under the Admission Control Policy section, as shown in Figure 38.
If one of these advanced settings is used, HA will ensure that the virtual machine that skewed the numbers can be restarted by “assigning” multiple slots to it. However, when you are low on resources, this could mean that you are not able to power on the virtual machine with this reservation, because resources may be fragmented throughout the cluster instead of available on a single host. HA will notify DRS that a power-on attempt was unsuccessful and a request will be made to defragment the resources to accommodate the remaining virtual machines that need to be powered on. In order for this to be successful, DRS will need to be enabled and configured in fully automated mode. When DRS is not in fully automated mode, user action is required to execute the DRS recommendations.

The following diagram depicts a scenario where a virtual machine spans multiple slots:

Figure 41 - Virtual machine spanning multiple HA slots
Notice that because the memory slot size has been manually set to 1024 MB, one of the virtual machines (grouped with dotted lines) spans multiple slots due to a 4 GB memory reservation. As you might have noticed, none of the hosts has enough resources available to satisfy the reservation of the virtual machine that needs to failover. Although in total there are enough resources available, they are fragmented and HA will not be able to power-on this particular virtual machine directly but will request DRS to defragment the resources to accommodate this virtual machine’s resource requirements.
Admission Control does not take fragmentation of slots into account when slot sizes are manually defined with advanced settings. It will take the number of slots this virtual machine will consume into account by subtracting them from the total number of available slots, but it will not verify the amount of available slots per host to ensure failover. As stated earlier, though, HA will request DRS to defragment the resources. This is by no means a guarantee of a successful power-on attempt.
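As an illustration of the spanning behaviour, a quick sketch of how many slots a single virtual machine consumes when the memory slot size is fixed at 1024 MB (memory overhead ignored for simplicity):

  import math

  mem_slot_mb = 1024                 # manually defined slot size
  vm_mem_reservation_mb = 4096       # the 4 GB reservation from the example

  slots_consumed = math.ceil(vm_mem_reservation_mb / mem_slot_mb)
  print(slots_consumed)              # 4 -> subtracted from the total slot count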
Basic design principle: Avoid using advanced settings to decrease the slot size as it could lead to more down time and adds an extra layer of complexity. If there is a large discrepancy in size and reservations we recommend using the percentage based admission control policy.
Within the vSphere Web Client there is functionality which enables you to identify virtual machines that span multiple slots, as shown in Figure 38. We highly recommend monitoring this on a regular basis to gain a better understanding of your environment and to identify those virtual machines that might be problematic to restart in case of a host failure.

Unbalanced Configurations and Impact on Slot Calculation

It is an industry best practice to create clusters with similar hardware configurations. However, many companies started out with a small VMware cluster when virtualization was first introduced. By the time they need to expand, chances are fairly high that the same hardware configuration is no longer available. The question is: will you add the newly bought hosts to the same cluster or create a new cluster?
From a DRS perspective, large clusters are preferred as it increases the load balancing opportunities. However there is a caveat for DRS as well, which is described in the DRS section of this book. For HA, there is a big caveat. When you think about it and understand the internal workings of HA, more specifically the slot algorithm, you probably already know what is coming up.
Let’s first define the term “unbalanced cluster.”
An unbalanced cluster would, for instance, be a cluster with 3 hosts of which one contains substantially more memory than the other hosts in the cluster.
Let’s try to clarify that with an example.
Example:
What would happen to the total number of slots in a cluster of the following specifications?
  • Three host cluster
  • Two hosts have 16 GB of available memory
  • One host has 32 GB of available memory
The third host is a brand new host that has just been bought and as prices of memory dropped immensely the decision was made to buy 32 GB instead of 16 GB.
The cluster contains a virtual machine that has 1 vCPU and 4 GB of memory. A 1024 MB memory reservation has been defined on this virtual machine. As explained earlier, a reservation will dictate the slot size, which in this case leads to a memory slot size of 1024 MB + memory overhead. For the sake of simplicity, we will calculate with 1024 MB. The following diagram depicts this scenario:
Figure 42 - High Availability memory slot size
When Admission Control is enabled and the number of host failures has been selected as the Admission Control Policy, the number of slots will be calculated per host and the cluster in total. This will result in:
Host       Number of slots
ESXi-01    16 slots
ESXi-02    16 slots
ESXi-03    32 slots
As Admission Control is enabled, a worst-case scenario is taken into account. When a single host failure has been specified, this means that the host with the largest number of slots will be taken out of the equation. In other words, for our cluster, this would result in:
ESXi-01 + ESXi-02 = 32 slots available
Although you have doubled the amount of memory in one of your hosts, you are still stuck with only 32 slots in total. As clearly demonstrated, there is absolutely no point in buying additional memory for a single host when your cluster is designed with Admission Control enabled and the number of host failures has been selected as the Admission Control Policy.
In our example, the memory slot size happened to be the most restrictive; however, the same principle applies when CPU slot size is most restrictive.
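The worst-case removal of the largest host can be sketched as follows, using the slot counts from the table above:

  # "Host Failures Cluster Tolerates": the host(s) providing the most slots
  # are taken out of the equation, one per tolerated failure.
  host_slots = {"ESXi-01": 16, "ESXi-02": 16, "ESXi-03": 32}
  failures_to_tolerate = 1

  remaining = sorted(host_slots.values())[:-failures_to_tolerate]
  print(sum(remaining))   # 32 slots; with failures_to_tolerate = 2 this drops to 16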
Basic design principle: When using admission control, balance your clusters and be conservative with reservations as it leads to decreased consolidation ratios.
Now, what would happen in the scenario above when the number of allowed host failures is set to 2? In this case, ESXi-03 is taken out of the equation, as is one of the remaining hosts in the cluster, resulting in 16 slots. This makes sense, doesn’t it?
Can you avoid large HA slot sizes due to reservations without resorting to advanced settings? That’s the question we get almost daily and the answer is the “Percentage of Cluster Resources Reserved” admission control mechanism.

Percentage of Cluster Resources Reserved

The Percentage of Cluster Resources Reserved admission control policy is one of the most used admission control policies. The simple reason for this is that it is the least restrictive and most flexible. It is also very easy to configure as shown in the screenshot below.
Figure 43 - Setting a different percentage for CPU/Memory
The main advantage of the percentage based Admission Control Policy is that it avoids the commonly experienced slot size issue where values are skewed due to a large reservation. But if it doesn’t use the slot algorithm, what does it use?
When you specify a percentage, and let’s assume for now that the percentage for CPU and memory will be configured equally, that percentage of the total amount of available resources will stay reserved for HA purposes. First, HA adds up all available resources to see how much it has in total (virtualization overhead is subtracted). Then, HA calculates how much of those resources is currently reserved by adding up the CPU and memory reservations of all powered-on virtual machines.
For those virtual machines that do not have a reservation, a default of 32 MHz will be used for CPU and a default of 0 MB+memory overhead will be used for Memory. (Amount of overhead per configuration type can be found in the “Understanding Memory Overhead” section of the Resource Management guide.)
In other words, powering on additional virtual machines is only allowed as long as:
((total amount of available resources – total reserved virtual machine resources) / total amount of available resources) >= (percentage HA should reserve as spare capacity)

Total reserved virtual machine resources includes the default reservation of 32 MHz and the memory overhead of the virtual machine.
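A simplified sketch of that check, assuming a single percentage is configured for both CPU and memory and ignoring per-host virtualization overhead:

  DEFAULT_CPU_MHZ = 32  # default CPU reservation used when none is set

  def power_on_allowed(total_mhz, total_mb, powered_on_vms, reserved_pct):
      """powered_on_vms: dicts with cpu_res_mhz, mem_res_mb, mem_overhead_mb.
      reserved_pct is a fraction, e.g. 0.25 for 25%."""
      reserved_mhz = sum(max(vm["cpu_res_mhz"], DEFAULT_CPU_MHZ) for vm in powered_on_vms)
      reserved_mb = sum(vm["mem_res_mb"] + vm["mem_overhead_mb"] for vm in powered_on_vms)
      cpu_free = (total_mhz - reserved_mhz) / total_mhz
      mem_free = (total_mb - reserved_mb) / total_mb
      # Both CPU and memory must keep at least the configured spare capacity.
      return cpu_free >= reserved_pct and mem_free >= reserved_pct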
Let’s use a diagram to make it a bit clearer:

Figure 44 - Percentage of cluster resources reserved
Total cluster resources are 24 GHz (CPU) and 96 GB (memory). This would lead to the following calculations:
((24 GHz - (2 GHz + 1 GHz + 32 MHz + 4 GHz)) / 24 GHz) = 69% available
((96 GB - (1.1 GB + 114 MB + 626 MB + 3.2 GB)) / 96 GB) = 85% available
As you can see, the amount of memory differs from the diagram. Even if a reservation has been set, the amount of memory overhead is added to the reservation. This example also demonstrates how keeping CPU and memory percentage equal could create an imbalance. Ideally, of course, the hosts are provisioned in such a way that there is no CPU/memory imbalance. Experience over the years has proven, unfortunately, that most environments run out of memory resources first and this might need to be factored in when calculating the correct value for the percentage. However, this trend might be changing as memory is getting cheaper every day.
In order to ensure virtual machines can always be restarted, Admission Control will constantly monitor if the policy has been violated or not. Please note that this Admission Control process is part of vCenter and not of the ESXi host! When one of the thresholds is reached, memory or CPU, Admission Control will disallow powering on any additional virtual machines as that could potentially impact availability. These thresholds can be monitored on the HA section of the Cluster’s summary tab.

Figure 45 - High Availability summary
If you have an unbalanced cluster (hosts with different amounts of CPU or memory resources), your percentage should be equal to or, preferably, larger than the percentage of resources provided by the largest host. This way you ensure that all virtual machines residing on this host can be restarted in case of a host failure.
As explained earlier, this Admission Control Policy does not use slots. As such, resources might be fragmented throughout the cluster. Although DRS is notified to rebalance the cluster, if needed, to accommodate these virtual machines’ resource requirements, a guarantee cannot be given. We recommend selecting the highest restart priority for such a virtual machine (depending, of course, on the SLA) to ensure it will be able to boot.
The following example and diagram (Figure 46) will make this more obvious: you have 3 hosts, each with roughly 80% memory usage, and you have configured HA to reserve 20% of resources for both CPU and memory. A host fails and all of its virtual machines need to fail over. One of those virtual machines has a 4 GB memory reservation. As you can imagine, HA will not be able to initiate a power-on attempt, as there are not enough memory resources available to guarantee the reserved capacity. Instead, an event will be generated indicating “not enough resources for failover” for this virtual machine.

Figure 46 - Available resources
Basic design principle: Although HA will utilize DRS to try to accommodate the resource requirements of this virtual machine, a guarantee cannot be given. Do the math: verify that any single host has enough resources to power on your largest virtual machine. Also take restart priority into account for this virtual machine (or these virtual machines).
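A back-of-the-envelope version of that math, with hypothetical per-host headroom figures matching the scenario above:

  # Does the largest reservation still fit on any single host?
  hosts_free_mem_gb = [3.0, 2.5, 3.5]        # unreserved memory left per host (hypothetical)
  largest_vm_reservation_gb = 4.0

  fits = any(free >= largest_vm_reservation_gb for free in hosts_free_mem_gb)
  print(fits)   # False -> expect a "not enough resources for failover" event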

Failover Hosts

The third option one could choose is to select one or multiple designated Failover hosts. This is commonly referred to as a hot standby.

Figure 47 - Select failover hosts Admission Control Policy
It is “what you see is what you get”. When you designate hosts as failover hosts, they will not participate in DRS and you will not be able to run virtual machines on these hosts, not even in a two-host cluster when one of the two hosts is placed in maintenance mode. These hosts are literally reserved for failover situations. HA will attempt to use these hosts first to fail over the virtual machines. If, for whatever reason, this is unsuccessful, it will attempt a failover on any of the other hosts in the cluster. For example, if three hosts fail, including the hosts designated as failover hosts, HA will still try to restart the impacted virtual machines on the remaining host. Although this host was not a designated failover host, HA will use it to limit downtime.
Figure 48 - Select multiple failover hosts

Decision Making Time

As with any decision you make, there is an impact on your environment. The impact could be positive, but it could also be unexpected. This especially goes for HA Admission Control. Selecting the right Admission Control Policy can lead to a quicker Return On Investment and a lower Total Cost of Ownership. In the previous section, we described all the algorithms and mechanisms that form Admission Control; in this section we will focus more on the design considerations around selecting the appropriate Admission Control Policy for your or your customer’s environment.
The first decision that will need to be made is whether Admission Control will be enabled. We generally recommend enabling Admission Control as it is the only way of guaranteeing your virtual machines will be allowed to restart after a failure. It is important, though, that the policy is carefully selected and fits your or your customer’s requirements.
Basic design principle: Admission control guarantees enough capacity is available for virtual machine failover. As such, we recommend enabling it.
Although we already have explained all the mechanisms that are being used by each of the policies in the previous section, we will give a high level overview and list all the pros and cons in this section. On top of that, we will expand on what we feel is the most flexible Admission Control Policy and how it should be configured and calculated.

Host Failures Cluster Tolerates

This option is historically speaking the most used for Admission Control. Most environments are designed with an N+1 redundancy and N+2 is also not uncommon. This Admission Control Policy uses “slots” to ensure enough capacity is reserved for failover, which is a fairly complex mechanism. Slots are based on VM-level reservations; if reservations are not used, a default slot size of 32 MHz is defined for CPU and the largest memory overhead of any given virtual machine is used for memory.
Pros:
  • Fully automated (When a host is added to a cluster, HA re-calculates how many slots are available.)
  • Guarantees failover by calculating slot sizes.
Cons:
  • Can be very conservative and inflexible when reservations are used as the largest reservation dictates slot sizes.
  • Unbalanced clusters lead to wastage of resources.
  • Complex for the administrator from a calculation perspective.

Percentage as Cluster Resources Reserved

The percentage based Admission Control is based on per-reservation calculation instead of the slots mechanism. The percentage based Admission Control Policy is less conservative than “Host Failures” and more flexible than “Failover Hosts”.
Pros:
  • Accurate as it considers actual reservation per virtual machine to calculate available failover resources.
  • Cluster dynamically adjusts when resources are added.
Cons:
  • Manual recalculation of the percentage is needed when hosts are added to the cluster and the number of tolerated host failures needs to remain unchanged.
  • Unbalanced clusters can be a problem when chosen percentage is too low and resources are fragmented, which means failover of a virtual machine can’t be guaranteed as the reservation of this virtual machine might not be available as a block of resources on a single host.
Please note that, although a failover cannot be guaranteed, there are few scenarios where a virtual machine will not be able to restart due to the integration HA offers with DRS and the fact that most clusters have spare capacity available to account for virtual machine demand variance. Although this is a corner-case scenario, it needs to be considered in environments where absolute guarantees must be provided.

Specify Failover Hosts

With the “Specify Failover Hosts” Admission Control Policy, when one or multiple hosts fail, HA will attempt to restart all virtual machines on the designated failover hosts. The designated failover hosts are essentially “hot standby” hosts. In other words, DRS will not migrate virtual machines to these hosts when resources are scarce or the cluster is imbalanced.
Pros:
  • What you see is what you get.
  • No fragmented resources.
Cons:
  • What you see is what you get.
  • Dedicated failover hosts not utilized during normal operations.

Recommendations

We have been asked many times for our recommendation on Admission Control and it is difficult to answer as each policy has its pros and cons. However, we generally recommend a Percentage based Admission Control Policy. It is the most flexible policy as it uses the actual reservation per virtual machine instead of taking a “worst case” scenario approach like the number of host failures does. However, the number of host failures policy guarantees the failover level under all circumstances. Percentage based is less restrictive, but offers lower guarantees that in all scenarios HA will be able to restart all virtual machines. With the added level of integration between HA and DRS we believe a Percentage based Admission Control Policy will fit most environments.
Basic design principle: Do the math, and take customer requirements into account. We recommend using a “percentage” based admission control policy, as it is the most flexible.
Now that we have recommended which Admission Control Policy to use, the next step is to provide guidance around selecting the correct percentage. We cannot tell you what the ideal percentage is as that totally depends on the size of your cluster and, of course, on your resiliency model (N+1 vs. N+2). We can, however, provide guidelines around calculating how much of your resources should be set aside and how to prevent wasting resources.

Selecting the Right Percentage

It is a common strategy to select the equivalent of a single host as the percentage of resources reserved for failover. We generally recommend selecting a percentage that is the equivalent of a single host or multiple hosts. Let’s explain why and what the impact is of not using the equivalent of a single host or multiple hosts.
Let’s start with an example: a cluster consists of 8 ESXi hosts, each containing 70 GB of available RAM. This might sound like an awkward memory configuration, but to simplify things we have already subtracted 2 GB as virtualization overhead. Although virtualization overhead is probably less than 2 GB, we have used this number to make the calculations easier. This example zooms in on memory, but the concept applies to CPU as well, of course.
For this cluster we will set the percentage of resources to reserve for both memory and CPU to 20%. The eight hosts provide 560 GB of memory in total; reserving 20% leaves 448 GB available for virtual machines:
(70 GB + 70 GB + 70 GB + 70 GB + 70 GB + 70 GB + 70 GB + 70 GB) * (1 – 20%) = 448 GB
A total of 112 GB of memory is reserved as failover capacity.
Once a percentage is specified, that percentage of resources will be unavailable for virtual machines, therefore it makes sense to set the percentage as close as possible to the value that equals the resources a single host (or multiple hosts) represents. We will demonstrate why this is important in subsequent examples.
In the example above, 20% of resources was reserved in an 8-host cluster. This configuration reserves more resources than a single host contributes to the cluster. HA’s main objective is to provide automatic recovery for virtual machines after a physical server failure. For this reason, it is recommended to reserve resources equal to a single or multiple hosts. Using per-host granularity in an 8-host cluster (homogeneously configured hosts), the resource contribution per host to the cluster is 12.5%. However, the percentage used must be an integer (whole number). It is recommended to round up to the value guaranteeing that the full capacity of one host is protected; in this example (Figure 49), the conservative approach would lead to a percentage of 13%.
Figure 49 - Setting the correct value
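The rounding logic from this example in a few lines:

  import math

  hosts = 8
  per_host_pct = 100 / hosts                  # 12.5% contribution per host
  conservative_pct = math.ceil(per_host_pct)  # 13% -> protects one full host
  print(per_host_pct, conservative_pct)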

Aggressive Approach

We have seen many environments where the percentage was set to a value that was less than the contribution of a single host to the cluster. Although this approach reduces the amount of resources reserved for accommodating host failures and results in higher consolidation ratios, it also offers a lower guarantee that HA will be able to restart all virtual machines after a failure. One might argue that this approach will more than likely work as most environments will not be fully utilized; however, it also eliminates the guarantee that after a failure all virtual machines will be recovered. Wasn’t that the reason for enabling HA in the first place?

Adding Hosts to Your Cluster

Although the percentage is dynamic and calculates capacity at a cluster level, changes to your selected percentage might be required when expanding the cluster. The reason is that the amount of resources reserved for a failover might no longer correspond with the contribution per host and, as a result, lead to resource wastage. For example, adding 4 hosts to an 8-host cluster while continuing to use the previously configured admission control policy value of 13% results in a failover capacity equivalent to roughly 1.5 hosts. Figure 50 depicts a scenario where an 8-host cluster is expanded to 12 hosts. Each host holds eight 2 GHz cores and 70 GB of memory. The cluster was originally configured with admission control set to 13%, which equals 109.2 GB and 24.96 GHz. If the requirement is only to tolerate a single host failure, a single host now represents roughly 8.3% of the cluster, so 9% would suffice; the extra 4% equals 7.68 GHz and 33.6 GB of “wasted” capacity, as clearly demonstrated in the diagram below.
Figure 50 - Avoid wasting resources
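The waste in this expansion example can be reproduced as follows (host sizes as stated above, with 13% still configured while only N+1 is required):

  import math

  hosts, host_ghz, host_gb = 12, 8 * 2, 70                  # eight 2 GHz cores, 70 GB each
  total_ghz, total_gb = hosts * host_ghz, hosts * host_gb   # 192 GHz, 840 GB

  configured_pct = 13
  needed_pct = math.ceil(100 / hosts)                       # 9% covers a single host

  wasted_ghz = total_ghz * (configured_pct - needed_pct) / 100   # 7.68 GHz
  wasted_gb = total_gb * (configured_pct - needed_pct) / 100     # 33.6 GB
  print(wasted_ghz, wasted_gb)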

How to Define Your Percentage?

As explained earlier, it fully depends on the N+X model that has been chosen. Based on this model, we recommend selecting a percentage that equals the amount of resources the X hosts represent. So, in the case of an 8-host cluster and N+2 resiliency, the percentage should be set as follows: (2 / 8) * 100 = 25%
Basic design principle: In order to avoid wasting resources we recommend carefully selecting your N+X resiliency architecture. Calculate the required percentage based on this architecture.


⇚==================================================================================================⇛




  1. VMware Networking Concepts 
  2. Troubleshooting 
  3. Performance Troubleshooting
  4. Firmware Upgrade
  5. Configuration  

Coming soon....
But please email us at nitinmhalim@virtualconsultancyservices.com if you need more details (design/planning, configuration/implementation, troubleshooting, maintenance) about VMware products in an emergency. We will provide the best solution and point you in the right direction.
