Further, VMware has stated that the Flash-based vSphere Web Client and the Windows-based vSphere Desktop Client are now end-of-life. Luckily, the step-by-step procedures for the Flash-based vSphere Web Client and the HTML5-based vSphere Client are usually identical. For this reason, we’ll use Flash-based vSphere Web Client screen shots and step-by-step guidance throughout this book to ensure each instruction can be completed with the same client.

Administering hosts without vCenter has also changed. You now access the user interface by browsing to the URL of each ESXi host. This loads an HTML5-based user interface (UI), but only for that particular host. No client installation is needed.

This can be a little confusing if this is your first foray into the VMware landscape, so let us recap. The vSphere Web Client, based on Flash, has been deprecated. The Windows-installable vSphere Desktop Client (for connecting to vCenter and hosts) has been deprecated. To administer vCenter, and hosts attached to a vCenter Server, use the new HTML5-based vSphere Client or the Flash-based vSphere Web Client. To administer ESXi hosts directly, without vCenter, use the HTML5-based vSphere Host Client.

Examining the Features in VMware vSphere

In the following sections, we’ll take a closer look at some of the features available in the vSphere product suite. We’ll start with Virtual SMP.

vSphere Virtual Symmetric Multi-Processing

The vSphere Virtual Symmetric Multi-Processing (vSMP or Virtual SMP) product allows you to construct VMs with multiple virtual processor cores and/or sockets. vSphere Virtual SMP is not the licensing product that allows ESXi to be installed on servers with multiple processors; it is the technology that allows the use of multiple processors inside a VM. Figure 1.2 identifies the differences between multiple processors in the ESXi host system and multiple virtual processors.

Figure 1.2 vSphere Virtual SMP allows VMs to be created with more than one virtual CPU.

With vSphere Virtual SMP, applications that require and can actually use multiple CPUs can be run in VMs configured with multiple virtual CPUs. This allows organizations to virtualize even more applications without negatively impacting performance or being unable to meet service-level agreements (SLAs).

This functionality also allows users to specify multiple virtual cores per virtual CPU. Using this feature, a user could provision a dual “socket” VM with two cores per “socket” for a total of four virtual cores. This approach gives users tremendous flexibility in carving up CPU processing power among the VMs.
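The virtual CPU topology can also be set programmatically through the vSphere API. The following is a minimal sketch using the open source pyVmomi SDK to give an existing, powered-off VM two virtual sockets with two cores each; the vCenter address, credentials, and the VM name "app01" are placeholders for illustration only, not values from this book's lab.

```python
# Minimal pyVmomi sketch (placeholder hostnames, credentials, and VM name).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

# Certificate verification is disabled here purely for lab use.
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vm = find(content, vim.VirtualMachine, 'app01')

# Two virtual sockets with two cores per socket: four vCPUs in total.
# Changing the CPU count requires the VM to be powered off unless CPU hot add is enabled.
spec = vim.vm.ConfigSpec(numCPUs=4, numCoresPerSocket=2)
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```

Editing a VM's CPU settings in the vSphere Client ultimately drives the same reconfiguration call.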
vSphere vMotion and vSphere Storage vMotion

If you have read anything about VMware, you have most likely read about the extremely useful feature called vMotion. vSphere vMotion, also known as live migration, is a feature of ESXi and vCenter Server that allows you to move a running VM from one physical host to another physical host without having to power off the VM. This migration between two physical hosts occurs with no downtime and with no loss of network connectivity to the VM.

The ability to manually move a running VM between physical hosts on an as-needed basis is a powerful feature that has a number of use cases in today’s datacenters. Suppose a physical machine has experienced a nonfatal hardware failure and needs to be repaired. You can easily initiate a series of vMotion operations to remove all VMs from an ESXi host that is to undergo scheduled maintenance. After the maintenance is complete and the server is brought back online, you can use vMotion to return the VMs to the original server.

Alternately, consider a situation in which you are migrating from one set of physical servers to a new set of physical servers. Assuming that the details have been addressed—and we’ll discuss the details of vMotion in Chapter 12, “Balancing Resource Utilization”—you can use vMotion to move the VMs from the old servers to the newer servers, making quick work of a server migration with no interruption of service.

Even in normal day-to-day operations, vMotion can be used when multiple VMs on the same host are in contention for the same resource (which ultimately causes poor performance across all the VMs). With vMotion, you can migrate any VMs facing contention to another ESXi host with greater availability for the resource in demand. For example, when two VMs contend with each other for CPU resources, you can eliminate the contention by using vMotion to move one VM to an ESXi host with more available CPU resources.

vMotion moves the execution of a VM, relocating the CPU and memory footprint between physical servers but leaving the storage untouched. Storage vMotion builds on the idea and principle of vMotion: you can leave the CPU and memory footprint untouched on a physical server but migrate a VM’s storage while the VM is still running.

Deploying vSphere in your environment generally means that lots of shared storage—Fibre Channel or FCoE or iSCSI SAN or NFS—is needed. What happens when you need to migrate from an older storage array to newer storage hardware based on vSAN? What kind of downtime would be required? Or what about a situation where you need to rebalance utilization of the array, either from a capacity or performance perspective? With the ability to move storage for a running VM between datastores, Storage vMotion lets you address all of these situations without downtime. This feature ensures that outgrowing datastores or moving to new storage hardware does not force an outage for the affected VMs and provides you with yet another tool to increase your flexibility in responding to changing business needs.
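Both vMotion and Storage vMotion can also be initiated through the vSphere API. The sketch below, again using pyVmomi with the same lab-style connection as the earlier Virtual SMP sketch, starts a vMotion to another host and a Storage vMotion to another datastore; the inventory names ("app01", "esxi02.example.com", "datastore02") are assumed placeholders, and this is an illustration rather than a migration procedure from this book.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vm = find(content, vim.VirtualMachine, 'app01')
target_host = find(content, vim.HostSystem, 'esxi02.example.com')
target_ds = find(content, vim.Datastore, 'datastore02')

# vMotion: move the VM's running state (CPU and memory) to another ESXi host.
vm.MigrateVM_Task(host=target_host,
                  priority=vim.VirtualMachine.MovePriority.defaultPriority)

# Storage vMotion: relocate the VM's virtual disks to another datastore
# while the VM keeps running on its current host.
# (In practice, wait for the first task to complete before starting the second.)
vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds))

Disconnect(si)
```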
vSphere Distributed Resource Scheduler

vMotion is a manual operation, meaning that you must initiate the vMotion operation. What if VMware vSphere could perform vMotion operations automatically? That is the basic idea behind vSphere Distributed Resource Scheduler (DRS). If you think that vMotion sounds exciting, your anticipation will only grow after learning about DRS. DRS, simply put, leverages vMotion to provide automatic distribution of resource utilization across multiple ESXi hosts that are configured in a cluster.

Given the prevalence of Microsoft Windows Server in today’s datacenters, the use of the term cluster often draws IT professionals into thoughts of Microsoft Windows Server Failover Clusters. Windows Server clusters are often active-passive or active-active-passive clusters. However, ESXi clusters are fundamentally different, operating in an active-active mode to aggregate and combine resources into a shared pool. Although the underlying concept of aggregating physical hardware to serve a common goal is the same, the technology, configuration, and feature sets are quite different between VMware ESXi clusters and Windows Server clusters.

Aggregate Capacity and Single Host Capacity

Although we say that a DRS cluster is an implicit aggregation of CPU and memory capacity, it’s important to keep in mind that a VM is limited to using the CPU and RAM of a single physical host at any given time. If you have two small ESXi servers with 64 GB of RAM each in a DRS cluster, the cluster will correctly report 128 GB of aggregate RAM available, but any given VM will not be able to use more than approximately 64 GB of RAM at a time.

An ESXi cluster is an implicit aggregation of the CPU power and memory of all hosts involved in the cluster. After two or more hosts have been assigned to a cluster, they work in unison to provide CPU and memory to the VMs assigned to the cluster (keeping in mind that any given VM can only use resources from one host; see the sidebar “Aggregate Capacity and Single Host Capacity”). The goal of DRS is twofold:

◆ At startup, DRS attempts to place each VM on the host that is best suited to run that VM at that time.

◆ Once a VM is running, DRS seeks to provide that VM with the required hardware resources while minimizing the amount of contention for those resources in an effort to maintain balanced utilization levels.

The first part of DRS is often referred to as intelligent placement. DRS can automate the placement of each VM as it is powered on within a cluster, placing it on the host in the cluster that it deems to be best suited to run that VM at that moment.

DRS isn’t limited to operating only at VM startup, though. DRS also manages the VM’s location while it is running. For example, let’s say three hosts have been configured in an ESXi cluster with DRS enabled. When one of those hosts begins to experience a high contention for CPU utilization, DRS detects that the cluster is imbalanced in its resource usage and uses an internal algorithm to determine which VM(s) should be moved in order to create the least imbalanced cluster. For every VM, DRS will simulate a migration to each host and the results will be compared. The migrations that create the least imbalanced cluster will be recommended or automatically performed, depending on the DRS configuration.

DRS performs these on-the-fly migrations without any downtime or loss of network connectivity to the VMs by leveraging vMotion, the live migration functionality we described earlier. This makes DRS extremely powerful because it allows clusters of ESXi hosts to dynamically rebalance their resource utilization based on the changing demands of the VMs running on that cluster.
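DRS is enabled and tuned on a per-cluster basis. As a rough, hedged illustration of what that looks like against the vSphere API, the pyVmomi sketch below enables fully automated DRS on an existing cluster; the cluster name "Cluster01", the connection details, and the chosen migration-threshold value are assumptions for the example, not recommendations.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the cluster by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Cluster01')
view.DestroyView()

# Enable DRS in fully automated mode; vmotionRate is the migration threshold
# on a 1-5 scale (3 is the default, middle setting).
drs_spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
        vmotionRate=3))
cluster.ReconfigureComputeResource_Task(spec=drs_spec, modify=True)

Disconnect(si)
```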
Fewer Bigger Servers or More Smaller Servers?

Recall from Table 1.2 that VMware ESXi supports servers with up to 768 logical CPU cores and up to 16 TB of RAM. With vSphere DRS, though, you can combine multiple smaller servers for the purpose of managing aggregate capacity. This means that bigger, more powerful servers might not be better servers for virtualization projects. These larger servers, in general, are significantly more expensive than smaller servers, and using a greater number of smaller servers (often referred to as “scaling out”) may provide greater flexibility than a smaller number of larger servers (often referred to as “scaling up”). The key thing to remember is that a bigger server isn’t necessarily a better server.

vSphere Storage DRS

vSphere Storage DRS takes the idea of vSphere DRS and applies it to storage. Just as vSphere DRS helps to balance CPU and memory utilization across a cluster of ESXi hosts, Storage DRS helps balance storage capacity and storage performance across a cluster of datastores using mechanisms that echo those used by vSphere DRS.

Earlier, we described vSphere DRS’s feature called intelligent placement, which automates the placement of new VMs based on resource usage within an ESXi cluster. In the same fashion, Storage DRS has an intelligent placement function that automates the placement of VM virtual disks based on storage utilization. Storage DRS does this through the use of datastore clusters. When you create a new VM, you simply point it to a datastore cluster, and Storage DRS automatically places the VM’s virtual disks on an appropriate datastore within that datastore cluster.

Likewise, just as vSphere DRS uses vMotion to balance resource utilization dynamically, Storage DRS uses Storage vMotion to rebalance storage utilization based on capacity and/or latency thresholds. Because Storage vMotion operations are typically much more resource-intensive than vMotion operations, vSphere provides extensive controls over the thresholds, timing, and other guidelines that will trigger a Storage DRS automatic migration via Storage vMotion.

Storage I/O Control and Network I/O Control

VMware vSphere has always had extensive controls for modifying or controlling the allocation of CPU and memory resources to VMs. Before the release of vSphere 4.1, however, vSphere could not apply extensive controls to storage I/O and network I/O. Storage I/O Control and Network I/O Control address that shortcoming.

Storage I/O Control (SIOC) allows you to assign relative priority to storage I/O as well as assign storage I/O limits to VMs. These settings are enforced cluster-wide; when an ESXi host detects storage congestion through an increase of latency beyond a user-configured threshold, it will apply the settings configured for that VM. The result is that you can help the VMs that need priority access to storage resources get more of the resources they need. In vSphere 4.1, Storage I/O Control applied only to VMFS storage; vSphere 5 extended that functionality to NFS datastores.

The same goes for Network I/O Control (NIOC), which provides you with more granular controls over how VMs use network bandwidth provided by the physical NICs. As the widespread adoption of 10 Gigabit Ethernet (10GbE) and faster continues, Network I/O Control provides you with a way to more reliably ensure that network bandwidth is properly allocated to VMs based on priority and limits.
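The relative priority and limits that SIOC enforces are ordinary per-virtual-disk settings on each VM. The pyVmomi sketch below, again with placeholder names and offered only as an assumed example rather than a procedure from this book, assigns custom disk shares and an IOPS limit to the first virtual disk of a VM; enabling SIOC itself (the congestion threshold) is a separate, per-datastore setting not shown here.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'app01')
view.DestroyView()

# Take the VM's first virtual disk, raise its storage I/O shares, and cap it
# at 500 IOPS (a limit of -1 would mean unlimited).
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    limit=500,
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom, shares=2000))

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

Disconnect(si)
```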
Policy-Based Storage

With profile-driven storage, vSphere administrators can use storage capabilities and VM storage profiles to ensure VMs reside on storage that provides the necessary levels of capacity, performance, availability, and redundancy. Profile-driven storage is built on two key components:

◆ Storage capabilities, leveraging vSphere APIs for Storage Awareness (VASA)

◆ VM storage profiles

Storage capabilities are either provided by the storage array itself (if the array can use VASA) and/or defined by a vSphere administrator. These storage capabilities represent various attributes of the storage solution.

VM storage profiles define the storage requirements for a VM and its virtual disks. You create VM storage profiles by selecting the storage capabilities that must be present for the VM to run. Datastores that have all the capabilities defined in the VM storage profile are compliant with the VM storage profile and represent possible locations where the VM could be stored.

This functionality gives you much greater visibility into storage capabilities and helps ensure that the appropriate functionality for each VM is indeed being provided by the underlying storage. These storage capabilities can be explored extensively by using VVOLs or vSAN. Refer to Table 1.1 to find out which chapter discusses profile-driven storage in more detail.

vSphere High Availability

In many cases, high availability—or the lack of high availability—is the key argument used against virtualization. The most common form of this argument more or less sounds like this: “Before virtualization, the failure of a physical server affected only one application or workload. After virtualization, the failure of a physical server will affect many more applications or workloads running on that server at the same time. We can’t put all our eggs in one basket!”

VMware addresses this concern with another feature present in ESXi clusters called vSphere High Availability (HA). Once again, by nature of the naming conventions (clusters, high availability), many traditional Windows administrators will have preconceived notions about this feature. Those notions, however, are incorrect in that vSphere HA does not function like a high-availability configuration in Windows. The vSphere HA feature provides an automated process for moving and restarting VMs that were running on an ESXi host at a time of server failure (or other qualifying infrastructure failure, as we’ll describe in Chapter 7, “Ensuring High Availability and Business Continuity”). Figure 1.3 depicts the VM migration that occurs when an ESXi host that is part of an HA-enabled cluster experiences failure.
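Like DRS, HA is a per-cluster setting and can be switched on through the same cluster reconfiguration call shown in the DRS sketch. The fragment below is a hedged pyVmomi illustration with the same placeholder connection details and cluster name; admission control and the other HA options discussed in Chapter 7 are deliberately left at minimal example values.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Cluster01')
view.DestroyView()

# Enable vSphere HA with host monitoring turned on; further HA policies
# (admission control, VM monitoring, and so on) are configured in the same spec.
ha_spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        enabled=True,
        hostMonitoring='enabled'))
cluster.ReconfigureComputeResource_Task(spec=ha_spec, modify=True)

Disconnect(si)
```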