EXPLORING VMWARE vSPHERE 6.7 | 13

The vSphere HA feature, unlike DRS, does not always use the vMotion technology as a means of migrating servers to another host. vMotion applies only to planned migrations, where both the source and destination ESXi host are running and functioning. Let us explain what we mean. In a vSphere HA failover situation, there is no anticipation of failure; it is not a planned outage, which means there is no time to perform a vMotion operation. vSphere HA is intended to minimize unplanned downtime because of the failure of a physical ESXi host or other infrastructure components. We'll go into more detail in Chapter 7 on what kinds of failures vSphere HA helps protect against.

vSphere HA Improvements from vSphere 5

vSphere HA received a few notable improvements over the last few releases. Scalability was significantly improved, and it was closely integrated with the intelligent placement functionality of vSphere DRS, giving vSphere HA greater ability to restart VMs in the event of a host failure. However, perhaps the most significant improvement is the complete rewrite of the underlying architecture for vSphere HA; this newer architecture, known as Fault Domain Manager (FDM), eliminated many of the constraints found in earlier versions of VMware vSphere (before version 5.0).

By default, vSphere HA does not provide failover in the event of a guest OS failure, although you can configure vSphere HA to monitor VMs and restart them automatically if they fail to respond to an internal heartbeat. This feature is called VM Failure Monitoring, and it uses a combination of internal heartbeats and I/O activity to attempt to detect if the guest OS inside a VM has stopped functioning. If the guest OS has stopped functioning, the VM can be restarted automatically.

Figure 1.3 The vSphere HA feature will restart any VMs that were previously running on an ESXi host that experiences server or storage path failure.
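The detection logic behind VM Failure Monitoring, combining heartbeats with observed I/O activity before declaring a guest OS dead, can be sketched as a toy monitoring loop. This is a simplified illustration, not the actual vSphere implementation; the `FAILURE_THRESHOLD` name and its value are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class MonitoredVM:
    """Toy model of a VM whose guest OS emits periodic heartbeats."""
    name: str
    missed_heartbeats: int = 0
    recent_io: bool = False   # storage/network I/O observed this interval
    restarts: int = 0

# Hypothetical threshold: consecutive silent intervals tolerated before restart.
FAILURE_THRESHOLD = 3

def check_vm(vm: MonitoredVM, heartbeat_received: bool) -> str:
    """One monitoring interval: combine heartbeat and I/O evidence before
    declaring the guest OS failed, then restart the VM if the threshold is hit."""
    if heartbeat_received or vm.recent_io:
        vm.missed_heartbeats = 0
        return "healthy"
    vm.missed_heartbeats += 1
    if vm.missed_heartbeats >= FAILURE_THRESHOLD:
        vm.restarts += 1          # guest OS presumed hung: restart the VM
        vm.missed_heartbeats = 0
        return "restarted"
    return "suspect"
```

Note that a VM still generating I/O is treated as healthy even when heartbeats stop; combining the two signals is what keeps a busy-but-quiet guest from being restarted unnecessarily.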
14 | CHAPTER 1 INTRODUCING VMWARE vSPHERE 6.7

With vSphere HA in a failure scenario, it's important to understand that there will be an interruption of service. If a physical host or storage device fails, vSphere HA restarts the VM, and while the VM is restarting, the applications or services provided by that VM are unavailable. The only time that this is not true is if Proactive HA is enabled on the host. Proactive HA uses hardware monitoring to proactively move VMs from a host that is suffering from hardware issues.

For users who need even higher levels of availability than can be provided using vSphere HA, vSphere Fault Tolerance (FT), which is described in the next section, can help.

vSphere Fault Tolerance

Although vSphere HA provides a certain level of availability for VMs in the event of physical host failure, this might not be good enough for some workloads. vSphere FT might help in these situations.

As we described in the previous section, vSphere HA protects against unplanned physical server failure by providing a way to automatically restart VMs upon physical host failure. This need to restart a VM in the event of a physical host failure means that some downtime (generally less than three minutes) is incurred. vSphere FT goes even further and eliminates any downtime in the event of a physical host failure. vSphere FT maintains a mirrored secondary VM on a separate physical host that is kept in lockstep with the primary VM. vSphere's newer Fast Checkpointing technology supports FT of VMs with one to four vCPUs. Everything that occurs on the primary (protected) VM also occurs simultaneously on the secondary (mirrored) VM, so that if the physical host for the primary VM fails, the secondary VM can immediately step in and take over without any loss of connectivity. vSphere FT will also automatically re-create the secondary (mirrored) VM on another host if the physical host for the secondary VM fails, as illustrated in Figure 1.4.
This ensures protection for the primary VM at all times.

Figure 1.4 vSphere FT provides protection against host failures with no downtime experienced by the VMs.
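The lockstep idea is easiest to see in miniature: every state change lands on both the primary and the mirrored secondary, so promoting the secondary loses nothing. This is a deliberately simplified model (real vSphere FT mirrors VM execution, not a Python dictionary), with all names invented for the sketch.

```python
import copy

class FTPair:
    """Toy lockstep pair: every state change is applied to both the primary
    and the mirrored secondary VM, so failover loses no state."""

    def __init__(self) -> None:
        self.primary = {"requests_served": 0}
        self.secondary = copy.deepcopy(self.primary)

    def handle_request(self) -> None:
        # Lockstep: the same change lands on both copies "simultaneously".
        self.primary["requests_served"] += 1
        self.secondary["requests_served"] += 1

    def primary_host_failed(self) -> dict:
        # The secondary takes over immediately with identical state: no
        # restart, no downtime. A fresh mirror is then re-created, matching
        # FT's behavior of rebuilding the secondary on another host.
        self.primary = self.secondary
        self.secondary = copy.deepcopy(self.primary)
        return self.primary
```

Contrast this with the vSphere HA model above, where the VM's in-memory state is lost and service resumes only after a restart.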
In the event of multiple host failures (say, the hosts running both the primary and secondary VMs failed), vSphere HA will reboot the primary VM on another available server, and vSphere FT will automatically create a new secondary VM. Again, this ensures protection for the primary VM at all times.

vSphere FT can work in conjunction with vMotion. As of vSphere 5.0, vSphere FT is also integrated with vSphere DRS, although this feature does require Enhanced vMotion Compatibility (EVC). VMware recommends that multiple FT virtual machines with multiple vCPUs have 10GbE networks between hosts.

vSphere Storage APIs for Data Protection and VMware Data Protection

One of the most critical aspects of any IT infrastructure, not just virtualized infrastructure, is a solid backup strategy as defined by a company's disaster recovery and business continuity plan. To help address organizational backup needs, VMware vSphere has a key component: the vSphere Storage APIs for Data Protection (VADP). VADP is a set of application programming interfaces (APIs) that backup vendors leverage in order to provide enhanced backup functionality for virtualized environments. VADP enables functionality like file-level backup and restore; support for incremental, differential, and full-image backups; native integration with backup software; and support for multiple storage protocols.

On its own, though, VADP is just a set of interfaces, like a framework for making backups possible. You can't actually back up VMs with VADP. You'll need a VADP-enabled backup application. There is a growing number of third-party backup applications designed to work with VADP from vendors such as Commvault, Dell EMC, and Veritas.

vSphere Data Protection

In vSphere 5.1, VMware phased out its earlier data protection tool, VMware Data Recovery (VDR), in favor of vSphere Data Protection (VDP). Although VDR was provided with vSphere 5.0, VDR is not supported with vSphere 5.1 and later.
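The three backup scopes mentioned above (full-image, differential, and incremental) differ only in which blocks a backup application copies on each run. A minimal sketch of that selection logic, as a generic illustration rather than any part of VADP itself:

```python
def select_blocks(mode: str, all_blocks: set,
                  changed_since_full: set, changed_since_last: set) -> set:
    """Return the blocks a toy backup engine would copy.

    changed_since_full: blocks modified since the last full backup
    changed_since_last: blocks modified since the most recent backup of any kind
    """
    if mode == "full":
        return set(all_blocks)          # copy everything, every time
    if mode == "differential":
        return set(changed_since_full)  # grows until the next full backup
    if mode == "incremental":
        return set(changed_since_last)  # only the newest changes
    raise ValueError(f"unknown backup mode: {mode!r}")
```

The trade-off falls out directly: incrementals copy the least data but require the whole chain to restore, while differentials need only the last full backup plus the most recent differential.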
Subsequently, VMware has also discontinued VDP from vSphere 6.5. Backups of your vSphere environment now need to be handled by another vendor.

Virtual SAN (vSAN)

vSAN was a major new feature included with, but licensed separately from, vSphere 5.5 and later. It is the evolution of work that VMware has been doing for a number of years now. vSAN lets organizations leverage the internal local storage found in individual compute nodes and turn it into a virtual SAN.

vSAN requires a minimum of two ESXi hosts (or nodes) for some limited configurations, but it will scale to as many as 64. vSAN also requires solid-state (flash) storage in each of the compute nodes providing vSAN storage; this is done to help improve I/O performance, given that most compute nodes have a limited number of physical drives present. vSAN pools the aggregate storage across the compute nodes, allowing you to create a datastore that spans multiple compute nodes. vSAN employs policies and algorithms to ensure performance or to help protect
against data loss, such as ensuring that the data exists on multiple participating vSAN nodes at the same time. There's more information on vSAN in Chapter 6, "Creating and Configuring Storage Devices."

vSphere Replication

vSphere Replication brings data replication, which is a feature typically found in hardware storage platforms, into vSphere itself. It's been around since vSphere 5.0, when it was only enabled for use in conjunction with VMware Site Recovery Manager (SRM) 5.0. In vSphere 5.1, vSphere Replication was decoupled from SRM and enabled for independent use without VMware SRM.

vSphere Replication enables customers to replicate VMs from one vSphere environment to another vSphere environment. Typically, this means from one data center (often referred to as the primary or production data center) to another data center (typically the secondary, backup, or disaster recovery [DR] site). Unlike hardware-based solutions, vSphere Replication operates on a per-VM basis, so it gives customers very granular control over which workloads will be replicated and which workloads won't be replicated. You can find more information about vSphere Replication in Chapter 7.

vSphere Flash Read Cache

Since the release of vSphere 5.0 in 2011, the industry has seen tremendous uptake in the use of solid-state or "flash" storage across a wide variety of use cases. Because solid-state storage can provide massive numbers of I/O operations per second (IOPS) and very large bandwidth (Mbps), it can handle the increasing I/O demands of virtual workloads. However, depending on the performance, solid-state storage is still typically more expensive on a per-gigabyte basis than traditional, magnetic-disk-based storage and therefore is often first deployed as a caching mechanism to help speed up frequently accessed data.
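The economics just described, a small amount of expensive flash absorbing reads of frequently accessed data, can be seen in a toy LRU read cache. This is a generic caching sketch, not vSphere's actual caching algorithm; all names are invented for the example.

```python
from collections import OrderedDict

class LRUReadCache:
    """Tiny LRU read cache: a small, fast tier in front of a slow backing
    store, tracking hits and misses to show why hot data benefits most."""

    def __init__(self, capacity: int, backing: dict):
        self.capacity = capacity
        self.backing = backing              # stands in for magnetic disk
        self.cache: OrderedDict = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)     # mark as most recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]           # slow path: fetch from disk
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value
```

Even a cache holding a tiny fraction of the dataset serves most reads from the fast tier once the working set fits, which is exactly why flash is deployed first as a cache rather than as primary capacity.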
Unfortunately, without support in vSphere for managing solid-state storage as a caching mechanism, vSphere architects and administrators have had difficulty fully leveraging solid-state storage in their environments. In vSphere 5.5 and later, VMware addresses that limitation through a feature called vSphere Flash Read Cache.

vSphere Flash Read Cache brings full support for using solid-state storage as a caching mechanism to vSphere. Using this feature, you can assign solid-state caching space to VMs in much the same way as you assign CPU cores, RAM, or network connectivity to VMs. vSphere manages how the solid-state caching capacity is allocated and assigned, as well as how it is used by the VMs.

VMware vSphere Compared to Microsoft Hyper-V and Citrix Hypervisor

It's not possible to compare some virtualization solutions to others, because they are fundamentally different in approach and purpose. Such is the case with VMware ESXi and some of the other virtualization solutions on the market.

To make accurate comparisons between vSphere and others, you must include only Type 1 ("bare-metal") virtualization solutions. This would include ESXi, Microsoft Hyper-V, and Citrix Hypervisor.
It would not include products such as VMware Fusion or Workstation and Windows Virtual PC, all of which are Type 2 ("hosted") virtualization products. Even within the Type 1 hypervisors, there are architectural differences that make direct comparisons difficult.

For example, both Microsoft Hyper-V and Citrix Hypervisor route all the VM I/O through the "parent partition" or "dom0." This typically provides greater hardware compatibility with a wider range of products. In the case of Hyper-V, for example, as soon as Windows Server (the general-purpose operating system running in the parent partition) supports a particular type of hardware, Hyper-V supports it also. Hyper-V "piggybacks" on Windows' hardware drivers and the I/O stack. The same can be said for Citrix Hypervisor, although its "dom0" runs Linux and not Windows.

VMware ESXi, on the other hand, handles I/O within the hypervisor itself. This typically provides greater throughput and lower overhead at the expense of slightly more limited hardware compatibility. To add more hardware support or updated drivers, the hypervisor must be updated, because the I/O stack and device drivers are in the hypervisor.

This architectural difference is fundamental, and nowhere is it more greatly demonstrated than in ESXi, which has a small footprint yet provides a full-featured virtualization solution. Both Citrix Hypervisor and Microsoft Hyper-V require a full installation of a general-purpose operating system (Windows Server for Hyper-V, Linux for Citrix Hypervisor) in the parent partition/dom0 in order to operate.

In the end, each of the virtualization products has its own set of advantages and disadvantages, and large organizations may end up using multiple products. For example, VMware vSphere might be best suited in a large corporate datacenter, whereas Microsoft Hyper-V or Citrix Hypervisor might be acceptable for test, development, or branch office deployment.
Organizations that don't require VMware vSphere's advanced features like vSphere DRS, vSphere FT, or Storage vMotion may also find that Microsoft Hyper-V or Citrix Hypervisor is a better fit for their needs.

As you can see, VMware vSphere offers some pretty powerful features that will change the way you view the resources in your datacenter. vSphere also has a wide range of features and functionality. Some of these features, though, might not be applicable to all organizations, which is why VMware has crafted a flexible licensing scheme for organizations of all sizes.

Licensing VMware vSphere

With each new version, VMware usually revises the licensing tiers and bundles intended to provide a good fit for every market segment. Introduced with vSphere 5.1 (and continuing on through vSphere 6.7), VMware refined this licensing arrangement with the vCloud Suite, a bundling of products including vSphere, vRealize Automation, vCenter Site Recovery Manager, and vRealize Operations Management Suite.

Although licensing vSphere via the vCloud Suite is likely the preferred way of licensing vSphere moving forward, discussing all the other products included in the vCloud Suite is beyond the scope of this book. Instead, we'll focus on vSphere and explain how the various features discussed so far fit into vSphere's licensing model when vSphere is licensed stand-alone.

One thing that you need to be aware of is that VMware may change the licensing tiers and capabilities associated with each tier at any time. You should visit the vSphere products web page (www.vmware.com/products/vsphere.html) or talk to your VMware representative before making any purchasing decisions.