Figure 6.43 In this dialog box, you can enable or disable storage policies on a per-cluster level.
Figure 6.44 You’ll use the Edit Multipathing button in the Datastore Manage Settings area to modify the multipathing policy.
Figure 6.45 This datastore resides on an active-passive array; specifically, a Synology NAS. You can tell this by the currently assigned path selection policy and the storage array type information.
Figure 6.46 NFS uses the networking stack, not the storage stack, for high availability and load balancing.
Figure 6.47 The choices to configure highly available NFS datastores depend on your network infrastructure and configuration.
Figure 6.48 If you have a network switch that supports multi-switch link aggregation, you can easily create a network team that spans switches.
Figure 6.49 If you have a basic network switch without multi-switch link aggregation or don’t have the experience or control of your network infrastructure, you can use VMkernel routing by placing multiple VMkernel network interfaces on separate vSwitches and different subnets.
Figure 6.50 Every NFS datastore has two TCP connections to the NFS server but only one for data.
Figure 6.51 When configuring NFS datastores, it’s important to extend the ESXi host time-outs to match the vendor best practices. This host is not configured with the recommended settings.
Figure 6.52 Mounting an NFS datastore requires that you know the IP address and the export name from the NFS server.
Figure 6.53 NFS datastores are listed among VMFS datastores, but the information provided for each is different.
Figure 6.54 This VM has both a virtual disk on a VMFS datastore and an RDM.
Figure 6.55 A thin-provisioned virtual disk uses only as much as the guest OS in the VM uses. A flat disk doesn’t pre-zero unused space, so an array with thin provisioning would show only 100 GB used. A thickly provisioned (eager zeroed) virtual disk consumes 500 GB immediately because it is pre-zeroed.
Figure 6.56 VMFS datastores support all three virtual disk types.
Figure 6.57 The Summary tab of a VM will report the total provisioned space as well as the used space.
Figure 6.58 The Edit Settings dialog box tells you what kind of disk is configured, but it doesn’t provide current space usage statistics.
Figure 6.59 A VM can use various virtual SCSI adapters. You can configure up to four virtual SCSI adapters for each VM.
Figure 6.60 This VM storage policy requires a specific user-defined storage capability.
Figure 6.61 The Enable VM Storage Policies dialog box shows the current status of VM policies and licensing compliance for the feature.
Figure 6.62 This VM does not have a VM storage policy assigned yet.
Figure 6.63 Each virtual disk can have its own VM storage policy, so you tailor VM storage capabilities on a per-virtual disk basis.
Figure 6.64 The storage capabilities specified in this VM storage policy don’t match the capabilities of the VM’s current storage location.
Figure 6.65 This VM’s current storage is compliant with its assigned VM storage policy.
Chapter 7
Figure 7.1 Each layer has its own forms of high availability.
Figure 7.2 An NLB cluster can contain up to 32 active nodes (only 5 are shown here), and traffic is distributed equally across each available node. The NLB software allows the nodes to share a common name and IP address that is referenced by clients.
Figure 7.3 Server clusters are best suited for applications and services like SQL Server, DHCP, and so on, which use a common dataset.
Figure 7.4 A cluster-in-a-box configuration does not provide protection against a single point of failure. Therefore, it is not a common or suggested form of deploying Microsoft server clusters in VMs.
Figure 7.5 A Microsoft cluster built on VMs residing on separate ESXi hosts requires shared storage access from each VM using an RDM.
Figure 7.6 A node in a Microsoft Windows Server cluster requires at least two NICs. One adapter must be able to communicate on the production network, and the second adapter is configured for internal cluster heartbeat communication.
Figure 7.7 Add a new device of type RDM Disk for the first node in a cluster and Existing Hard Disk for additional nodes.
Figure 7.8 The SCSI bus sharing for the new SCSI adapter must be set to Physical to support running a Microsoft cluster across multiple ESXi hosts.
Figure 7.9 The RDM presented to the first cluster node is formatted and assigned a drive letter.
Figure 7.10 Clustering physical machines with VM counterparts can be a cost-effective way of providing high availability.
Figure 7.11 Using a single powerful ESXi system to host multiple failover clusters is one use case for physical-to-virtual clustering.
Figure 7.12 vSphere HA provides an automatic restart of VMs that were running on an ESXi host when it failed.
Figure 7.13 The status of an ESXi host as either master or slave is provided on the host’s Summary tab. Here you can see both a master host and a slave host.
Figure 7.14 vSphere HA uses the host-X-poweron files for a slave host to notify the master that it has become isolated from the network.
Figure 7.15 VMCP allows you to determine what actions should be taken against affected VMs during storage access failures.
Figure 7.16 vSphere HA is enabled or disabled for an entire cluster.
Figure 7.17 As you can see in the Tasks pane, vSphere HA elects a master host when it is enabled on a cluster of ESXi hosts.
Figure 7.18 Deselecting Enable Host Monitoring when performing network maintenance will prevent vSphere HA from unnecessarily triggering network isolation or network partition responses.
Figure 7.19 The Admission Control Policy settings will determine how a vSphere HA–enabled cluster determines availability constraints.
Figure 7.20 You can define cluster default VM options to customize the behavior of vSphere HA.
Figure 7.21 Use the VM Overrides setting to specify which VMs should be restarted first or ignored entirely.
Figure 7.22 High-priority VMs from a failed ESXi host might not be powered on because of a lack of resources—resources consumed by VMs with a lower priority that are running on the other hosts in a vSphere HA–enabled cluster.
Figure 7.23 The option to leave VMs running when a host is isolated should be set only when the virtual and the physical networking infrastructures support high availability.
Figure 7.24 You can configure vSphere HA to monitor for guest OS and application heartbeats and restart a VM when a failure occurs.
Figure 7.25 The Custom option provides specific control over how vSphere HA monitors VMs for guest OS failure.
Figure 7.26 Select the shared datastores that vSphere HA should use for datastore heartbeating.
Figure 7.27 This blended figure shows the difference between a VM currently listed as Unprotected by vSphere HA and one that is listed as Protected by vSphere HA; note the icon next to the Windows logo. VMs may be unprotected because the master has not yet been notified by vCenter Server that the VM has been powered on and needs to be protected.
Figure 7.28 The vSphere HA Summary tab holds a wealth of information about vSphere HA and its operation. The current vSphere HA master, the number of protected and unprotected VMs, and the datastores used for heartbeating are all found here.
Figure 7.29 You can turn on vSphere FT from the context menu for a VM.
Figure 7.30 You need to select a datastore for each virtual machine object when you enable SMP-FT.
Figure 7.31 vSphere SMP-FT uses xvMotion to create the virtual machine runtime and files as it is powered on for the first time.
Figure 7.32 The darker VM icon indicates that vSphere SMP-FT is enabled for this VM.
Figure 7.33 The vSphere Web Client shows vSphere SMP-FT status information in the Fault Tolerance area on the Summary tab of a VM.
Figure 7.34 Running backup agents inside the guest OS can provide application- and OS-level integration, but not without some drawbacks.
Figure 7.35 vSphere Replication can work between datacenters, as long as there is a network joining them.
Figure 7.36 The network configuration for the vSphere Replication appliance happens before it is deployed.
Figure 7.37 New menus are often added in the vSphere Web Client when virtual appliances that add functionality are deployed.
Figure 7.38 Always configure the recovery settings within vSphere Replication to match (or exceed) your application’s RPO requirements.
Chapter 8
Figure 8.1 The vicfg-user command prompts for a password to execute the command and then prompts for a password for the new user you are creating.
Figure 8.2 For a user, you can change the UID, username, or password, but you can’t change the Login field.
Figure 8.3 The Security Profile area of the Configuration tab in the traditional vSphere Client shows the current ESXi firewall configuration.
Figure 8.4 Traffic to the selected network service on this ESXi host will be limited to addresses from the specified subnet.
Figure 8.5 Adding the correct XML to the services.xml file allows you to customize the ESXi host firewall ports.
Figure 8.6 vCenter Server and ESXi share a common security model for assigning access control.
Figure 8.7 Custom roles strengthen management capabilities and add flexibility to permission delegations.