offers the option to detach it from other objects at the same time.
Figure 4.21 Different types of scans are initiated depending on the check boxes selected at the start of the scan.
Figure 4.22 When multiple baselines are attached to an object, compliance is reflected on a per-baseline basis.
Figure 4.23 VUM can show partial compliance when viewing objects that contain other objects.
Figure 4.24 The vSphere Desktop Client reflects when the process of staging patches is complete.
Figure 4.25 The Remediate dialog box allows you to select the baselines or baseline groups against which you would like to remediate an ESX/ESXi host.
Figure 4.26 When remediating a host, you need to specify a name for the remediation task and a schedule for the task.
Figure 4.27 Host remediation options available if the host has to enter maintenance mode
Figure 4.28 Cluster options during host remediation
Figure 4.29 VUM supports different schedules for remediating powered-on VMs, powered-off VMs, and suspended VMs.
Figure 4.30 VUM integrates with vCenter Server’s snapshot functionality to allow remediation operations to be rolled back in the event of a problem.
Figure 4.31 Select the ESXi image to use for the host upgrade.
Figure 4.32 ESXi image import
Figure 4.33 All the packages contained in the imported ESXi image are shown.
Figure 4.34 Select the correct upgrade baseline in the right pane if multiple versions are listed.
Figure 4.35 Upgrades can ignore third-party software on legacy hosts.
Figure 4.36 VUM PowerCLI cmdlets available
Figure 4.37 Dump Collector services not running by default
Figure 4.38 ESXi Dump Collector Manage tab
Figure 4.39 Configuring a host to redirect dumps to a Dump Collector
Figure 4.40 Configuring a host to a Dump Collector via its host profile
Figure 4.41 The Network Syslog Collector with hosts registered in vCenter
Figure 4.42 Setting host syslog settings in the vSphere Web Client
Figure 4.43 Setting host syslog settings via the host’s command line
Figure 4.44 Opening up the firewall ports to communicate with the Syslog Collector
Chapter 5
Figure 5.1 Successful virtual networking is a blend of virtual and physical network adapters and switches.
Figure 5.2 Virtual switches alone can’t provide connectivity; they need ports or port groups and uplinks to connect to provide connectivity external to the host.
Figure 5.3 Virtual switches can contain two connection types: VMkernel port and VM port group.
Figure 5.4 You can create virtual switches with both connection types on the same switch.
Figure 5.5 VMs communicating through an internal-only vSwitch do not pass any traffic through a physical adapter.
Figure 5.6 A vSwitch with a single network adapter allows VMs to communicate with physical servers and other VMs on the network.
Figure 5.7 A vSwitch using NIC teaming has multiple available adapters for data transfer. NIC teaming offers redundancy and load distribution.
Figure 5.8 Virtual switches using NIC teaming are identified by the multiple physical network adapters assigned to the vSwitch.
Figure 5.9 The vSphere Web Client offers a way to enable management networking when configuring networking.
Figure 5.10 To configure ESXi’s Management Network, use the Configure Management Network option in the System Customization menu.
Figure 5.11 From the Configure Management Network menu, users can modify assigned network adapters, change the VLAN ID, alter the IP, and modify DNS and DNS search configuration.
Figure 5.12 The Restart Management Network option restarts ESXi’s management networking and applies any changes that were made.
Figure 5.13 Use the Network Restore Options screen to manage network connectivity to an ESXi host.
Figure 5.14 A VMkernel port is associated with an interface and assigned an IP address for accessing iSCSI or NFS storage devices or for other management services.
Figure 5.15 It is recommended to add only one type of management traffic to a VMkernel interface.
Figure 5.16 A comparison of the supported VMkernel traffic types in vSphere 5.5 (left) and vSphere 6.0 (right). With the release of vSphere 6.0, VMkernel ports can now also carry Provisioning traffic, vSphere Replication traffic, and vSphere Replication NFC traffic.
Figure 5.17 Using the CLI helps drive home the fact that the port group and the VMkernel port are separate objects.
Figure 5.18 The Analyze Impact section shows administrators dependencies on VMkernel ports.
Figure 5.19 TCP/IP stack settings are located with other host networking configuration options.
Figure 5.20 Each TCP/IP stack can have its own DNS configuration, routing information, and other advanced settings.
Figure 5.21 VMkernel ports can be assigned to a TCP/IP stack only at the time of creation.
Figure 5.22 A vSwitch with a VM port group uses an associated physical network adapter to establish a switch-to-switch connection with a physical switch.
Figure 5.23 Virtual LANs provide secure traffic segmentation without the cost of additional hardware.
Figure 5.24 Supporting multiple networks without VLANs can increase the number of vSwitches, uplinks, and cabling that is required.
Figure 5.25 VLANs can reduce the number of vSwitches, uplinks, and cabling required.
Figure 5.26 The physical switch ports must be configured as trunk ports in order to pass the VLAN information to the ESXi hosts for the port groups to use.
Figure 5.27 You must specify the correct VLAN ID in order for a port group to receive traffic intended for a particular VLAN.
Figure 5.28 Virtual switches with multiple uplinks offer redundancy and load balancing.
Figure 5.29 The vSphere Web Client shows when multiple physical network adapters are associated with a vSwitch using NIC teaming.
Figure 5.30 All the physical network adapters in a NIC team must belong to the same Layer 2 broadcast domain.
Figure 5.31 Create a NIC team by adding network adapters that belong to the same Layer 2 broadcast domain as the original adapter.
Figure 5.32 The vSwitch port-based load-balancing policy assigns each virtual switch port to a specific uplink. Failover to another uplink occurs when one of the physical network adapters experiences failure.
Figure 5.33 The source MAC-based load-balancing policy, as the name suggests, ties a virtual network adapter to a physical network adapter based on the MAC address.
Figure 5.34 The IP hash-based policy is a more scalable load-balancing policy that allows VMs to use more than one physical network adapter when communicating with multiple destination hosts.
Figure 5.35 The physical switches must be configured to support the IP hash-based load-balancing policy.
Figure 5.36 Select the load-balancing policy for a vSwitch in the Teaming And Failover section.
Figure 5.37 The beacon-probing failover-detection policy sends beacons out across the physical network adapters of a NIC team to identify upstream network failures or switch misconfigurations.
Figure 5.38 The failover order helps determine how adapters in a NIC team are used when a failover occurs.
Figure 5.39 Standby adapters automatically activate when an active adapter fails.
Figure 5.40 Failover order for a NIC team is determined by the order of network adapters as listed in the Active Adapters, Standby Adapters, and Unused Adapters lists.
Figure 5.41 Traffic shaping reduces the outbound (or egress) bandwidth available to a port group.
Figure 5.42 Without port groups, VLANs, or VGT, each IP subnet will require a separate vSwitch with the appropriate connection type.
Figure 5.43 The use of the physically separate IP storage network limits the reduction in the number of vSwitches and uplinks.
Figure 5.44 With the use of port groups and VLANs in the vSwitches, even fewer vSwitches and uplinks are required.
Figure 5.45 If you want to support all the features included in vSphere 6.0, you must use a version 6.0.0 distributed switch.
Figure 5.46 The number of uplinks controls how many physical adapters from each host can serve as uplinks for the distributed switch.
Figure 5.47 When you’re working with distributed switches, the vSphere Web Client offers a single wizard to add hosts, remove hosts, or manage host networking.
Figure 5.48 All adapter-related changes to distributed switches are consolidated into a single wizard.
Figure 5.49 The esxcli command shows full details on the configuration of a distributed switch.
Figure 5.50 The vSphere Web Client won’t allow a host to be removed from a distributed switch if a VM is still attached.
Figure 5.51 The vSphere Distributed Switch Health Check helps identify potential problems in configuration.
Figure 5.52 The New Distributed Port Group wizard gives you extensive access to customize the new distributed port group’s settings