Figure 5.53 A distributed port group is selected as a network connection for VMs, just like port groups on a vSphere Standard vSwitch.
Figure 5.54 The vSphere Web Client provides a summary of the distributed port group’s configuration.
Figure 5.55 The Topology view for a distributed switch provides easy access to view and edit distributed port groups.
Figure 5.56 You can apply both ingress (inbound) and egress (outbound) traffic-shaping policies to a distributed port group on a distributed switch.
Figure 5.57 The Teaming And Failover item in the distributed port group Edit Settings dialog box provides options for modifying how a distributed port group uses uplinks.
Figure 5.58 The Block policy is set to either Yes or No. Setting the Block policy to Yes disables all the ports in that distributed port group.
Figure 5.59 The Manage Virtual Network Adapters screen of the wizard allows you to add new adapters as well as migrate existing adapters.
Figure 5.60 Migrating a virtual adapter involves assigning it to an existing distributed port group.
Figure 5.61 To manage uplinks on a distributed switch, make sure only the Manage Physical Adapters option is selected.
Figure 5.62 The Migrate Virtual Machine Networking wizard automates the process of migrating VMs between a source and destination network.
Figure 5.63 You cannot migrate VMs matching your source network selection if the destination network is listed as inaccessible.
Figure 5.64 You’ll need the IP address and port number for the NetFlow collector in order to send flow information from a distributed switch.
Figure 5.65 NetFlow is disabled by default. You enable NetFlow on a per–distributed port group basis.
Figure 5.66 LLDP support enables distributed switches to exchange discovery information with other LLDP-enabled devices over the network.
Figure 5.67 The vSphere Distributed Switch supports both basic multicast filtering and IGMP/MLD snooping.
Figure 5.68 Private VLAN entries consist of a primary VLAN and one or more secondary VLAN entries.
Figure 5.69 When a distributed port group is created with PVLANs, the distributed port group is associated with both the primary VLAN ID and a secondary VLAN ID.
Figure 5.70 Basic LACP support in a version 5.1.0 vSphere Distributed Switch is enabled in the uplink group but requires other settings as well.
Figure 5.71 vSphere 5.5 and vSphere 6.0’s enhanced LACP support eliminates many of the limitations of the support found in vSphere 5.1.
Figure 5.72 With a version 5.5.0 or 6.0.0 distributed switch, the LACP properties are configured on a per-LAG basis instead of for the entire distributed switch.
Figure 5.73 Once a LAG has been created, physical adapters can be added to it.
Figure 5.74 LAGs appear as physical uplinks to the distributed port groups.
Figure 5.75 The default security profile for a vSwitch prevents Promiscuous mode but allows MAC address changes and forged transmits.
Figure 5.76 The default security profile for a distributed port group on a distributed switch also denies MAC address changes and forged transmits.
Figure 5.77 Promiscuous mode, though it reduces security, is required when using an intrusion-detection system.
Figure 5.78 A VM’s initial MAC address is automatically generated and listed in the configuration file for the VM and displayed within the vSphere Web Client.
Figure 5.79 A VM’s source MAC address is the effective MAC address, which by default matches the initial MAC address configured in the VMX file. The guest OS, however, may change the effective MAC address.
Figure 5.80 The MAC Address Changes and Forged Transmits security options deal with incoming and outgoing traffic, respectively.

Chapter 6

Figure 6.1 When ESXi hosts are connected to that same shared storage, they share its capabilities.
Figure 6.2 In a RAID 0 configuration, the data is striped across all the disks in the RAID set, providing very good performance but very poor availability.
Figure 6.3 This RAID 10 2+2 configuration provides good performance and good availability, but at the cost of 50 percent of the usable capacity.
Figure 6.4 A RAID 5 4+1 configuration offers a balance between performance and efficiency.
Figure 6.5 A RAID 6 4+2 configuration offers protection against double drive failures.
Figure 6.6 VSAN abstracts the ESXi host’s local disks and presents them to the entire VSAN cluster to consume.
Figure 6.7 Both Fibre Channel and iSCSI SANs present LUNs from a target array (in this case, a Synology DS412+) to a series of initiators (in this case, the VMware iSCSI Software Adapter).
Figure 6.8 The most common Fibre Channel configuration: a switched Fibre Channel (FC-SW) SAN. This enables the Fibre Channel LUN to be easily presented to all the hosts while creating a redundant network design.
Figure 6.9 The Edit Multipathing Policies dialog box shows the storage runtime (shorthand) name.
Figure 6.10 There are many ways to configure zoning. From left to right: multi-initiator/multi-target zoning, single-initiator/multi-target zoning, and single-initiator/single-target zoning.
Figure 6.11 FCoE encapsulates Fibre Channel frames into Ethernet frames for transmission over a lossless Ethernet transport.
Figure 6.12 Using iSCSI, SCSI control and data are encapsulated in both TCP/IP and Ethernet frames.
Figure 6.13 Notice how the topology of an iSCSI SAN is the same as a switched Fibre Channel SAN.
Figure 6.14 The iSCSI IETF standard has several different elements.
Figure 6.15 Some parts of the stack are handled by the adapter card versus the ESXi host CPU in various implementations.
Figure 6.16 The topology of an NFS configuration is similar to iSCSI from a connectivity standpoint but very different from a configuration standpoint.
Figure 6.17 VMFS stores metadata in a hidden area of the first extent.
Figure 6.18 vSphere’s Pluggable Storage Architecture is highly modular and extensible.
Figure 6.19 Only the SATPs for the arrays to which an ESXi host is connected are loaded.
Figure 6.20 vSphere ships with three default PSPs.
Figure 6.21 The SATP for this datastore is VMW_SATP_ALUA_CX, which is the default SATP for EMC VNX arrays.
Figure 6.22 It is possible to adjust the advanced properties for advanced use cases, increasing the number of consecutive requests allowed to match adjusted queues.
Figure 6.23 If all hardware offload features are supported, the Hardware Acceleration status is listed as Supported.
Figure 6.24 The VAAI support detail is more granular when using ESXCLI compared with the Web Client.
Figure 6.25 VAAI works hand in hand with claim rules that are used by the PSA for assigning an SATP and PSP for detected storage devices.
Figure 6.26 The Storage Providers area is where you go to enable communication between the VASA provider and vCenter Server.
Figure 6.27 The New Tag dialog box can be expanded to also create a tag category.
Figure 6.28 The VM Storage Policies area in the vSphere Web Client is one place to create user-defined storage capabilities. You can also create them from the Datastores And Datastore Clusters view.
Figure 6.29 VM storage policies can match user-defined tags or vendor-specific capabilities.
Figure 6.30 The layout of Virtual Volumes differs greatly from traditional LUNs.
Figure 6.31 For proper iSCSI multipathing and scalability, only one uplink can be active for each iSCSI VMkernel adapter. All others must be set to unused.
Figure 6.32 This storage adapter is where you will perform all the configuration for the software iSCSI initiator.
Figure 6.33 Only compliant port groups will be listed as available to bind with the VMkernel adapter.
Figure 6.34 These settings allow for robust multipathing and greater bandwidth for iSCSI storage configurations.
Figure 6.35 You’ll choose from a list of available LUNs when creating a new VMFS datastore.
Figure 6.36 The Partition Layout screen provides information on the partitioning action that will be taken to create a VMFS datastore on the selected LUN.
Figure 6.37 From the Datastores subsection of the Related Objects tab, you can increase the size of the datastore.
Figure 6.38 If the Expandable column reports Yes, the VMFS volume can be expanded into the available free space.
Figure 6.39 This 20 GB datastore actually comprises two 10 GB extents.
Figure 6.40 The columns in the Datastores list can be rearranged and reordered, and they include a column for VMFS version.
Figure 6.41 Among the other details listed for a datastore, the VMFS version is included.
Figure 6.42 I recommend that you run the latest version of VMFS, provided all your connected hosts can support it.