Big Little Book on VMware vSphere


Contents

• Introduction

• What Is VMware?

• Online Resources

• Deployment Concepts & Definitions

• First ESXi Host Installation

• VCSA Installation and Configuration

• Additional ESXi Host Installation Using Auto Deploy

• Adding Software and Images

• Managing DRS Resources

• Conclusion

Introduction

Hi there and welcome to the Big Little Book series. This book was written with both the new IT recruit and the seasoned IT professional in mind, with a view to aiding fast recall of information without having to lift a four-hundred-page tome!

The Big Little Books series focuses on condensing technical subjects into memorable, bite-sized books that you can take with you on the go. Each book can be an introduction to a specialist topic or an aide-mémoire before that crucial job interview, for example.

As well as condensing detailed technical facts, each book comes with useful hints to help the key concepts and ideas stand out. This is handy for those work or interview situations where there is just not enough time to filter out the pertinent information from all that text. We aim to continue bringing you a variety of subjects and to enhance your ability to learn and recall them as and when necessary.

As a convention, commands in this book are typically highlighted as shown in the following example.

command would go here

It's like telling a story: most of us can remember stories to a considerable extent, or recall song lyrics, and learning a new subject can work the same way. It just depends on how the information is presented. So, without further ado, get browsing and downloading any of our available titles. If there's a particular subject we don't cover yet and that you would like a Big Little Book for, just email us at info@gridlockaz.com.

Take care and happy reading.

U V Omos

What Is VMware?

VMware is different things to different people. To some, it is a virtualisation platform capable of automatically deploying, managing, migrating and moving servers as virtual machines across their full lifecycles. To others, it is a means of implementing and managing virtual network infrastructure. To yet another audience, it is a way of implementing hyperconvergence, giving them the ability to abstract their compute, networking and storage infrastructure into an easily usable and homogeneous platform, even though the underlying compute, networking and storage resources may be disparate types provided by different vendors.

To summarise, VMware can be defined as a single software platform for implementing Infrastructure as a Service (IaaS) and/or Platform as a Service (PaaS) on a customer's own premises. In essence, it gives customers the ability to build their own Software-Defined Data Centre (SDDC).

Some would argue that this is too simplistic a definition, but for the sake of brevity let's use it as a starting point for understanding VMware as a piece of software; we can expand on its constituent parts throughout this book. This book focuses only on the large-scale deployment of VMware and ignores components aimed at desktop implementations. On that basis, let's look at the various types of VMware components that are used for large-scale deployments.

This book focuses on VMWare 6.7 and newer versions only.

All Linux commands shown were tested successfully on CentOS7. Note that some of the commands may need changing if you are using a different operating system and version.

Design Considerations

VMs per ESXi host: this depends on the hardware specification of the physical devices, but a typical deployment would be as follows.

• 1024 VMs per host, maximum 4096 vCPUs (e.g. 4 vCPUs per VM on average)

• 2000 ESXi hosts per vCenter, with 25000 running machines and 35000 in the inventory (e.g. 10000 shut down out of the total complement)

VMWare vSphere Architecture Components

The VMWare vSphere architecture is specifically aimed at providing an interface for the management of ESXi hosts and their resources. The main components of this architecture are shown below.

• ESXi Host

• vSphere or VCSA (vCenter Server Appliance)

Each of these is discussed in further detail below.

ESXi Host

This is also known as the hypervisor. In virtualisation, a hypervisor is a component tasked with abstracting the hardware components from the virtual machines that will use it. This is what allows different virtual machines or VMs to use and share the same hardware on a physical piece of kit.

vSphere or VCSA

This is the software platform used for the centralised management and orchestration of the various ESXi Hosts, their VMs and other configurations such as cluster management, vMotion, DRS, storage, virtual networking and compute. It provides a single pane of glass from which all of these features and more can be managed. It is installed on a suitable ESXi Host and once installed, other ESXi Hosts can be added to it for management purposes.

Below is a summarised version of a typical VMware deployment. This is a very abstracted diagram to show the main deployment elements. Note that typical vSphere deployments can consist of dozens if not hundreds of ESXi hosts managed by a single or multiple clusters of vSphere/VCSA servers. The switch shown could be physical or virtual, as could the router, DNS server, DHCP server, image server and PC shown in the diagram.

Architectural Definitions

Several concepts relevant to the overall design and architecture of VCSA are shown below.

• Datacenter

• Cluster

• Datastore

• Datastore Cluster

• Folder

• Resource Pool

These are described further below.

Datacenter

This is a virtual representation of the entirety of resources available on the infrastructure. It includes all the currently deployed clusters, their datastores, VMs and networking and other resources. Multiple datacenters can be created to group environments based on requirements. A datacenter aggregates different types of objects in the VMware environment. For instance, it can consist of ESXi hosts, switches, networks, virtual machines, templates and datastores. It is a representation of all these objects combined to carry out a particular technical and/or business function. The datacenter defines namespaces for networks and datastores and the names for its contained objects must be unique within it.

So, you cannot have two datastores with the same name in the same datacenter, but you can have a datastore with the same name in two different datacenters.

Additionally, you can have two VMs with the same name in a datacenter, but these must be in different folders in this datacenter.

Same-named objects in different datacenters are not necessarily the same object, and care should be taken when reviewing objects as such. For instance, it could be that the second object was created totally independently in some instances, whilst in others it could have been migrated using vMotion or manual steps from another datacenter into the current one.

It could be prudent to avoid this by ensuring naming uniqueness across the board, but that is a design decision based on your requirements.

Cluster

A cluster is a logical grouping of VMware hosts. Its main purpose is to serve as another unit of organisation. A cluster manages the resources of all its hosts. Features such as High Availability (HA) and Distributed Resource Scheduler (DRS) can be managed for hosts at the cluster level. Hosts must be able to carry out DNS resolution of all other hosts in the same cluster. Each cluster requires a minimum of two ESXi hosts for standard deployments, but needs at least three hosts if redundancy and Fault Tolerance protection are implemented.

A cluster can be created in VCSA by right-clicking on the relevant datacenter, selecting New Cluster, typing in the cluster’s name, selecting DRS and HA features, selecting Enhanced vMotion Compatibility (EVC) settings as required, vSAN cluster features as required and then clicking on OK.

Think of a cluster as a unit of overarching feature management for a group of similar VMs.

Datastore

A datastore is an aggregation of the storage resources that can be used by a cluster or datacenter. Datastores are storage containers for files, templates, images and similar text or binary objects. They can be formatted with VMware's clustered file system, VMFS (Virtual Machine File System). A datastore obfuscates the specifics of each storage device to provide a uniform model for storing and viewing virtual machine files and storage resources.

Datastore Cluster

A datastore cluster is a collection of datastores that have shared resources as well as a shared management interface. A datastore cluster groups similar datastores into a pool of storage resources. This utilises the vSphere Storage DRS feature when it is enabled by automating the initial VM placement as well as balancing storage resource allocation across the cluster.

Folder

This is a means of organising a specific object type. Objects of the same type are placed in the same folder, which makes for more straightforward management of items such as permissions. Folders can also contain other folders.

A folder can be created by selecting a datacenter or another folder as the parent object, right-clicking on it and then clicking New Folder in the submenu that appears. Datacenters can contain the Host and Cluster, Network, Storage or VM and Template folder types. Select the required folder type, then type in a name for it and click OK.

You can now move objects into the folder by right-clicking on the object and selecting the Move To option. Then select the target folder and click OK. Alternatively, you can drag the object to the target folder.

Resource Pool

A resource pool is a combination of CPU, memory and storage resources that can be assigned to, reserved by and used by a group of VMs within a cluster. Similar to a datastore, it is a logical abstraction of these resources for enhanced resource management. Resource pools can be organised into hierarchies for further assignment. Resource pools/VMs at the higher level are called parents and the resource pools/VMs within them are called children. Resource pools/VMs at the same level of these hierarchies are called siblings. A resource pool can contain child resource pools, VMs or both. Each standalone host and DRS cluster has a hidden root resource pool that groups its resources.

As both resource pools and clusters deal with managing groups of VMs, resource pools are assigned to clusters.

The host/cluster is not shown, as the host/cluster resources and the root resource pool are the same.

Distributed Resource Scheduler (DRS)

DRS is vSphere’s resource management and load balancing function. It carries out VM placement dependent on resources available. It also enforces user-defined resource allocation policies at the cluster level.

The main aim of DRS is to ensure all VMs and their applications are getting the right amount of compute resources to run efficiently.

Increased load on certain VMs can cause resource imbalances in the cluster, which DRS will aim to rectify as it monitors the cluster every 5 minutes. On detecting an imbalance, DRS checks if a VM would be better served on a different ESXi host and if so, will migrate it across using vMotion. On initial deployment, a VM goes through a DRS decision known as VM placement or initial placement based on the anticipated resource distribution change due to the VM’s inclusion (so far as there are no constraint violations caused by the VM’s inclusion). DRS will attempt to rectify load balancing issues in accordance with its schedule and algorithm.

Any imbalanced loads lead to a VM migration as mentioned previously. DRS uses the demand of every VM to determine this.

CPU demand is based on current VM CPU usage. Memory demand is based on the following formula.

VM memory demand = function(active memory used, swapped/shared) + 25% of idle consumed memory

In summary, DRS looks mostly at a VM’s active memory usage and a small amount of its idle memory usage to budget for anticipated memory utilisation.
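As a rough worked example with illustrative numbers (these are not figures from VMware documentation): a VM with 2 GB of active memory and 4 GB of idle consumed memory would be budgeted at approximately 2 GB + (25% x 4 GB) = 3 GB of memory demand when DRS evaluates placement and load balancing.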

Automation Levels

DRS has the following automation levels based on who or what configures the 1) initial placement and 2) load balancing settings.

• Manual: both settings done by user

• Partially Automated: DRS applies initial placement, user applies load balancing

• Fully Automated: both settings done by DRS

This can be configured by going to the cluster’s Edit settings and selecting the vSphere DRS option.

For the options where vSphere does not actually carry out an automation action (i.e. the manual and partially automated settings), it will make recommendations instead.

Aggression Levels

DRS has 5 different aggression levels, or migration thresholds, with level 1 being the most conservative (only mandatory recommendations are applied) and level 5 the most aggressive (migrations with even a small expected benefit are applied automatically).

VM Overrides

VMs can be set to override DRS automation and migration threshold changes that are typically applied at the cluster level. For instance, a VM could be designated as one that should not be moved if its movement would or could adversely affect network or service operations based on its function.

This can be done by going to Cluster > Manage > Settings > VM Overrides and setting the automation level or migration threshold for the VM to a non-cluster-level setting, or disabling them entirely.

VM/Host Rules

There are instances where VMs need to be kept together or on a particular host. In these instances, VM/Host rules can be set in accordance with the following requirements.

• Keep Together VM-VM: always run these VMs on the same host

• Keep Separate VM-VM: always run these VMs on different hosts

• VM-Host: for groups of one or more VMs on one or more hosts (configure this at Cluster > Manage > Settings > VM/Host Groups)

VM/Host rules can be configured with the following settings.

• Should: preferred but DRS can drop if cluster imbalance is very high

• Must: mandatory at all times

VM-VM rules are always set to must. HA can ignore these rules during a failover, but DRS will then fix this during its first invocation.

Online Resources

Before getting into further deployment and operational aspects of VMware, let’s review some tools and resources that can be found online. There are several online resources that can be used to learn how to use various VMWare tools. Some of these are listed below.

VMWare Hands-on Labs (HoL)

This can be found at the following URL.

https://labs.hol.vmware.com

This site hosts numerous labs and is continuously updated with new ones that you can try. Simply log on using an account and run through the lab steps on screen.

You can click on the Lab Details accordion to view the lab’s details as shown below.

vSphere Management Assistant (vMA)

This is a downloadable tool that can be used to run VMware command line tools such as esxcli, vicfg-* and the older -cmd tools. It is now a deprecated product and is mentioned here for reference only.

Ruby vSphere Console

This is a Ruby console for vSphere, accessible from the VCSA CLI by typing in the following command.

rvc

It is useful for troubleshooting vSAN issues and can be copied to your local device or used in a Docker container. It can manage both ESXi and VCSA, and you can work with various VCSA objects using it. You can navigate the inventory, which follows the VCSA organisation model exposed by the Managed Object Browser. It also supports Unix-like commands for navigating the file system, e.g. cd and ls.

Ruby vSphere needs Ruby to run. At the time of writing, Ruby could be downloaded from the following link.

https://rubyinstaller.org/downloads/

Note that this installer only runs on Windows. Select the most relevant download link for your system from the page, then download and install it on a Windows platform by running the downloaded .exe file. New users are advised by the site to download the DevKit version as it contains the most Ruby features (known as gems). Download the most appropriate version for your OS and device, e.g. if your device is 64-bit capable, download that version instead of the 32-bit version. Ruby installation itself is outside of this book's scope, but there are plenty of resources online that cover it. Check them out and complete the Ruby installation, as you will need it to carry out the next step, which installs RVC.

On successful installation of Ruby, go to the Windows command line (e.g. DOS prompt) of the device it is installed on. Then, from this command line type in the following command to install RVC using the Ruby gem utility (which is similar to yum or apt as a package installation tool).

gem install rvc

If you followed a default installation and set the RVC path to its default, you should be able to launch it from the following folder.

C:\Program Files\VMware\vCenter Server\rvc

To do this, change the current path of your command line to it by running the following command.

cd “C:\Program Files\VMware\vCenter Server\rvc”

RVC uses batch files to carry out a number of its actions. It is worth running the dir command in this folder, filtering on batch files, to get an idea of the various standard actions RVC provides. This can be done as follows.

dir *.bat

There is a batch file called rvc.bat in this folder. Run it from the CLI as follows to set up a connection to the relevant VCSA server.

rvc

You will be prompted for the VCSA username and password. Type them in to gain access to VCSA and run commands from the command line accordingly.
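As a minimal sketch, you can also pass the user and VCSA address directly when launching RVC; the SSO user and hostname below are placeholders rather than values from this book.

rvc administrator@vsphere.local@vcsa01.example.com

RVC will then prompt for that user's password before dropping you into its interactive shell.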

VMA

This tool can be found locally on the VCSA by running the following command.

vmware-cmd

The following are examples of commands that can be launched using vmware-cmd on the VCSA command line.

The following variables are used in these commands.

• vmid: the ID of VM

• vmx_filepath: path of VMX file needed for registering VM

• snapshot_name: target name of snapshot being created

• vcenter: vCenter Server hostname

• esxhost: ESX/ESXi hostname

• datastore: display name of datastore

• path_to_vmx_on_datastore: path to VM VMX file relative to datastore on which it resides

• vm_name: display name of VM

• guest_admin_user: user account with administrative access within a VM guest OS

• guest_admin_password: password for the account noted by guest_admin_user

Register a VM

vmware-cmd --server esxhost -s register vmx_filepath

vmware-cmd --server vcenter --vihost esxhost -s register vmx_filepath

Unregister a VM

vmware-cmd --server esxhost -s unregister vmx_filepath

vmware-cmd --server vcenter --vihost esxhost -s unregister vmx_filepath

Delete a VM

vmware-cmd --server esxhost -s unregister vmx_filepath

vmware-cmd --server vcenter --vihost esxhost -s unregister vmx_filepath

vifs --server esxhost --rm "[datastore] path_to_vmx_on_datastore"

Get list of host VMs

vmware-cmd --server esxhost --username root -l

vmware-cmd --server vcenter --vihost esxhost -l

Check for VM snapshot

vmware-cmd --server esxhost vmx_filepath hassnapshot

vmware-cmd --server vcenter --vihost esxhost vmx_filepath hassnapshot

Add a VM snapshot

vmware-cmd --server esxhost vmx_filepath createsnapshot snapshot_name

vmware-cmd --server vcenter --vihost esxhost vmx_filepath createsnapshot snapshot_name

Remove a VM snapshot

vmware-cmd --server esxhost vmx_filepath removesnapshots

vmware-cmd --server vcenter --vihost esxhost vmx_filepath removesnapshots

Get a VM’s power state

vmware-cmd --server esxhost vmx_filepath getstate

vmware-cmd --server vcenter --vihost esxhost vmx_filepath getstate

Get a VM’s uptime

vmware-cmd --server esxhost vmx_filepath getuptime

vmware-cmd --server vcenter --vihost esxhost vmx_filepath getuptime

Power on a VM

vmware-cmd --server esxhost vmx_filepath start

vmware-cmd --server vcenter --vihost esxhost vmx_filepath start

Shut down a VM

vmware-cmd --server esxhost vmx_filepath stop soft

vmware-cmd --server vcenter --vihost esxhost vmx_filepath stop soft

Power off a VM

vmware-cmd --server esxhost vmx_filepath stop hard

vmware-cmd --server vcenter --vihost esxhost vmx_filepath stop hard

Reboot a VM

vmware-cmd --server esxhost vmx_filepath reset soft

vmware-cmd --server vcenter --vihost esxhost vmx_filepath reset soft

Reset a VM

vmware-cmd --server esxhost vmx_filepath reset hard

vmware-cmd --server vcenter --vihost esxhost vmx_filepath reset hard

Display a VM’s IP address

vmware-cmd --server esxhost vmx_filepath getguestinfo ip

vmware-cmd --server vcenter --vihost esxhost vmx_filepath getguestinfo ip

VIM-CMD

This tool can be found locally on ESXi hosts by running the following command.

vim-cmd

This will show the following output.

Commands available under /: hbrsvc/ internalsvc/ solo/ vmsvc/ hostsvc/ proxysvc/ vimsvc/ help

Each of these command options carries out various tasks. We will focus on vmsvc/ in this book, but feel free to experiment with the other available commands in a safe lab environment.

Run vim-cmd vmsvc/ on its own to view its various commands as follows.

vim-cmd vmsvc/

The following are examples of some commands that can be launched using vim-cmd vmsvc/ on the ESXi host command line.

The following variables are used in these commands.

• vm_name: name of the VM

• vmid: the ID of a VM

• vmx_filepath: the path of the VMX file needed for registering a VM

• snapshot_name: the target name of the snapshot being created

Get the VM ID

vmid=$(vim-cmd vmsvc/getallvms | grep vm_name | awk '{print $1}')

The $vmid variable in the following commands assumes the ID has been obtained using the command above; the awk '{print $1}' filter extracts just the numeric ID from the matching inventory line.

Register a VM

vim-cmd solo/registervm vmx_filepath

Unregister a VM

vim-cmd vmsvc/unregister $vmid

Delete a VM

vim-cmd vmsvc/destroy $vmid

Get list of host VMs

esxcli vm process list

vim-cmd vmsvc/getallvms

Check for VM snapshot

vim-cmd vmsvc/get.snapshot $vmid

Add a VM snapshot

vim-cmd vmsvc/snapshot.create $vmid snapshot_name

Remove a VM snapshot

vim-cmd vmsvc/snapshot.remove $vmid

Get a VM’s power state

vim-cmd vmsvc/power.getstate $vmid

Get a VM’s uptime

vim-cmd vmsvc/get.summary $vmid |grep uptimeSeconds

Power on a VM

vim-cmd vmsvc/power.on $vmid

Shut down a VM

vim-cmd vmsvc/power.shutdown $vmid

Power off a VM

esxcli vm process kill --type hard --world-id world_id

vim-cmd vmsvc/power.off $vmid

Reboot a VM

vim-cmd vmsvc/power.reboot $vmid

Reset a VM

vim-cmd vmsvc/power.reset $vmid

Upgrade a VM

vim-cmd vmsvc/tools.upgrade $vmid

Display a VM's IP address

vim-cmd vmsvc/get.guest $vmid | grep -m 1 "ipAddress = \""
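Bringing a few of these together, the following is a minimal sketch of a snippet you might run in the ESXi shell; the VM name testvm01 and snapshot name pre-patch are placeholders rather than examples from this book.

# Look up the VM's ID by name, check its power state, then snapshot it and verify
vmid=$(vim-cmd vmsvc/getallvms | grep testvm01 | awk '{print $1}')
vim-cmd vmsvc/power.getstate $vmid
vim-cmd vmsvc/snapshot.create $vmid pre-patch
vim-cmd vmsvc/get.snapshot $vmid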


VCSA-CLI

This tool can be found on the VCSA by running the following command. This is added for reference but is out of this book’s scope.

api

PowerCLI & PowerNSX

This is a set of VMware-developed CLI functions and tools that can be launched using Microsoft's PowerShell. There will be a little more on this later in the book, and a considerable amount in the Big Little Book on VMware Automation.

PowerOps

This can be used with PowerNSX. It is out of this book's scope but is noted here for information; do check it out online if so inclined.

Deployment Concepts & Definitions

There are a number of concepts and items requiring definition for you to fully understand the structure of the VMWare deployment lifecycle. These are summarised below.

• Image Profile

• Host Profile

• Script Bundle

• PowerCLI

• Auto Deploy Rule

• Software Depot

Let’s define each of these terms.

Image Profile

An image profile is a templated configuration telling vSphere the image specification that should be applied to the ESXi host. It defines the set of VIBs used by an ESXi installation, and a patch typically contains 2-4 image profiles. An image profile is usually hosted at a URL or stored in a local ZIP file. It can be used to upgrade a host using the following commands directly on the host.

esxcli software profile update

or

esxcli software profile install

These commands are discussed later in this book. Upgrades can also be done via the vSphere GUI.
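As a hedged illustration of the update form (the depot URL and profile name below are placeholders, not values supplied by this book), the command typically points at a depot with -d and names the target image profile with -p.

esxcli software profile update -d https://depot.example.com/index.xml -p ESXi-6.7.0-standard

The install variant takes the same -d and -p options but replaces the host's installed VIBs with the profile's contents rather than updating them in place.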

Host Profile

A host profile is the templated configuration telling vSphere of the host characteristics that should be applied to the ESXi host.

PowerCLI

As mentioned previously, PowerCLI is a set of VMWare developed PowerShell commands and tools that can be used to manage and deploy VMWare assets ad hoc and in scripts.

Auto Deploy Rule

An Auto Deploy rule is set up to tell the vSphere deployment process which attributes and settings a newly deployed host should have, such as its image and host settings, depending on criteria like the vendor and IP address of the host.

Software Depot

This is an online or local location where assets used to deploy VMWare are located. These locations contain image profiles which are then attached to Auto Deploy rules.

Script Bundle

A script bundle is a collection of scripts that can be used for further post-deployment host configuration. The scripts run after an ESXi host has been provisioned using Auto Deploy. They can be used to add further configuration, such as firewall rules or interface configurations, that might not be available with Host Profiles. As of vSphere 6.7 Update 1, you can add or remove a custom script by using the vSphere Client. A script bundle can include a number of scripts but must be delivered as a single compressed .tgz file for uploading to vCenter. After uploading, it can be added to an Auto Deploy rule, but as a pre-requisite the scripts must be able to run in the ESXi shell.

You can add a script bundle in vSphere by going to Home, then Auto Deploy (must be an Administrator). Then select the Script Bundles tab, Upload option and then select the script bundle .tgz file in the screen dialogue and click on Upload. The script bundle will now be available for use on the server.
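Before uploading, the scripts need to be packaged into the expected .tgz archive. A minimal sketch of doing this from a Linux shell is shown below; the script name post-deploy-firewall.sh is a placeholder rather than a script from this book.

# Package one or more ESXi shell scripts into a single script bundle
tar -czvf script-bundle.tgz post-deploy-firewall.sh

The resulting script-bundle.tgz file is what you select in the Upload dialogue described above.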

File Types

It is worth noting the following characteristics of the various file types that can be used in VMware deployments. These file types are as follows.

• OVF

• OVA

• VMDK

• ISO

These are described in summary below.

OVF

This is a file bundle containing the following files.

• Descriptor: file with config showing disk, OS and other items

• Manifest: SHA-1 digest of package files to check for corruption

• Signature: digest of the manifest signed with the author's private key, packaged with the X.509 certificate used to verify authorship

• Virtual disks: OVF doesn’t specify a disk image so can be Dynamic VHD, VMDK, etc

OVA

An OVA is a single tape archive (TAR) file containing all of the OVF package files, and it can actually contain multiple OVFs. An OVF package is uncompressed and gives direct access to all of the individual files, while an OVA is a single archive file. A few pointers on OVF versus OVA are as follows.

• OVA better for single file usages such as web downloads

• OVA files take longer to import and upload

• OVF packaging does not guarantee cross-hypervisor compatibility for its contained virtual machines

VMDK

A VMDK is a file containing the VM's disk data. It is presented to the guest OS as a drive and can be modified by the OS during usage; it can also contain material in addition to the OS itself. Note that on launching a VM in vSphere, its hard disk is saved as a VMDK and can be backed up from the ESXi server's datastore.

ISO

This is a read-only file type containing the contents of a DVD. It can be used to present its contents to the OS as though they were on a CD, and you can navigate the contents of an ISO directly. Below is a screen print showing some of a vSphere installation ISO's contents.

As well as booting from a DVD, an ISO can also be used to make a USB drive bootable. There are various tools that can be found online that do this, such as Rufus and UUByte ISO Editor, but their installation and usage are outside of this book's scope. Also, be very careful when downloading any such tools: ensure that they are the legitimate versions by using checksums for verification where possible, and that they are suitable for your particular operating system. If any of this seems convoluted, then the advice would be not to use this software.

First ESXi Host Installation

Install Locations

ESXi can be installed on any of the following drive types.

• Hard drive

• Flash drive

• Storage Area Network (SAN)

These are discussed further below.

Hard drive

This entails deployment to local, directly-connected or smart array controller-connected SAS, SATA, or solid-state drives (SSDs)

Flash drive

USB drive or SD card. This can only be used to host datastores if the storage is on a local disk or a SAN.

Storage Area Network (SAN)

This entails deployment to a SAN using connectivity such as iSCSI, Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE). Ensure that the customer image contains the correct iSCSI, FC, and FCoE drivers as well as network interface controller (NIC) drivers.

ESXI Installation Steps

The following are the steps for installing an ESXi host.

Note that this example uses the Service Pack for Proliant (SPP) and therefore assumes the server hardware is Hewlett Packard (HP) Proliant.

• Select ESXi image

• Select SPP (based on previously selected image)

• Update the physical device firmware

• Install ESXi

These steps are discussed in further detail below.

Select ESXi Image

Each ESXi host requires the use of an image for its hardware. This image could be as follows.

• VMWare-supplied base image

• Vendor’s own custom image

• Customer’s own custom image

These are discussed in further detail below.

VMWare-supplied base image

This is a version of the hypervisor coupled with an initial set of drivers relevant to the current build. Using the base image might mean you have to install some of the hardware vendor’s own custom drivers in addition to those currently in the image.

Vendor’s own custom image

Each of the bigger vendors will typically supply their own custom image, complete with all the software and drivers required to implement and manage the hardware throughout its lifecycle. For instance, HPE provides its own custom image and more on the assignment of this custom image can be found here at the time of writing.

http://h17007.www1.hpe.com/us/en/enterprise/servers/supportmatrix/vmware.aspx#.W19lSN JKjIU

HPE custom images themselves can be found at the following URL.

http://www.hpe.com/info/esxidownload

Customer’s own custom image

Each customer can develop their own images using the Image Builder tool from VMWare 6.7 (this book does not deal with topics related to earlier VMWare versions). Image Builder is a PowerShell extension that can be used to generate a customer-specific custom image. It assists with the following.

• the creation/editing of an image for use with VMWare Auto Deploy or

• offline depot.zip creation for VMWare Update Manager (VUM)

Image Builder can be used as follows.

• ESXi 7.0

• Forward depot.zip file

Note that this will not contain the additional metadata for ESXi 7.0.

You can start from the base image or the vendor’s custom image and add/remove VMWare Installation Bundles (VIBs) to suit.

A VMware Installation Bundle (VIB) is a software component or package relevant to a specific type of ESXi host build image. It is like a tar or zip archive in that it is a collection of files packaged into a single archive for distribution.

Select SPP (based on previously selected image)

Note that this section is only relevant if Hewlett Packard (HP) servers are being used. If not, consult the relevant vendor’s website for notes on VMware deployment.

SPP stands for Service Pack for Proliant. Proliants are a Hewlett Packard (HP) server model. The next installation step entails the mapping of an SPP download to the previously selected build image. You would typically do this by consulting the relevant HP Depot for the downloadable software files at the following link.

http://vibsdepot.hpe.com/

The SPP contains the following assets.

• Drivers

• Software

• Firmware

Ensure that you select and download the correct SPP for your base or custom image.

This can be done using typical command line tools. A few examples of downloading a file are shown below.
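For instance, from a Linux shell you could fetch a file with wget or curl; the URL below is a placeholder rather than a real depot path.

wget https://vibsdepot.example.com/spp/spp-bundle.zip

curl -O https://vibsdepot.example.com/spp/spp-bundle.zip

Either command saves the file into the current directory, from where it can be used in the following steps.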

You can use Smart Update Manager (SUM). Using SUM provides the following benefits.

• Inter-component firmware, software and driver dependency knowledge. Driver, software and firmware updates can be done on a single or multiple servers and network-based targets

• SUM gathers the firmware, software and driver information then suggests installation or update actions based on available SPP components

• You can choose which components to update e.g. select one, a few or all components.

• SUM deploys these components to ESXi hosts running versions that support the online-remote mode

• SUM runs on Linux and Windows and communicates with the server as a network-based (remote) target

Update the physical device firmware

After selecting the SPP, it might be necessary to update the physical device firmware to suit the VMware hypervisor or ESXi host requirements as a follow-on step. Consult your hardware manufacturer’s information on the make and model being used for further information on how to carry out this step.

Install ESXi Interactively

Now that the ESXi image and SPP have been selected and (if necessary) the physical device firmware updated, ESXi can be installed interactively. This installation method is recommended for smaller deployments of fewer than 5 ESXi hosts. The following list shows the minimum viable settings for an ESXi host installation. Scale these up depending on the number and function of the VMs the ESXi host will manage.

1. 2 CPU cores minimum (check the hardware manufacturer's information and ensure the CPU supports virtualisation)

2. 64-bit x86 processors released after September 2006

3. 4GB scratch partition

4. 40GB storage using SCSI disk or a local, non-network RAID LUN with unpartitioned space for the VMs

5. Serial ATA (SATA) disks must be connected using supported SAS controllers or supported on-board SATA controllers (SATA disks must not be used for scratch partitions as they are viewed as remote disks by default)

6. SATA CD-ROMs cannot be connected to VMs on ESXi 6.7 hosts; if SATA CD-ROMs are needed, connect them using IDE emulation mode

7. 4GB RAM minimum, though even this will struggle, so a minimum of 8GB is recommended for a lab; production environments will require considerably more

8. 1 or more Gigabit network adapters

9. NX/XD bit enabled for the CPU in the BIOS

10. Hardware virtualisation support (Intel VT-x or AMD RVI) must be enabled on x64 CPUs to support 64-bit VMs

11. Consult the VMware supported server list located at http://www.vmware.com/resources/compatibility at the time of writing to confirm that VMware supports your server hardware

Now that you have verified and prepared the relevant firmware, software and drivers, the ESXi software can be installed. This entails responding to a number of onscreen prompts to configure the ESXi host in accordance with your requirements.

Note that the installation will reformat and partition the target disk, then install the ESXi boot image. In other words, all of the current disk's vendor and OS partitions and data will be overwritten if present. So, make sure you have backed up any needed information from these disks before carrying out this installation.

Pre-requisites

Confirm the following settings on the target machine for the ESXi host installation

1. UTC clock setting in BIOS

2. Keyboard and mouse connected

3. Disconnect any present network storage (unless it contains a previous, needed ESXi installation) to reduce installation duration

4. Do not disconnect a VMFS datastore that has the Service Console of an existing ESXi installation

5. Ensure ESXi Embedded is not present on the host machine as it cannot co-exist on the same host as ESXi Installable (the version you are installing)

6. Set the target machine’s BIOS to boot from the media type you will use i.e. from USB or CD-ROM/DVD

Installation Media Setup

Now that you have confirmed all of these pre-requisites, carry out the following steps to set up the installation media.

1. Download the ESXi ISO from the VMware website

2. Create a bootable USB or CD-ROM/DVD from this ISO. For this installation, we used Rufus (https://rufus.ie/en) to create the bootable ISO (but you can use your preferred tool)

Now that you have the ESXi installation software ready in bootable ISO format on a USB or DVD, proceed to the ESXI host installation.

Installation

Make sure the target host is configured to boot from USB before starting. Consult your hardware manufacturer’s documentation on how to configure this requirement.

Now you have the installation media ready as well as the host set up, carry out the following steps to install ESXi on the target machine.

1. Insert the media to install from in the USB port or CD-ROM/DVD drive, whichever is relevant. Wait for the installation screen to appear.

2. When the Setup window appears, select the drive to install ESXi on and then press Enter

3. If prompted because ESXi already exists on the disk, carry out the required action e.g. whether to start anew or upgrade/migrate the existing installation (in this example though, we are starting anew)

4. When prompted to configure vSAN disk groups, configure as required (in this example we used a single SSD and were fine with all media being wiped clean)

5. Select the keyboard type to use when prompted

6. Type in the root password to use when prompted

7. Press Enter to start the installation

8. When the installation completes, remove the USB from its port or the CD-ROM/DVD from its drive

9. Press Enter to reboot the host

10. In BIOS, set the first boot device to be the drive on which ESXi was installed during this setup

This concludes the attended/interactive ESXi installation.

Unattended ESXi Host Installation Summary

ESXi hosts can be installed using scripts for unattended installations or upgrades. This is good for mass host deployments. The script contains the ESXi installation settings and this can be applied to hosts with similar configurations. The summarised steps are as follows.

Installation paths depend on which of the host’s disks the ESXi software is being installed on. These installation paths are shown below.

• Installation is always on first disk: only need one script for all hosts

• Installation is on different disks: multiple scripts, one for each disk location

The installation script determines the target disk using one of the following commands.

• install

• upgrade

• installorupgrade

Other options are as follows.

• accepteula

• vmaccepteula

If all installations are on a single disk, then a single script can be used. But multiple scripts will be needed if the ESXi image is being installed on different disks for various machines.

The steps for this are as follows.

1. Create the required script.

2. Upload the script to a reachable FTP/HTTP(S)/NFS server or USB flash drive.

3. Start the target host.

4. When the following window appears, press Shift + O to edit the boot options.

5. Type in the boot options that start the script.

When the runweasel command prompt appears, type in the required kickstart options. The following is an example of what can be used (entered as a single line of boot options).

ks=http://00.00.00.00/kickstart/ks-osdc-pdp101.cfg

netmask=255.255.255.0

gateway=00.00.00.000

nameserver=00.00.0.0

ip=00.00.00.000

The ks filepath is the path of the kickstart file that you want to use to deploy this server.

A typical ks file is shown below.

#
# Sample scripted installation file
#

# Accept the VMware End User License Agreement
vmaccepteula

# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword

# The install media is in the CD-ROM drive
install --firstdisk --overwritevmfs

# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0

# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )

This will run the installation; follow the onscreen instructions where relevant, as done for the attended installation.

Running ESXi in a Test Virtual Machine

IMPORTANT: running ESXi in a virtual machine must NOT be done in a production environment. It is just for lab purposes.

It is possible to run what is referred to as a nested ESXi host in a virtual machine if the following requirements are met. Note that this is NOT for a production environment and is useful only for testing purposes if you do not have enough physical equipment or do not have at least two servers and a SAN.

Specifications are as follows.

• Minimum 1.5GB of RAM

• Minimum 2 vCPUs

• Enough disk space for the required VMs (calculate disk per VM x number of VMs)

• The physical CPU on the host must support native virtualisation e.g. Intel VT or AMD-V type processors

• VMWare Workstation 6.5 or VMWare Server 2 or VMWare Fusion 5 on the physical host

• 64-bit operating system is recommended

Let’s now look at installing VCSA (vCenter Server Appliance) on the deployed ESXi host.

VCSA Installation and Configuration

Now that we have set up the ESXi server(s), we can proceed with the VCSA (vCenter Server Appliance) setup. VCSA is the newer, non-Windows version of vCenter that can be installed on Linux. The installation and setup of a VCSA server requires a number of steps. These start with the actual installation of the VCSA software on an ESXi host that has already been built. From then, further ESXi hosts can be added to the VCSA GUI for management in a single pane of glass. Then, further configuration such as cluster, datacenter, folder, image profile, host profile and Auto Deploy configurations can be added as well to facilitate faster and automated ESXi host and virtual machine deployment. Other items such as network and distributed switch configuration can also be done in the VCSA GUI.

Recommended Large-scale Host Deployment Specifications

Use the following specifications for each destination vSphere/VCSA server when deploying a large estate consisting of thousands of virtual machines.

• Large (up to 1000 hosts, 10,000 VMs) – 16 CPUs, 32 GB RAM

• X-Large (up to 2000 hosts, 35,000 VMs) – 24 CPUs, 48 GB RAM – new to v6.5

Hardware Configuration Best Practice

The best practice settings to apply for the deployed (virtual) hardware of each vSphere server are as follows.

• 2GB for 4 image profiles with some additional free space

• Each image profile needs 350MB

• Ensure a suitable DHCP server to manage the required VLAN IP ranges

• replace the PXE boot gpxelinux.0 file name with

§ snponly64.efi.vmw-hardwired for UEFI

§ undionly.kpxe.vmw-hardwired for BIOS

• If you want to manage vSphere Auto Deploy with PowerCLI cmdlets, verify that Microsoft .NET Framework 4.5 or 4.5.x and Windows PowerShell 3.0 or 4.0 are installed on a Windows machine

You can install PowerCLI on the Windows system on which vCenter Server is installed or on a different Windows system.

Useful vSphere Downloads

The following link was available at the time of writing and contains various VMWare download assets for deploying ESXi and VCSA.

https://my.vmware.com/web/vmware/downloads/details?downloadGroup=VC670&productId=742&rPId=22641#product_downloads

You must download the required VCSA version ISO to carry out its installation in the later sections on this subject.

VCSA Installation

Now that the relevant hardware resources and requirements are in place, let’s continue with the deployment. This can be summarised as follows.

Download the ISO image from the relevant VMware vCenter location. This can be done as follows; note that you must already have a VMware login for this, so create an account if you do not.

Note that any references to VMware vCenter in these instructions also means VCSA.

• Log in to VMware Customer Connect

• Go to Products and Accounts > All Products

• Search for VMware vSphere

• Click View Download Components

• Select the relevant version from the Select Version dropdown

• Select the required VMware vCenter Server version

• Click GO TO DOWNLOADS

• Download the vCenter Server appliance ISO image

• Check that the md5sum is correct using an MD5 checksum tool of your choice (an example is shown below)
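For the final step, a minimal sketch on Linux is to run md5sum against the downloaded file and compare the output with the checksum published on the download page; the filename follows the placeholder pattern used later in this chapter.

md5sum VMware-vCSA-all-version_number-build_number.iso

If the printed hash does not match the published value, re-download the ISO before proceeding.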

As for the target ESXi host, confirm the following.

• It is not in lockdown or maintenance mode and is not part of a fully automated DRS cluster

• If deploying it on a DRS cluster confirm that the cluster contains at least one ESXi host that is not in lockdown or maintenance mode

Now that you have obtained a copy of the required VCSA software from the VMware download site, copy and mount the vSphere installation ISO on the Linux machine from which you will run the installer (the appliance itself will be deployed to the target ESXi host). You can use the Linux mount command to carry out this step; consult the Linux man pages for further help on the mount command.

An example of the required command on Linux is shown below.

sudo mkdir vcsa_media

sudo mount -o loop VMware-vCSA-all-version_number-build_number.iso vcsa_media

From a CLI on this machine, locate the folder where the ISO is mounted, then navigate to its vcsa-cli-installer/templates folder. Create a new JSON file, e.g. newinstall.json (it can be any name you choose), by making any necessary changes to an existing template in the folder. An example of a default template that can be found in this folder at the time of writing is shown below. Some sections have been truncated for brevity.

{
    "__version": "2.13.0",
    "__comments": "Sample template to deploy a vCenter Server Appliance with an embedded Platform Services Controller on an ESXi host.",
    "new_vcsa": {
        "esxi": {
            "hostname": "<FQDN or IP address of the ESXi host on which to deploy the new appliance>",
            "username": "root",
            "password": "<Password of the ESXi host root user. If left blank, or omitted, you will be prompted to enter it at the command console during template verification.>",
            "deployment_network": "VM Network",
            "datastore": "<A specific ESXi host datastore, or a specific datastore in a datastore cluster.>"
        },
        "appliance": {
            "__comments": [
                "You must provide the 'deployment_option' key with a value…(truncated)"
            ],
            "thin_disk_mode": true,
            "deployment_option": "small",
            "name": "Embedded-vCenter-Server-Appliance"
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "system_name": "<FQDN or IP address for the appliance. Optional when the mode is Static. Remove this if using dhcp.>",
            "ip": "<Static IP address. Remove this if using dhcp.>",
            "prefix": "<Network prefix length. Use only when the mode is 'static'…(truncated)>",
            "gateway": "<Gateway IP address. Remove this if using dhcp.>",
            "dns_servers": [
                "<DNS Server IP Address…(truncated)>"
            ]
        },
        "os": {
            "password": "<Appliance root password>",
            "ntp_servers": "time.nist.gov",
            "ssh_enable": false
        },
        "sso": {
            "password": "<vCenter Single Sign-On administrator password; refer to template help for password policy…(truncated)>",
            "domain_name": "vsphere.local"
        }
    },
    "ceip": {
        "description": {
            "__comments": [
                "++++VMware Customer Experience Improvement Program (CEIP)++++",
                "…(truncated)"
            ]
        },
        "settings": {
            "ceip_enabled": true
        }
    }
}

Save this file after customising it to suit this installation. Then change to the installer folder, which holds the installer binaries for all operating systems.

cd ..

You should now be in the vcsa-cli-installer folder. Carry out the following test and implementation commands from this folder.

VCSA Testing

Use the vcsa-deploy tool to test the installation as follows.

These commands use the ../templates/newinstall.json file, but change this to match the specific name you used for your JSON configuration file.

View installation help

lin64/vcsa-deploy install --help

View installation JSON template help

lin64/vcsa-deploy install --template-help

Basic template verification without installing

lin64/vcsa-deploy install --accept-eula --verify-template-only ../templates/newinstall.json

Installation pre-check without installing

lin64/vcsa-deploy install --accept-eula --precheck-only ../templates/newinstall.json

Run installation

Now we have looked at a few useful commands, the following command shows how to run the installation using the ISO mounted in the previous step.

lin64/vcsa-deploy install --accept-eula ../templates/newinstall.json

This will now install VCSA, after which it can be configured for VM deployment and management.

Let the installation run to completion and browse to the IP address for the vSphere server that is shown in the setup window.

Setting Up a Cluster

Now that the vSphere server is installed, clusters can be created on it. A cluster is a vSphere object representing a group of ESXi hosts, as described previously. From the right-hand side menu in the VCSA GUI, select the required datacenter the cluster will be in. Right-click on it so that the following submenu appears.

Select the New Cluster option as highlighted below.

Click on it to enter the New Cluster setup dialogue window shown below.

After typing in the name of the new cluster and selecting the required DRS, HA and vSAN options in the following window, click on the Next button.

DRS Settings

VCSA can automatically assign and manage resources amongst multiple ESXi hosts. This can be done with the Distributed Resource Scheduler (DRS) cluster function, which can be configured during cluster setup. DRS provides resource scheduling and load balancing functions. This feature automatically adds VMs in the cluster as manageable by DRS and, by default, checks the cluster's load every 5 minutes so that it can recommend or automatically carry out vMotion (moving a VM to a different host). It will only do this if the destination host is not under load and can carry the VM. Note that vSphere 6.7 also considers network load in its DRS calculations.

Affinity rules can also be configured to keep groups of related VMs together in the instance of a vMotion being required.

As a reminder, these previously described rules are as follows.

• VM-VM

• VM-Host

The DRS feature for a cluster is enabled by toggling the setting shown below in the New Cluster setup window.

You can view the default settings by clicking on the Information icon.

The VSAN settings can also be toggled in this window.

NB: when setting up a cluster in the VCSA GUI, do not select the following option (leave it unticked), so that you can have multiple images in a single cluster. If not, then you will need to upgrade to a minimum of ESXi 7.0.

Also, you will not be able to manage pre-7.0 hosts with a single image on this vSphere server.

After clicking on the Next button, the following window will appear. Click on the Finish button to complete the cluster setup.

Uploading ISOs and Images

With a cluster set up, ISOs and other types of images can be uploaded to the relevant datastore on a server. To do this, select the required datastore, go to its Files view in the main window and then navigate to the required folder that will host the ISOs and images. Then click on the Upload Files (or Upload Folder if the content is in a folder) link and follow the onscreen upload dialogue to locate the relevant file on your local drive and upload it.
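Alternatively, if SSH is enabled on an ESXi host, files can be copied straight into a datastore folder from a shell. This is a hedged sketch only; the ISO name, host and datastore path below are placeholders.

scp VMware-ESXi-installer.iso root@esxihost01:/vmfs/volumes/datastore1/isos/

The GUI upload remains the simplest option, but the shell route can be handy when scripting bulk uploads.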

Additional ESXi Host Installation Using Auto Deploy

We have installed ESXi hosts and a vSphere server. The Auto Deploy process is the process of onboarding physical hardware for usage within the vSphere software as ESXi hosts. It is how physical compute is made available for use so that virtual machines can be built on it. The Auto Deploy process itself uses the previously defined Auto Deploy Rules to carry out its functions. There are several setup steps required by Auto Deploy to carry out its functions. It requires the following for each host.

• Image Profile Assignment

• Extensivity Rules Setup

These are described further below

Image Profile Assignment

This is the assignment of an image profile that will be used by the Auto Deploy rule. It requires setup in the VCSA GUI and this is described later on in this document.

Extensivity Rules Setup

This ensures that VIBs at the CommunitySupported level can only contain files from certain locations, such as the ESXCLI plug-in path. If a VIB is added from a location different to the image profile's, there is a warning, but this can be overridden with the force option. This rule setup is also described later on.

Now that we have introduced Auto Deploy as a concept, let’s review a few advisory notes and pointers to code and tools that can be used to set up Auto Deploy.

Auto Deploy Pre-requisite - Preboot Execution Environment (PXE)

Auto Deploy requires a PXE boot environment, as this will host the configuration files that instruct VCSA on how the ESXi hosts should be configured. PXE is an industry standard method for the automatic deployment of software on hardware that has just booted up. It requires the setup of the PXE software, its menu(s), network file hosting applications such as FTP or TFTP locations as well as the actual storage of any binaries such as ISO files that are needed for installing the required software on the hardware. A PXE setup is required to use Auto Deploy for automated ESXi host installations. PXE requires DHCP and TFTP services on the network, so make sure these servers are present on the environment. Also, the install scripts must be on the TFTP server.

Servers typically boot in either the older BIOS mode or the newer UEFI mode. As a preference, select servers that are UEFI boot capable, as these can PXE-boot using both IPv4 and IPv6 addresses whereas BIOS boot only devices can only PXE-boot using IPv4 addresses.

Why Use PXE Boot?

PXE boot is an established way of deploying a large number of devices in an unattended manner without human intervention. It has been battle-tested over the years and is straightforward to use once you know how. It provides the following advantages.

• Allows less technical users to deploy equipment

• Reduces the rollout duration

• Decrease the number of rollout errors

• Simplifies installation and centralisation of configuration assets

PXE boot follows a technical specification and standards which can be found here at the time of writing.

http://www.pix.net/software/pxeboot/archive/pxespec.pdf

A key component of the PXE boot method is the device’s network card. This must support PXE boot standards as a means of getting the OS onto the system unattended using a suitable storage medium on the device such as a CD-ROM or USB drive. This entails bootstrapping the device over the network, providing it with a source (e.g. TFTP) for the OS software, downloading the software to the device media provided for storage and installing it.

Technically, the deployment engineer should be able to power on the server, connect it to a functioning network using suitable connectivity, and let PXE boot do the rest of the job: giving the device the required assets for connecting to the network, then installing and configuring the operating system.

So, things to check to ensure a device is PXE-bootable are as follows.

• PXE-boot compliant NIC

• Suitable storage e.g. USB, CD-ROM or similar

• TFTP server (for storing the operating system on the network)

• DHCP server (to issue the new device with a reachable IP address and TFTP server and PXE boot binary and configuration data)

The device carries out the following steps.

• Boots using its firmware, which also requests an IP address from any available DHCP servers

• On receiving and accepting an offer (DHCP process outside this book’s scope), the device NIC firmware sends a request to the next-server option IP address found in the DHCP offer and receives TFTP server and related PXE boot information in the DHCP offer

• It then receives the requested boot file in a UDP data stream from the TFTP server

• At this stage the device will either:

• boot up an automated OS installer and install the OS from the local source OR

• have a root file system mounted via NFS as its OS and install from that source. The local source option is recommended to avoid potential network or other communication issues between the device and the NFS.

Types of PXE Boot

There are different types of PXE boot options currently available for ESXi. These are summarised as follows.

• SYXLINUX

• PXELINUX

• gPXELINUX

These are described further below.

SYSLINUX

• Open-source boot environment for legacy BIOS

• Uses the ESXi boot loader mboot.c32 running as a SYSLINUX plugin

• Can boot from disk, ISO or network

• Requires setup of a configuration file

• Needs a kernel specified

• Package location at time of writing was http://www.kernel.org/pub/linux/utils/boot/syslinux/

PXELINUX

• Boot from a TFTP server using PXE standard

• Uses ESXi boot loader mboot.c32

• Binary is pxelinux.0

• Requires setup of a configuration file

• Needs a kernel specified

gPXELINUX

• Hybrid configuration of both PXELINUX and gPXE for booting from a web server

• This is outside of this book’s scope

At the time of writing, VMware builds use an mboot.c32 plugin built to work with SYSLINUX version 3.86, so you should only test or deploy PXE boot with this SYSLINUX version.
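If you need to stage PXELINUX from that SYSLINUX release yourself, a minimal sketch is shown below; it assumes the 3.86 tarball is still available at the package location given above and that /var/lib/tftpboot is the TFTP root used later in this chapter.

cd /tmp
curl -O http://www.kernel.org/pub/linux/utils/boot/syslinux/syslinux-3.86.tar.gz
tar -xzf syslinux-3.86.tar.gz
# copy the prebuilt PXELINUX binary from the extracted tree into the TFTP root
find syslinux-3.86 -name pxelinux.0 -exec cp {} /var/lib/tftpboot/ \;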

The DHCP server could be configured to provide different initial boot loader filenames to different servers based on the host MAC address or other criteria if necessary.

Note that Apple products do not support PXE boot.

UEFI PXE and iPXE

UEFI devices typically include PXE support to boot from a TFTP server. The firmware can load the mboot.efi ESXi boot loader binary for UEFI devices; PXELINUX and other additional software is not required. An alternative to UEFI PXE is iPXE, which can also be used for devices that do not have PXE in their firmware or for older UEFI devices. It can be installed on and booted from a USB flash drive.

This installation is beyond this book’s scope but check online for relevant resources.

PXE Boot Requirements

As mentioned previously, PXE boot needs the following services on the network.

• DHCP Server

• TFTP Server

The setup of these services is described further below.

Note that this book only covers PXE deployment on CentOS7. If you are not using CentOS7, check online for your operating system’s specific PXE setup instructions.

DHCP Server Setup

This describes the DHCP server setup on CentOS7. DHCP services must exist and be configured a certain way to support PXE booting on the network. The following script will set up a basic DHCP server on CentOS. Be sure to change the IP network, addressing and domain fields to those required for your environment.

The scripted steps for this shown below can be summarised as follows.

• Install DHCP

• Create the /etc/sysconfig/dhcpd configuration file

• Create the /etc/dhcp/dhcpd.conf configuration file

• Start and enable the DHCP server

The scripted commands are shown below. Run these commands from a CentOS7 shell.

MAKE SURE YOU CHANGE THE SETTINGS SHOWN E.G. THE INTERFACES, DOMAIN NAMES, IP ADDRESSES AND OTHER SPECIFIC SETTINGS TO MATCH YOUR IMPLEMENTATION.

yum install dhcp -y

cat > /etc/sysconfig/dhcpd << EOF
DHCPDARGS=eth1 # change this to the interface that DHCP is being served on
EOF

cat > /etc/dhcp/dhcpd.conf << EOF
option domain-name "gridiron-app.com";
option domain-name-servers 192.168.8.1, 8.8.8.8;
default-lease-time 600;
max-lease-time 7200;
authoritative;
log-facility local7;
allow booting;
allow bootp;
option client-system-arch code 93 = unsigned integer 16;

class "pxeclients" {
  match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
  next-server 192.168.8.245;
  if option client-system-arch = 00:07 or option client-system-arch = 00:09 {
    filename = "mboot.efi";
  } else {
    filename = "pxelinux.0";
  }
}

subnet 192.168.8.0 netmask 255.255.255.0 {
  option routers 192.168.8.1;
  option subnet-mask 255.255.255.0;
  option domain-search "gridiron-app.com";
  option domain-name-servers 192.168.8.1, 8.8.8.8;
  option time-offset -18000; # Eastern Standard Time
  range 192.168.8.120 192.168.8.125;

  host gridserver01 {
    option host-name "gridserver01.gridservers.com";
    hardware ethernet 00:12:12:5C:4A:CC;
    fixed-address 192.168.8.126;
  }
}
EOF

systemctl start dhcpd
systemctl enable dhcpd

All Linux commands shown were tested successfully on CentOS7. Note that some of the commands may need changing if you are using a different operating system and version.

Take note of the pxeclients class section in /etc/dhcp/dhcpd.conf where the mboot.efi filename for EFI booting and the pxelinux.0 filename for BIOS booting are listed as options. This gives the device the option to pick from these depending on its architecture i.e. the client-system-arch setting, where EFI bootable systems have a client-system-arch setting of 00:07 or 00:09.

The DHCP server must send the TFTP server address to the target host, so it knows where the PXE boot loader binary (e.g. mboot.efi or pxelinux.0) is. In the configuration above, the next-server setting provides the TFTP server IP address and the filename setting provides the location of the boot loader binary on that server.
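Before starting the service, the configuration can be checked for syntax errors with dhcpd’s built-in test mode, for example as follows.

# test the configuration file without starting the daemon
dhcpd -t -cf /etc/dhcp/dhcpd.conf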

TFTP Server Setup

This describes the TFTP server setup on CentOS7. Now that the DHCP server has been configured to send the required PXE boot information to the target devices, the TFTP server can be set up as follows. The following files must be configured for this.

• /etc/sysconfig/dhcpd: to specify the DHCP server’s interface

• /etc/dhcp/dhcpd.conf: DHCP server configuration such as IP ranges, options, etc

• TFTP located pxelinux.cfg/default: PXE boot menu settings

• TFTP located boot.cfg: PXE boot configuration including kickstart location if required

• TFTP and NGINX located kickstart file: automated steps to deploy the application

In the following example, the kickstart file is called esxikickstart.cfg. This file is saved in both the tftp and nginx folders to give both options as part of the installation.

The following example uses the kickstart file in the nginx folder.

The scripted steps for this shown below can be summarised as follows.

• Install TFTP, TFTP server, SYSLINUX, VSFTPD, XINETD and NGINX

• Mount the ISO to use and copy it to the TFTP server

• Copy across the UEFI mboot.efi auto-installation file

• Copy across the BIOS mboot.c32 auto-installation file

• Create the pxelinux.cfg folder

• Save the boot.cfg file to the tftpboot, EFI and nginx HTML folders

• Save the /etc/xinetd.d/tftp file pointing to the correct tftpboot folder

• Save the default file to the pxelinux.cfg folder

• Start and enable the relevant services

• Set SELINUX permissions to allow TFTP

The scripted commands are shown below. Run these commands from a CentOS7 shell.

MAKE SURE YOU CHANGE THE SETTINGS SHOWN E.G. THE INTERFACES, DOMAIN NAMES, IP ADDRESSES AND OTHER SPECIFIC SETTINGS TO MATCH YOUR IMPLEMENTATION.

Note that the file referred to by the ESXI_ISO variable setting (VMware-VMvisor-Installer-6.7.0.update03-14320388.x86_64.iso) should have been downloaded to the device prior to running these commands. This file is just what was used in this example, so change its name and that of any other ISO files shown to those of the specific ISO files you are using during deployment. Also, note that VSFTPD is installed so you could also configure the server to allow installations using FTP; you can delete the VSFTPD-related installation if this is not required. The commands to run are as follows.

yum install dhcp tftp tftp-server syslinux xinetd nginx -y

yum install vsftpd -y # only needed if you want to set up FTP-related installations as well

TFTPSERVER=192.168.8.245

IPGATEWAY=192.168.8.1

RANGESTART=192.168.8.120

RANGEFINISH=192.168.8.125

IPSUBNET=192.168.8.0

IPPREFIX=24

IPMASK=255.255.255.0

BROADCASTIP=192.168.8.255

DOMAIN="gridiron-app.com"

PXEDOMAIN="pxe.gridiron-app.com"

DOMAINPREFIX="gridiron"

ISO="CentOS-8.3.2011-x86_64-boot.iso"

ISOPATH=http://mirror.cwcs.co.uk/centos/8.3.2011/isos/x86_64/CentOS-8.3.2011-x86_64-boot.iso

OSLABEL="centos7_x64"

OSMENULABEL="CentOS 7_X64"

OSBUILD=centos7

DNSSERVER=192.168.8.245

DNSBACKUP=192.168.8.246

DNSCLIENT=192.168.8.247

REVERSEIP=8.168.192

cat > /etc/sysconfig/dhcpd << EOF
DHCPDARGS=eno1
EOF

cat > /etc/dhcp/dhcpd.conf << EOF
ddns-update-style interim;
ignore client-updates;
authoritative;
allow booting;
allow bootp;
allow unknown-clients;

subnet $IPSUBNET netmask $IPMASK {
  range $RANGESTART $RANGEFINISH;
  option domain-name-servers $IPGATEWAY;
  option domain-name "$PXEDOMAIN";
  option routers $IPGATEWAY;
  option broadcast-address $BROADCASTIP;
  default-lease-time 600;
  max-lease-time 7200;
  next-server $TFTPSERVER;
  filename "pxelinux.0";

  host gridvsphere {
    option host-name "gridvsphere.$DOMAIN";
    hardware ethernet 94:C6:91:A8:AB:EF;
    fixed-address 192.168.8.115;
  }
}
EOF

cat > /etc/xinetd.d/tftp << EOF

service tftp

{

socket_type = dgram

protocol = udp

wait = yes

user = root

server = /usr/sbin/in.tftpd

server_args = -s /var/lib/tftpboot

disable = no

per_source = 11

cps = 100 2

flags = IPv4

}
EOF

ESXI_ISO=VMware-VMvisor-Installer-6.7.0.update03-14320388.x86_64.iso

ESXI_TFTPFOLDER=/var/lib/tftpboot

if [[!-e "/mnt_esxi" ]];

then

mkdir /mnt_esxi

fi

mount -o loop $ESXI_ISO /mnt_esxi

if [[!-e "$ESXI_TFTPFOLDER" ]];

then

mkdir $ESXI_TFTPFOLDER

fi

echo "copy the esxi iso to the tftp folder"

cp -rf /mnt_esxi/* $ESXI_TFTPFOLDER

cp $ESXI_TFTPFOLDER/efi/boot/bootx64.efi $ESXI_TFTPFOLDER/mboot.efi

cp $ESXI_TFTPFOLDER/efi/boot/bootx64.efi /var/lib/tftpboot/mboot.efi

cp $ESXI_TFTPFOLDER/efi/boot/boot.cfg $ESXI_TFTPFOLDER/efi/boot/boot.cfg.backup

# add the esxikickstart.cfg kernelopt to boot.cfg

TFTPSERVER=192.168.8.245

ESXI_TFTPFOLDER=/var/lib/tftpboot

cat > $ESXI_TFTPFOLDER/boot.cfg << EOF
bootstate=0
title=Loading ESXi installer
timeout=5
prefix=
kernel=/b.b00
kernelopt=netdevice=vmnic0 bootproto=dhcp ks=http://$TFTPSERVER/esxikickstart.cfg
modules=/jumpstrt.gz --- /useropts.gz --- /features.gz --- /k.b00 --- /chardevs.b00 --- /user.b00 --- /procfs.b00 --- /uc_intel.b00 --- /uc_amd.b00 --- /uc_hygon.b00 --- /vmx.v00 --- /vim.v00 --- /sb.v00 --- /s.v00 --- /ata_liba.v00 --- /ata_pata.v00 --- /ata_pata.v01 --- /ata_pata.v02 --- /ata_pata.v03 --- /ata_pata.v04 --- /ata_pata.v05 --- /ata_pata.v06 --- /ata_pata.v07 --- /block_cc.v00 --- /bnxtnet.v00 --- /bnxtroce.v00 --- /brcmfcoe.v00 --- /char_ran.v00 --- /ehci_ehc.v00 --- /elxiscsi.v00 --- /elxnet.v00 --- /hid_hid.v00 --- /i40en.v00 --- /iavmd.v00 --- /igbn.v00 --- /ima_qla4.v00 --- /ipmi_ipm.v00 --- /ipmi_ipm.v01 --- /ipmi_ipm.v02 --- /iser.v00 --- /ixgben.v00 --- /lpfc.v00 --- /lpnic.v00 --- /lsi_mr3.v00 --- /lsi_msgp.v00 --- /lsi_msgp.v01 --- /lsi_msgp.v02 --- /misc_cni.v00 --- /misc_dri.v00 --- /mtip32xx.v00 --- /ne1000.v00 --- /nenic.v00 --- /net_bnx2.v00 --- /net_bnx2.v01 --- /net_cdc_.v00 --- /net_cnic.v00 --- /net_e100.v00 --- /net_e100.v01 --- /net_enic.v00 --- /net_fcoe.v00 --- /net_forc.v00 --- /net_igb.v00 --- /net_ixgb.v00 --- /net_libf.v00 --- /net_mlx4.v00 --- /net_mlx4.v01 --- /net_nx_n.v00 --- /net_tg3.v00 --- /net_usbn.v00 --- /net_vmxn.v00 --- /nfnic.v00 --- /nhpsa.v00 --- /nmlx4_co.v00 --- /nmlx4_en.v00 --- /nmlx4_rd.v00 --- /nmlx5_co.v00 --- /nmlx5_rd.v00 --- /ntg3.v00 --- /nvme.v00 --- /nvmxnet3.v00 --- /nvmxnet3.v01 --- /ohci_usb.v00 --- /pvscsi.v00 --- /qcnic.v00 --- /qedentv.v00 --- /qfle3.v00 --- /qfle3f.v00 --- /qfle3i.v00 --- /qflge.v00 --- /sata_ahc.v00 --- /sata_ata.v00 --- /sata_sat.v00 --- /sata_sat.v01 --- /sata_sat.v02 --- /sata_sat.v03 --- /sata_sat.v04 --- /scsi_aac.v00 --- /scsi_adp.v00 --- /scsi_aic.v00 --- /scsi_bnx.v00 --- /scsi_bnx.v01 --- /scsi_fni.v00 --- /scsi_hps.v00 --- /scsi_ips.v00 --- /scsi_isc.v00 --- /scsi_lib.v00 --- /scsi_meg.v00 --- /scsi_meg.v01 --- /scsi_meg.v02 --- /scsi_mpt.v00 --- /scsi_mpt.v01 --- /scsi_mpt.v02 --- /scsi_qla.v00 --- /sfvmk.v00 --- /shim_isc.v00 --- /shim_isc.v01 --- /shim_lib.v00 --- /shim_lib.v01 --- /shim_lib.v02 --- /shim_lib.v03 --- /shim_lib.v04 --- /shim_lib.v05 --- /shim_vmk.v00 --- /shim_vmk.v01 --- /shim_vmk.v02 --- /smartpqi.v00 --- /uhci_usb.v00 --- /usb_stor.v00 --- /usbcore_.v00 --- /vmkata.v00 --- /vmkfcoe.v00 --- /vmkplexe.v00 --- /vmkusb.v00 --- /vmw_ahci.v00 --- /xhci_xhc.v00 --- /elx_esx_.v00 --- /btldr.t00 --- /esx_dvfi.v00 --- /esx_ui.v00 --- /esxupdt.v00 --- /weaselin.t00 --- /lsu_hp_h.v00 --- /lsu_inte.v00 --- /lsu_lsi_.v00 --- /lsu_lsi_.v01 --- /lsu_lsi_.v02 --- /lsu_lsi_.v03 --- /lsu_lsi_.v04 --- /lsu_smar.v00 --- /native_m.v00 --- /qlnative.v00 --- /rste.v00 --- /vmware_e.v00 --- /vsan.v00 --- /vsanheal.v00 --- /vsanmgmt.v00 --- /tools.t00 --- /xorg.v00 --- /imgdb.tgz --- /imgpayld.tgz
build=
updated=0
EOF

# update the efi version of boot.cfg

cp $ESXI_TFTPFOLDER/boot.cfg $ESXI_TFTPFOLDER/efi/boot/boot.cfg

echo "create the required esxi tftpboot subfolder"

if [[!-e "$ESXI_TFTPFOLDER/pxelinux.cfg" ]]; then

mkdir $ESXI_TFTPFOLDER/pxelinux.cfg

fi

cat > /var/lib/tftpboot/esxikickstart.cfg << EOF
accepteula

install --firstdisk --overwritevmfs

network --bootproto=dhcp --device=vmnic0

rootpw Community01!

reboot

%firstboot --interpreter=busybox

# enable & start SSH

vim-cmd hostsvc/enable_ssh

vim-cmd hostsvc/start_ssh

# enable & start ESXi Shell

vim-cmd hostsvc/enable_esx_shell

vim-cmd hostsvc/start_esx_shell

# Suppress ESXi Shell warning

esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

esxcli system settings advanced set -o /UserVars/HostClientWelcomeMessage -i 0

reboot

EOF

# copy to the default website so that boot.cfg can use it as its kernelopt http source
cp /var/lib/tftpboot/esxikickstart.cfg /usr/share/nginx/html

cat > $ESXI_TFTPFOLDER/pxelinux.cfg/default << EOF
default Default Installation

NOHALT 1

LABEL Default Installation

KERNEL mboot.c32

APPEND -c boot.cfg inst.ks=http://$TFTPSERVER/esxikickstart.cfg

IPAPPEND 2

EOF

systemctl start xinetd
systemctl enable xinetd

systemctl start dhcpd
systemctl enable dhcpd

systemctl start vsftpd # only needed if VSFTPD is installed

systemctl enable vsftpd # only needed if VSFTPD is installed

systemctl start tftp
systemctl enable tftp

setsebool -P allow_ftpd_full_access 1

It might be necessary to configure any firewall software running on the device (e.g. firewalld or iptables) to allow DHCP and TFTP traffic. Below is an example of this for firewalld; consult your firewall’s documentation for further information if necessary.

firewall-cmd --add-service=dhcp --permanent

firewall-cmd --add-port=69/tcp --permanent

firewall-cmd --add-port=69/udp --permanent

firewall-cmd --add-port=4011/udp --permanent

firewall-cmd --reload

systemctl restart xinetd

systemctl restart dhcpd

systemctl restart vsftpd

systemctl restart tftp

PXELINUX BIOS Boot File Configuration

In the above example, the configuration uses a file called /tftpboot/pxelinux.cfg/default as a single configuration file for all PXE boot installations.

Alternatively, you can use the target host’s MAC address as a filename using the following filename format.

/tftpboot/pxelinux.cfg/<MAC_ADDRESS>

For instance, this could be saved as follows for a host with MAC address 00:21:5a:ce:40:f6 (the leading 01- in the filename denotes the Ethernet hardware type).

/tftpboot/pxelinux.cfg/01-00-21-5a-ce-40-f6

You would then just have to change the file contents to suit the specific requirements for the device with that MAC address. Booting in the above instance has been done with a TFTP server; it can also be done with an HTTP server, but for brevity that is outside of this book’s scope.
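As a minimal sketch, the per-host filename can be generated from a MAC address by lower-casing it, swapping the colons for dashes and prefixing it with 01; the MAC address below is just an example value.

MAC="00:21:5a:ce:40:f6" # example MAC address - change to that of your target host
FILE="01-$(echo "$MAC" | tr 'A-Z' 'a-z' | tr ':' '-')"
cp /var/lib/tftpboot/pxelinux.cfg/default "/var/lib/tftpboot/pxelinux.cfg/$FILE"
# now edit the copied file with the settings specific to that host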

VCSA Preparation for Auto Deploy

With the PXE setup now in place, multiple ESXi hosts can be installed automatically using Auto Deploy. This requires the installation of a vCenter Server Appliance (VCSA), a preconfigured Linux (Photon) virtual machine version of vCenter. It can be deployed with an embedded or external Platform Services Controller (PSC), which provides single sign-on, licencing and certificate authority functionality. The steps required to prepare a VCSA to carry out Auto Deploy of ESXi hosts can be summarised as follows.

• Install VCSA and ensure Auto Deploy is installed on VCSA (configure it to start up)

• Confirm the hosts meet ESXi standards

• Confirm they have network connectivity and meet port requirements

• Confirm the VLAN setup is correct

• Confirm that the storage requirements are suitable

Auto Deploy Ansible Role Code

There is a published Red Hat Ansible role that can be used to carry out Auto Deploy functions as well. Ansible is a configuration management tool that can be used to automate

the provisioning and orchestration of physical and virtual devices. For those of you new to Ansible, you can read more about it in the Big Little Book on Ansible at the following link.

https://www.amazon.co.uk/dp/B07PWTLRKJ/ref=dp-kindleredirect?_encoding=UTF8&btkr=1

The following link containing code that can be used to carry out Auto Deploy functions using Ansible was available at the time of writing and it contains an Ansible role for the automated deployment of ESXi hosts.

https://github.com/vmware-archive/ansible-role-Auto Deploy

VMWare Compatibility Guide

It is worth consulting the VMware compatibility guide at the following link to confirm hardware and other installation requirements.

https://www.vmware.com/resources/compatibility/search.php

Boot Process

The boot process requires the following items to be provided.

• image profile

• optional host profile

• vCenter location information (datacentre, cluster, folder)

• script bundle

This can be done using the vSphere Web Client or PowerCLI.

Recommended Setup

This can be summarised as follows.

• Set up a remote syslog server

• Install the ESXi dump collector and set up the first host so that all dumps are directed to it, then apply this host profile to all other hosts

• PXE boot with legacy BIOS is only possible over IPv4, whereas UEFI PXE boot works over both IPv4 and IPv6

Normal Scripted Installation

This can be summarised as follows.

• script provisions host

• host boots from disk

Stateless Caching Auto Deploy

The Stateless Caching Auto Deploy method can load the ESXi image directly into memory, and the vSphere server will continue to manage the host from then onwards. This could leave the host without a valid configuration if it cannot reach the vSphere server and no longer has the image in memory. Regardless, this method is useful if several servers are connecting to Auto Deploy and being built simultaneously; it can be used for hundreds of servers.

This can be summarised as follows.

• The Host Profile is configured for Stateless Caching

• No reboot of the ESXi host is needed

• Auto Deploy now provisions the ESXi host

• If rebooted, the new ESXi host tries to reach Auto Deploy

• If Auto Deploy is not reachable, the new host boots from the cached image

• The ESXi host needs ongoing reachability to Auto Deploy

This means the configuration state is not stored on the ESXi host’s disk. It is stored in Auto Deploy as follows.

• Image Profile: containing image information

• Host Profile(s): containing host attribute information

This configuration state information must always be reachable by the ESXi hosts deployed using the Stateless Caching Auto Deploy method.

Stateful Auto Deploy

This is the alternative to the Stateless Caching Auto Deploy method. It requires Auto Deploy to install the build’s Image and Host Profile locally on the target ESXi host, so that they are always present on the ESXi host even after reboots. From that point onwards, Auto Deploy is not responsible for managing the host; this is similar to the scripted installation, just using a remotely hosted set of Image and Host Profiles.

The steps for this can be summarised as follows.

1. The Host Profile is configured for stateful install

2. Auto Deploy then provisions the ESXi host

3. On reboot, the ESXi host uses Auto Deploy to complete its deployment

4. Then there is an auto-reboot from disk

5. After this, the new ESXi host doesn't need Auto Deploy anymore as the Image and Host Profiles are now stored locally

This is useful for Image Profile deployment over networks. It doesn't need PXE boot.

This means the host boots from a configuration saved on its disk. Auto Deploy does the following.

1. Auto Deploy specifies the image to deploy

2. Auto Deploy specifies the hosts to provision with this image

3. Optionally selects the Host Profiles for each host

4. Optionally applies the vCenter server datacentre, folder and cluster

5. Optionally applies the script bundle per host

It uses PXE boot to initialise the server with IP reachability and other functions.

VCSA uses rules and rule sets to specify the Auto Deploy behaviour. Auto Deploy's rule engine checks the rule set to match host patterns against requirements i.e. to determine Image and Host Profiles, vCenter location items such as datastore, folder and cluster as well as any relevant script objects required to provision each host.

TFTP and DHCP Settings In VCSA

VCSA also needs to be configured to use the TFTP and DHCP servers. Carry out the following steps to ensure VCSA itself is set up to correctly use the TFTP and DHCP servers.

TFTP Server Setup

This can be summarised as follows.

• Log in to the vCenter server using the web client

• Go to the Inventory list > vCenter server system > Manage tab > Settings > Auto Deploy > Click Download TFTP Boot Zip

• This will download the boot ZIP file; copy it to the TFTP server and unzip it in the same directory where the TFTP server stores its files (a sketch of this step follows this list)
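As a minimal sketch, assuming the downloaded file is named deploy-tftp.zip and the TFTP root from the earlier setup is /var/lib/tftpboot, the copy and unzip step might look as follows.

# run on the TFTP server after copying the downloaded ZIP file across
yum install unzip -y
unzip /tmp/deploy-tftp.zip -d /var/lib/tftpboot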

DHCP Server Setup

This can be summarised as follows.

• Ensure that the DHCP server points the hosts at the TFTP server holding the unzipped boot files discussed in the TFTP Server Setup above by specifying its address in DHCP option 66 (next-server); a dhcpd.conf sketch follows this list

• Specify the DHCP option 67 (boot-filename)

o snponly64.efi.vmw-hardwired for UEFI

o undionly.kpxe.vmw-hardwired for BIOS

• Set each host to boot using the network or PXE in accordance with its manufacturer's instructions
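As a hedged sketch of what the two options look like in the ISC dhcpd configuration used earlier in this chapter (the next-server address is the example TFTP server from the previous sections, and option client-system-arch was defined in that earlier configuration), the pxeclients class could be adjusted as follows.

class "pxeclients" {
  match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
  next-server 192.168.8.245; # DHCP option 66 - the TFTP server address
  if option client-system-arch = 00:07 or option client-system-arch = 00:09 {
    filename = "snponly64.efi.vmw-hardwired"; # DHCP option 67 for UEFI hosts
  } else {
    filename = "undionly.kpxe.vmw-hardwired"; # DHCP option 67 for BIOS hosts
  }
}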

Triggering Auto Deploy in vSphere Client of VCSA

We now have VCSA installed (i.e. vSphere server with or without a Platform Services Controller on top of an ESXi host). This means we can expand the infrastructure by installing additional ESXi hosts that can be managed by VCSA. Each of these ESXi hosts can host VMs and this VM installation can all be handled by a single, centralised VCSA. This makes for a much more scalable infrastructure, as the load and weight of hosting VMs no longer needs to reside in a single ESXi host but on multiple ESXi hosts.

Carry out the following installation steps to install the additional ESXi hosts.

Login and go to the vSphere Web Client, then carry out the following actions.

• Home Page > Administration > System Configuration > Services

• Select Auto Deploy > Actions > Edit Startup Type > Automatic

If this is vSphere Web Client-managed: Home Page > Administration > System Configuration > Services > ImageBuilder Service > Edit Startup Type > Automatic

If this is PowerCLI-managed: Download PowerCLI from VMware site, double-click the PowerCLI executable then follow the prompts to install it

Considerations

Take note of the following items before carrying out automated host installations.

• You can ask the deployment to overwrite VMFS partitions (except for USB sources).

• HA environments are useful in Stateless Caching deployments in case a backup server needs to be contacted to obtain a version of the image.

• Boot order: stateless hosts boot from the network and then disk, whereas stateful hosts boot from disk and then the network. If an image is already on disk, configure the server for a one-time PXE boot so that the Auto Deploy server can provision it; it will then use a host profile with a stateful install configuration.

Connectivity Requirements

If you need to connect to Online Software Depots, make sure that the VCSA has connectivity to them. If that means Internet connectivity or VPN connectivity to any vendor locations, make sure that is in place and working.

Further Auto Deploy Documentation

Further VMware Auto Deploy documentation can be found at the following link at the time of writing.

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-62EB313B-F120-470F-98AA-074EB687DBAB.html#GUID-62EB313B-F120-470F-98AA-074EB687DBAB

Auto Deploy requires that the production and management/deployment networks are separate for security purposes.

Adding Software and Images

Now that additional ESXi hosts have been added to the current VCSA estate, there will be times when you must add new software and images to VCSA that will be used for deploying new VMs. The following sections show the steps to carry out the addition of new software to the VCSA Software Depot and images to required image locations in VCSA.

Setting Up a Software Depot

If the required Software Depot doesn’t exist, add it by clicking on the New link to the right of the screen shown below.

The following window will appear.

You can set up an Online Software Depot as shown below, then click Add to commit the changes

Alternatively, set up a Custom Software Depot to use assets stored locally as shown below and click Add to commit the changes.

This new Software Depot will now appear in the dropdown list of Software Depots shown in the following view. Select the required Software Depot from the dropdown in the window. The options are Custom or Online Software Depots. In this instance, we will select a Custom Software Depot as shown below.

If using a Custom Software Depot and there are no packages listed in the main window, click on Import link to the right of the screen.

The following window will appear. Add the Software Depot’s name and file location in the window below at which point the Upload button will be available.

Click this button to commit these changes.

Setting Up an Image Profile

Now that you have set up the required Software Depot, you can set up an Image Profile that will be used for automated deployment of ESXi hosts. To do this, starting from the previously accessed Auto Deploy page, click on the New Image Profile link shown below.

In the window that appears, type in the required Image Profile name and the vendor whose software will be deployed using it, then click the Next button.

Now select the software package itself by selecting the Acceptance Level from the dropdown below.

The possible options are listed below.

Click on the Next button and in this case accept any further default settings then follow the onscreen dialogue to complete the setup.

Setting Up a Host Profile

Amongst other things, this is where you can configure Host Profiles that will use either Stateless Caching Auto Deploy or Stateful Auto Deploy, described previously. Host Profiles can be deployed from the vSphere Web Client, or Auto Deploy rules in PowerCLI can be used to apply them.

Using the Web Client to get a Host Profile Template for Application

Carry out the following steps.

• Provision a host with Auto Deploy

• Edit the host System Image Cache Configuration file

• Put the target host(s) in maintenance mode, apply the new Host Profile to each of them and then instruct the host to exit maintenance mode

If Stateless Caching is used, the image is cached and no reboot is required. If Stateful, the image is applied on reboot of the host.

Using PowerCLI to get a Host Profile Template for Application

Carry out the following steps.

• Provision a host with Auto Deploy

• Edit the host System Image Cache Configuration file

• Write a rule to use this Host Profile on other hosts

• Put the target host(s) in maintenance mode, apply the new Host Profile to each of them and then instruct the host to exit maintenance mode

If Stateless Caching, first boot Auto Deploy provisions host and caches image then on reboot Auto Deploy provisions the host.

If Stateful, first boot uses the image and this image is applied from disk on its reboot.

Add a Software Depot Containing Image Profiles for Rules

To use an image profile in a rule, you must first add a Software Depot to the ESXi Image Builder inventory from which the image profile can be downloaded. This is done as follows.

In the VCSA screen, navigate to Auto Deploy > Software Depot > Add Software Depot > Select from Online Depot or Custom Depot

The following screen will appear.

You can specify an online depot. For example, the VMware online depot at the time of writing is as follows.

https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

You can paste this in the URL field and click the OK button to commit the changes.

If using a custom depot, type in its name and click OK to commit the changes.

The Image Profiles page will look like this.

The Software Packages page will look similar to this.

If your software package is listed here, the addition is complete.

Create a Deploy Rule Using the Web Client

A deploy rule is used to determine which hosts a particular deployment should be applied to. It does this by specifying a filter on the hosts that should be worked on. This can be done in the web GUI as follows.

In the VCSA screen, navigate to Auto Deploy > Deploy Rules

The following screen will appear.

Click on New Deploy Rule and complete the New Deploy Rule Wizard that appears onscreen (its first window is shown below).

The following types of information are needed to complete this step in the GUI.

• Rule Name

• Pattern (of the hosts to apply it to)

Then select the relevant options in the following window.

The following options are presented onscreen.

• Host Location

• Image Profile (to deploy on the host)

• Host Profile (to deploy on the host)

• Script Bundle

For this example, we will select all of the options as shown below and then click Next.

You will then be asked to select the host location as shown below.

Expand the above dropdowns to list the relevant clusters. In this example, we have done this and selected the relevant cluster as shown below. Click on the now available Next button.

The following screen will appear, confirming that an image profile for this rule will be created from the selected image.

The following screen shows that the host profile should be attached directly to the cluster, as the cluster itself already has an image created for it.

This image profile can now be assigned to hosts. Click Next.

The image and host profiles will now be created automatically from the selected cluster.

Click on the Finish button for the following window to appear onscreen, confirming the rule addition as well as image and host profile creation.

Note that the rule is not yet activated, but you can activate it in the window that appears below by clicking on Activate/Deactivate Rules.

Note that the status in the following screen is Inactive.

Clicking on the Activate/Deactivate Rules will make the following screen appear.

Select the rule you want to activate from the list, then click on the Activate button when it is highlighted.

Now that it is in the top list, click on the OK button to commit this change.

Note that the status in the following screen is now Active and the Edit and Recreate Image Profile options are greyed out.

VMWare Update Manager (VUM)

VUM is a vSphere 6.7 tool that can be used to carry out tasks including but not limited to the following items.

• Upgrade and patch ESXi hosts

• Install and update third party software on hosts

• VMWare Tools update on a VM

• Patch updates on a VM

VUM requires network connectivity to the vCenter Server and must be installed on its own instance or machine.

Deployment Types

It can be installed depending on the vCenter deployment type as follows.

Single vCenter

VUM must be installed on a per vCenter server basis i.e. only one instance for a standalone vCenter Server

Multiple vCenters

If there is a cluster of vCenter Servers, then an individual VUM server must be installed for each vCenter Server in the cluster. Each VUM instance can only update VMs that are provisioned and managed by its own vCenter Server.

Installation

VUM for vCenter Server must be installed on a Windows machine. VUM for VCSA does not need a separate installation, as it is installed automatically alongside VCSA and exists as a service on the same VM. The VUM client is a plugin that runs on the vSphere Web Client (Flex) and the vSphere Client (HTML5). The VUM client is automatically enabled when the VUM server component is installed on Windows or after the VCSA deployment. The vSphere Client is preferred over the vSphere Web Client version, as the latter relies on the deprecated Adobe Flash Player. There is also an Update Manager Download Service (UMDS) that can be used to download VUM metadata and binaries for a secured network that doesn’t have Internet access.

VUMs installed for different vSphere servers can be updated individually and the changes will not affect any other VUMs. By default, VUM has its own configuration properties set that can be used out of the box. Alternatively, you can configure VUM if you have the relevant privileges to do so. These would have been assigned to your account by a relevant vCenter Server administrator.

Using the vSphere Client, go to Home > Update Manager > Settings tab to start changing the current VUM’s settings.

These settings are as follows.

• Connectivity: port, IP, DNS

• Network: port

• Download Sources: configure requirements for the download of patches and extensions from the Internet, a UMDS shared repository or a zip file

• Proxy: if downloads need to traverse a proxy server on the network

• Checking for Updates

• Configuring and Viewing Notifications

• Host and Cluster Settings

• Snapshots

• Patch Repository Location

• Update Download Task

• Privileges

Managing DRS Resources

DRS has been briefly discussed earlier when defining the various aspects of vSphere. But we will now look at the management of resources using DRS in this section. After deploying VMs and their network infrastructure, we now need to ensure that the resources dedicated to their respective functions are adequate for that purpose. Several features can be used to do this depending on the number, size, scale and location of the VMs being managed. These tools are provided by the previously described DRS function. The main DRS resource management options are as follows.

• Shares

• Reservations

• Limit Parameters

• Resource Pools

These are discussed in further detail in the following sections.

Shares

This is a type of resource allocation setting used to determine how much CPU, memory and storage a VM is entitled to and can therefore use. A share determines the relative importance of a VM or its resource pool i.e. if a VM has twice as many shares as another VM, it is allowed to consume twice the amount of resources from the resource pool if both VMs request them. Four types of shares are provided for VMs and resource pools (to change priority compared to siblings) as follows.

• Low

• Normal

• High

• Custom

During resource contention, root pool available resources are shared amongst the children based on their share values. Note that the cluster is the root resource pool.

Reservations

This is also a type of resource allocation setting. It is used to guarantee a minimum amount of CPU, memory and storage that a VM is assigned from a resource pool. A VM will not be started unless this reservation can be met, for example when other VMs are contending for the same resources and these resources are limited or constrained.

Limit Parameters

This is also a type of resource allocation setting. It sets the upper boundary of CPU, memory and storage that can be allocated to a VM. Resources allocated to a VM by vCenter can be higher than its reservation but cannot exceed its limit, even if there are abundant resources in the resource pool or on the system.

Resource Pools

Resource pools are a way of providing dedicated resources to a group of VMs. This is more suited to large-scale resource management than the previously described options in this section. These resources can be managed by cluster and created in an ESXi host or a DRS-enabled cluster. Resource pools enable the centralised configuration of resources that can be applied to all VMs, instead of having to configure these individually on each VM.

They work by giving additional resources to VMs that are running out of them from the relevant assigned Resource Pool. For instance, suppose VM1 (running critical services) is assigned to a Resource Pool called High and VM2 (running non-critical services) to a Resource Pool called Normal. If memory or CPU resources run low, VM1 would be allocated further resources from High, whereas VM2 would remain constrained by Normal.

The following are the steps necessary to add and configure a Resource Pool.

Click on Menu then Hosts and Clusters as shown in the following window.

In the window that appears, right click on the relevant ESXi host to get the following submenu, then click on its New Resource Pool option.

(Reminder: a VM’s allocated CPU, memory and storage are always greater than or equal to its Reservation but less than or equal to its Limits.)

Alternatively, select the ESXi host then select its Resource Pools tab as shown below.

The following window will appear.

Set up the resources shown according to the requirement (such as the resource pool name, CPU and memory reservations) then click on the OK button. We have left the defaults in the example shown.

The new resource pool will now be listed in the window as shown below.

Adding VMs to a Resource Pool

This can be done by right-clicking on the Resource Pool and then clicking on New Virtual Machine. In the window that appears, follow the previously shown instructions for adding a new VM and it will be added directly to the Resource Pool.

Network Efficiency, DPDK and SR-IOV

Network setup is a key part of any modern virtualised infrastructure. The sheer number and type of VMs and associated services implemented means that a key focus must be placed on the assignment of networking resources. Items such as assigned bandwidth, routing, switching and other methods must be reviewed, as tolerances available in physical infrastructure do not always apply to a virtualised environment.

For instance, networking input/output (I/O) virtualisation methods and strategies such as SR-IOV and DPDK were developed to distribute the burden of handling network traffic away from VM resources to dedicated network equipment, whether that be physical or virtual. Whilst on this topic, we will briefly discuss SR-IOV and DPDK for your awareness before continuing with VMware networking’s implementation.

Network Virtualisation Techniques

These are tools and functions that enable more efficient use of the available networking resources in vSphere. This section covers the following virtualisation techniques.

• SR-IOV

• DPDK

These are discussed further below.

Single Root Input/Output Virtualisation (SR-IOV)

SR-IOV is a means of virtualising a physical PCIe interface into multiple virtual PCIe interfaces, so a single root port looks as though it has a number of different interfaces to the underlying hypervisor. In SR-IOV terminology, the physical interface is called a PF or physical function, and it consists of multiple VFs or virtual functions. It is useful for workloads with high packet rates or very low latency requirements. The VFs do not carry any configuration of their own; this is to avoid them being reconfigured into something that might not fit within the PF’s actual physical specifications. Therefore, the OS and hypervisor must have an awareness of SR-IOV and of both the PF and VFs (whilst acknowledging that VFs are not physical and thus not configurable).

SR-IOV removes the hypervisor from the data path, so the VNF accesses the physical NIC directly and achieves close to line-rate throughput. Prior to SR-IOV, PCI passthrough was the fastest way to present the NIC to the OS without involving the hypervisor; the VM would think it was directly connected to the NIC, but each VNF would exclusively lock down the NIC it was using. SR-IOV solves this issue by defining a standard way of virtualising the PCIe devices on a NIC. Each PCIe device is split into multiple Virtual Functions (VFs) for presentation to the VNFs. The VFs on a NIC belong to a Physical Function (PF), which itself represents the actual NIC being virtualised for usage. SR-IOV extends the PCI Express (PCIe) specification. An esxcli example of enabling VFs on a host follows the next two bullets.

• PFs are PCIe functions that configure/manage SR-IOV and move data in/out of VMs

• VFs have little configuration in SR-IOV
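As mentioned above, the following is a hedged esxcli sketch of enabling VFs on an ESXi host by setting the NIC driver module’s max_vfs parameter; the module name (ixgben) and the VF counts are example values that depend on your NIC and driver, so check the VMware and vendor documentation for your hardware first.

# request 8 virtual functions on each of two ports served by the ixgben driver (example values)
esxcli system module parameters set -m ixgben -p "max_vfs=8,8"
# confirm the parameter has been stored, then reboot the host for the VFs to appear
esxcli system module parameters list -m ixgben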

The VM’s adapter exchanges data with the relevant VF, which communicates with the PF, and the PF driver sends the data to the host switch PF. Network configuration is based on the active policies for the port holding the VMs. The workflow which implements this functionality can be summarised as follows.

• Guest OS requests a config change on the VF

• The VF forwards the request to the PF through a mailbox system

• The PF driver checks the config request with the virtual switch

• The virtual switch verifies the config request against the policy on the port with which the VF-enabled VM adapter is associated

• The PF driver configures the VF if the new settings comply with the VM adapter port policy

Despite the considerable benefits of using SR-IOV, it does affect the usage of several of vSphere’s features, so its usage must be carefully considered. Items such as the physical host, its NIC, the guest OS, the VF driver in ESXi for the physical NIC and the VF driver in the guest OS must be carefully reviewed to ensure they will work if this feature is implemented.

A typical situation would be the VF driver trying to change the MAC address, which would not be allowed if not permitted by the port or port group’s applicable security policy. The guest OS would think that it has changed but the underlying physical infrastructure will obviously still be using the original MAC address. This could lead to the VM not getting an IP address amongst other issues caused by this inconsistency.

Further information on VMware SR-IOV support can be found at the following link.

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-E8E8D7B2-FE67-4B4F-921F-C3D6D7223869.html

The following features were not available on VMs using SR-IOV at the time of writing this book. This may have changed, so consult VMware’s documentation before implementing it in any solutions.

• vSphere vMotion

• Storage vMotion

• vShield

• NetFlow

• VXLAN Virtual Wire

• vSphere High Availability

• vSphere Fault Tolerance

• vSphere DRS

• vSphere DPM

• Virtual machine suspend and resume

• Virtual machine snapshots

• MAC-based VLAN for passthrough virtual functions

• Hot addition and removal of virtual devices, memory, and vCPU

• Participation in a cluster environment

• Network statistics for a virtual machine NIC using SR-IOV passthrough

Check if the device virtual and physical NIC are supported by this feature by visiting the VMware Compatibility Guide at the following link.

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io&details=1&deviceTypes=6&pFeatures=65&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc

DPDK

The Data Plane Development Kit or DPDK is a development kit consisting of libraries aimed at accelerating packet processing across various CPU architectures. It is an open-source software project managed by the Linux Foundation. Its set of data plane libraries and network interface controller polling mode drivers provide TCP packet processing offloading capabilities from the OS kernel to processes running in user space. This frees up the kernel and achieves higher computing efficiency and packet throughput than would be achieved by using interrupt driven processing in the kernel. DPDK is a set of User mode drivers used to increase traffic throughput on network cards. Normal traffic goes from the NIC > KERNEL > APPLICATION whereas DPDK totally bypasses the KERNEL. This increases speed as the Kernel uses interrupts i.e. there is a kernel interrupt to process packets as they arrive as well as the required context switch from Kernel to User space.

As DPDK bypasses the Kernel a) there is no interrupt (as User space uses the poll mode drivers instead) and b) there is no context switch which itself takes a fraction of CPU.

Running DPDK on an OVS

OVS stands for Open vSwitch and can work with and without DPDK. OVS sits in the hypervisor, and traffic can go from one VNF to another via the OVS itself. This technology was not designed for the extremely high packet rates seen in telco networks, and it has the same bottleneck speeds as standard Linux because it resides in the Kernel. Bypassing the Kernel can be done using a DPDK forwarding path, creating a User space vSwitch which uses DPDK internally for packet forwarding. This increases the OVS performance as it runs entirely in User space.

Running DPDK on VNFs

DPDK can also run in the VNF i.e. there could be an OVS running DPDK in User space and DPDK also running on the VNF behind its vNIC.

DPDK vs SR-IOV

Intel did research on both to determine the benefits in different deployment scenarios. The findings are summarised below.

• East-West: DPDK outperformed SR-IOV

• North-South: SR-IOV outperformed DPDK

VMware Switching

The importance of network switching must be emphasized in VMware implementations. There are two main types of switches that can be configured in VMWare as follows.

• Standard Virtual Switch

• Distributed Virtual Switch

Both switches provide the following functions.

• Forwarding of L2 frames

• VLAN segmentation

• 802.1q encapsulation support

• NIC teaming of multiple uplinks

• Outbound traffic shaping

• Cisco Discovery Protocol (CDP) support

In addition, the Distributed Virtual Switch has some more advanced features. These are both described in further detail below.

Virtual switches contain two planes required for their function.

• Management

• Data

These are described further below.

Management

This is tasked with the actual configuration management of the virtual switch, ensuring that policy settings and requirements are kept on the device.

Data

This deals with the functions required to implement and ensure the transfer of data using the virtual switch.

Standard Virtual Switch (vSS or vSwitch)

This is the default type of switch available in vSphere and it can only serve a single ESXi host at a time. This is basically a standalone switch available on each ESXi host. Like a physical switch, it works at Layer 2 and forwards frames to other switchports based on MAC address. Each vSwitch has both the management and data planes described previously, so can be managed as a standalone device from the ESXi host.

Standard switch features such as VLANs and port channels are also supported. They can be connected to an ESXi host’s physical NICs as uplinks to communicate with external networks. They are configured at the ESXi host level and therefore each vSS belongs to an ESXi host; thus they are called host-centric switches. They provide the following connectivity.

• VM to VM within an ESXi host

• Between VMs on different ESXi hosts

• Between VMs and physical devices on external networks

• vMkernel to other networks for features such as vMotion, iSCSI, NFS and Fault Tolerance logging

Each logical port on a vSwitch is a member of a single port group. A vSwitch is created by default when the ESXi host is installed.

The management IP address assigned to an ESXi host is the first VMkernel port created on the default vSwitch0 of the ESXi host.
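From the ESXi Shell (enabled earlier in the kickstart example), the default vSwitch0 and its VMkernel port can be inspected with esxcli, for example as follows.

# list the standard virtual switches with their uplinks and port groups
esxcli network vswitch standard list
# show the VMkernel interfaces and their IPv4 addressing (vmk0 carries management by default)
esxcli network ip interface ipv4 get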

Standard switches also support features such as the following.

• Outbound traffic shaping

• NIC teaming

• Cisco Discovery Protocol (CDP) support

• Security policy application

Each switch can have the following settings.

• 4096 switchports per ESXi host

• 1016 active ports per ESXi host

• 512 port groups per switch

Note that a router is required to move data between machines on different IP subnets. But the vSS can transfer data between VMs on the same IP subnet and ESXi host.

An uplink port group consists of uplink ports, which connect to vmnics.

E.g.

Uplink portgroup > uplink port 0 > vmnic0 > physical switch

Uplink portgroup > uplink port 1 > vmnic1 > physical switch

Note that these port groups will not be visible on any other vSwitch on other ESXi hosts. This vSwitch will be managed in its entirety on the local ESXi host and if the same port group configuration is required for VMs on other ESXi hosts, this must be configured on that ESXi host. If a VM is vMotioned to another ESXi host and that host does not have the required port group configuration, this VM might not be able to connect. From this it should be clear that, because the management and data planes reside on the same device, the Standard Switch is not suitable for larger environments that need considerable management at scale. This is where the Distributed Virtual Switch is effective.
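Because the same port group has to be recreated by hand on every ESXi host that uses a standard switch, it is common to script the configuration per host. A minimal esxcli sketch is shown below; the switch name, uplink, port group name and VLAN ID are example values.

# create a standard vSwitch, attach a physical uplink and add a VLAN-tagged port group
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitch1 -p Production
esxcli network vswitch standard portgroup set -p Production --vlan-id 10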

Distributed Virtual Switch (vDS)

The Distributed Virtual Switch is only available with vSphere Enterprise Plus or vSAN licencing.

The vDS splits the management and data planes described previously. This means that all of the management functionalities can reside on the vSphere server that is used to manage all of the relevant ESXi hosts. The data plane itself remains local to the ESXi host. This means that the vDS can serve multiple ESXi hosts at a time. This split of the management and data planes means that no vDS traffic goes across vCenter, so there are no interruptions if the vCenter server is unavailable.

The data plane component that is configured on the ESXi host is called the host proxy switch.
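On an ESXi host that has been added to a vDS, the local host proxy switch can be viewed from the ESXi Shell, for example as follows.

# list the distributed switches this host participates in, with their uplinks and client ports
esxcli network vswitch dvs vmware list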

The vDS is able to split the management and data functions with the following features.

• Uplink Port Group

• Distributed Switch Port Group

These are described below.

Uplink Port Group

This is also known as the dvuplink port group. It is created when the vDS is created and faces the physical host connectivity. The dvuplink port group is an abstraction that supports the vDS’s distributed nature. It is via this dvuplink that the ESXi host’s physical uplinks can be mapped to an uplink port group ID. This uplink port group can then be configured. Any load-balancing or resilience configuration on the dvuplink will be received by the ESXi host proxy switch.

Distributed Switch Port Group

By default, ESXi hosts have a standard vSwitch installed when they are built. But it is possible to create a switch connecting all of the ESXi hosts that is controlled centrally by the VCSA. This allows for advanced features such as port-mirroring, traffic filtering, link aggregation and private VLANs. This type of switch is called a Distributed Virtual Switch.

A distributed switch port group faces the VM connectivity and is also a means of passing VMkernel traffic. The vDS port groups have network labels, similar to those on a vSS port group, that are unique within each datacentre. Policies such as teaming, resilience, VLAN configuration, traffic shaping, security and load balancing are configured on the vDS port groups. The vDS port group configuration is applied at the vSphere level and propagated to all of the virtual machines on the various ESXi hosts via the ESXi host proxy switches. A more detailed list of its features is shown below.

• Datacentre level management

• Network I/O control

• Link Aggregation Control Protocol (LACP) support

• Port state monitoring

• Port mirroring

• Traffic filtering & marking: ACLs, QoS, DSCP setup

• Bi-directional traffic shaping

• Configuration backup & restore

• Private VLANs

• Netflow

• Network health-check features

• Advanced network monitoring and troubleshooting

• NIOC, SR-IOV, BPDU filter

• Netdump

• VM backup and restore

• VM port blocking

• NSX-T support

• Tanzu support

It is effectively a combination of multiple, per-host virtual switches into a single centralised switch fabric. To summarise, a Distributed Virtual Switch is a virtual switch that connects all of the ESXi hosts in the control plane of the VCSA architecture. It does this by connecting to each of the default virtual switches. If a port group is created on the Distributed Virtual Switch, an equivalent port group is created on all of the ESXi virtual switches as well. The virtual switches are the I/O plane in this setup, responsible only for data transmission, leaving traffic flow determinations and other decisions governing the utilisation of the network and its resources to the Distributed Virtual Switch.

ACLs can be configured to protect the VMs or just specific port groups. This is done using configurable rules, which govern access based on the following criteria.

• MAC source and destination address

• IP protocol type, source IP, destination IP, vMotion, traffic management, etc

A rule definition itself contains at least one of these criteria and an action, which is then used to determine the priority of the traffic being governed by the rule. These rules are processed by the VMkernel for efficiency.

Private VLANs provide for more efficient and scalable network operations. They consist of a primary VLAN and multiple secondary VLANs.

The following diagram summarises the interaction between the various components i.e. the vDS, its port groups, the ESXi host port groups and vSwitches.

The management/control plane is tasked with centralised management of the various features such as port mirroring, private VLANs, inbound filtering and other traffic and switch management features. A folder called .dvsData is created on the ESXi datastore if any of its virtual machines gets connected to the DVS.

The data plane actions are carried out by the ESXi hosts themselves.

Network Types

There are two types of networks as follows.

• Host Only

• Network Connected

These are discussed further below.

Host Only

This allows VM communication within the switch. It cannot route traffic and only caters for layer 2 reachable communication for any IP addresses on the local network. This is done by creating a vSwitch and not adding any physical ports to it.

Network Connected

This is done by giving a vSS or vDS at least one physical adapter. VMs can communicate outside of the virtual network and this network can carry the ESXi communication and reach DMZ and external destinations.

Port Group

This is a combination of similar ports that have a single purpose and allows VMs or virtual services to connect, e.g. management. It is similar to a line card on a physical switch. Port groups can be assigned VLAN IDs if necessary.

VMKernel Interface

A VMkernel interface is a routed interface that connects the virtual switch, and the ESXi host behind it, to the outside world on the management network. This adapter is used for communication with external VMware ESXi hosts for services such as vMotion, storage, Fault Tolerance, management and replication traffic. It is assigned an IP address, can have a VLAN tag and an optional IPv6 address, and a host can have multiple VMkernel ports with a single default gateway shared by all VMkernel ports on the host. vmk0 is always the first VMkernel adapter for traffic. A VMkernel interface is only connected to the virtual switch, i.e. it is not configured on a VM. Note that the management network cannot be assigned to a VM in ESXi.
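A hedged esxcli sketch of adding a further VMkernel adapter and tagging it for vMotion is shown below; the interface name, port group name and addressing are example values, and the tag command assumes a reasonably recent ESXi release.

# add a new VMkernel adapter on an existing port group and give it a static IPv4 address
esxcli network ip interface add -i vmk1 -p vMotion-PG
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.20.11 -N 255.255.255.0
# tag the new adapter so that it carries vMotion traffic
esxcli network ip interface tag add -i vmk1 -t VMotion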

Uplink Port Group

This is a port group used to connect a physical adapter to multiple ESXi hosts using a single vDS. This enables connectivity from VMware to an external, physical network.

The following diagram shows how the uplink port groups connect the external network to the VMs connected to the example’s production and test port groups. The VMKernel interface is configured with an IP address and set up to use the vMotion service, so that the VMs in the production and test port groups can be migrated using vMotion on this network.

Installing a Distributed Virtual Switch

This can be done as follows. Right-click on the datacenter icon in the menu system on the left, select Distributed Switch and then New Distributed Switch as shown below.

Click on the New Distributed Switch option and the following window will now appear. Type in the distributed virtual switch’s name in the Name field and click Next.

On the next screen, select a distributed switch version compatible with your deployed ESXi versions and click Next.

Configure the number of uplinks, network I/O control and default port group availability. Type in a name for the default port group then click Next.

Review the settings in the following screen and if they are all OK click Finish to complete the setup.

You can click on the datacentre option, Networks tab and then the Distributed Switches link to view the distribution switch.

Note that an Uplink Port Group connecting this vDS to the outside world is also created automatically and can be viewed by clicking on its link.

Click on the Distributed Port Groups links to view the configured port group.

Now that the vDS has been configured, ESXi hosts can be added to it by right clicking on the relevant vDS and then clicking on its menu’s Add and Manage Hosts option.

The onscreen dialogue will now present options to add the relevant hosts from a tickbox list. Select a host, and then in the window that appears with options to select a physical adapter, select the required adapter.

Then select an uplink from the screen that appears after that. You can also select the Apply this uplink assignment to the rest of the hosts checkbox so these settings are duplicated.

Then, in the Manage VMkernel adapters screen, select the required adapters that need migrating to the new vDS.

The next screen is the Migrate VM networking window; select any VMs that need migrating to the new vDS.

The next window shows a Finish button; click it to complete the setup.

Conclusion

That’s it for now folks. Hopefully this has been a useful ‘reminder’ of some key vSphere concepts and will continue to be useful to you as a serious IT professional. We are constantly striving to enhance the Big Little Book series, so let us know if there are any topics you would like added to the next edition of this book by sending an email to info@gridlockaz.com.

Thanks for reading and wishing you all the best in your career pursuits. Take care.
