As many of you know, yesterday, vSphere 6.7 was released. There are some awesome updates, and while this isn’t an all-encompassing list of features, I wanted to round up the ones I felt were the most important (especially to my customers).
First and foremost, it should be noted that you cannot upgrade from vSphere 5.5 directly to 6.7. If you are currently on vSphere 5.5, this will be a two-step process – upgrading to 6.0 or 6.5 first, then 6.7. All hosts, VDS, Host Profiles, etc. need to be upgraded to at least 6.0 before upgrading to 6.7.
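To make the upgrade-path rule above concrete, here's a quick illustrative sketch (my own helper, not a VMware tool) that encodes which versions can jump straight to 6.7 and which need the intermediate hop:

```python
# Illustrative sketch (not a VMware tool): encode the supported upgrade
# paths described above to sanity-check a planned jump to 6.7.
SUPPORTED_DIRECT = {"6.0", "6.5"}  # versions that can upgrade straight to 6.7

def upgrade_path_to_67(current: str) -> list:
    """Return the sequence of upgrades needed to reach 6.7."""
    if current == "6.7":
        return []
    if current in SUPPORTED_DIRECT:
        return ["6.7"]
    if current == "5.5":
        # 5.5 cannot jump directly: hop through 6.5 (or 6.0) first.
        return ["6.5", "6.7"]
    raise ValueError("No supported path from " + current)

print(upgrade_path_to_67("5.5"))  # ['6.5', '6.7']
```

The same two-step logic applies to everything in the environment – hosts, VDS, Host Profiles – not just vCenter itself.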
This is also the last version to support the flash client as well as vCenter on the Windows platform.
Additionally, Windows 2003 and XP are no longer supported, and PLEASE make sure you check the compatibility guide to make sure your hardware (and more specifically, processors) are supported.
- vSphere 6.7 Release Notes
- VUM 6.7 Release Notes
- Important Information Before Upgrading
- vSphere 6.7 Launch Page
- vSphere Configuration Maximums
- VMware Product Interoperability Guide
- VMware Compatibility Guide
- Update Sequence for vSphere 6.7 and Compatible Products
vCenter Enhanced Linked Mode (ELM)
In vCenter 6.5, if you want to make use of ELM, you need to use external Platform Services Controllers (PSC) as it isn’t supported with embedded PSCs. Since the vCSA with an embedded PSC has the same performance and scalability as the external PSC topology, the only real reason to use external PSCs is if you need/want ELM.
If you have two vCenters with embedded PSCs, they are in their own respective SSO domains and have no knowledge of each other. Hence, you need to log in to each one individually to manage the objects and resources under its control. When you join multiple PSCs to the same SSO domain and connect vCenters to those PSCs, they are automatically aware of each other, and you can log in to either vCenter and see both in a single interface.
With vCenter 6.7, this is a thing of the past. You can now deploy up to 15 vCenter servers with embedded PSCs in the same SSO domain – having a single pane of glass, with a simplified architecture.
It should be noted, however, that if you're on vSphere 6.0/6.5 with ELM today, there is no way to move to 6.7 with embedded PSCs without rebuilding. Similarly, multiple vSphere 5.5 instances with embedded SSO, each in its own domain, cannot be upgraded into multiple VCSAs with embedded PSCs in the same SSO domain.
vCenter High Availability (vCHA) with ELM
One of the pain points of using external PSCs arises when you also want to enable vCenter High Availability (vCHA), because you then need to configure PSC HA and place the PSCs behind a load balancer. While this is fully supported and functional, it has its challenges depending on the situation, and the operational complexity is too much for some customers.
Now that we have the new functionality of using ELM with embedded PSCs, this makes our architecture far less complex!
Notice the difference between the 6.5 architecture (top) and the 6.7 architecture (bottom):
Along with having fewer VMs to support, you also get the added benefit of protecting ALL vCenter and PSC services, including VUM, Auto Deploy, etc.
When 6.0 was initially released, we could repoint vCenter across SSO sites; this functionality was later removed, but it's back in 6.7. For external PSC deployments, you can once again repoint your vCenter across sites.
Additionally, a new feature that is going to be extremely useful is the ability to repoint your vCenter across SSO domains! This is going to be huge for SSO domain consolidation and re-architecture for customers upgrading and/or needing to change their existing environment. This, of course, also applies only to external PSC deployments.
Enhanced vMotion Compatibility (EVC)
EVC mode, which allows you to migrate VMs between hosts with different CPU types, is a cluster-specific setting. vSphere 6.7 introduces Per-VM EVC, moving this functionality from the cluster level to the VM level. This allows customers to migrate VMs between environments with dissimilar CPUs, including to and from cloud providers, which will likely be running different CPUs too. This requires virtual hardware version 14.
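Since Per-VM EVC depends on virtual hardware version 14, it's worth flagging which VMs need a hardware upgrade before you plan on using it. A minimal sketch of that pre-check (the function and the VM inventory here are hypothetical examples, not PowerCLI):

```python
# Hypothetical pre-check: Per-VM EVC requires virtual hardware version 14,
# so flag any VMs that would need a hardware upgrade first.
def needs_hw_upgrade_for_per_vm_evc(hw_version: str) -> bool:
    """hw_version looks like 'vmx-13'; Per-VM EVC needs vmx-14 or later."""
    return int(hw_version.split("-")[1]) < 14

# Example inventory (made up for illustration):
vms = {"app01": "vmx-13", "db01": "vmx-14"}
to_upgrade = [name for name, hw in vms.items()
              if needs_hw_upgrade_for_per_vm_evc(hw)]
print(to_upgrade)  # ['app01']
```

Remember that upgrading virtual hardware requires the VM to be powered off, so fold that into your maintenance planning.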
vSphere Client (HTML5)
The vSphere Client now includes functionality we’ve all been waiting for:
- VMware Update Manager
- Host Profiles
- Content Library
VMware is definitely closing the gap on remaining functionality between the Flash and HTML5 clients. Stay up to date on what still needs to be added by checking back with their Functionality Updates page regularly. Also, don’t forget the fling is updated regularly as well so you can always test the latest and greatest features.
The PSC UI – where you can manage things like Identity Sources, Password Policies, and Certificate Management – has been moved into the H5 client as well, under the Administration area.
Faster ESXi Updates/Patches
Double (triple) reboot – GONE! When you upgrade an ESXi host with VUM, an initial reboot is performed to kick off the upgrade process. Once the upgrade is complete the host will reboot again. If you work with Cisco UCS a lot, like I do, and you are upgrading firmware as well, this turns into three reboots. With the RAM test and hardware initialization, especially in monster hosts, this takes forever. Now multiply that by eleventy-billion hosts – the only bonus is catching up on every season of every show you’ve ever missed on Netflix during this time.
When upgrading from ESXi 6.5 to 6.7, the initial reboot is gone, leaving you with only a single reboot. Or two, in my case, if I'm applying firmware as well.
vSphere Quick Boot is some new black magic that I seriously love. ESXi is restarted without rebooting the underlying server hardware. This is far faster than waiting for the entire host to reboot and do all of its self-tests. However long it takes your hosts to reboot, subtract that from your overall time to update. In my case, it seems like more than half the time is spent in the reboot cycle though I haven’t timed it yet.
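To put some back-of-the-envelope numbers on that "subtract the reboot time" claim, here's the arithmetic (the host count and reboot time are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope math: if Quick Boot skips the hardware reboot,
# total maintenance time drops by roughly reboot_minutes per host.
# The numbers below are illustrative assumptions, not measurements.
def time_saved_minutes(hosts: int, reboot_minutes: int) -> int:
    return hosts * reboot_minutes

# 32 hosts that each spend ~15 minutes in BIOS/RAM tests per reboot:
print(time_saved_minutes(32, 15))  # 480 minutes, i.e. ~8 hours back
```

On big-memory hosts where the RAM test dominates the reboot, the savings are even more dramatic.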
Fault Tolerance (FT)
- Up to 8 vCPUs (up from 4 in 6.5)
- Up to 128GB of RAM for an FT protected VM. (up from 64GB in 6.5)
Note: there is a maximum of 8 FT-protected vCPUs per host, however, so this limits you to a single 8-vCPU FT-protected VM per host, or two 4-vCPU FT-protected VMs, etc.
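The note above is simple integer division, but it's easy to sketch out when sizing a cluster (the helper is my own illustration, not a VMware tool):

```python
# Quick arithmetic for the per-host FT cap: with a maximum of 8
# FT-protected vCPUs per host, how many identically sized FT VMs fit?
FT_VCPU_CAP_PER_HOST = 8

def max_ft_vms_per_host(vcpus_per_vm: int) -> int:
    if not 1 <= vcpus_per_vm <= 8:
        raise ValueError("FT in 6.7 supports 1-8 vCPUs per VM")
    return FT_VCPU_CAP_PER_HOST // vcpus_per_vm

print(max_ft_vms_per_host(8))  # 1
print(max_ft_vms_per_host(4))  # 2
print(max_ft_vms_per_host(2))  # 4
```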
ESXi Host Memory
- 1TB of Persistent Memory (new in 6.7)
- 16TB total RAM (up from 12TB in 6.5)
ESXi Host Storage
- 4,096 FC paths per host (up from 2,048 in 6.5)
- 1,024 VMFS volumes per host (up from 512 in 6.5)
- 1,024 FC LUNs per host (up from 512 in 6.5)
- 512 vVol protocol endpoints (PEs) per host (up from 256 in 6.5)
ESXi Host Networking
- 16x 10Gb Ethernet ports
- 16x 20Gb Ethernet ports
- 4x 25Gb Ethernet ports
- 4x 40Gb Ethernet ports
- 4x 50Gb Ethernet ports
- 4x 100Gb Ethernet ports
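When planning host NIC layouts against the per-speed port maximums above, a quick validation sketch can save a design review headache (this helper and the example layouts are my own illustration, not a VMware tool):

```python
# Illustrative check: validate a host's planned NIC layout against the
# per-speed Ethernet port maximums listed above (speed in Gb -> max ports).
PORT_MAXIMUMS = {10: 16, 20: 16, 25: 4, 40: 4, 50: 4, 100: 4}

def nic_layout_ok(layout: dict) -> bool:
    """layout maps link speed in Gb to the number of ports planned."""
    return all(count <= PORT_MAXIMUMS.get(speed, 0)
               for speed, count in layout.items())

print(nic_layout_ok({10: 8, 25: 2}))  # True
print(nic_layout_ok({25: 6}))         # False: only 4x 25Gb supported
```

Note that the maximums are per speed; mixing speeds in one host is fine as long as each type stays under its own cap (check the Configuration Maximums tool for combined limits).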
vSphere Web Client
- Up to 80 simultaneous Flash Client Connections
- Up to 100 simultaneous H5 Client Connections
VMware Update Manager
- Up to 280 concurrent host scan/patch/upgrade operations
X-Post: Originally posted on Virtual Insanity.