After the new Virtual Volumes, the most "spoilered" feature of the new VMware vSphere 6.0 is probably the new vMotion, which allows VMs to move across the boundaries of vCenter Server, Datacenter objects, Folder objects and also virtual switches and geographical areas.

VMware vMotion was probably the first important virtualization-related feature that made VMware (and its products) important and, even more, that made the virtualization approach relevant: having VM mobility means handling planned downtime and also workload balancing. Now VMware reinvents vMotion to make it more agile and more cloud oriented: breaking the boundaries and going outside the usual limits makes VM mobility across clouds possible. Note that it is not (yet) possible to use this feature to live migrate to or from a vCloud Air service… but of course this is the first step towards doing that in the future. When all VM properties and policies can also "follow" the VM (and the VMware SDN and SDS approaches are going in this direction), it will really be possible to implement VMware's "one cloud" vision in an easy way.

The history of vMotion enhancements is quite long, but most of the interesting news appeared in vSphere 5.x:

- v5.0: with the Multi-NIC vMotion, support for higher latency networks (up to 10 ms) and Stun During Page Send (SDPS) features.
- v5.1: with vMotion also without a shared storage.

Now with vSphere 6.0 there are new vMotion scenarios:

- Cross vSwitch vMotion: allows VMs to move between virtual switches in an infrastructure managed by the same vCenter. This operation is transparent to the guest OS and works across different types of virtual switches (vSS to vSS, vSS to vDS, vDS to vDS).
- Cross vCenter vMotion: allows VMs to move across the boundaries of vCenter Server, Datacenter objects and Folder objects. This operation simultaneously changes compute, storage, network and vCenter (a minimal API sketch is shown below).
- Long Distance vMotion: enables vMotion to operate across long distances (in the vSphere 6.0 beta, the maximum supported network round-trip time for vMotion migrations is 100 ms).

Note that all those operations still require L2 network connectivity, simply because this is a VM migration that does not change the IP of the VM (otherwise we are talking about site DR scenarios, and in that case, for example, SRM could still be a solution). This seems a problem, especially for cases across datacenters (in some large environments an IP change can also happen across racks), but SDN (and, for VMware, NSX) is the solution to "virtualize" the network layer.
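To give an idea of what a Cross vCenter vMotion looks like when driven through the vSphere API, here is a minimal Python sketch based on pyVmomi: the RelocateSpec points at objects of the destination vCenter and carries a ServiceLocator that tells the source vCenter how to reach the target one. The hostnames, credentials, certificate thumbprint and object names (vc01.lab.local, vc02.lab.local, demo-vm and so on) are placeholders, and a real migration usually also needs a deviceChange entry per virtual NIC to remap the network adapters; this is a sketch, not a complete tool.

```python
# Minimal sketch of a Cross vCenter vMotion through the vSphere 6.0 API (pyVmomi).
# All hostnames, credentials, object names and the thumbprint are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim


def find_by_name(si, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)


ctx = ssl._create_unverified_context()  # lab-only: skip certificate validation
src_si = SmartConnect(host="vc01.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
dst_si = SmartConnect(host="vc02.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)

vm = find_by_name(src_si, vim.VirtualMachine, "demo-vm")
dst_host = find_by_name(dst_si, vim.HostSystem, "esx10.lab.local")
dst_cluster = find_by_name(dst_si, vim.ClusterComputeResource, "Cluster-B")
dst_datastore = find_by_name(dst_si, vim.Datastore, "datastore10")
dst_folder = find_by_name(dst_si, vim.Datacenter, "Datacenter-B").vmFolder

# The RelocateSpec references objects of the destination vCenter and carries a
# ServiceLocator describing how the source vCenter can reach the target one.
spec = vim.vm.RelocateSpec()
spec.host = dst_host
spec.pool = dst_cluster.resourcePool
spec.datastore = dst_datastore
spec.folder = dst_folder
spec.service = vim.ServiceLocator(
    instanceUuid=dst_si.content.about.instanceUuid,
    url="https://vc02.lab.local",
    sslThumbprint="AA:BB:CC:...",  # SHA1 thumbprint of the destination vCenter certificate
    credential=vim.ServiceLocatorNamePassword(
        username="administrator@vsphere.local", password="***"))
# A real migration usually also adds spec.deviceChange entries to map each
# virtual NIC to a portgroup that exists on the destination side.

WaitForTask(vm.RelocateVM_Task(spec=spec))
Disconnect(src_si)
Disconnect(dst_si)
```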
And also remember that a live migration requires that the processors of the target host provide the same instructions to the virtual machine after migration that the processors of the source host provided before migration. Clock speed, cache size, and number of cores can differ between source and target processors. However, the processors must come from the same vendor class (AMD or Intel) to be vMotion compatible. So in all those migration scenarios a common EVC baseline is usually required (note: do not add virtual ESXi hosts to an EVC cluster, because virtualized ESXi hosts are not supported in EVC clusters).

There are also additional requirements, both for hosts and VMs, but they are almost the same as in previous versions of vMotion (see the vCenter and Host Management Guide for more information):

- Each host must be correctly licensed for vMotion.
- Each host must meet the shared storage requirements for vMotion.
- Each host must meet the networking requirements for vMotion.

There are also several other improvements, starting with increased vMotion network flexibility: vMotion traffic is now fully supported over an L3 connection. It's curious that in the current documents (but they are still an RC and not the really final version), the best practices for vMotion networking paragraph now suggests using jumbo frames for best vMotion performance (let's see if this will be confirmed also in the final version).
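Since jumbo frames only pay off when they are enabled end to end (VMkernel port, vSwitch and physical switches), a quick read-only check of the vMotion VMkernel adapters can be done with pyVmomi, as in the small sketch below. The vCenter address and credentials are placeholders, and only the host-side MTU is inspected; the physical network still has to be verified separately.

```python
# Read-only sketch: list the VMkernel adapters selected for vMotion on each host
# and report their MTU, to see whether jumbo frames (MTU 9000) are configured.
# The vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate validation
si = SmartConnect(host="vc01.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)

hosts = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    # Ask each host which VMkernel interfaces are enabled for the vMotion service.
    cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
    if not cfg:
        continue
    selected_keys = set(cfg.selectedVnic or [])
    for vnic in (cfg.candidateVnic or []):
        if vnic.key not in selected_keys:
            continue
        mtu = vnic.spec.mtu or 1500  # an unset MTU means the default of 1500
        state = "jumbo frames" if mtu >= 9000 else "standard frames"
        print(f"{host.name}: {vnic.device} MTU {mtu} ({state})")

Disconnect(si)
```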