In June Xen reached version 3.4, introducing out-of-the-box support for Hyper-V and a series of enhancements aimed at making the platform a good client hypervisor.
At the beginning of this month Xen further progressed to version 3.4.1, just a maintenance release, but the truly interesting things are in the Xen 4.0 roadmap (our emphasis):
- RDMA Live Migration Support
- Dom0 kernel in Linux 2.6.30 or later
- Dom0 support for Marvell 6480 disk driver
- Pass-through of USB Controllers/Devices for PV Guests
- Fault Tolerance – Project Remus and/or Kemari
- Monitor, Limit, and Control Network Traffic Directed at DomUs
- Internationalization / Unicode Support
- Configure the Virtual Bridge like a Real Switch (e.g. control VLANs, port status)
- VLAN Tagging per NIC in the VM Config File (see the sketch after this list)
- Virtual Ethernet Switch
- Physical Xen Boot/Install Support via Native UEFI (pUEFI) and Virtual UEFI (vUEFI)
- Limit I/O for Individual Disks of a VM (similar to credit scheduler weights)
- Dynamic Memory Management for Overcommitting RAM
- PCI VGA Passthrough for VT-d (vendor cards like Nvidia, ATI, etc.)
- Full AMD IOMMU Support
- Online resizing of DomU Disks
- Cross-Compiling Xen and Modular Builds
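
To make the VLAN tagging item above more concrete, here is a minimal sketch of what a per-NIC tag could look like in an xm-style domain configuration file (which is plain Python). The vlan= key is our own invention for illustration; Xen 3.4 has no such option, and the final syntax may well differ:

```python
# Hypothetical xm-style domain config; the vlan= key is illustrative only.
name   = "web01"
memory = 1024
vcpus  = 2
disk   = [ 'phy:/dev/vg0/web01,xvda,w' ]

# One entry per virtual NIC, each carrying its own (hypothetical) 802.1Q tag.
vif    = [ 'mac=00:16:3e:00:00:01,bridge=xenbr0,vlan=10',
           'mac=00:16:3e:00:00:02,bridge=xenbr1,vlan=200' ]
```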
On top of this very interesting roadmap, Ian Pratt, the Xen CTO (as well as Xen.org Chairman, XenSource founder, and Citrix Vice President of Advanced Products), informally indicated a few areas where contributors are welcome. This list too contains a lot of precious details (our emphasis again):
- Xen will soon be including the OpenFlow vswitch developed under the openvswitch.org project. In order to integrate support for SR-IOV network hardware, we need a special kind of bond driver in the guest that initially routes traffic via the vswitch, but can then receive instructions from the vswitch to route individual flows to the direct hardware path (falling back to the normal software path via the vswitch if the SR-IOV VF gets unplugged). A schematic sketch of this bond behavior follows the list.
- Build on some of the existing work done in Cambridge to use Tungsten Graphics' Gallium3D as a device-independent and API-independent 3D remoting protocol.
- Get the blkback/netback drivers working in an HVM guest, effectively allowing domain0 to optionally be an HVM guest.
- Fully implement domain0 restartability, effectively enabling a dom0 reboot or upgrade without rebooting the rest of the system (there's been plenty of work done on this already, but it needs finishing off).
- Investigate how a hypervisor could best use large amounts of NAND flash memory (not just via a disk API, but as native flash).
- Deterministic replay for Xen (see the University of Michigan papers).
- Work on the ARM Xen port to bring it to the same level as the x86 port.
- Implement UBC's Remus for HVM guests and integrate it into the main Xen tree.
- Virtualize a GPU in a device-dependent fashion (everyone has been doing it in a device-independent fashion, but there may be big performance and fidelity wins to be had doing it in a device-specific fashion). Since the Intel GPU drivers are open source, it should be possible to do this on Intel GPUs.
- Extend Cambridge/UBC Parallax to implement content-addressable hashing to save disk space (see the second sketch after this list).
- Switch the PV SCSI over to using the netchannel2 ring protocol for improved performance.
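
The SR-IOV bond driver mentioned in the first networking item is easiest to picture as a small piece of decision logic. What follows is a schematic model in plain Python, not driver code: all class and method names are ours, invented for illustration. It only captures the behavior Pratt describes: default everything through the vswitch, steer individual flows to the VF when instructed, and fall back if the VF disappears.

```python
class Path:
    """Stand-in for a transmit path; a real driver would talk to netfront or a VF."""
    def __init__(self, name):
        self.name = name
    def send(self, packet):
        print(f"{self.name} <- {packet!r}")

class FlowSteeringBond:
    """Default all traffic through the vswitch; offload chosen flows to the VF."""
    def __init__(self, vswitch_path, vf_path=None):
        self.vswitch_path = vswitch_path   # always-available software path
        self.vf_path = vf_path             # direct SR-IOV path, may be absent
        self.offloaded = set()             # flow ids the vswitch told us to steer

    def offload_flow(self, flow_id):
        # Instruction from the vswitch: route this flow via the hardware path.
        self.offloaded.add(flow_id)

    def vf_unplugged(self):
        # SR-IOV VF hot-unplugged: fall back to the software path for everything.
        self.vf_path = None
        self.offloaded.clear()

    def transmit(self, flow_id, packet):
        if self.vf_path is not None and flow_id in self.offloaded:
            self.vf_path.send(packet)      # direct hardware path
        else:
            self.vswitch_path.send(packet) # default path via the vswitch

bond = FlowSteeringBond(Path("vswitch"), Path("sriov-vf"))
bond.transmit("flow-1", b"hello")          # goes via the vswitch
bond.offload_flow("flow-1")
bond.transmit("flow-1", b"hello")          # now takes the direct VF path
bond.vf_unplugged()
bond.transmit("flow-1", b"hello")          # falls back to the vswitch
```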
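Likewise, the Parallax item hinges on a simple idea: if disk blocks are addressed by a hash of their content, identical blocks are stored once no matter how many virtual disks reference them. Here is a minimal, self-contained sketch of that idea; Parallax's actual design (block-mapping trees, copy-on-write snapshots) is far more involved, and all names below are ours.

```python
import hashlib

BLOCK_SIZE = 4096

class BlockStore:
    """Content-addressed store: a block's key is the hash of its bytes."""
    def __init__(self):
        self.blocks = {}                    # sha256 digest -> block payload

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(key, data)   # duplicate content is stored once
        return key

    def get(self, key: str) -> bytes:
        return self.blocks[key]

# A virtual disk is then just an ordered list of block keys; two disks
# cloned from the same image share every unchanged block for free.
store = BlockStore()
disk_a = [store.put(b"\x00" * BLOCK_SIZE) for _ in range(3)]
disk_b = [store.put(b"\x00" * BLOCK_SIZE) for _ in range(3)]
assert disk_a == disk_b
assert len(store.blocks) == 1               # six logical blocks, one physical copy
```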
Only three major virtualization vendors currently rely on Xen: Citrix, Oracle and Novell.
Each one will try to innovate with enterprise-grade capabilities added on top of this "basic" feature set.
Customers can now have a better idea of where the three companies are going. The only problem is that probably none of them is ready to share release dates for some or all of the features above.