I/O Hypervisor and Memory Externalization


I/O consolidation is a new paravirtual I/O model providing a centralized, scalable facility for handling I/O services. It decouples I/O from computation on the machines hosting the VMs and shifts the processing of the I/O to a dedicated server (the I/O hypervisor). Firewall, DPI (deep packet inspection), and block-level encryption are examples of such I/O services. These services can consume significant CPU resources; consolidating them on a dedicated server therefore improves CPU utilization and accommodates changing load conditions in which demand from different hosts fluctuates.

The software integrates and works with OpenStack.

This page provides all the information needed to install and use the outcomes of this project.

Note that the code is open source. 

Page Structure:

  • Background and Goals
  • Demo
  • Manuals and Source Code


Input/Output Externalization

Input/output (I/O) virtualization is a methodology to simplify management, lower costs, and improve performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper-layer protocols from the physical connections. One of the objectives of this work package is to externalize I/O resources and consolidate all of them in a single dedicated appliance.

Memory Externalization

Memory externalization means running a program with part (or all) of its memory residing on a remote node, which improves VM memory efficiency. Orbit provides memory externalization APIs and protocols.

  • Post-copy code is being made available to external parties.
  • Progress has been made towards upstreaming the code.
  • The kernel changes are now in the upstream Linux kernel.
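The post-copy idea behind memory externalization can be illustrated with a small simulation: pages live on a remote node and are pulled over only when first touched. This is a minimal sketch in plain Python; the real Linux mechanism is the kernel's userfaultfd facility, and all class and method names below are hypothetical.

```python
# Illustrative sketch only: simulates post-copy-style demand paging.
# The actual implementation relies on Linux userfaultfd; these names
# are made up for illustration.

PAGE_SIZE = 4096

class RemoteMemory:
    """Stands in for the remote node holding the authoritative pages."""
    def __init__(self, num_pages):
        self._pages = {i: bytes([i % 256]) * PAGE_SIZE for i in range(num_pages)}

    def fetch(self, page_no):
        return self._pages[page_no]

class ExternalizedMemory:
    """Local view: pages are fetched from the remote node on first access."""
    def __init__(self, remote):
        self._remote = remote
        self._cache = {}   # locally resident pages
        self.faults = 0    # number of remote fetches ("page faults")

    def read(self, addr):
        page_no, offset = divmod(addr, PAGE_SIZE)
        if page_no not in self._cache:
            self.faults += 1  # miss: pull the page from the remote node
            self._cache[page_no] = self._remote.fetch(page_no)
        return self._cache[page_no][offset]

mem = ExternalizedMemory(RemoteMemory(num_pages=8))
mem.read(0); mem.read(1)   # same page: only one remote fetch
mem.read(PAGE_SIZE)        # different page: second remote fetch
print(mem.faults)          # 2
```

The point of the sketch is the access pattern, not the transport: only touched pages cross the network, so a VM can start running before all of its memory has arrived.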

Availability of the developed mechanisms in OpenStack

  • Design and integration of OpenStack management with the various components have been completed.
  • Memory consolidation components integrated with Libvirt and OpenStack cloud management.
  • I/O hypervisor integrated with OpenStack management via a new nova-IORCL module.

Manuals and Source Code

Split I/O assumes layer 2 (Ethernet) connectivity between the I/O hypervisor and the virtual machines. For best performance, a virtual machine can use a directly assigned network device to communicate with the I/O hypervisor, but this is not mandatory: the guest can use any other virtual network device, provided it can communicate over layer 2 with the I/O hypervisor. Additionally, the guest drivers are agnostic to the underlying hypervisor (e.g., ESXi, Xen) and even work on bare metal.
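Because the only requirement is layer-2 reachability, a Split I/O message is simply a payload behind an Ethernet II header. The following sketch frames such a message; the EtherType value and the field layout are assumptions for illustration, not the project's actual wire format.

```python
# Minimal sketch of framing a message directly over layer 2 (Ethernet).
# SPLIT_IO_ETHERTYPE is a placeholder taken from the IEEE "local
# experimental" range, not the project's real protocol number.
import struct

SPLIT_IO_ETHERTYPE = 0x88B5  # IEEE local experimental EtherType

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Prepend a 14-byte Ethernet II header to the payload."""
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    header = struct.pack("!6s6sH", dst_mac, src_mac, SPLIT_IO_ETHERTYPE)
    return header + payload

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"read block 42")
print(len(frame))  # 14-byte header + 13-byte payload = 27
```

On Linux, such a frame would be sent through a raw `AF_PACKET` socket bound to the chosen interface; no IP configuration is needed on either side.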

Split I/O was originally implemented using a proprietary lightweight protocol running directly over layer 2 rather than TCP, which made integration with existing network infrastructure difficult. To address this, we decided that the best course of action was to mimic a legitimate TCP/IP connection. This includes:

  1. a 3-way handshake
  2. sequence and acknowledgment number handling for each packet
  3. populating the TCP and IP headers with valid data, such as source and destination IP:port pairs.

Although it looks like a real TCP stream, it doesn't incur the overhead associated with TCP/IP: the TCP stack is not used on either end, neither in the I/O hypervisor nor in the VM.
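Step 3 above amounts to hand-crafting headers that middleboxes will accept. The sketch below builds a valid-looking TCP SYN header, including the standard Internet checksum computed over the IPv4 pseudo-header. The addresses, ports, and sequence numbers are made up for illustration; this is not the project's actual implementation.

```python
# Hedged sketch: populating a TCP header with valid-looking fields so
# the stream passes as legitimate TCP. All concrete values are
# illustrative placeholders.
import socket
import struct

def checksum(data: bytes) -> int:
    """Standard 16-bit one's-complement Internet checksum."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_tcp_header(src_ip, dst_ip, src_port, dst_port,
                     seq, ack, flags, payload=b""):
    """TCP header (no options), checksummed over the IPv4 pseudo-header."""
    offset_flags = (5 << 12) | flags  # data offset = 5 words, no options
    hdr = struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                      offset_flags, 64240, 0, 0)  # window, csum=0, urgent
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, socket.IPPROTO_TCP, len(hdr) + len(payload))
    csum = checksum(pseudo + hdr + payload)
    return hdr[:16] + struct.pack("!H", csum) + hdr[18:]

SYN, ACK = 0x02, 0x10
syn = build_tcp_header("10.0.0.1", "10.0.0.2", 40000, 80,
                       seq=1000, ack=0, flags=SYN)
print(len(syn))  # 20-byte TCP header
```

Mimicking the 3-way handshake then means emitting this SYN, answering the peer's SYN/ACK with an ACK, and tracking sequence/acknowledgment numbers per packet thereafter, all without ever handing the packets to the host's TCP stack.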

Installation

More details and extended installation steps are described in the manual on GitHub.

GitHub repository for I/O Hypervisor