Virtualizing Intel® Software Guard Extensions with KVM and QEMU

This guide provides a step-by-step procedure for virtualizing Intel® Software Guard Extensions (Intel® SGX) using the Kernel-based Virtual Machine (KVM) virtualization module in the Linux* kernel with the QEMU* virtual machine monitor. This creates a virtual machine on your Intel SGX capable hardware, which can use Intel SGX in the guest operating system.

Requirements

To use Intel SGX in a virtual machine, you must meet the following requirements:

  • The host system must support Intel SGX.
  • Intel SGX must be enabled, either explicitly in the BIOS or via the software enabling procedure.
  • If you want to use Flexible Launch Control in guest systems, the hardware must also support the feature.
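A quick (though not definitive) way to check the first two requirements is to look for the sgx CPU flag. This is only a heuristic: older kernels may not report the flag even on capable hardware, in which case a tool such as cpuid is more reliable.

```shell
# Heuristic host check: look for the "sgx" CPU flag.
# A missing flag does not prove Intel SGX is absent -- older kernels
# do not report it -- but a present flag confirms support.
if grep -qw sgx /proc/cpuinfo; then
    echo "sgx flag present"
else
    echo "sgx flag not reported; check BIOS settings or use cpuid"
fi
```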

Note: Intel SGX support in KVM and QEMU is still under active development. Significant changes to the software can and do occur during each revision. This software should not be used in production deployments at the current time.

Installation Procedure

This guide assumes you are starting from a clean installation of Ubuntu* 18.04.2 LTS Server, but the steps here can be adapted to other Linux distributions.

Note that the current release of KVM-SGX is based on the Linux 5.x kernel. You will be building and installing the 5.0.0 kernel as part of this process.

Install Prerequisites

Install the software packages that are needed to build the kernel:

localhost:~$ sudo apt install fakeroot kernel-package libelf-dev build-essential libncurses-dev flex bison libssl-dev libfdt-dev libncursesw5-dev pkg-config libgtk-3-dev libspice-server-dev libssh-dev

Building KVM-SGX

Clone the kvm-sgx repository. This takes several minutes, even over a high-bandwidth connection.

localhost:~$ git clone https://github.com/intel/kvm-sgx
localhost:~$ cd kvm-sgx
localhost:~$ git checkout sgx-v5.0.0-r1

Prepare for the kernel build. Setting a shell variable that defines the build area makes this process less error-prone.

localhost:~$ opt="O=$HOME/build"

The O parameter specifies the output location of your kernel configuration and build. This allows you to build multiple kernels from the same source tree without them interfering with one another, and keeps your source tree clean.

With this convenience shell variable set, you are ready to configure the kernel.

localhost:~$ make $opt menuconfig

To enable Intel SGX support in KVM guests, you must enable the core functionality in the kernel from the Processor type and features menu. Scroll down to Intel SGX core functionality and select it. It is off by default if you are building from a fresh source tree. When you select this item, the menu expands and automatically selects Intel SGX Driver (this cannot be changed).

[Screenshot: Processor type and features menu]

Next, you need to ensure that KVM virtualization is enabled. From the top-level menu, go to the Virtualization menu and ensure the following features are selected. These should already be enabled as loadable modules (an “M” instead of an asterisk “*”), so you should not have to change anything here.

  • Kernel-based virtual machine (KVM) support
  • KVM for Intel® processors support
  • The virtio options

The other options are not required, but most of them may already be set.

[Screenshot: Virtualization menu]

Save your settings and Exit the kernel configuration screen.

The next step is to compile the kernel. A first-time build from the source tree is a lengthy process that can take a couple of hours, depending on your hardware.

localhost:~$ make $opt

Note that you can use the -j option with make for a parallel build that speeds up the compilation, but being too aggressive with this parameter can lead to build failures with cryptic error messages.
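As a sketch, a common starting point is one job per CPU core; if that fails, fall back to a smaller value:

```shell
# One make job per CPU core is a reasonable ceiling; scale back if the
# build fails with cryptic errors or the machine runs out of memory.
jobs=$(nproc)
echo "Building with ${jobs} parallel jobs"
# Inside the kvm-sgx tree you would then run:  make $opt -j"$jobs"
```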

When the build is finished, install the new kernel. You’ll need root privileges for this.

localhost:~$ sudo make $opt modules_install install

The installation procedure updates your GRUB boot configuration so that the new kernel boots by default. At this point, you are ready to reboot.

After rebooting, log in to verify the new kernel and Intel SGX support. First, the kernel version should be 5.0.0+, which you can check with uname.

localhost:~$ uname -r
5.0.0+

Verify that the Intel SGX driver has loaded by examining the kernel log. You should see a log entry for sgx, which prints the address range for the enclave page cache (EPC) in memory.

localhost:~$ dmesg | grep sgx
[    0.260212] sgx: EPC section 0x70200000-0x75ffffff

If you do not see an “sgx: EPC section” line, then your Intel SGX driver did not load. Possible causes are:

  • Intel SGX is not enabled in the system BIOS
  • Intel SGX is not supported by your processor
  • Intel SGX core functionality was not selected in the kernel configuration
  • An old kernel without the Intel SGX driver booted instead of the new one

If you see this line:

sgx: IA32_SGXLEPUBKEYHASHx MSRs are not writable

then your system supports Intel SGX, but not Launch Control. You’ll still be able to use Intel SGX in your guest systems, but Launch Control will not be available.

Building QEMU-SGX

Clone the qemu-sgx repository.

localhost:~$ git clone https://github.com/intel/qemu-sgx
localhost:~$ cd qemu-sgx

Check out the 3.1.0 tag. This is the release that is compatible with the 5.0.0 kvm-sgx kernel.

localhost:qemu-sgx$ git checkout sgx-v3.1.0-r1

The README file from the kvm-sgx repository states which qemu-sgx releases are compatible with a given kvm-sgx kernel.

The build procedure for QEMU is slightly different from that for most Linux applications. You need to create a build directory, then run QEMU’s configure script from within it. The following configure options are recommended (and in some cases, required):

The --enable-kvm option is not necessary if you are booted into your KVM-enabled kernel, but is provided here for completeness and clarity.

The --enable-spice option will let you use the libvirt package and virt-viewer to reach the console.

The --disable-git-update option prevents the QEMU build system from trying to pull in updated sources from the QEMU Git* repository. For the Intel SGX build it's recommended that you freeze the source code repository to ensure compatibility in the event a rebuild is necessary. The QEMU-SGX repository is an out-of-tree set of patches to the QEMU source code, which means the original QEMU package maintainers are not validating their updates against the changes needed by Intel SGX.

The --enable-curses, --enable-gtk, and --enable-vnc options provide additional flexibility for connecting to the console.

If you enable other options, you may need to install additional prerequisite packages.

localhost:qemu-sgx$ mkdir build
localhost:qemu-sgx$ cd build
localhost:build$ ../configure --disable-git-update --enable-kvm --enable-vnc --enable-curses --enable-spice --enable-gtk --target-list=x86_64-softmmu
localhost:build$ make

A quick functionality test is to run an empty guest virtual machine (VM) with minimal arguments, and force Intel SGX support on. If there are errors, the VM will refuse to start. (If everything is correct, the VM will start, but there is no OS so it will not have anything to boot. This is normal.)

localhost:build$ sudo x86_64-softmmu/qemu-system-x86_64 -nographic -enable-kvm -cpu host,+sgx -object memory-backend-epc,id=mem1,size=8M,prealloc -sgx-epc id=epc1,memdev=mem1

If Intel SGX is properly enabled on the host, you should see output similar to the following when the virtual machine starts up:

[Screenshot: output at VM start]

Hit Ctrl-A, then C to get the monitor’s command prompt, then type quit to exit.

If the virtual machine ran properly, you are ready to install QEMU. It will go into /usr/local/bin unless you changed the install prefix when you ran configure.

localhost:build$ sudo make install

Using Intel SGX in the QEMU Virtual Machine

There are two parts to enabling Intel SGX in a guest VM.

  • The SGX feature must be enabled.
  • An enclave page cache (EPC) must be defined.

Enabling Intel SGX in the VM

Adding +sgx to the -cpu option to QEMU enables Intel SGX in the VM. Depending on what CPU model you choose for your VM, QEMU may auto-enable SGX, but there’s less confusion if you explicitly add the sgx parameter.

An easy way to enable Intel SGX support is to pass the host CPU through to the VM rather than define a specific CPU model. This is the default behavior, but again, being explicit with QEMU options helps to eliminate confusion.

If your hardware supports Intel SGX Launch Control, then you can enable (or disable) Launch Control with the sgxlc parameter. By default, the virtual machine inherits the host machine’s configuration, but continuing with the theme of clarity, it is recommended that you explicitly define it.

The following options enable Intel SGX in the VM, but disable Launch Control:

-cpu host,+sgx,-sgxlc

To enable both Intel SGX and Launch Control:

-cpu host,+sgx,+sgxlc

If for some reason you want to disable SGX in the VM entirely, use the following:

-cpu host,-sgx

Allocating an Enclave Page Cache

The current version of the kvm-sgx kernel divides the EPC among the guests. You specify how much of the EPC to expose to each guest, which reduces the total amount of EPC available to other guests (and to the host system itself, if you choose to run Intel SGX applications on the host as well). If you have 96 MB of EPC on your host and you assign 16 MB to each virtual machine, then you’ll only be able to run (at most) six VMs at one time. This behavior may change in future releases of the kvm-sgx kernel.
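The capacity arithmetic above can be sketched as a quick calculation (the 96 MB and 16 MB figures are just the example values from this section):

```shell
# How many guests fit, given total host EPC and a fixed per-guest share?
total_epc_mb=96     # example: host EPC size in MB
per_guest_mb=16     # example: EPC assigned to each VM
max_guests=$(( total_epc_mb / per_guest_mb ))
echo "At most ${max_guests} concurrent SGX guests"
```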

To define an EPC range, you must allocate a custom QEMU memory object and assign it a unique ID, then provide the memory ID to the -sgx-epc option. The following QEMU options create and assign an 8-MB EPC to the VM:

-object memory-backend-epc,id=mem1,size=8M,prealloc -sgx-epc id=epc1,memdev=mem1

You can define multiple EPC segments in this manner. See the README file for the qemu-sgx repository for more information on defining EPC segments.
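Extrapolating from the single-segment syntax above, two EPC segments would look something like the following. The IDs mem2 and epc2 are arbitrary names chosen for this example; consult the qemu-sgx README for the authoritative syntax.

```
-object memory-backend-epc,id=mem1,size=8M,prealloc \
-object memory-backend-epc,id=mem2,size=8M,prealloc \
-sgx-epc id=epc1,memdev=mem1 \
-sgx-epc id=epc2,memdev=mem2
```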

Integrating with libvirt

While it is possible to run qemu-system-x86_64 directly, the lengthy QEMU command lines and complex options make that unwieldy. Libvirt provides a complete management suite that greatly simplifies virtual machine creation, execution, and maintenance. A usage guide to libvirt is beyond the scope of this tutorial.

Run the following to install libvirt:

localhost:~$ sudo apt install libvirt-bin libvirt-clients virt-manager

The installation process should add you to the libvirt group. At this point you will need to log out and then log in again so that the new group membership takes effect. The default configuration for libvirt requires that users be in the libvirt group to create and manage global VMs.

This command also installs the native QEMU package into /usr/bin. To use the qemu-system-x86_64 binary that you installed into /usr/local/bin, you’ll need to specify the path to the QEMU emulator manually after creating a virtual machine. It’s not recommended that you overwrite or replace the distribution-managed package, as any package updates will overwrite your custom-built binary.

Configuring AppArmor

AppArmor is a mandatory access control (MAC) system installed by default on Ubuntu 18.04. MACs constrain application capabilities at a finer granularity than traditional Unix* file permissions. The default configuration for libvirt does not allow execution of programs in /usr/local/bin.

If you are using AppArmor, you’ll need to add the following line to /etc/apparmor.d/local/usr.sbin.libvirtd:

/usr/local/bin/* PUx,

Reload the AppArmor profile for libvirtd by running apparmor_parser:

localhost:~$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd

Configuring QEMU in libvirt

The Intel SGX build of QEMU requires access to /dev/sgx_virt to assign EPC memory pages, and this access will be denied by libvirt’s cgroup controllers, which are enabled by default. You need to add /dev/sgx_virt to the list of devices required by virtual machines. Edit /etc/libvirt/qemu.conf and change the cgroup_device_acl list to include /dev/sgx_virt:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/sgx_virt"
]

Finally, QEMU needs to read and write to the /dev/sgx_virt device. Unfortunately, this device gets created with file mode 600 at boot time, and it’s owned by root, so QEMU has to launch as root. In /etc/libvirt/qemu.conf, set the runtime user to uid 0:

user = "+0"

By default, libvirt also tries to use a security driver for QEMU, and it chooses either Security-Enhanced Linux (SELinux) or AppArmor, whichever is available. This is set by the security_driver parameter. If you don’t want to use a security driver, set this parameter to “none”. Configuring SELinux profiles for libvirt’s security driver is outside the scope of this document.

If you are using AppArmor for the security driver, you’ll also need to modify /etc/apparmor.d/libvirt/TEMPLATE.qemu to read:

#include <tunables/global>

profile LIBVIRT_TEMPLATE flags=(attach_disconnected) {
  #include <abstractions/libvirt-qemu>
  /usr/local/bin/* PUx,
}

After making these changes, you’ll need to restart the libvirtd service:

localhost:~$ sudo systemctl restart libvirtd

Install a Test VM

Verify that your login session includes the libvirt group by running the groups command:

localhost:~$ groups
adm cdrom sudo dip plugdev lxd libvirt

If you want to be able to execute virt-manager or use the graphics consoles for your virtual machines, you’ll need a graphical desktop as well. If you haven’t already, install the desktop environment for Ubuntu, and (optionally) a VNC server, if you’ll be working remotely. Note that this step is strictly one of convenience; you can create and manage virtual machines in a command-line environment.

localhost:~$ sudo apt install ubuntu-desktop

Unfortunately, libvirt does not have direct support for the Intel SGX enabled version of QEMU. To add Intel SGX support to a virtual machine, you’ll first need to create the VM and then edit the XML by hand.

You have three main options for creating a VM.

Option 1: Create a blank VM on the command line

This method works entirely from the command line, though it does require a few manual steps. First, generate a unique UUID with the uuidgen command.

localhost:~$ uuidgen

Next, create a disk image to hold the VM. Using the QEMU disk image format is arguably the most versatile, and allows features such as VM snapshots and encryption. The following creates a 20GB disk image.

localhost:~$ qemu-img create -f qcow2 testvm.qcow2 20G
Formatting 'testvm.qcow2', fmt=qcow2 size=21474836480 cluster_size=65536 lazy_refcounts=off refcount_bits=16

Finally, create an XML file, which we’ll call testvm.xml, that defines a minimal VM. This configuration is for 4 GB RAM and a single virtual disk, and does not include a graphical console (you can add this later in virt-manager). Note that you’ll need to set the UUID and the source file path to the disk image you created above. This must be an absolute path.

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>testvm</name>
  <uuid>uuid_from_uuidgen_goes_here</uuid>
  <memory>4194304</memory>
  <currentMemory>4194304</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.8'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/path/to/testvm.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Next, define the VM in libvirt using the XML source file:

localhost:~$ virsh define testvm.xml
Domain testvm defined from testvm.xml

You can verify that the domain has been created by listing the known libvirt domains.

localhost:~$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     testvm                         shut off

Run it as a final validation step. This definition does not provide a console, nor is there an OS loaded, so it will simply start and run in the background.

localhost:~$ virsh start testvm
Domain testvm started

Stop the VM by running the following:

localhost:~$ virsh destroy testvm
Domain testvm destroyed

Option 2: Create a blank VM using virt-manager

Once your desktop environment is running, you can run virt-manager and create a virtual machine. This method is arguably the easiest, as the GUI steps you through the options.

Option 3: Create and install an OS in the VM with virt-install

This method allows you to create and install a VM from a source image such as an ISO image. The virt-install command is part of the virtinst package.

localhost:~$ sudo apt install virtinst
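As a hypothetical sketch (the VM name, disk path, and ISO path are placeholders, and the exact flags accepted depend on your virtinst version), an invocation might look like:

```
virt-install --name testvm --memory 4096 --vcpus 1 \
  --disk path=/path/to/testvm.qcow2,size=20,format=qcow2 \
  --cdrom /path/to/ubuntu-18.04-live-server-amd64.iso \
  --graphics none
```

However you create the VM, you will still need to edit the resulting domain XML by hand to add the Intel SGX options.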

Adding Intel SGX to the Test VM

To add Intel SGX support to your newly created virtual machine, you’ll need to edit the XML for the VM (or domain, in libvirt’s terminology) by hand.

localhost:~$ virsh edit domain

The first step is to edit the opening <domain> stanza as follows:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

This allows you to encode custom QEMU arguments in the XML definition for the domain.

Next, you need to delete the <cpu> stanza if your XML definition includes one. This is necessary because you’ll be providing custom arguments to QEMU’s -cpu option.

Inside the <devices> stanza, you’ll need to add the path to your qemu binary in /usr/local/bin.

<devices>
<emulator>/usr/local/bin/qemu-system-x86_64</emulator>
<!-- Your XML file will have several items inside here -->
</devices>

The required arguments for QEMU are encoded via a <qemu:commandline> stanza, with each argument getting its own <qemu:arg> definition. Place this anywhere inside the <domain> stanza.

<qemu:commandline>
     <qemu:arg value='-cpu'/>
     <qemu:arg value='host,+sgx,+sgxlc'/>
     <qemu:arg value='-object'/>
     <qemu:arg value='memory-backend-epc,id=mem1,size=16M,prealloc'/>
     <qemu:arg value='-sgx-epc'/>
     <qemu:arg value='id=epc1,memdev=mem1'/>
</qemu:commandline>

This XML corresponds to the following QEMU command-line options:

-cpu host,+sgx,+sgxlc -object memory-backend-epc,id=mem1,size=16M,prealloc -sgx-epc id=epc1,memdev=mem1

Test the Intel SGX Enabled VM

Test that Intel SGX functionality is present by starting the virtual machine. If everything is correct, the VM should launch without errors.

localhost:~$ virsh start testvm
Domain testvm started

If your VM does not start, the following list covers common errors and their possible causes:

Error: -sgx-epc: invalid option
Resolution: Libvirt is running the distribution QEMU from /usr/bin. Make sure the <emulator> stanza is set to /usr/local/bin/qemu-system-x86_64.

Error: /usr/local/bin/qemu-system-x86_64: Permission denied
Resolution: Your MAC is preventing access to the executable in /usr/local/bin. You have either:
1. Configured a MAC for Linux as a whole
2. Configured libvirt to use a security driver for QEMU
3. Both of the above
Check your configurations in /etc/apparmor.d and /etc/apparmor.d/libvirt/TEMPLATE.qemu. Ensure that “PUx” permissions are granted for binaries in /usr/local/bin/*.

Error: invalid object type: memory-backend-epc
Resolution: Libvirt’s security settings are preventing access to /dev/sgx_virt. Check the following:
1. /dev/sgx_virt should be in the cgroup_device_acl list in qemu.conf.
2. QEMU should be configured to run as uid 0 in qemu.conf.
3. Either disable your MAC, or create an exception or profile that allows access to /dev/sgx_virt.

Assuming the VM launches properly, you are now ready to create a real guest image and start using Intel SGX in your virtualized environment.

Summary

With the Intel SGX enabled KVM and QEMU distributions it is possible to virtualize Intel SGX on Intel SGX capable hardware. Each guest operating system gains access to the Intel SGX hardware features, and they can be configured independently of one another, whether that be EPC size, access to flexible launch control, or even access to Intel SGX as a whole. Further, these virtual machines can be managed through libvirt, making it possible to integrate Intel SGX enabled guests into existing deployments.

This is still an evolving technology, however, and it is under active development. Virtualizing Intel SGX is appropriate for development systems, testing, and personal workgroup environments, but it is not appropriate for production solutions at the current time.

Further Reading

See the Intel / kvm-sgx and Intel / qemu-sgx project pages on GitHub* for technical information and the latest updates on the Intel SGX virtualization project.
