The Intel® Software Guard Extensions (Intel® SGX) Card is a full-length, full-height, double-wide PCIe* x16 expansion card, based on the Intel® Visual Compute Accelerator (Intel® VCA2) card, with three Intel® Xeon® E3-1585L v5 processors on board. These three processors, referred to as nodes, are complete, individual server systems with their own memory, operating system, and storage. Though they rely on the host system for node management and virtual networking, they operate independently of the host and of each other.
This guide steps through the process of setting up Intel SGX on the Intel VCA2 Card, using Ubuntu* 16.04 on the host and booting each node with its own Ubuntu 16.04 image.
The Intel SGX Card may differ slightly from the Intel VCA2 Card when it is released. This guide assumes you are using the Intel VCA2 card, and it will be updated with instructions for the Intel SGX Card after it has been officially launched.
This guide assumes that you have installed:
- Compatible dynamic random-access memory (DRAM) modules in the Intel SGX Card
- The Intel VCA2 Card in a compatible, 2U server system with Intel® Xeon® Scalable processors
- Ubuntu 16.04 LTS on the server
For hardware compatibility information, see the hardware users guide on the Intel® Visual Compute Accelerator 2 (Intel® VCA2) technical documentation page.
There are several downloads for the Intel VCA2 Card in two separate locations. Table 1 summarizes the most important packages and where each can be found.
Table 1: Download Packages

| Package | Description |
| --- | --- |
| Hardware users guide | Technical specifications, host system requirements, validation guide, and memory and OS support. |
| Software users guide | Complete software setup guide for the host system, and technical information on provisioning nodes. |
| VCA1283LVV/VCA1585LMV Host Files | Kernel image for the host system, kernel modules for the host system, and management utilities. These must be loaded on the host system to use the card. Downloads are available for each of the supported host operating systems. |
| VCA1283LVV/VCA1585LMV BIOS Update | BIOS image for the nodes. |
| VCA1283LVV/VCA1585LMV EEPROM Update | Firmware update for the card. |
| VCA1283LVV/VCA1585LMV Persistent Reference Package | Ready-to-use, bootable, persistent operating system images for the nodes that use the BlockIO device. Downloads are available for each of the supported node operating systems. |
Managing the Compute Nodes
The compute nodes on the Intel VCA2 Card are headless systems: They have no monitor, no keyboard, no mouse, and no physical ports for attaching devices. The only way to interact with them is through the host system, using its drivers, system daemons, and management utilities. Both the host system and the node systems must be running an OS with kernel support for the card, and the host system must run the management client software.
The vcactl utility, which runs on the host system, is your primary management interface with the Intel VCA2 Card and its server nodes.
Setting Up the Intel® VCA2 Card
The procedure in this setup guide is distilled from the Software users guide for the Intel VCA2 card and focuses on configuration options that are more appropriate for an Intel SGX Card deployment. It is recommended that you download the Software users guide as a reference before you begin.
Preparing the Host System
You must disable Network Manager on your system, as it can interfere with the communications between the host system and the Intel VCA2 Card:
sudo systemctl stop NetworkManager
sudo systemctl disable NetworkManager
Installing the Host Kernel Image
Unfortunately, the PCIe bus communications between the host and the nodes cannot be implemented solely as kernel modules when VT-d is enabled on the host; kernel patches are also required (the patches are not needed if VT-d is disabled). This limits the number of operating systems for which Intel can provide prebuilt images. Intel does make the kernel patches available as source code so that administrators can build a custom image for other Linux* distributions, but this guide uses the prebuilt Ubuntu image.
From the Downloads and Drivers site, choose the VCA1283LVV/VCA1585LMV Host Files download. You will be directed to another page that provides this bundle for the following Linux operating systems (as of June 2019):
- CentOS* 7.4
- Debian* 8.x
- Ubuntu 16.04
Choose the Ubuntu bundle, download it to your host system, and then extract the files using the unzip utility.
This will produce the following Debian packages:
daemon-vca-2.3.26-amd64.deb
linux-headers-4.14.20-188.8.131.52.vca_1.0_amd64.deb
linux-image-4.14.20-184.108.40.206.vca_1.0_amd64.deb
vcadebug-2.3.26.deb
vcass-modules_2.3.26-1_amd64.deb
The linux-* files are the kernel image and headers; their names indicate the Linux kernel version they are based on, in this case 4.14.20. First, install the kernel headers:
sudo dpkg -i linux-headers-4.14.20-188.8.131.52.vca_1.0_amd64.deb
Next, install the kernel image:
sudo dpkg -i linux-image-4.14.20-184.108.40.206.vca_1.0_amd64.deb
This also updates the GRUB* configuration to boot the new kernel image by default.
Installing the Kernel Modules and Host Utilities
Next, install the kernel modules:
sudo dpkg -i vcass-modules_2.3.26-1_amd64.deb
If you update an existing installation to a newer version of the daemon and kernel modules, you must uninstall the existing packages first.
When the command completes, reboot the system:

sudo reboot
Once the new system boots, you can verify that you are running the new image with uname:
$ uname -a
Linux localhost 4.14.20-184.108.40.206.vca #1 SMP Wed Nov 28 17:11:04 CET 2018 x86_64 x86_64 x86_64 GNU/Linux
You can also verify that the kernel modules have loaded with lsmod:
$ lsmod | grep -i vca
vca_mgr_extd           16384  0
vca_mgr                16384  0
vca_csm                16384  0
vca_vringh             24576  1 vop
vca_virtio_net         36864  1 vop
vca_virtio             16384  2 vca_virtio_net,vop
vca_virtio_ring        24576  2 vca_virtio_net,vop
vcablkfe               36864  1 plx87xx
vcablk_bckend          53248  1 plx87xx
vca_mgr_extd_bus       16384  2 plx87xx,vca_mgr_extd
vca_csa_bus            16384  1 plx87xx
vca_mgr_bus            16384  2 vca_mgr,plx87xx
vca_csm_bus            16384  2 vca_csm,plx87xx
Now, you can install the daemon and host utilities:
sudo dpkg -i daemon-vca-2.3.26-amd64.deb
Verify that everything is correct by checking the status of the nodes using vcactl:
$ sudo vcactl status
Card: 0 Cpu: 0 STATE: bios_up
Card: 0 Cpu: 1 STATE: bios_up
Card: 0 Cpu: 2 STATE: bios_up
Updating the Card Firmware
Download the EEPROM Update package and unzip the package:
$ unzip IntelVisualComputeAccelerator_EEPROM_2.3.26.zip
Archive:  IntelVisualComputeAccelerator_EEPROM_2.3.26.zip
  inflating: eeprom_c3456134.bin
   creating: M.2_support/
  inflating: M.2_support/eeprom_dafbb163.bin
It is recommended that you reset the nodes before starting the update using vcactl reset. You will then need to wait for the BIOS to enter the ready state before proceeding. Running vcactl wait-BIOS is a convenient way to be notified when the nodes are ready: The command blocks until the BIOS images are up and running.
$ sudo vcactl reset
$ sudo vcactl wait-BIOS
Card: 0 Cpu: 0 - BIOS is up and running!
Card: 0 Cpu: 2 - BIOS is up and running!
Card: 0 Cpu: 1 - BIOS is up and running!
To update the EEPROM, run vcactl with the update-EEPROM subcommand, specifying the card number and the image to load. In the above package, there are two firmware updates to apply.
$ sudo vcactl update-EEPROM 0 eeprom_c3456134.bin
Update EEPROM process started (for card 0). Do not power down system!
Update EEPROM for first PCIe switch successful!
Update EEPROM for second PCIe switch successful!
Update EEPROM successful (for card 0). Reboot system is required to reload EEPROM.
$ sudo vcactl update-EEPROM 0 M.2_support/eeprom_dafbb163.bin
Update EEPROM process started (for card 0). Do not power down system!
Update EEPROM for first PCIe switch successful!
Update EEPROM for second PCIe switch successful!
Update EEPROM successful (for card 0). Reboot system is required to reload EEPROM.
Reboot the host system to finalize the update process.
Updating the BIOS Images
Each compute node is a complete, independent system with its own BIOS image, and each one must be updated separately. Start by downloading and unzipping the BIOS Update package.
As with the EEPROM update, it is recommended that you reset the nodes, and the BIOS images must be in the ready state:
$ sudo vcactl reset
$ sudo vcactl wait-BIOS
Card: 0 Cpu: 0 - BIOS is up and running!
Card: 0 Cpu: 2 - BIOS is up and running!
Card: 0 Cpu: 1 - BIOS is up and running!
To update the BIOS images, you use the update-BIOS subcommand of vcactl. You must specify the card number and the node number to update:
$ sudo vcactl update-BIOS 0 0 VCA-BIOS_0ACGC305_0ACIE204_201810251032.img
Card: 0 Cpu: 0 - BIOS UPDATE STARTED. DO NOT POWERDOWN SYSTEM
Card: 0 Cpu: 0 - UPDATE BIOS SUCCESSFUL
Card: 0 Cpu: 0 - Node will be power down and up automatically to make the change active. Please wait for 'bios_up' to start work with the node.
A BIOS update can take several minutes to complete. Repeat this procedure for the other two nodes:
$ sudo vcactl update-BIOS 0 1 VCA-BIOS_0ACGC305_0ACIE204_201810251032.img
$ sudo vcactl update-BIOS 0 2 VCA-BIOS_0ACGC305_0ACIE204_201810251032.img
Enabling Intel SGX
The Intel VCA2 Card does not support the software enable capability, so Intel SGX must be explicitly enabled in the BIOS on each node. Use the set-BIOS-cfg subcommand of vcactl to modify BIOS options. As with other commands that interact with specific nodes, you must specify the target card and node numbers.
The node restarts to apply the BIOS change, and then restarts a second time:
$ sudo vcactl set-BIOS-cfg 0 0 sgx enable
Card: 0 Cpu: 0 - BIOS configuration changed. Node will be restarted...
Card: 0 Cpu: 0 - Node will be power down and up to make the change active. Please wait for 'bios_up' to start work with the node.
Card: 0 Cpu: 0 - BIOS is up and running!
Card: 0 Cpu: 0 - Retrying to set param in BIOS
Card: 0 Cpu: 0 - BIOS configuration changed. Node will be restarted...
Card: 0 Cpu: 0 - Node will be power down and up to make the change active. Please wait for 'bios_up' to start work with the node.
Card: 0 Cpu: 0 - BIOS is up and running!
Repeat this command for the other two nodes:
$ sudo vcactl set-BIOS-cfg 0 1 sgx enable
$ sudo vcactl set-BIOS-cfg 0 2 sgx enable
Verify the status of Intel SGX using vcactl get-BIOS-cfg:
$ vcactl get-BIOS-cfg 0 1 sgx
Card: 0 Cpu: 1 - BIOS configuration:
SGX: enabled
Configuring Bridged Networking
Networking on the nodes is implemented via a virtual network device that communicates with the host over the PCI bus. To make the nodes visible to an external network, the host must create a network bridge. The individual nodes can then be configured with static addresses or via DHCP.
To create a bridge in Ubuntu 16.04, edit the /etc/network/interfaces file. If your host system is configured dynamically, replace the definition for your primary network device with the following:
auto virbr0
iface virbr0 inet dhcp
    bridge_ports devicename
    post-up ifconfig devicename mtu 9000
    # See note about setting the mtu below.
If your host system uses a static address, the format is slightly different:
auto virbr0
iface virbr0 inet static
    bridge_ports devicename
    post-up ifconfig devicename mtu 9000
    # copy your static configuration for devicename
    # See note about setting the mtu below.
In either case, replace devicename with the name of your physical network device (which is probably of the form “enoN”, or “enpNs0”). This tells Ubuntu which network devices will participate in the bridge.
The ifconfig command sets the maximum transmission unit (MTU) for the physical device. The Intel VCA2 virtual Ethernet adapter supports an MTU much higher than the default value of 1500, so raising the physical network card's MTU above the default may increase the virtual network's performance. Most modern network cards and switches can handle a large MTU (also known as jumbo frames), but if you see errors in your host's system log, you may need to lower this value.
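As a quick sanity check, you can read a device's current MTU from sysfs before and after applying the bridge configuration. This is a sketch; the `get_mtu` helper is ours, and the device name `eno1` is an assumption:

```shell
# Read the current MTU of a network device from sysfs.
get_mtu() {
    cat "/sys/class/net/$1/mtu"
}
# Example (device name is an assumption -- substitute your own):
# get_mtu eno1                         # prints 1500 before the change
# sudo ip link set dev eno1 mtu 9000   # equivalent to the post-up line above
# get_mtu eno1                         # prints 9000 afterward
```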
Reboot the system to ensure a clean network configuration.
Once the system is back up, you must configure the Intel SGX Card management daemon to use the newly created network bridge. Do this by running vcactl config:
$ sudo vcactl config bridge-interface virbr0
Provisioning the Nodes
Each compute node boots and runs its own operating system image. Since the nodes are headless systems without physical I/O devices, they must be told how to load their OS, and the OS images must be provisioned in advance.
There are two primary methods of providing a persistent OS image to a node:
- The BlockIO Device method uses the BlockIO driver to map read and write operations to an image file on the host’s filesystem. This filesystem image is mountable on the host via the loopback device, so that the host can perform critical maintenance tasks (such as a filesystem check using fsck) that would normally require a console on the node.
- The NFS method functions similarly to a diskless client. The node boots from a RAM disk image and then mounts its filesystems from an NFS server. The NFS server can be on the host or an external system.
The BlockIO device method is the simpler of the two, but it comes with a caveat: Writes to the BlockIO device are buffered on both the node's front end and the host's back end. If you use BlockIO devices, you must shut down the nodes before shutting down the host, or you risk filesystem corruption on the node.
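One practical consequence of the BlockIO approach is that, because the node's disk is just a file on the host, you can check it from the host while the node is shut down. A minimal sketch, assuming the image holds a bare ext4 filesystem (a partitioned disk image must be attached with `losetup -P` and checked partition by partition); the `check_node_image` helper and image path are illustrative:

```shell
# Check a node's BlockIO image from the host. Run this only while the
# node is shut down, or the check will see an inconsistent filesystem.
check_node_image() {
    img="$1"
    # e2fsck can operate directly on an image file that holds a bare
    # ext2/3/4 filesystem. Use root privileges if the image is root-owned.
    e2fsck -f -n "$img"    # -f: force check; -n: read-only, answer "no"
}
# Example (path matches the layout used later in this guide):
# sudo check_node_image /var/opt/sgxcard/node_0_0.vcad
```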
This guide shows the procedure for installing a persistent image that uses the BlockIO method.
Though the original VCA2 card supported virtualization on the node images, the virtual machine monitors (VMMs) in the prebuilt images did not support Intel SGX virtualization. Creating a Linux image with kernel support for Intel SGX virtualization is outside the scope of this guide.
Installing the Persistent Images
Download the Persistent Reference Package from Intel and unzip the archive. Note that there are two prebuilt options: CentOS 7.4 and Ubuntu 16.04. This example uses the Ubuntu image, which is distributed as a gzip-compressed disk image inside a zip archive.
$ unzip IntelVisualComputeAccelerator_Persistent_Reference_Ubuntu_16.04.3_2.3.26.zip
Archive:  IntelVisualComputeAccelerator_Persistent_Reference_Ubuntu_16.04.3_2.3.26.zip
  inflating: vca_disk_24gb_reference_k4.14.20_ubuntu16.04_2.3.26.vcad.gz
$ gunzip vca_disk_24gb_reference_k4.14.20_ubuntu16.04_2.3.26.vcad.gz
Each node needs its own file, so you must make one copy of the disk image per node. It is recommended that you keep a copy of the original disk image in case you need to reprovision a node with a clean starting image. The following commands create the /var/opt/sgxcard directory and then copy the image to that location. The destination image files are named for the card and node number:
$ sudo su
# mkdir /var/opt/sgxcard
# cp vca_disk_24gb_reference_k4.14.20_ubuntu16.04_2.3.26.vcad /var/opt/sgxcard/node_0_0.vcad
# cp vca_disk_24gb_reference_k4.14.20_ubuntu16.04_2.3.26.vcad /var/opt/sgxcard/node_0_1.vcad
# cp vca_disk_24gb_reference_k4.14.20_ubuntu16.04_2.3.26.vcad /var/opt/sgxcard/node_0_2.vcad
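The copy steps can also be scripted. A small sketch, following the node_&lt;card&gt;_&lt;node&gt;.vcad naming convention used in this guide (the `provision_images` helper name is ours, not part of the VCA tooling):

```shell
# Copy one pristine reference image per node for a given card number.
provision_images() {
    src="$1"; destdir="$2"; card="$3"
    mkdir -p "$destdir"
    for node in 0 1 2; do
        cp "$src" "$destdir/node_${card}_${node}.vcad"
    done
}
# Example:
# sudo provision_images \
#     vca_disk_24gb_reference_k4.14.20_ubuntu16.04_2.3.26.vcad \
#     /var/opt/sgxcard 0
```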
Configuring Networking on the Nodes
The virtual network devices on the nodes are managed by the host. Use vcactl with the config subcommand to set the networking options for each, including the hostname. Each node gets its own configuration, so you can have a mix of both static and Dynamic Host Configuration Protocol (DHCP) assigned addresses.
To configure node 0 for DHCP, use:
$ sudo vcactl config 0 0 ip dhcp
For static networking, you need to specify the IP address, network mask length in bits, and the gateway address. For example, the following commands assign the IP address 192.168.1.2 on the 192.168.1.0/24 subnet, with a gateway address of 192.168.1.1:
$ sudo vcactl config 0 0 ip 192.168.1.2
$ sudo vcactl config 0 0 mask 24
$ sudo vcactl config 0 0 gateway 192.168.1.1
To set the node’s hostname:
$ sudo vcactl config 0 0 node-name sgxnode00
Note that changes to a node's network configuration do not take effect while the node is running; new settings are applied the next time the node boots.
Configuring the Nodes for BlockIO Booting
Next, you need to configure the nodes to boot using the BlockIO device, vcablk0:
$ sudo vcactl blockio open 0 0 vcablk0 RW /var/opt/sgxcard/node_0_0.vcad
$ sudo vcactl blockio open 0 1 vcablk0 RW /var/opt/sgxcard/node_0_1.vcad
$ sudo vcactl blockio open 0 2 vcablk0 RW /var/opt/sgxcard/node_0_2.vcad
Booting the Nodes
Verify that everything is correct by examining the configuration in /etc/vca_config.d/vca_config.xml. Each Intel SGX Card has a stanza of the form <card id="N">, and each node has a stanza of the form <cpu id="N">. Inside the latter, you should see a definition of the block device that looks like the following:
<cpu id="0">
  <!-- some lines removed for clarity -->
  <ip>dhcp</ip>
  <node-name>sgxnode00</node-name>
  <block-devs>
    <vcablk0>
      <mode>RW</mode>
      <path>/var/opt/sgxcard/node_0_0.vcad</path>
      <ramdisk-size-mb>0</ramdisk-size-mb>
      <enabled>1</enabled>
    </vcablk0>
  </block-devs>
  <va-min-free-memory-enabled-node>1</va-min-free-memory-enabled-node>
</cpu>
Correct any errors using vcactl config and then boot your nodes:
$ sudo vcactl boot 0 0 vcablk0
$ sudo vcactl boot 0 1 vcablk0
$ sudo vcactl boot 0 2 vcablk0
If you configured one or more of your nodes to use DHCP, you will need to get their network addresses. Once the node state is “net_device_ready”, you can query the nodes.
$ sudo vcactl status
Card: 0 Cpu: 0 STATE: net_device_ready
Card: 0 Cpu: 1 STATE: net_device_ready
Card: 0 Cpu: 2 STATE: net_device_ready
$ sudo vcactl network ip
Card 0 Cpu 0: 126.96.36.199
Card 0 Cpu 1: 188.8.131.52
Card 0 Cpu 2: 184.108.40.206
Logging In to the Nodes
The OpenSSH* server is enabled by default in the Persistent Reference Package images, and root login via ssh is permitted: There is no console, so ssh is the only means of connecting to a freshly booted node.
Use ssh to log in to each node as root. The default root password for the node image can be found in the Intel VCA Card Software Users Guide.
You should immediately change the root password on your nodes! The Persistent Reference Package images come with a preset root password because there is no console, and thus no means of accessing the nodes other than a direct login as root.
Production installations will almost certainly demand a Linux image that has been customized for the compute environment in order to provide control over the kernel version, Linux installation, and of course the default root password. Intel provides the following downloads on the Downloads and Drivers page to assist you with this process.
- The Intel Visual Compute Accelerator Source Files package contains the source code for the kernel patches, kernel modules, VCA daemon and host utilities. These sources can be used to build the VCA software for an arbitrary Linux kernel version, though some porting may be necessary.
- The Intel Visual Compute Accelerator Build Scripts package contains scripts for building custom, bootable Linux images for the node systems.
This guide stepped you through the setup procedure for Intel SGX on the Intel VCA2 Card, and each node should now be running a separate instance of Ubuntu 16.04 LTS using the persistent reference images. This provides the first step toward designing and building your production deployment by giving you a fully functioning Intel SGX compute environment. From here you can experiment with the card, familiarize yourself with the node management utilities and procedures, and develop systems administration tools and processes needed to deploy your final solution.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.
This sample source code is released under the Intel Sample Source Code License Agreement.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© 2019 Intel Corporation