Intel® SGX for Dummies – Part 2

In my last blog post, only about 9 short months ago, I provided an overview of the Intel® SGX design objectives.  Sincere apologies for the long delay between postings; my colleagues and I have been hard at work on the latest security technologies, and I need to remember to carve out more time to post.

As a reminder, I highlighted these eight design objectives for Intel® SGX:

  1. Allow application developers to protect sensitive data from unauthorized access or modification by rogue software running at higher privilege levels.
  2. Enable applications to preserve the confidentiality and integrity of sensitive code and data without disrupting the ability of legitimate system software to schedule and manage the use of platform resources.
  3. Enable consumers of computing devices to retain control of their platforms and the freedom to install and uninstall applications and services as they choose.
  4. Enable the platform to measure an application’s trusted code and produce a signed attestation, rooted in the processor, that includes this measurement and other certification that the code has been correctly initialized in a trustable environment.
  5. Enable the development of trusted applications using familiar tools and processes.
  6. Allow the performance of trusted applications to scale with the capabilities of the underlying application processor.
  7. Enable software vendors to deliver trusted applications and updates at their cadence, using the distribution channels of their choice.
  8. Enable applications to define secure regions of code and data that maintain confidentiality even when an attacker has physical control of the platform and can conduct direct attacks on memory.

In my previous post I expanded upon the first two objectives.  In this post, I will review objectives 3-5:

Objective 3 – Enable consumers of computing devices to retain control of their platforms and the freedom to install and uninstall applications and services as they choose.

Creating a trusted application should not prescribe a specific configuration nor limit the user’s control of his or her platform.  A common technique for improving the security of today’s platforms is to severely constrain the software that may be loaded onto the platform.  Game boxes, set-top boxes, and smart phones typically have a dedicated operating system, offer limited upgradeability, and place restrictions on application availability and behavior to reduce variation that can lead to security issues.  An enterprise may require specific OS and software configurations and restrict other user behaviors (e.g., adding USB devices to the system).  While there may be good business or manageability reasons for such restrictions, they should not be required to preserve data confidentiality and integrity.  This requirement becomes even clearer when personal computing devices are considered, where the need for a trusted environment is equally great while the imperative of personalization is even more evident.

Objective 4 – Enable the platform to measure an application’s trusted code and produce a signed attestation, rooted in the processor, that includes this measurement and other certification that the code has been correctly initialized in a trustable environment.

Allowing consumers to continue to control the software on a platform introduces a problem for trustworthy application delivery.  How can one be certain that a platform has the necessary primitives to support trusted computing, that an application has been correctly installed, and that the installed application has not been tampered with?  Or to put it another way, how can an application “prove” that it is trusted?

An accepted way of determining that an application has been correctly loaded and initialized is to compare the application’s signature (a cryptographic hash of its memory footprint at a well-known execution point) with an “expected value” derived from a system known to be trusted (this is called measuring the application).[1]  To attest its provenance, the measurement is signed with a private key known only to the trusted entity that performs the measurement.
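The measurement step described above can be sketched in a few lines of Python.  This is a conceptual illustration only, not the SGX mechanism itself: the hash algorithm (SHA-256), the byte strings, and the "expected value" are all assumptions chosen for the example.

```python
import hashlib

def measure(footprint: bytes) -> bytes:
    # The measurement: a cryptographic hash (SHA-256 here) of the
    # application's memory footprint at a well-known execution point.
    return hashlib.sha256(footprint).digest()

# A verifier compares the measurement against an "expected value"
# derived from a system known to be trusted.
trusted_footprint = b"trusted application code and initial data"
expected_value = hashlib.sha256(trusted_footprint).digest()

# A correctly loaded application reproduces the expected value;
# any tampering with the code or data changes the hash.
assert measure(trusted_footprint) == expected_value
assert measure(b"tampered code") != expected_value
```

Because the hash covers the entire footprint at load time, even a one-byte modification to the trusted code produces a completely different measurement.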

Note that developers cannot rely on a measurement supplied by system software; as noted earlier, software can always be virtualized or otherwise spoofed by suitably privileged rogue software. This implies that hardware must be responsible for supplying this measurement – the same hardware that establishes the trusted environment, loads/initializes the trusted application, and (ultimately) performs computations on the sensitive data.
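Putting the two halves together, the attestation flow looks roughly like the sketch below.  The key, the data, and the use of HMAC are all stand-ins: a real processor would sign with an asymmetric hardware-fused key that software can never read, and the verifier would hold only the corresponding public key.

```python
import hashlib
import hmac

# Stand-in for the processor-held signing key (hypothetical value).
# In a real design this is an asymmetric key rooted in the hardware.
HW_KEY = b"hypothetical-processor-secret"

def attest(footprint: bytes) -> tuple[bytes, bytes]:
    """Hardware side: measure the loaded code, then sign the
    measurement so its provenance can be checked remotely."""
    measurement = hashlib.sha256(footprint).digest()
    signature = hmac.new(HW_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify(measurement: bytes, signature: bytes, expected: bytes) -> bool:
    """Verifier side: check the signature came from the trusted
    hardware, then compare the measurement with the expected value."""
    good_sig = hmac.compare_digest(
        hmac.new(HW_KEY, measurement, hashlib.sha256).digest(), signature)
    return good_sig and hmac.compare_digest(measurement, expected)

expected = hashlib.sha256(b"trusted application").digest()
m, sig = attest(b"trusted application")
assert verify(m, sig, expected)            # correctly initialized
assert not verify(*attest(b"rogue"), expected)  # tampering detected
```

The essential point from the paragraph above survives the simplification: system software never touches `HW_KEY`, so no amount of privileged spoofing can forge a signature over a false measurement.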

Objective 5 – Enable the development of trusted applications using familiar tools and processes.

The first four objectives provide the benefits of a more closed environment by reducing the number of entities that must be trusted, while preserving the advantages provided through open platforms and user choice.  But these objectives alone do not ensure that the software spiral can continue.  For example, if developers are required to radically change their development processes, or are forced to develop for a proprietary type of secure microcontroller, productivity would be significantly reduced.

 

As I don't want these blog posts to turn into novels, I will stop here.  But now that posting is top of mind, I will expand upon the remaining objectives shortly.

Part 3 is now available.

 

[1] To be precise, one need only measure the trusted part of the application that is responsible for manipulating the sensitive data.

 
