Good morning, I hope this post finds the week going well for everyone. I've been meaning to write this post for some time but was precluded from doing so by development deadlines.
The issue raised by Vikram has been raised in other threads in this forum, most specifically the following thread, to which Scott has also replied:
Both of these threads cut to a very important issue with respect to SGX technology, one that would seem to benefit from further clarification. This is the notion that using SGX enclaves would somehow eliminate the need for identification and authorization, the two fundamental tenets that all information security systems must embrace if information is to remain privileged or if integrity is to be assigned to information that comes from the tutelage of such a system.
First, a clarification with respect to Scott's explanation of what a launch token is. He correctly states that this is a data structure generated by the Intel supplied Launch Enclave (LE) that verifies an enclave is legitimate and therefore allowed to launch. What is incorrect about his explanation is how the LE makes such a decision, most specifically the statement, "then makes sure the enclave is on the whitelist".
Such an architecture would obviously not scale, since it would require that some sentinel identifier from every enclave be available to the LE as the basis for its authorization decision. What the LE instead does is verify whether or not the enclave has been signed by a key whose measurement value is on the whitelist that is distributed and signed by Intel. The key measurement value is the SHA-256 hash of the modulus of the 3072-bit RSA key that was used to sign the enclave. So, most precisely, what the LE does is verify that an enclave has been signed by a software vendor that has established a business relationship with Intel, i.e., registered their signing key with Intel.
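To make the distinction concrete, here is a minimal sketch of that decision in Python. It assumes the signer measurement is the SHA-256 digest of the RSA public-key modulus as stored (little-endian) in the enclave's SIGSTRUCT; the function names and the whitelist representation are my own illustrative inventions, not Intel APIs.

```python
import hashlib

def signer_measurement(modulus: int, key_bits: int = 3072) -> bytes:
    # Digest of the signing key's modulus, little-endian as SIGSTRUCT
    # stores it. This, not any per-enclave identifier, is what the
    # whitelist entries are matched against.
    modulus_bytes = modulus.to_bytes(key_bits // 8, "little")
    return hashlib.sha256(modulus_bytes).digest()

def launch_allowed(modulus: int, whitelist: set) -> bool:
    # The LE's decision reduces to set membership of the signer digest
    # in the Intel-signed whitelist.
    return signer_measurement(modulus) in whitelist
```

The point of the sketch is that the decision scales with the number of registered signing keys, not the number of enclaves in existence.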
The requirement for a launch token is central to the authorization model that SGX is built on. A processor-specific launch token is required in order to execute the ENCLS[EINIT] instruction, which is the final step in creating the EPC-based memory image of an enclave that the processor will be able to decrypt and execute. Controlling the generation of a launch token therefore allows a platform to make authorization decisions as to which enclaves may execute and the conditions under which they may do so, i.e., debug or production mode.
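The gating role of the token can be sketched as follows. This is an illustration only: real hardware authenticates the EINITTOKEN with a 128-bit AES-CMAC under the processor's launch key (obtained inside the LE via EGETKEY), and the token layout is far richer than shown; HMAC-SHA-256 and the simplified fields here just stand in for that.

```python
import hashlib
import hmac

def issue_launch_token(launch_key: bytes, mrenclave: bytes,
                       mrsigner: bytes, debug: bool) -> bytes:
    # The LE mints a token binding the enclave identity and its
    # requested attributes, MACed with a processor-specific key.
    body = mrenclave + mrsigner + (b"\x01" if debug else b"\x00")
    mac = hmac.new(launch_key, body, hashlib.sha256).digest()
    return body + mac

def einit_accepts(launch_key: bytes, token: bytes) -> bool:
    # EINIT recomputes the MAC under the same processor-specific key:
    # a token forged without the key, or minted for another processor,
    # fails, and the enclave never becomes executable.
    body, mac = token[:-32], token[-32:]
    expected = hmac.new(launch_key, body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)
```

Because only a holder of the launch key can mint an acceptable token, whoever controls token generation controls which enclaves initialize.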
The central debate about SGX is whether this authorization process should be under the control of Intel or of the platform software, i.e., the operating system or hypervisor. Intel has proffered an answer to this question through the notion of Flexible Launch Control (FLC), which allows the OS or hypervisor to decide which enclaves may be authorized for initialization. There are certainly arguments to be made for FLC, and Costan and Devadas made them in their initial SGX review paper, where they advanced the supposition that the platform software is in a better position than the LE to make decisions about whether or not to launch an enclave.
While these arguments have merit, they do violate the premise that FLC-based SGX can provide Iago-class security statements about the execution integrity of an enclave, since in an FLC environment, compromise of the operating system allows the integrity of the launch authorization process to be compromised, whereas in an LE-based environment the chain of trust for the authorization of enclave initialization transits only through SGX-protected islands of execution.
In his second statement, Scott indicates that there is no way SGX can bind an application to a specific enclave. This is technically correct with respect to the SGX instruction primitives but, as we see above, operationally incorrect, since this binding is effectively an authorization process, which is under the control of the launch token generation process.
The conceptual understanding that security developers must take away from this is that, in the current environment, an enclave is very much a bearer token with respect to any data that has been sealed to it, or to any suppositions of integrity that can be conveyed to data processed or generated by it. In other words, whoever is able to load and initialize an enclave has access to the secrets the enclave has sealed, whether through static sealing or the use of remote attestation to establish an ephemeral security context.
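The bearer-token property follows directly from how sealing keys are derived. The sketch below is a simplification, assuming (as the SGX sealing model does) that the key derivation mixes platform secrets and enclave identity, either MRENCLAVE or MRSIGNER depending on the sealing policy, but nothing about which process loaded the enclave; the function and parameter names are illustrative, not SDK APIs.

```python
import hashlib

def derive_seal_key(platform_secret: bytes, policy: str,
                    mrenclave: bytes, mrsigner: bytes) -> bytes:
    # Identity of the CALLER never enters the derivation: any process
    # that can initialize and run the enclave obtains the same key,
    # which is exactly the bearer-token behavior described above.
    identity = mrenclave if policy == "MRENCLAVE" else mrsigner
    return hashlib.sha256(platform_secret + identity).digest()
```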
If malware or an undesired actor is able to gain access to a platform, and can arrange for the loading and execution of an enclave, any secrets entrusted to that enclave are then available to it. In a similar fashion, if an enclave is intended to be a secure endpoint with respect to the integrity of data being generated, a security aggressor gaining access to the enclave can deliver information of their choosing to an intended security counterparty. Think conceptually of a remote sensing system where SGX is intended to provide a secure endpoint. If the platform can be compromised and an alternate actor can execute the enclave, no guarantees can be afforded to the integrity of the sensor data.
None of this is theoretical. The Intel PSW makes this somewhat cumbersome, but the independent PSW implementation that we developed allows the implementation of unified binaries where all of the enclave loading and execution functionality, including the enclave itself, can be embedded into a single statically linked binary. We developed an interesting proof of concept, using the Struts vulnerability that led to the Equifax security breach, that pulls a unified enclave-bearing binary onto a platform and spirits information off the platform using a mutually attested communications channel with an off-platform enclave.
So the ultimate security challenge with respect to SGX is controlling who and what can execute an enclave.
This challenge is what led to our focus on Autonomous Introspection (AI) for developing high security assurance platforms. Since AI models all allowed information exchange events (actor/subject interactions), it provides a mechanism for defining which applications can access, and thus initialize, an enclave. Since the AI modeling engine is ultimately a kernel-based entity, this is consistent with the contention of Costan et al. that the operating software is in the best position to make decisions with respect to enclave authorization.
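At its core, the gating decision such a modeling engine makes can be illustrated as a toy membership check; the event set, actor names, and function below are entirely hypothetical, standing in for the kernel-resident behavioral model.

```python
# Hypothetical model of permitted actor/subject information exchange
# events; in practice this would be generated and enforced in-kernel.
ALLOWED_EVENTS = {
    ("sensor-daemon", "telemetry-enclave"),
}

def may_initialize(actor: str, enclave: str) -> bool:
    # Enclave initialization is permitted only if the (actor, subject)
    # pair is part of the platform's modeled behavior, closing the
    # bearer-token hole: malware is not in the model, so it cannot
    # load and run the enclave.
    return (actor, enclave) in ALLOWED_EVENTS
```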
It is a somewhat interesting twist on the model that we encase the modeling engine in an SGX enclave in order to prevent it from being tampered with. This poses a security challenge to an aggressor, who must execute actions inconsistent with the model the SGX AI engine is enforcing in order to take over the platform. This obviously requires that the enclave mediating the root behavior of the platform be launched with a known integrity chain (root of trust), but that seems to be an inescapable fact on current platforms.
All of this may be more than people are interested in, but these are centrally important concepts with respect to the design of secure systems using SGX. While the static sealing and encryption capabilities of an enclave are important, they should not be thought of as a replacement for encrypting data with GPG/PGP-style confidentiality systems. The confidentiality guarantees available with enclave-based sealing are extended to any party that can bear, i.e., initialize and execute, the enclave.
All of this only scratches the surface on these issues but hopefully it is useful background information to those embarking on SGX based security architectures.
Best wishes for a productive remainder of the week.