How Confidential Computing Builds a Secure Future

By Katherine Druckman

Two people working at the forefront of confidential computing explain how the technology, powered by open source software, protects data in use.

Dan Middleton, a principal engineer at Intel, and Dave Thaler, a software architect at Microsoft, share their work on confidential computing and their efforts to advance it via the Confidential Computing Consortium, a Linux Foundation project that aims to accelerate the adoption of trusted execution environment technologies and standards. Middleton is the current chair of the Confidential Computing Consortium's Technical Advisory Council, and Thaler is a previous chair. They share insights on the Open at Intel podcast with host Katherine Druckman.

Learn about confidential computing, the problems it solves and how you can get involved.

Katherine Druckman

First, let’s cover some basics. What is confidential computing? 

Dave Thaler 

When we first created the Confidential Computing Consortium, we decided one of our first tasks was to define the term “confidential computing,” so we all knew what we were talking about and what we were coming together for. Our definition is: “Confidential computing is the protection of data in use by performing computation in a hardware-based attested trusted execution environment.” Of course, that spurs the natural follow-up question, “What is a trusted execution environment?” We didn’t want to invent our own term for trusted execution environment but use a term that could be agreed on with standards bodies.  

I also participate in the Internet Engineering Task Force (IETF) on committees that deal with trusted execution environments, so we co-developed a definition: a trusted execution environment is a computing environment that does three things. It provides code integrity, data integrity and data confidentiality. That means the code and the data executing inside the trusted execution environment can't be compromised, and the data can't be read by anything that's running outside of that trusted execution environment. Confidential computing is anything that does that in a way that is based on hardware, not software, and is fully attestable. 
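
Read as a checklist, that definition translates naturally into code. Below is a minimal Python sketch of the idea; the report fields are invented for illustration and don't come from any real attestation format or SDK.

```python
from dataclasses import dataclass

# Illustrative fields only; real attestation evidence (a hardware-signed
# quote, for example) is far richer than a handful of booleans.
@dataclass
class EnvironmentReport:
    code_integrity: bool        # code inside can't be modified from outside
    data_integrity: bool        # data inside can't be modified from outside
    data_confidentiality: bool  # data inside can't be read from outside
    hardware_based: bool        # protection is rooted in hardware, not software
    attestable: bool            # the environment can prove all of the above

def meets_ccc_definition(report: EnvironmentReport) -> bool:
    """Apply the consortium's definition: every property must hold."""
    return all((report.code_integrity, report.data_integrity,
                report.data_confidentiality, report.hardware_based,
                report.attestable))

print(meets_ccc_definition(EnvironmentReport(True, True, True, True, True)))  # True
```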

Dan Middleton 

We’ve got a precise definition now that builds on work in other standards bodies. Sometimes people need an even more introductory definition. One of the reasons confidential computing exists is that for a long time now we've had the ability to protect data when it's at rest. When you write something to disk you might have disk encryption, and if you're going to send that information to somebody else, you'll probably transmit it over Transport Layer Security (TLS) or some other protected protocol. But when you go to use that secret, that's when it's most vulnerable and arguably most valuable. Confidential computing lets you protect it when you're actually operating on it. When your program is running, it may be running in some sort of protected memory. Maybe it's only decrypted when it actually enters the CPU package. There are different ways to implement this, but it's really about protecting data in use. 
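
To make that gap concrete, here is a minimal Python sketch using the open source cryptography package (an assumed dependency for illustration; any encryption library would show the same thing). The data is protected at rest, but a conventional program has to decrypt it into ordinary memory before it can compute on it:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()
fernet = Fernet(key)

# Data at rest: encrypted before it ever touches the disk.
ciphertext = fernet.encrypt(b"account_balance=1000")
# (In transit, TLS would similarly protect the bytes on the wire.)

# Data in use: to compute on the value, a conventional program must first
# decrypt it into ordinary memory, where the host OS, the hypervisor or
# anything else with access to that memory could read it.
plaintext = fernet.decrypt(ciphertext)
balance = int(plaintext.decode().split("=")[1])
balance += 50  # the computation happens on exposed plaintext

# Confidential computing moves this decrypt-and-compute step inside a
# hardware-protected, attested environment instead.
print(balance)  # 1050
```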

Dave Thaler 

Sometimes people may not understand what “data in use” means... Data in use is when data is in use by, say, the CPU, when it's actually doing computations, when it's actually trying to perform some operation on the data rather than just storing it or sending it across the network. 

Katherine Druckman 

Dave, you mentioned attestation. What is it and why is it so important in confidential computing? 

Dave Thaler 

Without attestation you just have a system that may provide the other properties, but how do you know that it provides those properties? Is it just, 'Oh, trust me, it's providing these properties'?  

What we strive for in confidential computing is more than just 'you all trust me' -- there's some basis on which you can establish trust for yourself. We don't want a customer using confidential computing technology to say they just inherently trust the cloud provider to do the right thing. We want them to be able to validate things for themselves. This gets into what we see as the trust model and what confidential computing protects. What we're trying to do, or trying to provide enough tools to do, is to ask: how many different entities are there that could be compromised and get access to your data, or change the operations, and do whatever dangerous thing you're worried about?  

One way to think about that is: 'How much code is there for me to protect? Maybe there's a bug in the code that I'm worried about. Maybe I'm running some other software. How do I know it's running the right software?' You could also ask: 'How many different organizations are empowered to touch that code and see the data?' The more organizations there are, the larger your attack surface, because there could be a compromise in an organization, a nefarious actor, or some social engineering attack... We're also trying to provide tools that determine the minimum number of organizations that you inherently must trust, and it turns out that the minimum -- it may not be practical in all cases, but the absolute minimum -- is basically two.  

The first is you or your own admins, those who are actually authoritative for the policy. The second, in practice, is whoever creates the hardware that's in the chip, because if you have a bunch of hardware chips, it's not as if you're going to crack open every possible piece of hardware, deconstruct it and verify what that hardware is doing. That's not practical. You could, but unless you're a major defense department or something like that with an infinite budget, you probably can't inspect every possible piece of hardware that gets shipped to you. So, you must have some trust in whoever made that chip.  

This is the same sort of trust that you get with your debit card or credit card in a chip-and-PIN system. You trust that the chip hasn't been compromised, and you trust it with your money. You can choose to trust other things beyond the chip manufacturer, the hardware itself and your own admins. Everything else is a choice, and with confidential computing there are ways to address it, like attestation, as a way of saying, 'How do you know that you're getting the right thing while only trusting those two entities?' 
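
One way to picture that reasoning is to simply count the parties that can touch plaintext under each model. The Python sketch below is a toy: the party lists are invented for illustration, not an audit of any real deployment.

```python
# Every party that can touch plaintext is attack surface.
conventional_cloud = {
    "you (policy owner)", "chip manufacturer", "cloud provider",
    "host OS vendor", "hypervisor vendor", "datacenter admins",
}

# With confidential computing, attestation lets you verify the rest
# instead of implicitly trusting them, leaving the irreducible two.
confidential_cloud = {"you (policy owner)", "chip manufacturer"}

removed = conventional_cloud - confidential_cloud
print(f"Implicitly trusted parties removed: {len(removed)}")
for party in sorted(removed):
    print(" -", party)
```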

Katherine Druckman 

Minimum viable trust. I like your example. 

Dan Middleton 

It's one of these concepts that’s both very simple and arbitrarily complex depending on where and how you approach it. On the simple side, it's just asking you to prove to me that you have the security to protect whatever secret I'm going to send to you. You're asserting or attesting to that, or the software is doing that. But then you can start peeling the onion. Without going that deep, however, one of the other things to observe about attestation as it relates to confidential computing is that it's a new building block. If you're a developer, one of the fun things to do is play with new APIs. They give you capabilities you didn't have before. One of the new capabilities you get with confidential computing, that you can actually code against, is attestation. You can make different decisions about how you're going to interact between systems based on the level of security you can infer from the other hosts you're interacting with. 
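
In code, that building block might look something like the sketch below. Everything here is hypothetical: verify_evidence() stands in for a real verifier service or SDK call, and the trust levels are invented. The point is that an attestation verdict becomes an ordinary value your program can branch on.

```python
# verify_evidence() and the trust levels here are hypothetical stand-ins
# for a real verifier service or SDK call.

def verify_evidence(evidence: bytes) -> str:
    # Pretend verifier for the sketch; a real one validates hardware-signed
    # quotes and measurements against policy before returning a verdict.
    return "hardware-tee" if evidence == b"valid-quote" else "unknown"

def share_with_peer(peer_evidence: bytes, secret: str, summary: str) -> str:
    """Decide what to share based on the peer's attested security level."""
    if verify_evidence(peer_evidence) == "hardware-tee":
        return secret   # peer proved a hardware-backed TEE: send the real data
    return summary      # otherwise degrade to something less sensitive

print(share_with_peer(b"valid-quote", "api-key-123", "redacted"))  # api-key-123
print(share_with_peer(b"not-a-quote", "api-key-123", "redacted"))  # redacted
```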

Dave Thaler 

I can give two analogies...  

The first one is the passport or driver's license analogy. If you're going to cross the border, you present a passport at customs and immigration. The officials will look at it and say, ‘I don't trust you, but I trust that this passport is valid.’ The passport is issued by the United States government, and I trust the United States government to only issue passports to citizens and people that pass a set of requirements. I can look at it and tell at face value whether it was actually issued by the United States and doesn't look like a forgery... And I can check that it hasn't expired, because there is an expiry date on my passport. That's attestation with your passport... 

Another analogy, and some systems work this way, is more like a background check. If you're going to apply for a job or a loan, you fill out an application and provide a bunch of personal data. On the other end, the bank or the hiring company doesn't take it at face value. They often contract with a background check agency and send the data to a trusted third party. The person who filled out the application may not even know who that is. They send it off to some background check organization to see if the person checks out. They check criminal records, they may check credit ratings and so on. Based on the report, they give this person a job or a loan.  

It's a different type of attestation compared to the passport. In the first case, I know exactly who it is, a U.S. citizen and so on; in the second case, I have no idea who it is. It's the person or the entity or the organization that's making the decision as to whether to grant the job or loan that contracts with some place that they trust. They're sending off this evidence, in this case the job application or loan application, and getting back the attestation result, which is kind of like a passport. But they're getting that directly from the credit reporting agency or the background check agency. We call that the background check model. Either way, the entity that's making the decision gets a report they can trust: either because the attester is carrying it around with them and presenting it to them, in the passport case, or because the relying party making the decision has some trusted entity they can go to. In either case, the party making the decision must have a trusted agency or trusted entity that they can use. In the first case, it's the U.S. government in my example, and in the second case it might be a background check agency that they contract with. But in both cases they get a set of evidence that did not come from the person filling out the form or wanting to get in. It comes from somebody else that they trust, and that's what attestation is about: How do I get a proof or assurance from something that I trust to know whether to trust the entity that's presented to me? 
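
These two analogies correspond to the "passport" and "background check" models described in the IETF Remote Attestation Procedures (RATS) architecture, RFC 9334. The Python sketch below is a toy version of both flows: verify() stands in for a trusted verifier, and plain dicts stand in for signed tokens.

```python
def verify(evidence: dict) -> dict:
    """The trusted verifier turns raw evidence into an attestation result."""
    ok = evidence.get("measurement") == "expected-hash"
    return {"trusted": ok, "issuer": "verifier-we-trust"}

def relying_party_accepts(result: dict) -> bool:
    # Either way, the decision rests on a result from a trusted issuer.
    return result["trusted"] and result["issuer"] == "verifier-we-trust"

def passport_flow(presented_result: dict) -> bool:
    # The attester already visited the verifier and carries the result,
    # like presenting a passport at the border.
    return relying_party_accepts(presented_result)

def background_check_flow(raw_evidence: dict) -> bool:
    # The relying party forwards the evidence to the verifier itself,
    # like a bank sending an application to a background check agency.
    return relying_party_accepts(verify(raw_evidence))

evidence = {"measurement": "expected-hash"}
print(passport_flow(verify(evidence)))   # True: attester fetched its "passport"
print(background_check_flow(evidence))   # True: relying party ran the check
```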

Katherine Druckman 

We’re all about open source here. When you're talking about these trusted execution environments, why is it important that confidential computing adopt open standards and an open source software ecosystem? Is it a sustainability question? Is it a security question? All of the above?  

Dan Middleton 

It’s probably all of the above, but with security technologies, the more you can inspect them, the more you can trust them. In the security world we call it Kerckhoffs's principle, which is roughly that the security of a system shouldn't depend on the secrecy of its design. Or sometimes it's phrased the other way around: fully exposing the design shouldn't defeat the security of the system. As much as possible, we like to see security technologies in open source. That's why you have things like OpenSSL and other cryptography primitives that have been written so they can be inspected. That's one of the great things about the Confidential Computing Consortium (CCC): we're a home for these software stacks built on confidential computing hardware. 
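
Kerckhoffs's principle is easy to demonstrate in a few lines. In the Python sketch below (again using the open source cryptography package as an assumed dependency), the algorithm is completely public, yet the message stays protected as long as the key does:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The Fernet design and its open source implementation are fully public;
# per Kerckhoffs's principle, the only secret the system relies on is the key.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"the design is public, the key is not")

# Anyone can read the algorithm's source, but without the key the token
# is just opaque bytes; with the key, decryption is trivial.
print(Fernet(key).decrypt(token))  # b'the design is public, the key is not'
```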

Dave Thaler 

For me, it goes back to minimum viable trust and how many different organizations require trust. If there's open source code, then I don't have to trust the author of the code because I can look at it myself and establish that trust. I can verify what that code is doing without having to trust the author. If it's closed source, then I must trust the author to have generated the correct code. If it's open source, I could accept code from somebody I intentionally don't trust because I can analyze that code. This notion of security vetting of code allows you to remove an entity from the implicit trust chain by trusting the code itself. You can do that yourself, or often you might contract with a security agency. Again, this goes back to choosing to bring another agency into the trust chain: I can contract with a security agency that I trust to vet that code.  

Open source is what the Confidential Computing Consortium helps fund and foster, although in theory it's not the openness of the source that is important, it's the vetting. If you had open source that nobody looked at, it would not be any more secure than closed source. It's the looking at it, the number of eyes and the security vetting that makes it more secure and more trustable. Sometimes I use the term 'vettable code,' which asks whether the code can be and has been vetted. Regardless of whether it's open or closed, as a customer I want to know: has my security agency, whether it's my own in-house people or an agency that I trust, been able to vet that code? When it's open source, that provides a way for the Confidential Computing Consortium to support it, to see it and to encourage the vetting of that code. 
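
One hypothetical way to connect vetting back to attestation is through code measurements: once a build has been vetted, its hash can be recorded, and at runtime only code whose measurement matches a vetted entry is trusted. The Python sketch below is illustrative; the allowlist is invented, not a real registry.

```python
import hashlib

# Hashes of builds a (hypothetical) security team has already vetted.
VETTED_MEASUREMENTS = {
    hashlib.sha256(b"print('audited build v1.2')\n").hexdigest(),
}

def is_vetted(code: bytes) -> bool:
    """Trust code only if its measurement matches a vetted build."""
    return hashlib.sha256(code).hexdigest() in VETTED_MEASUREMENTS

print(is_vetted(b"print('audited build v1.2')\n"))  # True: matches a vetted hash
print(is_vetted(b"print('unreviewed build')\n"))    # False: never vetted
```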

For more of this conversation and others, subscribe to the Open at Intel podcast: 


About the Author

Katherine Druckman, an Intel Open Source Evangelist, is a host of the podcasts Open at Intel, Reality 2.0 and FLOSS Weekly. A security and privacy advocate, software engineer, and former digital director of Linux Journal, she's a long-time champion of open source and open standards.