I went to Seth Schoen’s class on trusted computing at the Freedom Technology Center yesterday. The class itself was GREAT! Trusted computing is an area that I really wanted to learn about, but never had the time for. This was a chance to get a whole boatload of info in one big braindump. Seth did a great job of presenting an overview and then delving into some details, which was perfect for me because I was coming in with little background. Given the info that’s already available online, I think it will only take a bit of “connecting the dots” to provide a nice resource for those looking to learn what makes up trusted computing and what is currently being worked on. So, as I promised a few days ago, I’m going to atone for my sins in recommending the Stanford VLAB security event and provide some real info. I have developed a very strong opinion on the issue, so you’re going to get that as well.

Trusted Computing is a term used to describe one of a number of efforts to make computers more secure. In particular, this effort aims to provide a way to verify the software running on a system. Verifying software is important because worms, viruses, and various other attacks normally work by inserting new software into a system or modifying the existing software. If the user of a computer could be sure they were only running the software they expected to, it would afford them a level of security that doesn’t exist today. The verification is done by what is called “measuring” the running software. This is different from what I expected, because the trusted computing initiatives Seth discussed DO NOT keep a program from running at any point; they just allow the user to uncover an unauthorized change to software. Measuring software is done in part by adding new hardware to the PC architecture, but it also involves a number of new software mechanisms. Seth wrote a detailed paper called Trusted Computing: Promise and Risk, which lays out the architecture in general as well as the capabilities it provides.
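To make “measuring” a little more concrete: as I understand it, the measurement chip holds registers (PCRs) that can only be updated by hashing a new measurement together with the old register value, so the final value is a fingerprint of everything that ran, in the order it ran. Here’s a rough Python sketch of that extend operation; the component names are made up, but the hash-chaining is the core idea.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Fold a new measurement into a PCR value.

    The register is never overwritten directly; each new value
    depends on everything measured before it, so the final PCR
    reflects the whole boot sequence in order.
    """
    return hashlib.sha1(pcr + measurement).digest()

# Simulate measuring a boot chain: BIOS, then bootloader, then kernel.
pcr = b"\x00" * 20  # PCRs start zeroed at reset
for component in [b"bios image", b"bootloader image", b"kernel image"]:
    pcr = extend(pcr, hashlib.sha1(component).digest())

print(pcr.hex())  # same software, same order -> same PCR value
```

The important property is that there’s no way to set a PCR to an arbitrary value; you can only append to the chain, which is what makes the final value trustworthy as a record of what actually ran.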

Something that might not be obvious from the paper is that there are what seem to be three different areas of software and hardware that need to be present in order for a full trusted computing solution to work.

  • The hardware used to measure the running software.

  • Changes to the standard hardware architecture to accommodate this measuring chip.

  • Changes to the software running on the secured system.

For those looking for even more in-depth information there’s the Trusted Computing Platform Alliance (TCPA), which provides information about the measurement hardware. Both Intel and AMD have their own versions of the hardware changes needed to make trusted computing possible. Intel calls their version LaGrande Technology (LT), and the version from AMD is called Secure Execution Mode (SEM). I haven’t been able to pull up much info about the AMD effort, but there’s a very good overview of LaGrande from ExtremeTech. The last piece of the puzzle is operating system support. Microsoft has been doing a lot of work in this area, with a project that was first called Palladium but is now called the Next Generation Secure Computing Base (NGSCB). There’s a lot of information out there if you’re into the technical details, but for now I’m going to stick with a few overall concepts.

I went into the class thinking what I think most people think: that trusted computing could in some way keep me from running the software I want on my system. I thought that if trusted computing were to happen I might not be able to run Linux on some future version of the hardware. This isn’t true at all, because trusted computing provides no way to halt the system when rogue software is found. It just isn’t part of the plan. Looking at the Trusted Computing paper, the four major areas of functionality are:

  • Memory curtaining

  • Secure input and output

  • Sealed storage

  • Remote attestation

And most of that functionality is a benefit even when running an alternative system like Linux. The sealed storage concept was something I wasn’t aware of, and it could provide a lot of benefit. I think I still prefer traditional encryption to sealed storage, but I can see how the technology would be a benefit to most people. Even so, I have a lot of doubts about it. More on that later.
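For those who haven’t run into it, sealed storage ties encrypted data to the measured state of the machine: the chip will only hand the data back if the current PCR values match the ones recorded at seal time. Here’s a toy Python sketch of the idea. The SRK name comes from the spec (the Storage Root Key, which never leaves the chip), but the hash-based stream cipher is purely illustrative; the real hardware uses proper crypto.

```python
import hashlib, os

SRK = os.urandom(32)  # stand-in for the chip's Storage Root Key, which never leaves the TPM

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher for illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(data: bytes, pcr: bytes) -> bytes:
    # The key mixes in the PCR value, so decryption only works
    # when the platform is in the same measured state.
    key = hashlib.sha256(SRK + pcr).digest()
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def unseal(blob: bytes, current_pcr: bytes) -> bytes:
    key = hashlib.sha256(SRK + current_pcr).digest()
    return bytes(a ^ b for a, b in zip(blob, keystream(key, len(blob))))

pcr_then = hashlib.sha1(b"approved software stack").digest()
blob = seal(b"my secrets", pcr_then)

print(unseal(blob, pcr_then))                                  # b'my secrets'
print(unseal(blob, hashlib.sha1(b"modified stack").digest()))  # garbage
```

You can already see why I’m ambivalent: the same property that keeps a virus away from the data also locks me out of it the moment my software stack changes.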

So what’s the big deal then? Why is everyone so irked about the use of these trusted computing technologies if they can’t really interfere with the use of a PC in general, and they provide a lot of benefit? The problem lies in the last point on that list, remote attestation. This is described in some detail in the paper, but in brief it allows a remote entity to determine the configuration of software you’re running on your computer. There are some valid reasons for this to be used, reasons which I agree with in principle. It could be used when accessing your bank account so that your bank can make sure you aren’t using a system which has been compromised, or it could be used by an online game server to ensure that none of the participants is cheating. Those uses are pretty cool, but even though there are valid uses, this is really where the problems come in most drastically.

Remote attestation is a mechanism that vendors can use to deliver data or services only to people running “approved” configurations of software. For me, this is completely unacceptable. It is my feeling that companies would use this to perpetuate anti-competitive behavior. They engage in anti-competitive behavior already, but remote attestation would make the mechanisms for their discrimination an explicit part of the architecture. I don’t think someone else has a right to tell me how I can access a service any more than I think a bus driver has a right to decide who’s allowed to get on a bus based on their race. Neither has anything at all to do with the service being provided. I do not trust the law in this case to regulate the use of remote attestation in such a way that it can be used for beneficial purposes without being used for detrimental purposes. Simply not using the service is an obvious answer, but this doesn’t really work in practice. Access to the service can still be used to pressure users into running approved software, since otherwise they can’t interact with those who do have access to the service or data. This could be a huge hindrance to that class of users. Think of being cut off from the phone network unless you had an AT&T phone. You could just not use a phone at all, but it would be a huge pain in the ass. Seth goes into some detail about other nefarious uses of both remote attestation and sealed storage in his paper.
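To make the gatekeeping concrete, here’s a rough Python sketch of an attestation exchange as I understand it. In the real protocol the TPM signs with an RSA attestation identity key (AIK) and the challenger verifies with the public half; I’m faking the signature with an HMAC to keep the sketch self-contained, and the “approved” list is exactly the part I object to.

```python
import hashlib, hmac, os

AIK = os.urandom(32)  # stand-in for the TPM's attestation identity key;
                      # the real thing is an RSA keypair, not a shared secret

def quote(pcr: bytes, nonce: bytes) -> bytes:
    # The TPM signs the PCR value together with the challenger's nonce,
    # so the response can't be replayed from an earlier session.
    return hmac.new(AIK, pcr + nonce, hashlib.sha1).digest()

# --- challenger side (e.g. a bank or game server) ---
APPROVED = {hashlib.sha1(b"vendor-blessed software stack").digest()}

def verify(pcr: bytes, nonce: bytes, sig: bytes) -> bool:
    if not hmac.compare_digest(sig, quote(pcr, nonce)):
        return False          # quote wasn't produced by the TPM
    return pcr in APPROVED    # and this is where the gatekeeping happens

nonce = os.urandom(20)
my_pcr = hashlib.sha1(b"vendor-blessed software stack").digest()
print(verify(my_pcr, nonce, quote(my_pcr, nonce)))  # True: you're let in

my_pcr = hashlib.sha1(b"my own Linux build").digest()
print(verify(my_pcr, nonce, quote(my_pcr, nonce)))  # False: locked out
```

Note that nothing in that last check has anything to do with whether my machine is actually compromised; it only checks whether I’m running what the vendor blessed.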

So there’s the groundwork. Trusted computing doesn’t do the nefarious things I originally thought it could, and there are both good and bad points to what it can be used for. In my opinion sealed storage is mostly cool, and remote attestation is good and bad in almost equal measure. So what is my new outlook on the technology? Do I think it’s the horrid demon that I used to think it was? Fuck yes!! If anything, I think it’s even worse than it appeared to be. Now that I understand trusted computing more fully I think it could be the Antichrist. Once trusted computing devices get released I don’t think it will be long before it starts raining toads and the seas start to boil; the end of the world will be at hand.

The reason I have such huge problems with it is that on the surface it seems to provide real benefits with only slight problems. However, when we really think about those “slight problems” and compare what they could be used for against what companies are actually doing, we should see that they will really be huge problems. I’m assuming this is why Seth has been trying to advance the idea of an Owner Override for remote attestation. I don’t agree that attestation is the only problem, though; I think sealed storage is an issue also. I don’t want my programs to be able to put stuff on my hard drive that only they can access! It is cool that programs could store data which only they can access, but I think this will also be abused. If I use a program to create a chunk of content, that content is mine. It does not belong, nor do I want it to belong, to the software that created it. I might want the data to act like that, but I want the control of that behavior to be in my hands. The combination of the tamperproof TCPA hardware and software that can verify what environment it’s running in could result in programs which can effectively lock me out of my own data. Even though it could result in security in some cases, I find this abdication of control unacceptable.

The trusted computing environment boils down to turning over control of your system, in certain cases, to someone else’s hardware and software. As Seth also brings up, even the user of the computer is at times viewed as an adversary. I am not willing to give up that control. The price of freedom is eternal vigilance. If keeping my system completely under my control means that I have to spend a lot more time guarding it, that’s just the price that has to be paid. If it means that I might lose my private info or have a virus infect my system, that’s just a risk I’ll have to expose myself to for now. That is not to say that I don’t think computer security needs to be improved. It does. But giving up control of my computer to someone else is not the way I want to get to greater security. I do not trust trusted computing.