Monday, December 06, 2010

Update on Qubes

It's been a bit quiet on the Qubes development front for the last 2 months. The reason for this is that Rafal and I got fully engaged in a new commercial research project. After all, we do need to make money somehow, so that we can later spend it on funding Qubes development :)

But this new engagement is actually closely related to what we do with Qubes (i.e. how new hardware technologies allow us to build more secure OSes), so it's not like we're abandoning Qubes; the experience we gain from this research project will surely be useful when designing and implementing the Qubes 2.0 architecture.

In order to continue work on Qubes, we've decided to hire some Linux programmers, while Rafal and I continue with our research project over the coming months. We've started a cooperation with another Polish computer outfit, TLS Technologies, which specializes in advanced systems design and implementation.

There are a couple of people from TLS engaged in Qubes, and you will soon "meet" them on qubes-devel, in our wiki, and, of course, you will see their contributions in our git repos.

The plan is to have Beta 1 released sometime in January 2011. The two important features that will be implemented first, and that will make it into Beta 1 (apart from the long-awaited installer), are Firewall VMs and support for templates for service VMs. Stay tuned for more details soon!

If everything goes smoothly, then we should expect Qubes 1.0 sometime at the end of Q1 2011...

Wednesday, October 06, 2010

Qubes Alpha 3!

We have just uploaded the new packages for the Qubes Alpha 3 milestone. A lot of under-the-hood work went into this release, including:
  • Redesigned networking and NetVM support (for VT-d systems)
  • Reasonably stable S3 sleep support (suspend-to-RAM) that works even with a NetVM!
  • Improved GUI virtualization (all known bugs finally fixed!)
Disposable VMs are really a killer feature IMO. The screenshot below shows the user's experience:

The user right-clicks on a PDF file, chooses "Open in Disposable VM", and then waits 1... 2... 3... 4 seconds (assuming a reasonably modern laptop), and the document automagically opens in a fresh new Disposable VM. If you make some changes to the document (e.g. if it was a PDF form and you edited it), those changes will propagate back to the original file in the original AppVM.

So, within 4-5 seconds, Qubes creates a new VM, boots it up (actually restores it from a savefile), copies the file in question to the VM, and finally opens the application that is the registered MIME handler for this type of document, e.g. a PDF viewer. We're pretty confident this time could be further reduced to some 2 seconds, or maybe even less. This is planned for some later Beta release.

Dynamic memory balancing allows better utilization of the system's physical memory by moving it between running AppVMs in real time, according to each VM's actual needs. This allows more VMs to run, compared to a scheme with static memory allocation, and also largely eliminates the system hiccups that otherwise occur in a static scheme when one of the VMs runs short of memory and starts swapping.
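To make the idea more concrete, here is a minimal sketch of what such a balancing loop could look like. The helpers get_vm_demand() and set_vm_target() are assumptions standing in for the real xenstore-based communication with each VM's balloon driver, so treat this as an illustration of the approach, not the actual Qubes balancer:

```c
/* Hypothetical sketch of a dynamic memory balancer running in dom0.
 * get_vm_demand() and set_vm_target() are placeholders for the real
 * xenstore-based communication with the balloon driver in each VM. */
#include <stddef.h>

struct vm {
    int  id;
    long demand;   /* memory the VM currently wants, in MB */
    long target;   /* memory we will assign to it, in MB   */
};

extern long get_vm_demand(int vm_id);           /* assumed helper */
extern void set_vm_target(int vm_id, long mb);  /* assumed helper */

/* Distribute total_mb proportionally to each VM's reported demand. */
void balance(struct vm *vms, size_t n, long total_mb)
{
    long total_demand = 0;

    for (size_t i = 0; i < n; i++) {
        vms[i].demand = get_vm_demand(vms[i].id);
        total_demand += vms[i].demand;
    }
    if (total_demand == 0)
        return;

    for (size_t i = 0; i < n; i++) {
        vms[i].target = total_mb * vms[i].demand / total_demand;
        set_vm_target(vms[i].id, vms[i].target);
    }
}
```

The real balancer obviously needs per-VM minimums and some hysteresis so that memory doesn't bounce between VMs on every sample, but the proportional-redistribution loop above is the core of the idea.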

The screenshot above shows the memory usage on my 6GB laptop while writing this blog post. As you can see, I can easily run a dozen AppVMs (most users will not need that many, but I'm a bit more paranoid I guess ;) and could probably start a few more if there was such a need (e.g. open some Disposable VMs). Of course, this all depends on the actual type of workload the user runs in each VM. Most of my AppVMs run just one or two applications, usually a Web browser (Firefox), but some, e.g. the work and personal AppVMs, run much more memory-hungry applications such as Open Office or Picasa Photo Browser. I very rarely see more than 1 GB of memory allocated to a single VM, though. Generally speaking, the new memory management in Qubes works pretty nicely.

Currently, the biggest slow-down factor for Qubes is somewhat poor disk performance, most likely caused by the joint impact of the Xen block backend, the Linux dm layer, and kcryptd (we use the simplest possible Xen block backend for security reasons, and will move to more sophisticated backends when we introduce the untrusted storage domain in Qubes 2.0).

Most of the under-the-hood work for Qubes 1.0 now seems to be complete, and it's time for all the polishing of the user experience, which will be the main focus of the upcoming Beta development. Just a reminder that we're currently looking to hire developers for this effort.

The Installation instructions can be found here. Enjoy!

Tuesday, September 28, 2010

ITL is hiring!

We're looking to hire one or two full-time developers who will be working on the open source version of Qubes OS, with the primary task of advancing it from Alpha to Beta stage, and then finally to a production-quality version.

We're looking to hire developers, not necessarily security researchers! Specifically we expect the following from candidates:
  • Many years of experience with Linux/GNU development, including system-level and kernel-level Linux development, documented by actual projects,
  • Familiarity with virtualization technologies, and specifically with the Xen hypervisor,
  • Basic understanding of the Qubes architecture and excitement about the project :)
  • Product-oriented approach (polishing, testing, packaging, understanding of user needs),
  • Good communication skills in written English
In return we offer the following benefits:
  • Decent, full-time salary,
  • Opportunity to be part of a renowned security team,
  • Opportunity to work on an exciting product,
  • Work on a GPLed project with all the benefits it gives to the developer (visibility, rights to the code)
If you're interested in joining our team, please send a message to joanna at invisiblethingslab.com.

Please do not send typical resumes: don't write about the schools you finished, certificates you obtained, your driving license, scuba training, etc. We are only interested in a short bio (keep it below 100 words, please) and links to your past or current projects. Include your geographic location.

While it would be great if you were based in Warsaw (or somewhere in Poland), as it would allow for regular face-to-face meetings, this is not a critical factor. ITL doesn't have a physical office and everybody works from their apartments, so there is no need to relocate to Warsaw if you happen to be based somewhere else.

Monday, September 13, 2010

On Thin Clients Security

I'm constantly being asked about it, so I thought I would write a handy blog post that I could just refer to in the future, when yet another person asks me whether I think the use of Thin Clients is a game-changer for desktop security...

It is not! Thin Clients do not improve your desktop security in any way, and that's because:

  1. You still run a regular full-blown OS, such as Windows, and all the regular applications, such as those buggy PDF readers, Web browsers, etc. It's just that you run them all on some corporate server, rather than on your laptop. The fact that you run the OS on the corporate server doesn't make it any less prone to compromise than if you ran it locally on your laptop.


  2. A compromise of your laptop, even if it's just a dumb terminal, is still fatal! This is because if your laptop's kernel (or MBR, or BIOS, or some PCI device's firmware, or GPU) is compromised, the attacker can intercept/steal/spoof all the data that you work on remotely, because it is still your laptop that processes the input (keystrokes, mouse events) and output (pixels). So, an Evil Maid attack on your laptop when you use it as a Thin Client would be just as devastating as it is otherwise (and don't fool yourselves that crypto tokens can help).

We really need secure end-user systems, even if we just want to use them as dumb terminals! There is really no way we could skip this step (and e.g. focus only on infrastructure or services security).

Thursday, September 09, 2010

(Un)Trusting your GUI Subsystem

Why do we need secure desktop systems? Why is support from hardware necessary to build secure desktop OSes? Does virtualization make things more, or less, complex? Why is Dynamic RTM (Intel TXT) better than Static RTM? Can we have an untrusted GUI domain/subsystem?

I tried to cover those questions in my recent keynote at ETISS, and you can grab the slides here.

In particular, slide #18 presents the idealistic view of an OS that could be achieved through the use of hardware virtualization and trusted boot technologies. It might look very similar to many other pictures of virtualized systems one can see these days, but what makes it special is that all the dark gray boxes represent untrusted domains (so their compromise is not security-critical, except for the potential of a denial of service).

No OS currently implements this architecture, not even Qubes. We still have the storage and GUI subsystems in Dom0 (so they are both trusted), although we already know (we think) how to implement the untrusted storage domain (this is described in detail in the arch spec), and the main reason we don't have it now is that TXT market adoption is so poor that very few people could make use of it.

The GUI subsystem is, however, a much bigger challenge. When we think about it, it should really feel impossible to have an untrusted GUI subsystem, because the GUI subsystem "sees" all the pixmaps that are to be displayed to the user, so also all the confidential emails, documents, etc. The GUI is different in nature from the networking subsystem, where we can use encrypted protocols to prevent the netvm from sniffing or meaningfully intercepting the application-generated traffic, or from the storage subsystem, where we can use fs-encryption and trusted boot technologies to keep the storage domain from reading or modifying the files used by apps in any meaningful way. We cannot really encrypt the pixmaps (in the apps, or AppVMs), because for this to work we would need graphics cards able to do the decryption and key exchange (note how this is different from the case of an untrusted storage domain, where there is no need for internal hardware encryption!), and the idea of putting, essentially, an HTTPS webserver on your GPU is doubtful at best, because it would simply move the target from the GUI domain to the GPU, and there is really no reason why lots-of-code in the GPU would be any harder to attack than lots-of-code in the GUI domain...

So we recently came up with the idea of a Split I/O model, also presented in my slides, where we separate the user input (keyboard, mouse), which stays in dom0 (the trusted domain), from the output (GUI, audio), which is moved into an untrusted GUI domain. We obviously need to make sure that the GUI domain cannot "talk" to other domains, so that it cannot "leak out" the secrets that it "sees" while processing the various pixmaps. For this we need the hypervisor to ensure that all the inter-domain shared pages mapped into the GUI domain are read-only for the GUI domain, and this implies that the GUI protocol, exposed by the GUI domain to other AppVMs, must be unidirectional.
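To illustrate the mechanism (this is a sketch, not Qubes code): on Xen, a page granted by an AppVM can be mapped by the GUI domain with the GNTMAP_readonly flag, as below. In the split-I/O model the hypervisor itself would additionally have to enforce that only such read-only mappings are ever allowed for the GUI domain, rather than trusting the GUI domain to ask for them; the exact headers and setup are environment-specific and simplified here.

```c
/* Sketch only (not Qubes code): a guest kernel mapping a page granted
 * by an AppVM into the GUI domain, read-only, via Xen's grant tables.
 * Exact headers and hypercall plumbing are environment-specific. */
#include <xen/interface/grant_table.h>  /* struct gnttab_map_grant_ref, GNTMAP_* */
#include <string.h>

int map_appvm_pixmap_ro(domid_t appvm, grant_ref_t gref,
                        unsigned long host_addr, grant_handle_t *handle)
{
    struct gnttab_map_grant_ref op;

    memset(&op, 0, sizeof(op));
    op.host_addr = host_addr;
    op.flags     = GNTMAP_host_map | GNTMAP_readonly;  /* no write access */
    op.ref       = gref;
    op.dom       = appvm;

    /* Ask the hypervisor to map the foreign page at host_addr. */
    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1) || op.status)
        return -1;

    *handle = op.handle;
    return 0;
}
```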

There are more challenges though, e.g. how to keep the bandwidth of timing covert channels between the GUI domain and other AppVMs, such as those through the CPU caches, at a reasonably low level (please note the distinction between a covert channel, which requires the cooperation of two domains, and a side channel, which requires just one domain to be malicious; the latter are much more of a theoretical problem, and are of concern only in some very high-security military systems, while the former are usually easy to implement in practice, and present a practical problem in this very scenario).

Another problem, which was immediately pointed out by the ETISS audience, is that an attacker who compromised the GUI domain can manipulate the pixmaps being processed in the GUI subsystem to present a false picture to the user (remember, the attacker should have no way to send them out anywhere). This includes attacks such as button relabeling ("OK" becomes "Cancel" and the other way around), content manipulation ("$1,000,000" instead of "$100", and vice versa), security label spoofing ("red"-labeled windows becoming "green"-labeled), and so on. It's an open question how practical these attacks are, at least when we consider automated attacks, as they require the ability to extract some semantics from the pixmaps (where is the button, where is the decoration), as well as an understanding of the user's actions, intentions, and behavior (just automatically relabeling my Firefox window to "green" would be a poor attack, as I would immediately realize something was wrong). Nevertheless this is a problem, and I'm not sure how it could be solved with the current hardware architecture.

But do we really need an untrusted GUI domain? That depends. Currently in Qubes the GUI subsystem is located in dom0, and thus it is fully trusted, which also means that a potential compromise of the GUI subsystem is considered fatal. We try to make an attack on the GUI as hard as possible, and this is the reason we have designed and implemented a special, very simple GUI protocol that is exposed to other AppVMs (instead of e.g. using the X protocol or VNC). But if we wanted to add some more "features", such as 3D hardware acceleration for the apps (3D acceleration is already available to the Window Manager in Qubes, but not to the apps), then we would not be able to keep the GUI protocol so simple anymore, and this might result in introducing exploitable fatal bugs. So, in that case it would be great to have an untrusted GUI domain, because we would be able to provide feature-rich GUI protocols, with all the OpenGL-ish things, without worrying that somebody might exploit the GUI backend. We would also not need to worry about putting all the various 3rd-party software in the GUI domain, such as KDE, Xorg, and various 3rd-party GPU drivers, like e.g. NVIDIA's closed source ones, and that some of it might be malicious.
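To give a feel for what "very simple" means here, below is a purely illustrative sketch of the kind of tiny, fixed-size message header such a protocol can be built from. The message and field names are made up for this example (the real protocol is defined in the Qubes sources); the point is that a fixed-layout, trivially parseable format like this is a far smaller attack surface than X11 or VNC.

```c
/* Illustrative only -- not the actual Qubes GUI protocol.  Shows the
 * flavor of a deliberately tiny, fixed-size, easy-to-parse protocol. */
#include <stdint.h>

enum gui_msg_type {
    MSG_CREATE_WINDOW = 1,   /* AppVM announces a new window             */
    MSG_DAMAGE        = 2,   /* "this rectangle of the pixmap changed"   */
    MSG_DESTROY       = 3,   /* the window is gone                       */
};

struct gui_msg_hdr {
    uint32_t type;       /* one of gui_msg_type                          */
    uint32_t window_id;  /* AppVM-chosen window identifier               */
    uint32_t len;        /* length of the fixed-size body that follows   */
};

struct gui_msg_damage {
    uint32_t x, y;       /* top-left corner of the damaged rectangle     */
    uint32_t width;
    uint32_t height;
};
```

The backend only has to validate a handful of integers per message; there is no state machine remotely comparable to an X server's.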

So, generally, yes, we would like to have an untrusted GUI domain. We can live without it, but then we will not have all the fancy 3D acceleration for games, and we will also need to carefully choose and verify the GUI-related software (which is a lot of software).

But perhaps in the next 5 years everybody will have a computer with a few dozen cores, and the CPU-to-DRAM bandwidth will be orders of magnitude higher than today, so there will no longer be a need to offload graphics-intensive work to a specialized GPU, because one of our 64 cores will happily do the work? Wouldn't that be a nicer architecture, also for many other reasons (e.g. better utilization of power/circuit real estate)? In that case nobody will need OpenGL, and so there will be no need for a richer GUI protocol than what is already implemented in Qubes...

It's quite exciting to see what will happen (and what we will come up with for Qubes) :)

BTW, some people might confuse X server de-privileging efforts, i.e. making the X server run without root privileges, which is being done in some Linux distros and BSDs, with what has been described in this article, namely making the GUI subsystem untrusted. Please note that a de-privileged X server doesn't really solve any major security problems related to the GUI subsystem, as whoever controls ("0wns") the X server (de-privileged or not) can steal or manipulate all the data that this X server is processing/displaying. Apparently there are some reasons why people want to run Xorg as non-root, but in the case of typical desktop OSes this provides little security benefit (unless you want to run a few X servers with different user accounts, and on different vt's, which most people would never do anyway).

Thursday, September 02, 2010

Qubes, Qubes Pro, and the Future...

The work on Qubes OS has been extremely exciting and also very challenging for us. While most of the work we have been doing so far relates to solving various technical, under-the-hood challenges, the more important long-term goals relate to mitigating the so-called "human factor", i.e. making the system not only easy to use, but tolerant of user absentmindedness. This includes e.g. ensuring the user uses the correct AppVM (e.g. do the banking in the "banking" AppVM, and not in the "random web browsing" AppVM, and also not the other way around: don't do random surfing in the "banking" AppVM), and generally making the whole isolation between AppVMs as seamless as possible, but without sacrificing security at the same time.

This is becoming very important, as the technical level of security in Qubes is already very high, and so the "human factor" might easily become a low-hanging fruit for the attacker (in contrast to other OSes).

But for Qubes to become something more than just an interesting OS for Linux geeks and security enthusiasts, it is also critical to have better application support. Right now Qubes lets users run Linux apps, because each AppVM is Linux-based. But, and let's not be afraid to admit this: Linux sucks when it comes to application support! (Take Open Office as an example: it not only looks like MS Office 97, but is also terribly user-unfriendly, especially its presentation program, Impress. Why is it so difficult to make it look and behave more like Apple Keynote?)

There is only one way to provide better application support in Qubes: make it support Windows-based, or Mac-based, AppVMs. Just imagine that: being able to run most of your Windows (or Mac) applications, but at the same time benefiting from Qubes' strong isolation and seamless integration on one common desktop...

In order to implement support for Windows-based AppVMs (or alternatively Mac-based AppVMs) we would need to engage significant resources (5+ very skilled developers, working full time for 1+ year), so we're currently looking for an investor that would be able to provide funding for such an endeavor. The idea is to create a dedicated spin-off company that would focus entirely on Qubes and Qubes Pro, and would in the future make a profit from selling Qubes Pro licenses. Qubes Pro would become a commercial product, still based on the open source Qubes, but adding support for Windows-based or Mac-based AppVMs. I would be happy to discuss the details and the business plan via email with interested potential investors.

Speaking about the future of Qubes: next week I will speak at the European Trusted Infrastructure Summer School, where I will talk about some general topics, like why we need secure desktop systems and why trusted computing might be the way to go, but I will also dive a little bit into some new things we plan for Qubes 2.0, such as the storage domain and the split I/O graphics model. The conference features some very reputable speakers in the system-level security field, such as David Grawrock (the father of Intel TXT and TPM), and Loic Duflot (our venerable competitor in the field of offensive system-level research), so I consider it an honour to deliver an opening keynote there (check the agenda here).

I will have my Qubes laptop with me, of course, so if anybody is interested in seeing Qubes OS live (including Disposable VMs!), I would be happy to do a quick demo on the spot.

Thursday, August 19, 2010

The MS-DOS Security Model

Back in the '80s, there was an operating system called MS-DOS. This ancient OS, some readers might not even remember it today, had a very simple security model: every application had access to all the user files and other applications.

Today, over two decades later, the overwhelming majority of people still use the very same security model... Why? Because on any modern, mainstream OS, be it Linux, Mac, or Windows, all the user applications still have full access to all the user's files, and can manipulate all the user's other applications.

Does that mean we haven't progressed anywhere since the MS-DOS age? Not quite. Modern OSes do have various anti-exploitation mechanisms, such as ASLR, NX, guard pages (well, Linux has had them since last week at least), and even some more.

But in my opinion there has been too much focus on anti-exploitation, and on bug finding, (and on patching, of course), while almost nothing has been done on the OS architecture level.

Does anybody know why Linux Desktops offer the ability to create different user accounts? What a stupid question, I hear you saying: different accounts allow you to run some applications isolated from the user's other applications! Really? No! The X server, by design, allows any GUI application to mess with all the other GUI applications being displayed by the same X server (on the same desktop). So, what good is it to have a "random_web_browsing" user, if the Firefox instance run under this account would still be able to sniff or inject keystrokes into all my other GUI applications, take screenshots of them, etc.?
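If you want to convince yourself, here is a small self-contained sketch using nothing but standard Xlib calls: any client connected to the display can walk the window tree and grab the pixels of every other client's windows (and with key grabs or XTest it could just as easily sniff or inject input). Nothing here requires root or any special privilege beyond access to the display.

```c
/* Minimal demonstration of the lack of GUI-level isolation in X11:
 * any client on the display can enumerate all other clients' windows
 * and read their contents.  Build roughly as: cc snoop.c -lX11 */
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* same display = full access */
    if (!dpy)
        return 1;

    Window root = DefaultRootWindow(dpy);
    Window root_ret, parent, *children;
    unsigned int nchildren;

    /* Enumerate all top-level windows -- including those belonging to
     * other user accounts, if they share this X server. */
    if (XQueryTree(dpy, root, &root_ret, &parent, &children, &nchildren)) {
        for (unsigned int i = 0; i < nchildren; i++) {
            XWindowAttributes attr;
            if (!XGetWindowAttributes(dpy, children[i], &attr) ||
                attr.map_state != IsViewable)
                continue;
            /* Take a "screenshot" of somebody else's window. */
            XImage *img = XGetImage(dpy, children[i], 0, 0,
                                    attr.width, attr.height,
                                    AllPlanes, ZPixmap);
            if (img) {
                printf("captured window 0x%lx (%dx%d)\n",
                       (unsigned long)children[i], attr.width, attr.height);
                XDestroyImage(img);
            }
        }
        XFree(children);
    }
    XCloseDisplay(dpy);
    return 0;
}
```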

[Yes, I know, user accounts theoretically also allow a single desktop computer to be shared among more than one physical user (also known as: people), but, come on, these days it's rather that a single person has many computers, and not the other way around.]

One might argue that progress in anti-exploitation, and also in safe languages, will make it nearly impossible to e.g. exploit a Web browser in the next few years, so there would be no need to have a "random_web_browsing" user in the first place. But we need isolation not only to protect ourselves when somebody exploits one of our applications (e.g. a Web Browser, or a PDF viewer), but also, and perhaps most importantly, to protect ourselves from maliciously written applications.

Take a summer holiday example: imagine you're a scuba diver. Now, being also a decently geeky person, no doubt you will want some dive log manager application to store the history of your dives on a computer. There are a dozen such applications on the web, so all you need to do is pick one (you know, the one with the nicest screenshots), and... well, you need to install it on your laptop now. But hey, why should this little, made-by-nobody-knows-who dive application be given unlimited access to all your personal files, work email, bank account, and god-knows-what-else-you-keep-on-your-laptop? Anti-exploitation technology would do exactly nothing to protect your files in this case.

Aha, it would be so nice if we could just create a user "diving" and run the app under this account. In the future, you could throw some advanced deco planning application into the same account, still separated from all the other applications.

But, sorry, that would not work, because the X server doesn't provide isolation at the GUI level. So, again, why should anybody bother creating any additional user accounts on a Linux Desktop?

Windows Vista made a small step forward in this area by introducing integrity levels that, at least theoretically, were supposed to prevent GUI applications from messing with each other. But they didn't scale well (IIRC there were just 3 or 4 integrity levels available), and it still isn't really clear whether Microsoft treats them seriously.

So, why we have user accounts on Linux Desktops and Macs at all is beyond me (I guess Mac's X server doesn't implement any GUI-level isolation either; if I'm wrong, please point me to the appropriate reference).

And we haven't even touched on the problems that might arise from an attacker exploiting a bug in the (over-complex) GUI server/API, or in the (big fat) kernel (with hundreds of drivers). In order for those attacks to become really interesting (like Rafal's attack that we presented yesterday), the user would have to already be using e.g. different X servers (and switching between them using Ctrl-Shift-Fn), or some sandboxing mechanisms, such as the SELinux sandbox, or, in the case of Vista, a scheme similar to this one.

Tuesday, August 17, 2010

Skeletons Hidden in the Linux Closet: r00ting your Linux Desktop for Fun and Profit

A couple of months ago, while working on Qubes GUI virtualization, Rafal came up with an interesting privilege escalation attack on Linux (a user-to-root escalation) that exploits a bug in... well, actually it doesn't exploit any concrete bug, which makes it so much more interesting.

The attack allows an (unprivileged) user process that has access to the X server (so, any GUI application) to unconditionally escalate to root (but again, it doesn't take advantage of any bug in the X server!). In other words: any GUI application (think e.g. a sandboxed PDF viewer), if compromised (e.g. via a malicious PDF document), can bypass all the fancy Linux security mechanisms, escalate to root, and compromise the whole system. The attack even allows escaping from SELinux's "sandbox -X" jail. To make it worse, the attack has been possible for at least several years, most likely since the introduction of kernel 2.6.

You can find the details of the attack, as well as a discussion of possible solutions, including the one that has eventually been implemented, in Rafal's paper.

One important aspect the attack demonstrates is how difficult it is to bring security to a desktop platform, where one of the biggest challenges is to let applications talk to the GUI layer (e.g. the X server in the case of Linux), which usually involves a very fat GUI protocol (think the X protocol, or the Win32 GUI API) and a very complex GUI server, and at the same time keep things secure. This was one of the key priorities for us when designing the Qubes OS architecture. (So, we believe Qubes is much more secure than other sandboxing mechanisms, such as BSD jails or SELinux-based sandboxes, because it not only eliminates kernel-level exploits, but also dramatically slims down the GUI-level attack surface.)

The kernel-level "patch" was implemented last week by Linus Torvalds, and pushed upstream into recent stable kernels. RedHat has also released an advisory for this attack, in which they rated its severity as "high".

ps. Congrats to Brad Spengler for some good guessing :)

Thursday, July 01, 2010

Qubes Alpha 2 released!

The Alpha 2 is out!
New screenshots are here :)

Tuesday, June 01, 2010

Disposable VMs

While we're still busy with the last few tickets left for the Qubes Alpha 2 milestone, Rafal has already started working on a new feature for Qubes Beta 1: Disposable VMs. I think this is really gonna be a killer feature, and I wanted to say a few words about it.

Disposable VMs will be very lightweight VMs that can be created and booted in a very short time, say < 1s, with the sole purpose of hosting only one application, e.g. a PDF viewer or a Media Player.

To understand why Disposable VMs are important, imagine the following situation -- you receive an email from a customer that contains a PDF attachment, say an invoice or a contract. Obviously you're opening and reading the message in an email client running in your "work" AppVM (or "work-email" AppVM, if you're paranoid), just because it is work-related correspondence, arriving at your professional email address (for many reasons it is good to use different email addresses for job-related activities and for personal life).

However, the chances of somebody compromising your email client by just sending you a maliciously crafted message that would exploit your body or subject parsers are very small, if you have disabled the full HTML parser for message bodies (which I think most security-conscious people do anyway). Perhaps a more effective attack vector would be for somebody to 0wn your email server first, and then try to exploit the IMAP/POP/SMTP protocol parser in your email client. But hey, in that case they would already have access to all your emails on the corporate server, without exploiting your email client (they could, however, gain access to your PGP keys this way -- if this bothers you, you might want to use smartcards for your PGP keys). There is also the possibility of mounting a Man-In-The-Middle attack and trying to exploit the SSL protocol parsers early on, but this could be prevented using a separate VPN AppVM in Qubes.

But now you would like to open this PDF that the customer just sent you. It's quite reasonable to be afraid that the PDF might be malicious and might try to exploit your PDF viewer, and then try to steal your emails or other things you keep in the "work" AppVM (or "work-email" AppVM). It doesn't matter whether you trust the sender, as the sender's OS might very well be compromised by some malware and might be infecting all outgoing PDFs without the user's consent.

You could try opening the PDF in one of your non-sensitive VMs, e.g. the "random" VM that you use for casual Web browsing, to make sure that even if the PDF is malicious, it won't get access to any sensitive data. But what if the PDF is not malicious, and what if it contains some confidential data? In that case you might be throwing the baby out with the bath water (your "random" VM might already have been compromised, and now it would be able to steal the secrets from your PDF file).

A Disposable VM is an ideal solution here. You create a clean, disposable VM just for the purpose of viewing the PDF. Then, once you're done, you just throw it away. If the PDF was malicious, it could have done harm only to its own disposable VM, which doesn't contain anything except... this very PDF. At the same time, the disposable VM is always started in a clean state, so there is no way somebody could steal the document. Only the document can steal itself :)

That all sounds easy, but to make it practical we need a very efficient implementation of disposable VMs, and good system integration, so that the experience is seamless for the user. E.g. the user should only be required to right-click on a file and choose "Open in a Disposable VM", and Qubes should take care of everything else: creating the VM, starting it, copying the file to the VM, and starting the MIME-associated application for this type of file (e.g. PDF) in the VM. And all this in under 1s!
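Conceptually, the glue behind such a right-click action could look like the sketch below. The dispvm-* command names are hypothetical placeholders for whatever dom0 tooling ends up implementing these steps, so treat this as an outline of the flow rather than actual Qubes code:

```c
/* Conceptual sketch only -- the command names (dispvm-create,
 * dispvm-copy-in, dispvm-run) are hypothetical placeholders for the
 * dom0 tooling that would actually implement these steps in Qubes. */
#include <stdio.h>
#include <stdlib.h>

int open_in_disposable_vm(const char *path)
{
    char cmd[1024];

    /* 1. Create and start a fresh disposable VM (restored from a
     *    pre-prepared savefile, so it comes up in seconds, not minutes). */
    if (system("dispvm-create --from-savefile") != 0)
        return -1;

    /* 2. Copy the document into the new VM. */
    snprintf(cmd, sizeof(cmd), "dispvm-copy-in '%s'", path);
    if (system(cmd) != 0)
        return -1;

    /* 3. Open it inside the VM with the registered MIME handler, and
     *    tear the whole VM down once the viewer exits. */
    snprintf(cmd, sizeof(cmd),
             "dispvm-run --destroy-on-exit xdg-open '%s'", path);
    return system(cmd) == 0 ? 0 : -1;
}
```

The file manager would simply register a handler that calls something like the function above, so from the user's point of view the whole thing is just one menu click.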

Basic support for Disposable VMs is planned for Beta 1, which is scheduled sometime at the end of the summer holidays. But I can tell you that's just the beginning. The ultimate goal, from the user's point of view, is to make Qubes OS look and behave just like a regular mainstream OS like Linux, or Windows, or even Mac, but still with all the strong security that the Qubes architecture provides, deployed behind the scenes. Seamless support for Disposable VMs is one of the first steps towards achieving this goal.

Special credits go to Matt Piotrowski, who has just left Berkeley, and whose recently published thesis was a direct inspiration to implement disposable VMs in Qubes. While we did mention "one-time" VMs in our architecture document back in January (see chapter 4.6), it really was Matt's paper that convinced me we should have them in Qubes. Virtics, a proof-of-concept implementation written by Matt, shares lots of similarities with Qubes, e.g. the architecture and implementation of the GUI virtualization. There are also differences, though, and I refer readers to Matt's paper for more details.

Monday, May 03, 2010

On Formally Verified Microkernels (and on attacking them)

Update May 14th, 2010: Gerwin Klein, the project lead for L4.verified, has posted some insightful comments. It's also worth reading their website here, which clearly explains what assumptions they make, what they really prove, and what they don't.

You must have heard about it before: formally verified microkernels that offer 100% security... Why don't we use such a microkernel in Qubes then? (The difference between a microkernel and a type I hypervisor is blurry, especially in the case of a type I hypervisor used for running para-virtualized VMs, such as Xen used in Qubes. So I would call Xen a microkernel in this case, although it can also run fully-virtualized VMs, in which case it should be called a hypervisor, I think.)

In order to formally prove some property of any piece of code, you need to first assume certain things. One such thing is the correctness of the compiler, so that you can be sure that all the properties you proved for the source code still hold true for the binary generated from that source code. But let's say it's a feasible assumption -- we do have mature compilers indeed.

Another important assumption you need, and this is especially important when proving kernels/microkernels/hypervisors, is a model of the hardware your kernel interacts with. Not necessarily all the hardware, but at least the CPU (e.g. MMU, mode transitions, etc.) and the chipset.

While the CPUs are rather well understood today, and their architecture (we're talking IA32 here) doesn't change so dramatically from season to season, the chipsets are a whole different story. If you take the spec for any modern chipset, let's say only the MCH part, the one closer to the processor (on Core i5/i7 even integrated on the same die), there are virtually hundreds of configuration registers there. Those registers are used for all sorts of different purposes -- they configure DRAM parameters, PCIe bridges, various system memory map characteristics (e.g. the memory reclaiming feature), access to the infamous SMM memory, and finally VT-d and TXT configuration.

So, how are all those details modeled in the formal verification process for microkernels? Well, as far as I'm aware, they are not! They are simply ignored. The nice way of saying this in academic papers is to say that "we trust the hardware". This, however, might be incorrectly understood by readers to mean "we don't consider physical attacks". But these are not the same thing! And I will give a practical example in a moment.

I can bet that even the chipset manufacturers (think e.g. Intel) do not have formal models of their chipsets (again, I will give a good example to support this thesis below).

But why are the chipsets so important? Perhaps they are configured "safe by default" on power-on, so even if we don't model all the configuration registers and their effects on the system, and if we won't be playing with them, maybe it's safe to assume all will be fine then?

Well, it might be that way, if we could have secure microkernels without IOMMU/VT-d and without some trusted boot mechanism.

But we need IOMMU. Without IOMMU there is no security benefit of having a microkernel vs. having a good-old monolithic kernel. Let me repeat this statement again: there is no point in building a microkernel-based system, if we don't correctly use IOMMU to sandbox all the drivers.

Now, setting up IOMMU/VT-d permissions requires programming the chipset's registers, and is by no means a trivial task (see the Intel VT-d spec to get an impression, if you don't believe me). Correctly setting up the IOMMU is one of the most security-critical tasks to be done by a hypervisor/microkernel, so it would be logical to expect that they also formally prove that this part is done flawlessly...
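As a grossly simplified mental model (this is not how the real registers or tables are laid out), the remapping hardware has to answer the following question for every single DMA request, which is exactly why getting its setup wrong is fatal. Without an IOMMU there is no such check at all, and a device's DMA reaches any machine address:

```c
/* Grossly simplified mental model of what a VT-d/IOMMU does with a DMA
 * request.  The real hardware walks root/context tables and multi-level
 * page tables programmed through chipset registers, as described in the
 * Intel VT-d specification; this sketch only captures the decision. */
#include <stdint.h>
#include <stdbool.h>

struct dma_mapping {
    uint64_t bus_addr;    /* address the device asked for              */
    uint64_t host_addr;   /* machine address it is allowed to reach    */
    uint64_t len;
    bool     writable;
};

struct device_context {
    uint16_t bdf;                    /* PCI bus/device/function         */
    const struct dma_mapping *maps;  /* mappings granted to this device */
    unsigned int nmaps;
};

/* Return the machine address for a DMA request, or 0 if it must be blocked. */
uint64_t iommu_translate(const struct device_context *ctx,
                         uint64_t bus_addr, bool write)
{
    for (unsigned int i = 0; i < ctx->nmaps; i++) {
        const struct dma_mapping *m = &ctx->maps[i];
        if (bus_addr >= m->bus_addr && bus_addr < m->bus_addr + m->len) {
            if (write && !m->writable)
                return 0;            /* write to a read-only mapping: blocked */
            return m->host_addr + (bus_addr - m->bus_addr);
        }
    }
    return 0;                        /* no mapping for this device: blocked */
}
```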

The next thing is trusted boot. I will argue that without a proper trusted boot implementation, the system cannot be made secure. And I'm not talking about physical attacks, like Evil Maid. I'm talking about true, remote, software attacks. If you haven't read it already, please go back and read my very recent post on "Remotely Attacking Network Cards". Building on Loic's and Yves-Alexis's recent research, I describe there a scenario in which we could take their attack further to compromise even such a securely designed system as Qubes. And this could be possible because of a flaw in the TXT implementation. And, indeed, we have demonstrated an attack on Intel Trusted Execution Technology that exploits one such flaw before.

Let's quickly sketch the whole attack in points:

  1. The attacker attacks a flaw in the network card processing code (Loic and Yves-Alexis)

  2. The attacker replaces the NIC's firmware in EEPROM to survive the reboot (Loic and Yves-Alexis)

  3. The new firmware attacks the system trusted boot via a flaw in Intel TXT (ITL)

    • If the system uses SRTM instead, it's even easier -- see the previous post (ITL)

    • If you have a new SINIT module that patched our attack, there is still an avenue to attack TXT via SMM (ITL)

  4. The microkernel/hypervisor gets compromised with a rootkit and the attacker gets full control over the system :o

And this is the practical example I mentioned above. I'm sure readers understand that this is just one example of what could go wrong at the hardware level (and be reachable by a software-only attacker). Don't ignore hardware security! Even for software attacks!

A good question to ask is: would a system with a formally verified microkernel also be vulnerable to such an attack? And the answer is yes! Yes, unless we could model and prove the correctness of the whole chipset and the CPU. But nobody can do that today, because it is impossible to build such a model. If it were possible, I'm pretty sure Intel would already have such a model, and they would not have released an SINIT module with the stupid implementation bug we found and exploited in our attack.

So, we see an example of a practical attack that could be used to fully compromise a well designed system, even if it had a formally verified microkernel/hypervisor. Compromise it remotely, over the network!

So, are all those whole microkernel/hypervisor formal verification attempts just a waste of time? Are they only good for academics so that they could write more papers for conferences? Or for some companies to use them in marketing?

Perhaps the formal verification of system software will never be able to catch up with the pace of hardware development... By the time people learn how to build models (and how to solve them) for the hardware used today, the hardware manufacturers will, in the meantime, have presented a few new generations of hardware, for which the academics will need another 5 years to catch up, and so on.

Perhaps the industry will take a different approach. Perhaps in the coming years we will get hardware that would allow us to create untrusted hypervisors/kernels that would not be able to read/write usermode pages (Hey Howard ;)? This is currently not possible with the hardware we have, but, hey, why would a hypervisor need access to Firefox's pages?

And how will all this affect Qubes? Well, the Qubes project is not about building a hypervisor or a microkernel. Qubes is about how to take a secure hypervisor/microkernel and build the rest of the system in a secure, and easy to use, way, using the isolation properties that this hypervisor/microkernel is expected to provide. So, whatever kernels we have in the future (better formally verified ones, e.g. including the hardware in the model, or ones based on some exciting new hardware features), the Qubes architecture will still make perfect sense, I think.

Saturday, May 01, 2010

Evolution

If you have been following my research over the last several years (even in the days before ITL), you will undoubtedly have noticed how much I have changed my profile over that time...

Several years ago, Alex Tereshkin (who later became ITL employee #1) and I were known mostly as rootkit researchers. It was back in the days when the word "rootkit" was not as well known as it is today (it became well known sometime in late 2005, and I remember that when I was applying for a US visa that year, the immigration officer in the Warsaw embassy asked me what I did professionally, and when I replied that I was a security researcher specializing in rootkits, he was very happy to tell me that he had just read about those "rootkits" somewhere, although he was not very worried about them, because he was a Mac user...)

But then, in the following years, we decided to explore other areas, like virtualization, trusted computing, and chipset security, and we even touched on CPU security briefly. Many valuable contributions in those areas have come from Rafal Wojtczuk, who joined our team some two years ago.

And then, finally, we became ready to actually build something meaningful. Not just yet another nonsense, trivial-to-break "security product", but something that had the potential to really improve users' security. And so the Qubes project idea was born, and it soon became ITL's highest-priority project.

So, these days we don't do any reverse engineering or malware analysis anymore. We'd rather design systems so that they are immune to rootkits by design (e.g. by significant TCB reduction), than analyze each and every new rootkit sample caught in the wild and try to come up with a detector for it.

Of course, this all doesn't mean we're giving up on our offensive research. There is still a chance you will hear about some new attacks from us. But these would surely be limited only to attacks that we consider relevant in an environment that is already designed with security in mind, like Qubes :) So, e.g. an attack against VT-d, or some CPU exploit, or a Xen exploit, might be extremely interesting. But don't expect to see any research on how to e.g. compromise the Windows 7 or Mac kernel, or break out of their primitive sandboxes -- these systems are so badly designed from a security standpoint that coming up with yet another attack against them makes little sense from a scientific point of view.

Naturally, I'm all excited about all this: that I've been exploring new areas, and that my work has finally started becoming meaningful. But that is, of course, only my subjective opinion. Specifically, this turned out not to be the case for Alex, who simply enjoys reverse engineering and compiler hacking for the sake of doing it (Alex did some excellent work on metamorphic code generators, work that is years ahead of what you can read about at public conferences). Unfortunately, with the new course we have taken at ITL, Alex started getting fewer and fewer chances to apply his skills, and faced a decision: whether to stay at ITL and do other things, i.e. other than reversing or compiler hacking, or to quit and continue doing what he has always liked to do.

The reader has probably figured out by now that Alex decided to quit ITL. I fully understand his decision and wish him all the best in his new adventures!

You should still be able to reach Alex using his old ITL's email address (alex@), or directly via his new email: alex.tereshkin at gmail.com.

Friday, April 30, 2010

Remotely Attacking Network Cards (or why we do need VT-d and TXT)

I've finally found some time to study Loic Duflot's and Yves-Alexis Perez's recent presentation from last month on remotely attacking network cards. You can get the slides here.

In short, they exploit a buffer overflow in the network card's firmware by sending malicious packets to the card, and then gain full control over the card's firmware, so they can e.g. issue DMA to/from the host memory, effectively fully controlling the host (that's another example of a "Ring -3 rootkit", I would say). The buffer overflow is in some exotic management protocol (which I think is disabled by default, but that's irrelevant) implemented by the NIC's firmware (the NIC has its own RISC processor, and memory, and a stack, which they overflow, etc.).

I like this research very much, because it demonstrates several important things:

First, it shows that it is definitely a good idea to isolate/sandbox all the OS networking code using IOMMU/VT-d. And this is exactly what we do in Qubes.

Second, the attack provides a real-world example of why a Static Root of Trust for Measurement (SRTM) is inferior to a Dynamic RTM (DRTM), e.g. Intel TXT. To understand why, let's make the following assumptions:
1) The OS/VMM properly uses IOMMU to isolate the network card(s), just like e.g. Qubes does.
2) Once the attacker got control over the NIC firmware, the attacker can also modify the persistent storage (EEPROM) where this firmware is kept. This has been confirmed by Loic in a private email exchange.
3) The system implements trusted boot via SRTM, i.e. using just BIOS and TPM, without Intel TXT.

Now, the attacker can modify the firmware in the EEPROM, and this will allow the attacker to survive a platform reboot. The card's firmware will start executing early in the boot process, definitely before the OS/VMM gets loaded. Now, the compromised NIC, because it is capable of doing DMA to the host memory, can compromise the image of the VMM in the short time window between the moment it gets measured and loaded by the (trusted) OS loader, e.g. Trusted GRUB, and the moment the VMM has had a chance to set up proper IOMMU/VT-d protections for itself.

Of course, in practice, it might be tricky for the compromised NIC firmware to know precisely the time window in which it should send its compromising DMA write request. If the DMA was issued too early, the trusted OS loader would calculate a wrong hash and put a wrong value into a PCR register, which would later prevent the system from completing the boot, and prevent the attack. If the DMA was issued too late, the IOMMU/VT-d protections would already be in place, and the attack would again be unsuccessful. But, hey, much harder obstacles have been worked around by smart exploit writers in the past, so don't comfort yourself that the attack is hard. If it's possible, it means this technology is flawed, period.

And this is where DRTM, AKA Intel TXT, shows its advantage over simple SRTM. When you load a hypervisor using TXT, the SENTER instruction first applies the VT-d protections around the hypervisor image, then does the measurements, and only then loads it, with the VT-d protections still in place.

The above is the theory. A few months ago we demonstrated an attack against this scheme, but that attack exploited a flaw in the TXT implementation, not in its design, so it didn't render TXT useless as a technology.

A much bigger problem with Intel TXT is that Intel has still done nothing to prevent SMM-based attacks against TXT. This is what we demonstrated about 1.5 years(!) ago. Our research stressed that TXT without protection from SMM is essentially useless. Intel then promised to come up with a spec on how to write an STM, and on how TXT should work with an STM (when to measure/load it, etc.), but nothing has been released by Intel in all this time, AFAIK...

Now, without an STM (which is supposed to provide protection from a potentially compromised SMM), TXT cannot really prevent Loic and friends from owning the system, even if it runs such a securely designed OS as Qubes. This is because Loic would be able to modify e.g. the MBR while the system boots (thanks to the DMA capability of the infected NIC firmware), and then attack the SMM from this MBR (I can bet lots of money that Loic & co. would easily find a few other SMM exploits in any recent BIOS if they only wanted to), and then, having infected the SMM, they would be able to compromise the TXT-loaded hypervisor, and finally compromise the whole system.

I know there are some people from various governments reading this blog. If you really want to have secure systems, consider pushing Intel to finally do something about the SMM-based attacks against TXT. Beware, Intel will try to tell you that, using a TXT LCP, you can seal your secrets to only "trusted" SMM images, and will try to convince you that this is a way to prevent SMM attacks on TXT. It is not. Only true SMM sandboxing is a proper way to address this problem.

Anyway, congrats to Loic and colleagues for yet another very interesting and meaningful system-level research!

Wednesday, April 07, 2010

Introducing Qubes OS

For the last 6 months we have been busy with a new project: Qubes. Qubes is an open source OS based on Xen, X, and Linux, designed to provide strong isolation for desktop computing. The link to the project website is at the end of the post.

The system is currently in the alpha stage, but if you're determined, it's actually usable. For example, I switched to Qubes around a month ago, and two weeks ago I even decided to wipe and reinstall my MacBook, which used to be my primary laptop. Now I use my old MacBook only for making slides (Apple Keynote really has no competition) and the Web page for Qubes :) And I use Qubes for pretty much all the other daily tasks, from work, shopping, banking, and random browsing, to Qubes development itself (which takes place in the "qubes" AppVM).

Just remember to make backups regularly if you decide to use Qubes for anything other than testing and development.

So, enough of introduction, you will find lots of details (including a 40-page PDF describing the system architecture) at the Qubes project website. Enjoy!

Update 7-Apr-2010 15:56 CEST: The server seems to be overloaded a bit by the traffic... If you are planning to install the OS, I guess it would be wise to postpone downloading the installation packages until later this week, when the first wave of visitors goes away.

Update 7-Apr-2010 16:31 CEST: The Wiki doesn't work due to lack of free memory... Talking to my provider about buying some more RAM. Sorry for the inconvenience.

Update 7-Apr-2010 18:28 CEST: The server has been brought offline for RAM upgrade. Should be back online in some 15 minutes...


Saturday, January 16, 2010

Priorities

It’s interesting how many people don’t realize what are the priorities in computer security... There are many fields to secure: server security, web applications security, network security, and finally desktop security. Over the last years I met SO many people that always expressed surprise why I would like to focus on desktop systems security? They usually argue that today, as everybody knows, it is the Network that is what computing is all about and that we should focus on securing infrastructure, and forget about the desktops, which are always to be insecure. The network is the computer, as somebody said.

What those people forget is that it is always the desktop that ultimately gets access to all the user's secrets -- all the passwords, all the keys, all the corporate documents, all the nude holiday pictures, all the secret love letters, all the credit card numbers, and many more.

However secure all the services we use (remote servers and network protocols) might be, if our desktop gets compromised, it's all lost. The recent incident with Google is just yet another example of that. Our desktop systems are the most crucial piece of the whole puzzle.

It’s funny how many people think that by using some thin client solution on their desktops they can solve the problem. Of course they cannot! Just the fact that your OS executes on a server, rather then on your hardware, doesn’t make it any less prone to all the attacks that were otherwise possible when the software executed on your system.

Attempts to secure desktops have been failing for so many years. While recently there has been some effort to minimize the likelihood of remote attacks via Web browsers (or, generally, to focus on application security), this is still just the tip of the iceberg -- there are so many other attack avenues that none of the popular OSes even tries to address, that I consider myself a brave person (not to say a stupid one) for actually using my laptop every day and keeping some sensitive information on it ;)

Ok, so that’s a nice piece of complaining you say, but what are we, at ITL, gonna do about it? Well, we just gonna sit and patiently wait for better OSes to appear some day... Oh, hell, we won’t!

Happy New Year :)

<please ignore>
9933 F096 8820 0E23 1AF4 078D 8BDB D97D BDEA 9E9D
</>