News + Stories: What is security for you?
Kay Römer: There are two aspects here. The first is called confidentiality. This means that information is only released from a system to those persons who are authorised to receive it. The second is integrity, in other words that only those people who are authorised to do so can modify a system. In my view, this is what security means.
From what perspective do you view security?
Römer: My area of expertise is not security as such, but rather an overarching term called dependability. This is the collective set of properties of a system that allow people to rely on it, and security is one of them. It includes confidentiality and integrity, but there is a lot more to it. For example, the availability of a system, i.e. that the system is not constantly switched off for maintenance. In that case, nothing actually bad happens, but nothing useful happens either. Another aspect here is reliability, which aims to ensure that the system does what I want it to do. Maintainability is also part of this, so that I can make any necessary changes or adjustments. My own focus is not on people who deliberately hack systems, but the world is bad enough even without them. There are all kinds of environmental influences, interference, dirt and much more, and all of these can impair a system just as much as a targeted hack. We looked at all of this in the Lead Project “Dependable Internet of Things in adverse environments”, and security was part of it. One thing about security is that it can hardly ever be retrofitted to a system; it has to be included from the beginning. When you design a modern vehicle, for example, you have to think about this everywhere. It’s like a chain: a system is only as secure as its weakest link. That’s why you have to look at the whole system from all angles at the outset and really think it through. This is just as true for security as it is for dependability.
To what extent does security also influence dependability? If security is not up to par, how much does that hamper dependability?
Römer: If we take dependability as the list of properties mentioned above, then the system is no longer dependable as soon as one of these things is not guaranteed. If it is not secure, it cannot be dependable. That would be one context. Another factor is the concept of safety, which is not the same as security. The connection between the two terms is as follows. Security means protecting the system from people, i.e. from hackers. Safety means that I protect people from the system, so that the system does not harm people because it has been hacked or because it is faulty. An example would be an industrial robot that runs somebody over due to a failed sensor. Safety and security are of course closely linked. If a system is hacked, then it is no longer secure, but it may also no longer be safe because the system harms people as a result of the hack. Some good examples of this can be found in internet videos. For example, the car that someone hacked into remotely and whose controls they were able to take over. This is also a safety problem, because the person sitting in the car can potentially be driven into a ditch, even though they were actually driving perfectly correctly. So these two things are very closely linked.
Due to the limited resources and the possibility of physical access, I can hack many IoT devices in a much more perfidious way than by only sending messages over the network
What is the danger if cybersecurity is not available in the IoT?
Römer: The devices in the Internet of Things usually have little memory, little computing power and a very limited energy reserve. They are often battery-operated, but are designed to work for over ten years without the battery having to be replaced. This is a challenge, especially in terms of security, as you can’t do many things that work with a PC or a server. You don’t have the computing power, memory or energy for that. Due to this challenge, you usually have to come up with special ideas. A second very specific thing in relation to cybersecurity in the IoT is that hacks of classic computer systems usually involve sending certain messages to the computer via the internet, which then trigger a stack or buffer overflow, for example, allowing the attacker to take over the system. With the Internet of Things, with these networked embedded systems, the devices are not normally located in a protected room or in a data centre, but are actually everywhere. This makes them relatively easy to reach and to access physically. One example of this is side channels, where I measure the power consumption and thus find out something about the system. Or I can attach a cable to the chip and thus influence what the system does. These are physical attacks, and they are often not possible in the classic internet because I don’t have direct access to the computer. Due to the limited resources and the possibility of physical access, I can hack many IoT devices in a much more perfidious way than by only sending messages over the network. So, these are two very specific challenges.
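To make the idea of a side channel more concrete: the power-analysis attacks Römer mentions exploit the fact that an observable physical quantity depends on secret data. The toy Python sketch below illustrates the same principle with a timing side channel instead of power consumption; the PIN, the per-digit delay and the check routine are all invented for illustration and are not taken from the interview.

```python
# Toy timing side channel: a naive PIN check returns early on the first
# mismatch, so the time it takes leaks how many leading digits are correct.
# (Hypothetical example; real attacks on IoT chips often use power traces.)
import time

SECRET_PIN = "4931"  # hypothetical secret stored on a device

def naive_check(guess: str) -> bool:
    """Compare digit by digit and bail out at the first mismatch."""
    for g, s in zip(guess, SECRET_PIN):
        if g != s:
            return False
        time.sleep(0.001)  # stands in for per-digit work on a slow microcontroller
    return len(guess) == len(SECRET_PIN)

def measure(guess: str, repeats: int = 20) -> float:
    """Total time for several checks of the same guess."""
    start = time.perf_counter()
    for _ in range(repeats):
        naive_check(guess)
    return time.perf_counter() - start

# The attacker recovers the PIN digit by digit by picking, at each position,
# the candidate digit whose check takes longest.
recovered = ""
for _ in range(4):
    timings = {d: measure(recovered + d + "0" * (3 - len(recovered))) for d in "0123456789"}
    recovered += max(timings, key=timings.get)
print("recovered PIN:", recovered)
```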
You said that the chain is only as strong as its weakest link. In a private household, more and more things are networked, for example the fridge, the stereo system or the oven. How great is the risk of someone gaining access to the entire home network through such devices, which are not as well secured as a computer?
Römer: That depends very much on how it is structured internally. Many devices in our research environment can be found in the industrial sector and are used, for example, to measure certain things in buildings. The computers are often not fully-fledged participants in the internet, but only transmit sensor data to more powerful computers using a special protocol. If I manage to hack into this little device, I can hardly take over other devices and cause damage there. But there are newer systems where it is de facto the case that every small sensor, no matter how primitive, is an equal participant in the internet. Once I have hacked into them, there is a certain risk that I can attack or take over the others from this base in a domino-like manner. This really depends a lot on the internal structure. If the hacked device is just a small sensor that sends measured values somewhere, the worst I can do is falsify the measured values, but I can’t automatically take over the next computer in the chain.
Your main areas of research are networked embedded systems and the Internet of Things. What are the security requirements here and what are the dangers?
Römer: In addition to the things already mentioned, we not only looked at several computers in terms of the strength of the chain, but also at the various components of a single device. A computer itself already has various functions. It starts with the sensors for detecting the environment, then there are the processors for processing the data and so on. Then I have the external devices, I have the radio communication – and all of that I can attack. I can attack the sensor and make it believe that something is there that isn’t. I can also interfere with radio communication and smuggle in messages that have not been sent at all. Or I can block the message transmission completely. Of course, I can also mess up the processor so that it doesn’t calculate what it should. In the Lead Project, we looked at these different aspects together, the sensor technology, data processing, radio communication and transmission, in order to make everything reliable.
Did you end up with a universal solution or do you need several specific solutions?
Römer: Some of these are very specific solutions. We are still a relatively long way from an all-in-one solution. But there are procedures that are more universal. One of these is also a good example of what we looked at in the Lead Project. For wireless technologies such as Bluetooth Low Energy, which are used in almost every mobile phone or laptop today, there is an official standard that describes how the protocol works. However, each manufacturer has a slightly different implementation of it. We wanted to know whether two devices with chips from different manufacturers interoperate correctly with each other via Bluetooth. The classic approach would be to simply connect two devices together and test for two hours to see if everything works at all times. If it does, then we can hope that it will continue to work for the next 100 years. Instead, we first used machine learning methods to learn how one device does Bluetooth communication. To do this, we systematically questioned the device the way a teacher questions a pupil and checked what it reported back in response to various queries. From these answers we learnt a mathematical description, a model, of the Bluetooth implementation of the device. We then did the same with the second Bluetooth device. The models learnt for the two devices can then be compared mathematically to see if they are compatible. If that’s the case, I can give it my stamp of approval and say it works. Or the system finds a counter-example where it doesn’t work – for instance if the Bluetooth chip in the lamp does this and that with the Bluetooth chip in the laptop and then there is a big bang and nothing works any more. And in principle, this is not only possible for Bluetooth Low Energy, but for all kinds of other technologies, such as WiFi. But it’s really not so simple that it works for WiFi at the touch of a button; you have to sit down again and put a lot of work into it.
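As a rough illustration of the comparison step described above: in the Lead Project, models of real Bluetooth Low Energy stacks were learned by actively querying the chips. The simplified Python sketch below skips the learning and uses two hand-written toy state machines (standing in for hypothetical vendors A and B) to show how a counterexample trace is found on which the two learned models disagree.

```python
# Minimal sketch, under simplifying assumptions: two toy Mealy machines stand
# in for two vendors' (learned) protocol models, and a breadth-first search
# over the paired models looks for an input sequence where the outputs differ.
from collections import deque

# (state, input) -> (next_state, output); invented toy protocol
IMPL_A = {
    ("idle",  "connect"): ("ready", "ack"),
    ("idle",  "send"):    ("idle",  "error"),
    ("ready", "connect"): ("ready", "ack"),
    ("ready", "send"):    ("ready", "data"),
}
IMPL_B = {  # differs from A: answers a repeated "connect" with "error"
    ("idle",  "connect"): ("ready", "ack"),
    ("idle",  "send"):    ("idle",  "error"),
    ("ready", "connect"): ("idle",  "error"),
    ("ready", "send"):    ("ready", "data"),
}
INPUTS = ["connect", "send"]

def find_incompatibility(model_a, model_b, start="idle", max_depth=6):
    """Explore both models in lockstep; return the first input sequence
    (if any) on which their outputs diverge, i.e. a counterexample."""
    queue = deque([(start, start, [])])
    seen = set()
    while queue:
        sa, sb, trace = queue.popleft()
        if (sa, sb) in seen or len(trace) > max_depth:
            continue
        seen.add((sa, sb))
        for inp in INPUTS:
            na, oa = model_a[(sa, inp)]
            nb, ob = model_b[(sb, inp)]
            if oa != ob:
                return trace + [inp], oa, ob
            queue.append((na, nb, trace + [inp]))
    return None  # no divergence found up to max_depth

print(find_incompatibility(IMPL_A, IMPL_B))
# -> (['connect', 'connect'], 'ack', 'error'): a concrete counterexample trace
```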
In principle, the same rules apply to the Internet of Things as for laptops when it comes to passwords and security
So, the advantage is that you only have to “learn” the communication once for each device and then it can be compared with all the others instead of having to pair each device with every other device...
Römer: Exactly. In practice, you could get the chips from the manufacturers and learn what each chip does. It would then be easy to see whether they are compatible or not.
More and more people have networked technical devices in their homes – whether ovens, stereo systems or other things. Should users be made more aware of the fact that there are also security risks lurking in the background for them?
Römer: In principle, the same rules apply to the Internet of Things as for laptops when it comes to passwords and security. Interestingly, many problems are not about being able to hack into such systems, but rather people not setting passwords at all and leaving the factory-set ones unchanged. Social engineering is also a problem here, meaning that people write down passwords because they are so complicated and then leave them lying around somewhere unprotected. People should pay attention to these things. And then of course the effects of attacks on IoT devices and laptops or PCs are different. I can perform a hack to obtain information or to modify the system – these are the issues of confidentiality and integrity. And what does this mean for an IoT device? On the one hand, there are sensors in it, some of which collect data about me as a person. With IoT devices there is the potential that very personal, sensor-based information about me and my everyday life can be obtained, such as how often I go to the toilet or how often I am in the kitchen. Another difference is that the ramifications can also be of a physical nature when it comes to IoT devices. Let’s take the refrigerator mentioned earlier. We now have models whose temperature I can set using an app. This could be used by an attacker to set the temperature so that my food spoils. Here, physical damage can be caused instead of simply information being stolen. The consequences can therefore be much worse than when the laptop is hacked. However, the rules for dealing with security are the same here too. I shouldn’t think it’s just a small sensor, it won’t do anything bad and I don’t have to worry about it. You really have to pay attention to this and configure things accordingly. A firewall may also be necessary to isolate these devices from critical systems. Unfortunately, many people still lack this awareness.
What are particularly critical areas in the event of an attack?
Römer: One example that we deal with intensively is positioning. You have GPS outdoors, but it doesn’t work indoors. This requires special indoor positioning systems, which are vulnerable to attacks. For example, there are attacks that trick the system into believing that the physical distance between two devices is greater or smaller than it actually is. This is dangerous when a robot that has to maintain certain safety distances is in use. Normally it should stay far enough away from people. However, if the attack makes the measured distance appear greater than the actual distance, the robot believes a person is further away than they really are and could run over or into them.
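A back-of-the-envelope example of how such a distance manipulation plays out, assuming a simple time-of-flight ranging scheme (the interview does not specify the ranging method, and all figures below are invented):

```python
# Sketch of a distance-manipulation attack on time-of-flight ranging,
# where distance is derived as d = c * t_round / 2.
C = 299_792_458.0  # speed of light in m/s

def measured_distance(round_trip_s: float) -> float:
    """Distance the system computes from the measured round-trip time."""
    return C * round_trip_s / 2

true_round_trip = 2 * 2.0 / C          # person actually 2.0 m from the robot
attacker_delay  = 40e-9                # attacker delays the reply by 40 ns

honest  = measured_distance(true_round_trip)
spoofed = measured_distance(true_round_trip + attacker_delay)

print(f"true distance:    {honest:.2f} m")    # 2.00 m
print(f"spoofed distance: {spoofed:.2f} m")   # ~8.00 m, which looks "safe"

SAFETY_MARGIN_M = 3.0
# The robot keeps moving because the spoofed distance exceeds its safety
# margin, although the person is really only 2 m away: a security flaw
# that turns into a safety risk.
print("robot keeps moving:", spoofed > SAFETY_MARGIN_M)
```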
A team led by Maria Eichlseder at TU Graz has developed the ASCON algorithm, which has been selected as the international standard for lightweight cryptography. What can such an algorithm do and what can it not influence at all?
Römer: ASCON is a cryptographic algorithm that can be used to encrypt data on one side and decrypt it on the other. Ultimately, this means that the network connection between two devices can be protected. But it does not help with physical attacks on the devices themselves. I can still go to the pins of the processor with a measuring device and measure something. I can still fool the sensors into believing that they are measuring information that is not physically present. The algorithm doesn’t help here.
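A minimal sketch of what link-level authenticated encryption buys you and where its protection ends. ASCON would fill exactly this role on constrained devices; since the article names no specific ASCON library, the widely used ChaCha20-Poly1305 AEAD from Python's cryptography package stands in here to show the same encrypt-and-authenticate pattern (the sensor name and reading are invented).

```python
# What encrypting the link protects, and what it does not.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.exceptions import InvalidTag

key = ChaCha20Poly1305.generate_key()   # shared between sensor and gateway
aead = ChaCha20Poly1305(key)

# Sensor side: encrypt and authenticate a reading before it leaves the device.
nonce = os.urandom(12)
reading = b"temperature=21.4C"
ciphertext = aead.encrypt(nonce, reading, b"sensor-17")

# Gateway side: decryption fails loudly if anyone tampered with the message
# in transit, which covers confidentiality and integrity on the network path.
try:
    print(aead.decrypt(nonce, ciphertext, b"sensor-17"))
except InvalidTag:
    print("message was modified in transit")

# What this does NOT protect against: an attacker with physical access can
# still probe the chip's pins, try to extract the key, or feed the sensor a
# fake physical stimulus before the value is ever encrypted.
```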
But wouldn’t that only be the case if I had physical access to these systems?
Römer: Yes, that is the prerequisite for this, although this is often also the case in the IoT sector.
If I can’t constantly install new, better hardware and better security protocols, how can I still ensure that everything works reliably and well over a fairly long period of time?
What are the biggest areas needing improvement in embedded networked systems and the Internet of Things?
Römer: I think the limited performance of devices is still a challenge. As far as I understand it, ASCON also has certain requirements in terms of performance. In the Internet of Things, there are many devices with very low performance and only a few devices with higher performance. As I understand it, it would not be possible to use ASCON at the lower end of the performance spectrum. So, we have to come up with something in this area in particular. Another point is that with IoT devices, the idea is that you install them once and then they work for 20 years without anyone having to take care of them or maintain them. Forecasts predict that there will eventually be some 100 billion of these devices, and that won’t just be fridges and cars. They will also include the smallest sensors in buildings that monitor certain material properties. That creates the extremely difficult challenge of updating these periodically. It’s not like a smartphone, which I replace with a new one every two years because it’s no longer powerful enough or no longer supports the latest security features. These IoT devices have to run for ten, 15 or 20 years.
And what do I do if new security vulnerabilities are found? I can’t tear the whole world apart, throw away all the IoT devices and install new ones. This quickly raises the question of sustainability. If I can’t constantly install new, better hardware and better security protocols, how can I still ensure that everything works reliably and well over a fairly long period of time? This is a pretty fundamental challenge and there is no real solution yet. One method could be the aforementioned approach that we used in the Lead Project. After all, the security issue is a bit like the race between the hare and the hedgehog. Hackers find out that a programming error somewhere can be exploited, then the security experts come along to find a solution; one is always chasing the other, but there is never an end in sight. The big hope would be systems that are provably secure. Similar to the Bluetooth testing, where it can be proven that two devices work together, it would be necessary to prove that, under certain assumptions, a system can no longer be hacked. This could alleviate the problem of longevity somewhat.
What can non-experts do to protect themselves?
Römer: In my view, the best non-experts can do is to really use the security mechanisms that have been made available with the devices. This means setting passwords appropriately and not storing them everywhere on the computer or in the cloud, because a hacker can read them there. And you should be aware that these small devices are just as susceptible to attacks as your laptop, although it may not seem so at first glance. If non-experts heed this advice, they can protect themselves to a certain extent even without a detailed technical understanding.
This research area is anchored in the Field of Expertise “Information, Communication & Computing”, one of five strategic foci of TU Graz.
You can find more research news on Planet research. Monthly updates from the world of science at Graz University of Technology are available via the research newsletter TU Graz research monthly.