VW Emissions Scandal: Death Knell for IoT?

One can hardly read a word about the recent Volkswagen emissions scandal without replacing our collective Fahrvergnügen with Schadenfreude. Massive German auto maker, caught red-handed falsifying emissions data. Heads are gonna roll!

While we have to give VW execs some credit for finally owning up to the deception, their scapegoating is a different story. According to the VW leadership, who’s at fault in this sorry tale? Three rogue software engineers.

Seriously? With billions of dollars at stake, who’s responsible for planning and executing a massive cover-up involving hundreds of thousands of vehicles? Three coders?

Implausible as this finger-pointing sounds, the specifics of who did what, and when, in this sordid tale have yet to be revealed. So from this point on, I’ll be speaking hypothetically.

Hypothetically speaking, then, let’s consider an automobile manufacturer we’ll call, say, XY. Are the programmers of the emissions device software at XY the likely perpetrators of such an escapade?

It is certainly possible to program software to yield incorrect results. After all, you can program software to give you whatever results you want. However, any good software quality assurance (SQA) team should be able to catch such shenanigans.

The basics of SQA are white box and black box testing. White box means the testers analyze the source code itself – which would usually catch any code that intentionally gives the wrong result.

However, even if the coders were subtle enough with their malfeasance to slip past white box testing, black box testing should trip them up.

With black box testing, testers begin with a set of test data and run them through the software. They check the actual results against the desired results. If they don’t match, then they know there’s a problem. Since the whole point of the malicious code is to generate incorrect results, any competent black box test should call out the crime.
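
To make that concrete, here’s a minimal black box test sketch in Python. The compute_emissions() routine and its test values are hypothetical stand-ins for illustration, not anything from actual emissions software:

    # Black box test sketch: the tester never looks at the source of the
    # unit under test, only runs test data through it and compares actual
    # results against known-correct expected results.

    def compute_emissions(raw_reading):
        # Hypothetical stand-in for the device software under test.
        return raw_reading * 0.001  # convert raw sensor counts to PPM

    # Test data paired with independently determined expected results.
    test_cases = [
        (3200, 3.2),
        (5000, 5.0),
    ]

    for raw, expected_ppm in test_cases:
        actual = compute_emissions(raw)
        assert abs(actual - expected_ppm) < 1e-9, (
            f"expected {expected_ppm} PPM, got {actual}")

    print("all black box tests passed")

Any code that intentionally skewed the results would fail the assertion and call out the crime.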

We can only assume the code in question passed all of its tests. So at the very least, the testers at XY are either incompetent or in collusion with the three rogue engineers – and either of these situations indicates a broader problem than simply three bad coder apples.

The Insider Calibration Attack

So, are the perpetrators in XY’s sordid tale of deception a broad conspiracy involving engineers and testers? Perhaps, or perhaps not.

There is another approach to falsifying the emissions data altogether, one that wouldn’t have to involve the engineers who wrote the code for the emissions devices or the testers who checked it. That approach is a calibration attack.

Calibration attacks are so far off the cybersecurity radar that they don’t even have a Wikipedia page – yet. Which is surprising, as they make for a great arrow in the hacker’s quiver, since they don’t depend upon malicious code, and furthermore, encryption doesn’t prevent them.

In the case of XY, their subterfuge might in fact be such an insider calibration attack. Here’s how it works.

There are emissions sensors in each automobile that generate streams of raw data. Those raw data must find their way into the software running inside the emissions device that is producing the misleading results. But somewhere in between, either on a physical device or as an algorithm in the software itself, there must be a calibration step.

This calibration step aligns the raw data with the real-world meaning of those data. For example, if the sensor is detecting parts per million (PPM) of particulate matter in the exhaust, a particular sensor reading during a controlled test might be some number, say, 48947489393. Without the proper calibration, however, there’s no way to make sense of this number.

To conduct the calibration, a calibration engineer would use an analog testing tool to determine that the actual PPM value at that time was, say, 3.2 PPM. The calibration factor would be the ratio of 48947489393 to 3.2, or 15296090435.3125 (in real-world scenarios the formula might be more complicated, but you get the idea).

The engineer would then turn a dial somewhere (either physically or by setting a calibration factor in the software) that represents this number. Once the device is properly calibrated in this way, the readings it gives should be accurate.

However, if the calibration engineer does the calibration incorrectly – or a malefactor intentionally introduces a miscalibration – then the end result would be off. Every time. Even though there was nothing wrong with the sensor data, no security breach between the sensor and emissions device, and furthermore, every line of code in the device was completely correct.
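
To put numbers on it, here’s a minimal sketch of the calibration arithmetic and the attack in Python. The readings, the 3.2 PPM reference, and the 25% fudge factor are all hypothetical, continuing the example above:

    # Calibration: align raw sensor counts with an analog reference reading.
    raw_reading_during_test = 48947489393  # raw counts from the sensor
    analog_reference_ppm = 3.2             # measured with an analog testing tool

    calibration_factor = raw_reading_during_test / analog_reference_ppm
    print(calibration_factor)              # 15296090435.3125

    def to_ppm(raw_counts, factor):
        # Convert raw sensor counts to a PPM reading via the calibration factor.
        return raw_counts / factor

    # Properly calibrated, the device reports accurate readings:
    print(to_ppm(48947489393, calibration_factor))  # 3.2

    # The calibration attack: inflate the factor by 25%, and every reading
    # comes out 20% low, even though the sensor data are correct and every
    # line of device code works exactly as written.
    tampered_factor = calibration_factor * 1.25
    print(to_ppm(48947489393, tampered_factor))     # 2.56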

In fact, the only way to detect a calibration attack is by running an independent analog test. In other words, someone would have to get their own exhaust particulate measuring device and run tests on real vehicles to see if the emissions device was properly calibrated.

Which, of course, is how the dirty deeds at VW – oops, I mean XY – were finally uncovered.

The Bigger Story: External Calibration Attacks

So, why did I put “death knell for IoT” in the title of this article? XY’s emissions devices weren’t on the Internet, and thus weren’t part of the Internet of Things. But of course, they could have been – and dollars to donuts, soon will be.

The most likely scenario for XY’s troubles is an internal calibration attack – but scenarios where hackers mount calibration attacks from outside are far more unsettling.

My Internet research on this topic turned up few discussions of this type of attack. However, there has been some academic research into external calibration attacks in the medical device arena (see this academic paper from the UCLA Computer Science Department as an example).

Here’s a likely scenario: your IoT-savvy wearable device sends diagnostic information to your physician. Physicians have software on their end that they use to analyze the data from such devices for diagnostic purposes.

If a hacker is able to compromise the calibration of the transmitted data, then the physician may be tricked into reaching an incorrect diagnosis – even though your wearable is working properly, the physician’s software is working properly, and the communication between the two wasn’t compromised.

The conclusion of the UCLA report reads in part: “The proposed attack cannot be prevented or detected by traditional cryptography because the attack is directly dealing with data after sampling. Traditional cryptography can only guarantee the data to be safe through the wireless channels.”
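
Here’s a sketch of that point in Python, using the third-party cryptography package’s Fernet recipe as the “secure channel.” The wearable scenario, the blood pressure values, and the 25% skew are all hypothetical:

    # Why encryption can't catch a calibration attack: the value is
    # corrupted after sampling but before it ever enters the encrypted
    # channel, so the channel faithfully delivers a wrong number.
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()
    secure_channel = Fernet(key)

    true_systolic = 120.0  # what the sensor actually sampled: a healthy reading

    # The attacker skews the calibration of the transmitted data.
    compromised_reading = true_systolic * 1.25  # now reads a hypertensive 150

    # The wireless channel itself is perfectly secure end to end...
    token = secure_channel.encrypt(str(compromised_reading).encode())

    # ...and the physician's software decrypts it with integrity intact,
    # yet still sees 150 instead of the true 120.
    received = float(secure_channel.decrypt(token).decode())
    print(received)  # 150.0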

In fact, as with the XY scenario, the only sure way to detect such an attack is to run an independent, analog test of the data. In the case of XY, there was a single calibration attack that impacted a large number of devices – and it still took years before somebody bothered to run the independent analog test.

In the case of the IoT, every single IoT device is subject to a calibration attack. And the only way to identify such attacks is to run an independent test on the data coming from or going to every IoT endpoint.

Even if there were a practical way of running such tests (which there isn’t), we must still ask ourselves whether we would rely upon IoT-enabled devices to run such tests. If so, we haven’t solved the problem – we’ve simply expanded our threat surface to include the devices we’re using to uncover calibration attacks themselves.

The Intellyx Take

Let’s say you just put on your fancy new fitness wearable. You go for a run and when you get back, you get a frantic call from your doctor, who tells you your blood pressure is 150 over 100 – a dangerous case of hypertension.

But then you ask yourself, how do you know the values are accurate? Well, you don’t. The only way to tell is to test your blood pressure with a different device and compare the results. So you borrow your spouse’s fancy new fitness wearable, and it gives your doctor the same reading.

If they’re the same model from the same manufacturer, then of course you’re still suspicious. But even if they’re different devices, you have no way of knowing whether your doctor’s software is properly calibrated.

So you get out your trusty sphygmomanometer (like we all have one of those in our medicine cabinets) and test your blood pressure the old-fashioned way.

Then it dawns on you. What good is that fancy new fitness wearable anyway? You’d be suspicious of any reading it gave your doctor, so to be smart, you’d put on that old-fashioned cuff for a trustworthy reading anyway. But if you’re going to do that, then why bother with the new IoT doodad in the first place?

This blood pressure scenario is simpler than the XY case, because we’re only worried about a single reading. In the general case, however, we have never-ending streams of sensor data, and we need sophisticated software to make heads or tails out of what they’re trying to tell us.

If a calibration attack has compromised our IoT sensor data, then the only way to tell is to check all those data one at a time – a task that becomes laughably impractical the larger our stream of IoT sensor data grows.

Encryption won’t help. Testing your software won’t help. And this problem will only get worse over time. Death knell for the IoT? You be the judge.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Morgan.

Comments

  1. Encryption won’t work and testing won’t help against an external calibration attack. But physical security and properly partitioning your infrastructure with security boundaries will.

    I can’t recall who said it (maybe Sun Tzu?), but when you organize a military camp, every group of tents should be placed relative to its neighbors so that it can aid them in case of attack, yet also defend itself in case a neighboring group proves to be rogue. Similarly, each tent within a group should be set up the same way. It’s the same with IoT. But true, it’s something we still have to learn to think about and fold into our designs.

    So no, I don’t think the IoT will choke to death because of the VW scandal.

    1. Yes, physical security and partitioned infrastructure are essential. So let’s say you have 10,000 pieces of heavy equipment that generate massive streams of telemetry, and you have such protections in place.

      Can you be 100% sure you’re secure? Of course not — you’re never 100% sure.

      So how do you tell if any of those pieces of equipment has been compromised by a calibration attack, in spite of your protections? Test the data from every single sensor with an analog test.
