An Internet of Things that do what they’re told

Our things are getting wired together, and you're not secure if you can't control the destiny of your private information.

The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.

Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, in order to deter cell-phone thieves. Early data suggests that these laws are effective at deterring this kind of crime, but at a high, largely needless, and ill-considered price.

To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.

The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.

That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.

It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door that can be opened by hacking, social engineering, or legal pressure on a manufacturer or carrier, your phone’s security remains intact only so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.

I don’t mean to say that all carrier security sucks, or every customer service rep is gullible, or that all cops are dirty. But some carrier security sucks, sometimes: remember that when security researchers revealed that AT&T’s billing system was exposing customers’ financial info, the company responded by having the researchers arrested. And some customer service reps can be socially engineered, sometimes: remember that Wired writer Mat Honan had his entire digital life wiped out because a teenager was able to trick CSRs for some of the biggest companies in the industry. And of course, cops sometimes betray their public trust for alarmingly small sums of cash, like the crooked police officers who sold British tabloid reporters the private mobile numbers of celebrities and government officials.

We don’t know how to make back doors that only good guys can go through.

In other words: as soon as you create a back door on phones, you create the possibility that someone will abuse it. We don’t know how to make back doors that only good guys can go through. And that’s before we get to the security issues that arise from standardizing telco-controlled back doors in phones that are sent to countries where the rule of law is compromised or nonexistent. A year ago, Ukrainians who attended the Euromaidan demonstrations in Kiev had their mobile phone IDs harvested by state security services using Stingray devices; the security services then ordered the national carriers to look up the corresponding numbers and broadcast a chilling message by SMS: “Dear subscriber, you are registered as a participant in a mass disturbance.”

What happens when we give the state the power to brick any phone without user intervention? After passengers used their mobile phones to record and transmit footage of San Francisco BART officers killing a rider, the public transit operator tried to shut down mobile service on its property. Hardly a day goes by without stories of cops who illegally seize witnesses’ mobile phones after committing illegal acts. What are the consequences of creating a law-enforcement remote-wipe-and-brick mandate for those devices?

Imagine a user-centric, data-centric, freedom-centric version of this security measure: all devices would have to be sold with encrypted filesystems by default, so that users whose phones are lost or stolen can be confident that their data is out of reach, that their bank accounts won’t be raided, that the correspondence with their lawyers and doctors and lovers won’t be read, and that their search history and photos won’t be exposed.
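As a rough illustration of why that helps, here is a minimal sketch using Python’s third-party cryptography package; all names and parameters are illustrative, not any vendor’s actual scheme. The filesystem key is derived from the owner’s passcode, so a thief who has the hardware but not the passcode is left holding ciphertext.

```python
import base64
import os

from cryptography.fernet import Fernet, InvalidToken
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def key_from_passcode(passcode: bytes, salt: bytes) -> bytes:
    """Stretch a short passcode into a filesystem encryption key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passcode))


salt = os.urandom(16)  # stored on the device in the clear
key = key_from_passcode(b"1234-passcode", salt)

# "Filesystem" contents, encrypted at rest.
ciphertext = Fernet(key).encrypt(b"bank tokens, lawyer email, photos")

# The owner, who knows the passcode, gets the data back.
assert Fernet(key_from_passcode(b"1234-passcode", salt)).decrypt(ciphertext) == \
    b"bank tokens, lawyer email, photos"

# A thief without the passcode derives the wrong key and decryption fails.
try:
    Fernet(key_from_passcode(b"wrong guess", salt)).decrypt(ciphertext)
except InvalidToken:
    print("data unreadable without the owner's passcode")
```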

OSes would invite users who were worried about deterring physical theft to initialize their devices with a secret (a key or passphrase) that can later be entered into the carrier’s website, which signs the resulting wipe order and transmits it to the phone, instructing it to wipe itself down to the BIOS. In that scenario, a phone could only be bricked if both the customer and the carrier cooperated.
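To make that two-party requirement concrete, here is a minimal sketch (in Python, with all names hypothetical) of how the phone-side check might work. An HMAC over a shared key stands in for whatever real signature scheme a carrier would use; the point is that the phone honors a wipe order only when it carries both a valid carrier signature and the secret the owner chose at setup.

```python
import hashlib
import hmac

# Assumed to be provisioned when the owner sets up the phone: the device keeps
# only a salted hash of the owner's secret, never the secret itself, plus the
# key it uses to verify orders relayed by the carrier.
SALT = b"per-device-salt"
STORED_SECRET_HASH = hashlib.sha256(SALT + b"correct horse battery staple").digest()
CARRIER_KEY = b"carrier-relay-key"  # stand-in for a real carrier signing key


def carrier_sign(payload: bytes) -> bytes:
    """What the carrier's website attaches after the owner submits the secret."""
    return hmac.new(CARRIER_KEY, payload, hashlib.sha256).digest()


def handle_wipe_order(submitted_secret: bytes, signature: bytes) -> bool:
    """Phone-side check: wipe only if BOTH the owner and the carrier cooperated."""
    # 1. Was this order really relayed and signed by the carrier?
    if not hmac.compare_digest(carrier_sign(submitted_secret), signature):
        return False
    # 2. Does the order contain the secret only the owner knows?
    if not hmac.compare_digest(
        hashlib.sha256(SALT + submitted_secret).digest(), STORED_SECRET_HASH
    ):
        return False  # the carrier acting alone cannot brick the phone
    print("wiping device down to the BIOS...")  # placeholder for the real wipe
    return True


# The owner reports the phone stolen and types the passphrase into the
# carrier's site; the carrier signs the order and pushes it to the phone.
secret = b"correct horse battery staple"
assert handle_wipe_order(secret, carrier_sign(secret))

# A carrier (or anyone who has compromised it) acting without the owner's
# secret gets nowhere, and so does someone without the carrier's signature.
assert not handle_wipe_order(b"guessed wrong", carrier_sign(b"guessed wrong"))
assert not handle_wipe_order(secret, b"forged signature")
```

The design choice that matters is the conjunction: the owner’s passphrase alone can’t brick the phone, and neither can the carrier’s key alone.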

Both this model and the existing one have their pluses and minuses, as well as some shared weaknesses.

Chief among these shared risks is that users might choose bad, easily guessed passwords, or might store their passwords in another service, such as a cloud email account or a password locker, that in turn gets breached. That produces some system-wide risk for the carrier, manufacturer, and police, but it also creates room for users who are worried about their security to adopt very strong passwords and store them in highly secure contexts.

In a system where users can choose whether or not to install a kill switch, there is the risk that some users will initially undervalue the cost of having their phones stolen, choose not to install the kill switch, and then regret it later.

Most significantly, a world where all mobile phones are presumed to be worthless once stolen is one where thieves are strongly disincentivized from stealing phones at all. This is an important benefit of mandated, state-controlled kill switches, but it’s a short-lived one.

A system is not secure if it doesn’t give you the freedom to do what you need to do.

Street-level phone theft is an unsophisticated crime, but criminal networks contain sophisticated elements. Once kill switches are standardized, we should expect criminal networks to respond by developing chop-shops for phones — just as they did for cars after the advent of vehicle ID numbers — that turn them into parts; we should also expect them to establish export markets that move killed phones to jurisdictions without kill-switch mandates. Finally, we should expect them to figure out how to flash the BIOS of bricked phones and install a new OS on them. That last capability cuts both ways: if we rely on phones, we want technicians who can easily restore them from a corrupt state to a known-good one.

Good security measures anticipate countermeasures. It’s true that desperate, addled criminals who swipe your phone won’t be able to sell it around the corner anymore for quick drug money. But they will be able to take it a couple of blocks over and sell it to a middleman who won’t have much trouble turning it into cash, whether as parts, a refurb, or an export.

Any time someone tells you that security is on a slider whose other end is labelled “privacy” (or “autonomy” or “elegance” or “usability”), that person is either being sneaky or lazy.

You’re not secure if you can’t control the destiny of your private information. A system is not secure if it doesn’t give you the freedom to do what you need to do. If you think that giving phone companies the power to brick the computer you rely on for everything from calling your kids to opening your front door is “elegant” or “usable,” then you and I have very different ideas of what those words mean.

Our things are getting wired together, and we’re establishing the legal, social, commercial, and technical norms for the Internet of Things right now. We can choose how we do it, and we should: we should choose the Internet of Things That Do What You Tell Them, not the Internet of Things That Boss You Around.

Cropped image on article and category pages by Richard Leonard on Flickr, used under a Creative Commons license.
