Security has to reboot. What has passed for strong security until now is going to be considered only casual security going forward. As I put it last week, the damage that has become visible over the past few months means that “we need to start planning for a computing world with minimal trust.”
So what are our options? I’m not sure if this ordering goes precisely from worst to best, but today this order seems sensible.
Stay the Course
This situation may not be that bad, right?
Apple’s addition of a fingerprint scanner to the iPhone 5S last month seemed badly timed, given recent security concerns. However, many people I respect seem completely calm about it, for reasons that make me think most of us are treating security casually:
- Rafe Colburn points out that “Security is a concept with no meaning outside the context of specific threats.” It’s not yet clear what there is to fear.
- Tim Bray suggests that the risks of many security problems only arise “if what you’re mostly worried about is a skilled, determined adversary, such as a government official.”
- James Turner has a similar take: “The game isn’t about making your house invincible; it’s about making it difficult enough to bias the thieves toward someone else’s home.”
Will security fall the way that privacy largely has? (Though Bray hopes privacy hasn’t fallen.)
Perhaps the failure of encryption is another non-problem, something only a few vocal people will notice unless something terrible happens in their immediate circle. Security has largely stayed a specialist concern, and is often amazingly casual in both the digital and physical worlds.
I suspect, however, that after a few rounds of replaced credit cards, cleaned-up stolen identities, and incidents of industrial espionage, these issues will make it harder and harder for “casual security”—even though it’s what we used to think of as fairly strong security—to remain viable.
A different approach leaves the risks behind by leaving digital behind to the extent possible.
While a small group of people has already dropped out of the digital world, recent stories amplify the concerns that drive dropping out, or at least cutting back. John Gilmore suggested:
Where Big Data collection is voluntary, I do not volunteer, thus I don’t use Facebook, Google, etc. When collection is involuntary, like with NSA’s Big Data, I work to limit their power, both to collect, and to use; and then I don’t believe they will follow the rules anyway, because of all the historical evidence. So I arrange my life to not leave a big data trail: I don’t use ATMs, I pay with cash, don’t carry identification, don’t use Apple or Google or Microsoft products, etc.
Will more people follow his lead? Stepping away from easily traceable digital approaches certainly reduces exposure to digital surveillance.
Abandon Some Digital Dreams
When we assumed we could keep information secure, we were willing to take some steps we wouldn’t have otherwise. Some of that is about personal information sharing, putting potentially sensitive information in digital form. Suddenly sharing even fairly basic location information has consequences. Those risks could put a damper on many interactions.
While the default settings of software and devices may be sharing more information than we want, some efforts at security may actually have created more security hazards. Do you really want a 3G kill switch in your computer when you don’t know who might have access to it? The balance might be different for cell phones; such features certainly raise the potential cost of compromised systems, and raise the paranoia level broadly.
Two-factor authentication is a popular story lately, but multi-factor authentication might become a further option. Different systems contribute pieces to keys that let users in, but no individual system can make the connection. This adds to the complexity of systems (and what if one of them is down?), but digital security is rarely about simplicity.
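The multi-system scheme that paragraph imagines can be sketched with simple XOR secret splitting, where each system holds one share and no share on its own reveals anything about the key. This is a minimal illustration, not a production scheme; the function names are my own.

```python
import os

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares; ALL n are required to rebuild it."""
    # n-1 shares are pure random noise...
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    # ...and the last share is the secret XORed with all of them,
    # so any subset short of n shares is statistically random.
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine_shares(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original secret."""
    secret = shares[0]
    for s in shares[1:]:
        secret = bytes(a ^ b for a, b in zip(secret, s))
    return secret

key = b"example session key"
shares = split_secret(key, 3)          # each system stores one share
assert combine_shares(shares) == key   # only all three together recover it
```

Note that this naive n-of-n split makes the “what if one of them is down?” problem worse, not better: losing any share loses the key. Threshold schemes such as Shamir’s secret sharing address that by letting any k of n shares reconstruct the secret, at the cost of more math.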
It’s harder to imagine, but the “always connected, always on” model of computing may also have to go. Not for everything—it seems likely that commercial sites will stay up, as will social networks and email services. It is much harder to attack systems that are disconnected or off. Physical and network separation may not be perfect—contamination can still spread through bad code or data—but it’s an additional layer of isolation. (Of course, a 3G connection to the CPU may be harder to halt.)
Physical approaches can certainly go beyond connections between computers. Physical security has its own problems, and the ubiquity of recording devices makes “wearing a wire” seem almost quaint, but it certainly requires attackers to make a potentially expensive investment to reach their targets. Cities currently cluster groups of powerful people who prefer personal contact when possible, despite the options for dispersal that the digital world keeps expanding.
Physical and in-person approaches also make it easier to return to old models of compartmentalization and cells, where information is shared on a need-to-know basis rather than by rough classification levels. When “need to know” information travels electronically, it’s easily intercepted, forwarded, or duplicated. Person-to-person contact isn’t just useful for conversation, but also for exchanging information about and keys to future messages that may travel digitally but hide in other content, require specific one-time pads or keys, or will arrive at a particular time. (And then those messages can include information about future messages, but extending the chain makes it more brittle.)
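The one-time pad mentioned above is worth spelling out, because it is one of the few schemes with a proof of secrecy: if the pad is truly random, as long as the message, exchanged in person, and never reused, the ciphertext reveals nothing. A minimal sketch (the message here is invented for illustration):

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, at least as long as the data,
    # and used exactly once -- those conditions are what make the
    # one-time pad information-theoretically secure.
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

# The pad is exchanged in person; the ciphertext can later travel digitally.
pad = os.urandom(64)
ciphertext = otp_xor(b"meet at the old place", pad)
# XOR is its own inverse, so the same function decrypts.
plaintext = otp_xor(ciphertext, pad)
```

The catch is exactly the one the paragraph describes: every pad must be generated, exchanged, and tracked out of band, which is why pads suit small compartmentalized cells far better than the open internet.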
Change to Other Digital Approaches
If we don’t want to meet in person weekly to exchange keys, but aren’t comfortable just hoping that there aren’t weaknesses in our communications chain, what can we do?
I’ve always enjoyed Ronald Reagan’s classic line, “trust but verify.” In a digital world where you want to work with others you don’t always know, it serves as a minimal approach that lets you get things done without getting burned constantly. Another way of putting it, from a very different context, is Arthur Weasley’s “Never trust anything that can think for itself if you can’t see where it keeps its brain” (Harry Potter and the Chamber of Secrets, 329).
Right now, many of us are relying on code that only its creators get to see, though a few customers may pay for the privilege of a look. It may be time for that to fall, as it becomes clear that opacity hides brokenness. As Matthew Green wrote a few weeks ago:
Maybe this is a good thing. We’ve been saying for years that you can’t trust closed code and unsupported standards: now people will have to verify.
Even better, these revelations may also help to spur a whole burst of new research and re-designs of cryptographic software. We’ve also been saying that even open code like OpenSSL needs more expert eyes. Unfortunately there’s been little interest in this, since the clever researchers in our field view these problems as ‘solved’ and thus somewhat uninteresting.
What does verification mean? Most people, even most people who use cryptography, can’t read the code used to implement it. An even smaller group of people can evaluate whether that code behaves as it is supposed to. We need something like “cryptography superfriends.” The NSA used to offer those services, but that didn’t work out very well. Evaluating these tools will require a new group of inspectors, doing their work separately and making it all available for inspection.
The IETF is moving ahead with new rounds of standards, and they might be the right place for this work. After past subversion, I’m cautious, but it seems like the right place to start, at least.
I’ve always wondered how much value formally verified code might have, and I’m not sure whether the kernel of a browser is enough formal code to help, but I’m also intrigued by Quark, a small browser kernel, verified in Coq, that mediates access to system resources for all other browser components. If STEED can help simplify distributed email encryption, that might also make it easier to distribute needed components.
Two weeks ago, when I wondered about whether computing could recover from this summer’s collapse of trust, @QuentinJohns1 tweeted back that “transparency and openness will restore trust by its very nature.” I had my doubts—transparency isn’t always welcome, and openness makes it easy to disrupt processes. In the long run, however, I hope he’s right.
Update: Here’s the Electronic Frontier Foundation’s telling of the story and call to action.