- MAS S66: Indistinguishable From… Magic as Interface, Technology, and Tradition — MIT course taught by Greg Borenstein and Dan Novy. Magic is one of the central metaphors people use to understand the technology we build. From install wizards to voice commands and background daemons, the cultural tropes of magic permeate user interface design. Understanding the traditions and vocabularies behind these tropes can help us produce interfaces that use magic to empower users rather than merely obscuring their function. With a focus on the creation of functional prototypes and practicing real magical crafts, this class combines theatrical illusion, game design, sleight of hand, machine learning, camouflage, and neuroscience to explore how ideas from ancient magic and modern stage illusion can inform cutting-edge technology.
- Maybe We Need an Automation Tax (RoboHub) — rather than saying “automation is bad,” move on to “how do we help those displaced by automation to retrain?”.
- America’s Cyber-Manhattan Project (Wired) — America already has a computer security Manhattan Project. We’ve had it since at least 2001. Like the original, it has been highly classified, spawned huge technological advances in secret, and drawn some of the best minds in the country. We didn’t recognize it before because the project is not aimed at defense, as advocates hoped. Instead, like the original, America’s cyber Manhattan Project is purely offensive. The difference between policemen and soldiers is that one serves justice and the other merely victory.
- White House Names DJ Patil First US Chief Data Scientist (Wired) — There is arguably no one better suited to help the country better embrace the relatively new discipline of data science than Patil.
Our things are getting wired together, and you're not secure if you can't control the destiny of your private information.
Register for Solid 2015 to hear Cory Doctorow discuss the Electronic Frontier Foundation’s work with the Internet of Things.
The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.
Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, a measure intended to deter cell-phone thieves. Early data suggests that these laws are effective in preventing this kind of crime, but at a high, largely needless, and ill-considered price.
To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.
The real challenge going forward: we can't trust anything.
A few weeks ago, I wrote about postmodern computing and characterized it as computing in a world of distrust.
This morning, I read Steve Bellovin’s blog post, What Must We Trust? — Bellovin explains that “modern” (my word) security is founded on the idea of a “Trusted Computing Base” (TCB), defined (in part) in the US Defense Department’s Orange Book. There were parts of a system that you had to trust, and you had to guard their integrity vigilantly: the kernel, certainly, but also specific configuration files, executables, and so on.
The TCB has always been problematic, particularly since (at least initially) it did not consider the problem of network connections. But networking aside, Bellovin argues that recent events have blown the idea of a “trusted” system to bits. We’ve seen attacks against (Bellovin’s list) batteries, webcams, USB, and more. If Andromedans (Bellovin doesn’t want to say NSA) have managed to infiltrate our disk drives, what can trust mean? And it would be naive to think that this stops with devices that have disk drives. Our devices, from Fitbits to data centers, have been pwned even before they’re built.
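To make the TCB idea concrete, here is a minimal sketch of the kind of integrity guarding the Orange Book era relied on, in the spirit of tripwire-style checkers: hash the files you have decided to trust, then re-verify them later. The file list, baseline format, and function names are assumptions invented for illustration, not anything prescribed by the Orange Book.

```python
# Illustrative sketch only: a tripwire-style integrity check over a
# hypothetical trusted computing base. Paths and formats are invented.
import hashlib
import json
from pathlib import Path

# Hypothetical TCB members: the kernel image, config files, executables.
TCB_FILES = ["/boot/vmlinuz", "/etc/passwd", "/usr/bin/sudo"]
BASELINE = "tcb_baseline.json"

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_baseline() -> None:
    """Hash every TCB member and store the digests as a known-good baseline."""
    baseline = {p: sha256_of(p) for p in TCB_FILES if Path(p).exists()}
    Path(BASELINE).write_text(json.dumps(baseline, indent=2))

def verify() -> list:
    """Return the TCB members whose current hash no longer matches the baseline."""
    baseline = json.loads(Path(BASELINE).read_text())
    return [p for p, digest in baseline.items()
            if not Path(p).exists() or sha256_of(p) != digest]

if __name__ == "__main__":
    if not Path(BASELINE).exists():
        record_baseline()
        print("Baseline recorded.")
    else:
        changed = verify()
        print("TCB intact" if not changed else "Changed since baseline: %s" % changed)
```

Bellovin’s deeper point is that even this bottoms out: the check runs on top of disk firmware it cannot inspect, so a drive compromised below the operating system can simply hand back the expected bytes.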
At what layer do we build privacy into the fabric of devices?
Attend Solid 2015 to explore the convergence of privacy, security, and the Internet of Things.
In 2011, Kashmir Hill, Gizmodo, and others alerted us to a privacy gaffe made by Fitbit, a company that makes small devices to help people keep track of their fitness activities. It turns out that Fitbit broadcast the sexual activity of quite a few of its users. Realizing this might not sit well with those users, Fitbit took swift action to remove the search hits, the data, and the identities of those affected. Fitbit, like many other companies, believed that all the data it gathered should be public by default. Oops.
Does anyone think this is the last time such a thing will happen?
Fitness data qualifies as “personal,” but sexual data is clearly in the realm of the “intimate.” It might seem like semantics, but the difference is likely to be felt by people in varying degrees. The theory of contextual integrity says that we feel violations of our privacy when informational contexts are unexpectedly or undesirably crossed. Publicizing my latest workout: good. Publicizing when I’m in flagrante delicto: bad. This episode neatly exemplifies how devices are entering spaces where they’ve never trod before, physically and informationally.
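Contextual integrity can also be made operational in design: each record carries the context in which it was collected, and nothing crosses into public view unless the user explicitly opts that context in. The sketch below uses invented class and context names for illustration; it is not Fitbit’s actual design.

```python
# Illustrative sketch: private-by-default sharing with explicit, per-context
# opt-in. All names here are invented for the example.
from dataclasses import dataclass, field
from enum import Enum

class Context(Enum):
    FITNESS = "fitness"    # workouts, step counts
    INTIMATE = "intimate"  # data users would never expect to be public

@dataclass
class Record:
    description: str
    context: Context

@dataclass
class UserProfile:
    name: str
    records: list = field(default_factory=list)
    # Private by default: each context must be opted in explicitly.
    shared_contexts: set = field(default_factory=set)

    def share(self, context: Context) -> None:
        """Explicit per-context opt-in, the reverse of public-by-default."""
        self.shared_contexts.add(context)

    def public_view(self) -> list:
        """Expose only records from contexts the user has opted in."""
        return [r.description for r in self.records
                if r.context in self.shared_contexts]

user = UserProfile("alice")
user.records.append(Record("5k morning run", Context.FITNESS))
user.records.append(Record("late-night activity", Context.INTIMATE))
user.share(Context.FITNESS)  # publicizing my latest workout: good
print(user.public_view())    # ['5k morning run']; the intimate record stays private
```

Under a scheme like this, an unexpected context crossing requires an affirmative user action rather than a default that nobody reconsidered.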
Security is at the heart of the web.
We want to share. We want to buy. We want help. We want to talk.
At the end of the day, though, we want to be able to go to sleep without worrying that all of those great conversations on the open web will endanger the rest of what we do.
Making the web work has always been a balancing act between enabling and forbidding, remembering and forgetting, and public and private. Managing identity, security, and privacy has always been complicated, both because of the challenges in each of those pieces and the tensions among them.
Complicating things further, the web has succeeded in large part because people — myself included — have been willing to lock their paranoias away so long as nothing too terrible happened.
I talked for years about expecting that the NSA was reading all my correspondence, but finding out that yes, indeed they were filtering pretty much everything, opened the door to a whole new set of conversations and concerns about what happens to my information. I made my home address readily available in an IETF RFC document years ago. In an age of doxxing and SWATting, I wonder whether I was smart to do that. As the costs move from my imagination to reality, it’s harder to keep the door to my paranoia closed.