"security" entries

Four short links: 27 February 2015

No Estimates, Brand Advertising, Artificial Intelligence, and GPG BeGone

  1. #NoEstimates — Allspaw also points out that the yearning to break the bonds of estimation is nothing new — he’s fond of quoting a passage from The Unwritten Laws of Engineering, a 1944 manual which says that engineers “habitually try to dodge the irksome responsibility for making commitments.” All of Allspaw’s segment is genius.
  2. Old Fashioned Snapchat — get a few drinks in any brand advertiser and they’ll admit that the number one reason they know that brand advertising works is that, if they stop, sales inevitably drop.
  3. Q&A With Bruce Sterling on Artificial Intelligence — in which Sterling sounds intelligent, and the questioner sounds Artificial.
  4. GPG and Me (Moxie Marlinspike) — Even though GPG has been around for almost 20 years, there are only ~50,000 keys in the “strong set,” and less than 4 million keys have ever been published to the SKS keyserver pool. By today’s standards, that’s a shockingly small user base for a month of activity, much less 20 years. This was a great talk at Webstock this year.
Four short links: 25 February 2015

Bricking Cars, Mapping Epigenome, Machine Learning from Encrypted Data, and Phone Privacy

  1. Remotely Bricking Cars (BoingBoing) — story from 2010 where an intruder illegally accessed Texas Auto Center’s Web-based remote vehicle immobilization system and one by one began turning off their customers’ cars throughout the city.
  2. Beginning to Map the Human Epigenome (MIT) — Kellis and his colleagues report 111 reference human epigenomes and study their regulatory circuitry, in a bid to understand their role in human traits and diseases. (The paper itself.)
  3. Machine Learning Classification over Encrypted Data (PDF) — It is worth mentioning that our work on privacy-preserving classification is complementary to work on differential privacy in the machine learning community. Our work aims to hide each user’s input data to the classification phase, whereas differential privacy seeks to construct classifiers/models from sensitive user training data that leak a bounded amount of information about each individual in the training data set. See also The Morning Paper’s unpacking of it, and the small differential-privacy sketch after this list.
  4. Privacy of Phone Audio (Reddit) — unconfirmed report from a Redditor: I started a new job today with Walk N’Talk Technologies. I get to listen to sound bites and rate how the text matches up with what is said in an audio clip and give feedback on what should be improved. At first, I thought these sound bites were completely random. Then I began to notice a pattern. Soon, I realized that I was hearing people’s commands given to their mobile devices. Guys, I’m telling you, if you’ve said it to your phone, it’s been recorded…and there’s a damn good chance a 3rd party is going to hear it.
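To make the differential-privacy point in item 3 concrete, here is a minimal, hypothetical sketch (not taken from the paper) of the Laplace mechanism: a count query is released with noise calibrated to how much any one person can change the answer, which is what bounds the information leaked about each individual.

```python
import random

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count via the Laplace mechanism.

    One individual can change the count by at most 1 (sensitivity 1), so
    adding Laplace(1/epsilon) noise bounds what the released number reveals
    about any single person in the data set.
    """
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two Exp(epsilon) draws is Laplace-distributed with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical data: ages of users in a sensitive data set.
ages = [23, 35, 41, 29, 52, 60, 33]
print(dp_count(ages, lambda a: a > 40))  # noisy answer near the true count of 3
```
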
Four short links: 24 February 2015

Open Data, Packet Dumping, GPU Deep Learning, and Genetic Approval

  1. Wiki New Zealand — open data site, and check out the chart builder behind the scenes for importing the data. It’s magic.
  2. stenographer (Google) — open source packet dumper for capturing data during intrusions.
  3. Which GPU for Deep Learning? — a lot of numbers. Overall, I think memory size is overrated. You can nicely gain some speedups if you have very large memory, but these speedups are rather small. I would say that GPU clusters are nice to have, but that they cause more overhead than they accelerate progress; a single 12GB GPU will last you for 3-6 years; a 6GB GPU is plenty for now; a 4GB GPU is good but might be limiting on some problems; and a 3GB GPU will be fine for most research that looks into new architectures.
  4. 23andMe Wins FDA Approval for First Genetic Test — as the company re-enters the market after the FDA’s power play around approval (yes, I know: one company’s power play is another company’s flouting of safeguards designed to protect a vulnerable public).
Four short links: 23 February 2015

Self-Assembling Chairs, Home Monitoring, Unicorn Horn, and Cloud Security

  1. MIT Scientists and the Self-Assembling Chair (Wired) — using turbulence to randomise interactions, and pieces that connect when the random motions align. From the Self-Assembly Lab at MIT.
  2. Calaos — a free software project (GPLv3) that lets you control and monitor your home.
  3. Founder Wants to be a Horse Not a Unicorn (Business Insider) — this way of thinking — all or nothing moonshots to maximise shareholder value — has become pervasive dogma in tech. It’s become the only respectable path. Either you’re running a lowly lifestyle business, making ends meet so you can surf all afternoon, or you’re working 17-hour days goring competitors with your $US48MM Series C unicorn horn on your way to billionaire mountain.
  4. Using Google Cloud Platform for Security Scanning (Google Online Security) — platform vendors competing on the things they can offer for free on the base platform, things which devs and ops used to have to do themselves.

An Internet of Things that do what they’re told

Our things are getting wired together, and you're not secure if you can't control the destiny of your private information.

Register for Solid 2015 to hear Cory Doctorow discuss the Electronic Frontier Foundation’s work with the Internet of Things.

The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.

Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, a measure intended to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.

To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised. Read more…

Four short links: 19 February 2015

Magical Interfaces, Automation Tax, Cyber Manhattan Project, and US Chief Data Scientist

  1. MAS S66: Indistinguishable From… Magic as Interface, Technology, and Tradition — MIT course taught by Greg Borenstein and Dan Novy. Further, magic is one of the central metaphors people use to understand the technology we build. From install wizards to voice commands and background daemons, the cultural tropes of magic permeate user interface design. Understanding the traditions and vocabularies behind these tropes can help us produce interfaces that use magic to empower users rather than merely obscuring their function. With a focus on the creation of functional prototypes and practicing real magical crafts, this class combines theatrical illusion, game design, sleight of hand, machine learning, camouflage, and neuroscience to explore how ideas from ancient magic and modern stage illusion can inform cutting edge technology.
  2. Maybe We Need an Automation Tax (RoboHub) — rather than saying “automation is bad,” move on to “how do we help those displaced by automation to retrain?”.
  3. America’s Cyber-Manhattan Project (Wired) — America already has a computer security Manhattan Project. We’ve had it since at least 2001. Like the original, it has been highly classified, spawned huge technological advances in secret, and drawn some of the best minds in the country. We didn’t recognize it before because the project is not aimed at defense, as advocates hoped. Instead, like the original, America’s cyber Manhattan Project is purely offensive. The difference between policemen and soldiers is that one serves justice and the other merely victory.
  4. White House Names DJ Patil First US Chief Data Scientist (Wired) — There is arguably no one better suited to help the country better embrace the relatively new discipline of data science than Patil.
Four short links: 18 February 2015

Sales Automation, Clone Boxes, Stats Style, and Extra Orifices

  1. Systematising Sales with Software and Processes — sweet use of Slack as UI for sales tools.
  2. Duplicate SSH Keys Everywhere — It looks like all devices with the fingerprint are Dropbear SSH instances that have been deployed by Telefonica de Espana. It appears that some of their networking equipment comes set up with SSH by default, and the manufacturer decided to reuse the same operating system image across all devices. (A small sketch of how shared host keys can be spotted follows this list.)
  3. Style.ONS — UK govt style guide covers the elements of writing about statistics. It aims to make statistical content more open and understandable, based on editorial research and best practice. (via Hadley Beeman)
  4. Warren Ellis on the Apple Watch — I, personally, want to put a gold chain on my phone, pop it into a waistcoat pocket, and refer to it as my “digital fob watch” whenever I check the time on it. Just to make the point in as snotty and high-handed a way as possible: This is the decadent end of the current innovation cycle, the part where people stop having new ideas and start adding filigree and extra orifices to the stuff we’ve got and call it the future.
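Regarding item 2: a minimal, hypothetical sketch of how one might spot shared SSH host keys across a set of devices. It assumes the OpenSSH ssh-keyscan tool is installed, and the host addresses below are placeholder TEST-NET values, not real devices; identical fingerprints on unrelated hosts point to a factory-baked key reused across an OS image.

```python
import base64
import hashlib
import subprocess
from collections import defaultdict

def host_key_fingerprint(host, key_type="rsa"):
    """Return the OpenSSH-style SHA256 fingerprint of a host's SSH key, or None."""
    try:
        result = subprocess.run(
            ["ssh-keyscan", "-T", "5", "-t", key_type, host],
            capture_output=True, text=True, timeout=15)
    except subprocess.TimeoutExpired:
        return None
    for line in result.stdout.splitlines():
        parts = line.split()
        if line.startswith("#") or len(parts) < 3:
            continue  # skip comments and malformed lines
        key_blob = base64.b64decode(parts[2])           # raw public key bytes
        digest = hashlib.sha256(key_blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
    return None

# Hypothetical example addresses (RFC 5737 TEST-NET); replace with real hosts.
hosts = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
by_fingerprint = defaultdict(list)
for h in hosts:
    fp = host_key_fingerprint(h)
    if fp:
        by_fingerprint[fp].append(h)

for fp, shared in by_fingerprint.items():
    if len(shared) > 1:
        print(f"{fp} shared by {shared}  <- likely reused OS image")
```
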

Postmodern security

The real challenge going forward: we can't trust anything.

A few weeks ago, I wrote about postmodern computing, and characterized it as computing in a world of distrust.

This morning, I read Steve Bellovin’s blog post, What Must We Trust? — Bellovin explains that “modern” (my word) security is founded on the idea of a “Trusted Computing Base” (TCB), defined (in part) in the U.S. Defense Department’s Orange Book. There were parts of a system that you had to trust, and you had to guard their integrity vigilantly: the kernel, certainly, but also specific configuration files, executables, and so on.

The TCB has always been problematic, particularly since (at least initially) it did not consider the problem of network connections. But networking aside, Bellovin argues that recent events have blown the idea of a “trusted” system to bits. We’ve seen attacks against (Bellovin’s list) batteries, webcams, USB, and more. If Andromedans (Bellovin doesn’t want to say NSA) have managed to infiltrate our disk drives, what can trust mean? And it would be naive to think that this stops with devices that have disk drives. Our devices, from Fitbits to data centers, have been pwned even before they’re built. Read more…


The Intimacy of Things

At what layer do we build privacy into the fabric of devices?

Sign up to attend Solid 2015 to explore the convergence of privacy, security, and the Internet of Things.

In 2011, Kashmir Hill, Gizmodo and others alerted us to a privacy gaffe made by Fitbit, a company that makes small devices to help people keep track of their fitness activities. It turns out that Fitbit broadcast the sexual activity of quite a few of their users. Realizing this might not sit well with those users, Fitbit took swift action to remove the search hits, the data, and the identities of those affected. Fitbit, like many other companies, believed that all the data they gathered should be public by default. Oops.

Does anyone think this is the last time such a thing will happen?

Fitness data qualifies as “personal,” but sexual data is clearly in the realm of the “intimate.” It might seem like semantics, but the difference is likely to be felt by people in varying degrees. The theory of contextual integrity says that we feel violations of our privacy when informational contexts are unexpectedly or undesirably crossed. Publicizing my latest workout: good. Publicizing when I’m in flagrante delicto: bad. This episode neatly exemplifies how devices are entering spaces where they’ve not tread before, physically and informationally. Read more…


Keep me safe

Security is at the heart of the web.

We want to share. We want to buy. We want help. We want to talk.

At the end of the day, though, we want to be able to go to sleep without worrying that all of those great conversations on the open web will endanger the rest of what we do.

Making the web work has always been a balancing act between enabling and forbidding, remembering and forgetting, and public and private. Managing identity, security, and privacy has always been complicated, both because of the challenges in each of those pieces and the tensions among them.

Complicating things further, the web has succeeded in large part because people — myself included — have been willing to lock their paranoias away so long as nothing too terrible happened.

I talked for years about expecting that the NSA was reading all my correspondence, but finding out that, yes, they were indeed filtering pretty much everything opened the door to a whole new set of conversations and concerns about what happens to my information. I made my home address readily available in an IETF RFC document years ago. In an age of doxxing and SWATting, I wonder whether I was smart to do that. As the costs move from my imagination to reality, it’s harder to keep the door to my paranoia closed. Read more…
