How did we end up with a centralized Internet for the NSA to mine?

The Internet is naturally decentralized, but it's distorted by business considerations.

I’m sure it was a Wired editor, and not the author Steven Levy, who assigned the title “How the NSA Almost Killed the Internet” to yesterday’s fine article about the pressures on large social networking sites. Whoever chose the title, it’s justifiably grandiose because to many people, yes, companies such as Facebook and Google constitute what they know as the Internet. (The article also discusses threats to divide the Internet infrastructure into national segments, which I’ll touch on later.)

So my question today is: How did we get such industry concentration? Why is a network famously based on distributed processing, routing, and peer connections characterized now by a few choke points that the NSA can skim at its leisure?

I commented as far back as 2006 that industry concentration makes surveillance easier. I pointed out then that the NSA could elicit a level of cooperation (and secrecy) from the likes of Verizon and AT&T that it would never get in the US of the 1990s, where Internet service was provided by thousands of mom-and-pop operations like Brett Glass’s wireless service in Laramie, Wyoming. Things are even more concentrated now, in services if not infrastructure.

Having lived through the Boston Marathon bombing, I understand what the NSA claims to be fighting, and I am willing to seek some compromise between their needs for spooking and the protections of the Fourth Amendment to the US Constitution. But as many people have pointed out, the dangers of centralized data storage go beyond the NSA. Bruce Schneier just published a pretty comprehensive look at how weak privacy leads to a weakened society. Others jeer that if social networking companies weren’t forced to give governments data, they’d be doing just as much snooping on their own to raise the click rates on advertising. And perhaps our most precious, closely held data — personal health information — is constantly subject to a marketplace for data mining.

Let’s look at the elements that make up the various layers of hardware and software we refer to casually as the Internet. How do centralization and decentralization play out at each layer?

Public routers

One of Snowden’s major leaks reveals that the NSA pulled a trick comparable to the Great Firewall of China, tracking traffic as it passes through major routers across national borders. Like many countries that censor traffic, in other words, the NSA capitalized on the centralization of international traffic.

Internet routing within the US has gotten more concentrated over the years. There were always different “tiers” of providers, who all did basically the same thing but at inequitable prices. Small providers always complained about the fees extracted by Tier 1 networks. A Tier 1 network can transmit its own traffic nearly anywhere it needs to go for just the cost of equipment, electricity, etc., while extracting profit from smaller networks that need its transport. So concentration in the routing industry is a classic economy of scale.

International routers, of the type targeted by the NSA and by other governments, are even more concentrated. African and Latin American ISPs historically complained about having to go through US or European routers even if the traffic just came back to their same continent. (See, for instance, section IV of this research paper.) This raised the costs of Internet use in developing countries.

The reliance of developing countries on outside routers stems from another simple economic truth: there are more routers in affluent countries for the same reason there are more shopping malls or hospitals in affluent countries. Foreigners who have violated US laws can be caught if they dare to visit a shopping mall or hospital in the US. By the same token, their traffic can be grabbed by the NSA as it travels to a router in the US, or one of the other countries where the NSA has established a foothold. It doesn’t help that the most common method of choosing routes, the Border Gateway Protocol (BGP), is a very old Internet standard with no concept of built-in security.
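To make the BGP point concrete, here is a toy sketch in Python (made-up AS numbers and prefixes, nothing resembling a real BGP implementation) of why unauthenticated route announcements are so easy to abuse, and what an RPKI-style origin check would add:

```python
# Toy illustration (not real BGP): classic BGP installs whatever origin it
# hears, while origin validation checks the announcement against a ROA table.

# Hypothetical ROA table: which AS is authorized to originate each prefix.
ROAS = {"203.0.113.0/24": 64500}   # AS 64500 legitimately owns this prefix

routing_table = {}

def accept_announcement(prefix, origin_as, validate=False):
    """Install a route. Without validation, any claimed origin is trusted;
    with validation, the announced origin AS must match the ROA table."""
    if validate and ROAS.get(prefix) != origin_as:
        print(f"rejected: AS{origin_as} is not authorized for {prefix}")
        return
    routing_table[prefix] = origin_as
    print(f"installed: {prefix} via AS{origin_as}")

# A legitimate announcement, then a hijack attempt from a different AS.
accept_announcement("203.0.113.0/24", 64500)                  # accepted
accept_announcement("203.0.113.0/24", 64666)                  # accepted: hijack succeeds
accept_announcement("203.0.113.0/24", 64666, validate=True)   # rejected with validation
```

The protocol itself will believe whoever speaks last; any protection has to be bolted on around it.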

The solution is economic: more international routers to offload traffic from the MAE-Wests and MAE-Easts of the world. While opposing suggestions to “balkanize” the Internet, we can applaud efforts to increase connectivity through more routers and peering.

IaaS cloud computing

Centralization has taken place at another level of the Internet: storage and computing. Data is theoretically safe from intruders in the cloud so long as encryption is used both in storage and during transmission — but of course, the NSA thought of that problem long ago, just as they thought of everything. So use encryption, but don’t depend on it.
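If you do want encryption you actually control, the safest place to apply it is on your own machine, before the data ever reaches a provider. Here is a minimal sketch using Python’s third-party cryptography package; the upload_blob call is just a stand-in for whatever storage API you happen to use:

```python
# Minimal sketch: encrypt client-side so the provider only ever sees ciphertext.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key out of the cloud entirely
fernet = Fernet(key)

plaintext = b"patient record 1234: ..."
ciphertext = fernet.encrypt(plaintext)

# upload_blob() is a hypothetical stand-in for your storage API; the provider
# (and anyone reading its disks or its wire traffic) sees only ciphertext.
# upload_blob("records/1234", ciphertext)

# Later, only the key holder can recover the data.
assert fernet.decrypt(ciphertext) == plaintext
```

The weak point, of course, is key management: if the keys end up stored alongside the data, you are back where you started.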

Movement to the cloud is irreversible, so the question to ask is how free and decentralized the cloud can be. Private networks can be built on virtualization solutions such as the proprietary VMware and Azure or the open source OpenStack and Eucalyptus. The more providers there are, the harder it will be to do massive data collection.
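One practical consequence of open source stacks such as OpenStack is that the same client code works against any deployment, whether you run it in a closet or rent it from a provider. A small sketch using the openstacksdk package, assuming a clouds.yaml entry named "my-private-cloud" (a name I made up):

```python
# Sketch using the openstacksdk package; assumes a clouds.yaml entry named
# "my-private-cloud" pointing at whichever OpenStack deployment you run or rent.
import openstack

conn = openstack.connect(cloud="my-private-cloud")

# The same calls work against a small local deployment or a large provider,
# which is what makes spreading workloads across many operators practical.
for flavor in conn.compute.flavors():
    print(flavor.name, flavor.vcpus, flavor.ram)
```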

SaaS cloud computing

The biggest change — what I might even term the biggest distortion — in the Internet over the past couple decades has been the centralization of content. Ironically, more and more content is being produced by individuals and small Internet users, but it is stored on commercial services, where it forms a tempting target for corporate advertisers and malicious intruders alike. Some people have seriously suggested that we treat the major Internet providers as public utilities (which would make them pretty big white elephants to unload when the next big thing comes along).

This was not technologically inevitable. Attempts at peer-to-peer social networking go back to the late 1990s with Jabber (now the widely used XMPP standard), which promised a distributed version of the leading Internet communications medium of the time: instant messaging. Diaspora more recently revived the idea in the context of Facebook-style social networking.

These services allow many independent people to maintain servers, offering the service in question to their own clients while connecting to one another where necessary. Such an architecture could improve overall reliability because the failure of an individual server would be noticed only by people trying to communicate with it. The architecture would be pretty snoop-proof, too.
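The decentralization is visible right in the DNS: each domain that runs its own XMPP server publishes an SRV record telling every other server where to deliver traffic for its users. A quick sketch with the third-party dnspython package (jabber.org is just an example of a domain that publishes such a record):

```python
# Sketch of how XMPP-style federation finds a peer server: each domain
# publishes an SRV record, so anyone can run a server for their own users.
# Requires the third-party "dnspython" package (pip install dnspython).
import dns.resolver

domain = "jabber.org"   # any domain running its own XMPP server works here

# Servers delivering messages to user@<domain> look up this record rather
# than routing everything through one central service.
for rr in dns.resolver.resolve(f"_xmpp-server._tcp.{domain}", "SRV"):
    print(f"deliver to {rr.target} port {rr.port} (priority {rr.priority})")
```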

Why hasn’t the decentralized model taken off? I blame SaaS. The epoch of concentration in social media coincides with the shift of attention from free software to SaaS as a way of delivering software. SaaS makes it easier to form a business around software (while the companies can still contribute to free software). So developers have moved to SaaS-based businesses and built new DevOps development and deployment practices around that model.

To be sure, in the age of the web browser, accessing a SaaS service is easier than fussing with free software. To champion distributed architectures such as Jabber and Diaspora, free software developers will have to invest as much effort into the deployment of individual servers as SaaS developers have invested in their models. Business models don’t seem to support that investment. Perhaps a concern for privacy will.
