A few days ago I proposed a way to
offer more privacy to people visiting government web sites.
This blog builds on that proposal, which was largely technical, by
examining the policy and organizational issues that swirl around it.
My ideas are informed by a discussion I had with Lillie
Coney, Associate Director of the
Electronic Privacy Information Center.
The blog is also inspired by two comments on the earlier blog and a
brief email exchange I had with one commenter; their observations
intertwine with Coney’s in intriguing ways.
As I said in the first blog, my proposal focused on a very narrow
question driven by the Obama Administration’s interest in revising its
rules on the cookies that government web sites place in visitors’
browsers. The proposal suggested a way to better approach anonymity,
but didn’t look at the related social and political issues:
The kinds of privacy and the degree of privacy people want
When it’s appropriate to make visitors identify themselves, or at
least to provide some persistent identity
Whom people trust to maintain identity information
This blog offers a number of points about those issues. The sections
below are:
Can the government be your friend?
Anonymity, pseudonymity, and participation
Who should run an OpenID server?
Thought experiment: could federal agencies offer anonymous authentication for whistle-blowers?
Can the government be your friend?

The kinds of government/public collaboration pursued by CIO Vivek
Kundra and others in the Obama administration see people doing much
more than submitting ideas. The administration wants information
sharing and an exchange of ideas that allow both sides to reveal more
about themselves.
But as one commenter pointed out on my previous blog, the government
has a lot of power that should make us hesitate before sharing too
much. Coney, whose work at EPIC includes a focus on domestic
surveillance, pointed to an incident where
the Las Vegas Review-Journal was served a subpoena
requiring it to identify readers who had posted online comments about an
article involving a case with the Internal Revenue Service. (The
newspaper is fighting the subpoena.) Some agencies have enough power
to be scary. And some agency heads may take heavy-handed measures
without even being malicious or vindictive–just out of a concern for
carrying out their missions.
So you may be living it up like Obama, Gates, and Crowley on one
agency’s web site, forming great relationships and having an extremely
productive discussion, only to discover that your comments come back
to bite you when you tussle with an entirely different agency. And of
course, the data you give these sites lasts forever.
Such promiscuous information sharing is supposedly outlawed by the
1974 Federal Privacy Act. This oft-cited law and the 1966 Freedom of
Information Act remain centerpieces in the armory of those
protecting personal privacy in the U.S. However, the Federal Privacy
Act creates many exceptions for agencies that want to opt out from its
rules, and fails to cover private contractors. Coney says, “EPIC’s
goal is to develop fair information practices that are enforceable and
transparent to protect users of government information.”
Having studied the privacy policies requested by different agencies,
Coney finds that they fall into two camps. Agencies whose mission is
to reach out to and help people, such as the Department of Health and Human
Services, favored as much privacy as possible–the same goal Kundra
has expressed many times. On the other hand, law enforcement and other
agencies concerned with protecting the public would like to log all
accesses and try to attach personal information to all visits–even
access to public information.
That last policy puzzles me. If the government offers information
freely, Carl Malamud or I or anybody can grab it and put it on another
web site. There is no way to track who accesses free and open
information. Tracking access in the hope of preventing criminal use is
not only obnoxious but futile.
In short, forming a partnership with government takes a bit more
consideration than friending someone on Facebook. The new age of
government participation we’re hearing about, then, rests on some
assurances to the public. Personal information should not be collected
unless absolutely necessary, and should not be used for purposes
unrelated to the reason for capturing it, especially by other
agencies.
We’re all excited about the expanding collaboration between government
and citizens, but this historic change intensifies the need to take a
fresh look at laws and policies on a regular basis, just as the OMB is
doing now.

Anonymity, pseudonymity, and participation

People phone anonymous tips in to the police all the time. Allowing
the same kind of anonymity online, however, would be an invitation to
spam. In fact, anyone with
something to hide would make sure to flood the system with
irresponsible accusations just to drown out the people who have
legitimate crimes to report. (The
FBI tip site
asks you to identify yourself.)
The proposal in my previous blog delivers pretty good pseudonymity,
allowing someone to submit repeated comments with the assurance that
they all come from the same person, but without surrendering personal
information.
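The mechanics can be illustrated with a short sketch. Assuming (this
is my illustration, not a detail of the original proposal) that the
OpenID server derives a site-specific pseudonym by keying a hash on
the account ID, repeated comments link together while the real
identity stays on the server:

```python
import hashlib
import hmac

def pseudonym(server_key: bytes, account_id: str, site: str) -> str:
    """Derive a stable, site-specific pseudonym for an account.

    The same account at the same site always maps to the same
    pseudonym, so repeated comments link up; pseudonyms at two
    different sites cannot be linked to each other; and the account
    ID itself never leaves the server.
    """
    message = f"{account_id}|{site}".encode()
    return hmac.new(server_key, message, hashlib.sha256).hexdigest()[:16]

key = b"server-secret"  # held only by the OpenID server
# Same person, same site: comments share one pseudonym.
assert pseudonym(key, "alice", "hhs.gov") == pseudonym(key, "alice", "hhs.gov")
# Same person, different agencies: the pseudonyms cannot be linked.
assert pseudonym(key, "alice", "hhs.gov") != pseudonym(key, "alice", "irs.gov")
```

A production server would use a stronger scheme, but the property is
the same: consistency without identification.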
One commenter on my blog asked whether we can really trust the
government to protect pseudonymity. Well, of course they can always
trace you if they want to. Even non-government actors can do that, as
we’ve just seen from the recording industry’s testimony at Joel
Tenenbaum’s trial. Privacy is a cat-and-mouse game in which both sides
have escalating levels of attacks and parries.
Tracking you through contact information
Attack: My proposal let people leave a contact, such as an
email address or Twitter account, where the government could report
information about their account. Although the government should
promise not to misuse the contact, it could be used to identify a
person.
Parry: Leaving a contact is optional, and you can manage your
account without leaving one. You can also use a free email address
from popular providers.
Tracking by IP address and time
Attack: The government can require your ISP to provide your
identity based on the time you were logged in and the dynamic IP
address they assigned you.
Parry: Find an open wireless access point or use an onion
routing service such as Tor.

Who should run an OpenID server?

From this point on, I’ll assume that OpenID will be used by federal
agencies in some configuration, because that’s the only technology
with a widespread implementation that can provide the protections
discussed in this blog.
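To make that assumption concrete, here is a minimal sketch of the
first step of an OpenID 2.0 login: the relying party (an agency site,
in this scenario) builds an authentication request URL and redirects
the visitor’s browser to the provider. All endpoints and identifiers
below are hypothetical.

```python
from urllib.parse import urlencode

def openid_auth_url(provider_endpoint: str, claimed_id: str,
                    return_to: str, realm: str) -> str:
    """Build the OpenID 2.0 authentication request URL to which the
    relying party redirects the visitor's browser."""
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,
        "openid.realm": realm,
    }
    return provider_endpoint + "?" + urlencode(params)

# Hypothetical endpoints for a government-run OpenID provider:
url = openid_auth_url(
    "https://openid.example.gov/auth",
    "https://openid.example.gov/id/visitor123",
    "https://agency.example.gov/comments/return",
    "https://agency.example.gov/",
)
```

The provider authenticates the visitor and redirects back to
`return_to` with a signed assertion; the agency never has to see a
password or a real name.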
One of the central policy questions we have to deal with, then, is
whom we should trust with our OpenID account. My proposal called on
the federal government to run an OpenID server for all its agencies,
mostly because I want the government to kick the habit of using
commercial services for such essential information-age functions. (See
my earlier blogs,
Five projects for Open Source for America and
themes from the Personal Democracy Forum conference.)
Coney and I discussed several options for ensuring reliable servers.
There’s no reason not to allow multiple options. Running an OpenID
server is pretty easy. If EPIC had a hankering to serve up privacy
directly, this is its chance. The problem is whether visitors can
trust any particular server 1) to stay up, 2) not to go out of
business, 3) not to leak information, 4) not to abuse the information
for private gain, and 5) not to cave in to government pressure and
release information outside of the scope of the law.
Here are a few options.
The federal government runs its own dedicated server
Pros: The government can probably do the best job of
guaranteeing that the server stays up and is not broken into. The
government is not depending on outside entities for this essential
function.
Cons: A central OpenID server offers a compelling target, and
a stream of recent news reports shows that government agencies suffer
from the same security lapses as private companies. Furthermore, many
people don’t trust the government to protect their privacy and feel
more secure with a private server.
The federal government regulates the organizations that provide servers
Pros: Personal data is stored in a variety of private
servers, complicating attacks, while the government ensures they are
trustworthy.
Cons: Defining service-level agreements and quality control
is difficult, and legislating or regulating it is even more difficult.
The organizations that provide servers define a code of conduct and
police themselves
Pros: Self-regulation is much lighter-weight than laws and
regulations, and the experts who know the technical and business
issues the best will be in charge of ensuring quality.
Cons: Self-regulating privacy agreements–we’ve seen
that before! The failures of TRUSTe and P3P leave us twice-scarred and
reluctant to try again. (See my article
Promises, Promises, Promises.)
Still, TRUSTe and P3P provided no protection because the organizations
creating privacy policies were disingenuous and lacked an interest in
truly protecting privacy. A sincere self-regulatory effort by new
organizations committed to privacy might succeed.
The federal government sets up a dedicated agency that is monitored by
a private firm or non-profit
Pros: This was suggested by Coney. It combines the
reliability of the government with the disinterested independence of
an outside observer.
Cons: Malicious actors in the government agency may succeed
in hiding bad behavior from the monitors, whose inspections would quickly settle
into an uninspired routine. Moreover, the requirements that the monitor
has to enforce are just as complex as in the previous solutions.
Free market: let each visitor choose a server and take his chances
Pros: This leads to the most diversity, which is a strength
in the area of security. And if a server goes down, how much is lost?
The visitor can open a new account elsewhere and rebuild the lost
identity.
Cons: No one can evaluate the competence and reliability of
another organization’s server, and weaknesses don’t become apparent
until disaster strikes.
As usual, the policy, organizational, and social issues in deploying a
technology are thornier than the technology itself. I still think the
architecture I offered in my proposal to OMB provides a good basis for
building any of the systems considered in this blog.
Thought experiment: could federal agencies offer anonymous authentication for whistle-blowers?

I’ll end this blog by exploring an identity system that would allow
an agency to authenticate a pseudonymous whistle-blower by verifying
“Yes, this is a current employee” or “Yes, this is a former employee”
without giving further information about that individual.
I believe that any such authentication system would have to be based
on a two-tier approach such as I laid out in my OpenID proposal. The
system I lay out in this section is too complex, organizationally and
technically, for the government to implement at this point, but it
shows the tools available to privacy advocates.
Each government agency participating in the authentication program
sets up a server to digitally sign IDs. Another server hosts accounts
for every employee.
Each employee is encouraged to create an account on the OpenID server
and to keep it secret.
When logged in at his or her agency account, the employee submits his
or her account name to be signed with the agency’s digital signature. This produces
an unforgeable string that combines verification of the employee with
verification of the agency.
At any time, the employee can post information to any web site that
accepts an OpenID login, using the employee’s secret OpenID account.
The employee includes the string obtained in the previous step. By
checking the signature, anyone can verify that the employee had an
account at one time on the agency server. Because the text revealed
underneath the signature is the account name, it proves that the
person who obtained the signature is the same person posting now.
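A rough sketch of the signing and verification steps, using an HMAC
under an agency-held key as a stand-in for a true public-key signature
(with a real signature, anyone could verify; here only the keyholder
can, so this is purely illustrative). The key and account name are
invented:

```python
import hashlib
import hmac

AGENCY_KEY = b"agency-signing-key"  # stands in for the agency's private key

def agency_sign(account_name: str) -> str:
    # Step 1: while logged in at the agency, the employee obtains a
    # signature over his or her secret OpenID account name.
    return hmac.new(AGENCY_KEY, account_name.encode(), hashlib.sha256).hexdigest()

def verify(account_name: str, signature: str) -> bool:
    # Step 2: a later post made from that OpenID account, carrying the
    # (account name, signature) pair, proves the poster once held an
    # agency account -- without naming the person.
    return hmac.compare_digest(agency_sign(account_name), signature)

tag = agency_sign("whistleblower-7c2f")  # invented account name
assert verify("whistleblower-7c2f", tag)     # the pairing checks out
assert not verify("impostor-0000", tag)      # a forged pairing fails
```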
In order to masquerade as an agency employee, someone would have to
obtain both the employee’s signed string and access to the
employee’s secret account on the OpenID server. This might be possible
if the employee is lax in protecting the information (for instance, by
putting it unencrypted on a cell phone and losing it). Other problems
with this system include:
There is no way to revoke the signature, unless the agency revokes all
signatures at once. Thus, there is no way to tell whether the employee
is still employed. This may be acceptable.
This system would allow any employee to leak any agency information
without personal repercussions. That’s probably not a policy we want
to encourage.
Technology confers power, and so does anonymity. Technical, legal, and
policy experts are all needed to study the implications of the systems
we have for participation, and the systems that are proposed to
replace them.