The spy who came in from the code

Carmen Medina talks about tech, the CIA, and why government agencies don't play well with others

If you were going to pick an adjective to describe the Central Intelligence Agency, “open” wouldn’t immediately spring to mind. But according to Carmen Medina, who recently retired from the CIA and will speak at Gov 2.0 Expo, openness is just what the agency needs.

Medina’s Role at the CIA:

Carmen Medina: I just retired after 32 years at the CIA. I spent 25 years as a manager of analysts. In the mid part of this decade, I was sort of the number two in charge of analysis, and I ended up in charge of the Center for the Study of Intelligence, which is kind of like the Agency’s think tank and lessons-learned center. During my career, I was a bit of a heretic in the organization, though a successful one I guess, in that I always questioned how things were done. From the beginning, I was really interested in how information technology and the Internet had the potential to change the way we did our business. So back in the late ’90s, I was pushing hard to get all of our work online, even though a lot of people in the agency were skeptical about it.

Social media and extreme views:

CM: What the Internet allows, if you’re an individual who holds an extreme view, is the ability to broadcast that view in the privacy of your den. You can get information to support your view without having to go to any unusual places that would attract suspicion. You can find other people who hold the same views that you do. You’re able to hide in plain sight, basically, while you’re doing that. While I’m a strong believer in the Internet and social networking, like everything else that’s happened in human history, it also offers a lot of potential for people who are not well-intentioned.

How our ideas about privacy have to change:

CM: It struck me two or three years ago that our historical concepts of privacy were dependent upon what the technologies were at the time. So in my view, privacy is going to have to adjust to what is now possible. While some of the things that are now possible are scary to people, many add to the public good.

I’ll say it in a more generic way: If you’re using the power of social networking or monitoring to prevent activities that the community, through law, has decided are illegal, then I don’t think you have a right to privacy while doing those illegal things.

Some concepts of privacy that we thought were rights are going to have to give way as we find out that social networks, monitoring, and digital ubiquity are simply more efficient ways to enforce laws, for example. That’s a big thing in Britain. I mean, God only knows how many cameras they have on their streets. And they’re using them to fight crime in ways that, frankly, I don’t think are yet possible in the U.S. because of our privacy concerns.

It’s going to be very tricky. A not-well-intentioned government, or a government with authoritarian tendencies, is going to use these technologies in ways that the citizenry wouldn’t approve of. But that government is not going to give them a chance to approve it.

But let me also give you the other side of it. Government is viewed as inefficient and wasteful by American citizens. I would argue that one of the reasons why that view has grown is that they’re comparing the inefficiency of government to how they relate to their bank or to their airline. Interestingly enough, for private industry to provide that level of service, there are a lot of legacy privacy barriers that are being broken. Private industry is doing all sorts of analysis of you as a consumer to provide you better service and to let them make more profit. But the same consumer that’s okay with private industry doing that is not okay, in a knee-jerk reaction, with government doing that. And yet, if government, because of this dynamic, continues not to be able to adopt modern transactional practices, then it’s going to fall further behind the satisfaction curve.

We have to rethink government along these lines. And it’s interesting to me that at least in the British election, it’s out there as an issue in an explicit way that it has yet to be in the U.S.

How failure to share information leads to more failure:

CM: One of the objections to social networking and transparent collaboration that you get at an agency like the CIA is that when you are really doing something where you cannot have failure, the work has to be tightly controlled. It has to be much more point-to-point and hierarchical. I thought that was a stupid argument that needed to be taken apart.

The first two-thirds of my Expo talk will use the chronology of the 2003 blackout as an example. One of the main utilities had decided to buy different process-control software, and so they were no longer paying for upgrades to the old software. Some of the bugs that would otherwise have been fixed brought the system down. I’m going to talk about why high-reliability, high-risk organizations should be the first to adopt the principles of transparent and collaborative work, because when these kinds of organizations have catastrophic failure, it’s usually because of stovepipes and lack of systemic awareness.

Since I come at this as a manager, I’m also going to talk about what it means for managers when you adopt transparent, collaborative, networked work. Most of what old-style managers are accustomed to doing is based on the industrial way of working. But if you create a transparent collaborative network, the manager becomes a monitor of the network’s health and the network’s talents. They make sure the mission is done, rather than acting as a quality control officer over every step of the process.

Why previous attempts to share intelligence have failed:

CM: In every instance that I can think of, people get sucked in by the technology solution without looking at the culture and the way people are doing the work. And when you overlay this new shiny toy on old processes, you actually make everything worse. People decide they have to create a big program and ask for a new budget line, and it has to be rolled out with bells and whistles. Then the contractors come along and see real opportunities for billable hours. So instead of getting modest iterative development, you get this massive program that takes 10 years to roll out. And you’re still working on implementing a technology that’s now three generations old. I’ve seen that happen.

But I still think it can be done. Most of what we would want to achieve we can achieve with existing technologies. You’ve got to start with the process. You have to change the objectives of the intelligence business to a certain degree. Make it less hierarchical and less about the “definitive answer.” Once you agree that things could work differently, then different technological solutions will become apparent.

The way government adopts technology is broken:

CM: There’s a tradition of huge IT programs and huge IT departments. And part of that is because that’s how you play the budget game with Congress. But now we’re moving into a world of apps. People are rolling out, for pennies on the dollar, little apps that can do essentially what a huge government program is supposed to do. Five years from now, it’s going to be unsustainable for government to still be saying it needs five years and $2 billion to create a new information-sharing system. It’s nuts.

Can we crowdsource intelligence gathering?

CM: The intelligence community is betting that a closed network of limited people can actually be smarter about the world than the open network out in the world. And as a citizen, I don’t think that’s a good bet. In fact, people have been saying that for a while. There’s a move toward greater openness called “outreach.” But outreach is a very old-style static process: an event here, an event there. You’re always going to lag, and you’re always going to have less information about the world around you because you’re within a closed network. I strongly believe that. I’ll probably get in trouble for saying it, but that’s my view.

There are ways to create trusted networks, and it would be fascinating to do some prototypes on that. Look at what happened when Steve Fossett got lost. Satellite images were farmed out to people to see if they could find anything. Another example is Mechanical Turk. I’ve joined Mechanical Turk. I’ve done little tasks on it. If you can itemize a job into small, discrete tasks, then you can farm it out to large networks of people. You could create trusted networks like Procter & Gamble and some of these other companies do. There’s just huge potential there.

This administration and these policymakers need to ask for this kind of stuff. It’s like if you’re raising money. You’ve got to make the ask. If you don’t, you don’t get it. With something as important as the intelligence community, something that’s also difficult to change, you’ve got to make the ask.

This interview was condensed and edited.