Facebook's face recognition strategy may be just the ticket

Face recognition is here to stay.

Most of the commentary on Facebook’s new face-recognition strategy has been negative, with many folks posting instructions on how to opt out. I, on the other hand, think that Facebook may have come up with a great strategy for cutting the Gordian Knot on this thorny privacy problem.

Face recognition is here to stay. My question is whether to pretend that it doesn’t exist, and leave its use to government agencies, repressive regimes, marketing data mining firms, insurance companies, and other monolithic entities, or whether to come to grips with it as a society by making it commonplace and useful, figuring out the downsides, and regulating those downsides.

This is part of my general thinking about privacy. We need to move away from a Maginot-line like approach where we try to put up walls to keep information from leaking out, and instead assume that most things that used to be private are now knowable via various forms of data mining. Once we do that, we start to engage in a question of what uses are permitted, and what uses are not.

Overall, I think our privacy regimes need to move to a model similar to that applied to insider trading. It’s not possession of secret information that is criminalized; it is misuse of that information to take advantage of the ignorance of others.

Google and others have shied away from releasing web-based products that include face recognition technology because of privacy concerns (though both Apple’s iPhoto and Google’s Picasa apply it to photos stored on your local hard drive and under your control). No one wants to take the arrows or the possible legislative and/or regulatory scrutiny that may ensue.

What I like about Facebook’s approach is that they aren’t using the technology to actually tag people in photos; they are using the technology to alert people on your friend list that you might have appeared in a photo, and relying on those people to add the tags. This modified approach will result in better data, but also may mute just enough criticism that users will come to accept it.
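
To make this concrete, here is a minimal sketch of what a suggest-then-confirm flow might look like. It is purely illustrative and assumes nothing about Facebook’s actual implementation; the embedding comparison, the threshold, and the function names are all hypothetical. The key property is that candidates are drawn only from the uploader’s friend list, and a tag is recorded only after a person confirms it.

```python
# Hypothetical sketch of a suggest-then-confirm tagging flow.
# Not Facebook's actual implementation; names and thresholds are invented.
from dataclasses import dataclass

import numpy as np


@dataclass
class Friend:
    user_id: str
    embedding: np.ndarray  # face embedding derived from previously confirmed tags


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def suggest_candidates(detected_face: np.ndarray,
                       friends: list[Friend],
                       threshold: float = 0.8) -> list[str]:
    """Suggest friends whose stored embeddings resemble the detected face.

    Only the uploader's friends are searched, never the whole user base.
    """
    return [f.user_id for f in friends
            if cosine_similarity(detected_face, f.embedding) >= threshold]


def record_tag_if_confirmed(photo_id: str, user_id: str, confirmed: bool) -> bool:
    """Persist a tag only after a human accepts the suggestion."""
    if not confirmed:
        return False
    # store_tag(photo_id, user_id)  # persistence layer omitted in this sketch
    return True
```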

When it comes to privacy, putting our heads in the sand about what’s already possible with data mining and machine learning (and what will become even more possible with every passing year) is short-sighted. Unless we’re prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins: asking what new social norms we need to establish for the world as it is, rather than as it used to be.

  • Marc Hedlund

    You’re right that we shouldn’t assume data can somehow be contained or controlled once it’s online. As I’ve heard others say, “Online for a minute means online forever.” But at the same time I think it would be wrong to assume that this or any feature will really provoke understanding of what privacy means now. Facebook is careful to retreat (as they have here) if a new feature causes too much privacy shock when it launches; instead they stick with the features that edge in the direction of sharing more without excessive shock.

    Privacy is one of those things, like plumbing, that you only miss when it fails you. Fortunately those moments of failure don’t come for everyone, but they are the only real provocation that I’ve seen lead to understanding. Shock at an individual rather than collective level, though, is a very bad setup for collective change. Also, someone for whom privacy has failed is unlikely to talk much about that failure since it is by definition something they wish had remained private.

    If facial recognition won’t change the discussion, then, and many small failures won’t either, what will? I think the answer is nothing will. The only thing I can really imagine changing the discussion is a mass, collective, irrevocable privacy shock – a privacy China Syndrome (something imagined by sci-fi author John Brunner in his 1975 book The Shockwave Rider). That seems like just sci-fi to me, though, not a real future.

  • Sarah

    What most people realize is that the issue here is not really privacy; it’s disclosure. There would be no howls about breach of privacy if Web sites were held accountable and responsible for clearly communicating exactly what they (and their partners) are doing with any and all information you provide.

    The problem is greatly exacerbated by the bad behavior of the advertising and data industries. Google, Facebook, even Apple et al. have all buried anti-consumer language in ridiculous EULAs and are actively participating in data mining efforts with companies like Epsilon, Acxiom Digital, etc. The giant databases with your every online move (across Web sites and services, including many of your physical services like banking, credit, and mobile phones) are all mapped to your very unique and personally identifiable information, and are for sale as we speak. They can look up your name and tell anyone just about anything. Not only do they freely sell this data to the highest bidder, they are shielded from those useless Privacy Policies. Or they just ignore them.

    The only way we can ever have a cogent, thoughtful conversation about privacy is if the industry comes clean. Until then, it’s us vs them, and any little change they make will continue to elicit knee-jerk reactions from people like me – deserved or not.

  • fjpoblam

    Tim, it’s not about whether face recognition itself is good or bad. Obviously, our faces are already involved in such things as driver’s licenses and passports.

    What some of us are concerned about is (1) what the recognition is being used for, and (2) whether the recognizer has asked in advance and acknowledged its use of face recognition.

    Insofar as (1) is concerned, Google, Apple, and Facebook use face recognition in targeted advertising and trade the recognized faces to parties to whom we may not have given permission in advance. (In contrast, I may not be required in advance to show my driver’s license or passport to all parties for targeted advertising. Further, I may not agree to have parties to whom I have revealed my driver’s license or passport reveal copies of same to third parties without my prior consent.)

    Insofar as (2) is concerned, those to whom I reveal my driver’s license or passport are reasonably required by law to identify themselves in advance and present reasons for which they require the information. I am as a result allowed to decline the presentation with appropriate consequences.

  • Srinagesh Eranki

    Tim,

    Privacy is a subjective issue. At best, the rule of law can be vague. The person at the center is best placed to determine whether his/her privacy has been violated. They know how much is too much, etc.

    Why won’t Facebook ask only the person concerned if he/she would like to tag themselves in a photo?

  • Peter Fleckenstein

    Tim, privacy is an individual issue. It’s a matter of choice from the very beginning. When you have companies like Facebook who just decide to release whatever they want and implement it in the way they want, then you’re basically denying those people choice.

    If you implement practices for the privacy regimes similar to insider trading then you’ve just advocated for the government to regulate our privacy. Talk about the ultimate Pandora’s Box.

    You said in the article:

    “What I like about Facebook’s approach is that they aren’t using the technology to actually tag people in photos; they are using the technology to alert people on your friend list that you might have appeared in a photo, and relying on those people to add the tags.”

    So if Facebook is using technology to alert people on my friend list that I’ve appeared in a photo, then aren’t they tagging me in advance? I mean, how else would it be effective? I don’t believe that they’re telling people it might be me, as that would be just guessing, and pretty soon you’d have a mess of guesses and a feature that in effect would become useless.

    The bottom line for me is that this whole Privacy issue is being framed incorrectly. You have companies saying “Hey, privacy is dead so just go along with it.” Those are the same companies who are selling your privacy at the expense of the individual.

    I think a lot of people have had enough of being told that they really don’t have control of their privacy. I may be crazy but I don’t think so.

  • http://www.tacticalinfosys.com Mary Haskett

    Transparency and accountability are the keys – as long as you know who has your data and what they are doing with it, that will prevent most abuse.

  • http://blog.famebook.com Jan Simmonds

    With great respect in general, I think this and your ‘Contrarian Stance On Facebook Privacy’ are equally missing the point. In a superficial sense, and in both cases, you do make some valid points. But, as with the significant majority reacting aggressively against Facebook’s rape of individual privacy and its rather patronizing offer of a belt to hold our pants up after the fact, neither camp has properly addressed the core issue.

    700 million users prove we all like being connected; it doesn’t prove Facebook’s business model is the right one. I would suggest that 99% of people who buy a Coke, wherever they are in the world, trust it will taste the same and be safe to drink. We have learned recently that roughly 70% of Facebook users don’t trust them, with over 50% showing a certain contempt. I’m sorry Tim, but even given your pedigree, I don’t see how this describes a good lasting business worthy of defending on any level.

    In my humble opinion we are only in the eye of the storm, and whilst the privacy cat is, as you say, out of the bag, it is just part of the wider issue of who owns the data. Most people bought into Mark’s promise that Facebook allowed us to connect and share with people and that our privacy was paramount. In return for some advertising on our profiles we would enjoy the benefit of a platform which, as he predicted, would change the digital landscape. He offered us a digital home and implied that this would be treated with the same respect we would expect of our physical ones. We trusted him. Now we actually find that whenever we come home from work, he’s in our house, installing more and more cameras in our bedrooms and checking out our refrigerators to find out what we like to eat. Sometimes he brings his paying friends round too, to show them what we like. Because he’s imposed himself as the landlord of that home, we are powerless to act without losing that which we have nurtured privately to our taste.

    There are two flaws in this delusional attitude. Firstly, people aren’t stupid and you can’t force them to keep creating content, so a policy based on pissing them off would appear counterproductive. Secondly, Facebook by its very nature is creating a ‘faceless’, flat crowd environment that devalues content, stifles aspiration, and acts as a black hole for those brands who once stood out from the crowd and are now flatlined within it. Again, a stupid business footing in my view. Sure, some successes in gaming and FMCG brands, which incidentally are the only ones getting decent meaningful traction, can cloud one’s judgement, but remember the storm isn’t past. There are already a few sites cropping up that have worked out that the successful internet of the future is one that can be trusted, offers a presumption of digital ownership, and encourages us to truly open up and share, perhaps benefiting from earnings and a quality of life that everyone can enjoy, underpinned by companies earning from that new traded currency. I truly believe those revenues would dwarf Facebook’s within a few years and would define brands that would last many more generations.

  • http://TagMeNot.info A.Cammozzo

    Tim,
    as you know, “Face recognition” is a term that hides many different technologies, from innocuous face detection to identity verification, including some privacy-enhancing blurring techniques. Of course face recognition is here to stay. What is happening with this debate is precisely that we are figuring out the downsides of some of the face recognition technologies. The excited reaction we are observing today is an anticipation of what a privacy panic will look like if these technologies are used without restriction and people get scared (Marc Hedlund, a few comments above, calls it a privacy shock).

    Facebook is gradually and silently escalating from a social network to a global face matching search engine. The kind of search engine Google reportedly made and decided not to roll out.

    The privacy implications of a global face recognition search engine are understandable (and understood, as Eric Schmidt has recently acknowledged), even if not immediately tangible to the individual FB or Picasa user, who sees only his portion of the big picture, so to speak.

    The fact is that FB (and Google) have now developed the technical capability to parse millions of pictures looking for a given face, and to find the identity related to a face. This is not existing information “leaking out”: it’s the creation of new personal information that previously didn’t exist.

    As a private face recognition search engine, FB will have to use its capabilities if asked by government agencies, and will be tempted to offer them to other “monolithic entities,” as you call them.

    Let’s ask FB: what kind of government agencies will have access to these capabilities, from which countries, and on which grounds?

    More thoughts on face recognition privacy issues on this and on the difficulties of opting out from public spaces in this page of the TagMeNot.info project.

    twitter/donttag

  • http://jeffjonas.typepad.com/jeff_jonas/ Jeff Jonas

    1. Anything that better informs consumers about what is computable … is a good thing in my opinion as this raises consumer awareness.

    2. That said, facial recognition when applied in 1-to-many identification domains (i.e., does this face match one of these 10M faces) doesn’t work well. Facial recognition involving 1-to-1 matching or 1-to-few matching (i.e., does this face look like you or anyone else in your friend list) does work pretty damn well. Why do I say this? Well, Facebook is in a unique position to actually make facial recognition work, using friend lists to reduce the candidates and thus achieving 1-to-few match processing, potentially unleashing facial recognition efficacy for “government agencies, repressive regimes, marketing data mining firms, insurance companies, and other monolithic entities” that could not achieve these results otherwise (without the candidate lists of your friends).
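
    A rough back-of-envelope illustration of the point above: if each comparison carries some small false-match rate, the expected number of false matches grows linearly with the number of faces searched. The sketch below is purely illustrative, and the false-match rate used is an assumed placeholder, not a measured figure.

```python
# Back-of-envelope illustration of 1-to-few vs. 1-to-many matching.
# The false-match rate here is an assumed placeholder, not a measured figure.
def expected_false_matches(gallery_size: int,
                           false_match_rate: float = 1e-4) -> float:
    """Expected false matches when one probe face is compared to every gallery face."""
    return gallery_size * false_match_rate


# 1-to-few: searching ~200 friends keeps expected false matches negligible.
print(expected_false_matches(200))         # 0.02
# 1-to-many: searching 10 million faces makes false matches all but certain.
print(expected_false_matches(10_000_000))  # 1000.0
```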

  • http://www.cloudera.com Jeff Hammerbacher

    Hey Jeff,

    Do you have a reference for your statement that “facial recognition when applied in 1-to-many identification domains (i.e., does this face match one of these 10M faces) doesn’t work well”? I’d be curious to learn more about the current performance limitations.

    Thanks,
    Jeff

  • https://twitter.com/#!/PrivacyCamp Shaun Dakin

    Tim,

    Thanks for the post on this issue. You are right.

    My take, increasingly, is that companies like Facebook have done a very poor job communicating the value of sharing and the value of “big data” collecting and storing information about all of us. Of course, Facebook is “winning” in the marketplace. Zuckerberg essentially said that at the eG8 in Paris a few weeks ago.

    They are not “winning” in the communications war.

    IBM (as usual) is way ahead of the curve with their “smarter planet” series of communications (http://www.ibm.com/smarterplanet/us/en/?ca=v_smarterplanet). They are showing us a better future based on data.

    The future, our collective future, could be amazing (will be amazing) if we can learn to use all this data being collected, stored, and analyzed to “win the future”.

    I talked about the Future of Privacy at an Ignite talk in DC last year in which I assume (a huge assumption) that the issues of who controls / owns data have been taken care of (you and I do, not the corporations).

    Here is my talk > http://vimeo.com/webbmedia/shaun-ignite-2010

    Regards,

    Shaun Dakin
    Founder – Privacy Camp
    @PrivacyCamp

  • http://www.hablantia.com Peter Bennett

    Hi Tim,

    I could draw parallels with the LulzSec events, particularly this article http://risky.biz/lulzsec, which essentially says that security has been broken for a long, long time, and that eventually it takes something like LulzSec to point this out.

    With personal privacy it might be similar. Facebook’s business model is constantly poking people and prodding them to engage with this.

    The first step is that enough people have to be interested in questions of privacy and misuse of data; then there will be the incentive to create and engage with the tools for managing and tracking usage of this data, as you point out. At the moment there are too many knee-jerk responses.

    Another metaphor might be that you can’t get the immune response until the infection has hit (no matter how many insightful Radar posts try to immunize people ;-)

    Peter

  • gregorylent

    “putting our heads in the sand with what is already possible with” … say, guns … point being, it is ok to not do everything that is possible.

  • http://twitter.com/jotman Jotman

    To my knowledge, nowhere has Facebook ruled out using the human tagging as the basis for computer sorting of faces in photos at a later date. And if someone has heard Facebook claim they won’t do this, you can’t believe them, because anyone who trusts Facebook is an idiot.

  • http://www.upgrade-software.com/templatezone-highimpactemail5.html Stevo

    Very informative; thank you for your concise breakdown. I agree that we can’t ignore these innovations in technology. If we think that by ignoring them they will simply dissipate or dissolve, we will only be fooling ourselves, and in turn allowing potential adversaries to expand upon, utilize, and possibly exploit these advancements in technology. So let’s rather see what these technologies are about and how they can assist us in our day-to-day interactions, and then we can apply these innovations positively, toward our augmentation and the improvement of our communities and relationships.

  • https://twitter.com/#!/Gary_Davis_Asia Gary

    Tim, though I am a latecomer to this discussion, I am glad that I’ve had the chance to read both this post and your previous post on the subject, as well as all of the responses. Though I appreciate the examples of metaphor, I have read nothing that dispels my confidence that my “knee jerk” reaction and my “Maginot-line like approach” are legitimate, made so by compelling reason. I would need much more space than a blog or two to dive deeply into my convictions on this subject, so I will confine my remarks to what I consider the more egregious parts of your premise.

    You suggested that we “come to grips with it as a society by making it commonplace and useful”. Simply put, I cannot think of any use, in the context of Facebook or in any common place, that would be a positive addition to my life.

    You suggested that we should move away from a “Maginot-line like” stance on privacy and suggest further that “Once we do that, we start to engage in a question of what uses are permitted, and what uses are not.” I do not wholly disagree with you that parsing the discussion is useful but I would favour a rigid stand on which data should be available, and argue my concerns at that point, rather than arguing each use case. If this qualifies as “Maginot-line like” then so stipulated. I definitely do not wish for Facebook or other consumer facing organizations to preemptively make those decisions.

    You write “What I like about Facebook’s approach is that they aren’t using the technology to actually tag people in photos; they are using the technology to alert people on your friend list that you might have appeared in a photo, and relying on those people to add the tags. This modified approach will result in better data, but also may mute just enough criticism that users will come to accept it.” I would ask why recruiting a crowd to be accomplices in building a better face profile of me on FB is good, and who it is for. Further, I would ask with some intensity, “why is muting criticism a good thing?”

    In summary, if consumer facing application is the only rationale for passive acceptance of Facebook’s increasingly complex policies, I must disagree. I am grateful for the opportunity to do so and enjoy reading your comments on technology.

  • Michael

    Tim,

    For me face recognition seems very useful for all those governments (and thus their agencies) who want to control their people more and more. There is proof that Facebook’s start-up is partly funded by the CIA, among others. I have no secrets, but I do not want some organization, no matter which, poking around in my personal data. So I am not on Facebook and never will be.

    Michael

  • Veronica

    I see Facebook’s face recognition strategy as the ticket to another irresponsibly bad precedent, right down the slippery slope to complete loss of privacy. Looking back at the writings of Huxley (Brave New World) and Orwell (1984), it occurs to me that both of these negative-utopia creators were on the right track, but the reality of today’s digital inquisition against personal privacy is far more insidious than even they envisioned. It’s really alarming that people are being so desensitized to the depth of transgression this engenders — and the proof of this is apparent in the fact that so many are willing to share so much about themselves — without considering the serious consequences to society when one’s private moments are no longer private. Or perhaps this is just another symptom of the dumbing down of society in general?

    Big Brother is certainly here, and he’s profiting from our losses.

  • theApocalaypse

    The main problem with anything that Facebook does is their inability or refusal to set the defaults for all of these “features” to the off position.

    The other problem, IMO, is not a matter of what is and what isn’t private; it is, as the author pointed out, the notion that by using their service we have granted them the right to profit from our data without giving us any compensation.

    THAT is how the world needs to change. Companies that do NOTHING are worth billions of dollars because they are leveraging OUR information... Think about it.

  • Ji Yeon Lee

    Anyone who wants to disable the feature can follow the video instructions: See a Video Instruction

  • http://eelsecreto.com/ El Secreto

    Hey Tim,

    Great article. Still, I wonder what kind of future commercial uses you see for face recognition technology.

    Thanks,