As of last week, users can simply check a box and YouTube’s facial recognition algorithm will pixellate any faces contained in a video. The move — which makes YouTube the first major video-sharing service to offer such a feature — has already been hailed by activist and civil liberties organizations like WITNESS and the Electronic Frontier Foundation.
A post on the official YouTube blog notes the software is far from perfect — it can be affected by things like lighting, angle and video quality — but calls it a step in the right direction:
“Whether you want to share sensitive protest footage without exposing the faces of the activists involved, or share the winning point in your 8-year-old’s basketball game without broadcasting the children’s faces to the world, our face blurring technology is a first step towards providing visual anonymity for video on YouTube.”
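YouTube hasn’t published implementation details, but the blurring step itself is conceptually simple: once the recognition algorithm has found a face’s bounding box, each small tile inside that box is replaced with its average color, destroying the fine detail a recognizer needs. A minimal pure-Python sketch of that pixellation step (the function name, the grayscale-frame representation, and the box coordinates are all hypothetical, for illustration only):

```python
def pixellate_region(image, box, block=8):
    """Mosaic the region `box` of a grayscale frame in place.

    `image` is a 2D list of pixel values (0-255); `box` is
    (top, left, bottom, right). Each block x block tile inside the
    box is replaced by its average value, erasing facial detail.
    """
    top, left, bottom, right = box
    for y in range(top, bottom, block):
        for x in range(left, right, block):
            # Gather the pixels of this tile (clipped to the box edges).
            tile = [image[yy][xx]
                    for yy in range(y, min(y + block, bottom))
                    for xx in range(x, min(x + block, right))]
            avg = sum(tile) // len(tile)
            # Overwrite the tile with its average value.
            for yy in range(y, min(y + block, bottom)):
                for xx in range(x, min(x + block, right)):
                    image[yy][xx] = avg
    return image

# Hypothetical example: blur a 4x4 "face" region inside an 8x8 frame.
frame = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
pixellate_region(frame, (2, 2, 6, 6), block=4)
```

In a real pipeline this would run per-frame on the detector’s output boxes; the important design point is that averaging is lossy, so the anonymity holds only if the unblurred original is actually destroyed — exactly the question raised below about whether Google retains the raw footage.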
Of course, the technology can be put to ill use as well, such as hiding the identity of the next school bus bully. And regardless of its possible applications, how effective a safeguard it provides depends on whether state authorities can get their hands on the raw footage. You can delete the original video once you’ve blurred all the faces, but it’s unclear whether it remains accessible to Google, and thus to anyone with the appropriate court order (or any hacker with enough skill). Google and YouTube have yet to respond to my email asking for clarification.
Still, it’s refreshing to see Google stake out such a strongly pro-anonymity position, for while this is an obvious win for activists and democratic agitators, it also gives the Joe Blows of the Internet something with which to defend themselves against increasingly pervasive facial recognition technology.
Facebook can already recognize your mug in photos, and the company’s acquisition of Face.com suggests it plans to integrate facial recognition even more deeply into the site. Similar features are already available in Google+ and iPhoto, and let’s not forget the role facial recognition played in the arrest of G20 activists in Toronto. Combined with ubiquitous video and public social media profiles, such technology could be kryptonite for privacy. Last year, a group of researchers from Carnegie Mellon University were able to identify people on Match.com by applying consumer facial recognition software to photos on Facebook (I’m sure Ashley Madison users are shaking in their boots). In a subsequent experiment, the researchers identified a third of the people in a given public setting by comparing their faces to publicly available photos from social media sites. The implication is that, as facial recognition proliferates, anyone with your picture may be able to use it to unlock a trove of personal information, from your social media profiles, to your whereabouts, to that one time you attended a controversial protest.
Facial recognition is obviously something we need to have a healthy public debate about, and YouTube’s decision will hopefully help catalyze the discussion. Should other sites follow its lead, we could begin to see our online norms move in step with advances in potentially privacy-invading technology. We need to pass laws that better regulate how this technology is used — by both private citizens and the state — and we need to educate users so that they can give informed consent to sites like Facebook. At the very least, we need the dominant paradigm to be one of opting in to, rather than opting out of, facial recognition services.