When you hear the phrase “cameras everywhere” your first thought may be of ubiquitous surveillance cameras, watching your every move on behalf of the state, private businesses and corporations.
On second thought it may conjure up the hundreds of millions of cameras, mobile phones and Internet connections in the hands of ordinary citizens who are filming, sharing and remixing footage — cameras that can act as powerful tools to push for positive human rights change.
From Burma’s “Saffron Revolution” in 2007, to the Iranian political protests in the summer of 2009, to this year’s Arab Spring and beyond, it’s clear that the promise ubiquitous cameras hold for human rights is great. But so is the peril.
My organization, WITNESS, has just published a new report in which we analyze the challenges and opportunities of a more citizen-driven, bottom-up definition of “cameras everywhere.”
Among the questions we address: what practical and ethical responsibilities do site managers, technology developers, human rights video producers, aggregators and viewers have to increase the upside of ubiquitous video for human rights and mitigate the downsides?
Crucial among these are two issues we focused on at the first-ever Silicon Valley Human Rights Conference, held recently in San Francisco.
Issue No. 1: Visual privacy and visual anonymity
One of the biggest challenges cameras everywhere present is that of visual privacy:
“Retaliation against human rights defenders caught on camera is commonplace, yet it is alarming how little discussion there is about visual privacy. Everyone is discussing and designing for privacy of personal data, but the ability to control one’s personal image is neglected. The human rights community’s long-standing focus on anonymity as an enabler of free expression must now develop a new dimension: the right to visual anonymity.”
— excerpt from the Cameras Everywhere report
As Syrian activist Alexander Page described in a recent Al-Jazeera article about pro-democracy YouTube videos: “We have realized that the Syrian regime was, although annoyed at what was happening, very fond of the information these videos provided, and used Mukhabarat [intelligence service operatives] to help them identify the faces.” Activists in Burma and Iran had similar experiences, with regimes in both instances using activist video to identify people. In Iran’s case, this use of video extended to crowd-sourcing identification via a website of pictures pulled from social media sites.
One area of the report takes a close look at issues of privacy and safety. Many of the risks here stem from the insecure intersection of mobile and video: mobile is a communications medium that leaks user data, while digital photos and videos contain a wealth of potentially privacy-compromising information, both in the frame (visible to the naked eye) and in layers of metadata embedded in the file.
One particular area that we believe needs more discussion is how to re-establish anonymity in a culture where video is increasingly used to expose injustice and enable whistle-blowing. Dating back to the Founding Fathers and right through to WikiLeaks, pseudonymity and anonymity have been ways to express unpopular opinions, protect vulnerable minorities, and allow for whistle-blowing. Within the human rights and civil liberties community, anonymity has often been understood as a key enabler of freedom of expression and speech.
During the recent events of the Arab Spring, the value of anonymity or pseudonymity for participants on social networks like Facebook has been debated (both accentuating the risks of using real names and noting the challenges of anonymous mobilizing). Most recently, the introduction of Google+ has generated a significant debate about pseudonymity: the “nymwars.”
Yet in our research, we found very few people thinking about productive ways to enable “visual anonymity” — i.e., how do we extend our thinking about anonymity beyond a culture of text or data to one increasingly mediated by the image?
There is a role that activists, social networks and video-sharing platforms can play in facilitating anonymity when it comes to video. At one level, permitting pseudonymous account names and making it easy to strip out identifying metadata helps protect uploaders and sharers. And for the person in the picture, enabling much simpler ways to anonymize faces and voices would be a step in the right direction, not just for a human rights activist in Syria but also, for instance, for a victim of trafficking in Illinois.
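To make the metadata risk concrete, here is a minimal, illustrative sketch (not code from WITNESS or any platform) of one thing “stripping identifying metadata” can mean in practice: dropping the APP1 segment of a JPEG, which typically carries EXIF data such as GPS coordinates and camera details. Real images can carry identifying data in other segments too (XMP, IPTC and maker notes, for example), so a production tool would need to handle far more than this.

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (EXIF/XMP) segments removed.

    Illustrative sketch only: walks the JPEG marker segments up to the
    start-of-scan marker and drops any APP1 segment it finds.
    """
    if jpeg[:2] != b"\xff\xd8":  # SOI marker must open every JPEG
        raise ValueError("not a JPEG stream")
    out = bytearray(jpeg[:2])
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            # Unexpected byte: copy the remainder verbatim and stop parsing.
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows; copy rest.
            out += jpeg[i:]
            break
        # Segment length is big-endian and includes its own two bytes.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:  # keep everything except APP1 (EXIF/XMP)
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

A platform could run something like this server-side on upload, or an app could do it on the device before anything leaves the phone — the latter being the safer default for activists on hostile networks.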
A WITNESS Labs collaboration with the Guardian Project shows how this could be done. The SecureSmartCam project includes the ObscuraCam app for anonymizing people’s faces in photos and videos shot on an Android phone — no complicated post-production, just a simple way to obscure a face or a background that fits with an increasingly mobile-based, real-time photo- and video-sharing culture.
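ObscuraCam’s own implementation is more sophisticated, but the core idea of obscuring a region can be sketched as a block-pixelation pass over a grid of grayscale pixel values (a hypothetical illustration, not the app’s code):

```python
def pixelate(pixels, x0, y0, x1, y1, block=8):
    """Pixelate the rectangle [x0, x1) x [y0, y1) of a grayscale image,
    given as a list of rows, by replacing each block-by-block tile with
    its mean value. Modifies the image in place.
    """
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            # Gather the tile, clipped to the rectangle's edges.
            tile = [pixels[y][x]
                    for y in range(by, min(by + block, y1))
                    for x in range(bx, min(bx + block, x1))]
            mean = sum(tile) // len(tile)
            # Overwrite every pixel in the tile with the tile's mean.
            for y in range(by, min(by + block, y1)):
                for x in range(bx, min(bx + block, x1)):
                    pixels[y][x] = mean
    return pixels
```

One design note: pixelation and blurring can sometimes be partially reversed, so for high-risk footage full redaction (painting the region a solid color) is the safer choice — which is why redaction tools typically offer it alongside pixelation.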
This ability to more selectively anonymize images grows particularly important in an age of increasing facial recognition, where the risks to an individual photographed or videoed are global, networked, and beyond the ability of one person to control. (For more on the risks of facial recognition see this recent article in the Atlantic.)
Issue No. 2: What technology providers can do
“Technology providers are increasingly intermediaries for human rights activism. They should take a more proactive role in ensuring their tools are secure and in integrating human rights into their content and user policies.”
— excerpt from the Cameras Everywhere report
The key vulnerabilities we identify at the network level of human rights video lie with corporations and governments. It’s no secret that the events of the Arab Spring have pushed YouTube and Facebook to the forefront in terms of their responsibilities to human rights defenders. Facebook’s initial decisions to take down the pseudonymous accounts of anti-government mobilizers had an impact at critical moments of the uprising in Egypt, while conversely Facebook pages such as the Syrian Revolution have been critical in sharing information on the crisis in that country. Similarly, YouTube has been at the forefront as a space for citizens to share their experiences, both in the countries of the Arab Spring and in far less widely covered human rights situations, including ones where only a small minority is speaking out.
While not minimizing the positive role that social media and video-sharing sites have played in facilitating easier ways to communicate and organize, we do agree with Internet researcher and academic Ethan Zuckerman’s analysis in which he notes that “hosting your political movement on YouTube is a little like trying to hold a rally in a shopping mall. It looks like a public space, but it’s not — it’s a private space, and your use of it is governed by an agreement that works harder to protect YouTube’s fiscal viability than to protect your rights of free speech.”
Online service providers like Facebook and YouTube are private spaces that must largely prioritize as friction-free a user experience as possible. That approach may be at odds with considerations important for human rights content, such as anonymity, consent and contextualization. In the report, we outline some concrete steps that technology providers could take to make their spaces friendlier to human rights around privacy and freedom of expression, or at least to avoid actively hindering their use for human rights and free speech.
These recommendations to technology companies and developers focus on four sets of changes — to policy, functionality, editorial content, and engagement. Making these changes would not only positively affect the entire environment for online and mobile video, but would also free up resources in civil society.
- Put human rights at the core of user and content policies: Re-evaluate current policies using human rights impact assessments; create human rights content categories that are not vulnerable to arbitrary take-downs and highlight key values around context and consent; and ensure content is preserved wherever possible.
- Put human rights at the heart of privacy controls and allow for anonymity: Make privacy policies more visible and privacy controls more functional using principles of privacy by design, and allow for visual privacy and anonymity with the help of new products, apps and services.
- Create dedicated digital human rights spaces: Support curation of human rights videos; facilitate user education and understanding of human rights issues; make take-down and editorial policies transparent; employ Creative Commons licensing; and support users in dealing with ethics and safety issues.
- Engage in wider technology-human rights debates and initiatives: Draw on expertise across companies in order to collaborate on human rights guidelines; participate in multi-stakeholder initiatives, such as the Global Network Initiative (http://globalnetworkinitiative.org/); and address supply chain and environmental impact issues.
As we move into an era of ubiquitous video (though ask an activist working in a rural area of the D.R. Congo or West Papua about gaps in access to technology), there is a need to focus on skills, on ensuring people can film safely and effectively, and on enabling them to break through a crowded information landscape to have their material seen and acted upon. And there is a need for new players, like technology companies, to step up to their role as enablers (or blockers) of the realization of human rights.
You can learn more about these issues, as they relate both to visual media technologies and to other new communications technologies, in the WITNESS “Cameras Everywhere” report and at the live-blog wrap-up of our panel at the Silicon Valley Human Rights Conference.
Sam Gregory, program director at WITNESS, is a human rights advocate, video producer and trainer who speaks and writes frequently about the opportunities and challenges posed by using video for human rights.