Internet Freedom: Protect, then project

A few weeks back, I offered a blog post that was intended to spark some conversation about the idea of Internet Freedom. I’ve gotten a wealth of reactions to that post, laudatory and critical, and I’ve recently been involved in more conversations about Internet Freedom than I’d strictly bargained for. It’s an issue near the front of many minds, as we wait to hear whether Google will stop censoring its Chinese search engine, as we ponder the implications of Treasury’s recent rule changes regarding social media, and as we watch congressional hearings and the progress of multiple pieces of legislation. (My friend and colleague Rebecca MacKinnon, who’s been busy trying to educate Washington about these issues, offers a thorough overview of the complexities of this space from the legislative and lobbying side.)

Some responses to my piece observed that the ideas I was expressing weren’t mine alone and reflected the thinking of a lot of smart people in the field. That’s absolutely true – I’ve been lucky enough to work with the Open Net Initiative and with partners at Berkman and Citizen Lab over the past few years, and owe a lot of the thinking on these issues to brilliant folks like Rebecca, John Palfrey, Ron Deibert, Jonathan Zittrain, Nart Villeneuve, Hal Roberts and others – sorry if that wasn’t clear in the previous piece. Commenters online and otherwise have questioned whether the theories of change I posited in my piece are the right ones or a complete list – my hope wasn’t to offer a comprehensive list, but to put forward some propositions we could argue over. My friend Darius Cuplinskas rightly suggests that framing the idea of Internet Freedom in terms of regime change overstates and colors the case in a way that’s not helpful – while I’ve heard people drawing a line from internet freedom to regime change, he’s right, and it makes sense to think of a US strategy for internet freedom globally in more subtle terms. One commentator quoted my arguments to make a case nearly the opposite of the one I was making, arguing that a Radio Free Europe strategy will work for Internet Freedom – I think he’s wrong, but that’s precisely the sort of argument I was hoping to spark.

But the main pushback I got was this: “Okay, so circumvention can’t be our sole strategy. So what should we do?”

I wish that were an easier question to answer. Internet censorship used to be something a small number of states engaged in, through a limited set of known techniques implemented by internet service providers. It’s more widespread and complex now, involving hacking and DDoS attacks, filtering by publishers as well as by internet providers using a wide range of techniques, the arrest and harassment of people who speak online, and state-sponsored use of participatory media. Facing a web of threats to internet freedom, we’re going to need to embrace a range of countermeasures. Here are some of them:

– My Berkman colleague Jonathan Zittrain suggests a more active response to censorship on behalf of publishers, a form of “mutual aid treaty”, where sites agree to host one another’s content to protect it from certain types of blockage. He’s refining the idea, and in a recent conversation, he proposed a structure of “link and mirror”, where web servers cache copies of content they link to, serving it only if the website referred to is unavailable. It’s a model that could work well for static content, and might, in the long term, prove helpful in ensuring access to more dynamic and complex content. It could complement other approaches to creating blocking resistance through mirroring – Psiphon is building caches of content from sites like the BBC and trying to ensure their accessibility in closed corners of the internet, while efforts like AccessNow are mirroring sensitive video, including videos from the Iranian protests, and trying to make them accessible through multiple channels.
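To make the “link and mirror” idea a bit more concrete, here’s a toy sketch of the mechanism as I understand it – the class and method names are my own invention, not anything from Zittrain’s proposal: a site keeps a snapshot of each page it links to, tries the live site first, and serves the snapshot only when the original is unreachable.

```python
import urllib.request

class LinkAndMirror:
    """Toy sketch of "link and mirror": cache a snapshot of each page
    we link to, and serve the snapshot only if the live site fails."""

    def __init__(self):
        self.cache = {}  # url -> cached page bytes

    def mirror(self, url, content):
        # Called when we first link to a page: store a snapshot of it.
        self.cache[url] = content

    def fetch(self, url, fetcher=urllib.request.urlopen):
        # Always try the live site first, so the original publisher
        # gets the traffic; fall back to our mirror only on failure.
        try:
            with fetcher(url, timeout=5) as resp:
                return resp.read()
        except OSError:
            if url in self.cache:
                return self.cache[url]  # blocked or down: serve the snapshot
            raise
```

The interesting design question, which this sketch dodges, is staleness – how often the mirror refreshes its snapshot, and how a reader is told they’re seeing a cached copy rather than the live page.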

– Hal Roberts and I are researching countermeasures that small publishers might pursue in response to DDoS attacks. We’re looking into a principle of “graceful degradation”, where sites responding to DDoS attacks become less dynamic and interactive, but continue serving static pages. Under heavy attack, standalone sites might seamlessly fail over to backup versions hosted by larger providers, like Blogger or, who’ve got dedicated staff who can fend off DDoS attacks. Again, the approach is meant to be complementary to other approaches in the field: building teams of white-hat sysadmins who can spring into action and help defend sites when they’re under attack; creating DDoS-resistant hosting architecture like that provided by services like Prolexic and Akamai; actively fighting DDoS using services like Peakflow from Arbor Networks.
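The graceful degradation idea can be sketched in a few lines – this is my own illustrative code, not anything Hal and I have specified: above a load threshold, the server stops doing expensive dynamic work (comments, search, personalization) and serves a pre-rendered static page, so the content stays up even if the interactivity doesn’t.

```python
STATIC_FALLBACK = "<html><body>Static archive of recent posts</body></html>"

class DegradingServer:
    """Toy sketch of graceful degradation under DDoS: above a load
    threshold, skip dynamic rendering and serve pre-built static HTML."""

    def __init__(self, threshold=1000):
        self.threshold = threshold    # requests/sec we can render dynamically
        self.requests_per_sec = 0     # in practice, fed by load monitoring

    def handle(self, path, render_dynamic):
        if self.requests_per_sec > self.threshold:
            # Under attack: interactivity goes away, but the pages stay up.
            return STATIC_FALLBACK
        return render_dynamic(path)
```

In a real deployment the same logic usually lives in front of the application – a reverse proxy or CDN switching to a cached copy – rather than inside it, but the principle is the one above.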

– As Rebecca MacKinnon has suggested in her Senate testimony, companies that are targets of widespread internet censorship, like Google, should consider pursuing actions through the WTO and other bodies that treat censorship as an unfair barrier to trade. This isn’t likely to be a viable strategy for smaller players – it’s the sort of approach that requires expensive and extended engagement, but it’s a place where big companies could set a precedent that smaller players could invoke going forward.

Considering these and other approaches, I’ve become convinced that successful approaches to internet freedom are going to involve sustained and significant efforts from large content and service providers – companies like Google, Yahoo!, Microsoft, Facebook and others. Professor Zittrain’s architectural solution – mirror while you link – does a great job of keeping static content, like a news article or a protest video, online in the face of state-sponsored blocking or denial of service. But it’s much harder to mirror the sorts of complex community behavior that happens on a site like Facebook… or even in a blog comment thread. That sort of mirroring would require working closely with the companies that run those interactive services. The advice Hal and I are offering to human rights sites can be summarized as “When attacked, strip down your functionality and eventually shelter under the protection of someone much larger,” i.e., a major online service provider like Akamai or Google. And the companies best positioned to fight the trade battles Rebecca advocates for are large corporate entities.

All this reflects on one of the uncomfortable truths of the contemporary internet: it’s getting more centralized by the day. When we began building the commercial internet in 1994, it was highly decentralized – many websites operated their own servers, provisioning connectivity from a range of internet service providers. It’s far more common now for sites to rely on large hosting companies like Rackspace. And usage of the internet increasingly focuses on a small set of key sites that dominate traffic charts.

Arbor Networks calls this trend “the rise of the hyper-giants”. Based on their internet traffic monitoring, they believe that “60 percent of all Internet content comes from, or terminates within, just 100 to 150 companies.” The concentration at the top is even sharper – 30 companies account for 30% of all internet traffic. And some of these hypergiants – Google, Facebook, Amazon, Yahoo! – are either blocked in closed societies (Google’s YouTube and Blogger as well as other services, Facebook) or restricted so that only locally-hosted versions can be accessed (Amazon and Yahoo!).

We’re not going to build functional mirrors of these sites. And providing access to them through proxy servers can be phenomenally expensive, as I argued in my previous post. If we believe that Chinese users need access to YouTube, we need YouTube to become an active participant in that battle.

Here’s the awkward bit. In this centralized contemporary internet, a great deal of the digital public space we celebrate under the banner of internet freedom is controlled by for-profit corporations. There’s nothing wrong with this, per se – much of the innovation in the Internet space has been conducted by for-profit companies. But this may present challenges for using these tools as an environment for free speech. These entities have no more legal obligation to allow open, unfettered political speech in their spaces than shopping malls do to host political rallies.

We often forget this, because so many people use platforms like Blogger or Facebook for political speech, and these platforms are usually receptive to being used in that way. But sometimes these hypergiants remind us that they’re not “common carriers”, required (like telephone companies) to practice non-discrimination over the speech they permit on their platforms. Writing about Facebook’s decision to remove a Moroccan group promoting the separation of mosque and state, friend and colleague Jillian York points out that Facebook’s labyrinthine Terms of Service has been used to justify removing photos from a pro-breastfeeding group while allowing Holocaust denial and pro-pedophile groups to remain. Her point: while Facebook is a powerful tool for organizing, anyone who uses the platform risks falling victim to an interpretation of terms of service that limits one’s ability to speak. (As web überdesigner Chris Blow notes in the comments on her post, decisions like this one may not reflect official Facebook policy so much as the attempts of individual Facebook censors to cope with the flood of 5 billion pieces of content per week.)

If we hope that providers like Facebook will project internet freedom into closed societies, we need them first to protect these rights on their platforms. The good news is that some of these companies take this idea very seriously. Google, Yahoo and Microsoft are working with academic and civil society groups in the framework of the Global Network Initiative to develop best practices for privacy and freedom of expression in the spaces they control.

This is no easy task. From very early on, people have proposed that we extend Article XIX rights into cyberspace – Robert Gelman offered an early attempt at writing a rights document for a digital age, based on the Universal Declaration of Human Rights, in 1997. More recently, Max Senges and the Internet Rights & Principles Coalition have been tackling the tough question of translating freedom of expression rights into an internet context, working within the Internet Governance Forum. Rebecca tells me that she and GNI are hard at work on a set of best practices that would protect Article XIX rights on social media and publishing platforms.

It’s difficult to reconcile an individual’s right to speech with a company’s right to build certain types of online community spaces. Few of us would question Whyville’s right to create a kid-safe community and to restrain speech that would make such a community uncomfortable or dangerous for kids. But I’m not entirely comfortable urging activists in closed societies to use Facebook, when that speech platform is governed by terms of service that require users to agree “You will not use Facebook to do anything unlawful, misleading, malicious, or discriminatory.” Who determines what’s misleading or malicious? Illegal under whose laws?

Facebook has a right to constrain use of their platform in these ways. (I know that when I was responsible for abuse complaints for, I took full advantage of the fact that our terms of service allowed us to remove content entirely at our discretion.) But we have a responsibility to ask whether the sort of Internet Freedom we’re seeking is possible without the cooperation of online service providers in protecting and projecting these rights. (We might also ask whether Internet Freedom is an idea that only applies when the US is talking about closed societies, or whether it’s an idea that applies universally.)

I see four possible new approaches to the problem of protecting free speech in a centralized internet (and I’m confident that readers will suggest more):

– Seek to mandate the protection of Article XIX rights in cyberspace. This could come about through legislation or a court ruling that sets a precedent that political speech has the same protections on a US-based hosting platform as in public space. (In the unlikely event that this came to pass, I’d expect to see online service providers respond with platforms that tagged/flagged potentially offensive content and warned viewers that the opinions expressed were those of the individual user, not the OSP.)

– Ask OSPs to propose a system where they rate and communicate their openness on different speech criteria. The goal would be to turn terms of service into highly readable documents and let users make better choices to use platforms optimized to protect speech or to facilitate certain types of community behavior. (This would follow in the footsteps of countless efforts to document and protect consumer rights online, starting with TRUSTe. These efforts haven’t been overwhelming successes, and there’s no guarantee that a company would emerge as willing to offer strong free speech protections even with this sort of certification. On the other hand, there are worthy efforts underway to make privacy marks user-readable and more useful, perhaps representing a trend towards more consumer choice in this space.)

– Have states build and maintain speech platforms explicitly designed to compete with commercial platforms and to provide spaces as open as possible for free expression. This idea sounds absurd on its face to US readers, but isn’t entirely out of line with other initiatives sponsored in the EU, including the development of Quaero, a search engine that was designed to provide alternatives to tools like Google. Then again, Quaero hasn’t really come to fruition, and I wonder whether the sorts of activists I’m advocating for would be more comfortable with a government-sponsored platform than a corporate one.

– Encourage the building of decentralized social networks, like the one Dave Winer has been suggesting as an alternative to Twitter. If Facebook weren’t a single site, but lots of connected sites running a Facebook protocol, we might expect diversity and competition in terms of what each platform offered as a speech environment. For-profit or NGO providers could host highly open servers for activists, or activists could run their own servers… much as we did in the 90s, when everyone ran their own box and their own copy of Apache. The problem: it’s much easier to build centralized software than distributed software, and distributing a protocol doesn’t help us with the concentration of hosting platforms (i.e., we might simply move the problems of speech control from Facebook to Rackspace).
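To make the “lots of connected sites running a Facebook protocol” idea concrete, here’s a toy sketch – the protocol, class, and names are invented for illustration: each independent site sets its own speech policy for what it hosts, but any site can pull and merge its peers’ public posts, so no single host’s policy governs the whole network.

```python
class FederatedSite:
    """Toy sketch of a federated social network: many independent sites
    speaking one shared format, each with its own local speech policy."""

    def __init__(self, domain, speech_policy=lambda post: True):
        self.domain = domain
        self.speech_policy = speech_policy  # each host sets its own rules
        self.posts = []

    def publish(self, author, text):
        post = {"author": f"{author}@{self.domain}", "text": text}
        if self.speech_policy(post):   # local policy applies only locally
            self.posts.append(post)

    def federate(self, other_sites):
        # Build a timeline from our posts plus our peers' posts.
        # A restrictive peer limits only what *it* hosts, not the network.
        timeline = list(self.posts)
        for site in other_sites:
            timeline.extend(site.posts)
        return timeline
```

A user unhappy with one host’s terms of service could move to (or run) another server without leaving the network – which is exactly the diversity and competition in speech environments the paragraph above is after.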

There’s also the strategy we’re following today – hope service providers protect speech, and attempt to pressure them when they don’t. Search for “facebook petition” and you might get the impression that there are almost as many attempts to petition Facebook to make policy changes as there are petition drives organized on Facebook. This method tends to reward the shrillest and the best connected – if you know someone within a company like YouTube or Yahoo!, it’s sometimes possible to get a decision about acceptable material reconsidered or reversed, but that’s hardly a model for sustainable free speech over time. And this strategy is proving more and more difficult for OSPs to manage. It’s one thing to weed out hate speech when it’s in a language you speak – when you manage a team of English speakers trying to determine what’s hateful in Arabic or Chinese, it’s no surprise that some of these decisions are questionable.

How does this connect to Secretary Clinton’s vision of Internet Freedom, a vision that articulates projection of freedoms abroad more than protections at home? If we continue centralizing the internet (and I don’t see a strong push in the other direction), the two are inextricably intertwined. We could fund any number of strategies to enable Iranian users to access YouTube, then discover that YouTube was rendered useless as a platform for speech by systemic campaigns to remove “controversial” videos organized by the “Iranian Cyber Army”.

Assume the Global Network Initiative or the Internet Rights & Principles Coalition comes up with a set of best practices that protect internet freedom for those able to access online speech platforms. Or assume that Facebook enters into a conversation about governance with its users that offers them real choice and influence over the future of the platform. Perhaps then we’ve got something worth projecting into closed societies.

As I discussed in my previous post, a site like YouTube could make it much harder for Chinese censors to block its content, if it were inclined to do so. YouTube has access to a large IP address space and could resolve to thousands of different IPs, then route those IPs to the YouTube servers. Anticensorship tools like Tor and Ultrareach have developed strategies that parcel out IP addresses to users in closed societies in ways that make it very difficult for censors to block all the IPs in a timely fashion. Roger Dingledine and Nick Mathewson of Tor proposed an anti-blocking design some years back that includes several intriguing ideas – notably in section 7.4, “Public bridges with central discovery” – that would be extremely helpful in building a blocking-resistant YouTube. (Tor implemented several of these ideas in response to increased blocking in China, with very good results.) Other strategies involve releasing IPs through non-web channels, like email or dedicated software clients, strategies used to great effect by tools like Ultrareach and Freegate.
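Here’s a rough sketch of the parceling-out idea, in the spirit of Tor’s bridge distribution but with the function and parameters invented for illustration: derive a stable subset of entry addresses from each requester’s identity, so the same user always sees the same handful of IPs and a censor posing as a user can’t enumerate and block the whole pool.

```python
import hashlib

def bridges_for_user(user_id, all_addresses, per_user=3):
    """Illustrative sketch of rate-limited address distribution: each
    requester deterministically sees only a small, stable window of the
    address pool, so enumerating the pool requires controlling many
    distinct identities, not just asking repeatedly."""
    # Hash the requester's identity to a stable starting point in the pool.
    digest = hashlib.sha256(user_id.encode()).digest()
    start = int.from_bytes(digest[:8], "big") % len(all_addresses)
    # Hand out a few consecutive addresses from that point; the same
    # user_id always yields the same answer, so repeat queries learn nothing.
    return [all_addresses[(start + i) % len(all_addresses)]
            for i in range(min(per_user, len(all_addresses)))]
```

A real deployment would add the hardening the Tor design paper discusses – rotating pools over time, distributing via email or social graphs, and tracking which addresses get blocked to spot compromised channels – but the core trick is the stable, identity-keyed subset above.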

As with everything in the filtering and circumvention space, there are no bulletproof solutions. But we could get much closer to internet freedom with companies like Google and Yahoo! actively fighting on behalf of Chinese, Iranian and Burmese internet users.

The obstacles to online service providers getting involved with blocking resistance aren’t technical ones – they’re business and conceptual ones. If Facebook embraces the idea that it’s a new public space for speech, it’s got an incentive to build a brand around that vision of Internet Freedom and to expand it throughout the world. If it’s got a different vision – so be it. But I suspect that the platforms that commit to protecting and projecting speech will see a wave of loyalty, support and usage.
