My heart’s in Accra – Ethan Zuckerman’s online home, since 2003

April 27, 2010

Eric Rescorla – How paranoid should we be?

Filed under: Geekery — Ethan @ 5:13 pm

Eric Rescorla is a man who speaks frankly about internet security. You might have guessed that from the name of his consultancy – RTFM – an acronym politely translated as “read the friendly manual”… except that it’s usually not politely translated. He’s the co-designer of secure HTTP, and now works on security issues with Skype.

In a conversation about cyberwarfare at Princeton, his assertion – “The internet is still too secure” – raises a few eyebrows. But his key message – “things could be a lot worse” – is a useful counterpoint to the rhetoric of cyberwarfare that Dr. Lin explores, and helps show the tensions between the computer security community and the defense community (not to mention the human rights community).

Rescorla asserts the following:

We have nearly unbreakable crypto primitives.
AES resists all practical attacks, RSA isn’t quite as strong as we’d like – but there is good new stuff in the pipeline, and there’s been significant progress made on addressing collisions in SHA-1. “There is no serious concern we’re going to run out of crypto any time soon.” The problem instead is getting the new stuff into use – we’re not using the strong stuff that we already have.
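A quick illustration of that deployment gap, in Python: a collision-weakened hash (SHA-1) and a stronger one (SHA-256) both ship in the standard library, and switching is a one-line change. (My example, not Rescorla’s.)

```python
import hashlib

# The same standard library exposes both the legacy and the modern primitive;
# upgrading is a one-line change. That's the point: the strong stuff is
# already on the shelf -- the problem is getting people to use it.
message = b"attack at dawn"

legacy_digest = hashlib.sha1(message).hexdigest()    # 160-bit, collision-weakened
modern_digest = hashlib.sha256(message).hexdigest()  # 256-bit, no practical attacks

print(len(legacy_digest) * 4, "bits")   # 160
print(len(modern_digest) * 4, "bits")   # 256
```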

We think we know how to build secure protocols.
There’s been no basic change in security design since 2000. We’ve got good protocols for data object security (S/MIME, PGP), for stream security (SSL/TLS, SSH), for packet security (IPsec, STLS) and for authentication (Kerberos). Our work isn’t on deploying new protocols – it’s on addressing flaws in existing implementations – updating to newer hash functions after MD5/SHA-1 attacks, for instance. And it’s on gluing together existing protocols, none of which have really changed since 2004.

We can’t build systems that are reliably secure.
Good implementations are hard. We recently saw an attack on OpenSSL where a single “record of death” can crash the whole system. Debian managed to break their pseudorandom number generator, which meant that Debian keys were highly predictable and crackable with a table of only 32,000 keys. This probably affected 1% of internet-connected machines.
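To see why the Debian bug was so devastating, here’s a toy simulation. It uses Python’s random module as a stand-in for OpenSSL; the real bug left the process ID (at most 32,768 values) as essentially the only entropy, so an attacker can precompute every possible key:

```python
import random

# Toy model of the Debian OpenSSL bug: the only entropy left in the
# generator was (roughly) the process ID, so there are only ~32,768
# possible seeds -- and therefore only ~32,768 possible "keys".
def weak_key(pid: int) -> int:
    random.seed(pid)              # entire seed = the process ID
    return random.getrandbits(128)

# An attacker precomputes every possible key once...
table = {weak_key(pid): pid for pid in range(1, 32769)}

# ...and then any key generated on a broken system is instantly reversed.
victim_key = weak_key(4242)
print("victim's PID was", table[victim_key])
```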

This is just the security-critical code. As Steven Bellovin at Columbia has pointed out, “Software has bugs. Security software has security relevant bugs.” And we’re bad at finding these bugs – audits are time-consuming and there’s no evidence that they significantly affect the discovery rate for bugs. And affected implementations are slow to go away.

User practice is appalling.

Users are careless. They install random software from untrusted sources. They ignore our carefully constructed messages designed to prevent man in the middle attacks. And crypto doesn’t do much for you if your system’s been compromised.

Things could be a lot worse.

Despite all this, things aren’t so bad. Why is computer security so poor despite twenty years of work on the topic? We’ve been working on personal security for 10,000 years – why aren’t human beings invulnerable? Security people always think about the worst case scenario, but the actual attacks we experience are fairly primitive.

The Debian PRNG bug is about as bad as it gets, but there was no evidence of practical attacks in the field. You can mount a DoS on the whole internet pretty easily – publish bogus BGP routes, as Pakistani ISPs did while trying to block YouTube. As a practical matter, we can almost always get to YouTube. The internet shouldn’t work, but it does. Perhaps the question we need to be asking ourselves is “What’s a rational level of paranoia?”

April 23, 2010

Towards Texas Transparency

Filed under: Geekery,ideas — Ethan @ 3:49 pm

I’m at the LBJ School of Public Affairs at the University of Texas at Austin today. There’s a conference on financial transparency in Texas, featuring some excellent student work on making Texas’s finances on the local and state level more open and accessible.

The students presenting work offer an excellent model for how a state might pursue financial transparency. They suggest that:
– Data must be public, which today means putting it online
– The release of that data must be timely and user-friendly, which means accessible formats
– Data needs to follow the money, to allow citizens to monitor all aspects of allocation and spending
– Transparent information must lead towards public participation

They offer these proposed payoffs to this sort of transparency:
– Efficiency – having data open and online means more efficient inter- and intra-agency cooperation
– Innovation – open data means that independent individuals, organizations and groups can use, remix and reformat in interesting ways
– Increased accountability, as citizens can review
– Increased participation – citizens can become more involved in government decisions through access to this data

The students propose centralizing information that’s scattered across dozens of existing websites into a one-stop shop, analogous to Alabama’s Open.Alabama.Gov site. This site could also make it possible to map spending, like Maryland’s site tracking the stimulus. It would include information in .CSV format, as the Texas comptroller’s office is currently doing.

They recognize the need to move beyond releasing data to energizing the community, and reference Sunlight Foundation’s model that identifies a wide range of groups who could get energized by access to information. But they believe that there’s a need for educating the public, using something like North Carolina’s Budgeting 101 website, letting citizens understand the basics of how the budget works.

It’s not enough to do this work at the state level, the students argue. It needs to happen at the local level, with local governments publishing budgets, check registers and financial reports. The fight now is about formats – it’s not sufficient to release this as PDFs – it needs to be in sortable, searchable data. And not every group is equally open – while the Texas legislature is quite open, the appropriations process is not – the students recommend opening a set of appropriations documents, including the markup and decision documents, acknowledging the difficulty of releasing documents that are changing in real-time as negotiations take place.
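To make the formats fight concrete, here’s a small Python sketch of the difference structured data makes – a check register released as CSV rather than PDF can be queried in one line. The field names and figures are hypothetical:

```python
import csv
import io

# A check register released as CSV (rather than PDF) can be sorted, filtered
# and re-used by anyone with a spreadsheet. These records are made up.
register = [
    {"date": "2010-04-01", "agency": "Parks Dept.", "payee": "Acme Supply", "amount": "1499.00"},
    {"date": "2010-04-02", "agency": "Parks Dept.", "payee": "Toner World", "amount": "8200.00"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "agency", "payee", "amount"])
writer.writeheader()
writer.writerows(register)
print(buf.getvalue())

# Because it's structured data, "follow the money" queries are one line:
toner_total = sum(float(r["amount"]) for r in register if r["payee"] == "Toner World")
print("Spent at Toner World:", toner_total)
```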

They suggest that governments face four main challenges:
– The difficulty of working with outdated, incompatible software
– Limits to technical, financial and human resources capacity
– The perception that there’s risk from citizens misunderstanding the data released
– The lack of incentives and requirements to force governments to participate in this process

In the hopes of making this easier, the students are working with the UT Computer Science department to build a template that local governments could use to release their information. It’s encouraging to see such in-depth thinking about both the mechanics of and rationale for opening government financial data – here’s hoping the students are able to have an influence on the future of this movement within Texas.

It’s likely that these LBJ School students will have an ally in Victor Gonzales, the CTO for the Comptroller of Public Accounts. Gonzales explains the role of the comptroller’s office – it’s the state’s monitor of revenue and spending, and the state’s purchaser. It’s also the main accountant for the state, and processes over a hundred billion dollars in checks and electronic fund transfers. As such, they’re very well positioned to provide a window into the state’s finances.

Gonzales has been building systems that allow citizens, groups and legislators to track expenditure, drilling down to the check register level by agency, payee and object of expense. Putting this information online has already led to $10 million in savings – he gives the example of discovering how much money the state was spending on copier toner, and deciding to negotiate a new contract to get a better deal from a central supplier.

His principle in building these systems – start small and keep them simple. When he took the position, his first question was “what could we do by the end of the week?” Turned out, they were able to release information from their own shop and set a precedent for the rest of the government. The project is no longer so simple – it’s quite powerful, with a site called “Where the Money Goes”, which allows deep exploration of government spending, and will soon be complemented with a site called “Where the Money Comes From”.

He closes with a great story: looking at accounts published online, the comptroller’s office discovered that a government department had bought a goat. For a little while, they worried that someone was eating cabrito for lunch at government expense. Turns out the goat was for scientific research. Score another victory for transparency.

April 1, 2010

Is Vietnam conducting surveillance via malware?

Filed under: Geekery,Human Rights — Ethan @ 3:06 pm

I’ve been following reporting on the discovery of a new botnet in Vietnam with interest. McAfee and Google both posted information on the botnet on Tuesday the 30th, and the Wall Street Journal, Washington Post and New York Times all ran pieces on the phenomenon yesterday. Collectively, they offer an insight into just how difficult it is to report about internet abuse, hacking and “cyberwar”.

George Kurtz, CTO of McAfee, offered the most detailed technical report. McAfee has been investigating “Operation Aurora”, the attack on Google and other US companies that provoked Google to discontinue its google.cn search engine and redirect Chinese users to their uncensored Hong Kong engine. In the course of investigating these attacks, Kurtz reports that they discovered an apparently unrelated and unconnected botnet controlled by computers in Vietnam and apparently spread via a Vietnamese-language keyboard driver.

Vietnamese is a language that uses a complex set of diacritic marks to distinguish between characters and signify tone. To type in Vietnamese, you need a keyboard driver that can associate certain key combinations with the appropriate Unicode characters. Many Vietnamese speakers use VPSKeys, a driver that’s been distributed by the Vietnamese Professional Society, a group dedicated to connecting Vietnamese professionals in the diaspora. According to Kurtz, the Windows driver distributed by VPS has been compromised – if you download and install it, you’ll end up installing a rich set of trojan horse programs that will hijack your machine and enlist it in a botnet that appears to be controlled from within Vietnam.
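A minimal sketch of how such an input method works, using the common Telex typing convention (this is illustrative Python, not VPSKeys’ actual code). The point is that the driver necessarily sees every keystroke, which is what makes a trojaned driver such an effective surveillance tool:

```python
# Minimal sketch of Telex-style Vietnamese input: ASCII digraphs typed on a
# plain keyboard are rewritten into Unicode letters. A real driver does this
# for every keystroke -- which is exactly why a compromised driver is such
# an effective keylogger: it already sits between the user and everything typed.
TELEX = {
    "aa": "â", "ee": "ê", "oo": "ô",   # circumflex vowels
    "aw": "ă", "ow": "ơ", "uw": "ư",   # breve / horn vowels
    "dd": "đ",                         # barred d
}

def telex_to_unicode(keystrokes: str) -> str:
    out = keystrokes
    for digraph, letter in TELEX.items():
        out = out.replace(digraph, letter)
    return out

print(telex_to_unicode("ddoo"))   # đô
```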

Kurtz is clear that he doesn’t think VPS is intentionally distributing malware. Instead, “We believe that the perpetrators may have political motivations and may have some allegiance to the government of the Socialist Republic of Vietnam.” Writing for Google’s security team, Neel Mehta goes further: “These infected machines have been used both to spy on their owners as well as participate in distributed denial of service (DDoS) attacks against blogs containing messages of political dissent. Specifically, these attacks have tried to squelch opposition to bauxite mining efforts in Vietnam, an important and emotionally charged issue in the country.”

Mehta’s statement helped me figure something out that’s troubled me for a couple of years now. I’ve been fortunate enough to work with Vietnamese activists and dissidents in the US, and am aware of sophisticated attacks on people who’ve attracted the attention of the security forces. Some of these attacks have succeeded in accessing encrypted texts, including a text encrypted with PGP. We suspected that security forces weren’t breaking PGP (duh), but had physically accessed people’s computers (quite common in Vietnam), copied PGP private keyrings and installed keyloggers that captured users’ passphrases. I’m now inclined to think that the attack might have been much simpler – had someone compromised an earlier version of a Vietnamese-language keyboard driver, they could have easily inserted keylogging code and routines that sought out PGP keyrings.

Both the Washington Post and Wall Street Journal connect the Vietnam attacks to silencing political dissent about a Chinese mining project in Vietnam. There’s good circumstantial evidence for this – bauxitevietnam.info has been attacked in the past and blogger Nguyen Ngoc Nhu Quynh – aka Me Nam – was arrested last year in conjunction with her activities in opposition to the mine.

But it seems possible to me that what’s going on is more complex and sinister than just a denial of service attack. There’s no particular reason to harness the computers of Vietnamese-speaking users to launch a DDoS attack – there are existing, robust botnets that can be rented to attack whatever site you’d like. (I suppose a botnet built of Vietnam-based and diasporan users would be particularly effective at hitting targets within Vietnam… but bauxitevietnam.info was registered to a group in Hong Kong and there’s no reason to believe dissidents would be foolish enough to host an anti-government site within Vietnam.) But being able to intercept communications from anyone writing in Vietnamese and search for key phrases like “Khối 8406” would be a dream for a government with a long track record of tracking and harassing dissenters.

Here’s the problem – it’s almost impossible to know what’s actually going on. The ability to log user keystrokes isn’t just helpful for repressive governments – it’s a terrific tool for stealing banking passwords or other sensitive information. The Vietnam trojan could have been ordered by a government department, outsourced to private hackers to build and deploy… or engineered by enterprising criminals who saw an opportunity to infect a set of users through a vulnerability in VPS’s server… or created by a group of nationalist Vietnamese hackers operating independently of the government… and so on.

What’s scary about “cyberwar” – as far as I’m concerned – isn’t nightmare scenarios of nations shutting down each other’s electrical grids as a “force multiplier”. (This excellent oped from Marcus Ranum points out that some of these fears are a function of sloppy reporting and thinking that blurs the lines between hacking as prank, as crime and as military attack.) It’s the difficulty of figuring out whether a particular incident should be thought of as criminal or political activity. What’s appropriate response to state-led political/military actions (censure, sanction, etc.) is useless if the attack was criminal in nature, and vice versa.

Obviously, governments who decided to engage in cyberattacks would do their best to disguise them as criminal activity. This example suggests to me just how effective this disguise can be – as much as I worry about the government of Vietnam’s human rights record, it’s not hard for me to spin a scenario where this is a criminal attack, not a state-based one.

I hope that McAfee and others will release more information as they learn about the details of the trojan. If this turns out to be explicitly designed to spy on communications, it will be a fascinating development in the world of internet surveillance.

March 24, 2010

Makmende’s so huge, he can’t fit in Wikipedia

Filed under: Africa,Geekery,ideas,Just for fun,xenophilia — Ethan @ 11:31 am

“After platinum, albums go Makmende”

“They once made a makmende toilet paper, but there was a problem: It wouldn’t take shit from anybody!!!”

“Makmende hangs his clothes on a safaricom line and when they dry he stores them in a flashdisk!”

If those simple truths don’t make sense to you, you’re probably not a Kenyan blogger. For the past few days, Kenya’s blogosphere and twitterers have been in thrall to the latest African superhero, and what might be Kenya’s first viral internet meme. An article in a Wall Street Journal blog today confirmed that Makmende is receiving attention beyond East Africa, demonstrating that our Kenyan friends are just as capable as any Moldovan boy band of creating internet buzz.

The video for Just a Band’s single “Ha-He” features a badass protagonist straight out of blaxploitation films. Armed with an array of freeze-frame kung fu moves, Makmende brings justice to the mean streets of a hazy, sun-drenched city that seems caught somewhere between Nairobi and 1970s LA. Tongue is firmly in cheek, as the video credits introduce characters including “Taste of Daynjah”, “Wrong Number” and bad guys “The Askyua Matha Black Militants”.

archer at Mwanamishale fills the rest of us in on the meaning of the term, Makmende:

Makmende was a term used way back in the early to mid 1990s to refer to someone who thinks he’s a superhero. For example, if a boy who’s watched one too many kung-fu movies on TV decides to unleash his newly acquired combat skills, he would be asked “Unajidai Makmende, eh?” (“Who do you think you are, Makmende?”) Trust me, there was a Makmende in every hood!

Given the high production values of the video, the fact that it accompanies a sweet track from Just a Band, and that the video producers evidently released a set of photoshopped magazine covers featuring Makmende as GQ’s sole “Badass of the Year”, perhaps it’s not surprising that Kenyan netizens have taken the Makmende trend to the next level. He’s got a Facebook page, a Twitter account, and a dedicated website filled with thousands of testimonies to his badassitude: “Makmende uses viagra in his eyedrops, just to look hard.”

The obvious parallel is Chuck Norris Facts, an internet meme that manifested mostly through image macros that attest to the action star’s manliness. (“Chuck Norris counted to infinity. Twice.”) For now, the Makmende phenomenon appears to be largely text-based, with Kenyans around the world connecting the events of the day to Makmende’s movements: “is the massive pour in Nairobi as a result of Makmende’s tear after the WSJ feature?”

What he doesn’t have is a Wikipedia page. I searched this morning on the English-language Wikipedia and got a page telling me that Makmende had been deleted:

* 00:37, 24 March 2010 Flyguy649 (talk | contribs) deleted “Makmende” (CSD G3: Pure Vandalism)
* 22:53, 23 March 2010 Malik Shabazz (talk | contribs) deleted “Makmende” (G12: Unambiguous copyright infringement (CSDH))
* 18:30, 23 March 2010 JoJan (talk | contribs) deleted “Makmende” (G1: Patent nonsense, meaningless, or incomprehensible)

Looks like multiple attempts to establish a Makmende page have been shot down. Fair enough – the inclusionist/deletionist argument that’s gripped Wikipedia centers in part on the documentation of ephemeral culture. Perhaps an English language encyclopedia doesn’t need mention of every internet meme… though pages exist for Numa Numa, the song that inspired the viral video, the guy who performed in the viral video, and so on. Perhaps if Makmende reaches the heights of internet fame that memes like Eduard Khil or Back Dorm Boys have achieved, he’ll no longer be “patent nonsense, meaningless or incomprehensible.”

Here’s an interesting puzzle for Wikipedia. Makmende may never become particularly important to English-speaking users outside of Kenya. But the phenomenon’s quite important within the Kenyan internet: it’s the first meme I can remember going truly viral and inspiring a wave of participation from Kenyans around the world. I recall a conversation at Wikimania 2006 in Cambridge where (friend and GV editor) Ndesanjo Macha, a major contributor to the Swahili Wikipedia, explained that the topics covered in that Wikipedia were likely to be different from those included in the English Wikipedia. (More articles on East African culture, less on Pokémon, perhaps.) Indeed, the Wikipedias in Gaelic, Welsh and Plattdüütsch are cultural projects as much as attempts to make key reference materials available, as most speakers of these languages are fluent in other languages that have much larger Wikipedias.

Most Wikipedians seemed to accept the idea that different languages and cultures might want to include different topics in their encyclopedias. But what happens when we share a language but not a culture? Is there a point where Makmende is sufficiently important to English-speaking Kenyans that he merits a Wikipedia page even if most English-speakers couldn’t care less? Or is there an implicit assumption that an English-language Wikipedia is designed to enshrine landmarks of shared historical and cultural importance to people who share a language?

For me, Makmende’s a reminder that the internet isn’t as small and connected as we tend to believe it is. We occasionally catch glimpses over cultural walls when we use these tools. Sometimes we respond with fascination and seek to learn more. Often, our behavior’s not as admirable. danah boyd closed her talk on Digital Visibility at Supernova this past year with an uncomfortable observation about racism in Twitter:

Think of those who complained when the Trending Topics on Twitter reflected icons of the black community during the Black Entertainment Television awards. Tweets like: “wow!! too many negros in the trending topics for me. I may be done with this whole twitter thing.” and “Did anyone see the new trending topics? I don’t think this is a very good neighborhood. Lock the car doors kids.” and “Why are all the black people on trending topics? Neyo? Beyonce? Tyra? Jamie Foxx? Is it black history month again? LOL”. These tweets should send a shiver down your spine. Perhaps these people assumed that Twitter was a white-dominant space where blacks were welcome only if they were a minority.

danah goes on to point out that not everyone reacts to encountering topics outside of their comfort sphere with shock or surprise. I found it encouraging that the Wall Street Journal saw the emergence of a Kenyan meme as a chance to explore Kenyan internet culture rather than to turn away in ignorance or disinterest. Let’s hope the next time Makmende seeks a place in Wikipedia, he’s met with a bit more curiosity and less dismissal.

Roughly six seconds after I posted this piece, Twitter users reported a new version of the Makmende article on Wikipedia. Here’s hoping this one survives summary deletion…!

March 1, 2010

ChatRoulette survey (long bookmark)

Filed under: Geekery,long bookmark,Media — Ethan @ 3:46 pm

ChatRoulette: An Initial Survey

The fine folks at the Web Ecology Project pride themselves on researching web trends that are just starting to catch the attention of the media and other researchers. As such, we can count on them not only to offer insights into online, randomized chat site ChatRoulette, but into derivative works like CatRoulette. (Yes, I have considered surfing the site with Drew in front of the camera. Rachel told me not to.)

The survey – admittedly a first pass – has some big predictions from a fairly small data set. Alex Leavitt, Tim Hwang and friends sampled 201 sessions on the system, taking snapshots and logging off to see their potential correspondents. They also conducted 30 interviews, though they were only able to talk to users who didn’t immediately close the connection, which may have skewed their sample set away from people using the system to find explicit content. (Someday, we’ll see a methodology section in a paper that debates the merits of logging onto a system while naked to get a more representative sample…)

The big takeaways: Yes, the folks using CR are male, 18-24. While some of them are looking for online sexual encounters, lots more are simply curious about the system or looking to chat. The authors frame the space as a “probabilistic online community”, with radically different dynamics than a traditional social network as it “mediates the encounters between its users, specifically by eliminating lasting connections in the framework of the platform”. It’s impossible within this framework to maintain traditional “friend” relationships – instead, we’d expect to see people creating online personas by wearing creative costumes/masks and developing those identities outside of the system, on blogs/tumblr/message boards. They further suggest that the fact that the majority of people on the site don’t appear to be seeking sexual imagery will lead towards a decline in explicit content. (That’s unclear to me – it’s quite possible that people who aren’t overtly broadcasting sexual content (i.e., not focusing webcams on their genitals) are still curious about what sort of explicit content they might come across as they switch cam partners.)

The paper includes my current candidate for “most enjoyable graph in a social science paper, 2010”:

[Graph from the Web Ecology Project’s ChatRoulette survey omitted]

February 22, 2010

Internet Freedom: Beyond Circumvention

Filed under: Geekery,Human Rights — Ethan @ 7:27 pm

Secretary Clinton’s recent speech on Internet Freedom has signaled a strong interest from the US State Department in using the internet to promote political reforms in closed societies. It makes sense that the State Department would look to support existing projects to circumvent internet censorship. The New York Times reports that a group of senators is urging the Secretary to apply existing funding to support the development and expansion of censorship circumvention programs, including Tor, Psiphon and Freegate.

I’ve spent a good part of the last couple of years studying internet circumvention systems. My colleagues Hal Roberts, John Palfrey and I released a study last year that compared the strengths and weaknesses of different circumvention tools. Some of my work at Berkman is funded by a US State Department grant that focuses on continuing to study and evaluate these sorts of tools, and I spend a lot of time trying to coordinate efforts between tool developers and people who need access to circumvention tools to publish sensitive content.

I strongly believe that we need strong, anonymized and usable censorship circumvention tools. But I also believe that we need lots more than censorship circumvention tools, and I fear that both funders and technologists may overfocus on this one particular aspect of internet freedom at the expense of other avenues. I wonder whether we’re looking closely enough at the fundamental limitations of circumvention as a strategy, and asking ourselves what we’re hoping internet freedom will do for users in closed societies.

So here’s a provocation: We can’t circumvent our way around internet censorship.

I don’t mean that internet censorship circumvention systems don’t work. They do – our research tested several popular circumvention tools in censored nations and discovered that most can retrieve blocked content from behind the Chinese firewall or a similar system. (There are problems with privacy, data leakage, the rendering of certain types of content, and particularly with usability and performance, but the systems can circumvent censorship.) What I mean is this – we couldn’t afford to scale today’s existing circumvention tools to “liberate” all of China’s internet users even if they all wanted to be liberated.

Circumvention systems share a basic mode of operation – they act as proxies to let you retrieve blocked content. A user is blocked from accessing a website by her ISP or that ISP’s ISP. She wants to read a page from Human Rights Watch’s webserver, which is accessible at a particular IP address. But that IP address is on a national blacklist, and she’s prevented from receiving any content from it. So she points her browser to a proxy server at another address and asks a program on that server to retrieve a page from the HRW server. Assuming the proxy’s address isn’t on the national blacklist, she should be able to receive the HRW page via the proxy.

During the transaction, the proxy is acting like an internet service provider. Its ability to provide reliable service to its users is constrained by bandwidth – bandwidth to access the destination site and to deliver the content to the proxy user. Bandwidth is costly in aggregate, and it costs real money to run a proxy that’s heavily used.

Some systems have tried to reduce these costs by asking volunteers to share them – Psiphon, in its first design, used home computers hosted by volunteers around the world as proxies, and used their consumer bandwidth to access the public internet. Unfortunately, in many countries, consumer internet connections are optimized to download content and are much slower when they are uploading content. These proxies could get the homepage at hrw.org pretty quickly, but they took a very long time to deliver the page to the user behind the firewall. Psiphon is no longer primarily focused on trying to make proxies hosted by volunteers work. Tor is, but Tor nodes are frequently hosted by universities and companies who have access to large pools of bandwidth. Still, available bandwidth is a major constraint to the usability of the Tor system. The most usable circumvention systems today – VPN tools like Relakks or Witopia – charge users significant sums annually to defray bandwidth costs.

Let’s assume that systems like Tor, Psiphon and Freegate receive additional funding from the State Department. How much would it cost to provide proxy internet access for… well, China? China reports 384 million internet users, meaning we’re talking about running an ISP capable of serving more than 25 times as many users as the largest US ISP. According to CNNIC, China consumes 866,367 Mbps of international internet bandwidth. It’s hard to get estimates for what ISPs pay for bandwidth, though conventional wisdom suggests prices between $0.05 and $0.10 per gigabyte. Using $0.05 as a cost per gigabyte, the cost to serve the Internet to China would be $13,608,000 per month – $163.3 million a year in pure bandwidth charges, not counting the costs of proxy servers, routers, system administrators and customer service. Faced with a bill of that magnitude, the $45 million that US senators are asking Clinton to spend starts to look pretty paltry.
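The arithmetic is easy to check – here’s a quick Python version of the back-of-the-envelope calculation, using the figures quoted above (my totals land slightly higher than the ones in the text, presumably down to rounding conventions):

```python
# Back-of-the-envelope check of the bandwidth bill, using the figures above.
mbps = 866_367            # China's international bandwidth, per CNNIC
cost_per_gb = 0.05        # low-end transit price, dollars per gigabyte

gb_per_second = mbps / 8 / 1000          # megabits -> megabytes -> gigabytes
seconds_per_month = 30 * 24 * 3600

monthly = gb_per_second * seconds_per_month * cost_per_gb
print(f"~${monthly:,.0f} per month, ~${monthly * 12 / 1e6:.0f}M per year")
```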

There’s an additional complication – we’re not just talking about running an ISP – we’re talking about running an ISP that’s very likely to be abused by bad actors. Spammers, fraudsters and other internet criminals use proxy servers to conduct their activities, both to protect their identities and to evade abuse controls – free webmail providers, for instance, prevent users from signing up for dozens of accounts by limiting an IP address to a certain number of signups in a given time period. Wikipedia found that many users used open proxies to deface their system and now reserves the right to block proxy users from editing pages. Proxy operators have a tough balancing act – for their proxies to be useful, people need to be able to use them to access sites like Wikipedia or YouTube… but if people use those proxies to abuse those sites, the proxy will be blocked. As such, proxy operators can find themselves at war with their own users, trying to ban bad actors to keep the tool useful for the rest of the users.

I’m skeptical that the US State Department can or wants to build or fund a free ISP that can be used by millions of simultaneous users, many of whom may be using it to commit clickfraud or send spam. I know – because I’ve talked with many of them – that the people who fund blocking-resistant internet proxies don’t think of what they’re doing in these terms. Instead, they assume that proxies are used by users only in special circumstances, to access blocked content.

Here’s the problem. A nation like China is blocking a lot of content. As Donnie Dong notes in a recent blogpost, five of the ten most popular websites worldwide are blocked in China. Those sites include YouTube and Facebook, sites that eat bandwidth through large downloads and long sessions. Perhaps it would be realistic to act as an ISP to China if we were just providing access to Human Rights Watch – it’s not realistic if we’re providing access to YouTube.

Proxy operators have dealt with this question by putting constraints on the use of their tools. Some proxy operators block access to YouTube because it’s such a bandwidth hog. Others block access to pornography, both because it uses bandwidth and to protect the sensibilities of their sponsors. Others constrain who can use their tools, limiting access to the tools to people coming from Iranian or Chinese IPs, trying to reduce bandwidth use by American high school kids who’ve got YouTube blocked by their school. In deciding who or what to block, proxy operators are offering their personal answers to a complicated question: What parts of the internet are we trying to open up to people in closed societies? As we’ll address in a moment, that’s not such an easy question to answer.
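Geographic gating of the sort described above is simple to sketch. Real operators use full GeoIP databases; the two address ranges below are placeholders I've invented for illustration, not actual Iranian or Chinese allocations.

```python
# Sketch of geographic gating: serve only clients whose addresses
# fall in the ranges the operator cares about, turning away (say)
# American high-schoolers dodging a school filter.
from ipaddress import ip_address, ip_network

ALLOWED_NETWORKS = [
    ip_network("5.22.0.0/16"),     # placeholder "Iranian" range
    ip_network("114.240.0.0/12"),  # placeholder "Chinese" range
]

def client_allowed(ip):
    addr = ip_address(ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```

The check is one line per connection, but it encodes a policy decision: the operator has decided who their proxy is for.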

Let’s imagine for a moment that we could afford to proxy China, Iran, Myanmar and others’ international traffic. We figure out how to keep these proxies unblocked and accessible (it’s not easy – the operators of heavily used proxy systems are engaged in a fast-moving cat and mouse game) and we determine how to mitigate the abuse challenges presented by open proxies. We’ve still got problems.

Most internet traffic is domestic. In China, we estimate (Hal’s got a paper coming out shortly) that roughly 95% of total traffic is within the country. Domestic censorship matters a great deal, and perhaps a great deal more than censorship at national borders. As Rebecca MacKinnon documented in “China’s Censorship 2.0“, Chinese companies censor user-generated content in a complex, decentralized way. As a result, a good deal of controversial material is never published in the first place, either because it’s blocked from publication or because authors decline to publish it for fear of having their blog account locked or cancelled. We might assume that if Chinese users had unfettered access to Blogger, they’d publish there. Perhaps not – people use the tools that are easiest to use and that their friends use. A seasoned Chinese dissident might use Blogger, knowing she’s likely to be censored – an average user, posting photos of his cat, would more likely use a domestic platform and not consider the possibility of censorship until he found himself posting controversial content.

In promoting internet freedom, we need to consider strategies to overcome censorship inside closed societies. We also need to address “soft censorship”, the co-opting of online public spaces by authoritarian regimes, who sponsor pro-government bloggers, seed sympathetic message board threads, and pay for sympathetic comments. (Evgeny Morozov offers a thoroughly dark view of authoritarian use of social media in How Dictators Watch Us On The Web.)

We also need to address a growing menace to online speech – attacks on sites that host controversial speech. When Turkey blocks YouTube to prevent Turkish citizens from seeing videos that defame Ataturk, it prevents 20 million Turkish internet users from seeing the content. When someone – the Myanmar government, patriotic Burmese, mischievous hackers – mounts a distributed denial of service attack on Irrawaddy (an online newspaper highly critical of the Myanmar government), they (temporarily) prevent everyone from seeing it.

Circumvention tools help Turks who want to see YouTube get around a government block. But they don’t help Americans, Chinese or Burmese see Irrawaddy if the site has been taken down by DDoS or hacking attacks. Publishers of controversial online content have begun to realize that they’re not just going to face censorship by national filtering systems – they’re going to face a variety of technical and legal attacks that seek to make their servers inaccessible.

There’s quite a bit publishers can do to increase the resilience of their sites to DDoS attacks and to make their sites more difficult to filter. To avoid blocking in Turkey, YouTube could increase the number of IP addresses that lead to its webservers and use a technique called “fast-flux DNS” to give the Turkish government many more IP addresses to block. It could maintain a mailing list alerting users to unblocked IP addresses where they can access YouTube, or create a custom application that disseminates unblocked IPs to YouTube users who download the app. These are all techniques employed by content sites that are frequently blocked in closed societies.
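The fast-flux idea can be sketched as a DNS responder that hands out a short-lived, rotating handful of addresses drawn from a large pool, forcing a censor to chase far more IPs than a static record would expose. The pool below uses documentation-only example addresses, and the function is a toy, not a real DNS server:

```python
# Toy illustration of fast-flux DNS: each query gets a small random
# subset of a large address pool, with a short TTL so cached answers
# expire quickly and the next query sees different addresses.
import random

POOL = [f"203.0.113.{i}" for i in range(1, 255)]   # TEST-NET-3 examples
TTL = 60                                            # seconds: expire fast

def dns_answer(pool, n=4):
    """Return one round of A records for the service's hostname."""
    return {"ttl": TTL, "a_records": random.sample(pool, n)}
```

A censor who blocks today's four answers has blocked less than 2% of this pool; meanwhile the short TTL keeps legitimate clients picking up fresh addresses.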

YouTube doesn’t take these anti-blocking measures for at least two reasons. First, they’ve generally preferred to negotiate with nations that filter the internet, trying to make their sites reachable again, rather than working against them by fighting filtering. (This attitude may be changing now that Google has announced its intention not to cooperate with Chinese censorship.) Second, YouTube doesn’t really have an economic incentive to be unblocked in Turkey. If anything, being blocked in Turkey (and perhaps even in China) may be to their economic advantage.

Sites that host user-created content are supported by advertising. Advertisers are generally more excited about reaching users in the US (who’ve got credit cards, more disposable income and are inclined to buy online) than users in China or Turkey. Some suspect that the introduction of “lite” versions of services like Facebook is designed to serve users in the developing world at lower cost, since those users rarely generate revenue. In economic terms, it may be hard to convince Facebook, YouTube and others to continue providing services to closed societies, where they have a tough time selling ads. And we may need to ask more of them – to take steps to ensure that they remain accessible and useful in censorious countries.

In short:
– Internet circumvention is hard. It’s expensive. It can make it easier for people to send spam and steal identities.
– Circumventing censorship through proxies just gives people access to international content – it doesn’t address domestic censorship, which likely affects the majority of people’s internet behavior.
– Circumventing censorship doesn’t offer a defense against DDoS or other attacks that target a publisher.

To figure out how to promote internet freedom, I believe we need to start addressing the question: “How do we think the Internet changes closed societies?” In other words, do we have a “theory of change” behind our desire to ensure people in Iran, Burma, China, etc. can access the internet? Why do we believe this is a priority for the State Department or for public diplomacy as a whole?

I think much work on internet censorship isn’t motivated by a theory of change – it’s motivated by a deeply-held conviction (one I share) that the ability to share information is a basic human right. Article 19 of the Universal Declaration of Human Rights states that “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” The internet is the most efficient system we’ve ever built to allow people to seek, receive and impart information and ideas, and therefore we need to ensure everyone has unfettered internet access. The problem with the Article 19 approach to censorship circumvention is that it doesn’t help us prioritize. It simply makes it imperative that we solve what may be an unsolvable problem.

If we believe that access to the internet will change closed societies in a particular way, we can prioritize access to those aspects of the internet. Our theory of change helps us figure out what we must provide access to. The theories I list below are rarely explicitly stated, but I believe they underlie much of the work behind censorship circumvention.

The suppressed information theory: if we can provide certain suppressed information to people in closed societies, they’ll rise up and challenge their leaders and usher in a different government. We might choose to call this the “Hungary ’56 theory” – reports of struggles against communist governments around the world, reported into Hungary via Radio Free Europe, encouraged Hungarians to rebel against their leaders. (Unfortunately, the US didn’t support the revolutionaries militarily – as many in Hungary had expected – and the revolution was brutally quashed by a Soviet invasion.)

I generally term this the “North Korea theory”, because I think a state as closed as North Korea might be a place where un-suppressed information – about the fiscal success of South Korea, for instance – could provoke revolution. (Barbara Demick’s beautiful piece in the New Yorker, “The Good Cook“, gives a sense for how little information most North Koreans have about the outside world and how different the world looks from Seoul.) But even North Korea is less informationally isolated than we think – Dong-A Ilbo reports an “information belt” along the North Korea/China border where calls on smuggled mobile phones are possible from North to South Korea. Other nations are far more open – my friends in China tend to be extremely well informed about both domestic and international politics, both through using circumvention tools and because Chinese media reports a great deal of domestic and international news.

It’s possible that access to information is a necessary, though not sufficient, condition for political revolution. It’s also possible that we overestimate the power and potency of suppressed information, especially as information is so difficult to suppress in a connected age.

The Twitter revolution theory: if citizens in closed societies can use the powerful communications tools made possible by the Internet, they can unite and overthrow their oppressors. This is the theory that led the State Department to urge Twitter to postpone scheduled downtime during the Iranian election protests. While it’s hard to make the case that technologies of connection are going to bring down the Iranian government (see Cameron Abadi’s piece in FP on the limitations of using Facebook to organize in Iran), good counterexamples exist, like the role of the mobile phone in helping to topple President Estrada in the Philippines.

There’s been a great deal of enthusiasm in the popular press for the Twitter revolution theory, but careful analysis reveals some limitations. The communications channels opened online tend to be compromised quickly, used for disinformation and for monitoring activists. And when protests get out of hand, governments of closed societies don’t hesitate to pull the plug on networks – China has blocked internet access in Xinjiang for months, and Ethiopia turned off SMS on mobile phone networks for years after they were used to organize street protests.

The public sphere theory: Communication tools may not lead to revolution immediately, but they provide a new rhetorical space where a new generation of leaders can think and speak freely. In the long run, this ability to create a new public sphere, parallel to the one controlled by the state, will empower a new generation of social actors, though perhaps not for many years.

Marc Lynch made a pretty persuasive case for this theory in a talk last year about online activism in the Middle East. It’s possible to make this case by looking at samizdat (self-published, clandestine media) in the former Soviet Union, which was probably more important as a space for free expression than as a channel for disseminating suppressed information. The emergence of leaders like Vaclav Havel, whose authority was rooted in cultural expression as well as political power, makes the case that simply speaking out is powerful. But the long timescale of this theory makes it hard to test.

The theory we accept shapes our policy decisions. If we believe that disseminating suppressed information is critical – either to the public at large or to a small group of influencers – we might focus our efforts on spreading content from Voice of America or Radio Free Europe. Indeed, this is how many government forays into censorship circumvention began – national news services began supporting circumvention tools so their content (painstakingly created in languages like Burmese or Farsi) would be accessible in closed societies. This is a very efficient approach to anticensorship – we can ignore many of the problems associated with abusing proxies and focus on prioritizing news over other high-bandwidth uses, like the video of the cat flushing the toilet. Unfortunately, we’ve got a long track record that shows that this form of anticensorship doesn’t magically open closed regimes, which suggests that increasing our bet on this strategy might be a poor idea.

If we adopt the Twitter Revolution theory, we should focus on systems that allow for rapid communication within trusted networks. This might mean tools like Twitter or Facebook, but probably means tools like LiveJournal and Yahoo! Groups which gain their utility through exclusivity, allowing small groups to organize outside the gaze of the authorities. If we adopt the public sphere approach, we want to open any technologies that allow public communication and debate – blogs, Twitter, YouTube, and virtually anything else that fits under the banner of Web 2.0.

What does all this mean in terms of how the State Department should allocate their money to promote Internet Freedom? My goal was primarily to outline the questions they should be considering, rather than offering specific prescriptions. But here are some possible implications of these questions:

– We need to continue supporting circumvention efforts, at least in the short term. But we need to disabuse ourselves of the idea that we can “solve” censorship through circumvention. We should support circumvention until we find better technical and policy solutions to censorship, not because we can tear down the Great Firewall by spending more.

– If we want more people using circumvention tools, we need to make them fiscally sustainable. Circumvention is becoming an attractive business for some companies, and it needs to be part of a comprehensive internet freedom strategy – but we need models that sustain themselves while still providing low- or zero-cost access to users in closed societies.

– As we continue to fund circumvention, we need to address the use of these tools to send spam, commit fraud and steal personal data. We might do this by relying less on IP addresses as a fundamental means of regulating bad behavior… but we’ve got to find a solution that protects networks against abuse while maintaining the possibility of anonymity, a difficult balancing act.

– We need to shift our thinking from helping users in closed societies access blocked content to helping publishers reach all audiences. In doing so, we may gain those publishers as a valuable new set of allies as well as opening a new class of technical solutions.

– If our goal is to allow people in closed societies to access an online public sphere, or to use online tools to organize protests, we need to bring the administrators of these tools into the dialog. Secretary Clinton suggests that we make free speech part of the American brand identity – let’s find ways to challenge companies to build blocking resistance into their platforms and to consider internet freedom to be a central part of their business mission. We need to address the fact that making their platforms unblockable has a cost for content hosts and that their business models currently don’t reward them for providing service to these users.

– The US government should treat internet filtering – and more aggressive hacking and DDoS attacks – as a barrier to trade. The US should strongly pressure governments in open societies like Australia and France to resist the temptation to restrict internet access, as their behavior helps China and Iran make the case that their censorship is in line with international norms. And we need to fix the US Treasury regulations that make it difficult and legally ambiguous for companies like Microsoft and projects like SourceForge to operate in closed societies. If we believe in Internet Freedom, a first step needs to be rethinking these policies so they don’t hurt ordinary internet users.

The danger in heeding Secretary Clinton’s call is that we increase our speed, marching in the wrong direction. As we embrace the goal of Internet Freedom, now is the time to ask what we’re hoping to accomplish and to shape our strategy accordingly.

Thanks to Hal Roberts, Janet Haven and Rebecca MacKinnon for help editing and improving this post. They’re responsible for the good parts – you can blame the rest on me.

January 13, 2010

Four possible explanations for Google’s big China move

Filed under: Geekery,Global Voices,Human Rights — Ethan @ 3:45 pm

Yesterday, Google announced a major change in their policy in engaging with China – they will no longer censor search results on Google.cn to comply with Chinese policy. This almost certainly means that Google.cn will be blocked by the Great Firewall and that Google will no longer be able to operate in China.

While this aspect of Google’s announcement is sparking a great deal of conversation online, it comes at the end of a bombshell of an announcement – Google’s decision follows what appears to be a coordinated act of espionage aimed at its servers by Chinese attackers. The attack resulted, Google reports, in a theft of their intellectual property. They also report that a goal of the attack was to access the GMail accounts of Chinese human rights activists and supporters of Chinese human rights around the world. MacWorld reports that the attack targeted an internal system that Google had built to comply with search warrant requests for information on users. When it became clear that this internal system – evidently set up for the benefit of Chinese authorities – was being attacked and used to compromise Google’s internal networks, Google began discussions about disengaging from the world’s largest internet market.

There’s at least four ways to read Google’s decision:

Google decided to stop being evil.
Google has received reams of bad press for their decision to comply with Chinese government regulations and censor search results for Chinese users. It’s never been entirely clear to me why Google’s received more criticism than Microsoft – who admit they censored Chinese bloggers, and whose Chinese-language tools prevent posting of articles about human rights and democracy – or Yahoo, who turned over information on user Shi Tao to Chinese authorities, leading to ten years’ imprisonment for “leaking state secrets”. I suspect we want to hold Google to a higher standard because they’ve put forth an informal motto – “Don’t be evil” – and compromising with the Chinese government looks like a violation of that stance.

Google’s taken steps to minimize the exposure of user data in China – services that contain sensitive personal data, like Gmail, or that permit publishing, like Blogger, are hosted in the US, not China. (This has made it harder for these tools to gain market share against Chinese competitors.) They censored in a more transparent fashion than some of their competitors, displaying a message at the bottom of each page stating that sites had been removed from the results to comply with regulations. Google is a founding member of the Global Network Initiative, a partnership between industry, academia and the nonprofit community designed to develop best practices for engaging in closed societies like China.

In my opinion – shaped, no doubt, by the fact that I’ve got a lot of friends within Google and have worked closely with the company in a couple of contexts – Google was a lot less evil than some of its competitors. But continued involvement in China continued to be a thorn in the side of Google on the PR front, and I know many people within the company questioned whether engaging in China was worth the compromises it entailed. The move to leave the Chinese market may be an example of Google returning to its core values and demonstrating an unwillingness to compromise.

Google retreated from a very tough market.
Google wasn’t doing all that well in the Chinese search market – they were a distant second to Baidu, and faced extreme challenges in gaining market share. Google’s main properties – google.com and related sites – are frequently inaccessible through the Great Firewall, and Google’s Chinese site – google.cn – was subject to a great deal of scrutiny from the Chinese press and from regulators. CCTV ran an “exposé” on Google.cn, demonstrating – horror of horrors! – that the internet includes links to pornography; this story led to increased oversight of Google’s Chinese site. Friends within Google tell me that it was a constant struggle to respond to complaints from Chinese regulators, and that they believed competitors like Baidu were reporting Google’s alleged violations to regulators, increasing scrutiny on the company.

The situation within Google China was already quite complicated. Kai-Fu Lee, Google’s China chief, quit in September without giving clear reasons for his departure, sparking speculation that Google might be discovering it couldn’t compete in the Chinese market without making even larger compromises to its corporate ideals.

It’s hard to imagine Google walking away from a market as potentially lucrative as China, even if they were in a tough battle for second place. And they certainly didn’t walk away quietly. By (obliquely) accusing the Chinese government of involvement in corporate espionage and challenging the government to shut the company down for providing uncensored search, “Google has taken the China corporate communications playbook, wrapped it in oily rags, doused it in gasoline and dropped a lit match on it.” (Those evocative words are from top Chinablogger Imagethief.) This isn’t a temporary strategic retreat – this is a retreat where you detonate the bridges behind you.

Google abandoned Chinese users.
Despite its second place in the market behind Baidu, there are millions of dedicated Google users in China, and many of them are deeply disappointed today and worried about losing access to services they’ve grown to depend on. Reading their comments in translation on Global Voices, thanks to Bob Chen, it’s clear the frustration is less with Google than with the Chinese authorities. One translated tweet is especially poignant:

The sin of facebook is that it helps people know who they wanna know. The sin of Twitter is that it allows people to say what they wanna say. The sin of Google is that it lets people find what they wanna find, and Youtube let us see what we wanna see. So, they are all kicked away.

Bob also shares a joke about China in the years after Google’s departure:

People born in 90s: Today I stepped out of the Great Firewall and saw a foreign website named Google. Shit, it is all but a copy of Baidu.
Born in 00s: What do you mean by stepping out of Great Firewall?
Born in 10s: What do you mean by website?
Born in 20s: What is “foreign”?

Perhaps most striking is a campaign to lay flowers in front of Google’s headquarters in Beijing. Rebecca MacKinnon reports that Tsinghua University’s security department has banned students from taking flowers to Google headquarters without permission.

(Here’s a sympathetic view of Google’s decision to pull out from Chinese activist Michael Anti, who’s been censored in the past by Microsoft.)

Google is about to join the front lines of the anticensorship wars.
Hal Roberts, John Palfrey and I published a study of tools designed to subvert and circumvent internet censorship a few months back, based on research we conducted over the course of three years. In the course of that research, we ended up with a simple realization about the design of censorship circumvention software:

A robust anti-censorship system has, at minimum, three components:
– Lots of non-contiguous IP addresses, making it difficult for censors to block the entry points into the system
– Huge amounts of bandwidth that can access the public internet, as a censorship circumvention system is basically an ISP
– Multiple methods to feed fresh IP addresses to your users

This isn’t a complete definition, of course – good anticensorship systems use SSL encryption to prevent keyword blocking, but that’s a solved problem. The three components above tend to be very hard for small circumvention projects to deliver. It’s very hard to obtain lots and lots of IP addresses, and very expensive to provision sufficient bandwidth… unless you’re Google, in which case these obstacles should be trivial. There’s still lots of work to be done ensuring that users of circumvention systems get fresh IP addresses, but a Google-backed anticensorship system (perhaps operated in conjunction with some of the smart activists and engineers who’ve targeted censorship in Iran and China?) would be massively more powerful (and threatening!) than the systems we know about today.
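That third component – feeding fresh IP addresses to users without letting the censor harvest them all – is the subtle one. One common approach (used, in more sophisticated forms, by bridge-distribution systems) is to show each requester only a small, stable slice of the pool, keyed to their identity; enumerating the whole pool then requires many identities. A sketch, with invented pool and bucket sizes:

```python
# Hand each requester a small, deterministic slice of the entry-point
# pool, so probing the distributor doesn't reveal every address.
import hashlib

ENTRY_POOL = [f"198.51.100.{i}" for i in range(1, 101)]  # example addresses
BUCKETS = 10

def entry_points_for(user_id, pool):
    # Hash the identity into one of BUCKETS stable buckets,
    # then return only the pool entries assigned to that bucket.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % BUCKETS
    return [ip for i, ip in enumerate(pool) if i % BUCKETS == bucket]
```

The same user always sees the same slice, so a censor registering a handful of identities can burn only a handful of buckets – the rest of the pool stays usable.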

These tools would have a built-in market – the millions of users who were enjoying Google’s tools from within China – and could radically change the landscape of the internet freedom field. An emphasis on internet freedom tools would mean engaging a smaller Chinese market, but it would let Google keep a toe in the water while maintaining a stance of disengagement from the Chinese government.

Is Google going to do this? I have no idea. I hope so. They could have done so previously, but it would have been viewed as a shot across China’s bow. Now that they’ve launched a torpedo, that shot across the bow seems more likely.

At Global Voices, we were thrilled that Google chose to partner with us and Thomson Reuters in offering the Breaking Borders Award “to honor outstanding web projects initiated by individuals or groups that demonstrate courage, energy and resourcefulness in using the Internet to promote freedom of expression.” It would be very exciting to see Google become one of those groups using their energy, resourcefulness and resources to combat censorship online… and it would certainly take some corporate courage on their part.

We’ll know a lot more about what Google’s doing in the next few days. Responses are already piling up online. Evgeny thinks Google is bluffing, or simply retreating from an unsuccessful market position. Jonathan Zittrain sees this as a masterstroke, aligning Google’s business with its values, and shares my hope that Google will dedicate major resources to censorship circumvention. Dharmishta Rood links to a bevy of reactions from around the web. I’m anxiously awaiting Rebecca’s analysis, which she promises when she finishes two other articles that are due. (Man, I know that feeling.)

December 8, 2009

Bye, bye Beacon… and other bad ad ideas

Filed under: Berkman,Geekery,Media — Ethan @ 3:59 pm

There are ideas that, when you first encounter them, you say, “That can’t possibly be a good idea.”

That’s how I and colleagues at the Berkman Center felt when we saw a preview of Facebook’s Beacon “feature” in November of 2007. Introduced in time for that year’s Christmas shopping season, Beacon used a cookie set on one website (Overstock.com, for example) to display information on Facebook – that you’d just bought a DVD on Overstock, say – in your event stream. The geeks in the crowd were nervous because the new feature looked a lot like a cross-site scripting attack, while user advocates like David Weinberger thought the feature represented Facebook either trying to change the nature of privacy or misunderstanding user privacy norms.

Suffice it to say, we thought it was a bad idea. So did Facebook users, who organized online campaigns to protest the feature. Some sued the company. And Facebook, as part of the settlement of a class action suit, recently sent a fascinating email to some users. I received it this morning and it reads as follows:

Facebook is sending you this notice of a proposed class action settlement that may affect your legal rights as a Facebook member who may have used the Beacon program. This summary notice is being sent to you by Court Order so that you may understand your rights and remedies before the Court considers final approval of the proposed settlement on February 26, 2010.

This is not an advertisement or attorney solicitation.

This is not a settlement in which class members file claims to receive compensation. Under the proposed settlement, Facebook will terminate the Beacon program. In addition, Facebook will provide $9.5 million to establish an independent non-profit foundation that will identify and fund projects and initiatives that promote the cause of online privacy, safety, and security.

For full details on the settlement and further instructions on what to do to opt out of, object to, or otherwise comment upon the proposed settlement, please go to http://www.BeaconClassSettlement.com.

Please do not reply to this email.

Commenting on the settlement – which doesn’t pay affected users anything (fair enough – it’s a mostly free site), but creates a new non-profit foundation to work on online privacy issues – some have noted the irony that you need to choose to opt out of the class should you want to retain your right to sue Facebook over Beacon. (Part of the frustration with Beacon is that you had to choose to opt out of the system and it wasn’t especially easy to turn it off…)

I’d add another irony. As David Weinberger suggested, privacy norms are changing online. I shopped on Overstock.com for the first time in a couple of years, looking at birthstone jewelry to give my wife as a congratulatory gift for giving birth to our child. I bought a necklace… which proved to be sorta chintzy and ugly, and which I promptly returned. I’ve run into a dozen Overstock ads on different sites, each of which urges me to repurchase the ugly necklace I rejected, or similarly dreadful blue topaz jewelry.

It’s the same sort of cross-site behavior I found so uncomfortable in Beacon, though it’s not using the cookie information to publish on my behalf, simply to (ineptly) target ads to me. Perhaps David’s right, and Facebook has succeeded in changing social norms around purchasing. Or perhaps most of us are so good at ignoring web ads that it hardly matters that Overstock is taking what it knows about us and displaying it on other websites.

Perhaps it’s just that I’ve discovered that I really dislike blue topaz, but I can’t help thinking every time I see an Overstock ad, “That can’t possibly be a good idea.”

September 29, 2009

Herkko Hietanen: The social future of television

Filed under: Berkman,Geekery,Media — Ethan @ 1:30 pm

Herkko Hietanen of the Helsinki Institute for Information Technology tells his audience at the Berkman Center that “television is really broken.” The medium isn’t rising to its full potential, isn’t providing consumers with programs when and where they want them. To set the schedule for what you want to watch, you need to be at your television. And there are frustrating geographic restrictions on programming – Herkko wonders why it’s hard to watch Finnish TV in the US. Television was created to be consumed – it lacks interactivity with broadcasters and other viewers. It forces consumers to sit through irrelevant commercials.

It could be so much better, he tells us. And it would be very hard to pitch a VC today on a model of ad-supported, over-the-air broadcast television.

We can think of the VCR as an early attempt to fix the TV. Predictably, it made rights owners fearful, and they brought in the lawyers. Those lawyers lost. The Sony Betamax decision determined that manufacturers were not liable for their users’ copyright infringement because the technology had substantial noninfringing uses. Herkko now sees great potential in networked video recorders to fix what’s wrong with television.

He offers a quick history of television:

– In the early days, TV aired over radio masts, and was essentially a local medium, with some local character and color. But governments quickly intervened and restricted who could construct an antenna and use spectrum.

– To try to solve the problem of broadcasting over large areas, post-WWII strategies proposed a network of B-29 Superfortress aircraft flying above the US, transmitting TV. It’s not actually a stupid idea, he tells us. But it was obviated by the rise of satellite TV, initially used to broadcast programs throughout TV networks, and later moved to consumer dishes. Eventually, companies figured out how to encrypt, and therefore commercialize, their systems.

– Community antennas represent a different model for distributing signals. Communities in valleys had difficulty receiving broadcast signals. This was bad for business for local electronics stores. Some responded by building shared antennas, then running cables to homes to share that signal. This grew into a highly profitable business – cable television – which manages to charge substantial amounts of money for service that’s often poorer than what’s available for free via broadcast. (Herkko points out that in Boston, he can actually get more HD channels via broadcast than via basic cable.) But there are other benefits of cable – you can deliver individual streams, including individual streams of pornography, which is always a profitable business.

Herkko sees a near-fatal embrace between content providers and cable companies. They’re co-dependent, and scared of alienating one another. But this dependency can limit innovation in services. We’ve seen less development around on-demand video than we might expect. Instead, we’ve seen “enhancements” like DRM and the broadcast flag, and heavy litigation against anyone entering the market. Basically, we see a lot of intelligence added at the center of the network, with dumb, constrained edges that are prevented from innovating.

He looks in some detail at recent litigation surrounding Cablevision. Cablevision designed a system that used very dumb set top boxes to communicate a user’s desire to record a show to a central recorder. Users could then retrieve recordings from the central server via the set top box. For Cablevision, this reduced capital expenditure per client, allowing a huge central storage service rather than millions of individual, expensive, failure-prone hard drives. “Every network lined up to sue Cablevision for direct copyright infringement.” But the court saw that Cablevision was making copies for each user, not just sharing a single copy, and ruled in Cablevision’s favor. This contrasts with mp3.com, which kept a single copy of each song rather than a copy per user.
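The distinction the court drew is architectural, and easy to sketch. In this illustrative model (all names hypothetical), the head-end stores a separate copy for each user who pressed “record” – which is precisely what separated Cablevision from mp3.com’s single shared copy:

```python
# Sketch of a remote-storage DVR in the shape the court approved (details
# hypothetical): the set-top box only sends a "record" request, and the
# central server keeps one copy per requesting user.

class RemoteDVR:
    def __init__(self):
        self.storage = {}  # (user, show) -> that user's own copy

    def record(self, user, show):
        # One copy per user who asked, even if thousands record the same show.
        self.storage[(user, show)] = f"copy of {show} for {user}"

    def play(self, user, show):
        # Users can only retrieve copies made at their own request.
        return self.storage.get((user, show))

dvr = RemoteDVR()
dvr.record("alice", "Lost")
dvr.record("bob", "Lost")         # Bob gets his own copy, not Alice's
print(dvr.play("alice", "Lost"))  # copy of Lost for alice
print(dvr.play("carol", "Lost"))  # None: Carol never recorded it
```

Storing thousands of redundant copies looks wasteful from an engineering standpoint, but it’s the redundancy that made each copy legally the user’s own.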

Home PVRs are either closed and user-friendly, or open and very hard for ordinary people to use, like MythTV. TiVo is the friendly market leader, but it’s a walled garden. Herkko sees the market leadership as related to friendly relationships with networks, pointing out that their main competitor, ReplayTV, was sued out of existence by broadcasters.

“When recorders get connected, users get connected,” Herkko argues. Connected users create social networks, and social networks can enable tinkering at the edges. But rightsholders will try to protect their rights from attack, which will push the market towards stupid networked recorders that just record shows. There may be space for innovation from intelligent, open client players. This is a way of turning television tubes into internet tubes, bringing around the “3G” of television.

What might this mean? Users could label ads for one another, which might allow the blocking of targeted ads. Communities could discuss and enhance television, which might be useful for programs like Lost. We could imagine gambling happening alongside live television sports. And he predicts that we’ll see communities imposing collective standards on television, allowing a community or a congregation to collectively filter television for its viewers.
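The ad-labeling idea is simple to sketch: viewers flag time spans of a recording as commercials, and a player skips any span enough people agree on. A minimal illustration, with thresholds and data invented:

```python
# Minimal sketch (names hypothetical) of community ad-labeling: viewers mark
# ad segments in a shared recording, and a player skips spans that enough
# people have flagged.

from collections import Counter

class AdLabels:
    def __init__(self, threshold=2):
        self.threshold = threshold
        self.votes = Counter()  # (start_sec, end_sec) span -> vote count

    def flag(self, start, end):
        # A viewer marks seconds [start, end) as a commercial break.
        self.votes[(start, end)] += 1

    def skip_spans(self):
        # Spans the community agrees are ads, in playback order.
        return sorted(span for span, n in self.votes.items() if n >= self.threshold)

labels = AdLabels(threshold=2)
labels.flag(600, 720)    # two viewers mark the same commercial break
labels.flag(600, 720)
labels.flag(1800, 1830)  # a single, unconfirmed flag is ignored
print(labels.skip_spans())  # [(600, 720)]
```

The same vote-and-threshold structure would serve the community-filtering prediction too: a congregation’s flags simply mark scenes rather than commercials.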

It doesn’t make economic sense to have individual recorders – Cablevision’s model is a better one, in technical terms. And this idea of centralized recording – or even networked individual recorders – opens some big new opportunities. These systems might even save advertising, as they’ll give us far better data on who’s watching what, enabling systems like Nielsen to provide accurate data again. They’d also enable mobile TV – anything you can store online could be streamed to your mobile device. But these systems also store a great deal of valuable information – he notes a recent paper on Facebook that suggests the possibility of determining someone’s sexual orientation by examining a network of friends – could we tell if someone’s gay based on what they watch?
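The inference Herkko alludes to doesn’t require sophisticated machinery. A hedged sketch of the basic homophily argument – guess an undisclosed attribute from the people (or here, viewing habits) someone is connected to, by simple majority vote – with all data invented:

```python
# Illustrative sketch of attribute inference from a social network (all data
# invented): if friends tend to share traits, an undisclosed attribute can be
# guessed by majority vote among friends who disclose theirs.

from collections import Counter

def infer_attribute(person, friends_of, attribute_of):
    # Guess a person's unlisted attribute from friends who list theirs.
    disclosed = [attribute_of[f] for f in friends_of.get(person, []) if f in attribute_of]
    if not disclosed:
        return None  # nothing to infer from
    return Counter(disclosed).most_common(1)[0][0]

friends_of = {"alice": ["bob", "carol", "dan"]}
attribute_of = {"bob": "watches show X", "carol": "watches show X", "dan": "watches show Y"}
print(infer_attribute("alice", friends_of, attribute_of))  # watches show X
```

A centralized recorder would hold exactly this kind of graph – who watches what, alongside whom – which is why the privacy question is inseparable from the architecture.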

Herkko ends with the observation that social television isn’t a new concept. We’ve seen lots of experimentation with split screens, which allow chat alongside live broadcast. “But television is a lean-back experience,” Herkko offers – you don’t want to share screen real estate with your friends. Instead, he believes that social interactions will happen before and after the show.

September 24, 2009

Harvard Forum: Focus and Faith

Filed under: Developing world,Geekery,Global Voices,ICT4D,ideas,Media — Ethan @ 2:29 pm

Canada’s International Development Research Center and Harvard’s Berkman Center are convening a conversation today and tomorrow at Harvard on the future of information and communication technology and development (ICT4D). Global Voices will be participating in the event as a media partner, and Jen Brea and I will be twittering and live-blogging the event. You can find out far more about who’s around the table and what we’re planning on talking about on the Global Voices special coverage page, which includes links to the background papers prepared by participants.

We’re here in part so that you can have a voice in the discussions. Please feel free to post questions on Twitter, using the #idrc09 tag, or as comments on Global Voices posts – we’ll try hard to work those questions into the conversation here at Harvard. You may also want to use Berkman’s “question tool”, which will be used to put questions to the panelists at a public event this evening.

Rohinton Medhora of IDRC notes that we’ve spent much of this conference considering what’s changed in the world of ICT in the past six years. We’ve not talked much about how development and poverty have changed. The first Harvard forum, six years ago, looked at how ICT might apply “here, there and everywhere.” The critical example from that discussion was Muhammad Yunus’s story about women learning to use mobile phones and to build businesses. This forum’s story might be Amartya Sen’s story about using a phone and resulting photos to change public opinion in Pakistan.

He offers a model – data – information – knowledge – wisdom – to help understand how ICT might affect education. “I suspect that ICT is only a small element in the gap from knowledge to wisdom.” Education is the great leveler in society, and we don’t yet understand how ICTs play out in the education field.

ICTs are moving from natural monopolies to public goods, merit goods, and club goods. We’re seeing confusion on the regulatory side. In many cases, regulators don’t know what to make of technological developments – should LAN houses be considered as gambling houses? We’ve got a wide range of regulatory structures, and they’re very different in terms of mobile phones versus broadcast media, despite the increasing overlaps in these technologies.

Rohinton wonders about Mike Best’s idea of a set of “grand challenges” for ICT4D. We often talk about the unpredictable nature of the development of information technologies. “It’s not that these things are ‘unpredictable’ – it’s that our confidence interval is wider and wider.” This may mean it’s hard to figure out what those big questions are, but doesn’t change the importance of raising and answering them.

Yochai Benkler is worried that we’re oversimplifying the relationships between markets and states (or other authorities). Ronaldo Lemos’s stories about working with the International Development Bank to allow musicians in Brazil to distribute music and build their own labels so they can make a living show the complexity of these relationships. The formal market for digital music in Brazil is dysfunctional – tracks cost $1.50, an absurd price in a medium-income country – and so the next steps are to create markets that actually work and find reasonable prices.

“Opposing market versus state, market versus regulation, market versus social organization is too stark… We need to get beyond these dichotomies, towards an integrated market that allows people to innovate and make a living off of it.” Open platforms at the physical layer are part of this. But we need to realize that people are using these platforms to try to avoid the bureaucrats, both the state leaders and the corporate ones. There are ongoing tensions between freedom and control and that control can be markets and profits, political power, or patriarchies.

Yochai worries that there’s “pressure on those of us coming from left intellectual traditions to accept the idea that it’s okay for musicians to make money, that it’s okay for Onno Purbo to charge for community wireless workshops.” We need to expand our dialog beyond a discussion of pure market incentives versus state interventions. He recommends moving beyond talking about “incentives” to “motivations”. Talking about motivations allows us to consider factors like solidarity, not just market forces. Introducing these factors helps explain why people will voluntarily pay an average of $1.25 a song, $8 an album for tracks they’re invited to download for free – as fans have done for years to support Jane Siberry.

We need to understand that unserious applications – like LAN houses – can lead to very serious implications. World of Warcraft may turn out to be an excellent environment to train leaders, or to help teenagers find adult authority figures they can rely on. (Joi Ito tells a story about an 18-year-old kid who came to him, as his WoW guild leader, for advice on whether he should join the military. Joi was the only adult who’d had his back for years, which made him the logical person to ask for this advice.) Because government influences and can undermine what we can do for development, we need to accept that open systems don’t always behave in ways we anticipate, and be open to the idea that we need to take seriously things we’re tempted to ignore.

Michael Spence acknowledges that we might not want to base our theories of economic development on Milton Friedman, but suggests that the great economist did get one important thing right – he made the point that you can’t solve problems without paying attention to incentives. “We fail his test all the time” in the field of development economics. And because we don’t think about incentives, we end up with Nash equilibria that favor the powerful and leave the weak at a disadvantage, whether they’re in the public or private sector.

He asks us to think about focus, faith and measurement. “The problem of measuring the impact of ict4d is too hard to solve.” He urges us not to let it trip us up too badly. To explain the difficulty of studying effectiveness, he references the 1949 Communist takeover in China. “China in the 1950s did the best job any country has done educating children, at least through elementary school.” In a few years, literacy rates for men and women approached 90%. But China didn’t see significant economic benefits, because other aspects of the state and the economy were mismanaged and broken. When other aspects of economic management changed, the “potential asset” of a literate population rapidly turned into a real asset, one that’s helping the country grow at a profound rate.

“You can have progress in areas that affect people’s education, or access to information, but it might not have a visible effect,” because it’s blocked by other factors. Spence asks us to consider information technology in developing nations. Nations like the US made heavy IT investments for over thirty years and we saw few, if any, measurable gains. Recently, we’ve seen a steady 3% productivity increase, which we believe comes from taking the “potential asset” of IT and unlocking it via the Internet.

“Development economists try to measure impact of education via regression analysis. The results they turn up are mixed or negligible. But no one sensible would make policy decisions based on those results.”

With that, Spence asks us to have faith. “Assume that education and IT in various aspects are going to turn out to be terribly important.” And then get on with it and don’t worry much about measurement.

Education, in particular, is an area in which we need to have a great deal of faith. “Assuming some preconditions, development is the process of acquiring knowledge, not just by individuals but within systems.” He warns us off the term “knowledge economy” – it’s not that we’ve gone from shovelling coal to shovelling bits – we’re engaged in the process of making our citizens and systems more knowledgeable. To the extent that IT systems are knowledge systems, we need to keep our focus on education, on health, and on e-government, to the extent that government controls access to essential services.

He ends with a warning about stability. “A huge, important application of modern IT is the global supply chain and financial system. The financial trading superstructure is impossible without IT.” We need to think about the stability of these systems because the instability we just experienced wasn’t accurately predicted by anything. Our problem may be models – we interpret systems via models, and if those models are insufficiently accurate, we can see stability where we might need to anticipate instability.

We end with parting shots from dialog participants, who felt certain points hadn’t been emphasized enough. I made the case that ICT was critical not just for education and entrepreneurship, but for creating an inclusive public sphere, and asked the room to take seriously the phenomenon of participatory media, not just through blogs and viral videos, but through mobile phone calls made to community radio stations. Ineke Buskens warns us that, in a profoundly sexist world, attempts to treat ICT as gender-neutral will end up perpetuating power imbalances. Bill Melody warns us that the developed world is likely to ignore infrastructure, now that infrastructure works well, and that development projects can’t abandon infrastructure efforts. Clotilde Fonseca urges us to continue building pilot and demonstration projects so we can experiment with creative ideas that could be scaled and replicated. David Malone warns that we need to protect human rights from governments, which are inherently authoritarian and prone to exercise control.

In other words, to sum up… there’s a lot to sum up. As Mike Best observed last night, this field appears to be plagued by the problem that we need to consider dozens of factors simultaneously. If there’s a conclusion from today’s discussions, it’s that we all need a good bit of reminding of the key factors that need consideration to make sure we’ve got a sufficiently broad view of these issues.
