Susan Benesch is one of the leading thinkers on countering hate speech online. She’s a fellow at the Berkman Center this year, and I’m terribly sorry to be missing her talk at Berkman this afternoon. (Instead, I’m watching from home so I can be primary caretaker for my son for a couple of weeks while Rachel gets to travel.) She teaches international human rights at American University and is the founder of the Dangerous Speech Project, which tries to understand the spread of speech that incites people to violence.
Susan’s talk is available online and I’ve tried to blog it remotely while cursing my inability to teleport across the state. The talk is wonderfully titled “Troll Wrastling for Beginners: Data-Driven Methods to Decrease Hatred Online”. Contrary to most conventional online wisdom, Benesch believes you should engage with the trolls, in part because engagement may be the most successful path to countering dangerous speech. The approaches states have taken to dangerous speech – punishment and censorship – don’t work very well, and some evidence suggests that they work even worse online than offline. She cites the case of Anwar al-Awlaki: despite receiving the ultimate punishment, summary execution by a US drone strike, his online speeches continue to be influential and may have influenced the Boston Marathon bombers. Censoring that speech doesn’t work well in an online environment either, as it’s likely simply to migrate to other platforms.
So what about “don’t feed the trolls”? Benesch points out that there are several implicit assumptions in that advice. We assume that if we ignore a troll, they will stop (which, in turn, tends to assume behavior confined to a single platform.) There’s an assumption that online hate is created by trolls; in the few experiments that look at racist and sexist speech, at least half is produced by non-trolls. We tend to assume that all trolls have the same motivations and that they will respond to the same controls. And finally, we assume that the trolls are the problem – we need to consider effects on the audience as well.
(Benesch doesn’t define trolls until pressed by the audience and points out that it’s a term she uses with tongue in cheek, most of the time – she acknowledges that different trolls have different motivations. Her goal is to move away from considering trolls as the problem and towards understanding dangerous speech as a broader phenomenon.)
One of the benefits of online speech environments, Benesch posits, is that we can examine the effect of speech on people. In offline environments, it’s very hard to measure what reactions dangerous speech leads to – in online environments, it may be possible to track both responses and effects.
Benesch’s suggestion is that we should approach dangerous speech through counterspeech, in effect, talking back to the trolls and to others. In explaining her logic, she notes that the internet doesn’t create hate speech – in some cases, it may disinhibit us from speaking. But more often, the internet creates an environment where we are aware of speech we otherwise wouldn’t hear. Most of us wouldn’t have been aware of what speech is shared at a KKK meeting, and many of us wouldn’t have heard the sexist jokes that were told in locker rooms. Now speech is crossing between formerly closed communities.
This is a new feature of human life, Benesch suggests, and while it causes a great deal of pain, it’s also an opportunity. We can “toss speech back across those boundaries to see what effect it has.” For the most part, we don’t know what will happen when we expose speech this way, and it’s possible the effects could be very positive. She asks us to consider norm formation in teenagers – most 16 year olds, she argues, have historically developed opinions from a small, homogenous community around them. That’s no longer the case, and that’s a positive opportunity for teens to develop a broader and more nuanced worldview.
Believing in counterspeech means having faith that it’s possible to shift norms in speech communities. Benesch asks “What is the likelihood an American politician will use the N-word in public?” While there’s a constitutionally protected right to use such an offensive term, the probability of a speaker using the term is near zero. Yet, she argues, 60 years ago there were places in the US where you likely could not have been elected without using that word. “People’s behavior shifts dramatically in response to community norms,” she suggests, and as many as 80% of people are likely to follow the norms of speech consistent with a space and a situation, even trolls.
One of Benesch’s case studies for counterspeech comes from Kenya, where dangerous speech was a key component of the violence in the wake of 2007’s disputed election. With over a thousand killed and hundreds of thousands displaced, the 2007-8 unrest was one of the ugliest chapters in the nation’s history, and as Kenya prepared for elections in 2013, many Kenyans were worried about inflammatory and dangerous speech online.
Benesch worked with Kenyan data scientists at the iHub and the team at Ushahidi to build Umati (from the Swahili word for crowd), which collected reports of online hate speech. What they found was a wave of inflammatory speech from Facebook, and astonishingly little dangerous speech on Twitter. This disparity is not well explained by platform usage – Twitter is extremely popular in Kenya. Instead, it’s explained by counterspeech.
When inflammatory speech was posted on Twitter, prominent Kenyan twitter users (often members of the #KOT, Kenyans on Twitter, community) responded by criticizing the poster, often invoking the need to keep discourse in the country civil and productive. This counterspeech was surprisingly successful – Benesch tells the story of a Twitter user who posted that he would be okay with the disappearance of another ethnic group, and was immediately called out by other Twitter users. Within a few minutes, he had tweeted, “Sorry, guys, what I said wasn’t right and I take it back”.
This isn’t the behavior of a troll, Benesch argues. If the user in question were simply looking for attention, he wouldn’t have backed down when his inflammatory tweets met with spontaneous counterspeech. This online counterspeech is especially important when online speech is magnified by broadcast media, as it is in both Kenya and the US – it’s possible for television and newspapers to magnify not just the hateful speech but the attempts to counteract it.
By studying successful examples of counterspeech, Benesch is trying to develop a taxonomy of counterspeech and determine when and where different forms are most useful. She takes inspiration from examples like that of a young man in the US tweeting angrily about Nina Davuluri being named Miss America. The young man inaccurately and disparagingly referred to Davuluri as “an Arab”, and was immediately countered on Twitter by people who called out his racism. Within a few hours, he’d tweeted something resembling an apology to Davuluri herself.
Benesch wonders, “Can we put together the ideas of counterspeech and the idea of influencing 16 year olds?” It’s not realistic to believe we’re going to change the behavior of hardcore haters, she tells us, but we only need to influence a critical mass of people within a community, not the outliers.
Twitter and Facebook aren’t the only environments for inflammatory speech online – anyone who’s participated in online gaming knows that there’s toxic and hostile speech in online environments. Riot Games was concerned about the speech surrounding their popular game League of Legends and cooperated with academic researchers to understand speech in their game universe. The study found that fully half of the inflammatory messages were coming from users we wouldn’t normally consider to be trolls – they came from people who generally behaved like other game players, but were having a bad day and lashed out in ways that were inflammatory. They also discovered that very small changes in the platform – changes in language used to prompt players, apparently minor changes like font and text color – could improve behavior substantially.
Facebook’s “compassion research” project works on similar ideas, trying to get people to use Facebook in more pro-social ways. When you try to flag content on Facebook as offensive, Facebook first prompts you to engage with the person who offended you, suggesting language to communicate to the other user: “Could you take this down? It hurts my feelings.” As with Riot Games, they’ve found that small prompts can lead to dramatic behavior changes.
Benesch has been using these insights to consider problems of inflammatory speech in Myanmar (a topic I learned a little about in my visit to the country earlier this month.) In Myanmar, Facebook is the dominant internet platform, not just the dominant social media platform – if you search for information in Myanmar, you’re probably searching Facebook. In this environment, a rising tide of highly inflammatory speech inciting Buddhists against Muslims, particularly against the Rohingya people, is especially concerning. Not only does Facebook in Myanmar lead to echo chambers where no one may be willing to challenge inflammatory speech with counterspeech, but some of the mechanisms that work elsewhere may not work in Myanmar.
In a country that’s suffered under a military dictatorship for half a century, the idea of “reporting” people for their speech can be very frightening. Similarly, being encouraged to engage with someone who posted something offensive when you have reason to fear this person, or his friends, might threaten your life, isn’t a workable intervention. Any lessons from Facebook’s compassion research need to be understood in specific human contexts. Benesch asks how you should respond to offensive speech as a Facebook user in Myanmar: you can like the post, but you can’t dislike it. If you respond in the comments thread, you’re participating in a space where the page owner can eliminate or bury your comment. This points to the challenge of using a private space as a quasi-public space.
We need more research on questions like this, Benesch offers. We need to understand different responses to dangerous speech, from “don’t feed the trolls” to counterspeech, to see what’s effective. We need to understand whether counterspeech that seeks to parody or use humor is more effective than direct confrontation. And we need to understand discourse norms in different communities as what works in one place is unlikely to work in another. Louis Brandeis advised that the remedy for bad speech is more speech. As researchers, we can go further and investigate which speech is a helpful counter to bad speech.
I’ll admit that the topic of Benesch’s research made me uneasy when we first met. I’m enough of a first amendment absolutist that I tend to regard talk of “dangerous speech” as an excuse for government control of speech. I had a great meeting with Benesch just before I went to Myanmar, and was much better prepared for the questions I fielded there than if I hadn’t benefitted from her wisdom. She’s done extensive work understanding what sorts of speech seem to drive people to harm one another, and she’s deeply dedicated to the idea that this speech can be countered more effectively than it could be censored or banned.
The conversation after her talk gave me a sense for just how challenging this work is – it’s tricky to define inflammatory speech, dangerous speech, trolling, etc. What might be a reasonable intervention to counter speech designed to incite people to violence might not be the right intervention to make a game community more inviting. On the other hand, counterspeech may be more important in ensuring that online spaces are open and inviting to women and to people of different races and faiths than they are right now, even if inflammatory speech never descends to the level of provoking violence.
For people interested in learning more about this topic, I’d recommend the links on the Berkman talk page as well as this essay from Cherian George, who was at the same meeting I attended in Myanmar and offered his thoughts on how the country might address its inflammatory speech online. I’m looking forward to learning more from Susan’s work and developing a more nuanced understanding of this complicated topic.
Engin Onder and Zeynep Tufekci visited the Berkman Center today to talk about the rise of citizen reporting in Turkey. Tufekci is a leading scholar of online media and protest, and Onder is one of the founders of 140journos, an exciting citizen media group that’s been central to documenting Turkey’s protests in Gezi Park and across the nation.
Zeynep Tufekci offers an overview of the press situation in Turkey to provide context for Engin’s work with 140journos. There’s no golden age of press freedom in Turkey to look back to, she warns. After the military coup in 1980, the decade that followed was marked by military censorship. In the 1990s, Turkish media suffered from censorship around Kurdish issues, but there were media outlets that took journalism seriously within existing constraints.
In the 2000s, the concentration of power by the AKP after its second election victory led to large conglomerates moving into the media business and buying up the press. Energy companies ended up buying leading newspapers, firing columnists and steering the papers’ editorial direction towards the government… and, coincidentally, winning the next major government energy contract. Zeynep describes the situation as “ridiculous”, noting that a multiday clash in the heart of the nation’s biggest city was broadcast by CNN International, while CNN Turk broadcast a documentary on penguins. When Zeynep talked to a Turkish journalist about the situation, the journalist explained a layered system of censorship: “First, I censor myself. Then my editor censors me, taking my already soft story and making it softer. And if that’s still not soft enough, the government may call a newspaper or TV station and demand coverage change.” Should an outlet not comply, it faces massive tax bills, which mysteriously disappear when the media becomes more compliant.
While the press is heavily constrained, Zeynep tells us, the internet is largely open. Websites have been blocked, but it was very easy to get around censorship using proxies. The blocking of YouTube, she tells us, wasn’t a serious obstacle to viewing content, as even the prime minister admitted he used proxies to access it. Instead, it was a tax strategy, trying to get Google to come to Turkey and pay taxes. That’s changing, however, and the new censorship regime promised is significantly more serious, including deep packet inspection.
Zeynep tells us of the Roboski Massacre, a bombing in the village of Uludere, in Kurdish areas where informal smuggling is part of the local economy. The village was bombed by military jets, killing 34 people. It was unclear whether this was a mistake by the military, or a conscious attack on the Kurdish population.
Every newsroom in the country knew about the story and all waited to hear whether they could publish about it. A Turkish journalist, Serdar Akinan, decided to fly to the area and took a minibus to the village, encountering the massive funeral procession. He took an Instagram photo and shared it on Twitter… which broke the media blackout and led everyone to start publishing news of the bombing. Akinan lost his job for this reporting and now works for an independent news organization.
The story of 140journos starts there, Zeynep tells us. Engin Onder introduces himself as a non-journalist from Istanbul, a passive news consumer until the media system broke down. “We felt so sad about this issue, and thought we can do some stuff.” Onder runs a group of creative professionals called the Institute of Public Minds, a group that operates creatively in physical and digital public spaces.
In early 2012, in the wake of the Roboski Massacre, Onder and his colleagues felt compelled to start building their own media systems to address the weaknesses of the professional media. Roboski wasn’t the only trigger – a set of pro-secularism protests in 2007 and a union protest in Ankara in 2009 also received no media coverage.
Akinan’s coverage of the Roboski massacre was the inspiration for Engin and his friends Cem and Safa. All three were heavy Twitter users, and they realized that Twitter and online services might be sufficient infrastructure to report the news, as it was all Akinan needed to break this critical story. They brainstormed names, and settled on 140journos, honoring Twitter’s character limit and using slang to poke fun at the professional status of journalists.
Cem had been kicked out of his house because his politics so sharply diverged from his father’s. His father read and watched only media from one conglomerate, while Cem began reading underground and alternative newspapers – for Cem, 140journos is about “hacking his father”, creating media that could sway his parents. Safa is a conservative and religious guy, who helps counterbalance the team. Engin tells us that he had only attended one rally before starting the project.
Before the Gezi protests, 140journos reported on key court cases using nothing more than a 3G mobile phone. At one point in a key trial, the judge demanded that journalists with press cards leave – the 140journos team remained and continued tweeting from their phones. That led to discovery of the network by mainstream journalists (who probably resented 140journos for being able to remain in the courtroom.)
140journos made a point of visiting a wide range of public protests, including conservative protests against fornication. They believed it was important to ensure different groups understood each other and saw the diversity of protest movements.
Media coverage of 140journos had been pretty condescending, focusing on the youth of the participants, not on the quality of their reporting. Zeynep, on the other hand, took their work seriously, declaring “This is not ‘citizen journalism’ – this is ‘journalistic citizenship’.”
Once the Gezi Park protests broke out, 140journos found themselves at the heart of a massive movement in Istanbul. Part of the mantra of the Gezi movement was, “the media is dead – be the media”. This helps explain why, during a moment the police were spraying tear gas on Taksim Park, a protester was holding up an iPad and taking photos. Gezi brought a culture of documentation to Turkish protest movements.
The tools of the trade, Engin tells us, include Facebook, Twitter, Soundcloud, Vine and Instagram, as well as tools that help mine social media platforms. Tineye, Topsy and Google Image Search helped them find and verify images; traffic cameras were also helpful. Google Maps allowed the team to identify where documentation took place, as did Yandex Panorama (similar to Google Streetview, but with coverage of Turkey.) When they heard the names of people involved with the protests, they sought them out via Facebook, then scheduled in-person or phone interviews. Internally, the team coordinated using WhatsApp.
During the protests, 140journos were tweeting hundreds of times a day. They noted different media usage patterns in different parts of the country: Istanbulis use a wide range of media types, Ankarans favor livestreaming, and in Izmir there was less content produced and more complaint about what the media wasn’t covering.
When the culture of protest documentation became common, the role for 140journos changed into a practice of curating and verifying, not frontline reporting. They decided they couldn’t participate in the protests, and never physically appeared in the park so they could cover the protests with a level of detachment and neutrality. They may have sympathized with the protesters, but their role was as journalists, not activists.
To explain the working method, Engin gives us an example from Rize, a conservative town that’s the hometown of the Prime Minister. A crowd, allegedly armed with knives, gathered in front of the office of a secularist group. Seeking to verify what was going on, the team searched online, found a blurry photo of the protesters outside the office and started reading the signs on the street. They began calling shops on the street and interviewing witnesses of the standoff. Ironically, one of the businesses nearby was a TV station which, unsurprisingly, was not reporting on the situation. Eventually, they also found a nearby traffic camera, and used a combination of the interviews and the street camera footage to confirm the story and report on it.
After the Gezi Park protests, Engin argues, the content of citizen journalism has been legitimized, the quality of citizen journalism has been refined and the value of credibility has been strengthened throughout their network. There’s now a network of citizen journalists beyond 140journos, and 140journos often draws on that network to vet its work. 140journos builds its reporting on lists of citizens they’ve verified live in different Turkish cities – when an event takes place, they lean on those local sources.
In a remarkable twist, Veli Encü, a survivor of the Roboski Massacre, has become a correspondent. When warplanes fly over Uludere, he immediately reports to the network so that people can watch and ensure another massacre doesn’t take place. Cem’s father, who used to isolate himself in conservative media, has now become an activist and a much broader reader. And 140journos is now producing a radio show driven by citizen media, broadcasting once a week, and projecting their work onto the sides of public buildings to attract attention and open dialog with a broad range of participants.
We move into a Q&A, which I open by asking whether the rise of citizen journalism has shamed Turkish journalists into changing their behavior. Engin is uncertain. He notes that the CEO of CNN Turk underestimates citizen journalism, likely seeing it as providing misinformation and poisoning public discourse. But media workers are starting to work as pirates, with 10 or more professional journalists anonymously contributing stories they otherwise couldn’t get published. Zeynep suggests that there has been a significant change post-Gezi, with more actual news carried live. 140journos was a catalyst, she argues, but so were marches where people stood outside TV stations, waved money and begged reporters to do their jobs. There’s another cultural shift, both note: citizens are willing to put themselves at personal risk to capture images from the frontline of protests.
A Berkman fellow asks whether there are any Turkish tools being used to produce this media. For better or worse, Engin explains, the tools used are those of social media, and almost all are hosted in the US, but available for no cost online. Furthermore, the journalism the team is doing is wholly non-commercial – they support themselves through other jobs and engage in their reporting as part of their civic engagement.
In the next few weeks, 140journos is planning to release two new tools. One will use elements of gamification to help increase the practice of verifying and factchecking reporting. The other will provide background detail on locations throughout Turkey on a data-enhanced map, which can be used as a way to provide context and background information on stories the network releases.
Another question asks whether there are any plans to monetize content. Engin is insistent that the priority is building better content, not working on sustainability. Another questioner asks whether the coming internet censorship will make it difficult for 140journos to share content. Engin explains that the group has so many friends in the Pirate Party that they won’t have trouble finding VPNs, or helping their readers find VPNs. At the same time, he notes that it’s unclear how these admittedly draconian laws will actually be implemented. Engin notes that his group is not Anonymous (nor anonymous) – they strongly believe they are doing nothing illegal, merely reporting the news.
Another question asks whether the Turkish government will begin mining online data to identify protesters. Zeynep explains that this isn’t necessary – every phone in Turkey is registered to an individual’s national ID, and the government has the identity of everyone who has appeared at protests. While there have been occasional arrests of people who tweeted to incite violence, there have not been widespread roundups of people involved with these demonstrations. Engin notes that the government probably cannot shut down the internet in Turkey without collapsing the government entirely.
Zeynep closes the conversation by noting her amazement when she discovered that 140journos was four college students, working in their free time. She draws an analogy to the groups that coordinated logistics during the Tahrir protests, who used social media to build a logistics team, inspired by a local cupcake shop that used Twitter in that fashion. Zeynep suggests that we’re seeing a technological shift that makes certain kinds of mobilization significantly easier than it ever had been before.
Some years back, I gave a talk at O’Reilly’s ETech conference that urged the audience to spend less time thinking up clever ways dissidents could blog secretly from inside repressive regimes and more time thinking about the importance of ordinary participatory media tools, like blogs, Facebook and YouTube, for activism. I argued that the tools we use for sharing cute pictures of cats are often more effective for activism than those custom-designed to be used by activists.
Others have been kind enough to share the talk, referring to “the Cute Cat theory”. An Xiao Mina, in particular, has extended the idea to explain the importance of viral, humorous political content on the Chinese internet.
I’ve meant to write up a proper academic article on the ideas I expressed at ETech for years now, and finally got the chance as part of a project organized by Danielle Allen and Jennifer Light at the Institute for Advanced Study. They invited a terrific crew of scholars to collaborate on a book titled “Youth, New Media and Political Participation”, now in review for publication by MIT Press. The volume is excellent – several of my students at MIT have used Tommie Shelby’s “Impure Dissent: Hip Hop & the Political Ethics of Marginalized Black Urban Youth”, which will appear in the volume, as a key source in their work on online dissent and protest.
I’m posting a pre-press version of my chapter both so there’s an open access version available online and because a few friends have asked me to expand on comments I made on social media and the “Arab Spring” at the University of British Columbia and in Foreign Policy. (I also thought it would be a nice tie-in to the Gawkerization of Foreign Policy, with their posting today of 14 Hairless Cats that look like Vladimir Putin.)
Abstract: Participatory media technologies like weblogs and Facebook provide a new space for political discourse, which leads some governments to seek controls over online speech. Activists who use the Internet for dissenting speech may reach larger audiences by publishing on widely-used consumer platforms than on their own standalone webservers, and government countermeasures against those platforms may call attention to their cause. While commercial participatory media platforms are often resilient in the face of government censorship, the constraints of participatory media are shaping online political discourse, suggesting that limits to activist speech may come from corporate terms of service as much as from government censorship.
Look for the Allen and Light book on MIT Press next Spring – it’s an awesome volume and one I’m proud to be part of.
Last year, Sweden took on an experiment in social media as a form of nation branding by turning over its national Twitter account, @sweden, to a different citizen each week. Citizens are nominated and evaluated by a panel, but their tweets aren’t reviewed or edited, which led some observers to predict the experiment would be a social media disaster.
Those predictions came true, more or less, with the week Sonja Abrahamsson took over the account. She spent the week offending as many people as possible, with offhand observations about Jews, people with AIDS, and the suggestion that her life would be easier if she had Down’s syndrome. In other words, she used @sweden to troll anyone who was paying attention. (Trolls, of course, hail from Scandinavian folklore and may be native to Sweden, so perhaps this behavior is simply part of the national character.)
Nasser has continued on this theme, reacting to some comments from readers and provoking responses from others, like the exchange below.
At the first Global Voices summit, eight years ago, Hossein Derakhshan offered a model for understanding the role social media could play in helping people understand life in another part of the world. Blogs could act as windows, bridges and as cafés, offering us a glimpse into life in another corner of the world, a connection to some place different than where we already are, and, maybe, a space to gather and have a conversation.
Sweden’s experiment proposes to use Twitter as a window. Inviting “ordinary” Swedes to tweet about everyday life promises a picture of life in Sweden that’s likely to be different from impressions we get of the nation through news, through entertainment media or through our interaction with Swedes in our social networks. Ideally, it gives the sort of multifaceted picture we might have of the nation if we had lots of Swedish friends in our social network, including “inbetweeners” like Nasser and trolls like Sonja.
But the Swedish experiment is an attempt at building bridges as well. For one thing, the experiment asks participants to tweet in English rather than Swedish so the conversation is accessible to a wider audience. Nasser’s decision to start his stint representing @sweden by telling his story is a form of bridging as well – by understanding his personal story, we’ve got a better chance of paying attention to the trivia of his everyday existence. And it’s possible that the comments on some of his posts will open a café of sorts, a conversation about what it means to be Swedish, bicultural, racist or nationalist.
I’m interested to see that my neighbors to the north, in Vermont, are trying a similar model, hoping that showing tweets from Vermont will help portray the state as younger and more tech savvy than we might otherwise assume. I’ll be interested to see whether more Swedes or Vermonters use Twitter to tell their personal story and build a relationship while they’re opening a window into their lives.
Scholars of social media spend a lot of time studying Twitter. Twitter’s not the largest social network in the world – Facebook has at least twice as many users – but it’s massive and influential, particularly in the world of journalism, where smart practitioners have learned to report on stories using accounts from Twitter. And Twitter is something of a model organism for social media researchers. Most relationships and content on Twitter are public, while relationships and content on Facebook are often private. There’s an ecosystem of tools that use Twitter’s API to understand popular topics and networks of influence on Twitter, and countless research projects that use Twitter’s API to understand behavioral dynamics on social networks.
By contrast, there’s little scholarly research in English on Sina Weibo, China’s most popular microblogging network. (The top article on Google Scholar that comes up for a search on “twitter” has 637 cites. Top article for “sina weibo” has 9 cites.) The service is structurally similar to Twitter, with @usernames, hashtags, reposting, and URL shortening (using the t.cn site instead of t.co used by Twitter.) In one sense, the service is richer than Twitter, as posts can contain both 140 characters (which may contain significantly more information than 140 alphanumeric characters, as the 140 characters in Chinese are ideograms), and an embedded image or video. And Sina Weibo offers an API and supports an ecosystem of tools and applications that interact with Weibo data. Oh, and Sina Weibo has almost as many users as Twitter – 250 million in October 2011, as compared to roughly 300 million for Twitter at the end of 2011.
The obvious reason for the lack of English language research is that most English-speaking social media scholars don’t read Chinese very well. But this is a lame excuse for ignoring a powerful media tool. John Kelly of Morningside Analytics doesn’t speak Persian, but he’s done groundbreaking research mapping links in the Iranian blogosphere. Colleagues at the Berkman Center are using Media Cloud (built by researchers who speak no Russian) to understand conversations taking place in Russian blogs versus those in state-influenced media. Language is a powerful, but not insurmountable, barrier to researching a media space. In both the cases I mention above, English-speaking researchers worked with translators to understand novel social media phenomena.
I sometimes wonder whether English-speaking scholars pay insufficient attention to Chinese social media due to an assumption that Chinese media has been censored to the point of sterility. I often speak about internet censorship, and American audiences in particular are quick to share their knowledge of the “great firewall”, the “fifty cent party” and other aspects of Chinese internet censorship. Because Chinese censorship has been widely reported in American media, I suspect many Americans know more about what’s not on the Chinese internet than what’s present. (David Talbot of Technology Review wrote an excellent article about “China’s Internet Paradox” which makes the case that the Chinese internet is freer and more complicated than most audiences think.)
One of the best ways to get a sense for the complexity of Sina Weibo is through WeiboScope, a tool created by Cedric Sam and colleagues at the University of Hong Kong. WeiboScope uses Sina Weibo’s API to collect posts from 200,000 Sina Weibo users. His sample is a subset of Sina Weibo’s most popular users, and contains only users who have at least 1000 followers. (His blog, the Rice Cooker, offers lots of details on building and deploying the system.) Taking advantage of the fact that many Sina Weibo posts include images, WeiboScope offers a visual version of Weibo “trending topics”, showing the images associated with the most reposted posts.
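As a rough sketch of how a WeiboScope-style pipeline might work – this is my own illustrative Python, with made-up field names rather than the actual Sina Weibo API schema – you’d filter collected posts to users above the follower threshold, keep the image-bearing ones, and rank by repost count:

```python
# Illustrative sketch of WeiboScope-style filtering and ranking.
# The records below stand in for posts already fetched from the Weibo
# API; the field names (followers, reposts, image_url) are hypothetical.

sample_posts = [
    {"user": "a", "followers": 50000, "reposts": 1200, "image_url": "http://t.cn/x1"},
    {"user": "b", "followers": 300,   "reposts": 9000, "image_url": "http://t.cn/x2"},
    {"user": "c", "followers": 2000,  "reposts": 450,  "image_url": "http://t.cn/x3"},
    {"user": "d", "followers": 8000,  "reposts": 700,  "image_url": None},
]

def trending_images(posts, min_followers=1000):
    """Keep image posts from users over the follower threshold,
    most-reposted first – a 'visual trending topics' list."""
    eligible = [p for p in posts
                if p["followers"] >= min_followers and p["image_url"]]
    return sorted(eligible, key=lambda p: p["reposts"], reverse=True)

for post in trending_images(sample_posts):
    print(post["user"], post["reposts"], post["image_url"])
```

In this toy data, user b’s wildly popular post is dropped despite its repost count, because the account falls under the 1000-follower cutoff – the same sampling choice that keeps WeiboScope’s 200,000-user panel focused on Weibo’s most visible voices.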
A first glance at WeiboScope offers a sense for what’s hot on the Chinese internet. There are lots of images of pop stars, and lots of pretty women showing off cleavage. Dig a bit further and there’s some hope for the xenophiles amongst us: internet memes that need no translation. Sam the Seagull – a bird who steals Doritos from an Aberdeen convenience store – has been kicking around the internet since at least 2007, and an animated GIF of the thieving bird is the second most popular post today. Other memes appear to be shared in realtime – this comparison of pollution in a Chinese city versus the skies above Australia featured on WeiboScope today, and also appeared on Reddit this morning.
Dig a bit deeper and there’s quite a bit of political content. Take this deeply disconcerting image:
The face of the mammarily-enhanced cow is that of Niu Gensheng, CEO of Mengniu Dairy, one of the companies implicated in the 2008 melamine scandal, in which companies apparently added a toxic chemical to milk powder to inflate the measured protein content of their products. Mengniu recently revealed that some of its milk is testing positive for another toxin, apparently because cows were fed moldy feed. The company’s share price dropped 24% on this news today, knocking more than $1 billion off the company’s value. The text accompanying the Niu cartoon warns the executive of the dangers of angering 1.3 billion people. Another post, the most popular today, links to an article on Songshuhui.net that argues that Chinese people should stop drinking milk. While the article doesn’t explicitly mention Mengniu, it references scandals about milk, and it’s likely that the conversation about eschewing milk is directly related to the Mengniu news. Another popular post suggests a boycott of Mengniu, reminding readers that Saatchi & Saatchi, which had worked to rebrand the company, left after the tainted milk scandal of 2008.
I suspect some readers will note that the story I’m featuring about popular dissent is about consumer issues, not about direct opposition to the government. It’s worth remembering that popular protest often focuses more on economic and social issues than on overtly political ones – the Occupy movement in the US has been triggered by frustration with banks at least as much as by frustration with US politics. And there’s more directly political content on Weibo as well – this post talks about a family’s house demolished by the government and a man’s protests in Beijing. This isn’t to say that Sina Weibo isn’t censored – it is. But the speed of Weibo means that stories can be widely discussed before censors declare a topic off limits, as we saw with extensive online coverage of the July high-speed train collision. And the popularity of Weibo gives Chinese authorities a classic Cute Cats problem – censoring the service too heavily would alienate the 250 million people who use it, including the majority who are largely interested in scantily dressed celebrities.
I should note: I don’t speak or read Chinese. That means that my interpretation of the Mengniu cow could be deeply mistaken. But it also means that it’s possible to puzzle out a breaking story in Chinese media using WeiboScope, Google Translate and a few web searches.
Here’s hoping tools like WeiboScope will help make the Chinese internet seem like less of a foreign land and more like a near neighbor.
Oiwan Lam at Global Voices has posted about online activism around Mengniu, with some wonderful (and generally less disturbing!) images. And An Xiao offers a great reaction post to the ideas I’m putting forward here, including a clever inversion of the Cute Cat Theory: “with Chinese political memes, the cute cats are the activist message.” Very interesting, something I’m still digesting.