
Susan Benesch on dangerous speech and counterspeech

Susan Benesch is one of the leading thinkers on countering hate speech online. She’s a fellow at the Berkman Center this year, and I’m terribly sorry to be missing her talk at Berkman this afternoon. (Instead, I’m watching from home so I can be primary caretaker for my son for a couple of weeks while Rachel gets to travel.) She teaches international human rights at American University and is the founder of the Dangerous Speech Project, which tries to understand the spread of speech that incites people to violence.

Susan’s talk is available online, and I’ve tried to blog it remotely while cursing my inability to teleport across the state. The talk is wonderfully titled “Troll Wrastling for Beginners: Data-Driven Methods to Decrease Hatred Online”. Contrary to conventional online wisdom, Benesch believes you should engage with the trolls, in part because engagement may be the most successful path to countering dangerous speech. The approaches states have taken to dangerous speech – punishment and censorship – don’t work very well, and some evidence suggests they work even worse online than offline. She offers the case of Anwar al-Awlaki: despite suffering the ultimate punishment, summary execution by a US drone strike, his online speeches remain influential and may have influenced the Boston Marathon bombers. Censorship works especially poorly in an online environment, as censored speech is likely to migrate to other platforms.

So what about “don’t feed the trolls”? Benesch points out that this advice rests on several implicit assumptions. We assume that if we ignore a troll, they will stop (which, in turn, tends to assume behavior confined to a single platform). We assume that online hate is created by trolls; in the few experiments that have looked at racist and sexist speech, at least half of it was produced by non-trolls. We tend to assume that all trolls have the same motivations and will respond to the same controls. And finally, we assume that the trolls are the problem, when we also need to consider the effects on the audience.

(Benesch doesn’t define trolls until pressed by the audience, and she points out that it’s a term she uses, most of the time, with tongue in cheek – she acknowledges that different trolls have different motivations. Her goal is to move away from treating trolls as the problem and toward understanding dangerous speech as a broader phenomenon.)

One of the benefits of online speech environments, Benesch posits, is that we can examine the effect of speech on people. In offline environments, it’s very hard to measure the reactions dangerous speech provokes – in online environments, it may be possible to track both responses and effects.
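
To make this concrete, here’s a toy sketch of what “tracking responses” might look like. It’s my illustration, not Benesch’s or Umati’s actual method – the keyword cues, function names, and sample replies are all invented – but it shows the basic point: once replies to a post are observable, the reception of inflammatory speech becomes measurable.

```python
# Toy sketch only: classify replies to a flagged post as counterspeech,
# support, or neither, then tally the reception. The cue lists are
# invented placeholders for what would really be human coding or a
# trained classifier.

COUNTER_CUES = {"take it back", "not okay", "that's racist", "apologize"}
SUPPORT_CUES = {"agree", "well said", "exactly"}

def classify_reply(text: str) -> str:
    """Crudely label one reply by keyword matching."""
    lowered = text.lower()
    if any(cue in lowered for cue in COUNTER_CUES):
        return "counterspeech"
    if any(cue in lowered for cue in SUPPORT_CUES):
        return "support"
    return "other"

def reception(replies: list[str]) -> dict[str, int]:
    """Tally how the audience responded to an inflammatory post."""
    counts = {"counterspeech": 0, "support": 0, "other": 0}
    for reply in replies:
        counts[classify_reply(reply)] += 1
    return counts

# Invented sample replies, loosely echoing the Kenyan Twitter story below.
print(reception([
    "That's racist and you know it.",
    "Take it back.",
    "Well said!",
]))  # -> {'counterspeech': 2, 'support': 1, 'other': 0}
```

No real project would rely on keyword matching, of course – human review or a trained classifier would stand in for the cue lists – but the measurability itself is what’s new.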

Benesch’s suggestion is that we should approach dangerous speech through counterspeech – in effect, talking back to the trolls and to others. In explaining her logic, she notes that the internet doesn’t create hate speech, though in some cases it may disinhibit us from speaking. More often, the internet creates an environment in which we become aware of speech we otherwise wouldn’t hear. Most of us would never have known what is said at a KKK meeting, and many of us wouldn’t have heard the sexist jokes told in locker rooms. Now speech is crossing between formerly closed communities.

This is a new feature of human life, Benesch suggests, and while it causes a great deal of pain, it’s also an opportunity. We can “toss speech back across those boundaries to see what effect it has.” For the most part, we don’t know what will happen when we expose speech this way, and it’s possible the effects could be very positive. She asks us to consider norm formation in teenagers: historically, most 16 year olds developed their opinions from the small, homogeneous community around them. That’s no longer the case, and the change is a positive opportunity for teens to develop a broader and more nuanced worldview.

Believing in counterspeech means having faith that it’s possible to shift norms in speech communities. Benesch asks, “What is the likelihood an American politician will use the N-word in public?” While there’s a constitutionally protected right to use such an offensive term, the probability of a speaker using it is near zero. Yet, she argues, 60 years ago there were places in the US where you likely could not have been elected without using that word. “People’s behavior shifts dramatically in response to community norms,” she suggests, and as many as 80% of people are likely to follow the speech norms of a given space and situation – even trolls.

One of Benesch’s case studies for counterspeech comes from Kenya, where dangerous speech was a key component of the violence that followed 2007’s disputed election. With over a thousand killed and hundreds of thousands displaced, the 2007–8 unrest was one of the ugliest chapters in the nation’s history, and as Kenya prepared for elections in 2013, many Kenyans worried about inflammatory and dangerous speech online.

Benesch worked with Kenyan data scientists at the iHub and the team at Ushahidi to build Umati (from the Swahili word for crowd), which collected reports of online hate speech. What they found was a wave of inflammatory speech on Facebook, and astonishingly little dangerous speech on Twitter. The disparity is not well explained by platform usage – Twitter is extremely popular in Kenya. Instead, it’s explained by counterspeech.

When inflammatory speech was posted on Twitter, prominent Kenyan Twitter users (often members of the #KOT, or Kenyans on Twitter, community) responded by criticizing the poster, often invoking the need to keep the country’s discourse civil and productive. This counterspeech was surprisingly successful – Benesch tells the story of a Twitter user who posted that he would be okay with the disappearance of another ethnic group and was immediately called out by other Twitter users. Within a few minutes, he had tweeted, “Sorry, guys, what I said wasn’t right and I take it back”.

This isn’t the behavior of a troll, Benesch argues. If the user in question were simply looking for attention, he wouldn’t have backed down when his inflammatory tweets met with spontaneous counterspeech. This online counterspeech is especially important when online speech is magnified by broadcast media, as it is in both Kenya and the US – it’s possible for television and newspapers to magnify not just the hateful speech but the attempts to counteract it.

By studying successful examples of counterspeech, Benesch is trying to develop a taxonomy of counterspeech and determine when and where different forms are most useful. She takes inspiration from examples like that of a young man in the US tweeting angrily about Nina Davuluri being named Miss America. The young man inaccurately and disparagingly referred to Davuluri as “an Arab”, and was immediately countered on Twitter by people who called out his racism. Within a few hours, he’d tweeted something resembling an apology to Davuluri herself.

Benesch wonders, “Can we put together the ideas of counterspeech and the idea of influencing 16 year olds?” It’s not realistic to believe we’re going to change the behavior of hardcore haters, she tells us, but we only need to influence a critical mass of people within a community, not the outliers.

Twitter and Facebook aren’t the only environments for inflammatory speech online – anyone who’s participated in online gaming knows how toxic and hostile the speech in those environments can be. Riot Games was concerned about the speech surrounding its popular game League of Legends and cooperated with academic researchers to understand speech in the game’s universe. The study found that fully half of the inflammatory messages came from users we wouldn’t normally consider trolls: people who generally behaved like other players but were having a bad day and lashed out. The researchers also discovered that very small changes to the platform – changes in the language used to prompt players, and apparently minor tweaks like font and text color – could improve behavior substantially.
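
The talk doesn’t describe the mechanics of the Riot Games experiments, but the general shape of such a study is easy to sketch. Everything below – the variant wordings, the hashing scheme – is my invention for illustration, not Riot’s implementation.

```python
# Hypothetical sketch of randomized prompt assignment, not Riot's actual
# system: each player is deterministically bucketed into one prompt
# variant, so the same player always sees the same condition, and
# behavior reports can later be compared across buckets.

import hashlib

PROMPT_VARIANTS = [
    None,                                          # control: no prompt
    "Teammates play worse when you harass them.",  # invented wording
    "Players who cooperate win more often.",       # invented wording
]

def variant_for(player_id: str) -> int:
    """Stable bucket: hash the player id, then mod by the variant count."""
    digest = hashlib.sha256(player_id.encode()).hexdigest()
    return int(digest, 16) % len(PROMPT_VARIANTS)

def prompt_for(player_id: str) -> str | None:
    """Return the prompt (or None) this player should always see."""
    return PROMPT_VARIANTS[variant_for(player_id)]

print(prompt_for("player-42"))  # same player, same condition, every match
```

The deterministic bucketing matters: if a player saw a different prompt every match, you couldn’t attribute changes in that player’s behavior to any one condition.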

Facebook’s “compassion research” project works on similar ideas, trying to get people to use Facebook in more pro-social ways. When you try to flag content on Facebook as offensive, Facebook first prompts you to engage with the person who offended you, suggesting language you might send to the other user: “Could you take this down? It hurts my feelings.” As with Riot Games, the researchers have found that small prompts can lead to dramatic behavior changes.

Benesch has been using these insights to consider problems of inflammatory speech in Myanmar (a topic I learned a little about in my visit to the country earlier this month). In Myanmar, Facebook is the dominant internet platform, not just the dominant social media platform – if you search for information in Myanmar, you’re probably searching Facebook. In this environment, a rising tide of highly inflammatory speech inciting Buddhists against Muslims, particularly against the Rohingya people, is especially concerning. Not only does Facebook in Myanmar lead to echo chambers where no one may be willing to challenge inflammatory speech with counterspeech, but some of the mechanisms that work elsewhere may not work in Myanmar.

In a country that’s suffered under a military dictatorship for half a century, the idea of “reporting” people for their speech can be very frightening. Similarly, being encouraged to engage with someone who posted something offensive, when you have reason to fear that this person or his friends might threaten your life, isn’t a workable intervention. Any lessons from Facebook’s compassion research need to be understood in specific human contexts. Benesch asks how you should respond to offensive speech as a Facebook user in Myanmar: you can “like” the post, but there’s no way to register disapproval. If you respond in the comment thread, you’re participating in a space where the page owner can delete or bury your comment. This points to the challenge of using a private space as a quasi-public space.

We need more research on questions like these, Benesch offers. We need to understand different responses to dangerous speech, from “don’t feed the trolls” to counterspeech, to see what’s effective. We need to understand whether counterspeech that uses parody or humor is more effective than direct confrontation. And we need to understand discourse norms in different communities, as what works in one place is unlikely to work in another. Louis Brandeis advised that the remedy for bad speech is more speech. As researchers, we can go further and investigate which speech is a helpful counter to bad speech.


I’ll admit that the topic of Benesch’s research made me uneasy when we first met. I’m enough of a First Amendment absolutist that I tend to regard talk of “dangerous speech” as an excuse for government control of speech. But I had a great meeting with Benesch just before I went to Myanmar, and I was much better prepared for the questions I fielded there than I would have been without her wisdom. She’s done extensive work to understand what sorts of speech seem to drive people to harm one another, and she’s deeply dedicated to the idea that this speech can be countered more effectively than it can be censored or banned.

The conversation after her talk gave me a sense of just how challenging this work is – it’s tricky to define inflammatory speech, dangerous speech, trolling, and so on. What might be a reasonable intervention to counter speech designed to incite violence might not be the right intervention to make a gaming community more inviting. On the other hand, even when inflammatory speech never rises to the level of provoking violence, counterspeech may be important in making online spaces more open and inviting to women and to people of different races and faiths than they are right now.

For people interested in learning more about this topic, I’d recommend the links on the Berkman talk page as well as this essay from Cherian George, who was at the same meeting I attended in Myanmar and offered his thoughts on how the country might address its inflammatory speech online. I’m looking forward to learning more from Susan’s work and developing a more nuanced understanding of this complicated topic.

Comments

  1. I’ve wondered whether a partial, short-term fix could be to ban all physical threats. I realize that implementation would mean plenty of change, even in places like Reddit comment threads where the comments aim for maximum obnoxiousness rather than hate. I realize it would mean you couldn’t even say “F*ck you.” We’d have to change all our curses.

    But it would give a clear rule about which kind of hate speech is not allowed: nobody can suggest or threaten physical harm of any kind. Would it make any difference if people had to state their hate some other way? Have any experiments or studies measured reactions to physical versus non-physical expressions of anger? That is, would it help matters, or would people just learn to talk in code with the exact same horrible effect?
